--- /dev/null
+
+relayfs - a high-speed data relay filesystem
+============================================
+
+relayfs is a filesystem designed to provide an efficient mechanism for
+tools and facilities to relay large amounts of data from kernel space
+to user space.
+
+The main idea behind relayfs is that every data flow is put into a
+separate "channel" and each channel is a file. In practice, each
+channel is a separate memory buffer allocated from within kernel space
+upon channel instantiation. Software needing to relay data to user
+space would open a channel or a number of channels, depending on its
+needs, and would log data to that channel. All the buffering and
+locking mechanics are taken care of by relayfs. The actual format and
+protocol used for each channel is up to relayfs' clients.
+
+relayfs makes no provisions for copying the same data to more than a
+single channel. This is for the clients of the relay to take care of,
+and so is any form of data filtering. The purpose is to keep relayfs
+as simple as possible.
+
+
+Usage
+=====
+
+In addition to the relayfs kernel API described below, relayfs
+implements basic file operations. Here are the file operations that
+are available and some comments regarding their behavior:
+
+open()	 enables a user to open an _existing_ channel. A channel can be
+ opened in blocking or non-blocking mode, and can be opened
+ for reading as well as for writing. Readers will by default
+ be auto-consuming.
+
+mmap() results in channel's memory buffer being mmapped into the
+ caller's memory space.
+
+read() since we are dealing with circular buffers, the user is only
+ allowed to read forward. Some apps may want to loop around
+ read() waiting for incoming data - if there is no data
+ available, read will put the reader on a wait queue until
+ data is available (blocking mode). Non-blocking reads return
+ -EAGAIN if data is not available.
+
+write() writing from user space operates exactly as relay_write() does
+ (described below).
+
+poll() POLLIN/POLLRDNORM/POLLOUT/POLLWRNORM/POLLERR supported.
+
+close() decrements the channel's refcount. When the refcount reaches
+ 0 i.e. when no process or kernel client has the file open
+ (see relay_close() below), the channel buffer is freed.
+
+
+In order for a user application to make use of relayfs files, the
+relayfs filesystem must be mounted. For example,
+
+ mount -t relayfs relayfs /mountpoint
+
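+As an illustration, a minimal blocking reader of a channel file might
+look like the following userspace sketch (the channel path
+"/mountpoint/xlog/9" is hypothetical - use whatever path the kernel
+client created via relay_open()):
+
+```c
+#include <stdio.h>
+#include <fcntl.h>
+#include <unistd.h>
+
+int main(void)
+{
+	char buf[4096];
+	ssize_t n;
+	/* blocking, auto-consuming read(2) reader */
+	int fd = open("/mountpoint/xlog/9", O_RDONLY);
+
+	if (fd < 0) {
+		perror("open");
+		return 1;
+	}
+	/* read() only moves forward through the circular buffer; in
+	 * blocking mode it sleeps until data is available */
+	while ((n = read(fd, buf, sizeof(buf))) > 0)
+		fwrite(buf, 1, n, stdout);
+	close(fd);
+	return 0;
+}
+```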
+
+The relayfs kernel API
+======================
+
+relayfs channels are implemented as circular buffers subdivided into
+'sub-buffers'. kernel clients write data into the channel using
+relay_write(), and are notified via a set of callbacks when
+significant events occur within the channel. 'Significant events'
+include:
+
+- a sub-buffer has been filled i.e. the current write won't fit into the
+ current sub-buffer, and a 'buffer-switch' is triggered, after which
+ the data is written into the next buffer (if the next buffer is
+ empty). The client is notified of this condition via two callbacks,
+ one providing an opportunity to perform start-of-buffer tasks, the
+ other end-of-buffer tasks.
+
+- data is ready for the client to process. The client can choose to
+ be notified either on a per-sub-buffer basis (bulk delivery) or
+ per-write basis (packet delivery).
+
+- data has been written to the channel from user space. The client can
+ use this notification to accept and process 'commands' sent to the
+ channel via write(2).
+
+- the channel has been opened/closed/mapped/unmapped from user space.
+ The client can use this notification to trigger actions within the
+ kernel application, such as enabling/disabling logging to the
+ channel. It can also return result codes from the callback,
+ indicating that the operation should fail e.g. in order to restrict
+ more than one user space open or mmap.
+
+- the channel needs resizing, or needs to update its
+ state based on the results of the resize. Resizing the channel is
+ up to the kernel client to actually perform. If the channel is
+ configured for resizing, the client is notified when the unread data
+ in the channel passes a preset threshold, giving it the opportunity
+ to allocate a new channel buffer and replace the old one.
+
+Reader objects
+--------------
+
+Channel readers use an opaque rchan_reader object to read from
+channels. For VFS readers (those using read(2) to read from a
+channel), these objects are automatically created and used internally;
+only kernel clients that need to directly read from channels, or whose
+userspace applications use mmap to access channel data, need to know
+anything about rchan_readers - others may skip this section.
+
+A relay channel can have any number of readers, each represented by an
+rchan_reader instance, which is used to encapsulate reader settings
+and state. rchan_reader objects should be treated as opaque by kernel
+clients. To create a reader object for directly accessing a channel
+from kernel space, call the add_rchan_reader() kernel API function:
+
+rchan_reader *add_rchan_reader(rchan_id, auto_consume)
+
+This function returns an rchan_reader instance if successful, which
+should then be passed to relay_read() when the kernel client is
+interested in reading from the channel.
+
+The auto_consume parameter indicates whether a read done by this
+reader will automatically 'consume' that portion of the unread channel
+buffer when relay_read() is called (see below for more details).
+
+To close the reader, call
+
+remove_rchan_reader(reader)
+
+which will remove the reader from the list of current readers.
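+As a sketch, a kernel client reading directly from a channel might do
+the following (rchan_id comes from relay_open(); process_data() and
+the parameter types are illustrative):
+
+```c
+rchan_reader *reader;
+char buf[256];
+u32 read_offset;
+int ret;
+
+reader = add_rchan_reader(rchan_id, 1);	/* 1 == auto-consume */
+if (reader) {
+	/* wait == 1: block until data is available; the final param
+	 * must not be NULL even when it isn't needed */
+	ret = relay_read(reader, buf, sizeof(buf), 1, &read_offset);
+	if (ret > 0)
+		process_data(buf, ret);
+	remove_rchan_reader(reader);
+}
+```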
+
+
+To create a reader object representing a userspace mmap reader in the
+kernel application, call the add_map_reader() kernel API function:
+
+rchan_reader *add_map_reader(rchan_id)
+
+This function returns an rchan_reader instance if successful, whose
+main purpose is as an argument to be passed into
+relay_buffers_consumed() when the kernel client becomes aware that
+data has been read by a user application using mmap to read from the
+channel buffer. There is no auto_consume option in this case, since
+only the kernel client/user application knows when data has been read.
+
+To close the map reader, call
+
+remove_map_reader(reader)
+
+which will remove the reader from the list of current readers.
+
+Consumed count
+--------------
+
+A relayfs channel is a circular buffer, which means that if there is
+no reader reading from it or a reader reading too slowly, at some
+point the channel writer will 'lap' the reader and data will be lost.
+In normal use, readers will always be able to keep up with writers and
+the buffer is thus never in danger of becoming full. In many
+applications, it's sufficient to ensure that this is practically
+speaking always the case, by making the buffers large enough. These
+types of applications can simply open the channel in
+RELAY_MODE_CONTINUOUS mode (the default), not worry about the
+meaning of 'consume', and skip the rest of this section.
+
+If it's important for the application that a kernel client never allow
+writers to overwrite unread data, the channel should be opened using
+RELAY_MODE_NO_OVERWRITE and must be kept apprised of the count of
+bytes actually read by the (typically) user-space channel readers.
+This count is referred to as the 'consumed count'. read(2) channel
+readers automatically update the channel's 'consumed count' as they
+read. If the usage mode is to have only read(2) readers, which is
+typically the case, the kernel client doesn't need to worry about any
+of the relayfs functions having to do with 'bytes consumed' and can
+skip the rest of this section. (Note that it is possible to have
+multiple read(2) or auto-consuming readers, but like having multiple
+readers on a pipe, these readers will race with each other i.e. it's
+supported, but doesn't make much sense).
+
+If the kernel client cannot rely on an auto-consuming reader to keep
+the 'consumed count' up-to-date, then it must do so manually, by
+making the appropriate calls to relay_buffers_consumed() or
+relay_bytes_consumed(). In most cases, this should only be necessary
+for bulk mmap clients - almost all packet clients should be covered by
+having auto-consuming read(2) readers. For mmapped bulk clients, for
+instance, there are no auto-consuming VFS readers, so the kernel
+client needs to make the call to relay_buffers_consumed() after
+sub-buffers are read.
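+For instance, the kernel side of an mmap bulk client might be
+sketched as follows (how the client learns that its user application
+has finished with a sub-buffer - an ioctl, a write(2) 'command', etc. -
+is up to the client):
+
+```c
+/* at channel setup time */
+rchan_reader *map_reader = add_map_reader(rchan_id);
+
+/* later, whenever the user application reports that it has
+ * consumed one complete sub-buffer of channel data */
+relay_buffers_consumed(map_reader, 1);
+
+/* at teardown */
+remove_map_reader(map_reader);
+```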
+
+Kernel API
+----------
+
+Here's a summary of the API relayfs provides to in-kernel clients:
+
+int relay_open(channel_path, bufsize, nbufs, channel_flags,
+               channel_callbacks, start_reserve, end_reserve,
+               channel_start_reserve, resize_min, resize_max, mode,
+               init_buf, init_buf_size)
+int relay_write(channel_id, *data_ptr, count, time_delta_offset, **wrote_pos)
+rchan_reader *add_rchan_reader(channel_id, auto_consume)
+int remove_rchan_reader(rchan_reader *reader)
+rchan_reader *add_map_reader(channel_id)
+int remove_map_reader(rchan_reader *reader)
+int relay_read(reader, buf, count, wait, *actual_read_offset)
+void relay_buffers_consumed(reader, buffers_consumed)
+void relay_bytes_consumed(reader, bytes_consumed, read_offset)
+int relay_bytes_avail(reader)
+int rchan_full(reader)
+int rchan_empty(reader)
+int relay_info(channel_id, *channel_info)
+int relay_close(channel_id)
+int relay_realloc_buffer(channel_id, nbufs, async)
+int relay_replace_buffer(channel_id)
+int relay_reset(rchan_id)
+
+----------
+int relay_open(channel_path, bufsize, nbufs,
+               channel_flags, channel_callbacks, start_reserve,
+               end_reserve, channel_start_reserve, resize_min,
+               resize_max, mode, init_buf, init_buf_size)
+
+relay_open() is used to create a new entry in relayfs. This new entry
+is created according to channel_path. channel_path contains the
+absolute path to the channel file on relayfs. If, for example, the
+caller sets channel_path to "/xlog/9", a "xlog/9" entry will appear
+within relayfs automatically and the "xlog" directory will be created
+in the filesystem's root. relayfs does not implement any policy on
+its content, except to disallow the opening of two channels using the
+same file. There are, nevertheless, a set of guidelines for using
+relayfs. Basically, each facility using relayfs should use a top-level
+directory identifying it. The entry created above, for example,
+presumably belongs to the "xlog" software.
+
+The remaining parameters for relay_open() are as follows:
+
+- channel_flags - an ORed combination of attribute values controlling
+ common channel characteristics:
+
+	- logging scheme - relayfs uses two mutually exclusive schemes
+ for logging data to a channel. The 'lockless scheme'
+ reserves and writes data to a channel without the need of
+ any type of locking on the channel. This is the preferred
+ scheme, but may not be available on a given architecture (it
+ relies on the presence of a cmpxchg instruction). It's
+ specified by the RELAY_SCHEME_LOCKLESS flag. The 'locking
+ scheme' either obtains a lock on the channel for writing or
+ disables interrupts, depending on whether the channel was
+ opened for SMP or global usage (see below). It's specified
+ by the RELAY_SCHEME_LOCKING flag. While a client may want
+ to explicitly specify a particular scheme to use, it's more
+ convenient to specify RELAY_SCHEME_ANY for this flag, which
+ will allow relayfs to choose the best available scheme i.e.
+ lockless if supported.
+
+ - overwrite mode (default is RELAY_MODE_CONTINUOUS) -
+ If RELAY_MODE_CONTINUOUS is specified, writes to the channel
+ will succeed regardless of whether there are up-to-date
+ consumers or not. If RELAY_MODE_NO_OVERWRITE is specified,
+ the channel becomes 'full' when the total amount of buffer
+ space unconsumed by readers equals or exceeds the total
+ buffer size. With the buffer in this state, writes to the
+ buffer will fail - clients need to check the return code from
+ relay_write() to determine if this is the case and act
+ accordingly - 0 or a negative value indicate the write failed.
+
+ - SMP usage - this applies only when the locking scheme is in
+ use. If RELAY_USAGE_SMP is specified, it's assumed that the
+ channel will be used in a per-CPU fashion and consequently,
+ the only locking that will be done for writes is to disable
+ local irqs. If RELAY_USAGE_GLOBAL is specified, it's assumed
+ that writes to the buffer can occur within any CPU context,
+	  and spin_lock_irqsave() will be used to lock the buffer.
+
+ - delivery mode - if RELAY_DELIVERY_BULK is specified, the
+ client will be notified via its deliver() callback whenever a
+ sub-buffer has been filled. Alternatively,
+ RELAY_DELIVERY_PACKET will cause delivery to occur after the
+ completion of each write. See the description of the channel
+ callbacks below for more details.
+
+ - timestamping - if RELAY_TIMESTAMP_TSC is specified and the
+ architecture supports it, efficient TSC 'timestamps' can be
+ associated with each write, otherwise more expensive
+ gettimeofday() timestamping is used. At the beginning of
+ each sub-buffer, a gettimeofday() timestamp and the current
+ TSC, if supported, are read, and are passed on to the client
+ via the buffer_start() callback. This allows correlation of
+ the current time with the current TSC for subsequent writes.
+ Each subsequent write is associated with a 'time delta',
+ which is either the current TSC, if the channel is using
+ TSCs, or the difference between the buffer_start gettimeofday
+ timestamp and the gettimeofday time read for the current
+ write. Note that relayfs never writes either a timestamp or
+ time delta into the buffer unless explicitly asked to (see
+ the description of relay_write() for details).
+
+- bufsize - the size of the 'sub-buffers' making up the circular channel
+ buffer. For the lockless scheme, this must be a power of 2.
+
+- nbufs - the number of 'sub-buffers' making up the circular
+ channel buffer. This must be a power of 2.
+
+ The total size of the channel buffer is bufsize * nbufs rounded up
+ to the next kernel page size. If the lockless scheme is used, both
+ bufsize and nbufs must be a power of 2. If the locking scheme is
+ used, the bufsize can be anything and nbufs must be a power of 2. If
+ RELAY_SCHEME_ANY is used, the bufsize and nbufs should be a power of 2.
+
+ NOTE: if nbufs is 1, relayfs will bypass the normal size
+ checks and will allocate an rvmalloced buffer of size bufsize.
+ This buffer will be freed when relay_close() is called, if the channel
+ isn't still being referenced.
+
+- callbacks - a table of callback functions called when events occur
+ within the data relay that clients need to know about:
+
+ - int buffer_start(channel_id, current_write_pos, buffer_id,
+ start_time, start_tsc, using_tsc) -
+
+ called at the beginning of a new sub-buffer, the
+ buffer_start() callback gives the client an opportunity to
+ write data into space reserved at the beginning of a
+ sub-buffer. The client should only write into the buffer
+ if it specified a value for start_reserve and/or
+ channel_start_reserve (see below) when the channel was
+ opened. In the latter case, the client can determine
+	  whether to write its one-time channel_start_reserve data by
+ examining the value of buffer_id, which will be 0 for the
+ first sub-buffer. The address that the client can write
+ to is contained in current_write_pos (the client by
+ definition knows how much it can write i.e. the value it
+ passed to relay_open() for start_reserve/
+ channel_start_reserve). start_time contains the
+ gettimeofday() value for the start of the buffer and start
+ TSC contains the TSC read at the same time. The using_tsc
+ param indicates whether or not start_tsc is valid (it
+ wouldn't be if TSC timestamping isn't being used).
+
+ The client should return the number of bytes it wrote to
+ the channel, 0 if none.
+
+ - int buffer_end(channel_id, current_write_pos, end_of_buffer,
+ end_time, end_tsc, using_tsc)
+
+ called at the end of a sub-buffer, the buffer_end()
+ callback gives the client an opportunity to perform
+ end-of-buffer processing. Note that the current_write_pos
+ is the position where the next write would occur, but
+ since the current write wouldn't fit (which is the trigger
+ for the buffer_end event), the buffer is considered full
+ even though there may be unused space at the end. The
+ end_of_buffer param pointer value can be used to determine
+ exactly the size of the unused space. The client should
+ only write into the buffer if it specified a value for
+ end_reserve when the channel was opened. If the client
+ doesn't write anything i.e. returns 0, the unused space at
+ the end of the sub-buffer is available via relay_info() -
+ this data may be needed by the client later if it needs to
+ process raw sub-buffers (an alternative would be to save
+ the unused bytes count value in end_reserve space at the
+ end of each sub-buffer during buffer_end processing and
+ read it when needed at a later time. The other
+ alternative would be to use read(2), which makes the
+ unused count invisible to the caller). end_time contains
+ the gettimeofday() value for the end of the buffer and end
+ TSC contains the TSC read at the same time. The using_tsc
+ param indicates whether or not end_tsc is valid (it
+ wouldn't be if TSC timestamping isn't being used).
+
+ The client should return the number of bytes it wrote to
+ the channel, 0 if none.
+
+ - void deliver(channel_id, from, len)
+
+ called when data is ready for the client. This callback
+ is used to notify a client when a sub-buffer is complete
+ (in the case of bulk delivery) or a single write is
+ complete (packet delivery). A bulk delivery client might
+ wish to then signal a daemon that a sub-buffer is ready.
+ A packet delivery client might wish to process the packet
+ or send it elsewhere. The from param is a pointer to the
+ delivered data and len specifies how many bytes are ready.
+
+ - void user_deliver(channel_id, from, len)
+
+ called when data has been written to the channel from user
+ space. This callback is used to notify a client when a
+ successful write from userspace has occurred, independent
+ of whether bulk or packet delivery is in use. This can be
+ used to allow userspace programs to communicate with the
+ kernel client through the channel via out-of-band write(2)
+ 'commands' instead of via ioctls, for instance. The from
+ param is a pointer to the delivered data and len specifies
+ how many bytes are ready. Note that this callback occurs
+ after the bytes have been successfully written into the
+ channel, which means that channel readers must be able to
+ deal with the 'command' data which will appear in the
+ channel data stream just as any other userspace or
+ non-userspace write would.
+
+ - int needs_resize(channel_id, resize_type,
+ suggested_buf_size, suggested_n_bufs)
+
+ called when a channel's buffers are in danger of becoming
+ full i.e. the number of unread bytes in the channel passes
+ a preset threshold, or when the current capacity of a
+ channel's buffer is no longer needed. Also called to
+ notify the client when a channel's buffer has been
+ replaced. If resize_type is RELAY_RESIZE_EXPAND or
+ RELAY_RESIZE_SHRINK, the kernel client should arrange to
+ call relay_realloc_buffer() with the suggested buffer size
+	  and buffer count, which will allocate a new buffer of the
+	  recommended size for the channel, but will not replace the
+	  old one. When the allocation has completed,
+ needs_resize() is again called, this time with a
+ resize_type of RELAY_RESIZE_REPLACE. The kernel client
+ should then arrange to call relay_replace_buffer() to
+ actually replace the old channel buffer with the newly
+ allocated buffer. Finally, once the buffer replacement
+ has completed, needs_resize() is again called, this time
+ with a resize_type of RELAY_RESIZE_REPLACED, to inform the
+ client that the replacement is complete and additionally
+ confirming the current sub-buffer size and number of
+ sub-buffers. Note that a resize can be canceled if
+ relay_realloc_buffer() is called with the async param
+ non-zero and the resize conditions no longer hold. In
+ this case, the RELAY_RESIZE_REPLACED suggested number of
+ sub-buffers will be the same as the number of sub-buffers
+ that existed before the RELAY_RESIZE_SHRINK or EXPAND i.e.
+ values indicating that the resize didn't actually occur.
+
+ - int fileop_notify(channel_id, struct file *filp, enum relay_fileop)
+
+ called when a userspace file operation has occurred or
+ will occur on a relayfs channel file. These notifications
+ can be used by the kernel client to trigger actions within
+ the kernel client when the corresponding event occurs,
+ such as enabling logging only when a userspace application
+ opens or mmaps a relayfs file and disabling it again when
+ the file is closed or unmapped. The kernel client can
+ also return its own return value, which can affect the
+	  outcome of the file operation - returning 0 indicates that the
+	  operation should succeed, and returning a negative value
+	  indicates that the operation should fail, and that
+ the returned value should be returned to the ultimate
+ caller e.g. returning -EPERM from the open fileop will
+ cause the open to fail with -EPERM. Among other things,
+ the return value can be used to restrict a relayfs file
+ from being opened or mmap'ed more than once. The currently
+ implemented fileops are:
+
+ RELAY_FILE_OPEN - a relayfs file is being opened. Return
+ 0 to allow it to succeed, negative to
+ have it fail. A negative return value will
+ be passed on unmodified to the open fileop.
+ RELAY_FILE_CLOSE- a relayfs file is being closed. The return
+ value is ignored.
+ RELAY_FILE_MAP - a relayfs file is being mmap'ed. Return 0
+ to allow it to succeed, negative to have
+ it fail. A negative return value will be
+ passed on unmodified to the mmap fileop.
+ RELAY_FILE_UNMAP- a relayfs file is being unmapped. The return
+ value is ignored.
+
+	- int ioctl(rchan_id, cmd, arg)
+
+ called when an ioctl call is made using a relayfs file
+ descriptor. The cmd and arg are passed along to this
+ callback unmodified for it to do as it wishes with. The
+ return value from this callback is used as the return value
+ of the ioctl call.
+
+ If the callbacks param passed to relay_open() is NULL, a set of
+ default do-nothing callbacks will be defined for the channel.
+ Likewise, any NULL rchan_callback function contained in a non-NULL
+ callbacks struct will be filled in with a default callback function
+ that does nothing.
+
+- start_reserve - the number of bytes to be reserved at the start of
+ each sub-buffer. The client can do what it wants with this number
+ of bytes when the buffer_start() callback is invoked. Typically
+ clients would use this to write per-sub-buffer header data.
+
+- end_reserve - the number of bytes to be reserved at the end of each
+ sub-buffer. The client can do what it wants with this number of
+ bytes when the buffer_end() callback is invoked. Typically clients
+ would use this to write per-sub-buffer footer data.
+
+- channel_start_reserve - the number of bytes to be reserved, in
+ addition to start_reserve, at the beginning of the first sub-buffer
+ in the channel. The client can do what it wants with this number of
+ bytes when the buffer_start() callback is invoked. Typically
+ clients would use this to write per-channel header data.
+
+- resize_min - if set, this signifies that the channel is
+ auto-resizeable. The value specifies the size that the channel will
+ try to maintain as a normal working size, and that it won't go
+ below. The client makes use of the resizing callbacks and
+ relay_realloc_buffer() and relay_replace_buffer() to actually effect
+ the resize.
+
+- resize_max - if set, this signifies that the channel is
+ auto-resizeable. The value specifies the maximum size the channel
+ can have as a result of resizing.
+
+- mode - if non-zero, specifies the file permissions that will be given
+ to the channel file. If 0, the default rw user perms will be used.
+
+- init_buf - if non-NULL, rather than allocating the channel buffer,
+ this buffer will be used as the initial channel buffer. The kernel
+ API function relay_discard_init_buf() can later be used to have
+ relayfs allocate a normal mmappable channel buffer and switch over
+ to using it after copying the init_buf contents into it. Currently,
+ the size of init_buf must be exactly buf_size * n_bufs. The caller
+ is responsible for managing the init_buf memory. This feature is
+ typically used for init-time channel use and should normally be
+ specified as NULL.
+
+- init_buf_size - the total size of init_buf, if init_buf is specified
+ as non-NULL. Currently, the size of init_buf must be exactly
+ buf_size * n_bufs.
+
+Upon successful completion, relay_open() returns a channel id
+to be used for all other operations with the relay. All buffers
+managed by the relay are allocated using rvmalloc/rvfree to allow
+for easy mmapping to user-space.
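+Putting the pieces together, a channel open might be sketched as
+follows. The callback table type rchan_callbacks, the parameter
+types, and the flag choices are illustrative - check the relayfs
+headers for the exact signatures:
+
+```c
+/* nothing reserved in this example, so just return 0 bytes written */
+static int my_buffer_start(int rchan_id, char *current_write_pos,
+			   u32 buffer_id, struct timeval start_time,
+			   u32 start_tsc, int using_tsc)
+{
+	return 0;
+}
+
+/* refuse concurrent userspace opens of the channel file */
+static int my_fileop_notify(int rchan_id, struct file *filp,
+			    enum relay_fileop fileop)
+{
+	static int open_count;
+
+	switch (fileop) {
+	case RELAY_FILE_OPEN:
+		if (open_count)
+			return -EBUSY;	/* passed back to open(2) */
+		open_count++;
+		break;
+	case RELAY_FILE_CLOSE:
+		open_count--;
+		break;
+	default:
+		break;
+	}
+	return 0;
+}
+
+static struct rchan_callbacks my_callbacks = {
+	.buffer_start = my_buffer_start,
+	.fileop_notify = my_fileop_notify,
+	/* unspecified callbacks become do-nothing defaults */
+};
+
+/* 8 sub-buffers of 64KB each, under the top-level "xlog" directory */
+int rchan_id = relay_open("/xlog/9", 65536, 8,
+			  RELAY_SCHEME_ANY | RELAY_MODE_CONTINUOUS |
+			  RELAY_USAGE_SMP | RELAY_DELIVERY_BULK |
+			  RELAY_TIMESTAMP_TSC,
+			  &my_callbacks,
+			  0, 0, 0,	/* no reserved space */
+			  0, 0,		/* not auto-resizeable */
+			  0,		/* default file perms */
+			  NULL, 0);	/* no init_buf */
+```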
+
+----------
+int relay_write(channel_id, *data_ptr, count, time_delta_offset, **wrote_pos)
+
+relay_write() reserves space in the channel and writes count bytes of
+data pointed to by data_ptr to it. Automatically performs any
+necessary locking, depending on the scheme and SMP usage in effect (no
+locking is done for the lockless scheme regardless of usage). It
+returns the number of bytes written, or 0/negative on failure. If
+time_delta_offset is >= 0, the internal time delta calculated when
+the slot was reserved will be written at that offset. This is the
+TSC or gettimeofday() delta between the current
+write and the beginning of the buffer, whichever method is being used
+by the channel. Trying to write a count larger than the bufsize
+specified to relay_open() (taking into account the reserved
+start-of-buffer and end-of-buffer space as well) will fail. If
+wrote_pos is non-NULL, it will receive the location the data was
+written to, which may be needed for some applications but is not
+normally interesting. Most applications should pass in NULL for this
+param.
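+For example (struct my_event, its 'delta' field, and fill_event() are
+hypothetical; a negative time_delta_offset would mean no time delta is
+written into the event):
+
+```c
+struct my_event event;
+int ret;
+
+fill_event(&event);
+ret = relay_write(rchan_id, &event, sizeof(event),
+		  offsetof(struct my_event, delta),	/* time_delta_offset */
+		  NULL);				/* wrote_pos not needed */
+if (ret <= 0)
+	dropped_events++;	/* 0/negative means the write failed */
+```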
+
+----------
+struct rchan_reader *add_rchan_reader(int rchan_id, int auto_consume)
+
+add_rchan_reader creates and initializes a reader object for a
+channel. An opaque rchan_reader object is returned on success, and is
+passed to relay_read() when reading the channel. If the boolean
+auto_consume parameter is 1, the reader is defined to be
+auto-consuming. Auto-consuming reader objects are automatically
+created and used for VFS read(2) readers.
+
+----------
+void remove_rchan_reader(struct rchan_reader *reader)
+
+remove_rchan_reader finds and removes the given reader from the
+channel. This function is used only by non-VFS read(2) readers. VFS
+read(2) readers are automatically removed when the corresponding file
+object is closed.
+
+----------
+rchan_reader *add_map_reader(int rchan_id)
+
+Creates and initializes an rchan_reader object for channel map
+readers. The returned reader is passed to relay_buffers_consumed()
+or relay_bytes_consumed() when the kernel client learns from its
+mmap user clients that data has been read.
+
+----------
+int remove_map_reader(reader)
+
+Finds and removes the given map reader from the channel. This function
+is useful only for map readers.
+
+----------
+int relay_read(reader, buf, count, wait, *actual_read_offset)
+
+Reads count bytes from the channel, or as much as is available within
+the sub-buffer currently being read. The read offset that will be
+read from is the position contained within the reader object. If the
+wait flag is set, buf is non-NULL, and there is nothing available, it
+will wait until there is. If the wait flag is 0 and there is nothing
+available, -EAGAIN is returned. If buf is NULL, the value returned is
+the number of bytes that would have been read. actual_read_offset is
+the value that should be passed as the read offset to
+relay_bytes_consumed, needed only if the reader is not auto-consuming
+and the channel is MODE_NO_OVERWRITE, but in any case, it must not be
+NULL.
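+For a non-auto-consuming reader on a RELAY_MODE_NO_OVERWRITE channel,
+each read would be paired with a relay_bytes_consumed() call, roughly
+(process_data() is hypothetical):
+
+```c
+char buf[512];
+u32 read_offset;
+int ret;
+
+for (;;) {
+	/* wait == 1: block until something is available */
+	ret = relay_read(reader, buf, sizeof(buf), 1, &read_offset);
+	if (ret <= 0)
+		break;
+	process_data(buf, ret);
+	relay_bytes_consumed(reader, ret, read_offset);
+}
+```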
+
+----------
+
+int relay_bytes_avail(reader)
+
+Returns the number of bytes available relative to the reader's current
+read position within the corresponding sub-buffer, 0 if there is
+nothing available. Note that this doesn't return the total bytes
+available in the channel buffer - it is, however, enough to know
+whether anything is available, and how many bytes the next read
+might return.
+
+----------
+void relay_buffers_consumed(reader, buffers_consumed)
+
+Adds to the channel's consumed buffer count. buffers_consumed should
+be the number of buffers newly consumed, not the total number
+consumed. NOTE: kernel clients don't need to call this function if
+the reader is auto-consuming or the channel is MODE_CONTINUOUS.
+
+In order for the relay to detect the 'buffers full' condition for a
+channel, it must be kept up-to-date with respect to the number of
+buffers consumed by the client. If the addition of the value of the
+bufs_consumed param to the current bufs_consumed count for the channel
+would exceed the bufs_produced count for the channel, the channel's
+bufs_consumed count will be set to the bufs_produced count for the
+channel. This allows clients to 'catch up' if necessary.
+
+----------
+void relay_bytes_consumed(reader, bytes_consumed, read_offset)
+
+Adds to the channel's consumed count. bytes_consumed should be the
+number of bytes actually read e.g. return value of relay_read() and
+the read_offset should be the actual offset the bytes were read from
+e.g. the actual_read_offset set by relay_read(). NOTE: kernel clients
+don't need to call this function if the reader is auto-consuming or
+the channel is MODE_CONTINUOUS.
+
+In order for the relay to detect the 'buffers full' condition for a
+channel, it must be kept up-to-date with respect to the number of
+bytes consumed by the client. For packet clients, it makes more sense
+to update after each read rather than after each complete sub-buffer
+read. The bytes_consumed count updates bufs_consumed whenever a
+complete sub-buffer has been consumed, so the two counts remain
+consistent.
+
+----------
+int relay_info(channel_id, *channel_info)
+
+relay_info() fills in an rchan_info struct with channel status and
+attribute information such as usage modes, sub-buffer size and count,
+the allocated size of the entire buffer, buffers produced and
+consumed, current buffer id, count of writes lost due to buffers full
+condition.
+
+The virtual address of the channel buffer is also available here, for
+those clients that need it.
+
+Clients may need to know how many 'unused' bytes there are at the end
+of a given sub-buffer. This is only the case if the client 1) didn't
+write this count to the end of the sub-buffer or otherwise note it -
+it's available as the difference between the end_of_buffer and
+current_write_pos params in the buffer_end() callback, and a client
+returning 0 from buffer_end() is assumed not to have noted it - and
+2) isn't using the read(2) system call to read the buffer. In other
+words, if the client isn't annotating the stream and is reading the
+buffer by mmaping it, this information is needed in order for the
+client to 'skip over' the unused bytes at the ends of sub-buffers.
+
+Additionally, for the lockless scheme, clients may need to know
+whether a particular sub-buffer is actually complete. An array of
+boolean values, one per sub-buffer, contains non-zero if the
+sub-buffer is complete, zero otherwise.
+
+----------
+int relay_close(channel_id)
+
+relay_close() is used to close the channel. It finalizes the last
+sub-buffer (the one currently being written to) and marks the channel
+as finalized. The channel buffer and channel data structure are then
+freed automatically when the last reference to the channel is given
+up.
+
+----------
+int relay_realloc_buffer(channel_id, nbufs, async)
+
+Allocates a new channel buffer using the specified sub-buffer count
+(note that resizing can't change sub-buffer sizes). If async is
+non-zero, the allocation is done in the background using a work queue.
+When the allocation has completed, the needs_resize() callback is
+called with a resize_type of RELAY_RESIZE_REPLACE. This function
+doesn't replace the old buffer with the new - see
+relay_replace_buffer().
+
+This function is called by kernel clients in response to a
+needs_resize() callback call with a resize type of RELAY_RESIZE_EXPAND
+or RELAY_RESIZE_SHRINK. That callback also includes a suggested
+new_bufsize and new_nbufs which should be used when calling this
+function.
+
+Returns 0 on success, or an error code if the channel is busy or if
+the allocation couldn't happen for some reason.
+
+NOTE: if async is not set, this function should not be called with a
+lock held, as it may sleep.
+
+----------
+int relay_replace_buffer(channel_id)
+
+Replaces the current channel buffer with the new buffer allocated by
+relay_realloc_buffer and contained in the channel struct. When the
+replacement is complete, the needs_resize() callback is called with
+RELAY_RESIZE_REPLACED. This function is called by kernel clients in
+response to a needs_resize() callback having a resize type of
+RELAY_RESIZE_REPLACE.
+
+Returns 0 on success, or an error code if the channel is busy or if
+the replacement or previous allocation didn't happen for some reason.
+
+NOTE: This function will not sleep, so it can be called in any context
+and with locks held. The client should, however, ensure that the
+channel isn't actively being read from or written to.
+
+----------
+int relay_reset(rchan_id)
+
+relay_reset() has the effect of erasing all data from the buffer and
+restarting the channel in its initial state. The buffer itself is not
+freed, so any mappings are still in effect. NOTE: Care should be
+taken that the channel isn't actually being used by anything when
+this call is made.
+
+----------
+int rchan_full(reader)
+
+returns 1 if the channel is full with respect to the reader, 0 if not.
+
+----------
+int rchan_empty(reader)
+
+returns 1 if the channel is empty with respect to the reader, 0 if not.
+
+----------
+int relay_discard_init_buf(rchan_id)
+
+allocates an mmappable channel buffer, copies the contents of init_buf
+into it, and sets the current channel buffer to the newly allocated
+buffer. This function is used only in conjunction with the init_buf
+and init_buf_size params to relay_open(), and is typically used when
+the ability to write into the channel at init-time is needed. The
+basic usage is to specify an init_buf and init_buf_size to relay_open,
+then call this function when it's safe to switch over to a normally
+allocated channel buffer. 'Safe' means that the caller is in a
+context that can sleep and that nothing is actively writing to the
+channel. Returns 0 if successful, negative otherwise.
+
+
+Writing directly into the channel
+=================================
+
+Using the relay_write() API function as described above is the
+preferred means of writing into a channel. In some cases, however,
+in-kernel clients might want to write directly into a relay channel
+rather than have relay_write() copy it into the buffer on the client's
+behalf. Clients wishing to do this should follow the model used to
+implement relay_write itself. The general sequence is:
+
+- get a pointer to the channel via rchan_get(). This increments the
+ channel's reference count.
+- call relay_lock_channel(). This will perform the proper locking for
+ the channel given the scheme in use and the SMP usage.
+- reserve a slot in the channel via relay_reserve()
+- write directly to the reserved address
+- call relay_commit() to commit the write
+- call relay_unlock_channel()
+- call rchan_put() to release the channel reference
+
+In particular, clients should make sure they call rchan_get() and
+rchan_put() and not hold on to references to the channel pointer.
+Also, forgetting to use relay_lock_channel()/relay_unlock_channel()
+has no effect if the lockless scheme is being used, but could result
+in corrupted buffer contents if the locking scheme is used.
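The sequence above can be sketched with a userspace mock. The real API (rchan_get(), relay_lock_channel(), relay_reserve(), relay_commit(), ...) is kernel-only; this toy version shows only the reserve/write/commit ordering, with no locking, reference counting, or sub-buffer switching:

```c
#include <stddef.h>
#include <string.h>

/* Toy stand-in for a relay channel: a flat buffer plus a write offset
 * and a count of committed (reader-visible) bytes. Illustrative only. */
#define MOCK_BUF_SIZE 64

struct mock_chan {
    char   buf[MOCK_BUF_SIZE];
    size_t offset;     /* next free position in the buffer */
    size_t committed;  /* bytes made visible to readers */
};

/* 'reserve' a slot: claim len bytes and return a pointer to write into */
static char *mock_reserve(struct mock_chan *ch, size_t len)
{
    if (ch->offset + len > MOCK_BUF_SIZE)
        return NULL;   /* the real code would switch sub-buffers here */
    char *slot = ch->buf + ch->offset;
    ch->offset += len;
    return slot;
}

/* 'commit' the write: make the reserved bytes visible to readers */
static void mock_commit(struct mock_chan *ch, size_t len)
{
    ch->committed += len;
}
```

The point of the split is that the caller writes directly into the reserved slot between the two calls, exactly as a direct-write relayfs client writes between relay_reserve() and relay_commit().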
+
+
+Limitations
+===========
+
+Writes made via the write() system call are currently limited to 2
+pages worth of data. There is no such limit on the in-kernel API
+function relay_write().
+
+User applications can currently only mmap the complete buffer (it
+doesn't really make sense to mmap only part of it, given its purpose).
+
+
+Latest version
+==============
+
+The latest version can be found at:
+
+http://www.opersys.com/relayfs
+
+Example relayfs clients, such as dynamic printk and the Linux Trace
+Toolkit, can also be found there.
+
+
+Credits
+=======
+
+The ideas and specs for relayfs came about as a result of discussions
+on tracing involving the following:
+
+Michel Dagenais <michel.dagenais@polymtl.ca>
+Richard Moore <richardj_moore@uk.ibm.com>
+Bob Wisniewski <bob@watson.ibm.com>
+Karim Yaghmour <karim@opersys.com>
+Tom Zanussi <zanussi@us.ibm.com>
+
+Also thanks to Hubertus Franke for a lot of useful suggestions and bug
+reports, and for contributing the klog code.
--- /dev/null
+This file contains some additional information for the Philips and OEM webcams.
+E-mail: webcam@smcc.demon.nl Last updated: 2004-01-19
+Site: http://www.smcc.demon.nl/webcam/
+
+As of this moment, the following cameras are supported:
+ * Philips PCA645
+ * Philips PCA646
+ * Philips PCVC675
+ * Philips PCVC680
+ * Philips PCVC690
+ * Philips PCVC720/40
+ * Philips PCVC730
+ * Philips PCVC740
+ * Philips PCVC750
+ * Askey VC010
+ * Creative Labs Webcam 5
+ * Creative Labs Webcam Pro Ex
+ * Logitech QuickCam 3000 Pro
+ * Logitech QuickCam 4000 Pro
+ * Logitech QuickCam Notebook Pro
+ * Logitech QuickCam Zoom
+ * Logitech QuickCam Orbit
+ * Logitech QuickCam Sphere
+ * Samsung MPC-C10
+ * Samsung MPC-C30
+ * Sotec Afina Eye
+ * AME CU-001
+ * Visionite VCS-UM100
+ * Visionite VCS-UC300
+
+The main webpage for the Philips driver is at the address above. It contains
+a lot of extra information, a FAQ, and the binary plugin 'PWCX'. This plugin
+contains decompression routines that allow you to use higher image sizes and
+framerates; in addition the webcam uses less bandwidth on the USB bus
+(handy if you want to run more than 1 camera simultaneously). These
+routines fall under an NDA, and may therefore not be distributed as
+source; however, their use is completely optional.
+
+You can build this code either into your kernel, or as a module. I recommend
+the latter, since it makes troubleshooting a lot easier. The built-in
+microphone is supported through the USB Audio class.
+
+When you load the module you can set some default settings for the
+camera; some programs depend on a particular image-size or -format and
+don't know how to set it properly in the driver. The options are:
+
+size
+ Can be one of 'sqcif', 'qsif', 'qcif', 'sif', 'cif' or
+ 'vga', for image sizes of 128x96, 160x120, 176x144, 320x240,
+ 352x288 and 640x480 respectively (of course, only for those
+ cameras that support these resolutions).
+
+fps
+ Specifies the desired framerate, as an integer in the range 4-30.
+
+fbufs
+ This parameter specifies the number of internal buffers to use for storing
+ frames from the cam. This will help if the process that reads images from
+ the cam is a bit slow or momentarily busy. However, on slow machines it
+ only introduces lag, so choose carefully. The default is 3, which is
+ reasonable. You can set it between 2 and 5.
+
+mbufs
+ This is an integer between 1 and 10. It will tell the module the number of
+ buffers to reserve for mmap(), VIDIOCCGMBUF, VIDIOCMCAPTURE and friends.
+ The default is 2, which is adequate for most applications (double
+ buffering).
+
+ Should you experience a lot of 'Dumping frame...' messages during
+ grabbing with a tool that uses mmap(), you might want to increase it.
+ However, it doesn't really buffer images, it just gives you a bit more
+ slack when your program is behind. But you need a multi-threaded or
+ forked program to really take advantage of these buffers.
+
+ The absolute maximum is 10, but don't set it too high! Every buffer takes
+ up 460 KB of RAM, so unless you have a lot of memory, setting this to
+ anything more than 4 is an absolute waste. This memory is only
+ allocated during open(), so nothing is wasted when the camera is not in
+ use.
+
+power_save
+ When power_save is enabled (set to 1), the module will try to shut down
+ the cam on close() and re-activate on open(). This will save power and
+ turn off the LED. Not all cameras support this though (the 645 and 646
+ don't have power saving at all), and some models don't work either (they
+ will shut down, but never wake up). Consider this experimental. By
+ default this option is disabled.
+
+compression (only useful with the plugin)
+ With this option you can control the compression factor that the camera
+ uses to squeeze the image through the USB bus. You can set the
+ parameter between 0 and 3:
+ 0 = prefer uncompressed images; if the requested mode is not available
+ in an uncompressed format, the driver will silently switch to low
+ compression.
+ 1 = low compression.
+ 2 = medium compression.
+ 3 = high compression.
+
+ High compression takes less bandwidth of course, but it could also
+ introduce some unwanted artefacts. The default is 2, medium compression.
+ See the FAQ on the website for an overview of which modes require
+ compression.
+
+ The compression parameter does not apply to the 645 and 646 cameras
+ and OEM models derived from those (only a few). Most cams honour this
+ parameter.
+
+leds
+ This settings takes 2 integers, that define the on/off time for the LED
+ (in milliseconds). One of the interesting things that you can do with
+ this is let the LED blink while the camera is in use. This:
+
+ leds=500,500
+
+ will blink the LED once every second. But with:
+
+ leds=0,0
+
+ the LED never goes on, making it suitable for silent surveillance.
+
+ By default the camera's LED is on solid while in use, and turned off
+ when the camera is not used anymore.
+
+ This parameter works only with the ToUCam range of cameras (720, 730, 740,
+ 750) and OEMs. For other cameras this command is silently ignored, and
+ the LED cannot be controlled.
+
+ Finally: this parameter does not take effect UNTIL the first time you
+ open the camera device. Until then, the LED remains on.
+
+dev_hint
+ A long standing problem with USB devices is their dynamic nature: you
+ never know what device a camera gets assigned; it depends on module load
+ order, the hub configuration, the order in which devices are plugged in,
+ and the phase of the moon (i.e. it can be random). With this option you
+ can give the driver a hint as to what video device node (/dev/videoX) it
+ should use with a specific camera. This is also handy if you have two
+ cameras of the same model.
+
+ A camera is specified by its type (the number from the camera model,
+ like PCA645, PCVC750VC, etc) and optionally the serial number (visible
+ in /proc/bus/usb/devices). A hint consists of a string with the following
+ format:
+
+ [type[.serialnumber]:]node
+
+ The square brackets mean that both the type and the serialnumber are
+ optional, but a serialnumber cannot be specified without a type (which
+ would be rather pointless). The serialnumber is separated from the type
+ by a '.'; the node number by a ':'.
+
+ This somewhat cryptic syntax is best explained by a few examples:
+
+ dev_hint=3,5 The first detected cam gets assigned
+ /dev/video3, the second /dev/video5. Any
+ other cameras will get the first free
+ available slot (see below).
+
+ dev_hint=645:1,680:2 The PCA645 camera will get /dev/video1,
+ and a PCVC680 /dev/video2.
+
+ dev_hint=645.0123:3,645.4567:0 The PCA645 camera with serialnumber
+ 0123 goes to /dev/video3, the same
+ camera model with the 4567 serial
+ gets /dev/video0.
+
+ dev_hint=750:1,4,5,6 The PCVC750 camera will get /dev/video1, the
+ next 3 Philips cams will use /dev/video4
+ through /dev/video6.
+
+ Some points worth knowing:
+ - Serialnumbers are case sensitive and must be written in full, including
+ leading zeroes (they are treated as strings).
+ - If a device node is already occupied, registration will fail and
+ the webcam is not available.
+ - You can have up to 64 video devices; be sure to make enough device
+ nodes in /dev if you want to spread the numbers (this does not apply
+ to devfs). After /dev/video9 comes /dev/video10 (not /dev/videoA).
+ - If a camera does not match any dev_hint, it will simply get assigned
+ the first available device node, just as it used to be.
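The "[type[.serialnumber]:]node" format above can be illustrated with a small standalone parser. This is not the driver's own code; parse_hint and its output buffers are purely hypothetical, and it handles a single hint, not the comma-separated list:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical parser for one dev_hint entry. Fills type and serial
 * (empty strings when absent) and the node number; returns 0 on
 * success, -1 on a malformed hint. */
static int parse_hint(const char *hint, char *type, char *serial, int *node)
{
    type[0] = serial[0] = '\0';

    const char *colon = strchr(hint, ':');
    if (!colon) /* bare node number, e.g. "3" */
        return sscanf(hint, "%d", node) == 1 ? 0 : -1;

    /* a '.' before the ':' separates type from serialnumber */
    const char *dot = memchr(hint, '.', colon - hint);
    if (dot) {
        memcpy(type, hint, dot - hint);
        type[dot - hint] = '\0';
        memcpy(serial, dot + 1, colon - dot - 1);
        serial[colon - dot - 1] = '\0';
    } else {
        memcpy(type, hint, colon - hint);
        type[colon - hint] = '\0';
    }
    return sscanf(colon + 1, "%d", node) == 1 ? 0 : -1;
}
```

So "645.0123:3" splits into type "645", serial "0123", node 3, matching the examples above.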
+
+trace
+ In order to better detect problems, it is now possible to turn on a
+ 'trace' of some of the calls the module makes; it logs all items in your
+ kernel log at debug level.
+
+ The trace variable is a bitmask; each bit represents a certain feature.
+ If you want to trace something, look up the bit value(s) in the table
+ below, add the values together and supply that to the trace variable.
+
+ Value Value Description Default
+ (dec) (hex)
+ 1 0x1 Module initialization; this will log messages On
+ while loading and unloading the module
+
+ 2 0x2 probe() and disconnect() traces On
+
+ 4 0x4 Trace open() and close() calls Off
+
+ 8 0x8 read(), mmap() and associated ioctl() calls Off
+
+ 16 0x10 Memory allocation of buffers, etc. Off
+
+ 32 0x20 Showing underflow, overflow and Dumping frame On
+ messages
+
+ 64 0x40 Show viewport and image sizes Off
+
+ 128 0x80 PWCX debugging Off
+
+ For example, to trace the open() & read() functions, sum 8 + 4 = 12,
+ so you would supply trace=12 during insmod or modprobe. If
+ you want to turn the initialization and probing tracing off, set trace=0.
+ The default value for trace is 35 (0x23).
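The table above amounts to a set of flags OR'd together. The constant names below are illustrative (the module itself just takes the summed integer), but the values match the table:

```c
/* Illustrative names for the trace bits tabulated above. */
enum {
    PWC_TRACE_MODULE = 0x01, /* module load/unload          (default on) */
    PWC_TRACE_PROBE  = 0x02, /* probe()/disconnect()        (default on) */
    PWC_TRACE_OPEN   = 0x04, /* open()/close()                           */
    PWC_TRACE_READ   = 0x08, /* read(), mmap(), ioctl()                  */
    PWC_TRACE_MEMORY = 0x10, /* buffer allocation                        */
    PWC_TRACE_FLOW   = 0x20, /* under/overflow, Dumping frame (default on) */
    PWC_TRACE_SIZE   = 0x40, /* viewport and image sizes                 */
    PWC_TRACE_PWCX   = 0x80, /* PWCX debugging                           */
};

/* The default of 35 (0x23) is just the three default-on bits OR'd. */
static unsigned int pwc_trace_default(void)
{
    return PWC_TRACE_MODULE | PWC_TRACE_PROBE | PWC_TRACE_FLOW;
}
```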
+
+Example:
+
+ # modprobe pwc size=cif fps=15 power_save=1
+
+The fbufs, mbufs and trace parameters are global and apply to all connected
+cameras. Each camera has its own set of buffers.
+
+size and fps only specify defaults when you open() the device; this is to
+accommodate some tools that don't set the size. You can change these
+settings after open() with the Video4Linux ioctl() calls. The default of
+defaults is QCIF size at 10 fps.
+
+The compression parameter is semiglobal; it sets the initial compression
+preference for all cameras, but this parameter can be set per camera with
+the VIDIOCPWCSCQUAL ioctl() call.
+
+All parameters are optional.
+
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 12
-EXTRAVERSION = -1.1_1398_FC4.1.planetlab
-NAME=Woozy Numbat
+EXTRAVERSION = -1.1_1390_FC4.1.planetlab.2005.08.04
+NAME=Woozy Beaver
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
.quad alpha_ni_syscall /* 270 */
.quad alpha_ni_syscall
.quad alpha_ni_syscall
- .quad alpha_ni_syscall
+ .quad sys_vserver /* 273 sys_vserver */
.quad alpha_ni_syscall
.quad alpha_ni_syscall /* 275 */
.quad alpha_ni_syscall
--- /dev/null
+/*
+ * Alpha IO and memory functions.. Just expand the inlines in the header
+ * files..
+ */
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/string.h>
+
+#include <asm/io.h>
+
+u8 _inb(unsigned long addr)
+{
+ return __inb(addr);
+}
+
+u16 _inw(unsigned long addr)
+{
+ return __inw(addr);
+}
+
+u32 _inl(unsigned long addr)
+{
+ return __inl(addr);
+}
+
+
+void _outb(u8 b, unsigned long addr)
+{
+ __outb(b, addr);
+}
+
+void _outw(u16 b, unsigned long addr)
+{
+ __outw(b, addr);
+}
+
+void _outl(u32 b, unsigned long addr)
+{
+ __outl(b, addr);
+}
+
+u8 ___raw_readb(unsigned long addr)
+{
+ return __readb(addr);
+}
+
+u16 ___raw_readw(unsigned long addr)
+{
+ return __readw(addr);
+}
+
+u32 ___raw_readl(unsigned long addr)
+{
+ return __readl(addr);
+}
+
+u64 ___raw_readq(unsigned long addr)
+{
+ return __readq(addr);
+}
+
+u8 _readb(unsigned long addr)
+{
+ unsigned long r = __readb(addr);
+ mb();
+ return r;
+}
+
+u16 _readw(unsigned long addr)
+{
+ unsigned long r = __readw(addr);
+ mb();
+ return r;
+}
+
+u32 _readl(unsigned long addr)
+{
+ unsigned long r = __readl(addr);
+ mb();
+ return r;
+}
+
+u64 _readq(unsigned long addr)
+{
+ unsigned long r = __readq(addr);
+ mb();
+ return r;
+}
+
+void ___raw_writeb(u8 b, unsigned long addr)
+{
+ __writeb(b, addr);
+}
+
+void ___raw_writew(u16 b, unsigned long addr)
+{
+ __writew(b, addr);
+}
+
+void ___raw_writel(u32 b, unsigned long addr)
+{
+ __writel(b, addr);
+}
+
+void ___raw_writeq(u64 b, unsigned long addr)
+{
+ __writeq(b, addr);
+}
+
+void _writeb(u8 b, unsigned long addr)
+{
+ __writeb(b, addr);
+ mb();
+}
+
+void _writew(u16 b, unsigned long addr)
+{
+ __writew(b, addr);
+ mb();
+}
+
+void _writel(u32 b, unsigned long addr)
+{
+ __writel(b, addr);
+ mb();
+}
+
+void _writeq(u64 b, unsigned long addr)
+{
+ __writeq(b, addr);
+ mb();
+}
+
+/*
+ * Read COUNT 8-bit bytes from port PORT into memory starting at
+ * SRC.
+ */
+void insb (unsigned long port, void *dst, unsigned long count)
+{
+ while (((unsigned long)dst) & 0x3) {
+ if (!count)
+ return;
+ count--;
+ *(unsigned char *) dst = inb(port);
+ dst += 1;
+ }
+
+ while (count >= 4) {
+ unsigned int w;
+ count -= 4;
+ w = inb(port);
+ w |= inb(port) << 8;
+ w |= inb(port) << 16;
+ w |= inb(port) << 24;
+ *(unsigned int *) dst = w;
+ dst += 4;
+ }
+
+ while (count) {
+ --count;
+ *(unsigned char *) dst = inb(port);
+ dst += 1;
+ }
+}
+
+
+/*
+ * Read COUNT 16-bit words from port PORT into memory starting at
+ * SRC. SRC must be at least short aligned. This is used by the
+ * IDE driver to read disk sectors. Performance is important, but
+ * the interfaces seems to be slow: just using the inlined version
+ * of the inw() breaks things.
+ */
+void insw (unsigned long port, void *dst, unsigned long count)
+{
+ if (((unsigned long)dst) & 0x3) {
+ if (((unsigned long)dst) & 0x1) {
+ panic("insw: memory not short aligned");
+ }
+ if (!count)
+ return;
+ count--;
+ *(unsigned short *) dst = inw(port);
+ dst += 2;
+ }
+
+ while (count >= 2) {
+ unsigned int w;
+ count -= 2;
+ w = inw(port);
+ w |= inw(port) << 16;
+ *(unsigned int *) dst = w;
+ dst += 4;
+ }
+
+ if (count) {
+ *(unsigned short*) dst = inw(port);
+ }
+}
+
+
+/*
+ * Read COUNT 32-bit words from port PORT into memory starting at
+ * SRC. Now works with any alignment in SRC. Performance is important,
+ * but the interfaces seems to be slow: just using the inlined version
+ * of the inl() breaks things.
+ */
+void insl (unsigned long port, void *dst, unsigned long count)
+{
+ unsigned int l = 0, l2;
+
+ if (!count)
+ return;
+
+ switch (((unsigned long) dst) & 0x3)
+ {
+ case 0x00: /* Buffer 32-bit aligned */
+ while (count--)
+ {
+ *(unsigned int *) dst = inl(port);
+ dst += 4;
+ }
+ break;
+
+ /* Assuming little endian Alphas in cases 0x01 -- 0x03 ... */
+
+ case 0x02: /* Buffer 16-bit aligned */
+ --count;
+
+ l = inl(port);
+ *(unsigned short *) dst = l;
+ dst += 2;
+
+ while (count--)
+ {
+ l2 = inl(port);
+ *(unsigned int *) dst = l >> 16 | l2 << 16;
+ dst += 4;
+ l = l2;
+ }
+ *(unsigned short *) dst = l >> 16;
+ break;
+
+ case 0x01: /* Buffer 8-bit aligned */
+ --count;
+
+ l = inl(port);
+ *(unsigned char *) dst = l;
+ dst += 1;
+ *(unsigned short *) dst = l >> 8;
+ dst += 2;
+ while (count--)
+ {
+ l2 = inl(port);
+ *(unsigned int *) dst = l >> 24 | l2 << 8;
+ dst += 4;
+ l = l2;
+ }
+ *(unsigned char *) dst = l >> 24;
+ break;
+
+ case 0x03: /* Buffer 8-bit aligned */
+ --count;
+
+ l = inl(port);
+ *(unsigned char *) dst = l;
+ dst += 1;
+ while (count--)
+ {
+ l2 = inl(port);
+ *(unsigned int *) dst = l << 24 | l2 >> 8;
+ dst += 4;
+ l = l2;
+ }
+ *(unsigned short *) dst = l >> 8;
+ dst += 2;
+ *(unsigned char *) dst = l >> 24;
+ break;
+ }
+}
+
+
+/*
+ * Like insb but in the opposite direction.
+ * Don't worry as much about doing aligned memory transfers:
+ * doing byte reads the "slow" way isn't nearly as slow as
+ * doing byte writes the slow way (no r-m-w cycle).
+ */
+void outsb(unsigned long port, const void * src, unsigned long count)
+{
+ while (count) {
+ count--;
+ outb(*(char *)src, port);
+ src += 1;
+ }
+}
+
+/*
+ * Like insw but in the opposite direction. This is used by the IDE
+ * driver to write disk sectors. Performance is important, but the
+ * interfaces seems to be slow: just using the inlined version of the
+ * outw() breaks things.
+ */
+void outsw (unsigned long port, const void *src, unsigned long count)
+{
+ if (((unsigned long)src) & 0x3) {
+ if (((unsigned long)src) & 0x1) {
+ panic("outsw: memory not short aligned");
+ }
+ outw(*(unsigned short*)src, port);
+ src += 2;
+ --count;
+ }
+
+ while (count >= 2) {
+ unsigned int w;
+ count -= 2;
+ w = *(unsigned int *) src;
+ src += 4;
+ outw(w >> 0, port);
+ outw(w >> 16, port);
+ }
+
+ if (count) {
+ outw(*(unsigned short *) src, port);
+ }
+}
+
+
+/*
+ * Like insl but in the opposite direction. This is used by the IDE
+ * driver to write disk sectors. Works with any alignment in SRC.
+ * Performance is important, but the interfaces seems to be slow:
+ * just using the inlined version of the outl() breaks things.
+ */
+void outsl (unsigned long port, const void *src, unsigned long count)
+{
+ unsigned int l = 0, l2;
+
+ if (!count)
+ return;
+
+ switch (((unsigned long) src) & 0x3)
+ {
+ case 0x00: /* Buffer 32-bit aligned */
+ while (count--)
+ {
+ outl(*(unsigned int *) src, port);
+ src += 4;
+ }
+ break;
+
+ case 0x02: /* Buffer 16-bit aligned */
+ --count;
+
+ l = *(unsigned short *) src << 16;
+ src += 2;
+
+ while (count--)
+ {
+ l2 = *(unsigned int *) src;
+ src += 4;
+ outl (l >> 16 | l2 << 16, port);
+ l = l2;
+ }
+ l2 = *(unsigned short *) src;
+ outl (l >> 16 | l2 << 16, port);
+ break;
+
+ case 0x01: /* Buffer 8-bit aligned */
+ --count;
+
+ l = *(unsigned char *) src << 8;
+ src += 1;
+ l |= *(unsigned short *) src << 16;
+ src += 2;
+ while (count--)
+ {
+ l2 = *(unsigned int *) src;
+ src += 4;
+ outl (l >> 8 | l2 << 24, port);
+ l = l2;
+ }
+ l2 = *(unsigned char *) src;
+ outl (l >> 8 | l2 << 24, port);
+ break;
+
+ case 0x03: /* Buffer 8-bit aligned */
+ --count;
+
+ l = *(unsigned char *) src << 24;
+ src += 1;
+ while (count--)
+ {
+ l2 = *(unsigned int *) src;
+ src += 4;
+ outl (l >> 24 | l2 << 8, port);
+ l = l2;
+ }
+ l2 = *(unsigned short *) src;
+ src += 2;
+ l2 |= *(unsigned char *) src << 16;
+ outl (l >> 24 | l2 << 8, port);
+ break;
+ }
+}
+
+
+/*
+ * Copy data from IO memory space to "real" memory space.
+ * This needs to be optimized.
+ */
+void _memcpy_fromio(void * to, unsigned long from, long count)
+{
+ /* Optimize co-aligned transfers. Everything else gets handled
+ a byte at a time. */
+
+ if (count >= 8 && ((unsigned long)to & 7) == (from & 7)) {
+ count -= 8;
+ do {
+ *(u64 *)to = __raw_readq(from);
+ count -= 8;
+ to += 8;
+ from += 8;
+ } while (count >= 0);
+ count += 8;
+ }
+
+ if (count >= 4 && ((unsigned long)to & 3) == (from & 3)) {
+ count -= 4;
+ do {
+ *(u32 *)to = __raw_readl(from);
+ count -= 4;
+ to += 4;
+ from += 4;
+ } while (count >= 0);
+ count += 4;
+ }
+
+ if (count >= 2 && ((unsigned long)to & 1) == (from & 1)) {
+ count -= 2;
+ do {
+ *(u16 *)to = __raw_readw(from);
+ count -= 2;
+ to += 2;
+ from += 2;
+ } while (count >= 0);
+ count += 2;
+ }
+
+ while (count > 0) {
+ *(u8 *) to = __raw_readb(from);
+ count--;
+ to++;
+ from++;
+ }
+}
+
+/*
+ * Copy data from "real" memory space to IO memory space.
+ * This needs to be optimized.
+ */
+void _memcpy_toio(unsigned long to, const void * from, long count)
+{
+ /* Optimize co-aligned transfers. Everything else gets handled
+ a byte at a time. */
+ /* FIXME -- align FROM. */
+
+ if (count >= 8 && (to & 7) == ((unsigned long)from & 7)) {
+ count -= 8;
+ do {
+ __raw_writeq(*(const u64 *)from, to);
+ count -= 8;
+ to += 8;
+ from += 8;
+ } while (count >= 0);
+ count += 8;
+ }
+
+ if (count >= 4 && (to & 3) == ((unsigned long)from & 3)) {
+ count -= 4;
+ do {
+ __raw_writel(*(const u32 *)from, to);
+ count -= 4;
+ to += 4;
+ from += 4;
+ } while (count >= 0);
+ count += 4;
+ }
+
+ if (count >= 2 && (to & 1) == ((unsigned long)from & 1)) {
+ count -= 2;
+ do {
+ __raw_writew(*(const u16 *)from, to);
+ count -= 2;
+ to += 2;
+ from += 2;
+ } while (count >= 0);
+ count += 2;
+ }
+
+ while (count > 0) {
+ __raw_writeb(*(const u8 *) from, to);
+ count--;
+ to++;
+ from++;
+ }
+ mb();
+}
+
+/*
+ * "memset" on IO memory space.
+ */
+void _memset_c_io(unsigned long to, unsigned long c, long count)
+{
+ /* Handle any initial odd byte */
+ if (count > 0 && (to & 1)) {
+ __raw_writeb(c, to);
+ to++;
+ count--;
+ }
+
+ /* Handle any initial odd halfword */
+ if (count >= 2 && (to & 2)) {
+ __raw_writew(c, to);
+ to += 2;
+ count -= 2;
+ }
+
+ /* Handle any initial odd word */
+ if (count >= 4 && (to & 4)) {
+ __raw_writel(c, to);
+ to += 4;
+ count -= 4;
+ }
+
+ /* Handle all full-sized quadwords: we're aligned
+ (or have a small count) */
+ count -= 8;
+ if (count >= 0) {
+ do {
+ __raw_writeq(c, to);
+ to += 8;
+ count -= 8;
+ } while (count >= 0);
+ }
+ count += 8;
+
+ /* The tail is word-aligned if we still have count >= 4 */
+ if (count >= 4) {
+ __raw_writel(c, to);
+ to += 4;
+ count -= 4;
+ }
+
+ /* The tail is half-word aligned if we have count >= 2 */
+ if (count >= 2) {
+ __raw_writew(c, to);
+ to += 2;
+ count -= 2;
+ }
+
+ /* And finally, one last byte.. */
+ if (count) {
+ __raw_writeb(c, to);
+ }
+ mb();
+}
+
+void
+scr_memcpyw(u16 *d, const u16 *s, unsigned int count)
+{
+ if (! __is_ioaddr((unsigned long) s)) {
+ /* Source is memory. */
+ if (! __is_ioaddr((unsigned long) d))
+ memcpy(d, s, count);
+ else
+ memcpy_toio(d, s, count);
+ } else {
+ /* Source is screen. */
+ if (! __is_ioaddr((unsigned long) d))
+ memcpy_fromio(d, s, count);
+ else {
+ /* FIXME: Should handle unaligned ops and
+ operation widening. */
+ count /= 2;
+ while (count--) {
+ u16 tmp = __raw_readw((unsigned long)(s++));
+ __raw_writew(tmp, (unsigned long)(d++));
+ }
+ }
+ }
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_EMBEDDED is not set
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+CONFIG_KMOD=y
+
+#
+# System Type
+#
+# CONFIG_ARCH_ADIFCC is not set
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+CONFIG_ARCH_IOP3XX=y
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_SHARK is not set
+
+#
+# CLPS711X/EP721X Implementations
+#
+
+#
+# Epxa10db
+#
+
+#
+# Footbridge Implementations
+#
+
+#
+# IOP3xx Implementation Options
+#
+CONFIG_ARCH_IQ80310=y
+# CONFIG_ARCH_IQ80321 is not set
+CONFIG_ARCH_IOP310=y
+# CONFIG_ARCH_IOP321 is not set
+
+#
+# IOP3xx Chipset Features
+#
+# CONFIG_IOP3XX_AAU is not set
+# CONFIG_IOP3XX_DMA is not set
+# CONFIG_IOP3XX_MU is not set
+# CONFIG_IOP3XX_PMON is not set
+
+#
+# ADIFCC Implementation Options
+#
+
+#
+# ADI Board Types
+#
+
+#
+# Intel PXA250/210 Implementations
+#
+
+#
+# SA11x0 Implementations
+#
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_XS80200=y
+CONFIG_CPU_32v5=y
+
+#
+# Processor Features
+#
+CONFIG_ARM_THUMB=y
+CONFIG_XSCALE_PMU=y
+
+#
+# General setup
+#
+CONFIG_PCI=y
+# CONFIG_ZBOOT_ROM is not set
+CONFIG_ZBOOT_ROM_TEXT=0x00060000
+CONFIG_ZBOOT_ROM_BSS=0xa1008000
+# CONFIG_PCI_LEGACY_PROC is not set
+CONFIG_PCI_NAMES=y
+# CONFIG_HOTPLUG is not set
+
+#
+# MMC/SD Card support
+#
+# CONFIG_MMC is not set
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_KCORE_ELF=y
+# CONFIG_KCORE_AOUT is not set
+CONFIG_BINFMT_AOUT=y
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+# CONFIG_PM is not set
+# CONFIG_PREEMPT is not set
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="console=ttyS0,115200 ip=bootp mem=32M root=/dev/nfs initrd=0xc0800000,4M"
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_REDBOOT_PARTS=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+# CONFIG_MTD_COMPLEX_MAPPINGS is not set
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+CONFIG_MTD_IQ80310=y
+# CONFIG_MTD_EDB7312 is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+# CONFIG_MTD_NAND is not set
+
+#
+# Plug and Play support
+#
+# CONFIG_PNP is not set
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_BLK_DEV_INITRD=y
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_PACKET is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+# CONFIG_IP_PNP_DHCP is not set
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_INET_ECN is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+
+#
+# IP: Netfilter Configuration
+#
+# CONFIG_IP_NF_CONNTRACK is not set
+# CONFIG_IP_NF_QUEUE is not set
+# CONFIG_IP_NF_IPTABLES is not set
+# CONFIG_IP_NF_ARPTABLES is not set
+# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
+# CONFIG_IP_NF_COMPAT_IPFWADM is not set
+
+#
+# IP: Virtual Server Configuration
+#
+# CONFIG_IP_VS is not set
+# CONFIG_IPV6 is not set
+# CONFIG_XFRM_USER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+CONFIG_IPV6_SCTP__=y
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_LLC is not set
+# CONFIG_DECNET is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+CONFIG_NETDEVICES=y
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_SMC91X is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_DGRS is not set
+CONFIG_EEPRO100=y
+# CONFIG_EEPRO100_PIO is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Token Ring devices (depends on LLC=y)
+#
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+
+#
+# IrDA (infrared) support
+#
+# CONFIG_IRDA is not set
+
+#
+# Amateur Radio support
+#
+# CONFIG_HAMRADIO is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+
+#
+# IDE, ATA and ATAPI Block devices
+#
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+# CONFIG_BLK_DEV_HD is not set
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_IDEDISK_STROKE is not set
+CONFIG_BLK_DEV_IDECD=y
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+# CONFIG_BLK_DEV_IDEPCI is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# IEEE 1394 (FireWire) support (EXPERIMENTAL)
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN_BOOL is not set
+
+#
+# Input device support
+#
+# CONFIG_INPUT is not set
+
+#
+# Userland interfaces
+#
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+
+#
+# Input Device Drivers
+#
+
+#
+# Character devices
+#
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_DZ is not set
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_UNIX98_PTY_COUNT=256
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# I2C Hardware Sensors Mainboard support
+#
+
+#
+# I2C Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+
+#
+# L3 serial bus support
+#
+# CONFIG_L3 is not set
+
+#
+# Mice
+#
+# CONFIG_BUSMOUSE is not set
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+# CONFIG_HANGCHECK_TIMER is not set
+
+#
+# Multimedia devices
+#
+CONFIG_VIDEO_DEV=y
+
+#
+# Video For Linux
+#
+# CONFIG_VIDEO_PROC_FS is not set
+
+#
+# Video Adapters
+#
+# CONFIG_VIDEO_PMS is not set
+# CONFIG_VIDEO_CPIA is not set
+# CONFIG_VIDEO_STRADIS is not set
+# CONFIG_VIDEO_HEXIUM_ORION is not set
+# CONFIG_VIDEO_HEXIUM_GEMINI is not set
+
+#
+# Radio Adapters
+#
+# CONFIG_RADIO_GEMTEK_PCI is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_MAESTRO is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+CONFIG_DVB=y
+CONFIG_DVB_CORE=y
+
+#
+# Supported Frontend Modules
+#
+# CONFIG_DVB_STV0299 is not set
+# CONFIG_DVB_ALPS_BSRV2 is not set
+# CONFIG_DVB_ALPS_TDLB7 is not set
+# CONFIG_DVB_ALPS_TDMB7 is not set
+# CONFIG_DVB_ATMEL_AT76C651 is not set
+# CONFIG_DVB_CX24110 is not set
+# CONFIG_DVB_GRUNDIG_29504_491 is not set
+# CONFIG_DVB_GRUNDIG_29504_401 is not set
+# CONFIG_DVB_MT312 is not set
+# CONFIG_DVB_VES1820 is not set
+# CONFIG_DVB_TDA1004X is not set
+
+#
+# Supported SAA7146 based PCI Adapters
+#
+# CONFIG_DVB_AV7110 is not set
+# CONFIG_DVB_BUDGET is not set
+
+#
+# Supported FlexCopII (B2C2) Adapters
+#
+# CONFIG_DVB_B2C2_SKYSTAR is not set
+# CONFIG_VIDEO_BTCX is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+# CONFIG_DEVFS_FS is not set
+CONFIG_DEVPTS_FS=y
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_SUNRPC_GSS is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia Capabilities Port drivers
+#
+# CONFIG_MCP is not set
+
+#
+# Console Switches
+#
+# CONFIG_SWITCHES is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+# CONFIG_USB_GADGET is not set
+
+#
+# Bluetooth support
+#
+# CONFIG_BT is not set
+
+#
+# Kernel hacking
+#
+CONFIG_FRAME_POINTER=y
+CONFIG_DEBUG_USER=y
+# CONFIG_DEBUG_INFO is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_WAITQ is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_ERRORS=y
+CONFIG_DEBUG_LL=y
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC32 is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
.long sys_mq_notify
.long sys_mq_getsetattr
/* 280 */ .long sys_waitid
- .long sys_socket
.long sys_bind
.long sys_connect
.long sys_listen
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/iop310-irq.c
+ *
+ * Generic IOP310 IRQ handling functionality
+ *
+ * Author: Nicolas Pitre
+ * Copyright: (C) 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Added IOP310 chipset and IQ80310 board demuxing, masking code. - DS
+ *
+ */
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+
+#include <asm/mach/irq.h>
+#include <asm/irq.h>
+#include <asm/hardware.h>
+
+extern void xs80200_irq_mask(unsigned int);
+extern void xs80200_irq_unmask(unsigned int);
+extern void xs80200_init_irq(void);
+
+extern void do_IRQ(int, struct pt_regs *);
+
+static u32 iop310_mask /* = 0 */;
+
+static void iop310_irq_mask(unsigned int irq)
+{
+ iop310_mask++;
+
+ /*
+ * No mask bits on the 80312, so we have to
+ * mask everything from the outside!
+ */
+ if (iop310_mask == 1) {
+ disable_irq(IRQ_XS80200_EXTIRQ);
+ irq_desc[IRQ_XS80200_EXTIRQ].chip->mask(IRQ_XS80200_EXTIRQ);
+ }
+}
+
+static void iop310_irq_unmask(unsigned int irq)
+{
+ if (iop310_mask)
+ iop310_mask--;
+
+ /*
+ * Check if all 80312 sources are unmasked now
+ */
+ if (iop310_mask == 0)
+ enable_irq(IRQ_XS80200_EXTIRQ);
+}
+
+struct irqchip ext_chip = {
+ .ack = iop310_irq_mask,
+ .mask = iop310_irq_mask,
+ .unmask = iop310_irq_unmask,
+};
+
+void
+iop310_irq_demux(unsigned int irq, struct irqdesc *desc, struct pt_regs *regs)
+{
+ u32 fiq1isr = *((volatile u32 *)IOP310_FIQ1ISR);
+ u32 fiq2isr = *((volatile u32 *)IOP310_FIQ2ISR);
+ struct irqdesc *d;
+ unsigned int irqno = 0;
+
+ if (fiq1isr) {
+ if (fiq1isr & 0x1)
+ irqno = IRQ_IOP310_DMA0;
+ if (fiq1isr & 0x2)
+ irqno = IRQ_IOP310_DMA1;
+ if (fiq1isr & 0x4)
+ irqno = IRQ_IOP310_DMA2;
+ if (fiq1isr & 0x10)
+ irqno = IRQ_IOP310_PMON;
+ if (fiq1isr & 0x20)
+ irqno = IRQ_IOP310_AAU;
+ } else {
+ if (fiq2isr & 0x2)
+ irqno = IRQ_IOP310_I2C;
+ if (fiq2isr & 0x4)
+ irqno = IRQ_IOP310_MU;
+ }
+
+ if (irqno) {
+ d = irq_desc + irqno;
+ d->handle(irqno, d, regs);
+ }
+}
+
+void __init iop310_init_irq(void)
+{
+ unsigned int i;
+
+ for (i = IOP310_IRQ_OFS; i < NR_IOP310_IRQS; i++) {
+ set_irq_chip(i, &ext_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID | IRQF_PROBE);
+ }
+
+ xs80200_init_irq();
+}
--- /dev/null
+/*
+ * arch/arm/mach-iop3xx/iop310-pci.c
+ *
+ * PCI support for the Intel IOP310 chipset
+ *
+ * Matt Porter <mporter@mvista.com>
+ *
+ * Copyright (C) 2001 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/mach/pci.h>
+
+#include <asm/arch/iop310.h>
+
+/*
+ * *** Special note - why the IOP310 should NOT be used ***
+ *
+ * The PCI ATU is a brain dead implementation, only allowing 32-bit
+ * accesses to PCI configuration space. This is especially brain
+ * dead for writes to this space. A simple for-instance:
+ *
+ * You want to modify the command register *without* corrupting the
+ * status register.
+ *
+ * To perform this, you need to read *32* bits of data from offset 4,
+ * mask off the low 16, replace them with the new data, and write *32*
+ * bits back.
+ *
+ * Writing the status register at offset 6 with status bits set *clears*
+ * the status.
+ *
+ * Hello? Could we have a *SANE* implementation of a PCI ATU some day
+ * *PLEASE*?
+ */
+#undef DEBUG
+#ifdef DEBUG
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...) do { } while (0)
+#endif
+
+/*
+ * Calculate the address, etc from the bus, devfn and register
+ * offset. Note that we have two root buses, so we need some
+ * method to determine whether we need config type 0 or 1 cycles.
+ * We use a root bus number in our bus->sysdata structure for this.
+ */
+static u32 iop310_cfg_address(struct pci_bus *bus, int devfn, int where)
+{
+ struct pci_sys_data *sys = bus->sysdata;
+ u32 addr;
+
+ if (sys->busnr == bus->number)
+ addr = 1 << (PCI_SLOT(devfn) + 16);
+ else
+ addr = bus->number << 16 | PCI_SLOT(devfn) << 11 | 1;
+
+ addr |= PCI_FUNC(devfn) << 8 | (where & ~3);
+
+ return addr;
+}
+
+/*
+ * Primary PCI interface support.
+ */
+static int iop310_pri_pci_status(void)
+{
+ unsigned int status;
+ int ret = 0;
+
+ status = *IOP310_PATUSR;
+ if (status & 0xf900) {
+ *IOP310_PATUSR = status & 0xf900;
+ ret = 1;
+ }
+ status = *IOP310_PATUISR;
+ if (status & 0x0000018f) {
+ *IOP310_PATUISR = status & 0x0000018f;
+ ret = 1;
+ }
+ status = *IOP310_PSR;
+ if (status & 0xf900) {
+ *IOP310_PSR = status & 0xf900;
+ ret = 1;
+ }
+ status = *IOP310_PBISR;
+ if (status & 0x003f) {
+ *IOP310_PBISR = status & 0x003f;
+ ret = 1;
+ }
+ return ret;
+}
+
+/*
+ * Simply write the address register and read the configuration
+ * data. Note that the four NOPs ensure that we are able to handle
+ * a delayed abort (in theory).
+ */
+static inline u32 iop310_pri_read(unsigned long addr)
+{
+ u32 val;
+
+ __asm__ __volatile__(
+ "str %1, [%2]\n\t"
+ "ldr %0, [%3]\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ : "=r" (val)
+ : "r" (addr), "r" (IOP310_POCCAR), "r" (IOP310_POCCDR));
+
+ return val;
+}
+
+static int
+iop310_pri_read_config(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 *value)
+{
+ unsigned long addr = iop310_cfg_address(bus, devfn, where);
+ u32 val = iop310_pri_read(addr) >> ((where & 3) * 8);
+
+ if (iop310_pri_pci_status())
+ val = 0xffffffff;
+
+ *value = val;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int
+iop310_pri_write_config(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 value)
+{
+ unsigned long addr = iop310_cfg_address(bus, devfn, where);
+ u32 val;
+
+ if (size != 4) {
+ val = iop310_pri_read(addr);
+ if (iop310_pri_pci_status())
+ return PCIBIOS_SUCCESSFUL;
+
+ where = (where & 3) * 8;
+
+ if (size == 1)
+ val &= ~(0xff << where);
+ else
+ val &= ~(0xffff << where);
+
+ *IOP310_POCCDR = val | value << where;
+ } else {
+ asm volatile(
+ "str %1, [%2]\n\t"
+ "str %0, [%3]\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ :
+ : "r" (value), "r" (addr),
+ "r" (IOP310_POCCAR), "r" (IOP310_POCCDR));
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static struct pci_ops iop310_primary_ops = {
+ .read = iop310_pri_read_config,
+ .write = iop310_pri_write_config,
+};
+
+/*
+ * Secondary PCI interface support.
+ */
+static int iop310_sec_pci_status(void)
+{
+ unsigned int usr, uisr;
+ int ret = 0;
+
+ usr = *IOP310_SATUSR;
+ uisr = *IOP310_SATUISR;
+ if (usr & 0xf900) {
+ *IOP310_SATUSR = usr & 0xf900;
+ ret = 1;
+ }
+ if (uisr & 0x0000069f) {
+ *IOP310_SATUISR = uisr & 0x0000069f;
+ ret = 1;
+ }
+ if (ret)
+ DBG("ERROR (%08x %08x)", usr, uisr);
+ return ret;
+}
+
+/*
+ * Simply write the address register and read the configuration
+ * data. Note that the four NOPs ensure that we are able to handle
+ * a delayed abort (in theory).
+ */
+static inline u32 iop310_sec_read(unsigned long addr)
+{
+ u32 val;
+
+ __asm__ __volatile__(
+ "str %1, [%2]\n\t"
+ "ldr %0, [%3]\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ : "=r" (val)
+ : "r" (addr), "r" (IOP310_SOCCAR), "r" (IOP310_SOCCDR));
+
+ return val;
+}
+
+static int
+iop310_sec_read_config(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 *value)
+{
+ unsigned long addr = iop310_cfg_address(bus, devfn, where);
+ u32 val = iop310_sec_read(addr) >> ((where & 3) * 8);
+
+ if (iop310_sec_pci_status())
+ val = 0xffffffff;
+
+ *value = val;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int
+iop310_sec_write_config(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 value)
+{
+ unsigned long addr = iop310_cfg_address(bus, devfn, where);
+ u32 val;
+
+ if (size != 4) {
+ val = iop310_sec_read(addr);
+
+ if (iop310_sec_pci_status())
+ return PCIBIOS_SUCCESSFUL;
+
+ where = (where & 3) * 8;
+
+ if (size == 1)
+ val &= ~(0xff << where);
+ else
+ val &= ~(0xffff << where);
+
+ *IOP310_SOCCDR = val | value << where;
+ } else {
+ asm volatile(
+ "str %1, [%2]\n\t"
+ "str %0, [%3]\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ "nop\n\t"
+ :
+ : "r" (value), "r" (addr),
+ "r" (IOP310_SOCCAR), "r" (IOP310_SOCCDR));
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static struct pci_ops iop310_secondary_ops = {
+ .read = iop310_sec_read_config,
+ .write = iop310_sec_write_config,
+};
+
+/*
+ * When a PCI device does not exist during config cycles, the 80200 gets
+ * an external abort instead of returning 0xffffffff. If it was an
+ * imprecise abort, we need to correct the return address to point after
+ * the instruction. Also note that the XScale manual says:
+ *
+ * "if a stall-until-complete LD or ST instruction triggers an
+ * imprecise fault, then that fault will be seen by the program
+ * within 3 instructions."
+ *
+ * This does not appear to be the case. With 8 NOPs after the load, we
+ * see the imprecise abort occurring on the STM of iop310_sec_pci_status()
+ * which is about 10 instructions away.
+ *
+ * Always trust reality!
+ */
+static int
+iop310_pci_abort(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+{
+ DBG("PCI abort: address = 0x%08lx fsr = 0x%03x PC = 0x%08lx LR = 0x%08lx\n",
+ addr, fsr, regs->ARM_pc, regs->ARM_lr);
+
+ /*
+ * If it was an imprecise abort, then we need to correct the
+ * return address to be _after_ the instruction.
+ */
+ if (fsr & (1 << 10))
+ regs->ARM_pc += 4;
+
+ return 0;
+}
+
+/*
+ * Scan an IOP310 PCI bus. sys->bus defines which bus we scan.
+ */
+struct pci_bus *iop310_scan_bus(int nr, struct pci_sys_data *sys)
+{
+ struct pci_ops *ops;
+
+ if (nr)
+ ops = &iop310_secondary_ops;
+ else
+ ops = &iop310_primary_ops;
+
+ return pci_scan_bus(sys->busnr, ops, sys);
+}
+
+/*
+ * Setup the system data for controller 'nr'. Return 0 if none found,
+ * 1 if found, or negative error.
+ *
+ * We can alter:
+ * io_offset - offset between IO resources and PCI bus BARs
+ * mem_offset - offset between mem resources and PCI bus BARs
+ * resource[0] - parent IO resource
+ * resource[1] - parent non-prefetchable memory resource
+ * resource[2] - parent prefetchable memory resource
+ * swizzle - bridge swizzling function
+ * map_irq - irq mapping function
+ *
+ * Note that 'mem_offset' is left as zero since the IOP310 doesn't
+ * attempt to perform any address translation on memory accesses
+ * from the host to the bus; 'io_offset' is set below to match the
+ * 0x6e000000 offset applied to the IO resources.
+ */
+int iop310_setup(int nr, struct pci_sys_data *sys)
+{
+ struct resource *res;
+
+ if (nr >= 2)
+ return 0;
+
+ res = kmalloc(sizeof(struct resource) * 2, GFP_KERNEL);
+ if (!res)
+ panic("PCI: unable to alloc resources");
+
+ memset(res, 0, sizeof(struct resource) * 2);
+
+ switch (nr) {
+ case 0:
+ res[0].start = IOP310_PCIPRI_LOWER_IO + 0x6e000000;
+ res[0].end = IOP310_PCIPRI_LOWER_IO + 0x6e00ffff;
+ res[0].name = "PCI IO Primary";
+ res[0].flags = IORESOURCE_IO;
+
+ res[1].start = IOP310_PCIPRI_LOWER_MEM;
+ res[1].end = IOP310_PCIPRI_LOWER_MEM + IOP310_PCI_WINDOW_SIZE;
+ res[1].name = "PCI Memory Primary";
+ res[1].flags = IORESOURCE_MEM;
+ break;
+
+ case 1:
+ res[0].start = IOP310_PCISEC_LOWER_IO + 0x6e000000;
+ res[0].end = IOP310_PCISEC_LOWER_IO + 0x6e00ffff;
+ res[0].name = "PCI IO Secondary";
+ res[0].flags = IORESOURCE_IO;
+
+ res[1].start = IOP310_PCISEC_LOWER_MEM;
+ res[1].end = IOP310_PCISEC_LOWER_MEM + IOP310_PCI_WINDOW_SIZE;
+ res[1].name = "PCI Memory Secondary";
+ res[1].flags = IORESOURCE_MEM;
+ break;
+ }
+
+ request_resource(&ioport_resource, &res[0]);
+ request_resource(&iomem_resource, &res[1]);
+
+ sys->resource[0] = &res[0];
+ sys->resource[1] = &res[1];
+ sys->resource[2] = NULL;
+ sys->io_offset = 0x6e000000;
+
+ return 1;
+}
+
+void iop310_init(void)
+{
+ DBG("PCI: Intel 80312 PCI-to-PCI init code.\n");
+ DBG(" ATU secondary: ATUCR =0x%08x\n", *IOP310_ATUCR);
+ DBG(" ATU secondary: SOMWVR=0x%08x SOIOWVR=0x%08x\n",
+ *IOP310_SOMWVR, *IOP310_SOIOWVR);
+ DBG(" ATU secondary: SIABAR=0x%08x SIALR =0x%08x SIATVR=%08x\n",
+ *IOP310_SIABAR, *IOP310_SIALR, *IOP310_SIATVR);
+ DBG(" ATU primary: POMWVR=0x%08x POIOWVR=0x%08x\n",
+ *IOP310_POMWVR, *IOP310_POIOWVR);
+ DBG(" ATU primary: PIABAR=0x%08x PIALR =0x%08x PIATVR=%08x\n",
+ *IOP310_PIABAR, *IOP310_PIALR, *IOP310_PIATVR);
+ DBG(" P2P: PCR=0x%04x BCR=0x%04x EBCR=0x%04x\n",
+ *IOP310_PCR, *IOP310_BCR, *IOP310_EBCR);
+
+ /*
+ * Windows have to be carefully opened via a nice set of calls
+ * here or just some direct register fiddling in the board
+ * specific init when we want transactions to occur between the
+ * two PCI hoses.
+ *
+ * To do this, we will have to manage RETRY assertion between the
+ * firmware and the kernel. This will ensure that the host
+ * system's enumeration code is held off until we have tweaked
+ * the interrupt routing and public/private IDSELs.
+ *
+ * For now we will simply default to disabling the integrated type
+ * 81 P2P bridge.
+ */
+ *IOP310_PCR &= 0xfff8;
+
+ hook_fault_code(16+6, iop310_pci_abort, SIGBUS, "imprecise external abort");
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/iq80310-irq.c
+ *
+ * IRQ handling/demuxing for the IQ80310 board
+ *
+ * Author: Nicolas Pitre
+ * Copyright: (C) 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * 2.4.7-rmk1-iop310.1
+ * Moved demux from asm to C - DS
+ * Fixes for various revision boards - DS
+ */
+#include <linux/init.h>
+#include <linux/list.h>
+
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+#include <asm/hardware.h>
+#include <asm/system.h>
+
+extern void iop310_init_irq(void);
+extern void iop310_irq_demux(unsigned int, struct irqdesc *, struct pt_regs *);
+
+static void iq80310_irq_mask(unsigned int irq)
+{
+ *(volatile char *)IQ80310_INT_MASK |= (1 << (irq - IQ80310_IRQ_OFS));
+}
+
+static void iq80310_irq_unmask(unsigned int irq)
+{
+ *(volatile char *)IQ80310_INT_MASK &= ~(1 << (irq - IQ80310_IRQ_OFS));
+}
+
+static struct irqchip iq80310_irq_chip = {
+ .ack = iq80310_irq_mask,
+ .mask = iq80310_irq_mask,
+ .unmask = iq80310_irq_unmask,
+};
+
+extern struct irqchip ext_chip;
+
+static void
+iq80310_cpld_irq_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ unsigned int irq_stat = *(volatile u8*)IQ80310_INT_STAT;
+ unsigned int irq_mask = *(volatile u8*)IQ80310_INT_MASK;
+ unsigned int i, handled = 0;
+ struct irqdesc *d;
+
+ desc->chip->ack(irq);
+
+ /*
+ * Mask out the interrupts which aren't enabled.
+ */
+ irq_stat &= 0x1f & ~irq_mask;
+
+ /*
+ * Test each IQ80310 CPLD interrupt
+ */
+ for (i = IRQ_IQ80310_TIMER, d = irq_desc + IRQ_IQ80310_TIMER;
+ irq_stat; i++, d++, irq_stat >>= 1)
+ if (irq_stat & 1) {
+ d->handle(i, d, regs);
+ handled++;
+ }
+
+ /*
+ * If running on a board later than REV D.1, we can
+ * decode the PCI interrupt status.
+ */
+ if (system_rev) {
+ irq_stat = *((volatile u8*)IQ80310_PCI_INT_STAT) & 7;
+
+ for (i = IRQ_IQ80310_INTA, d = irq_desc + IRQ_IQ80310_INTA;
+ irq_stat; i++, d++, irq_stat >>= 1)
+ if (irq_stat & 0x1) {
+ d->handle(i, d, regs);
+ handled++;
+ }
+ }
+
+ /*
+ * If on a REV D.1 or lower board, we just assumed INTA
+ * since PCI is not routed, and it may actually be an
+ * on-chip interrupt.
+ *
+ * Note that we're giving on-chip interrupts slightly
+ * higher priority than PCI by handling them first.
+ *
+ * On boards later than REV D.1, if we didn't read a
+ * CPLD interrupt, we assume it's from a device on the
+ * chipset itself.
+ */
+ if (system_rev == 0 || handled == 0)
+ iop310_irq_demux(irq, desc, regs);
+
+ desc->chip->unmask(irq);
+}
+
+void __init iq80310_init_irq(void)
+{
+ volatile char *mask = (volatile char *)IQ80310_INT_MASK;
+ unsigned int i;
+
+ iop310_init_irq();
+
+ /*
+ * Setup PIRSR to route PCI interrupts into xs80200
+ */
+ *IOP310_PIRSR = 0xff;
+
+ /*
+ * Setup the IRQs in the FE820000/FE860000 registers
+ */
+ for (i = IQ80310_IRQ_OFS; i <= IRQ_IQ80310_INTD; i++) {
+ set_irq_chip(i, &iq80310_irq_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID | IRQF_PROBE);
+ }
+
+ /*
+ * Setup the PCI IRQs
+ */
+ for (i = IRQ_IQ80310_INTA; i < IRQ_IQ80310_INTC; i++) {
+ set_irq_chip(i, &ext_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID);
+ }
+
+ *mask = 0xff; /* mask all sources */
+
+ set_irq_chained_handler(IRQ_XS80200_EXTIRQ,
+ &iq80310_cpld_irq_handler);
+}
--- /dev/null
+/*
+ * arch/arm/mach-iop3xx/iq80310-pci.c
+ *
+ * PCI support for the Intel IQ80310 reference board
+ *
+ * Matt Porter <mporter@mvista.com>
+ *
+ * Copyright (C) 2001 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach/pci.h>
+#include <asm/mach-types.h>
+
+/*
+ * The following macro is used to lookup irqs in a standard table
+ * format for those systems that do not already have PCI
+ * interrupts properly routed. We assume 1 <= pin <= 4
+ */
+#define PCI_IRQ_TABLE_LOOKUP(minid,maxid) \
+({ int _ctl_ = -1; \
+ unsigned int _idsel = idsel - minid; \
+ if (_idsel <= maxid) \
+ _ctl_ = pci_irq_table[_idsel][pin-1]; \
+ _ctl_; })
+
+#define INTA IRQ_IQ80310_INTA
+#define INTB IRQ_IQ80310_INTB
+#define INTC IRQ_IQ80310_INTC
+#define INTD IRQ_IQ80310_INTD
+
+#define INTE IRQ_IQ80310_I82559
+
+typedef u8 irq_table[4];
+
+/*
+ * IRQ tables for primary bus.
+ *
+ * On a Rev D.1 and older board, INT A-C are not routed, so we
+ * just fake it as INTA and then we take care of handling it
+ * correctly in the IRQ demux routine.
+ */
+static irq_table pci_pri_d_irq_table[] = {
+/* Pin: A B C D */
+ { INTA, INTD, INTA, INTA }, /* PCI Slot J3 */
+ { INTD, INTA, INTA, INTA }, /* PCI Slot J4 */
+};
+
+static irq_table pci_pri_f_irq_table[] = {
+/* Pin: A B C D */
+ { INTC, INTD, INTA, INTB }, /* PCI Slot J3 */
+ { INTD, INTA, INTB, INTC }, /* PCI Slot J4 */
+};
+
+static int __init
+iq80310_pri_map_irq(struct pci_dev *dev, u8 idsel, u8 pin)
+{
+ irq_table *pci_irq_table;
+
+ BUG_ON(pin < 1 || pin > 4);
+
+ if (!system_rev)
+ pci_irq_table = pci_pri_d_irq_table;
+ else
+ pci_irq_table = pci_pri_f_irq_table;
+
+ return PCI_IRQ_TABLE_LOOKUP(2, 3);
+}
+
+/*
+ * IRQ tables for secondary bus.
+ *
+ * On a Rev D.1 and older board, INT A-C are not routed, so we
+ * just fake it as INTA and then we take care of handling it
+ * correctly in the IRQ demux routine.
+ */
+static irq_table pci_sec_d_irq_table[] = {
+/* Pin: A B C D */
+ { INTA, INTA, INTA, INTD }, /* PCI Slot J1 */
+ { INTA, INTA, INTD, INTA }, /* PCI Slot J5 */
+ { INTE, INTE, INTE, INTE }, /* P2P Bridge */
+};
+
+static irq_table pci_sec_f_irq_table[] = {
+/* Pin: A B C D */
+ { INTA, INTB, INTC, INTD }, /* PCI Slot J1 */
+ { INTB, INTC, INTD, INTA }, /* PCI Slot J5 */
+ { INTE, INTE, INTE, INTE }, /* P2P Bridge */
+};
+
+static int __init
+iq80310_sec_map_irq(struct pci_dev *dev, u8 idsel, u8 pin)
+{
+ irq_table *pci_irq_table;
+
+ BUG_ON(pin < 1 || pin > 4);
+
+ if (!system_rev)
+ pci_irq_table = pci_sec_d_irq_table;
+ else
+ pci_irq_table = pci_sec_f_irq_table;
+
+ return PCI_IRQ_TABLE_LOOKUP(0, 2);
+}
+
+static int iq80310_pri_host;
+
+static int iq80310_setup(int nr, struct pci_sys_data *sys)
+{
+ switch (nr) {
+ case 0:
+ if (!iq80310_pri_host)
+ return 0;
+
+ sys->map_irq = iq80310_pri_map_irq;
+ break;
+
+ case 1:
+ sys->map_irq = iq80310_sec_map_irq;
+ break;
+
+ default:
+ return 0;
+ }
+
+ return iop310_setup(nr, sys);
+}
+
+static void iq80310_preinit(void)
+{
+ iq80310_pri_host = *(volatile u32 *)IQ80310_BACKPLANE & 1;
+
+ printk(KERN_INFO "PCI: IQ80310 is a%s\n",
+ iq80310_pri_host ? " system controller" : "n agent");
+
+ iop310_init();
+}
+
+static struct hw_pci iq80310_pci __initdata = {
+ .swizzle = pci_std_swizzle,
+ .nr_controllers = 2,
+ .setup = iq80310_setup,
+ .scan = iop310_scan_bus,
+ .preinit = iq80310_preinit,
+};
+
+static int __init iq80310_pci_init(void)
+{
+ if (machine_is_iq80310())
+ pci_common_init(&iq80310_pci);
+ return 0;
+}
+
+subsys_initcall(iq80310_pci_init);
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/time-iq80310.c
+ *
+ * Timer functions for IQ80310 onboard timer
+ *
+ * Author: Nicolas Pitre
+ * Copyright: (C) 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/init.h>
+#include <linux/timex.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/mach/irq.h>
+
+static void iq80310_write_timer(u_long val)
+{
+ volatile u_char *la0 = (volatile u_char *)IQ80310_TIMER_LA0;
+ volatile u_char *la1 = (volatile u_char *)IQ80310_TIMER_LA1;
+ volatile u_char *la2 = (volatile u_char *)IQ80310_TIMER_LA2;
+
+ *la0 = val;
+ *la1 = val >> 8;
+ *la2 = (val >> 16) & 0x3f;
+}
+
+static u_long iq80310_read_timer(void)
+{
+ volatile u_char *la0 = (volatile u_char *)IQ80310_TIMER_LA0;
+ volatile u_char *la1 = (volatile u_char *)IQ80310_TIMER_LA1;
+ volatile u_char *la2 = (volatile u_char *)IQ80310_TIMER_LA2;
+ volatile u_char *la3 = (volatile u_char *)IQ80310_TIMER_LA3;
+ u_long b0, b1, b2, b3, val;
+
+ b0 = *la0; b1 = *la1; b2 = *la2; b3 = *la3;
+ b0 = (((b0 & 0x40) >> 1) | (b0 & 0x1f));
+ b1 = (((b1 & 0x40) >> 1) | (b1 & 0x1f));
+ b2 = (((b2 & 0x40) >> 1) | (b2 & 0x1f));
+ b3 = (b3 & 0x0f);
+ val = ((b0 << 0) | (b1 << 6) | (b2 << 12) | (b3 << 18));
+ return val;
+}
+
+/*
+ * IRQs are disabled before entering here from do_gettimeofday().
+ * Note that the counter may wrap. When it does, 'elapsed' will
+ * be small, but we will have a pending interrupt.
+ */
+static unsigned long iq80310_gettimeoffset(void)
+{
+ unsigned long elapsed, usec;
+ unsigned int stat1, stat2;
+
+ stat1 = *(volatile u8 *)IQ80310_INT_STAT;
+ elapsed = iq80310_read_timer();
+ stat2 = *(volatile u8 *)IQ80310_INT_STAT;
+
+ /*
+ * If an interrupt was pending before we read the timer,
+ * we've already wrapped. Factor this into the time.
+ * If an interrupt was pending after we read the timer,
+ * it may have wrapped between checking the interrupt
+ * status and reading the timer. Re-read the timer to
+ * be sure its value is after the wrap.
+ */
+ if (stat1 & 1)
+ elapsed += LATCH;
+ else if (stat2 & 1)
+ elapsed = LATCH + iq80310_read_timer();
+
+ /*
+ * Now convert them to usec.
+ */
+ usec = (unsigned long)(elapsed * (tick_nsec / 1000))/LATCH;
+
+ return usec;
+}
+
+static irqreturn_t
+iq80310_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ volatile u_char *timer_en = (volatile u_char *)IQ80310_TIMER_EN;
+
+ /* clear timer interrupt */
+ *timer_en &= ~2;
+ *timer_en |= 2;
+
+ do_timer(regs);
+
+ return IRQ_HANDLED;
+}
+
+extern unsigned long (*gettimeoffset)(void);
+
+static struct irqaction timer_irq = {
+ .name = "timer",
+ .handler = iq80310_timer_interrupt,
+};
+
+
+void __init time_init(void)
+{
+ volatile u_char *timer_en = (volatile u_char *)IQ80310_TIMER_EN;
+
+ gettimeoffset = iq80310_gettimeoffset;
+
+ setup_irq(IRQ_IQ80310_TIMER, &timer_irq);
+
+ *timer_en = 0;
+ iq80310_write_timer(LATCH);
+ *timer_en |= 2;
+ *timer_en |= 1;
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/mm.c
+ *
+ * Low level memory initialization for IOP310 based systems
+ *
+ * Author: Nicolas Pitre <npitre@mvista.com>
+ *
+ * Copyright 2000-2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+
+#include <asm/mach/map.h>
+
+#ifdef CONFIG_IOP310_MU
+#include "message.h"
+#endif
+
+/*
+ * Standard IO mapping for all IOP310 based systems
+ */
+static struct map_desc iop80310_std_desc[] __initdata = {
+ /* virtual physical length type */
+	/* IOP310 memory-mapped registers */
+	{ 0xe8001000, 0x00001000, 0x00001000, MT_DEVICE },
+	/* PCI I/O space */
+	{ 0xfe000000, 0x90000000, 0x00020000, MT_DEVICE }
+};
+
+void __init iop310_map_io(void)
+{
+ iotable_init(iop80310_std_desc, ARRAY_SIZE(iop80310_std_desc));
+}
+
+/*
+ * IQ80310 specific IO mappings
+ */
+#ifdef CONFIG_ARCH_IQ80310
+static struct map_desc iq80310_io_desc[] __initdata = {
+ /* virtual physical length type */
+	/* IQ80310 on-board devices */
+	{ 0xfe800000, 0xfe800000, 0x00100000, MT_DEVICE }
+};
+
+void __init iq80310_map_io(void)
+{
+#ifdef CONFIG_IOP310_MU
+	/* Acquire 1MB of memory, aligned on a 1MB boundary, for the MU */
+ mu_mem = __alloc_bootmem(0x100000, 0x100000, 0);
+#endif
+
+ iop310_map_io();
+
+ iotable_init(iq80310_io_desc, ARRAY_SIZE(iq80310_io_desc));
+}
+#endif /* CONFIG_ARCH_IQ80310 */
+
--- /dev/null
+/*
+ * linux/arch/arm/mach-iop3xx/xs80200-irq.c
+ *
+ * Generic IRQ handling for the XS80200 XScale core.
+ *
+ * Author: Nicolas Pitre
+ * Copyright: (C) 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/init.h>
+#include <linux/list.h>
+
+#include <asm/mach/irq.h>
+#include <asm/irq.h>
+#include <asm/hardware.h>
+
+static void xs80200_irq_mask (unsigned int irq)
+{
+ unsigned long intctl;
+ asm ("mrc p13, 0, %0, c0, c0, 0" : "=r" (intctl));
+ switch (irq) {
+ case IRQ_XS80200_BCU: intctl &= ~(1<<3); break;
+ case IRQ_XS80200_PMU: intctl &= ~(1<<2); break;
+ case IRQ_XS80200_EXTIRQ: intctl &= ~(1<<1); break;
+ case IRQ_XS80200_EXTFIQ: intctl &= ~(1<<0); break;
+ }
+ asm ("mcr p13, 0, %0, c0, c0, 0" : : "r" (intctl));
+}
+
+static void xs80200_irq_unmask (unsigned int irq)
+{
+ unsigned long intctl;
+ asm ("mrc p13, 0, %0, c0, c0, 0" : "=r" (intctl));
+ switch (irq) {
+ case IRQ_XS80200_BCU: intctl |= (1<<3); break;
+ case IRQ_XS80200_PMU: intctl |= (1<<2); break;
+ case IRQ_XS80200_EXTIRQ: intctl |= (1<<1); break;
+ case IRQ_XS80200_EXTFIQ: intctl |= (1<<0); break;
+ }
+ asm ("mcr p13, 0, %0, c0, c0, 0" : : "r" (intctl));
+}
+
+static struct irqchip xs80200_chip = {
+ .ack = xs80200_irq_mask,
+ .mask = xs80200_irq_mask,
+ .unmask = xs80200_irq_unmask,
+};
+
+void __init xs80200_init_irq(void)
+{
+ unsigned int i;
+
+ asm("mcr p13, 0, %0, c0, c0, 0" : : "r" (0));
+
+ for (i = 0; i < NR_XS80200_IRQS; i++) {
+ set_irq_chip(i, &xs80200_chip);
+ set_irq_handler(i, do_level_IRQ);
+ set_irq_flags(i, IRQF_VALID);
+ }
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-omap/bus.c
+ *
+ * Virtual bus for OMAP. Allows better power management, such as managing
+ * shared clocks, and mapping of bus addresses to Local Bus addresses.
+ *
+ * See drivers/usb/host/ohci-omap.c or drivers/video/omap/omapfb.c for
+ * examples on how to register drivers to this bus.
+ *
+ * Copyright (C) 2003 - 2004 Nokia Corporation
+ * Written by Tony Lindgren <tony@atomide.com>
+ * Portions of code based on sa1111.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+
+#include <asm/arch/bus.h>
+
+static int omap_bus_match(struct device *_dev, struct device_driver *_drv);
+static int omap_bus_suspend(struct device *dev, u32 state);
+static int omap_bus_resume(struct device *dev);
+
+/*
+ * OMAP bus definitions
+ *
+ * NOTE: Most devices should use TIPB. LBUS does automatic address mapping
+ * to Local Bus addresses, and should only be used for Local Bus devices.
+ * We may add new buses later on for power management reasons. Basically
+ * we want to be able to turn off any bus if it's not used by device
+ * drivers.
+ */
+static struct device omap_bus_devices[OMAP_NR_BUSES] = {
+ {
+ .bus_id = OMAP_BUS_NAME_TIPB
+ }, {
+ .bus_id = OMAP_BUS_NAME_LBUS
+ },
+};
+
+static struct bus_type omap_bus_types[OMAP_NR_BUSES] = {
+ {
+ .name = OMAP_BUS_NAME_TIPB,
+ .match = omap_bus_match,
+ .suspend = omap_bus_suspend,
+ .resume = omap_bus_resume,
+ }, {
+ .name = OMAP_BUS_NAME_LBUS, /* Local bus on 1510 */
+ .match = omap_bus_match,
+ .suspend = omap_bus_suspend,
+ .resume = omap_bus_resume,
+ },
+};
+
+static int omap_bus_match(struct device *dev, struct device_driver *drv)
+{
+ struct omap_dev *omapdev = OMAP_DEV(dev);
+ struct omap_driver *omapdrv = OMAP_DRV(drv);
+
+ return omapdev->devid == omapdrv->devid;
+}
+
+static int omap_bus_suspend(struct device *dev, u32 state)
+{
+ struct omap_dev *omapdev = OMAP_DEV(dev);
+ struct omap_driver *omapdrv = OMAP_DRV(dev->driver);
+ int ret = 0;
+
+ if (omapdrv && omapdrv->suspend)
+ ret = omapdrv->suspend(omapdev, state);
+ return ret;
+}
+
+static int omap_bus_resume(struct device *dev)
+{
+ struct omap_dev *omapdev = OMAP_DEV(dev);
+ struct omap_driver *omapdrv = OMAP_DRV(dev->driver);
+ int ret = 0;
+
+ if (omapdrv && omapdrv->resume)
+ ret = omapdrv->resume(omapdev);
+ return ret;
+}
+
+static int omap_device_probe(struct device *dev)
+{
+ struct omap_dev *omapdev = OMAP_DEV(dev);
+ struct omap_driver *omapdrv = OMAP_DRV(dev->driver);
+ int ret = -ENODEV;
+
+ if (omapdrv && omapdrv->probe)
+ ret = omapdrv->probe(omapdev);
+
+ return ret;
+}
+
+static int omap_device_remove(struct device *dev)
+{
+ struct omap_dev *omapdev = OMAP_DEV(dev);
+ struct omap_driver *omapdrv = OMAP_DRV(dev->driver);
+ int ret = 0;
+
+ if (omapdrv && omapdrv->remove)
+ ret = omapdrv->remove(omapdev);
+ return ret;
+}
+
+int omap_device_register(struct omap_dev *odev)
+{
+ if (!odev)
+ return -EINVAL;
+
+ if (odev->busid < 0 || odev->busid >= OMAP_NR_BUSES) {
+ printk(KERN_ERR "%s: busid invalid: %s: bus: %i\n",
+ __FUNCTION__, odev->name, odev->busid);
+ return -EINVAL;
+ }
+
+ odev->dev.parent = &omap_bus_devices[odev->busid];
+ odev->dev.bus = &omap_bus_types[odev->busid];
+
+ /* This is needed for USB OHCI to work */
+ if (odev->dma_mask)
+ odev->dev.dma_mask = odev->dma_mask;
+
+ if (odev->coherent_dma_mask)
+ odev->dev.coherent_dma_mask = odev->coherent_dma_mask;
+
+ snprintf(odev->dev.bus_id, BUS_ID_SIZE, "%s%u",
+ odev->name, odev->devid);
+
+	printk(KERN_INFO "Registering OMAP device '%s'. Parent at %s\n",
+	       odev->dev.bus_id, odev->dev.parent->bus_id);
+
+ return device_register(&odev->dev);
+}
+
+void omap_device_unregister(struct omap_dev *odev)
+{
+ if (odev)
+ device_unregister(&odev->dev);
+}
+
+int omap_driver_register(struct omap_driver *driver)
+{
+ int ret;
+
+ if (driver->busid < 0 || driver->busid >= OMAP_NR_BUSES) {
+ printk(KERN_ERR "%s: busid invalid: bus: %i device: %i\n",
+ __FUNCTION__, driver->busid, driver->devid);
+ return -EINVAL;
+ }
+
+ driver->drv.probe = omap_device_probe;
+ driver->drv.remove = omap_device_remove;
+ driver->drv.bus = &omap_bus_types[driver->busid];
+
+ /*
+ * driver_register calls bus_add_driver
+ */
+ ret = driver_register(&driver->drv);
+
+ return ret;
+}
+
+void omap_driver_unregister(struct omap_driver *driver)
+{
+ driver_unregister(&driver->drv);
+}
+
+static int __init omap_bus_init(void)
+{
+	int i, ret = 0;
+
+ /* Initialize all OMAP virtual buses */
+ for (i = 0; i < OMAP_NR_BUSES; i++) {
+ ret = device_register(&omap_bus_devices[i]);
+ if (ret != 0) {
+ printk(KERN_ERR "Unable to register bus device %s\n",
+ omap_bus_devices[i].bus_id);
+ continue;
+ }
+ ret = bus_register(&omap_bus_types[i]);
+ if (ret != 0) {
+ printk(KERN_ERR "Unable to register bus %s\n",
+ omap_bus_types[i].name);
+ device_unregister(&omap_bus_devices[i]);
+ }
+ }
+	printk(KERN_INFO "OMAP virtual buses initialized\n");
+
+ return ret;
+}
+
+static void __exit omap_bus_exit(void)
+{
+ int i;
+
+ /* Unregister all OMAP virtual buses */
+ for (i = 0; i < OMAP_NR_BUSES; i++) {
+ bus_unregister(&omap_bus_types[i]);
+ device_unregister(&omap_bus_devices[i]);
+ }
+}
+
+postcore_initcall(omap_bus_init);
+module_exit(omap_bus_exit);
+
+MODULE_DESCRIPTION("Virtual bus for OMAP");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(omap_bus_types);
+EXPORT_SYMBOL(omap_driver_register);
+EXPORT_SYMBOL(omap_driver_unregister);
+EXPORT_SYMBOL(omap_device_register);
+EXPORT_SYMBOL(omap_device_unregister);
+
--- /dev/null
+/*
+ * linux/arch/arm/mach-omap/leds-perseus2.c
+ *
+ * Copyright 2003 by Texas Instruments Incorporated
+ *
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel_stat.h>
+#include <linux/sched.h>
+#include <linux/version.h>
+
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include <asm/leds.h>
+#include <asm/system.h>
+
+#include "leds.h"
+
+void perseus2_leds_event(led_event_t evt)
+{
+ unsigned long flags;
+ static unsigned long hw_led_state = 0;
+
+ local_irq_save(flags);
+
+ switch (evt) {
+ case led_start:
+ hw_led_state |= OMAP730_FPGA_LED_STARTSTOP;
+ break;
+
+ case led_stop:
+ hw_led_state &= ~OMAP730_FPGA_LED_STARTSTOP;
+ break;
+
+ case led_claim:
+ hw_led_state |= OMAP730_FPGA_LED_CLAIMRELEASE;
+ break;
+
+ case led_release:
+ hw_led_state &= ~OMAP730_FPGA_LED_CLAIMRELEASE;
+ break;
+
+#ifdef CONFIG_LEDS_TIMER
+ case led_timer:
+ /*
+ * Toggle Timer LED
+ */
+ if (hw_led_state & OMAP730_FPGA_LED_TIMER)
+ hw_led_state &= ~OMAP730_FPGA_LED_TIMER;
+ else
+ hw_led_state |= OMAP730_FPGA_LED_TIMER;
+ break;
+#endif
+
+#ifdef CONFIG_LEDS_CPU
+ case led_idle_start:
+ hw_led_state |= OMAP730_FPGA_LED_IDLE;
+ break;
+
+ case led_idle_end:
+ hw_led_state &= ~OMAP730_FPGA_LED_IDLE;
+ break;
+#endif
+
+ case led_halted:
+ if (hw_led_state & OMAP730_FPGA_LED_HALTED)
+ hw_led_state &= ~OMAP730_FPGA_LED_HALTED;
+ else
+ hw_led_state |= OMAP730_FPGA_LED_HALTED;
+ break;
+
+ case led_green_on:
+ break;
+
+ case led_green_off:
+ break;
+
+ case led_amber_on:
+ break;
+
+ case led_amber_off:
+ break;
+
+ case led_red_on:
+ break;
+
+ case led_red_off:
+ break;
+
+ default:
+ break;
+ }
+
+
+ /*
+ * Actually burn the LEDs
+ */
+ __raw_writew(~hw_led_state & 0xffff, OMAP730_FPGA_LEDS);
+
+ local_irq_restore(flags);
+}
install: $(BOOTIMAGE)
sh $(srctree)/$(src)/install.sh $(KERNELRELEASE) $< System.map "$(INSTALL_PATH)"
+ if [ -f init/kerntypes.o ]; then cp init/kerntypes.o $(INSTALL_PATH)/Kerntypes; fi
#include "syscall_table.S"
syscall_table_size=(.-sys_call_table)
+
--- /dev/null
+/*
+ * linux/arch/i386/kernel/entry_trampoline.c
+ *
+ * (C) Copyright 2003 Ingo Molnar
+ *
+ * This file contains the support code needed for the 4G/4G userspace split
+ */
+
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/highmem.h>
+#include <asm/desc.h>
+#include <asm/atomic_kmap.h>
+
+extern char __entry_tramp_start, __entry_tramp_end, __start___entry_text;
+
+void __init init_entry_mappings(void)
+{
+#ifdef CONFIG_X86_HIGH_ENTRY
+
+ void *tramp;
+ int p;
+
+ /*
+ * We need a high IDT and GDT for the 4G/4G split:
+ */
+ trap_init_virtual_IDT();
+
+ __set_fixmap(FIX_ENTRY_TRAMPOLINE_0, __pa((unsigned long)&__entry_tramp_start), PAGE_KERNEL_EXEC);
+ __set_fixmap(FIX_ENTRY_TRAMPOLINE_1, __pa((unsigned long)&__entry_tramp_start) + PAGE_SIZE, PAGE_KERNEL_EXEC);
+ tramp = (void *)fix_to_virt(FIX_ENTRY_TRAMPOLINE_0);
+
+ printk("mapped 4G/4G trampoline to %p.\n", tramp);
+ BUG_ON((void *)&__start___entry_text != tramp);
+ /*
+ * Virtual kernel stack:
+ */
+ BUG_ON(__kmap_atomic_vaddr(KM_VSTACK_TOP) & (THREAD_SIZE-1));
+ BUG_ON(sizeof(struct desc_struct)*NR_CPUS*GDT_ENTRIES > 2*PAGE_SIZE);
+ BUG_ON((unsigned int)&__entry_tramp_end - (unsigned int)&__entry_tramp_start > 2*PAGE_SIZE);
+
+ /*
+ * set up the initial thread's virtual stack related
+ * fields:
+ */
+ for (p = 0; p < ARRAY_SIZE(current->thread.stack_page); p++)
+ current->thread.stack_page[p] = virt_to_page((char *)current->thread_info + (p*PAGE_SIZE));
+
+ current->thread_info->virtual_stack = (void *)__kmap_atomic_vaddr(KM_VSTACK_TOP);
+
+ for (p = 0; p < ARRAY_SIZE(current->thread.stack_page); p++) {
+ __kunmap_atomic_type(KM_VSTACK_TOP-p);
+ __kmap_atomic(current->thread.stack_page[p], KM_VSTACK_TOP-p);
+ }
+#endif
+ current->thread_info->real_stack = (void *)current->thread_info;
+ current->thread_info->user_pgd = NULL;
+ current->thread.esp0 = (unsigned long)current->thread_info->real_stack + THREAD_SIZE;
+}
+
+
+
+void __init entry_trampoline_setup(void)
+{
+ /*
+	 * Old IRQ entries set up by the boot code will still hang
+	 * around; they are a sign of hw trouble anyway, and they'll
+	 * now produce a double-fault message.
+ */
+ trap_init_virtual_GDT();
+}
#include <linux/tty.h>
#include <linux/highmem.h>
#include <linux/time.h>
+#include <linux/nmi.h>
#include <asm/semaphore.h>
#include <asm/processor.h>
#endif
EXPORT_SYMBOL(csum_partial);
+
+#ifdef CONFIG_CRASH_DUMP
+#ifdef CONFIG_SMP
+extern irq_desc_t irq_desc[NR_IRQS];
+extern unsigned long irq_affinity[NR_IRQS];
+extern void stop_this_cpu(void *);
+EXPORT_SYMBOL(irq_desc);
+EXPORT_SYMBOL(irq_affinity);
+EXPORT_SYMBOL(stop_this_cpu);
+EXPORT_SYMBOL(dump_send_ipi);
+#endif
+extern int page_is_ram(unsigned long);
+EXPORT_SYMBOL(page_is_ram);
+#ifdef ARCH_HAS_NMI_WATCHDOG
+EXPORT_SYMBOL(touch_nmi_watchdog);
+#endif
+#endif
#include <linux/module.h>
#include <linux/nmi.h>
#include <linux/sysdev.h>
+#include <linux/dump.h>
#include <linux/sysctl.h>
#include <asm/smp.h>
*/
#define LOWMEMSIZE() (0x9f000)
+unsigned long crashdump_addr = 0xdeadbeef;
+
static void __init parse_cmdline_early (char ** cmdline_p)
{
char c = ' ', *to = command_line, *from = saved_command_line;
if (!memcmp(from, "dump", 4))
dump_enabled = 1;
+ if (c == ' ' && !memcmp(from, "crashdump=", 10))
+ crashdump_addr = memparse(from+10, &from);
+
/*
* vmalloc=size forces the vmalloc area to be exactly 'size'
* bytes. This can be used to increase (or decrease) the
static char * __init machine_specific_memory_setup(void);
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+extern void crashdump_reserve(void);
+#endif
+
#ifdef CONFIG_MCA
static void set_mca_bus(int x)
{
#endif
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+ crashdump_reserve(); /* Preserve crash dump state from prev boot */
+#endif
+
dmi_scan_machine();
#ifdef CONFIG_X86_GENERICARCH
#include <linux/mc146818rtc.h>
#include <linux/cache.h>
#include <linux/interrupt.h>
+#include <linux/dump.h>
#include <asm/mtrr.h>
#include <asm/tlbflush.h>
*/
apic_wait_icr_idle();
+ if (vector == CRASH_DUMP_VECTOR)
+ cfg = (cfg&~APIC_VECTOR_MASK)|APIC_DM_NMI;
+
/*
* No need to touch the target chip field
*/
cfg = __prepare_ICR(shortcut, vector);
- if (vector == CRASH_DUMP_VECTOR)
+ if (vector == CRASH_DUMP_VECTOR) {
+ /*
+ * Setup DUMP IPI to be delivered as an NMI
+ */
cfg = (cfg&~APIC_VECTOR_MASK)|APIC_DM_NMI;
+ }
/*
* Send the IPI. The write to APIC_ICR fires this off.
* program the ICR
*/
cfg = __prepare_ICR(0, vector);
-
+
+ if (vector == CRASH_DUMP_VECTOR) {
+ /*
+ * Setup DUMP IPI to be delivered as an NMI
+ */
+ cfg = (cfg&~APIC_VECTOR_MASK)|APIC_DM_NMI;
+ }
/*
* Send the IPI. The write to APIC_ICR fires this off.
*/
on_each_cpu(do_flush_tlb_all, NULL, 1, 1);
}
+void dump_send_ipi(void)
+{
+ send_IPI_allbutself(CRASH_DUMP_VECTOR);
+}
+
/*
* this function sends a 'reschedule' IPI to another CPU.
* it goes straight through and wastes no time serializing
return 0;
}
-static void stop_this_cpu (void * dummy)
+void stop_this_cpu (void * dummy)
{
/*
* Remove this CPU:
local_irq_enable();
}
+EXPORT_SYMBOL(smp_send_stop);
+
/*
* Reschedule call back. Nothing to do,
* all the work is done automatically when
atomic_inc(&call_data->finished);
}
}
-
--- /dev/null
+/* Originally gcc generated, modified by hand */
+
+#include <linux/linkage.h>
+#include <asm/segment.h>
+#include <asm/page.h>
+
+ .text
+
+ENTRY(pmdisk_arch_suspend)
+ cmpl $0,4(%esp)
+ jne .L1450
+
+ movl %esp, saved_context_esp
+ movl %ebx, saved_context_ebx
+ movl %ebp, saved_context_ebp
+ movl %esi, saved_context_esi
+ movl %edi, saved_context_edi
+ pushfl ; popl saved_context_eflags
+
+ call pmdisk_suspend
+ jmp .L1449
+ .p2align 4,,7
+.L1450:
+ movl $swsusp_pg_dir-__PAGE_OFFSET,%ecx
+ movl %ecx,%cr3
+
+ movl pm_pagedir_nosave,%ebx
+ xorl %eax, %eax
+ xorl %edx, %edx
+ .p2align 4,,7
+.L1455:
+ movl 4(%ebx,%edx),%edi
+ movl (%ebx,%edx),%esi
+
+ movl $1024, %ecx
+ rep
+ movsl
+
+ movl %cr3, %ecx;
+ movl %ecx, %cr3; # flush TLB
+
+ incl %eax
+ addl $16, %edx
+ cmpl pmdisk_pages,%eax
+ jb .L1455
+ .p2align 4,,7
+.L1453:
+ movl saved_context_esp, %esp
+ movl saved_context_ebp, %ebp
+ movl saved_context_ebx, %ebx
+ movl saved_context_esi, %esi
+ movl saved_context_edi, %edi
+ pushl saved_context_eflags ; popfl
+ call pmdisk_resume
+.L1449:
+ ret
up_write(¤t->mm->mmap_sem);
}
+	/*
+	 * When the user stack is not executable, pushing the sigreturn
+	 * code onto the stack causes a segmentation fault when the
+	 * signal handler returns. The sigreturn code is therefore kept
+	 * in a dedicated gate page, which pretcode points at when the
+	 * frame is set up in setup_frame_ia32().
+	 */
+ vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
+ if (vma) {
+ memset(vma, 0, sizeof(*vma));
+ vma->vm_mm = current->mm;
+ vma->vm_start = IA32_GATE_OFFSET;
+ vma->vm_end = vma->vm_start + PAGE_SIZE;
+ vma->vm_page_prot = PAGE_COPY_EXEC;
+ vma->vm_flags = VM_READ | VM_MAYREAD | VM_EXEC
+ | VM_MAYEXEC | VM_RESERVED;
+ vma->vm_ops = &ia32_gate_page_vm_ops;
+ down_write(¤t->mm->mmap_sem);
+ {
+ insert_vm_struct(current->mm, vma);
+ }
+ up_write(¤t->mm->mmap_sem);
+ }
+
/*
* Install LDT as anonymous memory. This gives us all-zero segment descriptors
* until a task modifies them via modify_ldt().
#include <linux/pagemap.h>
#include <linux/mount.h>
#include <linux/version.h>
+#include <linux/vs_memory.h>
+#include <linux/vs_cvirt.h>
#include <linux/bitops.h>
#include <linux/vs_memory.h>
#include <linux/vs_cvirt.h>
--- /dev/null
+# arch/ia64/sn/fakeprom/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (c) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+#
+# Medusa fake PROM support
+#
+
+EXTRA_TARGETS := fpromasm.o main.o fw-emu.o fpmem.o klgraph_init.o \
+ fprom vmlinux.sym
+
+OBJS := $(obj)/fpromasm.o $(obj)/main.o $(obj)/fw-emu.o $(obj)/fpmem.o \
+ $(obj)/klgraph_init.o
+
+LDFLAGS_fprom = -static -T
+
+.PHONY: fprom
+
+fprom: $(obj)/fprom
+
+$(obj)/fprom: $(src)/fprom.lds $(OBJS) arch/ia64/lib/lib.a FORCE
+ $(call if_changed,ld)
+
+$(obj)/vmlinux.sym: $(src)/make_textsym System.map
+ $(src)/make_textsym vmlinux > vmlinux.sym
+ $(call cmd,cptotop)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+This directory contains the files required to build
+the fake PROM image that is currently being used to
+boot IA64 kernels running under the SGI Medusa kernel.
+
+The FPROM currently provides the following functions:
+
+ - PAL emulation for all PAL calls we've made so far.
+ - SAL emulation for all SAL calls we've made so far.
+ - EFI emulation for all EFI calls we've made so far.
+	- builds the "ia64_bootparam" structure that is
+	  passed to the kernel from SAL. This structure
+	  describes the cpu & memory configurations.
+	- supports Medusa boot-time options for changing
+	  the number of cpus present.
+	- supports Medusa boot-time options for changing
+	  the memory configuration.
+
+
+
+At some point, this fake PROM will be replaced by the
+real PROM.
+
+
+
+
+To build a fake PROM, cd to this directory & type:
+
+ make
+
+This will (or should) build a fake PROM named "fprom".
+
+
+
+
+Use this fprom image when booting the Medusa simulator. The
+control file used to boot Medusa should include the
+following lines:
+
+ load fprom
+ load vmlinux
+ sr pc 0x100000
+ sr g 9 <address of kernel _start function> #(currently 0xe000000000520000)
+
+NOTE: There is a script "runsim" in this directory that can be used to
+simplify setting up an environment for running under Medusa.
+
+
+
+
+The following parameters may be passed to the fake PROM to
+control the PAL/SAL/EFI parameters passed to the kernel:
+
+ GR[8] = # of cpus
+ GR[9] = address of primary entry point into the kernel
+ GR[20] = memory configuration for node 0
+ GR[21] = memory configuration for node 1
+ GR[22] = memory configuration for node 2
+ GR[23] = memory configuration for node 3
+
+
+Registers GR[20] - GR[23] contain information to specify the
+amount of memory present on nodes 0-3.
+
+ - if nothing is specified (all registers are 0), the configuration
+ defaults to 8 MB on node 0.
+
+ - a mem config entry for node N is passed in GR[20+N]
+
+ - a mem config entry consists of 8 hex digits. Each digit gives the
+ amount of physical memory available on the node starting at
+ 1GB*<dn>, where dn is the digit number. The amount of memory
+ is 8MB*2**<d>. (If <d> = 0, the memory size is 0).
+
+      SN1 doesn't support DIMMs this small, but small memory systems
+      boot faster on Medusa.
+
+
+
+An example helps a lot. The following specifies that node 0 has
+physical memory 0 to 8MB and 1GB to 1GB+32MB, and that node 1 has
+64MB starting at the node's address 0, which is physical address 8GB.
+
+ gr[20] = 0x21 # 0 to 8MB, 1GB to 1GB+32MB
+ gr[21] = 0x4 # 8GB to 8GB+64MB
+
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+
+/*
+ * FPROM EFI memory descriptor build routines
+ *
+ * - Routines to build the EFI memory descriptor map
+ * - Should also be usable by the SGI prom to convert
+ * klconfig to efi_memmap
+ */
+
+#include <linux/config.h>
+#include <linux/efi.h>
+#include "fpmem.h"
+
+/*
+ * args points to a layout in memory like this
+ *
+ * 32 bit 32 bit
+ *
+ * numnodes numcpus
+ *
+ * 16 bit 16 bit 32 bit
+ * nasid0 cpuconf membankdesc0
+ * nasid1 cpuconf membankdesc1
+ * .
+ * .
+ * .
+ * .
+ * .
+ */
+
+sn_memmap_t *sn_memmap;
+sn_config_t *sn_config;
+
+/*
+ * There is a hole in the node 0 address space. Don't put it
+ * in the memory map.
+ */
+#define NODE0_HOLE_SIZE (20*MB)
+#define NODE0_HOLE_END (4UL*GB)
+
+#define MB (1024*1024)
+#define GB (1024*MB)
+#define KERNEL_SIZE (4*MB)
+#define PROMRESERVED_SIZE (1*MB)
+
+#ifdef SGI_SN2
+#define PHYS_ADDRESS(_n, _x) (((long)_n<<38) | (long)_x | 0x3000000000UL)
+#define MD_BANK_SHFT 34
+#endif
+
+/*
+ * For SN, this may not take an arg and gets the numnodes from
+ * the prom variable or by traversing klcfg or promcfg
+ */
+int
+GetNumNodes(void)
+{
+ return sn_config->nodes;
+}
+
+int
+GetNumCpus(void)
+{
+ return sn_config->cpus;
+}
+
+/* For SN, get the index th nasid */
+
+int
+GetNasid(int index)
+{
+ return sn_memmap[index].nasid ;
+}
+
+node_memmap_t
+GetMemBankInfo(int index)
+{
+ return sn_memmap[index].node_memmap ;
+}
+
+int
+IsCpuPresent(int cnode, int cpu)
+{
+ return sn_memmap[cnode].cpuconfig & (1UL<<cpu);
+}
+
+
+/*
+ * Made this into an explicit case statement so that
+ * we can assign specific properties to banks like bank0
+ * actually disabled etc.
+ */
+
+#ifdef SGI_SN2
+int
+IsBankPresent(int index, node_memmap_t nmemmap)
+{
+ switch (index) {
+ case 0:return BankPresent(nmemmap.b0size);
+ case 1:return BankPresent(nmemmap.b1size);
+ case 2:return BankPresent(nmemmap.b2size);
+ case 3:return BankPresent(nmemmap.b3size);
+ default:return -1 ;
+ }
+}
+
+int
+GetBankSize(int index, node_memmap_t nmemmap)
+{
+ /*
+	 * Add 2 because there are 4 DIMMs per bank.
+ */
+ switch (index) {
+ case 0:return 2 + ((long)nmemmap.b0size + nmemmap.b0dou);
+ case 1:return 2 + ((long)nmemmap.b1size + nmemmap.b1dou);
+ case 2:return 2 + ((long)nmemmap.b2size + nmemmap.b2dou);
+ case 3:return 2 + ((long)nmemmap.b3size + nmemmap.b3dou);
+ default:return -1 ;
+ }
+}
+
+#endif
+
+void
+build_mem_desc(efi_memory_desc_t *md, int type, long paddr, long numbytes, long attr)
+{
+ md->type = type;
+ md->phys_addr = paddr;
+ md->virt_addr = 0;
+ md->num_pages = numbytes >> 12;
+ md->attribute = attr;
+}
+
+int
+build_efi_memmap(void *md, int mdsize)
+{
+ int numnodes = GetNumNodes() ;
+ int cnode,bank ;
+ int nasid ;
+ node_memmap_t membank_info ;
+ int bsize;
+ int count = 0 ;
+ long paddr, hole, numbytes;
+
+
+ for (cnode=0;cnode<numnodes;cnode++) {
+ nasid = GetNasid(cnode) ;
+ membank_info = GetMemBankInfo(cnode) ;
+ for (bank=0;bank<MD_BANKS_PER_NODE;bank++) {
+ if (IsBankPresent(bank, membank_info)) {
+ bsize = GetBankSize(bank, membank_info) ;
+ paddr = PHYS_ADDRESS(nasid, (long)bank<<MD_BANK_SHFT);
+ numbytes = BankSizeBytes(bsize);
+#ifdef SGI_SN2
+ /*
+ * Ignore directory.
+ * Shorten memory chunk by 1 page - makes a better
+ * testcase & is more like the real PROM.
+ */
+ numbytes = numbytes * 31 / 32;
+#endif
+ /*
+ * Only emulate the memory prom grabs
+ * if we have lots of memory, to allow
+ * us to simulate smaller memory configs than
+ * we can actually run on h/w. Otherwise,
+ * linux throws away a whole "granule".
+ */
+ if (cnode == 0 && bank == 0 &&
+ numbytes > 128*1024*1024) {
+ numbytes -= 1000;
+ }
+
+ /*
+			 * Check for the node 0 hole. Since banks can't
+ * span the hole, we only need to check if the end of
+ * the range is the end of the hole.
+ */
+ if (paddr+numbytes == NODE0_HOLE_END)
+ numbytes -= NODE0_HOLE_SIZE;
+			/*
+			 * UGLY hack - we must skip over the kernel and
+			 * PROM runtime services, but we don't know exactly
+			 * where they are. So let's just reserve:
+			 *	node 0
+			 *		0-1MB for PAL
+			 *		1-4MB for SAL
+			 *	node 1-N
+			 *		0-1MB for SAL
+			 */
+ if (bank == 0) {
+ if (cnode == 0) {
+ hole = 2*1024*1024;
+					build_mem_desc(md, EFI_PAL_CODE, paddr, hole, EFI_MEMORY_WB);
+ numbytes -= hole;
+ paddr += hole;
+ count++ ;
+ md += mdsize;
+ hole = 1*1024*1024;
+ build_mem_desc(md, EFI_CONVENTIONAL_MEMORY, paddr, hole, EFI_MEMORY_UC);
+ numbytes -= hole;
+ paddr += hole;
+ count++ ;
+ md += mdsize;
+ hole = 1*1024*1024;
+					build_mem_desc(md, EFI_RUNTIME_SERVICES_DATA, paddr, hole, EFI_MEMORY_WB);
+ numbytes -= hole;
+ paddr += hole;
+ count++ ;
+ md += mdsize;
+ } else {
+ hole = 2*1024*1024;
+					build_mem_desc(md, EFI_RUNTIME_SERVICES_DATA, paddr, hole, EFI_MEMORY_WB);
+ numbytes -= hole;
+ paddr += hole;
+ count++ ;
+ md += mdsize;
+ hole = 2*1024*1024;
+ build_mem_desc(md, EFI_RUNTIME_SERVICES_DATA, paddr, hole, EFI_MEMORY_UC);
+ numbytes -= hole;
+ paddr += hole;
+ count++ ;
+ md += mdsize;
+ }
+ }
+				build_mem_desc(md, EFI_CONVENTIONAL_MEMORY, paddr, numbytes, EFI_MEMORY_WB);
+
+ md += mdsize ;
+ count++ ;
+ }
+ }
+ }
+ return count ;
+}
+
+void
+build_init(unsigned long args)
+{
+ sn_config = (sn_config_t *) (args);
+	sn_memmap = (sn_memmap_t *)(args + 8);	/* SN equivalent: init to klconfig start */
+}
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/config.h>
+
+/*
+ * Structure of the mem config of the node as a SN MI reg
+ * Medusa supports this reg config.
+ *
+ * BankSize nibble to bank size mapping
+ *
+ * 1 - 64 MB
+ * 2 - 128 MB
+ * 3 - 256 MB
+ * 4 - 512 MB
+ * 5 - 1024 MB (1GB)
+ */
+
+#define MBSHIFT 20
+
+#ifdef SGI_SN2
+typedef struct node_memmap_s
+{
+ unsigned int b0size :3, /* 0-2 bank 0 size */
+ b0dou :1, /* 3 bank 0 is 2-sided */
+ ena0 :1, /* 4 bank 0 enabled */
+ r0 :3, /* 5-7 reserved */
+ b1size :3, /* 8-10 bank 1 size */
+ b1dou :1, /* 11 bank 1 is 2-sided */
+ ena1 :1, /* 12 bank 1 enabled */
+ r1 :3, /* 13-15 reserved */
+ b2size :3, /* 16-18 bank 2 size */
+			b2dou	:1,	/* 19 bank 2 is 2-sided */
+ ena2 :1, /* 20 bank 2 enabled */
+ r2 :3, /* 21-23 reserved */
+ b3size :3, /* 24-26 bank 3 size */
+ b3dou :1, /* 27 bank 3 is 2-sided */
+ ena3 :1, /* 28 bank 3 enabled */
+ r3 :3; /* 29-31 reserved */
+} node_memmap_t ;
+
+#define SN2_BANK_SIZE_SHIFT (MBSHIFT+6) /* 64 MB */
+#define BankPresent(bsize) (bsize<6)
+#define BankSizeBytes(bsize) (BankPresent(bsize) ? 1UL<<((bsize)+SN2_BANK_SIZE_SHIFT) : 0)
+#define MD_BANKS_PER_NODE 4
+#define MD_BANKSIZE (1UL << 34)
+#endif
+
+typedef struct sn_memmap_s
+{
+ short nasid ;
+ short cpuconfig;
+ node_memmap_t node_memmap ;
+} sn_memmap_t ;
+
+typedef struct sn_config_s
+{
+ int cpus;
+ int nodes;
+ sn_memmap_t memmap[1]; /* start of array */
+} sn_config_t;
+
+
+
+extern void build_init(unsigned long);
+extern int build_efi_memmap(void *, int);
+extern int GetNumNodes(void);
+extern int GetNumCpus(void);
+extern int IsCpuPresent(int, int);
+extern int GetNasid(int);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+OUTPUT_FORMAT("elf64-ia64-little")
+OUTPUT_ARCH(ia64)
+ENTRY(_start)
+SECTIONS
+{
+ v = 0x0000000000000000 ; /* this symbol is here to make debugging with kdb easier... */
+
+ . = (0x000000000000000 + 0x100000) ;
+
+ _text = .;
+ .text : AT(ADDR(.text) - 0x0000000000000000 )
+ {
+ *(__ivt_section)
+ /* these are not really text pages, but the zero page needs to be in a fixed location: */
+ *(__special_page_section)
+ __start_gate_section = .;
+ *(__gate_section)
+ __stop_gate_section = .;
+ *(.text)
+ }
+
+ /* Global data */
+ _data = .;
+
+ .rodata : AT(ADDR(.rodata) - 0x0000000000000000 )
+ { *(.rodata) *(.rodata.*) }
+ .opd : AT(ADDR(.opd) - 0x0000000000000000 )
+ { *(.opd) }
+ .data : AT(ADDR(.data) - 0x0000000000000000 )
+ { *(.data) *(.gnu.linkonce.d*) CONSTRUCTORS }
+
+ __gp = ALIGN (8) + 0x200000;
+
+ .got : AT(ADDR(.got) - 0x0000000000000000 )
+ { *(.got.plt) *(.got) }
+ /* We want the small data sections together, so single-instruction offsets
+ can access them all, and initialized data all before uninitialized, so
+ we can shorten the on-disk segment size. */
+ .sdata : AT(ADDR(.sdata) - 0x0000000000000000 )
+ { *(.sdata) }
+ _edata = .;
+ _bss = .;
+ .sbss : AT(ADDR(.sbss) - 0x0000000000000000 )
+ { *(.sbss) *(.scommon) }
+ .bss : AT(ADDR(.bss) - 0x0000000000000000 )
+ { *(.bss) *(COMMON) }
+ . = ALIGN(64 / 8);
+ _end = .;
+
+ /* Sections to be discarded */
+ /DISCARD/ : {
+ *(.text.exit)
+ *(.data.exit)
+ }
+
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging sections are relative to the beginning
+ of the section so we begin them at 0. */
+ /* DWARF 1 */
+ .debug 0 : { *(.debug) }
+ .line 0 : { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : { *(.debug_srcinfo) }
+ .debug_sfnames 0 : { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : { *(.debug_aranges) }
+ .debug_pubnames 0 : { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : { *(.debug_info) }
+ .debug_abbrev 0 : { *(.debug_abbrev) }
+ .debug_line 0 : { *(.debug_line) }
+ .debug_frame 0 : { *(.debug_frame) }
+ .debug_str 0 : { *(.debug_str) }
+ .debug_loc 0 : { *(.debug_loc) }
+ .debug_macinfo 0 : { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : { *(.debug_weaknames) }
+ .debug_funcnames 0 : { *(.debug_funcnames) }
+ .debug_typenames 0 : { *(.debug_typenames) }
+ .debug_varnames 0 : { *(.debug_varnames) }
+ /* These must appear regardless of . */
+ /* Discard them for now since Intel SoftSDV cannot handle them.
+ .comment 0 : { *(.comment) }
+ .note 0 : { *(.note) }
+ */
+ /DISCARD/ : { *(.comment) }
+ /DISCARD/ : { *(.note) }
+}
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * (Code copied from other files)
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+
+#define __ASSEMBLY__ 1
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/sn2/shub_mmr.h>
+
+/*
+ * This file contains additional set up code that is needed to get going on
+ * Medusa. This code should disappear once real hw is available.
+ *
+ * On entry to this routine, the following register values are assumed:
+ *
+ * gr[8] - BSP cpu
+ * gr[9] - kernel entry address
+ * gr[10] - cpu number on the node
+ *
+ * NOTE:
+ * This FPROM may be loaded/executed at an address different from the
+ * address that it was linked at. The FPROM is linked to run on node 0
+ * at address 0x100000. If the code is loaded into another node, it
+ * must be loaded at offset 0x100000 of the node. In addition, the
+ * FPROM does the following things:
+ * - determine the base address of the node it is loaded on
+ * - add the node base to _gp.
+ * - add the node base to all addresses derived from "movl"
+ * instructions. (I couldn't get GPREL addressing to work)
+ * (maybe newer versions of the tools will support this)
+ * - scan the .got section and add the node base to all
+ * pointers in this section.
+ * - add the node base to all physical addresses in the
+ * SAL/PAL/EFI table built by the C code. (This is done
+ * in the C code - not here)
+ * - add the node base to the TLB entries for vmlinux
+ */
+
+#define KERNEL_BASE 0xe000000000000000
+#define BOOT_PARAM_ADDR 0x40000
+
+
+/*
+ * ar.k0 gets set to IOPB_PA value, on 460gx chipset it should
+ * be 0x00000ffffc000000, but on snia we use the (inverse swizzled)
+ * IOSPEC_BASE value
+ */
+#ifdef SGI_SN2
+#define IOPB_PA 0xc000000fcc000000
+#endif
+
+#define RR_RID 8
+
+
+
+// ====================================================================================
+ .text
+ .align 16
+ .global _start
+ .proc _start
+_start:
+
+// Setup psr and rse for system init
+ mov psr.l = r0;;
+ srlz.d;;
+ invala
+ mov ar.rsc = r0;;
+ loadrs
+ ;;
+
+// Isolate node number we are running on.
+ mov r6 = ip;;
+#ifdef SGI_SN2
+ shr r5 = r6,38 // r5 = node number
+ dep r6 = 0,r6,0,36 // r6 = base memory address of node
+
+#endif
+
+
+// Set & relocate gp.
+ movl r1= __gp;; // Add base memory address
+ or r1 = r1,r6 // Relocate to boot node
+
+// Let's figure out who we are & put it in the LID register.
+#ifdef SGI_SN2
+// On SN2, we (currently) pass the cpu number in r10 at boot
+ and r25=3,r10;;
+ movl r16=0x8000008110000400 // Allow IPIs
+ mov r17=-1;;
+ st8 [r16]=r17
+ movl r16=0x8000008110060580;; // SHUB_ID
+ ld8 r27=[r16];;
+ extr.u r27=r27,32,11;;
+ shl r26=r25,28;; // Align local cpu# to lid.eid
+ shl r27=r27,16;; // Align NASID to lid.id
+ or r26=r26,r27;; // build the LID
+#else
+// The BR_PI_SELF_CPU_NUM register gives us a value of 0-3.
+// This identifies the cpu on the node.
+// Merge the cpu number with the NASID to generate the LID.
+ movl r24=0x80000a0001000020;; // BR_PI_SELF_CPU_NUM
+ ld8 r25=[r24] // Fetch PI_SELF
+ movl r27=0x80000a0001600000;; // Fetch REVID to get local NASID
+ ld8 r27=[r27];;
+ extr.u r27=r27,32,8;;
+ shl r26=r25,16;; // Align local cpu# to lid.eid
+ shl r27=r27,24;; // Align NASID to lid.id
+ or r26=r26,r27;; // build the LID
+#endif
+ mov cr.lid=r26 // Now put it in the LID register
+
+ movl r2=FPSR_DEFAULT;;
+ mov ar.fpsr=r2
+ movl sp = bootstacke-16;;
+ or sp = sp,r6 // Relocate to boot node
+
+// Save the NASID that we are loaded on.
+ movl r2=base_nasid;; // Save base_nasid for C code
+ or r2 = r2,r6;; // Relocate to boot node
+ st8 [r2]=r5 // Uncond st8 - same on all cpus
+
+// Save the kernel entry address. It is passed in r9 on one of
+// the cpus.
+ movl r2=bsp_entry_pc
+ cmp.ne p6,p0=r9,r0;;
+ or r2 = r2,r6;; // Relocate to boot node
+(p6) st8 [r2]=r9 // Predicated st8 - only on the cpu that was passed the entry address
+
+
+// The following can ONLY be done by 1 cpu. Let's set a lock - the
+// cpu that gets it does the initialization. The rest just spin waiting
+// until initialization is complete.
+ movl r22 = initlock;;
+ or r22 = r22,r6 // Relocate to boot node
+ mov r23 = 1;;
+ xchg8 r23 = [r22],r23;;
+ cmp.eq p6,p0 = 0,r23
+(p6) br.cond.spnt.few init
+1: ld4 r23 = [r22];;
+ cmp.eq p6,p0 = 1,r23
+(p6) br.cond.sptk 1b
+ br initx
+
+// Add base address of node memory to each pointer in the .got section.
+init: movl r16 = _GLOBAL_OFFSET_TABLE_;;
+ or r16 = r16,r6;; // Relocate to boot node
+1: ld8 r17 = [r16];;
+ cmp.eq p6,p7=0,r17
+(p6) br.cond.sptk.few.clr 2f;;
+ or r17 = r17,r6;; // Relocate to boot node
+ st8 [r16] = r17,8
+ br 1b
+2:
+ mov r23 = 2;; // All done, release the spinning cpus
+ st4 [r22] = r23
+initx:
+
+//
+// I/O-port space base address:
+//
+ movl r2 = IOPB_PA;;
+ mov ar.k0 = r2
+
+
+// Now call main & pass it the current LID value.
+ alloc r2=ar.pfs,0,0,2,0
+ mov r32=r26
+ mov r33=r8;;
+ br.call.sptk.few rp=fmain
+
+// Initialize Region Registers
+//
+ mov r10 = r0
+ mov r2 = (13<<2)
+ mov r3 = r0;;
+1: cmp4.gtu p6,p7 = 7, r3
+ dep r10 = r3, r10, 61, 3
+ dep r2 = r3, r2, RR_RID, 4;;
+(p7) dep r2 = 0, r2, 0, 1;;
+(p6) dep r2 = -1, r2, 0, 1;;
+ mov rr[r10] = r2
+ add r3 = 1, r3;;
+ srlz.d;;
+ cmp4.gtu p6,p0 = 8, r3
+(p6) br.cond.sptk.few.clr 1b
+
+//
+// Return value indicates if we are the BSP or AP.
+// 1 = BSP, 0 = AP
+ mov cr.tpr=r0;;
+ cmp.eq p6,p0=r8,r0
+(p6) br.cond.spnt slave
+
+//
+// Go to kernel C startup routines
+// Need to do a "rfi" in order to set the "it" and "ed" bits in the PSR.
+// This is the only way to set them.
+
+ movl r28=BOOT_PARAM_ADDR
+ movl r2=bsp_entry_pc;;
+ or r28 = r28,r6;; // Relocate to boot node
+ or r2 = r2,r6;; // Relocate to boot node
+ ld8 r2=[r2];;
+ or r2=r2,r6;;
+ dep r2=0,r2,61,3;; // convert to phys mode
+
+//
+// Turn on address translation, interrupt collection, psr.ed, protection key.
+// Interrupts (PSR.i) are still off here.
+//
+
+ movl r3 = ( IA64_PSR_BN | \
+ IA64_PSR_AC | \
+ IA64_PSR_DB | \
+ IA64_PSR_DA | \
+ IA64_PSR_IC \
+ )
+ ;;
+ mov cr.ipsr = r3
+
+//
+// Go to kernel C startup routines
+// Need to do a "rfi" in order to set the "it" and "ed" bits in the PSR.
+// This is the only way to set them.
+
+ mov r8=r28;;
+ bsw.1 ;;
+ mov r28=r8;;
+ bsw.0 ;;
+ mov cr.iip = r2
+ srlz.d;;
+ rfi;;
+
+ .endp _start
+
+
+
+// Slave processors come here to spin until they get an interrupt. Then they launch themselves to
+// the place ap_entry points. No initialization is necessary - the kernel makes no
+// assumptions about state on this entry.
+// Note: we should verify that the interrupt we got was really the ap_wakeup
+// interrupt, but this should not be an issue on Medusa.
+slave:
+ nop.i 0x8beef // Medusa - put cpu to sleep until an interrupt occurs
+ mov r8=cr.irr0;; // Check for interrupt pending.
+ cmp.eq p6,p0=r8,r0
+(p6) br.cond.sptk slave;;
+
+ mov r8=cr.ivr;; // Got one. Must read ivr to accept it
+ srlz.d;;
+ mov cr.eoi=r0;; // must write eoi to clear
+ movl r8=ap_entry;; // now jump to kernel entry
+ or r8 = r8,r6;; // Relocate to boot node
+ ld8 r9=[r8],8;;
+ ld8 r1=[r8]
+ mov b0=r9;;
+ br b0
+
+// Here is the kernel stack used for the fake PROM
+ .bss
+ .align 16384
+bootstack:
+ .skip 16384
+bootstacke:
+initlock:
+ data4
+
+
+
+//////////////////////////////////////////////////////////////////////////////////////////////////////////
+// This code emulates the PAL. Only essential interfaces are emulated.
+
+
+ .text
+ .global pal_emulator
+ .proc pal_emulator
+pal_emulator:
+ mov r8=-1
+
+ mov r9=256
+ ;;
+ cmp.gtu p6,p7=r9,r28 /* r28 <= 255? */
+(p6) br.cond.sptk.few static
+ ;;
+ mov r9=512
+ ;;
+ cmp.gtu p6,p7=r9,r28
+(p6) br.cond.sptk.few stacked
+ ;;
+
+static: cmp.eq p6,p7=6,r28 /* PAL_PTCE_INFO */
+(p7) br.cond.sptk.few 1f
+ movl r8=0 /* status = 0 */
+ movl r9=0x100000000 /* tc.base */
+ movl r10=0x0000000200000003 /* count[0], count[1] */
+ movl r11=0x1000000000002000 /* stride[0], stride[1] */
+ ;;
+
+1: cmp.eq p6,p7=14,r28 /* PAL_FREQ_RATIOS */
+(p7) br.cond.sptk.few 1f
+ movl r8=0 /* status = 0 */
+ movl r9 =0x100000064 /* proc_ratio (1/100) */
+ movl r10=0x100000100 /* bus_ratio<<32 (1/256) */
+ movl r11=0x10000000a /* itc_ratio<<32 (1/100) */
+ ;;
+
+1: cmp.eq p6,p7=8,r28 /* PAL_VM_SUMMARY */
+(p7) br.cond.sptk.few 1f
+ movl r8=0
+#ifdef SGI_SN2
+ movl r9=0x0203083001151065
+ movl r10=0x183f
+#endif
+ movl r11=0
+ ;;
+
+1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */
+(p7) br.cond.sptk.few 1f
+ movl r8=0
+ movl r9=0x60
+ movl r10=0x0
+ movl r11=0
+ ;;
+
+1: cmp.eq p6,p7=15,r28 /* PAL_PERF_MON_INFO */
+(p7) br.cond.sptk.few 1f
+ movl r8=0
+ movl r9=0x08122004
+ movl r10=0x0
+ movl r11=0
+ mov r2=ar.lc
+ mov r3=16;;
+ mov ar.lc=r3
+ mov r3=r29;;
+5: st8 [r3]=r0,8
+ br.cloop.sptk.few 5b;;
+ mov ar.lc=r2
+ mov r3=r29
+ movl r2=0x1fff;; /* PMC regs */
+ st8 [r3]=r2
+ add r3=32,r3
+ movl r2=0x3ffff;; /* PMD regs */
+ st8 [r3]=r2
+ add r3=32,r3
+ movl r2=0xf0;; /* cycle regs */
+ st8 [r3]=r2
+ add r3=32,r3
+ movl r2=0x10;; /* retired regs */
+ st8 [r3]=r2
+ ;;
+
+1: cmp.eq p6,p7=19,r28 /* PAL_RSE_INFO */
+(p7) br.cond.sptk.few 1f
+ movl r8=0 /* status = 0 */
+ movl r9=96 /* num phys stacked */
+ movl r10=0 /* hints */
+ movl r11=0
+ ;;
+
+1: cmp.eq p6,p7=1,r28 /* PAL_CACHE_FLUSH */
+(p7) br.cond.sptk.few 1f
+ mov r9=ar.lc
+ movl r8=524288 /* flush 512k cache lines (16MB) */
+ ;;
+ mov ar.lc=r8
+ movl r8=0xe000000000000000
+ ;;
+.loop: fc r8
+ add r8=32,r8
+ br.cloop.sptk.few .loop
+ sync.i
+ ;;
+ srlz.i
+ ;;
+ mov ar.lc=r9
+ mov r8=r0
+1: br.cond.sptk.few rp
+
+stacked:
+ br.ret.sptk.few rp
+
+ .endp pal_emulator
+
--- /dev/null
+/*
+ * PAL & SAL emulation.
+ *
+ * Copyright (C) 1998-2000 Hewlett-Packard Co
+ * Copyright (C) 1998-2000 David Mosberger-Tang <davidm@hpl.hp.com>
+ *
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/NoticeExplan
+ */
+#include <linux/config.h>
+#include <linux/efi.h>
+#include <linux/kernel.h>
+#include <asm/pal.h>
+#include <asm/sal.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/processor.h>
+#include <asm/sn/sn_cpuid.h>
+#ifdef SGI_SN2
+#include <asm/sn/sn2/addrs.h>
+#include <asm/sn/sn2/shub_mmr.h>
+#endif
+#include <linux/acpi.h>
+#include "fpmem.h"
+
+#define RSDP_NAME "RSDP"
+#define RSDP_SIG "RSD PTR " /* RSDT Pointer signature */
+#define APIC_SIG "APIC" /* Multiple APIC Description Table */
+#define DSDT_SIG "DSDT" /* Differentiated System Description Table */
+#define FADT_SIG "FACP" /* Fixed ACPI Description Table */
+#define FACS_SIG "FACS" /* Firmware ACPI Control Structure */
+#define PSDT_SIG "PSDT" /* Persistent System Description Table */
+#define RSDT_SIG "RSDT" /* Root System Description Table */
+#define XSDT_SIG "XSDT" /* Extended System Description Table */
+#define SSDT_SIG "SSDT" /* Secondary System Description Table */
+#define SBST_SIG "SBST" /* Smart Battery Specification Table */
+#define SPIC_SIG "SPIC" /* IOSAPIC table */
+#define SRAT_SIG "SRAT" /* SRAT table */
+#define SLIT_SIG "SLIT" /* SLIT table */
+#define BOOT_SIG "BOOT" /* Boot table */
+#define ACPI_SRAT_REVISION 1
+#define ACPI_SLIT_REVISION 1
+
+#define OEMID "SGI"
+#ifdef SGI_SN2
+#define PRODUCT "SN2"
+#define PROXIMITY_DOMAIN(nasid) (((nasid)>>1) & 255)
+#endif
+
+#define MB (1024*1024UL)
+#define GB (MB*1024UL)
+#define BOOT_PARAM_ADDR 0x40000
+#define MAX(i,j) ((i) > (j) ? (i) : (j))
+#define MIN(i,j) ((i) < (j) ? (i) : (j))
+#define ALIGN8(p) (((long)(p) +7) & ~7)
+
+#define FPROM_BUG() do {while (1);} while (0)
+#define MAX_SN_NODES 128
+#define MAX_LSAPICS 512
+#define MAX_CPUS 512
+#define MAX_CPUS_NODE 4
+#define CPUS_PER_NODE 4
+#define CPUS_PER_FSB 2
+#define CPUS_PER_FSB_MASK (CPUS_PER_FSB-1)
+
+#define NUM_EFI_DESCS 2
+
+#define RSDP_CHECKSUM_LENGTH 20
+
+typedef union ia64_nasid_va {
+ struct {
+#if defined(SGI_SN2)
+ unsigned long off : 36; /* intra-region offset */
+ unsigned long attr : 2;
+ unsigned long nasid : 11; /* NASID */
+ unsigned long off2 : 12; /* fill */
+ unsigned long reg : 3; /* region number */
+#endif
+ } f;
+ unsigned long l;
+ void *p;
+} ia64_nasid_va;
+
+typedef struct {
+ unsigned long pc;
+ unsigned long gp;
+} func_ptr_t;
+
+#define IS_VIRTUAL_MODE() ({struct ia64_psr psr; asm("mov %0=psr" : "=r"(psr)); psr.dt;})
+#define ADDR_OF(p) (IS_VIRTUAL_MODE() ? ((void*)((long)(p)+PAGE_OFFSET)) : ((void*) (p)))
+
+#if defined(SGI_SN2)
+#define __fwtab_pa(n,x) ({ia64_nasid_va _v; _v.l = (long) (x); _v.f.nasid = (x) ? (n) : 0; _v.f.reg = 0; _v.f.attr = 3; _v.l;})
+#endif
+
+/*
+ * The following variables are passed through registers from the configuration file and
+ * are set via the _start function.
+ */
+long base_nasid;
+long num_cpus;
+long bsp_entry_pc=0;
+long num_nodes;
+long app_entry_pc;
+int bsp_lid;
+func_ptr_t ap_entry;
+
+
+extern void pal_emulator(void);
+static efi_runtime_services_t *efi_runtime_p;
+static char fw_mem[( sizeof(efi_system_table_t)
+ + sizeof(efi_runtime_services_t)
+ + NUM_EFI_DESCS*sizeof(efi_config_table_t)
+ + sizeof(struct ia64_sal_systab)
+ + sizeof(struct ia64_sal_desc_entry_point)
+ + sizeof(struct ia64_sal_desc_ap_wakeup)
+ + sizeof(struct acpi20_table_rsdp)
+ + sizeof(struct acpi_table_xsdt)
+ + sizeof(struct acpi_table_slit)
+ + MAX_SN_NODES*MAX_SN_NODES+8
+ + sizeof(struct acpi_table_madt)
+ + 16*MAX_CPUS
+ + (1+8*MAX_SN_NODES)*(sizeof(efi_memory_desc_t))
+ + sizeof(struct acpi_table_srat)
+ + MAX_CPUS*sizeof(struct acpi_table_processor_affinity)
+ + MAX_SN_NODES*sizeof(struct acpi_table_memory_affinity)
+ + sizeof(ia64_sal_desc_ptc_t) +
+ + MAX_SN_NODES*sizeof(ia64_sal_ptc_domain_info_t) +
+ + MAX_CPUS*sizeof(ia64_sal_ptc_domain_proc_entry_t) +
+ + 1024)] __attribute__ ((aligned (8)));
+
+
+static efi_status_t
+efi_get_time (efi_time_t *tm, efi_time_cap_t *tc)
+{
+ if (tm) {
+ memset(tm, 0, sizeof(*tm));
+ tm->year = 2000;
+ tm->month = 2;
+ tm->day = 13;
+ tm->hour = 10;
+ tm->minute = 11;
+ tm->second = 12;
+ }
+
+ if (tc) {
+ tc->resolution = 10;
+ tc->accuracy = 12;
+ tc->sets_to_zero = 1;
+ }
+
+ return EFI_SUCCESS;
+}
+
+static void
+efi_reset_system (int reset_type, efi_status_t status, unsigned long data_size, efi_char16_t *data)
+{
+ while(1); /* Is there a pseudo-op to stop Medusa? */
+}
+
+static efi_status_t
+efi_success (void)
+{
+ return EFI_SUCCESS;
+}
+
+static efi_status_t
+efi_unimplemented (void)
+{
+ return EFI_UNSUPPORTED;
+}
+
+#ifdef SGI_SN2
+
+#undef cpu_physical_id
+#define cpu_physical_id(cpuid) ((ia64_getreg(_IA64_REG_CR_LID) >> 16) & 0xffff)
+
+void
+fprom_send_cpei(void) {
+ long *p, val;
+ long physid;
+ long nasid, slice;
+
+ physid = cpu_physical_id(0);
+ nasid = cpu_physical_id_to_nasid(physid);
+ slice = cpu_physical_id_to_slice(physid);
+
+ p = (long*)GLOBAL_MMR_ADDR(nasid, SH_IPI_INT);
+ val = (1UL<<SH_IPI_INT_SEND_SHFT) |
+ (physid<<SH_IPI_INT_PID_SHFT) |
+ ((long)0<<SH_IPI_INT_TYPE_SHFT) |
+ ((long)0x1e<<SH_IPI_INT_IDX_SHFT) |
+ (0x000feeUL<<SH_IPI_INT_BASE_SHFT);
+ *p = val;
+
+}
+#endif
+
+
+static struct sal_ret_values
+sal_emulator (long index, unsigned long in1, unsigned long in2,
+ unsigned long in3, unsigned long in4, unsigned long in5,
+ unsigned long in6, unsigned long in7)
+{
+ long r9 = 0;
+ long r10 = 0;
+ long r11 = 0;
+ long status;
+
+ /*
+ * Don't do a "switch" here since that gives us code that
+ * isn't self-relocatable.
+ */
+ status = 0;
+ if (index == SAL_FREQ_BASE) {
+ switch (in1) {
+ case SAL_FREQ_BASE_PLATFORM:
+ r9 = 500000000;
+ break;
+
+ case SAL_FREQ_BASE_INTERVAL_TIMER:
+ /*
+ * Is this supposed to be the cr.itc frequency
+ * or something platform specific? The SAL
+ * doc ain't exactly clear on this...
+ */
+ r9 = 700000000;
+ break;
+
+ case SAL_FREQ_BASE_REALTIME_CLOCK:
+ r9 = 50000000;
+ break;
+
+ default:
+ status = -1;
+ break;
+ }
+ } else if (index == SAL_SET_VECTORS) {
+ if (in1 == SAL_VECTOR_OS_BOOT_RENDEZ) {
+ func_ptr_t *fp;
+ fp = ADDR_OF(&ap_entry);
+ fp->pc = in2;
+ fp->gp = in3;
+ } else if (in1 == SAL_VECTOR_OS_MCA || in1 == SAL_VECTOR_OS_INIT) {
+ } else {
+ status = -1;
+ }
+ ;
+ } else if (index == SAL_GET_STATE_INFO) {
+ ;
+ } else if (index == SAL_GET_STATE_INFO_SIZE) {
+ r9 = 10000;
+ ;
+ } else if (index == SAL_CLEAR_STATE_INFO) {
+ ;
+ } else if (index == SAL_MC_RENDEZ) {
+ ;
+ } else if (index == SAL_MC_SET_PARAMS) {
+ ;
+ } else if (index == SAL_CACHE_FLUSH) {
+ ;
+ } else if (index == SAL_CACHE_INIT) {
+ ;
+ } else if (index == SAL_UPDATE_PAL) {
+ ;
+#ifdef SGI_SN2
+ } else if (index == SN_SAL_LOG_CE) {
+#ifdef ajmtestcpei
+ fprom_send_cpei();
+#else /* ajmtestcpei */
+ ;
+#endif /* ajmtestcpei */
+#endif
+ } else if (index == SN_SAL_PROBE) {
+ r9 = 0UL;
+ if (in2 == 4) {
+ r9 = *(unsigned *)in1;
+ if (r9 == -1) {
+ status = 1;
+ }
+ } else if (in2 == 2) {
+ r9 = *(unsigned short *)in1;
+ if (r9 == -1) {
+ status = 1;
+ }
+ } else if (in2 == 1) {
+ r9 = *(unsigned char *)in1;
+ if (r9 == -1) {
+ status = 1;
+ }
+ } else if (in2 == 8) {
+ r9 = *(unsigned long *)in1;
+ if (r9 == -1) {
+ status = 1;
+ }
+ } else {
+ status = 2;
+ }
+ } else if (index == SN_SAL_GET_KLCONFIG_ADDR) {
+ r9 = 0x30000;
+ } else if (index == SN_SAL_CONSOLE_PUTC) {
+ status = -1;
+ } else if (index == SN_SAL_CONSOLE_GETC) {
+ status = -1;
+ } else if (index == SN_SAL_CONSOLE_POLL) {
+ status = -1;
+ } else if (index == SN_SAL_SYSCTL_IOBRICK_MODULE_GET) {
+ status = -1;
+ } else {
+ status = -1;
+ }
+
+ asm volatile ("" :: "r"(r9), "r"(r10), "r"(r11));
+ return ((struct sal_ret_values) {status, r9, r10, r11});
+}
+
+
+/*
+ * This is here to work around a bug in egcs-1.1.1b that causes the
+ * compiler to crash (seems like a bug in the new alias analysis code).
+ */
+void *
+id (long addr)
+{
+ return (void *) addr;
+}
+
+
+/*
+ * Fix the addresses in a function pointer by adding base node address
+ * to pc & gp.
+ */
+void
+fix_function_pointer(void *fp)
+{
+ func_ptr_t *_fp;
+
+ _fp = fp;
+ _fp->pc = __fwtab_pa(base_nasid, _fp->pc);
+ _fp->gp = __fwtab_pa(base_nasid, _fp->gp);
+}
+
+void
+fix_virt_function_pointer(void **fptr)
+{
+ func_ptr_t *fp;
+ long *p;
+
+ p = (long*)fptr;
+ fp = *fptr;
+ fp->pc = fp->pc | PAGE_OFFSET;
+ fp->gp = fp->gp | PAGE_OFFSET;
+ *p |= PAGE_OFFSET;
+}
+
+
+int
+efi_set_virtual_address_map(void)
+{
+ efi_runtime_services_t *runtime;
+
+ runtime = efi_runtime_p;
+ fix_virt_function_pointer((void**)&runtime->get_time);
+ fix_virt_function_pointer((void**)&runtime->set_time);
+ fix_virt_function_pointer((void**)&runtime->get_wakeup_time);
+ fix_virt_function_pointer((void**)&runtime->set_wakeup_time);
+ fix_virt_function_pointer((void**)&runtime->set_virtual_address_map);
+ fix_virt_function_pointer((void**)&runtime->get_variable);
+ fix_virt_function_pointer((void**)&runtime->get_next_variable);
+ fix_virt_function_pointer((void**)&runtime->set_variable);
+ fix_virt_function_pointer((void**)&runtime->get_next_high_mono_count);
+ fix_virt_function_pointer((void**)&runtime->reset_system);
+ return EFI_SUCCESS;
+}
+
+void
+acpi_table_initx(struct acpi_table_header *p, char *sig, int siglen, int revision, int oem_revision)
+{
+ memcpy(p->signature, sig, siglen);
+ memcpy(p->oem_id, OEMID, 6);
+ memcpy(p->oem_table_id, sig, 4);
+ memcpy(p->oem_table_id+4, PRODUCT, 4);
+ p->revision = revision;
+ p->oem_revision = (revision<<16) + oem_revision;
+ memcpy(p->asl_compiler_id, "FPRM", 4);
+ p->asl_compiler_revision = 1;
+}
+
+void
+acpi_checksum(struct acpi_table_header *p, int length)
+{
+ u8 *cp, *cpe, checksum;
+
+ p->checksum = 0;
+ p->length = length;
+ checksum = 0;
+ for (cp=(u8*)p, cpe=cp+p->length; cp<cpe; cp++)
+ checksum += *cp;
+ p->checksum = -checksum;
+}
+
+void
+acpi_checksum_rsdp20(struct acpi20_table_rsdp *p, int length)
+{
+ u8 *cp, *cpe, checksum;
+
+ p->checksum = 0;
+ p->ext_checksum = 0;
+ p->length = length;
+ checksum = 0;
+ for (cp=(u8*)p, cpe=cp+20; cp<cpe; cp++)
+ checksum += *cp;
+ p->checksum = -checksum;
+
+ checksum = 0;
+ for (cp=(u8*)p, cpe=cp+length; cp<cpe; cp++)
+ checksum += *cp;
+ p->ext_checksum = -checksum;
+}
+
+int
+nasid_present(int nasid)
+{
+ int cnode;
+ for (cnode=0; cnode<num_nodes; cnode++)
+ if (GetNasid(cnode) == nasid)
+ return 1;
+ return 0;
+}
+
+void
+sys_fw_init (const char *args, int arglen, int bsp)
+{
+ /*
+ * Use static variables to keep from overflowing the RSE stack
+ */
+ static efi_system_table_t *efi_systab;
+ static efi_runtime_services_t *efi_runtime;
+ static efi_config_table_t *efi_tables;
+ static ia64_sal_desc_ptc_t *sal_ptc;
+ static ia64_sal_ptc_domain_info_t *sal_ptcdi;
+ static ia64_sal_ptc_domain_proc_entry_t *sal_ptclid;
+ static struct acpi20_table_rsdp *acpi20_rsdp;
+ static struct acpi_table_xsdt *acpi_xsdt;
+ static struct acpi_table_slit *acpi_slit;
+ static struct acpi_table_madt *acpi_madt;
+ static struct acpi_table_lsapic *lsapic20;
+ static struct ia64_sal_systab *sal_systab;
+ static struct acpi_table_srat *acpi_srat;
+ static struct acpi_table_processor_affinity *srat_cpu_affinity;
+ static struct acpi_table_memory_affinity *srat_memory_affinity;
+ static efi_memory_desc_t *efi_memmap, *md;
+ static unsigned long *pal_desc, *sal_desc;
+ static struct ia64_sal_desc_entry_point *sal_ed;
+ static struct ia64_boot_param *bp;
+ static struct ia64_sal_desc_ap_wakeup *sal_apwake;
+ static unsigned char checksum;
+ static char *cp, *cmd_line, *vendor;
+ static void *ptr;
+ static int mdsize, domain, last_domain ;
+ static int i, j, cnode, max_nasid, nasid, cpu, num_memmd, cpus_found;
+
+ /*
+ * Pass the parameter base address to the build_efi_xxx routines.
+ */
+#if defined(SGI_SN2)
+ build_init(0x3000000000UL | ((long)base_nasid<<38));
+#endif
+
+ num_nodes = GetNumNodes();
+ num_cpus = GetNumCpus();
+ for (max_nasid=0, cnode=0; cnode<num_nodes; cnode++)
+ max_nasid = MAX(max_nasid, GetNasid(cnode));
+
+
+ memset(fw_mem, 0, sizeof(fw_mem));
+
+ pal_desc = (unsigned long *) &pal_emulator;
+ sal_desc = (unsigned long *) &sal_emulator;
+ fix_function_pointer(&pal_emulator);
+ fix_function_pointer(&sal_emulator);
+
+ /* Align this to 16 bytes, probably EFI does this */
+ mdsize = (sizeof(efi_memory_desc_t) + 15) & ~15 ;
+
+ cp = fw_mem;
+ efi_systab = (void *) cp; cp += ALIGN8(sizeof(*efi_systab));
+ efi_runtime_p = efi_runtime = (void *) cp; cp += ALIGN8(sizeof(*efi_runtime));
+ efi_tables = (void *) cp; cp += ALIGN8(NUM_EFI_DESCS*sizeof(*efi_tables));
+ sal_systab = (void *) cp; cp += ALIGN8(sizeof(*sal_systab));
+ sal_ed = (void *) cp; cp += ALIGN8(sizeof(*sal_ed));
+ sal_ptc = (void *) cp; cp += ALIGN8(sizeof(*sal_ptc));
+ sal_apwake = (void *) cp; cp += ALIGN8(sizeof(*sal_apwake));
+ acpi20_rsdp = (void *) cp; cp += ALIGN8(sizeof(*acpi20_rsdp));
+ acpi_xsdt = (void *) cp; cp += ALIGN8(sizeof(*acpi_xsdt) + 64);
+ /* save space for more OS defined table pointers. */
+
+ acpi_slit = (void *) cp; cp += ALIGN8(sizeof(*acpi_slit) + 8 + (max_nasid+1)*(max_nasid+1));
+ acpi_madt = (void *) cp; cp += ALIGN8(sizeof(*acpi_madt) + sizeof(struct acpi_table_lsapic) * (num_cpus+1));
+ acpi_srat = (void *) cp; cp += ALIGN8(sizeof(struct acpi_table_srat));
+ cp += sizeof(struct acpi_table_processor_affinity)*num_cpus + sizeof(struct acpi_table_memory_affinity)*num_nodes;
+ vendor = (char *) cp; cp += ALIGN8(40);
+ efi_memmap = (void *) cp; cp += ALIGN8(8*32*sizeof(*efi_memmap));
+ sal_ptcdi = (void *) cp; cp += ALIGN8(CPUS_PER_FSB*(1+num_nodes)*sizeof(*sal_ptcdi));
+ sal_ptclid = (void *) cp; cp += ALIGN8(((3+num_cpus)*sizeof(*sal_ptclid)+7)/8*8);
+ cmd_line = (void *) cp;
+
+ if (args) {
+ if (arglen >= 1024)
+ arglen = 1023;
+ memcpy(cmd_line, args, arglen);
+ } else {
+ arglen = 0;
+ }
+ cmd_line[arglen] = '\0';
+ /*
+ * For now, just bring up bash.
+ * If you want to execute all the startup scripts, delete the "init=..".
+ * You can also edit this line to pass other arguments to the kernel.
+ * Note: disable kernel text replication.
+ */
+ strcpy(cmd_line, "init=/bin/bash console=ttyS0");
+
+ memset(efi_systab, 0, sizeof(*efi_systab));
+ efi_systab->hdr.signature = EFI_SYSTEM_TABLE_SIGNATURE;
+ efi_systab->hdr.revision = EFI_SYSTEM_TABLE_REVISION;
+ efi_systab->hdr.headersize = sizeof(efi_systab->hdr);
+ efi_systab->fw_vendor = __fwtab_pa(base_nasid, vendor);
+ efi_systab->fw_revision = 1;
+ efi_systab->runtime = __fwtab_pa(base_nasid, efi_runtime);
+ efi_systab->nr_tables = 2;
+ efi_systab->tables = __fwtab_pa(base_nasid, efi_tables);
+ memcpy(vendor, "S\0i\0l\0i\0c\0o\0n\0-\0G\0r\0a\0p\0h\0i\0c\0s\0\0", 40);
+
+ efi_runtime->hdr.signature = EFI_RUNTIME_SERVICES_SIGNATURE;
+ efi_runtime->hdr.revision = EFI_RUNTIME_SERVICES_REVISION;
+ efi_runtime->hdr.headersize = sizeof(efi_runtime->hdr);
+ efi_runtime->get_time = __fwtab_pa(base_nasid, &efi_get_time);
+ efi_runtime->set_time = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->get_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->set_wakeup_time = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->set_virtual_address_map = __fwtab_pa(base_nasid, &efi_set_virtual_address_map);
+ efi_runtime->get_variable = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->get_next_variable = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->set_variable = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->get_next_high_mono_count = __fwtab_pa(base_nasid, &efi_unimplemented);
+ efi_runtime->reset_system = __fwtab_pa(base_nasid, &efi_reset_system);
+
+ efi_tables->guid = SAL_SYSTEM_TABLE_GUID;
+ efi_tables->table = __fwtab_pa(base_nasid, sal_systab);
+ efi_tables++;
+ efi_tables->guid = ACPI_20_TABLE_GUID;
+ efi_tables->table = __fwtab_pa(base_nasid, acpi20_rsdp);
+ efi_tables++;
+
+ fix_function_pointer(&efi_unimplemented);
+ fix_function_pointer(&efi_get_time);
+ fix_function_pointer(&efi_success);
+ fix_function_pointer(&efi_reset_system);
+ fix_function_pointer(&efi_set_virtual_address_map);
+
+
+ /* fill in the ACPI20 system table - has a pointer to the ACPI table header */
+ memcpy(acpi20_rsdp->signature, "RSD PTR ", 8);
+ acpi20_rsdp->xsdt_address = (u64)__fwtab_pa(base_nasid, acpi_xsdt);
+ acpi20_rsdp->revision = 2;
+ acpi_checksum_rsdp20(acpi20_rsdp, sizeof(struct acpi20_table_rsdp));
+
+ /* Set up the XSDT table - contains pointers to the other ACPI tables */
+ acpi_table_initx(&acpi_xsdt->header, XSDT_SIG, 4, 1, 1);
+ acpi_xsdt->entry[0] = __fwtab_pa(base_nasid, acpi_madt);
+ acpi_xsdt->entry[1] = __fwtab_pa(base_nasid, acpi_slit);
+ acpi_xsdt->entry[2] = __fwtab_pa(base_nasid, acpi_srat);
+ acpi_checksum(&acpi_xsdt->header, sizeof(struct acpi_table_xsdt) + 16);
+
+ /* Set up the APIC table */
+ acpi_table_initx(&acpi_madt->header, APIC_SIG, 4, 1, 1);
+ lsapic20 = (struct acpi_table_lsapic*) (acpi_madt + 1);
+ for (cnode=0; cnode<num_nodes; cnode++) {
+ nasid = GetNasid(cnode);
+ for(cpu=0; cpu<CPUS_PER_NODE; cpu++) {
+ if (!IsCpuPresent(cnode, cpu))
+ continue;
+ lsapic20->header.type = ACPI_MADT_LSAPIC;
+ lsapic20->header.length = sizeof(struct acpi_table_lsapic);
+ lsapic20->acpi_id = cnode*4+cpu;
+ lsapic20->flags.enabled = 1;
+#if defined(SGI_SN2)
+ lsapic20->eid = nasid&0xffff;
+ lsapic20->id = (cpu<<4) | (nasid>>16);
+#endif
+ lsapic20 = (struct acpi_table_lsapic*) ((long)lsapic20+sizeof(struct acpi_table_lsapic));
+ }
+ }
+ acpi_checksum(&acpi_madt->header, (char*)lsapic20 - (char*)acpi_madt);
+
+ /* Set up the SRAT table */
+ acpi_table_initx(&acpi_srat->header, SRAT_SIG, 4, ACPI_SRAT_REVISION, 1);
+ ptr = acpi_srat+1;
+ for (cnode=0; cnode<num_nodes; cnode++) {
+ nasid = GetNasid(cnode);
+ srat_memory_affinity = ptr;
+ ptr = srat_memory_affinity+1;
+ srat_memory_affinity->header.type = ACPI_SRAT_MEMORY_AFFINITY;
+ srat_memory_affinity->header.length = sizeof(struct acpi_table_memory_affinity);
+ srat_memory_affinity->proximity_domain = PROXIMITY_DOMAIN(nasid);
+ srat_memory_affinity->base_addr_lo = 0;
+ srat_memory_affinity->length_lo = 0;
+#if defined(SGI_SN2)
+ srat_memory_affinity->base_addr_hi = (nasid<<6) | (3<<4);
+ srat_memory_affinity->length_hi = (MD_BANKSIZE*MD_BANKS_PER_NODE)>>32;
+#endif
+ srat_memory_affinity->memory_type = ACPI_ADDRESS_RANGE_MEMORY;
+ srat_memory_affinity->flags.enabled = 1;
+ }
+
+ for (cnode=0; cnode<num_nodes; cnode++) {
+ nasid = GetNasid(cnode);
+ for(cpu=0; cpu<CPUS_PER_NODE; cpu++) {
+ if (!IsCpuPresent(cnode, cpu))
+ continue;
+ srat_cpu_affinity = ptr;
+ ptr = srat_cpu_affinity + 1;
+ srat_cpu_affinity->header.type = ACPI_SRAT_PROCESSOR_AFFINITY;
+ srat_cpu_affinity->header.length = sizeof(struct acpi_table_processor_affinity);
+ srat_cpu_affinity->proximity_domain = PROXIMITY_DOMAIN(nasid);
+ srat_cpu_affinity->flags.enabled = 1;
+#if defined(SGI_SN2)
+ srat_cpu_affinity->lsapic_eid = nasid&0xffff;
+ srat_cpu_affinity->apic_id = (cpu<<4) | (nasid>>16);
+#endif
+ }
+ }
+ acpi_checksum(&acpi_srat->header, (char*)ptr - (char*)acpi_srat);
+
+
+ /* Set up the SLIT table */
+ acpi_table_initx(&acpi_slit->header, SLIT_SIG, 4, ACPI_SLIT_REVISION, 1);
+ acpi_slit->localities = PROXIMITY_DOMAIN(max_nasid)+1;
+ cp=acpi_slit->entry;
+ memset(cp, 255, acpi_slit->localities*acpi_slit->localities);
+
+ for (i=0; i<=max_nasid; i++)
+ for (j=0; j<=max_nasid; j++)
+ if (nasid_present(i) && nasid_present(j))
+ *(cp+PROXIMITY_DOMAIN(i)*acpi_slit->localities+PROXIMITY_DOMAIN(j)) = 10 + MIN(254, 5*abs(i-j));
+
+ cp = acpi_slit->entry + acpi_slit->localities*acpi_slit->localities;
+ acpi_checksum(&acpi_slit->header, cp - (char*)acpi_slit);
+
+
+ /* fill in the SAL system table: */
+ memcpy(sal_systab->signature, "SST_", 4);
+ sal_systab->size = sizeof(*sal_systab);
+ sal_systab->sal_rev_minor = 1;
+ sal_systab->sal_rev_major = 0;
+ sal_systab->entry_count = 3;
+ sal_systab->sal_b_rev_major = 0x1; /* set the SN SAL rev to */
+ sal_systab->sal_b_rev_minor = 0x0; /* 1.00 */
+
+ strcpy(sal_systab->oem_id, "SGI");
+ strcpy(sal_systab->product_id, "SN2");
+
+ /* fill in an entry point: */
+ sal_ed->type = SAL_DESC_ENTRY_POINT;
+ sal_ed->pal_proc = __fwtab_pa(base_nasid, pal_desc[0]);
+ sal_ed->sal_proc = __fwtab_pa(base_nasid, sal_desc[0]);
+ sal_ed->gp = __fwtab_pa(base_nasid, sal_desc[1]);
+
+ /* kludge the PTC domain info */
+ sal_ptc->type = SAL_DESC_PTC;
+ sal_ptc->num_domains = 0;
+ sal_ptc->domain_info = __fwtab_pa(base_nasid, sal_ptcdi);
+ cpus_found = 0;
+ last_domain = -1;
+ sal_ptcdi--;
+ for (cnode=0; cnode<num_nodes; cnode++) {
+ nasid = GetNasid(cnode);
+ for(cpu=0; cpu<CPUS_PER_NODE; cpu++) {
+ if (IsCpuPresent(cnode, cpu)) {
+ domain = cnode*CPUS_PER_NODE + cpu/CPUS_PER_FSB;
+ if (domain != last_domain) {
+ sal_ptc->num_domains++;
+ sal_ptcdi++;
+ sal_ptcdi->proc_count = 0;
+ sal_ptcdi->proc_list = __fwtab_pa(base_nasid, sal_ptclid);
+ last_domain = domain;
+ }
+ sal_ptcdi->proc_count++;
+ sal_ptclid->id = nasid;
+ sal_ptclid->eid = cpu;
+ sal_ptclid++;
+ cpus_found++;
+ }
+ }
+ }
+
+ if (cpus_found != num_cpus)
+ FPROM_BUG();
+
+ /* Make the AP WAKEUP entry */
+ sal_apwake->type = SAL_DESC_AP_WAKEUP;
+ sal_apwake->mechanism = IA64_SAL_AP_EXTERNAL_INT;
+ sal_apwake->vector = 18;
+
+ for (checksum=0, cp=(char*)sal_systab; cp < (char *)efi_memmap; ++cp)
+ checksum += *cp;
+ sal_systab->checksum = -checksum;
+
+ /* If the checksum is correct, the kernel tries to use the
+ * table. We don't build enough of the table & the kernel aborts.
+ * Note that the PROM has the same problem!!
+ */
+
+ md = &efi_memmap[0];
+ num_memmd = build_efi_memmap((void *)md, mdsize) ;
+
+ bp = (struct ia64_boot_param*) __fwtab_pa(base_nasid, BOOT_PARAM_ADDR);
+ bp->efi_systab = __fwtab_pa(base_nasid, &fw_mem);
+ bp->efi_memmap = __fwtab_pa(base_nasid, efi_memmap);
+ bp->efi_memmap_size = num_memmd*mdsize;
+ bp->efi_memdesc_size = mdsize;
+ bp->efi_memdesc_version = 0x101;
+ bp->command_line = __fwtab_pa(base_nasid, cmd_line);
+ bp->console_info.num_cols = 80;
+ bp->console_info.num_rows = 25;
+ bp->console_info.orig_x = 0;
+ bp->console_info.orig_y = 24;
+ bp->fpswa = 0;
+
+ /*
+ * Now pick the BSP & store its LID value in
+ * a global variable. Note: if the BSP number is greater than the
+ * last cpu, pick the last cpu.
+ */
+ for (cnode=0; cnode<num_nodes; cnode++) {
+ for(cpu=0; cpu<CPUS_PER_NODE; cpu++) {
+ if (!IsCpuPresent(cnode, cpu))
+ continue;
+#ifdef SGI_SN2
+ bsp_lid = (GetNasid(cnode)<<16) | (cpu<<28);
+#endif
+ if (bsp-- > 0)
+ continue;
+ return;
+ }
+ }
+}
--- /dev/null
+/* $Id: klgraph_init.c,v 1.1 2002/02/28 17:31:25 marcelo Exp $
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+
+/*
+ * This is a temporary file that statically initializes the expected
+ * initial klgraph information that is normally provided by prom.
+ */
+
+#include <linux/types.h>
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/iograph.h>
+#include <asm/param.h>
+#include <asm/sn/pio.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/invent.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/xtalk/xtalkaddrs.h>
+#include <asm/sn/klconfig.h>
+
+#define SYNERGY_WIDGET ((char *)0xc0000e0000000000)
+#define SYNERGY_SWIZZLE ((char *)0xc0000e0000000400)
+#define HUBREG ((char *)0xc0000a0001e00000)
+#define WIDGET0 ((char *)0xc0000a0000000000)
+#define WIDGET4 ((char *)0xc0000a0000000004)
+
+
+#define convert(a,b,c) temp = (u64 *)a; *temp = b; temp++; *temp = c
+void
+klgraph_init(void)
+{
+
+ u64 *temp;
+ /*
+ * Initialize some hub/xbow registers that allow access to
+ * the Xbridge etc. These are normally done by the PROM.
+ */
+
+ /* Write IOERR clear to clear the CRAZY bit in the status */
+ *(volatile uint64_t *)0xc000000801c001f8 = (uint64_t)0xffffffff;
+
+ /* set widget control register...setting bedrock widget id to a */
+ *(volatile uint64_t *)0xc000000801c00020 = (uint64_t)0x801a;
+
+ /* set io outbound widget access...allow all */
+ *(volatile uint64_t *)0xc000000801c00110 = (uint64_t)0xff01;
+
+ /* set io inbound widget access...allow all */
+ *(volatile uint64_t *)0xc000000801c00118 = (uint64_t)0xff01;
+
+ /* set io crb timeout to max */
+ *(volatile uint64_t *)0xc000000801c003c0 = (uint64_t)0xffffff;
+ *(volatile uint64_t *)0xc000000801c003c0 = (uint64_t)0xffffff;
+
+ /* set local block io permission...allow all */
+// [LB] *(volatile uint64_t *)0xc000000801e04010 = (uint64_t)0xfffffffffffffff;
+
+ /* clear any errors */
+ /* clear_ii_error(); medusa should have cleared these */
+
+ /* set default read response buffers in bridge */
+// [PI] *(volatile u32 *)0xc00000080f000280L = 0xba98;
+// [PI] *(volatile u32 *)0xc00000080f000288L = 0xba98;
+
+ /*
+ * klconfig entries initialization - mankato
+ */
+ convert(0xe000003000030000, 0x00000000beedbabe, 0x0000004800000000);
+ convert(0xe000003000030010, 0x0003007000000018, 0x800002000f820178);
+ convert(0xe000003000030020, 0x80000a000f024000, 0x800002000f800000);
+ convert(0xe000003000030030, 0x0300fafa00012580, 0x00000000040f0000);
+ convert(0xe000003000030040, 0x0000000000000000, 0x0003097000030070);
+ convert(0xe000003000030050, 0x00030970000303b0, 0x0003181000033f70);
+ convert(0xe000003000030060, 0x0003d51000037570, 0x0000000000038330);
+ convert(0xe000003000030070, 0x0203110100030140, 0x0001000000000101);
+ convert(0xe000003000030080, 0x0900000000000000, 0x000000004e465e67);
+ convert(0xe000003000030090, 0x0003097000000000, 0x00030b1000030a40);
+ convert(0xe0000030000300a0, 0x00030cb000030be0, 0x000315a0000314d0);
+ convert(0xe0000030000300b0, 0x0003174000031670, 0x0000000000000000);
+ convert(0xe000003000030100, 0x000000000000001a, 0x3350490000000000);
+ convert(0xe000003000030110, 0x0000000000000037, 0x0000000000000000);
+ convert(0xe000003000030140, 0x0002420100030210, 0x0001000000000101);
+ convert(0xe000003000030150, 0x0100000000000000, 0xffffffffffffffff);
+ convert(0xe000003000030160, 0x00030d8000000000, 0x0000000000030e50);
+ convert(0xe0000030000301c0, 0x0000000000000000, 0x0000000000030070);
+ convert(0xe0000030000301d0, 0x0000000000000025, 0x424f490000000000);
+ convert(0xe0000030000301e0, 0x000000004b434952, 0x0000000000000000);
+ convert(0xe000003000030210, 0x00027101000302e0, 0x00010000000e4101);
+ convert(0xe000003000030220, 0x0200000000000000, 0xffffffffffffffff);
+ convert(0xe000003000030230, 0x00030f2000000000, 0x0000000000030ff0);
+ convert(0xe000003000030290, 0x0000000000000000, 0x0000000000030140);
+ convert(0xe0000030000302a0, 0x0000000000000026, 0x7262490000000000);
+ convert(0xe0000030000302b0, 0x00000000006b6369, 0x0000000000000000);
+ convert(0xe0000030000302e0, 0x0002710100000000, 0x00010000000f3101);
+ convert(0xe0000030000302f0, 0x0500000000000000, 0xffffffffffffffff);
+ convert(0xe000003000030300, 0x000310c000000000, 0x0003126000031190);
+ convert(0xe000003000030310, 0x0003140000031330, 0x0000000000000000);
+ convert(0xe000003000030360, 0x0000000000000000, 0x0000000000030140);
+ convert(0xe000003000030370, 0x0000000000000029, 0x7262490000000000);
+ convert(0xe000003000030380, 0x00000000006b6369, 0x0000000000000000);
+ convert(0xe000003000030970, 0x0000000002010102, 0x0000000000000000);
+ convert(0xe000003000030980, 0x000000004e465e67, 0xffffffff00000000);
+ /* convert(0x00000000000309a0, 0x0000000000037570, 0x0000000100000000); */
+ convert(0xe0000030000309a0, 0x0000000000037570, 0xffffffff00000000);
+ convert(0xe0000030000309b0, 0x0000000000030070, 0x0000000000000000);
+ convert(0xe0000030000309c0, 0x000000000003f420, 0x0000000000000000);
+ convert(0xe000003000030a40, 0x0000000002010125, 0x0000000000000000);
+ convert(0xe000003000030a50, 0xffffffffffffffff, 0xffffffff00000000);
+ convert(0xe000003000030a70, 0x0000000000037b78, 0x0000000000000000);
+ convert(0xe000003000030b10, 0x0000000002010125, 0x0000000000000000);
+ convert(0xe000003000030b20, 0xffffffffffffffff, 0xffffffff00000000);
+ convert(0xe000003000030b40, 0x0000000000037d30, 0x0000000000000001);
+ convert(0xe000003000030be0, 0x00000000ff010203, 0x0000000000000000);
+ convert(0xe000003000030bf0, 0xffffffffffffffff, 0xffffffff000000ff);
+ convert(0xe000003000030c10, 0x0000000000037ee8, 0x0100010000000200);
+ convert(0xe000003000030cb0, 0x00000000ff310111, 0x0000000000000000);
+ convert(0xe000003000030cc0, 0xffffffffffffffff, 0x0000000000000000);
+ convert(0xe000003000030d80, 0x0000000002010104, 0x0000000000000000);
+ convert(0xe000003000030d90, 0xffffffffffffffff, 0x00000000000000ff);
+ convert(0xe000003000030db0, 0x0000000000037f18, 0x0000000000000000);
+ convert(0xe000003000030dc0, 0x0000000000000000, 0x0003007000060000);
+ convert(0xe000003000030de0, 0x0000000000000000, 0x0003021000050000);
+ convert(0xe000003000030df0, 0x000302e000050000, 0x0000000000000000);
+ convert(0xe000003000030e30, 0x0000000000000000, 0x000000000000000a);
+ convert(0xe000003000030e50, 0x00000000ff00011a, 0x0000000000000000);
+ convert(0xe000003000030e60, 0xffffffffffffffff, 0x0000000000000000);
+ convert(0xe000003000030e80, 0x0000000000037fe0, 0x9e6e9e9e9e9e9e9e);
+ convert(0xe000003000030e90, 0x000000000000bc6e, 0x0000000000000000);
+ convert(0xe000003000030f20, 0x0000000002010205, 0x00000000d0020000);
+ convert(0xe000003000030f30, 0xffffffffffffffff, 0x0000000e0000000e);
+ convert(0xe000003000030f40, 0x000000000000000e, 0x0000000000000000);
+ convert(0xe000003000030f50, 0x0000000000038010, 0x00000000000007ff);
+ convert(0xe000003000030f70, 0x0000000000000000, 0x0000000022001077);
+ convert(0xe000003000030fa0, 0x0000000000000000, 0x000000000003f4a8);
+ convert(0xe000003000030ff0, 0x0000000000310120, 0x0000000000000000);
+ convert(0xe000003000031000, 0xffffffffffffffff, 0xffffffff00000002);
+ convert(0xe000003000031010, 0x000000000000000e, 0x0000000000000000);
+ convert(0xe000003000031020, 0x0000000000038088, 0x0000000000000000);
+ convert(0xe0000030000310c0, 0x0000000002010205, 0x00000000d0020000);
+ convert(0xe0000030000310d0, 0xffffffffffffffff, 0x0000000f0000000f);
+ convert(0xe0000030000310e0, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe0000030000310f0, 0x00000000000380b8, 0x00000000000007ff);
+ convert(0xe000003000031120, 0x0000000022001077, 0x00000000000310a9);
+ convert(0xe000003000031130, 0x00000000580211c1, 0x000000008009104c);
+ convert(0xe000003000031140, 0x0000000000000000, 0x000000000003f4c0);
+ convert(0xe000003000031190, 0x0000000000310120, 0x0000000000000000);
+ convert(0xe0000030000311a0, 0xffffffffffffffff, 0xffffffff00000003);
+ convert(0xe0000030000311b0, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe0000030000311c0, 0x0000000000038130, 0x0000000000000000);
+ convert(0xe000003000031260, 0x0000000000110106, 0x0000000000000000);
+ convert(0xe000003000031270, 0xffffffffffffffff, 0xffffffff00000004);
+ convert(0xe000003000031280, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe0000030000312a0, 0x00000000ff110013, 0x0000000000000000);
+ convert(0xe0000030000312b0, 0xffffffffffffffff, 0xffffffff00000000);
+ convert(0xe0000030000312c0, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe0000030000312e0, 0x0000000000110012, 0x0000000000000000);
+ convert(0xe0000030000312f0, 0xffffffffffffffff, 0xffffffff00000000);
+ convert(0xe000003000031300, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe000003000031310, 0x0000000000038160, 0x0000000000000000);
+ convert(0xe000003000031330, 0x00000000ff310122, 0x0000000000000000);
+ convert(0xe000003000031340, 0xffffffffffffffff, 0xffffffff00000005);
+ convert(0xe000003000031350, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe000003000031360, 0x0000000000038190, 0x0000000000000000);
+ convert(0xe000003000031400, 0x0000000000310121, 0x0000000000000000);
+ convert(0xe000003000031410, 0xffffffffffffffff, 0xffffffff00000006);
+ convert(0xe000003000031420, 0x000000000000000f, 0x0000000000000000);
+ convert(0xe000003000031430, 0x00000000000381c0, 0x0000000000000000);
+ convert(0xe0000030000314d0, 0x00000000ff010201, 0x0000000000000000);
+ convert(0xe0000030000314e0, 0xffffffffffffffff, 0xffffffff00000000);
+ convert(0xe000003000031500, 0x00000000000381f0, 0x000030430000ffff);
+ convert(0xe000003000031510, 0x000000000000ffff, 0x0000000000000000);
+ convert(0xe0000030000315a0, 0x00000020ff000201, 0x0000000000000000);
+ convert(0xe0000030000315b0, 0xffffffffffffffff, 0xffffffff00000001);
+ convert(0xe0000030000315d0, 0x0000000000038240, 0x00003f3f0000ffff);
+ convert(0xe0000030000315e0, 0x000000000000ffff, 0x0000000000000000);
+ convert(0xe000003000031670, 0x00000000ff010201, 0x0000000000000000);
+ convert(0xe000003000031680, 0xffffffffffffffff, 0x0000000100000002);
+ convert(0xe0000030000316a0, 0x0000000000038290, 0x000030430000ffff);
+ convert(0xe0000030000316b0, 0x000000000000ffff, 0x0000000000000000);
+ convert(0xe000003000031740, 0x00000020ff000201, 0x0000000000000000);
+ convert(0xe000003000031750, 0xffffffffffffffff, 0x0000000500000003);
+ convert(0xe000003000031770, 0x00000000000382e0, 0x00003f3f0000ffff);
+ convert(0xe000003000031780, 0x000000000000ffff, 0x0000000000000000);
+}
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/bitops.h>
+
+extern void klgraph_init(void);
+void bedrock_init(int);
+void synergy_init(int, int);
+void sys_fw_init (const char *args, int arglen, int bsp);
+
+volatile int bootmaster=0; /* Used to pick bootmaster */
+volatile int nasidmaster[128]={0}; /* Used to pick node/synergy masters */
+int init_done=0;
+extern int bsp_lid;
+
+#define get_bit(b,p) (((*p)>>(b))&1)
+
+int
+fmain(int lid, int bsp) {
+ int syn, nasid, cpu;
+
+ /*
+ * First, let's figure out who we are. This is done from the
+ * LID passed to us.
+ */
+ nasid = (lid>>16)&0xfff;
+ cpu = (lid>>28)&3;
+ syn = 0;
+
+ /*
+ * Now pick a nasid master to initialize Bedrock registers.
+ */
+ if (test_and_set_bit(8, &nasidmaster[nasid]) == 0) {
+ bedrock_init(nasid);
+ test_and_set_bit(9, &nasidmaster[nasid]);
+ } else
+ while (get_bit(9, &nasidmaster[nasid]) == 0);
+
+
+ /*
+ * Now pick a BSP & finish init.
+ */
+ if (test_and_set_bit(0, &bootmaster) == 0) {
+ sys_fw_init(0, 0, bsp);
+ test_and_set_bit(1, &bootmaster);
+ } else
+ while (get_bit(1, &bootmaster) == 0);
+
+ return (lid == bsp_lid);
+}
+
+
+void
+bedrock_init(int nasid)
+{
+ nasid = nasid; /* to quiet gcc */
+#if 0
+ /*
+ * Undef if you need fprom to generate a 1 node klgraph
+ * information .. only works for 1 node for nasid 0.
+ */
+ klgraph_init();
+#endif
+}
+
+
+void
+synergy_init(int nasid, int syn)
+{
+ long *base;
+ long off;
+
+ /*
+ * Enable all FSB flashed interrupts.
+ * I'd really like defines for this......
+ */
+ base = (long*)0x80000e0000000000LL; /* base of synergy regs */
+ for (off = 0x2a0; off < 0x2e0; off+=8) /* offset for VEC_MASK_{0-3}_A/B */
+ *(base+off/8) = -1LL;
+
+ /*
+ * Set the NASID in the FSB_CONFIG register.
+ */
+ base = (long*)0x80000e0000000450LL;
+ *base = (long)((nasid<<16)|(syn<<9));
+}
+
+
+/* Why isn't there a bcopy/memcpy in lib64.a? */
+
+void*
+memcpy(void * dest, const void *src, size_t count)
+{
+ char *s, *se, *d;
+
+ for(d=dest, s=(char*)src, se=s+count; s<se; s++, d++)
+ *d = *s;
+ return dest;
+}
--- /dev/null
+#!/bin/sh
+#
+# Build a textsym file for use in the Arium ITP probe.
+#
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+#
+
+help() {
+cat <<END
+Build a WinDD "symtxt" file for use with the Arium ECM-30 probe.
+
+ Usage: $0 [<vmlinux file> [<output file>]]
+ If no input file is specified, it defaults to vmlinux.
+ If no output file name is specified, it defaults to "textsym".
+END
+exit 1
+}
+
+err () {
+ echo "ERROR - $*" >&2
+ exit 1
+}
+
+
+OPTS="H"
+while getopts "$OPTS" c ; do
+ case $c in
+ H) help;;
+ \?) help;;
+ esac
+
+done
+shift `expr $OPTIND - 1`
+
+#OBJDUMP=/usr/bin/ia64-linux-objdump
+LINUX=${1:-vmlinux}
+TEXTSYM=${2:-${LINUX}.sym}
+TMPSYM=${TEXTSYM}.tmp
+trap "/bin/rm -f $TMPSYM" 0
+
+[ -f $LINUX ] || help
+
+$OBJDUMP -t $LINUX | egrep -v '__ks' | sort > $TMPSYM
+SN1=`egrep "dig_setup|Synergy_da_indr" $TMPSYM|wc -l`
+
+# Dataprefix and textprefix correspond to the VGLOBAL_BASE and VPERNODE_BASE.
+# Eventually, these values should be:
+# dataprefix ffffffff
+# textprefix fffffffe
+# but right now they're still changing, so make them dynamic.
+dataprefix=`awk ' / \.data / { print substr($1, 1, 8) ; exit ; }' $TMPSYM`
+textprefix=`awk ' / \.text / { print substr($1, 1, 8) ; exit ; }' $TMPSYM`
+
+# pipe everything thru sort
+echo "TEXTSYM V1.0" >$TEXTSYM
+(cat <<END
+GLOBAL | ${textprefix}00000000 | CODE | VEC_VHPT_Translation_0000
+GLOBAL | ${textprefix}00000400 | CODE | VEC_ITLB_0400
+GLOBAL | ${textprefix}00000800 | CODE | VEC_DTLB_0800
+GLOBAL | ${textprefix}00000c00 | CODE | VEC_Alt_ITLB_0c00
+GLOBAL | ${textprefix}00001000 | CODE | VEC_Alt_DTLB_1000
+GLOBAL | ${textprefix}00001400 | CODE | VEC_Data_nested_TLB_1400
+GLOBAL | ${textprefix}00001800 | CODE | VEC_Instruction_Key_Miss_1800
+GLOBAL | ${textprefix}00001c00 | CODE | VEC_Data_Key_Miss_1c00
+GLOBAL | ${textprefix}00002000 | CODE | VEC_Dirty-bit_2000
+GLOBAL | ${textprefix}00002400 | CODE | VEC_Instruction_Access-bit_2400
+GLOBAL | ${textprefix}00002800 | CODE | VEC_Data_Access-bit_2800
+GLOBAL | ${textprefix}00002c00 | CODE | VEC_Break_instruction_2c00
+GLOBAL | ${textprefix}00003000 | CODE | VEC_External_Interrupt_3000
+GLOBAL | ${textprefix}00003400 | CODE | VEC_Reserved_3400
+GLOBAL | ${textprefix}00003800 | CODE | VEC_Reserved_3800
+GLOBAL | ${textprefix}00003c00 | CODE | VEC_Reserved_3c00
+GLOBAL | ${textprefix}00004000 | CODE | VEC_Reserved_4000
+GLOBAL | ${textprefix}00004400 | CODE | VEC_Reserved_4400
+GLOBAL | ${textprefix}00004800 | CODE | VEC_Reserved_4800
+GLOBAL | ${textprefix}00004c00 | CODE | VEC_Reserved_4c00
+GLOBAL | ${textprefix}00005000 | CODE | VEC_Page_Not_Present_5000
+GLOBAL | ${textprefix}00005100 | CODE | VEC_Key_Permission_5100
+GLOBAL | ${textprefix}00005200 | CODE | VEC_Instruction_Access_Rights_5200
+GLOBAL | ${textprefix}00005300 | CODE | VEC_Data_Access_Rights_5300
+GLOBAL | ${textprefix}00005400 | CODE | VEC_General_Exception_5400
+GLOBAL | ${textprefix}00005500 | CODE | VEC_Disabled_FP-Register_5500
+GLOBAL | ${textprefix}00005600 | CODE | VEC_Nat_Consumption_5600
+GLOBAL | ${textprefix}00005700 | CODE | VEC_Speculation_5700
+GLOBAL | ${textprefix}00005800 | CODE | VEC_Reserved_5800
+GLOBAL | ${textprefix}00005900 | CODE | VEC_Debug_5900
+GLOBAL | ${textprefix}00005a00 | CODE | VEC_Unaligned_Reference_5a00
+GLOBAL | ${textprefix}00005b00 | CODE | VEC_Unsupported_Data_Reference_5b00
+GLOBAL | ${textprefix}00005c00 | CODE | VEC_Floating-Point_Fault_5c00
+GLOBAL | ${textprefix}00005d00 | CODE | VEC_Floating_Point_Trap_5d00
+GLOBAL | ${textprefix}00005e00 | CODE | VEC_Lower_Privilege_Transfer_Trap_5e00
+GLOBAL | ${textprefix}00005f00 | CODE | VEC_Taken_Branch_Trap_5f00
+GLOBAL | ${textprefix}00006000 | CODE | VEC_Single_Step_Trap_6000
+GLOBAL | ${textprefix}00006100 | CODE | VEC_Reserved_6100
+GLOBAL | ${textprefix}00006200 | CODE | VEC_Reserved_6200
+GLOBAL | ${textprefix}00006300 | CODE | VEC_Reserved_6300
+GLOBAL | ${textprefix}00006400 | CODE | VEC_Reserved_6400
+GLOBAL | ${textprefix}00006500 | CODE | VEC_Reserved_6500
+GLOBAL | ${textprefix}00006600 | CODE | VEC_Reserved_6600
+GLOBAL | ${textprefix}00006700 | CODE | VEC_Reserved_6700
+GLOBAL | ${textprefix}00006800 | CODE | VEC_Reserved_6800
+GLOBAL | ${textprefix}00006900 | CODE | VEC_IA-32_Exception_6900
+GLOBAL | ${textprefix}00006a00 | CODE | VEC_IA-32_Intercept_6a00
+GLOBAL | ${textprefix}00006b00 | CODE | VEC_IA-32_Interrupt_6b00
+GLOBAL | ${textprefix}00006c00 | CODE | VEC_Reserved_6c00
+GLOBAL | ${textprefix}00006d00 | CODE | VEC_Reserved_6d00
+GLOBAL | ${textprefix}00006e00 | CODE | VEC_Reserved_6e00
+GLOBAL | ${textprefix}00006f00 | CODE | VEC_Reserved_6f00
+GLOBAL | ${textprefix}00007000 | CODE | VEC_Reserved_7000
+GLOBAL | ${textprefix}00007100 | CODE | VEC_Reserved_7100
+GLOBAL | ${textprefix}00007200 | CODE | VEC_Reserved_7200
+GLOBAL | ${textprefix}00007300 | CODE | VEC_Reserved_7300
+GLOBAL | ${textprefix}00007400 | CODE | VEC_Reserved_7400
+GLOBAL | ${textprefix}00007500 | CODE | VEC_Reserved_7500
+GLOBAL | ${textprefix}00007600 | CODE | VEC_Reserved_7600
+GLOBAL | ${textprefix}00007700 | CODE | VEC_Reserved_7700
+GLOBAL | ${textprefix}00007800 | CODE | VEC_Reserved_7800
+GLOBAL | ${textprefix}00007900 | CODE | VEC_Reserved_7900
+GLOBAL | ${textprefix}00007a00 | CODE | VEC_Reserved_7a00
+GLOBAL | ${textprefix}00007b00 | CODE | VEC_Reserved_7b00
+GLOBAL | ${textprefix}00007c00 | CODE | VEC_Reserved_7c00
+GLOBAL | ${textprefix}00007d00 | CODE | VEC_Reserved_7d00
+GLOBAL | ${textprefix}00007e00 | CODE | VEC_Reserved_7e00
+GLOBAL | ${textprefix}00007f00 | CODE | VEC_Reserved_7f00
+END
+
+
+awk '
+/ _start$/ {start=1}
+/ start_ap$/ {start=1}
+/__start_gate_section/ {start=1}
+/^('${dataprefix}\|${textprefix}')/ {
+ if ($4 == ".kdb")
+ next
+ if (start && substr($NF,1,1) != "0") {
+ type = substr($0,26,5)
+ if (type == ".text")
+ printf "GLOBAL | %s | CODE | %s\n", $1, $NF
+ else {
+ n = 0
+ s = $(NF-1)
+ while (length(s) > 0) {
+ n = n*16 + (index("0123456789abcdef", substr(s,1,1)) - 1)
+ s = substr(s,2)
+ }
+ printf "GLOBAL | %s | DATA | %s | %d\n", $1, $NF, n
+ }
+ }
+ if($NF == "_end")
+ exit
+
+}
+' $TMPSYM ) | egrep -v " __device| __vendor" | awk -v sn1="$SN1" '
+/GLOBAL/ {
+ print $0
+ if (sn1 != 0) {
+ # 32 bits of sn1 physical addrs,
+ print substr($0,1,9) "04" substr($0,20,16) "Phy_" substr($0,36)
+ } else {
+ # 38 bits of sn2 physical addrs, need addr space bits
+ print substr($0,1,9) "3004" substr($0,20,16) "Phy_" substr($0,36)
+ }
+
+} ' | sort -k3 >>$TEXTSYM
+
+N=`wc -l $TEXTSYM|awk '{print $1}'`
+echo "Generated TEXTSYM file" >&2
+echo " $LINUX --> $TEXTSYM" >&2
+echo " Found $N symbols" >&2
--- /dev/null
+#!/bin/sh
+
+# Script for running PROMs and LINUX kernels on medusa.
+# Type "sim -H" for instructions.
+
+MEDUSA=${MEDUSA:-/home/rickc/official_medusa/medusa}
+
+# ------------------ err -----------------------
+err() {
+ echo "ERROR - $1"
+ exit 1
+}
+
+# ---------------- help ----------------------
+help() {
+cat <<END
+Script for running a PROM or LINUX kernel under medusa.
+This script creates a control file, creates links to the appropriate
+linux/prom files, and/or calls medusa to make simulation runs.
+
+Usage:
+ Initial setup:
+ sim [-c <config_file>] <-p> | <-k> [<work_dir>]
+ -p Create PROM control file & links
+ -k Create LINUX control file & links
+ -c<cf> Control file name [Default: cf]
+ <work_dir> Path to directory that contains the linux or PROM files.
+ The directory can be any of the following:
+ (linux simulations)
+ worktree
+ worktree/linux
+ any directory with vmlinux, vmlinux.sym & fprom files
+ (prom simulations)
+ worktree
+ worktree/stand/arcs/IP37prom/dev
+ any directory with fw.bin & fw.sim files
+
+ Simulations:
+ sim [-X <n>] [-o <output>] [-M] [<config_file>]
+ -c<cf> Control file name [Default: cf]
+ -M Pipe output thru fmtmedusa
+ -o Output filename (copy of all commands/output) [Default: simout]
+ -X Specifies number of instructions to execute [Default: 0]
+ (Used only in auto test mode - not described here)
+
+Examples:
+ sim -p <promtree> # create control file (cf) & links for prom simulations
+ sim -k <linuxtree> # create control file (cf) & links for linux simulations
+ sim -p -c cfprom # create a prom control file (cfprom) only. No links are made.
+
+ sim # run medusa using previously created links &
+ # control file (cf).
+END
+exit 1
+}
+
+# ----------------------- create control file header --------------------
+create_cf_header() {
+cat <<END >>$CF
+#
+# Template for a control file for running linux kernels under medusa.
+# You probably want to make mods here but this is a good starting point.
+#
+
+# Preferences
+setenv cpu_stepping A
+setenv exceptionPrint off
+setenv interrupt_messages off
+setenv lastPCsize 100000
+setenv low_power_mode on
+setenv partialIntelChipSet on
+setenv printIntelMessages off
+setenv prom_write_action halt
+setenv prom_write_messages on
+setenv step_quantum 100
+setenv swizzling on
+setenv tsconsole on
+setenv uart_echo on
+symbols on
+
+# IDE disk params
+setenv diskCylinders 611
+setenv bootDrive C
+setenv diskHeads 16
+setenv diskPath idedisk
+setenv diskPresent 1
+setenv diskSpt 63
+
+# Hardware config
+setenv coherency_type nasid
+setenv cpu_cache_type default
+setenv synergy_cache_type syn_cac_64m_8w
+setenv l4_uc_snoop off
+
+# Numalink config
+setenv route_enable on
+setenv network_type router # Select [xbar|router]
+setenv network_warning 0xff
+
+END
+}
+
+
+# ------------------ create control file entries for linux simulations -------------
+create_cf_linux() {
+cat <<END >>$CF
+# Kernel specific options
+setenv calias_size 0
+setenv mca_on_memory_failure off
+setenv LOADPC 0x00100000 # FPROM load address/entry point (8 digits!)
+setenv symbol_table vmlinux.sym
+load fprom
+load vmlinux
+
+# Useful breakpoints to always have set. Add more if desired.
+break 0xe000000000505e00 all # dispatch_to_fault_handler
+break panic all # stop on panic
+break die_if_kernel all # may as well stop
+
+END
+}
+
+# ------------------ create control file entries for prom simulations ---------------
+create_cf_prom() {
+ SYM2=""
+ ADDR="0x80000000ff800000"
+ [ "$EMBEDDED_LINUX" != "0" ] || SYM2="setenv symbol_table2 vmlinux.sym"
+ [ "$SIZE" = "8MB" ] || ADDR="0x80000000ffc00000"
+ cat <<END >>$CF
+# PROM specific options
+setenv mca_on_memory_failure on
+setenv LOADPC 0x80000000ffffffb0
+setenv promFile fw.bin
+setenv promAddr $ADDR
+setenv symbol_table fw.sym
+$SYM2
+
+# Useful breakpoints to always have set. Add more if desired.
+break ivt_gexx all
+break ivt_brk all
+break PROM_Panic_Spin all
+break PROM_Panic all
+break PROM_C_Panic all
+break fled_die all
+break ResetNow all
+break zzzbkpt all
+
+END
+}
+
+
+# ------------------ create control file entries for memory configuration -------------
+create_cf_memory() {
+cat <<END >>$CF
+# CPU/Memory map format:
+# setenv nodeN_memory_config 0xBSBSBSBS
+# B=banksize (0=unused, 1=64M, 2=128M, .., 5=1G, c=8M, d=16M, e=32M)
+# S=bank enable (0=both disable, 3=both enable, 2=bank1 enable, 1=bank0 enable)
+# rightmost digits are for bank 0, the lowest address.
+# setenv nodeN_nasid <nasid>
+# specifies the NASID for the node. This is used ONLY if booting the kernel.
+# On PROM configurations, set to 0 - PROM will change it later.
+# setenv nodeN_cpu_config <cpu_mask>
+# Set bit number N to 1 to enable cpu N. Ex., a value of 5 enables cpu 0 & 2.
+#
+# Repeat the above 3 commands for each node.
+#
+# For kernel, default to 32MB. Although this is not a valid hardware configuration,
+# it runs faster on medusa. For PROM, 64MB is smallest allowed value.
+
+setenv node0_cpu_config 0x1 # Enable only cpu 0 on the node
+END
+
+if [ $LINUX -eq 1 ] ; then
+cat <<END >>$CF
+setenv node0_nasid 0 # cnode 0 has NASID 0
+setenv node0_memory_config 0xe1 # 32MB
+END
+else
+cat <<END >>$CF
+setenv node0_memory_config 0x31 # 256MB
+END
+fi
+}
+
+# -------------------- set links to linux files -------------------------
+set_linux_links() {
+ if [ -d $D/linux/arch ] ; then
+ D=$D/linux
+ elif [ -d $D/arch -o -e $D/vmlinux.sym -o -e $D/vmlinux ] ; then
+ D=$D
+ else
+ err "can't determine directory for linux binaries"
+ fi
+ rm -rf vmlinux vmlinux.sym fprom
+ ln -s $D/vmlinux vmlinux
+ if [ -f $D/vmlinux.sym ] ; then
+ ln -s $D/vmlinux.sym vmlinux.sym
+ elif [ -f $D/System.map ] ; then
+ ln -s $D/System.map vmlinux.sym
+ fi
+ if [ -d $D/arch ] ; then
+ ln -s $D/arch/ia64/sn/fprom/fprom fprom
+ else
+ ln -s $D/fprom fprom
+ fi
+ echo " .. Created links to linux files"
+}
+
+# -------------------- set links to prom files -------------------------
+set_prom_links() {
+ if [ -d $D/stand ] ; then
+ D=$D/stand/arcs/IP37prom/dev
+ elif [ -d $D/sal ] ; then
+ D=$D
+ else
+ err "can't determine directory for PROM binaries"
+ fi
+ SETUP="/tmp/tmp.$$"
+ rm -r -f $SETUP
+ sed 's/export/setenv/' < $D/../../../../.setup | sed 's/=/ /' >$SETUP
+ egrep -q '^ *setenv *PROMSIZE *8MB|^ *export' $SETUP
+ if [ $? -eq 0 ] ; then
+ SIZE="8MB"
+ else
+ SIZE="4MB"
+ fi
+ grep -q '^ *setenv *LAUNCH_VMLINUX' $SETUP
+ EMBEDDED_LINUX=$?
+ PRODUCT=`grep '^ *setenv *PRODUCT' $SETUP | cut -d" " -f3`
+ rm -f fw.bin fw.map fw.sym vmlinux vmlinux.sym fprom $SETUP
+ SDIR="${PRODUCT}${SIZE}.O"
+ BIN="${PRODUCT}ip37prom${SIZE}"
+ ln -s $D/$SDIR/$BIN.bin fw.bin
+ ln -s $D/$SDIR/$BIN.map fw.map
+ ln -s $D/$SDIR/$BIN.sym fw.sym
+ echo " .. Created links to $SIZE prom files"
+ if [ $EMBEDDED_LINUX -eq 0 ] ; then
+ ln -s $D/linux/vmlinux vmlinux
+ ln -s $D/linux/vmlinux.sym vmlinux.sym
+ if [ -d linux/arch ] ; then
+ ln -s $D/linux/arch/ia64/sn/fprom/fprom fprom
+ else
+ ln -s $D/linux/fprom fprom
+ fi
+ echo " .. Created links to embedded linux files in prom tree"
+ fi
+}
+
+# --------------- start of shell script --------------------------------
+OUT="simout"
+FMTMED=0
+STEPCNT=0
+PROM=0
+LINUX=0
+NCF="cf"
+while getopts "HMX:c:o:pk" c ; do
+ case ${c} in
+ H) help;;
+ M) FMTMED=1;;
+ X) STEPCNT=${OPTARG};;
+ c) NCF=${OPTARG};;
+ k) PROM=0;LINUX=1;;
+ p) PROM=1;LINUX=0;;
+ o) OUT=${OPTARG};;
+ \?) exit 1;;
+ esac
+done
+shift `expr ${OPTIND} - 1`
+
+# Check if command is for creating control file and/or links to images.
+if [ $PROM -eq 1 -o $LINUX -eq 1 ] ; then
+ CF=$NCF
+ [ ! -f $CF ] || err "won't overwrite an existing control file ($CF)"
+ if [ $# -gt 0 ] ; then
+ D=$1
+ [ -d $D ] || err "cannot find directory $D"
+ [ $PROM -eq 0 ] || set_prom_links
+ [ $LINUX -eq 0 ] || set_linux_links
+ fi
+ create_cf_header
+ [ $PROM -eq 0 ] || create_cf_prom
+ [ $LINUX -eq 0 ] || create_cf_linux
+ [ ! -f ../idedisk ] || ln -s ../idedisk .
+ create_cf_memory
+ echo " .. Basic control file created (in $CF). You might want to edit"
+ echo " this file (at least, look at it)."
+ exit 0
+fi
+
+# Verify that the control file exists
+CF=${1:-$NCF}
+[ -f $CF ] || err "No control file exists. For help, type: $0 -H"
+
+# Build the .cf files from the user control file. The .cf file is
+# identical except that the actual start & load addresses are inserted
+# into the file. In addition, the FPROM commands for configuring memory
+# and LIDs are generated.
+
+rm -f .cf .cf1 .cf2
+awk '
+function strtonum(n) {
+ if (substr(n,1,2) != "0x")
+ return int(n)
+ n = substr(n,3)
+ r=0
+ while (length(n) > 0) {
+ r = r*16+(index("0123456789abcdef", substr(n,1,1))-1)
+ n = substr(n,2)
+ }
+ return r
+ }
+/^#/ {next}
+/^$/ {next}
+/^setenv *LOADPC/ {loadpc = $3; next}
+/^setenv *node.._cpu_config/ {n=int(substr($2,5,2)); cpuconf[n] = strtonum($3); print; next}
+/^setenv *node.._memory_config/ {n=int(substr($2,5,2)); memconf[n] = strtonum($3); print; next}
+/^setenv *node.._nasid/ {n=int(substr($2,5,2)); nasid[n] = strtonum($3); print; next}
+/^setenv *node._cpu_config/ {n=int(substr($2,5,1)); cpuconf[n] = strtonum($3); print; next}
+/^setenv *node._memory_config/ {n=int(substr($2,5,1)); memconf[n] = strtonum($3); print; next}
+/^setenv *node._nasid/ {n=int(substr($2,5,1)); nasid[n] = strtonum($3); print; next}
+ {print}
+END {
+ # Generate the memmap info that starts at the beginning of
+ # the node the kernel was loaded on.
+ loadnasid = nasid[0]
+ cnodes = 0
+ for (i=0; i<128; i++) {
+ if (memconf[i] != "") {
+ printf "sm 0x%x%08x 0x%x%04x%04x\n",
+ 2*loadnasid, 8*cnodes+8, memconf[i], cpuconf[i], nasid[i]
+ cnodes++
+ cpus += substr("0112122312232334", cpuconf[i]+1,1)
+ }
+ }
+ printf "sm 0x%x00000000 0x%x%08x\n", 2*loadnasid, cnodes, cpus
+ printf "setenv number_of_nodes %d\n", cnodes
+
+ # Now set the starting PC for each cpu.
+ cnode = 0
+ lowcpu=-1
+ for (i=0; i<128; i++) {
+ if (memconf[i] != "") {
+ printf "setnode %d\n", cnode
+ conf = cpuconf[i]
+ for (j=0; j<4; j++) {
+ if (conf != int(conf/2)*2) {
+ printf "setcpu %d\n", j
+ if (length(loadpc) == 18)
+ printf "sr pc %s\n", loadpc
+ else
+ printf "sr pc 0x%x%s\n", 2*loadnasid, substr(loadpc,3)
+ if (lowcpu == -1)
+ lowcpu = j
+ }
+ conf = int(conf/2)
+ }
+ cnode++
+ }
+ }
+ printf "setnode 0\n"
+ printf "setcpu %d\n", lowcpu
+ }
+' <$CF >.cf
+
+# Now build the .cf1 & .cf2 control files.
+CF2_LINES="^sm |^break |^run |^si |^quit |^symbols "
+egrep "$CF2_LINES" .cf >.cf2
+egrep -v "$CF2_LINES" .cf >.cf1
+if [ $STEPCNT -ne 0 ] ; then
+ echo "s $STEPCNT" >>.cf2
+ echo "lastpc 1000" >>.cf2
+ echo "q" >>.cf2
+fi
+if [ -f vmlinux.sym ] ; then
+ awk '/ _start$/ {print "sr g 9 0x" $3}' < vmlinux.sym >> .cf2
+fi
+echo "script-on $OUT" >>.cf2
+
+# Now start medusa....
+if [ $FMTMED -ne 0 ] ; then
+ $MEDUSA -system mpsn1 -c .cf1 -i .cf2 | fmtmedusa
+elif [ $STEPCNT -eq 0 ] ; then
+ $MEDUSA -system mpsn1 -c .cf1 -i .cf2
+else
+ $MEDUSA -system mpsn1 -c .cf1 -i .cf2 2>&1
+fi
--- /dev/null
+# arch/ia64/sn/io/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000-2002 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn io routines.
+#
+
+obj-y += xswitch.o cdl.o snia_if.o \
+ io.o machvec/ drivers/ platform_init/ sn2/ hwgfs/
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/io.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/pci/pic.h>
+#include "asm/sn/ioerror_handling.h"
+#include <asm/sn/xtalk/xbow.h>
+
+/* these get called directly in cdl_add_connpt in fops bypass hack */
+extern int xbow_attach(vertex_hdl_t);
+extern int pic_attach(vertex_hdl_t);
+
+/*
+ * cdl: Connection and Driver List
+ *
+ * We are not porting this to Linux. Devices are registered via
+ * the normal Linux PCI layer. This is a very simplified version
+ * of cdl that will allow us to register and call our very own
+ * IO Infrastructure Drivers e.g. pcibr.
+ */
+
+#define MAX_SGI_IO_INFRA_DRVR 5
+
+static struct cdl sgi_infrastructure_drivers[MAX_SGI_IO_INFRA_DRVR] =
+{
+ { PIC_WIDGET_PART_NUM_BUS0, PIC_WIDGET_MFGR_NUM, pic_attach /* &pcibr_fops */},
+ { PIC_WIDGET_PART_NUM_BUS1, PIC_WIDGET_MFGR_NUM, pic_attach /* &pcibr_fops */},
+ { XXBOW_WIDGET_PART_NUM, XXBOW_WIDGET_MFGR_NUM, xbow_attach /* &xbow_fops */},
+ { XBOW_WIDGET_PART_NUM, XBOW_WIDGET_MFGR_NUM, xbow_attach /* &xbow_fops */},
+ { PXBOW_WIDGET_PART_NUM, XXBOW_WIDGET_MFGR_NUM, xbow_attach /* &xbow_fops */},
+};
+
+/*
+ * cdl_add_connpt: We found a device and it's connect point. Call the
+ * attach routine of that driver.
+ *
+ * May need support for pciba registration here ...
+ *
+ * This routine used to create /hw/.id/pci/.../.. that links to
+ * /hw/module/006c06/Pbrick/xtalk/15/pci/<slotnum> .. do we still need
+ * it? The specified driver attach routine does not reference these
+ * vertices.
+ */
+int
+cdl_add_connpt(int part_num, int mfg_num,
+ vertex_hdl_t connpt, int drv_flags)
+{
+ int i;
+
+ /*
+ * Find the driver entry point and call the attach routine.
+ */
+ for (i = 0; i < MAX_SGI_IO_INFRA_DRVR; i++) {
+ if ( (part_num == sgi_infrastructure_drivers[i].part_num) &&
+ ( mfg_num == sgi_infrastructure_drivers[i].mfg_num) ) {
+ /*
+ * Call the device attach routines.
+ */
+ if (sgi_infrastructure_drivers[i].attach) {
+ return(sgi_infrastructure_drivers[i].attach(connpt));
+ }
+ }
+ }
+
+ /* printk("WARNING: cdl_add_connpt: Driver not found for part_num 0x%x mfg_num 0x%x\n", part_num, mfg_num); */
+
+ return (0);
+}
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 io routines.
+
+obj-y += ioconfig_bus.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * ioconfig_bus - SGI's Persistent PCI Bus Numbering.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/ctype.h>
+#include <linux/module.h>
+#include <linux/init.h>
+
+#include <linux/pci.h>
+
+#include <asm/uaccess.h>
+#include <asm/sn/sgi.h>
+#include <asm/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/ioconfig_bus.h>
+
+#define SGI_IOCONFIG_BUS "SGI-PERSISTENT PCI BUS NUMBERING"
+#define SGI_IOCONFIG_BUS_VERSION "1.0"
+
+/*
+ * Some Global definitions.
+ */
+static vertex_hdl_t ioconfig_bus_handle;
+static unsigned long ioconfig_bus_debug;
+static struct ioconfig_parm parm;
+
+#ifdef IOCONFIG_BUS_DEBUG
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static u64 ioconfig_activated;
+static char ioconfig_kernopts[128];
+
+/*
+ * For debugging purpose .. hardcode a table ..
+ */
+struct ascii_moduleid *ioconfig_bus_table;
+
+static int free_entry;
+static int new_entry;
+
+int next_basebus_number;
+
+void
+ioconfig_get_busnum(char *io_moduleid, int *bus_num)
+{
+ struct ascii_moduleid *temp;
+ int index;
+
+ DBG("ioconfig_get_busnum io_moduleid %s\n", io_moduleid);
+
+ *bus_num = -1;
+ temp = ioconfig_bus_table;
+ if (!ioconfig_bus_table)
+ return;
+ for (index = 0; index < free_entry; temp++, index++) {
+ if ( (io_moduleid[0] == temp->io_moduleid[0]) &&
+ (io_moduleid[1] == temp->io_moduleid[1]) &&
+ (io_moduleid[2] == temp->io_moduleid[2]) &&
+ (io_moduleid[4] == temp->io_moduleid[4]) &&
+ (io_moduleid[5] == temp->io_moduleid[5]) ) {
+ *bus_num = index * 0x10;
+ return;
+ }
+ }
+
+ /*
+ * New IO Brick encountered.
+ */
+ if (((int)io_moduleid[0]) == 0) {
+ DBG("ioconfig_get_busnum: Invalid Module Id given %s\n", io_moduleid);
+ return;
+ }
+
+ io_moduleid[3] = '#';
+ strcpy((char *)&(ioconfig_bus_table[free_entry].io_moduleid), io_moduleid);
+ *bus_num = free_entry * 0x10;
+ free_entry++;
+}
+
+static void
+dump_ioconfig_table(void)
+{
+
+ int index = 0;
+ struct ascii_moduleid *temp;
+
+ temp = ioconfig_bus_table;
+ if (!temp) {
+ DBG("ioconfig_bus_table is empty\n");
+ return;
+ }
+ while (index < free_entry) {
+ DBG("ASCII Module ID %s\n", temp->io_moduleid);
+ temp++;
+ index++;
+ }
+}
+
+/*
+ * nextline
+ * This routine returns the next line in the buffer.
+ */
+int nextline(char *buffer, char **next, char *line)
+{
+
+ char *temp;
+
+ if (buffer[0] == 0x0) {
+ return(0);
+ }
+
+ temp = buffer;
+ while (*temp != 0) {
+ *line = *temp;
+ if (*temp != '\n') {
+ temp++; line++;
+ } else
+ break;
+ }
+
+ if (*temp == 0)
+ *next = temp;
+ else
+ *next = ++temp;
+
+ return(1);
+}
+
+/*
+ * build_moduleid_table
+ * This routine parses the ioconfig contents read into
+ * memory by the ioconfig command in EFI and builds the
+ * persistent pci bus naming table.
+ */
+int
+build_moduleid_table(char *file_contents, struct ascii_moduleid *table)
+{
+ /*
+ * Read the whole file into memory.
+ */
+ int rc;
+ char *name;
+ char *temp;
+ char *next;
+ char *curr;
+ char *line;
+ struct ascii_moduleid *moduleid;
+
+ line = kmalloc(256, GFP_KERNEL);
+ name = kmalloc(125, GFP_KERNEL);
+ if (!line || !name) {
+ if (line)
+ kfree(line);
+ if (name)
+ kfree(name);
+ printk("build_moduleid_table(): Unable to allocate memory\n");
+ return -ENOMEM;
+ }
+
+ memset(line, 0,256);
+ memset(name, 0, 125);
+ moduleid = table;
+ curr = file_contents;
+ while (nextline(curr, &next, line)){
+
+ DBG("curr 0x%lx next 0x%lx\n", curr, next);
+
+ temp = line;
+ /*
+ * Skip leading whitespace and blank lines ..
+ */
+ while (isspace(*temp))
+ if (*temp != '\n')
+ temp++;
+ else
+ break;
+
+ if (*temp == '\n') {
+ curr = next;
+ memset(line, 0, 256);
+ continue;
+ }
+
+ /*
+ * Skip comment lines
+ */
+ if (*temp == '#') {
+ curr = next;
+ memset(line, 0, 256);
+ continue;
+ }
+
+ /*
+ * Get the next free entry in the table.
+ */
+ rc = sscanf(temp, "%s", name);
+ strcpy(&moduleid->io_moduleid[0], name);
+ DBG("Found %s\n", name);
+ moduleid++;
+ free_entry++;
+ curr = next;
+ memset(line, 0, 256);
+ }
+
+ new_entry = free_entry;
+ kfree(line);
+ kfree(name);
+
+ return 0;
+}
+
+int
+ioconfig_bus_init(void)
+{
+
+ DBG("ioconfig_bus_init called.\n");
+
+ ioconfig_bus_table = kmalloc( 512, GFP_KERNEL );
+ if (!ioconfig_bus_table) {
+ printk("ioconfig_bus_init : cannot allocate memory\n");
+ return -1;
+ }
+
+ memset(ioconfig_bus_table, 0, 512);
+
+ /*
+ * If ioconfig options are given on the bootline .. take it.
+ */
+ if (*ioconfig_kernopts != '\0') {
+ /*
+ * ioconfig="..." kernel options given.
+ */
+ DBG("ioconfig_bus_init: Kernel Options given.\n");
+ if ( build_moduleid_table((char *)ioconfig_kernopts, ioconfig_bus_table) < 0 )
+ return -1;
+ (void) dump_ioconfig_table();
+ }
+ return 0;
+}
+
+void
+ioconfig_bus_new_entries(void)
+{
+ int index;
+ struct ascii_moduleid *temp;
+
+ if ((ioconfig_activated) && (free_entry > new_entry)) {
+ printk("### Please add the following new IO Brick Module IDs\n");
+ printk("### to your Persistent Bus Numbering Config File\n");
+ } else
+ return;
+
+ index = new_entry;
+ if (!ioconfig_bus_table) {
+ printk("ioconfig_bus_table table is empty\n");
+ return;
+ }
+ temp = &ioconfig_bus_table[index];
+ while (index < free_entry) {
+ printk("%s\n", (char *)temp);
+ temp++;
+ index++;
+ }
+ printk("### End\n");
+
+}
+static int ioconfig_bus_ioctl(struct inode * inode, struct file * file,
+ unsigned int cmd, unsigned long arg)
+{
+ /*
+ * Copy in the parameters.
+ */
+ if (copy_from_user(&parm, (char *)arg, sizeof(struct ioconfig_parm)))
+ return -EFAULT;
+ parm.number = free_entry - new_entry;
+ parm.ioconfig_activated = ioconfig_activated;
+ if (copy_to_user((char *)arg, &parm, sizeof(struct ioconfig_parm)))
+ return -EFAULT;
+
+ if (!ioconfig_bus_table)
+ return -EFAULT;
+
+ if (copy_to_user((char *)parm.buffer, &ioconfig_bus_table[new_entry], sizeof(struct ascii_moduleid) * (free_entry - new_entry)))
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * ioconfig_bus_open - Opens the special device node "/dev/hw/.ioconfig_bus".
+ */
+static int ioconfig_bus_open(struct inode * inode, struct file * filp)
+{
+ if (ioconfig_bus_debug) {
+ DBG("ioconfig_bus_open called.\n");
+ }
+
+ return(0);
+
+}
+
+/*
+ * ioconfig_bus_close - Closes the special device node "/dev/hw/.ioconfig_bus".
+ */
+static int ioconfig_bus_close(struct inode * inode, struct file * filp)
+{
+
+ if (ioconfig_bus_debug) {
+ DBG("ioconfig_bus_close called.\n");
+ }
+
+ return(0);
+}
+
+struct file_operations ioconfig_bus_fops = {
+ .ioctl = ioconfig_bus_ioctl,
+ .open = ioconfig_bus_open, /* open */
+ .release=ioconfig_bus_close /* release */
+};
+
+
+/*
+ * init_ioconfig_bus() - Boot time initialization. Ensure that it is called
+ * after hwgfs has been initialized.
+ *
+ */
+int init_ioconfig_bus(void)
+{
+ ioconfig_bus_handle = hwgraph_register(hwgraph_root, ".ioconfig_bus",
+ 0, 0,
+ 0, 0,
+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0,
+ &ioconfig_bus_fops, NULL);
+
+ if (ioconfig_bus_handle == NULL) {
+ panic("Unable to create SGI PERSISTENT BUS NUMBERING Driver.\n");
+ }
+
+ return 0;
+}
+
+static int __init ioconfig_bus_setup (char *str)
+{
+
+ char *temp;
+
+ DBG("ioconfig_bus_setup: Kernel Options %s\n", str);
+
+ temp = (char *)ioconfig_kernopts;
+ memset(temp, 0, 128);
+ while ( (*str != '\0') && !isspace (*str) ) {
+ if (*str == ',') {
+ *temp = '\n';
+ temp++;
+ str++;
+ continue;
+ }
+ *temp = *str;
+ temp++;
+ str++;
+ }
+
+ return(0);
+
+}
+__setup("ioconfig=", ioconfig_bus_setup);
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 io routines.
+
+obj-y += hcl.o labelcl.o hcl_util.o ramfs.o interface.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * hcl - SGI's Hardware Graph compatibility layer.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/ctype.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/string.h>
+#include <linux/sched.h> /* needed for smp_lock.h :( */
+#include <linux/smp_lock.h>
+#include <asm/sn/sgi.h>
+#include <asm/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hwgfs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/simulator.h>
+
+#define vertex_hdl_t hwgfs_handle_t
+
+vertex_hdl_t hwgraph_root;
+vertex_hdl_t linux_busnum;
+extern int pci_bus_cvlink_init(void);
+unsigned long hwgraph_debug_mask;
+
+/*
+ * init_hcl() - Boot time initialization.
+ *
+ */
+int __init init_hcl(void)
+{
+ extern void string_table_init(struct string_table *);
+ extern struct string_table label_string_table;
+ extern int init_ioconfig_bus(void);
+ extern int init_hwgfs_fs(void);
+ int rv = 0;
+
+ init_hwgfs_fs();
+
+ /*
+ * Create the hwgraph_root.
+ */
+ rv = hwgraph_path_add(NULL, EDGE_LBL_HW, &hwgraph_root);
+ if (rv) {
+ printk("init_hcl: Failed to create hwgraph_root.\n");
+ return -1;
+ }
+
+ /*
+ * Initialize the HCL string table.
+ */
+
+ string_table_init(&label_string_table);
+
+ /*
+ * Create the directory that links Linux bus numbers to our Xwidget.
+ */
+ rv = hwgraph_path_add(hwgraph_root, EDGE_LBL_LINUX_BUS, &linux_busnum);
+ if (linux_busnum == NULL) {
+ printk("HCL: Unable to create %s\n", EDGE_LBL_LINUX_BUS);
+ return -1;
+ }
+
+ if (pci_bus_cvlink_init() < 0 ) {
+ printk("init_hcl: Failed to create pcibus cvlink.\n");
+ return -1;
+ }
+
+ /*
+ * Persistent Naming.
+ */
+ init_ioconfig_bus();
+
+ return 0;
+}
+
+/*
+ * Get device specific "fast information".
+ *
+ */
+arbitrary_info_t
+hwgraph_fastinfo_get(vertex_hdl_t de)
+{
+ arbitrary_info_t fastinfo;
+ int rv;
+
+ if (!de) {
+ printk(KERN_WARNING "HCL: hwgraph_fastinfo_get handle given is NULL.\n");
+ dump_stack();
+ return(-1);
+ }
+
+ rv = labelcl_info_get_IDX(de, HWGRAPH_FASTINFO, &fastinfo);
+ if (rv == 0)
+ return(fastinfo);
+
+ return(0);
+}
+
+
+/*
+ * hwgraph_connectpt_get: Returns the entry's connect point.
+ *
+ */
+vertex_hdl_t
+hwgraph_connectpt_get(vertex_hdl_t de)
+{
+ int rv;
+ arbitrary_info_t info;
+ vertex_hdl_t connect;
+
+ rv = labelcl_info_get_IDX(de, HWGRAPH_CONNECTPT, &info);
+ if (rv != 0) {
+ return(NULL);
+ }
+
+ connect = (vertex_hdl_t)info;
+ return(connect);
+
+}
+
+
+/*
+ * hwgraph_mk_dir - Creates a directory entry.
+ */
+vertex_hdl_t
+hwgraph_mk_dir(vertex_hdl_t de, const char *name,
+ unsigned int namelen, void *info)
+{
+
+ int rv;
+ labelcl_info_t *labelcl_info = NULL;
+ vertex_hdl_t new_handle = NULL;
+ vertex_hdl_t parent = NULL;
+
+ /*
+ * Create the device info structure for hwgraph compatibility support.
+ */
+ labelcl_info = labelcl_info_create();
+ if (!labelcl_info)
+ return(NULL);
+
+ /*
+ * Create an entry.
+ */
+ new_handle = hwgfs_mk_dir(de, name, (void *)labelcl_info);
+ if (!new_handle) {
+ labelcl_info_destroy(labelcl_info);
+ return(NULL);
+ }
+
+ /*
+ * Get the parent handle.
+ */
+ parent = hwgfs_get_parent (new_handle);
+
+ /*
+ * To provide the same semantics as the hwgraph, set the connect point.
+ */
+ rv = hwgraph_connectpt_set(new_handle, parent);
+ if (rv) {
+ /*
+ * We need to clean up!
+ */
+ }
+
+ /*
+ * If the caller provides a private data pointer, save it in the
+ * labelcl info structure(fastinfo). This can be retrieved via
+ * hwgraph_fastinfo_get()
+ */
+ if (info)
+ hwgraph_fastinfo_set(new_handle, (arbitrary_info_t)info);
+
+ return(new_handle);
+
+}
+
+/*
+ * hwgraph_path_add - Create a directory node with the given path starting
+ * from the given fromv.
+ */
+int
+hwgraph_path_add(vertex_hdl_t fromv,
+ char *path,
+ vertex_hdl_t *new_de)
+{
+
+ unsigned int namelen = strlen(path);
+ int rv;
+
+ /*
+ * We need to handle the case when fromv is NULL ..
+ * in this case we need to create the path from the
+ * hwgraph root!
+ */
+ if (fromv == NULL)
+ fromv = hwgraph_root;
+
+ /*
+ * check the entry doesn't already exist, if it does
+ * then we simply want new_de to point to it (otherwise
+ * we'll overwrite the existing labelcl_info struct)
+ */
+ rv = hwgraph_edge_get(fromv, path, new_de);
+ if (rv) { /* couldn't find entry so we create it */
+ *new_de = hwgraph_mk_dir(fromv, path, namelen, NULL);
+ if (*new_de == NULL)
+ return(-1);
+ else
+ return(0);
+ }
+ else
+ return(0);
+
+}
+
+/*
+ * hwgraph_register - Creates a special device file.
+ *
+ */
+vertex_hdl_t
+hwgraph_register(vertex_hdl_t de, const char *name,
+ unsigned int namelen, unsigned int flags,
+ unsigned int major, unsigned int minor,
+ umode_t mode, uid_t uid, gid_t gid,
+ struct file_operations *fops,
+ void *info)
+{
+
+ vertex_hdl_t new_handle = NULL;
+
+ /*
+ * Create an entry.
+ */
+ new_handle = hwgfs_register(de, name, flags, major,
+ minor, mode, fops, info);
+
+ return(new_handle);
+
+}
+
+
+/*
+ * hwgraph_mk_symlink - Create a symbolic link.
+ */
+int
+hwgraph_mk_symlink(vertex_hdl_t de, const char *name, unsigned int namelen,
+ unsigned int flags, const char *link, unsigned int linklen,
+ vertex_hdl_t *handle, void *info)
+{
+
+ void *labelcl_info = NULL;
+ int status = 0;
+ vertex_hdl_t new_handle = NULL;
+
+ /*
+ * Create the labelcl info structure for hwgraph compatibility support.
+ */
+ labelcl_info = labelcl_info_create();
+ if (!labelcl_info)
+ return(-1);
+
+ /*
+ * Create a symbolic link.
+ */
+ status = hwgfs_mk_symlink(de, name, flags, link,
+ &new_handle, labelcl_info);
+ if ( (!new_handle) || (!status) ){
+ labelcl_info_destroy((labelcl_info_t *)labelcl_info);
+ return(-1);
+ }
+
+ /*
+ * If the caller provides a private data pointer, save it in the
+ * labelcl info structure(fastinfo). This can be retrieved via
+ * hwgraph_fastinfo_get()
+ */
+ if (info)
+ hwgraph_fastinfo_set(new_handle, (arbitrary_info_t)info);
+
+ *handle = new_handle;
+ return(0);
+
+}
+
+/*
+ * hwgraph_vertex_destroy - Destroy the entry
+ */
+int
+hwgraph_vertex_destroy(vertex_hdl_t de)
+{
+
+ void *labelcl_info = NULL;
+
+ labelcl_info = hwgfs_get_info(de);
+ hwgfs_unregister(de);
+
+ if (labelcl_info)
+ labelcl_info_destroy((labelcl_info_t *)labelcl_info);
+
+ return(0);
+}
+
+int
+hwgraph_edge_add(vertex_hdl_t from, vertex_hdl_t to, char *name)
+{
+
+ char *path, *link;
+ char *s1;
+ char *index;
+ vertex_hdl_t handle = NULL;
+ int rv;
+ int i, count;
+
+ path = kmalloc(1024, GFP_KERNEL);
+ if (!path)
+ return -ENOMEM;
+ memset((char *)path, 0x0, 1024);
+ link = kmalloc(1024, GFP_KERNEL);
+ if (!link) {
+ kfree(path);
+ return -ENOMEM;
+ }
+ memset((char *)link, 0x0, 1024);
+
+ i = hwgfs_generate_path (from, path, 1024);
+ s1 = (char *)path;
+ count = 0;
+ while (1) {
+ index = strstr (s1, "/");
+ if (index) {
+ count++;
+ s1 = ++index;
+ } else {
+ count++;
+ break;
+ }
+ }
+
+ for (i = 0; i < count; i++) {
+ strcat((char *)link,"../");
+ }
+
+ memset(path, 0x0, 1024);
+ i = hwgfs_generate_path (to, path, 1024);
+ strcat((char *)link, (char *)path);
+
+ /*
+ * Just create a symlink to the vertex.
+ * In this case the vertex was previously created with a REAL pathname.
+ */
+ rv = hwgfs_mk_symlink (from, (const char *)name,
+ 0, link,
+ &handle, NULL);
+ kfree(path);
+ kfree(link);
+
+ return(rv);
+
+
+}
+
+/* ARGSUSED */
+int
+hwgraph_edge_get(vertex_hdl_t from, char *name, vertex_hdl_t *toptr)
+{
+
+ vertex_hdl_t target_handle = NULL;
+
+ if (name == NULL)
+ return(-1);
+
+ if (toptr == NULL)
+ return(-1);
+
+ /*
+ * If the name is "." just return the current entry handle.
+ */
+ if (!strcmp(name, HWGRAPH_EDGELBL_DOT)) {
+ if (toptr) {
+ *toptr = from;
+ }
+ } else if (!strcmp(name, HWGRAPH_EDGELBL_DOTDOT)) {
+ /*
+ * Hmmm .. should we return the connect point or parent ..
+ * see in hwgraph, the concept of parent is the connectpt!
+ *
+ * Maybe we should see whether the connectpt is set .. if
+ * not just return the parent!
+ */
+ target_handle = hwgraph_connectpt_get(from);
+ if (target_handle) {
+ /*
+ * Just return the connect point.
+ */
+ *toptr = target_handle;
+ return(0);
+ }
+ target_handle = hwgfs_get_parent(from);
+ *toptr = target_handle;
+
+ } else {
+ target_handle = hwgfs_find_handle (from, name, 0, 0,
+ 0, 1); /* Yes traverse symbolic links */
+ }
+
+ if (target_handle == NULL)
+ return(-1);
+ else
+ *toptr = target_handle;
+
+ return(0);
+}
+
+/*
+ * hwgraph_info_add_LBL - Adds a new label for the device. Mark the info_desc
+ * of the label as INFO_DESC_PRIVATE and store the info in the label.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_add_LBL( vertex_hdl_t de,
+ char *name,
+ arbitrary_info_t info)
+{
+ return(labelcl_info_add_LBL(de, name, INFO_DESC_PRIVATE, info));
+}
+
+/*
+ * hwgraph_info_remove_LBL - Remove the label entry for the device.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_remove_LBL( vertex_hdl_t de,
+ char *name,
+ arbitrary_info_t *old_info)
+{
+ return(labelcl_info_remove_LBL(de, name, NULL, old_info));
+}
+
+/*
+ * hwgraph_info_replace_LBL - replaces an existing label with
+ * a new label info value.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_replace_LBL( vertex_hdl_t de,
+ char *name,
+ arbitrary_info_t info,
+ arbitrary_info_t *old_info)
+{
+ return(labelcl_info_replace_LBL(de, name,
+ INFO_DESC_PRIVATE, info,
+ NULL, old_info));
+}
+/*
+ * hwgraph_info_get_LBL - Get and return the info value in the label of the
+ * device.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_get_LBL(vertex_hdl_t de,
+ char *name,
+ arbitrary_info_t *infop)
+{
+ return(labelcl_info_get_LBL(de, name, NULL, infop));
+}
+
+/*
+ * hwgraph_info_get_exported_LBL - Retrieve the info_desc and info pointer
+ * of the given label for the device. The weird thing is that the label
+ * that matches the name is returned irrespective of the info_desc value!
+ * Do not understand why the word "exported" is used!
+ */
+/* ARGSUSED */
+int
+hwgraph_info_get_exported_LBL(vertex_hdl_t de,
+ char *name,
+ int *export_info,
+ arbitrary_info_t *infop)
+{
+ int rc;
+ arb_info_desc_t info_desc;
+
+ rc = labelcl_info_get_LBL(de, name, &info_desc, infop);
+ if (rc == 0)
+ *export_info = (int)info_desc;
+
+ return(rc);
+}
+
+/*
+ * hwgraph_info_get_next_LBL - Returns the next label info given the
+ * current label entry in place.
+ *
+ * Once again this has no locking or reference count for protection.
+ *
+ */
+/* ARGSUSED */
+int
+hwgraph_info_get_next_LBL(vertex_hdl_t de,
+ char *buf,
+ arbitrary_info_t *infop,
+ labelcl_info_place_t *place)
+{
+ return(labelcl_info_get_next_LBL(de, buf, NULL, infop, place));
+}
+
+/*
+ * hwgraph_info_export_LBL - Retrieve the specified label entry and modify
+ * the info_desc field with the given value in nbytes.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_export_LBL(vertex_hdl_t de, char *name, int nbytes)
+{
+ arbitrary_info_t info;
+ int rc;
+
+ if (nbytes == 0)
+ nbytes = INFO_DESC_EXPORT;
+
+ if (nbytes < 0)
+ return(-1);
+
+ rc = labelcl_info_get_LBL(de, name, NULL, &info);
+ if (rc != 0)
+ return(rc);
+
+ rc = labelcl_info_replace_LBL(de, name,
+ nbytes, info, NULL, NULL);
+
+ return(rc);
+}
+
+/*
+ * hwgraph_info_unexport_LBL - Retrieve the given label entry and change the
+ * label info_desc field to INFO_DESC_PRIVATE.
+ */
+/* ARGSUSED */
+int
+hwgraph_info_unexport_LBL(vertex_hdl_t de, char *name)
+{
+ arbitrary_info_t info;
+ int rc;
+
+ rc = labelcl_info_get_LBL(de, name, NULL, &info);
+ if (rc != 0)
+ return(rc);
+
+ rc = labelcl_info_replace_LBL(de, name,
+ INFO_DESC_PRIVATE, info, NULL, NULL);
+
+ return(rc);
+}
+
+/*
+ * hwgraph_traverse - Find and return the handle starting from de.
+ *
+ */
+graph_error_t
+hwgraph_traverse(vertex_hdl_t de, char *path, vertex_hdl_t *found)
+{
+ /*
+ * get the directory entry (path should end in a directory)
+ */
+
+ *found = hwgfs_find_handle(de, /* start dir */
+ path, /* path */
+ 0, /* major */
+ 0, /* minor */
+ 0, /* char | block */
+ 1); /* traverse symlinks */
+ if (*found == NULL)
+ return(GRAPH_NOT_FOUND);
+ else
+ return(GRAPH_SUCCESS);
+}
+
+/*
+ * Find the canonical name for a given vertex by walking back through
+ * connectpt's until we hit the hwgraph root vertex (or until we run
+ * out of buffer space or until something goes wrong).
+ *
+ * COMPATIBILITY FUNCTIONALITY
+ * Walks back through 'parents', not necessarily the same as connectpts.
+ *
+ * Need to resolve the fact that this does not return the path from
+ * "/" but rather just stops right before /dev ..
+ */
+int
+hwgraph_vertex_name_get(vertex_hdl_t vhdl, char *buf, unsigned int buflen)
+{
+ char *locbuf;
+ int pos;
+
+ if (buflen < 1)
+ return(-1); /* XXX should be GRAPH_BAD_PARAM ? */
+
+ locbuf = kmalloc(buflen, GFP_KERNEL);
+ if (!locbuf)
+ return(-1);
+
+ pos = hwgfs_generate_path(vhdl, locbuf, buflen);
+ if (pos < 0) {
+ kfree(locbuf);
+ return pos;
+ }
+
+ strcpy(buf, &locbuf[pos]);
+ kfree(locbuf);
+ return 0;
+}
+
+/*
+** vertex_to_name converts a vertex into a canonical name by walking
+** back through connect points until we hit the hwgraph root (or until
+** we run out of buffer space).
+**
+** Usually returns a pointer to the original buffer, filled in as
+** appropriate. If the buffer is too small to hold the entire name,
+** or if anything goes wrong while determining the name, vertex_to_name
+** returns "UnknownDevice".
+*/
+
+#define DEVNAME_UNKNOWN "UnknownDevice"
+
+char *
+vertex_to_name(vertex_hdl_t vhdl, char *buf, unsigned int buflen)
+{
+ if (hwgraph_vertex_name_get(vhdl, buf, buflen) == GRAPH_SUCCESS)
+ return(buf);
+ else
+ return(DEVNAME_UNKNOWN);
+}
+
+
+void
+hwgraph_debug(char *file, const char * function, int line, vertex_hdl_t vhdl1, vertex_hdl_t vhdl2, char *format, ...)
+{
+
+ int pos;
+ char *hwpath;
+ va_list ap;
+
+ if ( !hwgraph_debug_mask )
+ return;
+
+ hwpath = kmalloc(MAXDEVNAME, GFP_KERNEL);
+ if (!hwpath) {
+ printk("HWGRAPH_DEBUG kmalloc fails at %d ", __LINE__);
+ return;
+ }
+
+ printk("HWGRAPH_DEBUG %s %s %d : ", file, function, line);
+
+ if (vhdl1){
+ memset(hwpath, 0, MAXDEVNAME);
+ pos = hwgfs_generate_path(vhdl1, hwpath, MAXDEVNAME);
+ printk("vhdl1 = %s : ", &hwpath[pos]);
+ }
+
+ if (vhdl2){
+ memset(hwpath, 0, MAXDEVNAME);
+ pos = hwgfs_generate_path(vhdl2, hwpath, MAXDEVNAME);
+ printk("vhdl2 = %s :", &hwpath[pos]);
+ }
+
+ memset(hwpath, 0, MAXDEVNAME);
+ va_start(ap, format);
+ vsnprintf(hwpath, MAXDEVNAME, format, ap);
+ va_end(ap);
+ hwpath[MAXDEVNAME -1] = (char)0; /* Just in case. */
+ printk(" %s", hwpath);
+ kfree(hwpath);
+}
+
+EXPORT_SYMBOL(hwgraph_mk_dir);
+EXPORT_SYMBOL(hwgraph_path_add);
+EXPORT_SYMBOL(hwgraph_register);
+EXPORT_SYMBOL(hwgraph_vertex_destroy);
+EXPORT_SYMBOL(hwgraph_fastinfo_get);
+EXPORT_SYMBOL(hwgraph_connectpt_get);
+EXPORT_SYMBOL(hwgraph_info_add_LBL);
+EXPORT_SYMBOL(hwgraph_info_remove_LBL);
+EXPORT_SYMBOL(hwgraph_info_replace_LBL);
+EXPORT_SYMBOL(hwgraph_info_get_LBL);
+EXPORT_SYMBOL(hwgraph_info_get_exported_LBL);
+EXPORT_SYMBOL(hwgraph_info_get_next_LBL);
+EXPORT_SYMBOL(hwgraph_info_export_LBL);
+EXPORT_SYMBOL(hwgraph_info_unexport_LBL);
+EXPORT_SYMBOL(hwgraph_traverse);
+EXPORT_SYMBOL(hwgraph_vertex_name_get);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/sn/sgi.h>
+#include <asm/io.h>
+#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hwgfs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/nodepda.h>
+
+static vertex_hdl_t hwgraph_all_cnodes = GRAPH_VERTEX_NONE;
+extern vertex_hdl_t hwgraph_root;
+static vertex_hdl_t hwgraph_all_cpuids = GRAPH_VERTEX_NONE;
+extern int maxcpus;
+
+void
+mark_cpuvertex_as_cpu(vertex_hdl_t vhdl, cpuid_t cpuid)
+{
+ char cpuid_buffer[10];
+
+ if (cpuid == CPU_NONE)
+ return;
+
+ if (hwgraph_all_cpuids == GRAPH_VERTEX_NONE) {
+ (void)hwgraph_path_add( hwgraph_root,
+ EDGE_LBL_CPUNUM,
+ &hwgraph_all_cpuids);
+ }
+
+ sprintf(cpuid_buffer, "%ld", cpuid);
+ (void)hwgraph_edge_add( hwgraph_all_cpuids, vhdl, cpuid_buffer);
+}
+
+/*
+** Return the "master" for a given vertex. A master vertex is a
+** controller or adapter or other piece of hardware that the given
+** vertex passes through on the way to the rest of the system.
+*/
+vertex_hdl_t
+device_master_get(vertex_hdl_t vhdl)
+{
+ graph_error_t rc;
+ vertex_hdl_t master;
+
+ rc = hwgraph_edge_get(vhdl, EDGE_LBL_MASTER, &master);
+ if (rc == GRAPH_SUCCESS)
+ return(master);
+ else
+ return(GRAPH_VERTEX_NONE);
+}
+
+/*
+** Set the master for a given vertex.
+** Returns 0 on success, non-0 indicates failure
+*/
+int
+device_master_set(vertex_hdl_t vhdl, vertex_hdl_t master)
+{
+ graph_error_t rc;
+
+ rc = hwgraph_edge_add(vhdl, master, EDGE_LBL_MASTER);
+ return(rc != GRAPH_SUCCESS);
+}
+
+
+/*
+** Return the compact node id of the node that ultimately "owns" the specified
+** vertex. In order to do this, we walk back through masters and connect points
+** until we reach a vertex that represents a node.
+*/
+cnodeid_t
+master_node_get(vertex_hdl_t vhdl)
+{
+ cnodeid_t cnodeid;
+ vertex_hdl_t master;
+
+ for (;;) {
+ cnodeid = nodevertex_to_cnodeid(vhdl);
+ if (cnodeid != CNODEID_NONE)
+ return(cnodeid);
+
+ master = device_master_get(vhdl);
+
+ /* Check for exceptional cases */
+ if (master == vhdl) {
+ /* Since we got a reference to the "master" thru
+ * device_master_get() we should decrement
+ * its reference count by 1
+ */
+ return(CNODEID_NONE);
+ }
+
+ if (master == GRAPH_VERTEX_NONE) {
+ master = hwgraph_connectpt_get(vhdl);
+ if ((master == GRAPH_VERTEX_NONE) ||
+ (master == vhdl)) {
+ return(CNODEID_NONE);
+ }
+ }
+
+ vhdl = master;
+ }
+}
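The ownership walk in master_node_get() above (follow the master edge, fall back to the connect point, stop at a node vertex or a self-loop) can be sketched as a standalone userspace program. The struct vertex type and its fields below are inventions for the example, not the kernel's vertex_hdl_t machinery:

```c
#include <assert.h>
#include <stddef.h>

/* Toy vertex: the real code walks opaque handles via
 * device_master_get() and hwgraph_connectpt_get(). */
struct vertex {
	int cnodeid;		  /* -1 when the vertex is not a node */
	struct vertex *master;	  /* NULL when there is no master edge */
	struct vertex *connectpt; /* NULL when there is no connect point */
};

static int master_node(struct vertex *v)
{
	for (;;) {
		struct vertex *next;

		/* Stop as soon as we reach a vertex that is a node. */
		if (v->cnodeid != -1)
			return v->cnodeid;

		/* Prefer the master edge; a self-loop means give up. */
		next = v->master;
		if (next == v)
			return -1;

		/* No master: fall back to the connect point. */
		if (next == NULL) {
			next = v->connectpt;
			if (next == NULL || next == v)
				return -1;
		}
		v = next;
	}
}
```

The self-loop and NULL checks are what keep the `for (;;)` loop from spinning forever on a malformed graph.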
+
+
+/*
+** If the specified device represents a node, return its
+** compact node ID; otherwise, return CNODEID_NONE.
+*/
+cnodeid_t
+nodevertex_to_cnodeid(vertex_hdl_t vhdl)
+{
+ int rv = 0;
+ arbitrary_info_t cnodeid = CNODEID_NONE;
+
+ rv = labelcl_info_get_LBL(vhdl, INFO_LBL_CNODEID, NULL, &cnodeid);
+
+ return((cnodeid_t)cnodeid);
+}
+
+void
+mark_nodevertex_as_node(vertex_hdl_t vhdl, cnodeid_t cnodeid)
+{
+ if (cnodeid == CNODEID_NONE)
+ return;
+
+ cnodeid_to_vertex(cnodeid) = vhdl;
+ labelcl_info_add_LBL(vhdl, INFO_LBL_CNODEID, INFO_DESC_EXPORT,
+ (arbitrary_info_t)cnodeid);
+
+ {
+ char cnodeid_buffer[10];
+
+ if (hwgraph_all_cnodes == GRAPH_VERTEX_NONE) {
+ (void)hwgraph_path_add( hwgraph_root,
+ EDGE_LBL_NODENUM,
+ &hwgraph_all_cnodes);
+ }
+
+ sprintf(cnodeid_buffer, "%d", cnodeid);
+ (void)hwgraph_edge_add( hwgraph_all_cnodes,
+ vhdl,
+ cnodeid_buffer);
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, hwgraph_all_cnodes, NULL, "Creating path vhdl1\n");
+ }
+}
+
+/*
+** dev_to_name converts a vertex_hdl_t into a canonical name. If the vertex_hdl_t
+** represents a vertex in the hardware graph, it is converted in the
+** normal way for vertices. If the vertex_hdl_t is an old vertex_hdl_t (one which
+** does not represent a hwgraph vertex), we synthesize a name based
+** on major/minor number.
+**
+** Usually returns a pointer to the original buffer, filled in as
+** appropriate. If the buffer is too small to hold the entire name,
+** or if anything goes wrong while determining the name, dev_to_name
+** returns "UnknownDevice".
+*/
+char *
+dev_to_name(vertex_hdl_t dev, char *buf, uint buflen)
+{
+ return(vertex_to_name(dev, buf, buflen));
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * Portions based on Adam Richter's smalldevfs and thus
+ * Copyright 2002-2003 Yggdrasil Computing, Inc.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/namei.h>
+#include <linux/string.h>
+#include <linux/slab.h>
+#include <asm/sn/hwgfs.h>
+
+
+extern struct vfsmount *hwgfs_vfsmount;
+
+static int
+walk_parents_mkdir(
+ const char **path,
+ struct nameidata *nd,
+ int is_dir)
+{
+ char *slash;
+ char buf[strlen(*path)+1];
+ int error;
+
+ while ((slash = strchr(*path, '/')) != NULL) {
+ int len = slash - *path;
+ memcpy(buf, *path, len);
+ buf[len] = '\0';
+
+ error = path_walk(buf, nd);
+ if (unlikely(error))
+ return error;
+
+ nd->dentry = lookup_create(nd, is_dir);
+ nd->flags |= LOOKUP_PARENT;
+ if (unlikely(IS_ERR(nd->dentry)))
+ return PTR_ERR(nd->dentry);
+
+ if (!nd->dentry->d_inode)
+ error = vfs_mkdir(nd->dentry->d_parent->d_inode,
+ nd->dentry, 0755);
+
+ up(&nd->dentry->d_parent->d_inode->i_sem);
+ if (unlikely(error))
+ return error;
+
+ *path += len + 1;
+ }
+
+ return 0;
+}
+
+/* On success, returns with parent_inode->i_sem taken. */
+static int
+hwgfs_decode(
+ hwgfs_handle_t dir,
+ const char *name,
+ int is_dir,
+ struct inode **parent_inode,
+ struct dentry **dentry)
+{
+ struct nameidata nd;
+ int error;
+
+ if (!dir)
+ dir = hwgfs_vfsmount->mnt_sb->s_root;
+
+ memset(&nd, 0, sizeof(nd));
+ nd.flags = LOOKUP_PARENT;
+ nd.mnt = mntget(hwgfs_vfsmount);
+ nd.dentry = dget(dir);
+
+ error = walk_parents_mkdir(&name, &nd, is_dir);
+ if (unlikely(error))
+ return error;
+
+ error = path_walk(name, &nd);
+ if (unlikely(error))
+ return error;
+
+ *dentry = lookup_create(&nd, is_dir);
+
+ if (unlikely(IS_ERR(*dentry)))
+ return PTR_ERR(*dentry);
+ *parent_inode = (*dentry)->d_parent->d_inode;
+ return 0;
+}
+
+static int
+path_len(
+ struct dentry *de,
+ struct dentry *root)
+{
+ int len = 0;
+
+ while (de != root) {
+ len += de->d_name.len + 1; /* count the '/' */
+ de = de->d_parent;
+ }
+	return len;	/* the per-component "+ 1" nets out to the
+			   n-1 separating slashes plus the trailing '\0' */
+}
+
+int
+hwgfs_generate_path(
+ hwgfs_handle_t de,
+ char *path,
+ int buflen)
+{
+ struct dentry *hwgfs_root;
+ int len;
+ char *path_orig = path;
+
+ if (unlikely(de == NULL))
+ return -EINVAL;
+
+ hwgfs_root = hwgfs_vfsmount->mnt_sb->s_root;
+ if (unlikely(de == hwgfs_root))
+ return -EINVAL;
+
+ spin_lock(&dcache_lock);
+ len = path_len(de, hwgfs_root);
+ if (len > buflen) {
+ spin_unlock(&dcache_lock);
+ return -ENAMETOOLONG;
+ }
+
+ path += len - 1;
+ *path = '\0';
+
+ for (;;) {
+ path -= de->d_name.len;
+ memcpy(path, de->d_name.name, de->d_name.len);
+ de = de->d_parent;
+ if (de == hwgfs_root)
+ break;
+ *(--path) = '/';
+ }
+
+ spin_unlock(&dcache_lock);
+ BUG_ON(path != path_orig);
+ return 0;
+}
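The backwards buffer fill in hwgfs_generate_path() can be sketched without the dcache: the buffer is written from the end while walking parent links, so components come out in root-to-leaf order with no separate reversal pass. The toy node type and function names below are illustrative, not the kernel's:

```c
#include <assert.h>
#include <string.h>

/* Toy stand-in for a dentry; only name and parent matter here. */
struct node {
	const char *name;
	struct node *parent;
};

/* Sum of name lengths plus one extra char per component: the extras
 * become the n-1 separating slashes and the trailing '\0'. */
static size_t toy_path_len(const struct node *n, const struct node *root)
{
	size_t len = 0;

	while (n != root) {
		len += strlen(n->name) + 1;
		n = n->parent;
	}
	return len;
}

static int toy_generate_path(const struct node *n, const struct node *root,
			     char *buf, size_t buflen)
{
	size_t len = toy_path_len(n, root);
	char *p;

	if (len > buflen)
		return -1;	/* the kernel returns -ENAMETOOLONG here */

	/* Terminate, then copy each component in front of the previous
	 * one, inserting '/' between components but not at the front. */
	p = buf + len - 1;
	*p = '\0';
	for (;;) {
		p -= strlen(n->name);
		memcpy(p, n->name, strlen(n->name));
		n = n->parent;
		if (n == root)
			break;
		*(--p) = '/';
	}
	return 0;
}
```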
+
+hwgfs_handle_t
+hwgfs_register(
+ hwgfs_handle_t dir,
+ const char *name,
+ unsigned int flags,
+ unsigned int major,
+ unsigned int minor,
+ umode_t mode,
+ void *ops,
+ void *info)
+{
+ dev_t devnum = MKDEV(major, minor);
+ struct inode *parent_inode;
+ struct dentry *dentry;
+ int error;
+
+ error = hwgfs_decode(dir, name, 0, &parent_inode, &dentry);
+ if (likely(!error)) {
+ error = vfs_mknod(parent_inode, dentry, mode, devnum);
+ if (likely(!error)) {
+			/*
+			 * Do this inside the parent's i_sem to avoid racing
+			 * with lookups.
+			 */
+ if (S_ISCHR(mode))
+ dentry->d_inode->i_fop = ops;
+ dentry->d_fsdata = info;
+ up(&parent_inode->i_sem);
+ } else {
+ up(&parent_inode->i_sem);
+ dput(dentry);
+ dentry = NULL;
+ }
+ }
+
+ return dentry;
+}
+
+int
+hwgfs_mk_symlink(
+ hwgfs_handle_t dir,
+ const char *name,
+ unsigned int flags,
+ const char *link,
+ hwgfs_handle_t *handle,
+ void *info)
+{
+ struct inode *parent_inode;
+ struct dentry *dentry;
+ int error;
+
+ error = hwgfs_decode(dir, name, 0, &parent_inode, &dentry);
+ if (likely(!error)) {
+ error = vfs_symlink(parent_inode, dentry, link, S_IALLUGO);
+ dentry->d_fsdata = info;
+ if (handle)
+ *handle = dentry;
+ up(&parent_inode->i_sem);
+ /* dput(dentry); */
+ }
+ return error;
+}
+
+hwgfs_handle_t
+hwgfs_mk_dir(
+ hwgfs_handle_t dir,
+ const char *name,
+ void *info)
+{
+ struct inode *parent_inode;
+ struct dentry *dentry;
+ int error;
+
+ error = hwgfs_decode(dir, name, 1, &parent_inode, &dentry);
+ if (likely(!error)) {
+ error = vfs_mkdir(parent_inode, dentry, 0755);
+ up(&parent_inode->i_sem);
+
+ if (unlikely(error)) {
+ dput(dentry);
+ dentry = NULL;
+ } else {
+ dentry->d_fsdata = info;
+ }
+ }
+ return dentry;
+}
+
+void
+hwgfs_unregister(
+ hwgfs_handle_t de)
+{
+ struct inode *parent_inode = de->d_parent->d_inode;
+
+ if (S_ISDIR(de->d_inode->i_mode))
+ vfs_rmdir(parent_inode, de);
+ else
+ vfs_unlink(parent_inode, de);
+}
+
+/* XXX: this function is utterly bogus. Every use of it is racy and the
+ prototype is stupid. You have been warned. --hch. */
+hwgfs_handle_t
+hwgfs_find_handle(
+ hwgfs_handle_t base,
+ const char *name,
+ unsigned int major, /* IGNORED */
+ unsigned int minor, /* IGNORED */
+ char type, /* IGNORED */
+ int traverse_symlinks)
+{
+ struct dentry *dentry = NULL;
+ struct nameidata nd;
+ int error;
+
+ BUG_ON(*name=='/');
+
+ memset(&nd, 0, sizeof(nd));
+
+ nd.mnt = mntget(hwgfs_vfsmount);
+ nd.dentry = dget(base ? base : hwgfs_vfsmount->mnt_sb->s_root);
+ nd.flags = (traverse_symlinks ? LOOKUP_FOLLOW : 0);
+
+ error = path_walk(name, &nd);
+ if (likely(!error)) {
+ dentry = nd.dentry;
+ path_release(&nd); /* stale data from here! */
+ }
+
+ return dentry;
+}
+
+hwgfs_handle_t
+hwgfs_get_parent(
+ hwgfs_handle_t de)
+{
+ struct dentry *parent;
+
+ spin_lock(&de->d_lock);
+ parent = de->d_parent;
+ spin_unlock(&de->d_lock);
+
+ return parent;
+}
+
+int
+hwgfs_set_info(
+ hwgfs_handle_t de,
+ void *info)
+{
+ if (unlikely(de == NULL))
+ return -EINVAL;
+ de->d_fsdata = info;
+ return 0;
+}
+
+void *
+hwgfs_get_info(
+ hwgfs_handle_t de)
+{
+ return de->d_fsdata;
+}
+
+EXPORT_SYMBOL(hwgfs_generate_path);
+EXPORT_SYMBOL(hwgfs_register);
+EXPORT_SYMBOL(hwgfs_unregister);
+EXPORT_SYMBOL(hwgfs_mk_symlink);
+EXPORT_SYMBOL(hwgfs_mk_dir);
+EXPORT_SYMBOL(hwgfs_find_handle);
+EXPORT_SYMBOL(hwgfs_get_parent);
+EXPORT_SYMBOL(hwgfs_set_info);
+EXPORT_SYMBOL(hwgfs_get_info);
--- /dev/null
+/* labelcl - SGI's Hwgraph Compatibility Layer.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+*/
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/string.h>
+#include <linux/sched.h> /* needed for smp_lock.h :( */
+#include <linux/smp_lock.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/hwgfs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+
+/*
+** Very simple and dumb string table that supports only find/insert.
+** In practice, if this table gets too large, we may need a more
+** efficient data structure. Also note that there is currently no
+** way to delete an item once it has been added; inserting a name
+** that already exists simply returns the existing entry.
+*/
+
+struct string_table label_string_table;
+
+
+
+/*
+ * string_table_init - Initialize the given string table.
+ */
+void
+string_table_init(struct string_table *string_table)
+{
+ string_table->string_table_head = NULL;
+ string_table->string_table_generation = 0;
+
+	/*
+	 * We need to initialize locks here!
+	 */
+
+ return;
+}
+
+
+/*
+ * string_table_destroy - Destroy the given string table.
+ */
+void
+string_table_destroy(struct string_table *string_table)
+{
+ struct string_table_item *item, *next_item;
+
+ item = string_table->string_table_head;
+ while (item) {
+ next_item = item->next;
+
+ STRTBL_FREE(item);
+ item = next_item;
+ }
+
+ /*
+ * We need to destroy whatever lock we have here
+ */
+
+ return;
+}
+
+
+
+/*
+ * string_table_insert - Insert an entry into the string table. If the
+ * name already exists, the existing entry is returned.
+ */
+char *
+string_table_insert(struct string_table *string_table, char *name)
+{
+ struct string_table_item *item, *new_item = NULL, *last_item = NULL;
+
+again:
+ /*
+ * Need to lock the table ..
+ */
+ item = string_table->string_table_head;
+ last_item = NULL;
+
+ while (item) {
+ if (!strcmp(item->string, name)) {
+			/*
+			 * If we allocated space for the string and then found
+			 * that someone else already entered it into the string
+			 * table, free the space we just allocated.
+			 */
+ if (new_item)
+ STRTBL_FREE(new_item);
+
+
+ /*
+ * Search optimization: move the found item to the head
+ * of the list.
+ */
+ if (last_item != NULL) {
+ last_item->next = item->next;
+ item->next = string_table->string_table_head;
+ string_table->string_table_head = item;
+ }
+ goto out;
+ }
+ last_item = item;
+ item=item->next;
+ }
+
+ /*
+ * name was not found, so add it to the string table.
+ */
+ if (new_item == NULL) {
+ long old_generation = string_table->string_table_generation;
+
+		new_item = STRTBL_ALLOC(strlen(name));
+		if (new_item == NULL)
+			return NULL;
+
+		strcpy(new_item->string, name);
+
+ /*
+ * While we allocated memory for the new string, someone else
+ * changed the string table.
+ */
+ if (old_generation != string_table->string_table_generation) {
+ goto again;
+ }
+ } else {
+		/* At this point we only have the string table lock in access
+		 * mode.  Promote the access lock to an update lock for the
+		 * string table insertion below.
+		 */
+		long old_generation =
+			string_table->string_table_generation;
+
+		/*
+		 * After we dropped the access lock and were waiting for the
+		 * update lock, someone could have updated the string table.
+		 * Check the generation number for this case; if it changed,
+		 * we have to try all over again.
+		 */
+ if (old_generation !=
+ string_table->string_table_generation) {
+ goto again;
+ }
+ }
+
+ /*
+ * At this point, we're committed to adding new_item to the string table.
+ */
+ new_item->next = string_table->string_table_head;
+ item = string_table->string_table_head = new_item;
+ string_table->string_table_generation++;
+
+out:
+ /*
+ * Need to unlock here.
+ */
+ return(item->string);
+}
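The move-to-front optimization in string_table_insert() can be shown in a small userspace sketch. The toy types below use malloc and a C99 flexible array member where the kernel uses STRTBL_ALLOC; names are invented for the example:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Toy versions of string_table_item / string_table. */
struct toy_item {
	struct toy_item *next;
	char string[];		/* flexible array holding the name */
};

struct toy_table {
	struct toy_item *head;
};

static char *toy_insert(struct toy_table *t, const char *name)
{
	struct toy_item *item = t->head, *last = NULL;

	/* Linear search; on a hit, splice the item to the list head so
	 * frequently looked-up strings are found quickly next time. */
	while (item) {
		if (!strcmp(item->string, name)) {
			if (last) {
				last->next = item->next;
				item->next = t->head;
				t->head = item;
			}
			return item->string;
		}
		last = item;
		item = item->next;
	}

	/* Not found: allocate room for the item plus the string and
	 * link it at the head. */
	item = malloc(sizeof(*item) + strlen(name) + 1);
	if (!item)
		return NULL;
	strcpy(item->string, name);
	item->next = t->head;
	t->head = item;
	return item->string;
}
```

Because the returned pointer is the interned string itself, callers can compare names by pointer equality after insertion, which is what makes the table useful as a label-name pool.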
+
+/*
+ * labelcl_info_create - Creates the data structure that will hold the
+ * device private information associated with an entry.
+ * The pointer to this structure is what gets stored in the
+ * (void * info).
+ */
+labelcl_info_t *
+labelcl_info_create(void)
+{
+	labelcl_info_t *new = NULL;
+
+	/* Initial allocation does not include any area for labels */
+	if ((new = kmalloc(sizeof(labelcl_info_t), GFP_KERNEL)) == NULL)
+		return NULL;
+
+	memset(new, 0, sizeof(labelcl_info_t));
+	new->hwcl_magic = LABELCL_MAGIC;
+	return(new);
+}
+
+/*
+ * labelcl_info_destroy - Frees the data structure that holds the
+ * device private information associated with an entry. This
+ * data structure was created by labelcl_info_create().
+ *
+ * The caller is responsible for nulling the (void *info) in the
+ * corresponding entry.
+ */
+int
+labelcl_info_destroy(labelcl_info_t *labelcl_info)
+{
+
+ if (labelcl_info == NULL)
+ return(0);
+
+ /* Free the label list */
+ if (labelcl_info->label_list)
+ kfree(labelcl_info->label_list);
+
+ /* Now free the label info area */
+ labelcl_info->hwcl_magic = 0;
+ kfree(labelcl_info);
+
+ return(0);
+}
+
+/*
+ * labelcl_info_add_LBL - Adds a new label entry in the labelcl info
+ * structure.
+ *
+ * Error is returned if we find another label with the same name.
+ */
+int
+labelcl_info_add_LBL(vertex_hdl_t de,
+ char *info_name,
+ arb_info_desc_t info_desc,
+ arbitrary_info_t info)
+{
+ labelcl_info_t *labelcl_info = NULL;
+ int num_labels;
+ int new_label_list_size;
+ label_info_t *old_label_list, *new_label_list = NULL;
+ char *name;
+ int i;
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ if (info_name == NULL)
+ return(-1);
+
+ if (strlen(info_name) >= LABEL_LENGTH_MAX)
+ return(-1);
+
+	name = string_table_insert(&label_string_table, info_name);
+	if (name == NULL)
+		return(-1);
+
+ num_labels = labelcl_info->num_labels;
+ new_label_list_size = sizeof(label_info_t) * (num_labels+1);
+
+ /*
+ * Create a new label info area.
+ */
+ if (new_label_list_size != 0) {
+ new_label_list = (label_info_t *) kmalloc(new_label_list_size, GFP_KERNEL);
+
+ if (new_label_list == NULL)
+ return(-1);
+ }
+
+ /*
+ * At this point, we are committed to adding the labelled info,
+ * if there isn't already information there with the same name.
+ */
+ old_label_list = labelcl_info->label_list;
+
+ /*
+ * Look for matching info name.
+ */
+ for (i=0; i<num_labels; i++) {
+ if (!strcmp(info_name, old_label_list[i].name)) {
+ /* Not allowed to add duplicate labelled info names. */
+ kfree(new_label_list);
+ return(-1);
+ }
+ new_label_list[i] = old_label_list[i]; /* structure copy */
+ }
+
+ new_label_list[num_labels].name = name;
+ new_label_list[num_labels].desc = info_desc;
+ new_label_list[num_labels].info = info;
+
+ labelcl_info->num_labels = num_labels+1;
+ labelcl_info->label_list = new_label_list;
+
+ if (old_label_list != NULL)
+ kfree(old_label_list);
+
+ return(0);
+}
+
+/*
+ * labelcl_info_remove_LBL - Remove a label entry.
+ */
+int
+labelcl_info_remove_LBL(vertex_hdl_t de,
+ char *info_name,
+ arb_info_desc_t *info_desc,
+ arbitrary_info_t *info)
+{
+ labelcl_info_t *labelcl_info = NULL;
+ int num_labels;
+ int new_label_list_size;
+ label_info_t *old_label_list, *new_label_list = NULL;
+ arb_info_desc_t label_desc_found;
+ arbitrary_info_t label_info_found;
+ int i;
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ num_labels = labelcl_info->num_labels;
+ if (num_labels == 0) {
+ return(-1);
+ }
+
+ /*
+ * Create a new info area.
+ */
+ new_label_list_size = sizeof(label_info_t) * (num_labels-1);
+ if (new_label_list_size) {
+ new_label_list = (label_info_t *) kmalloc(new_label_list_size, GFP_KERNEL);
+ if (new_label_list == NULL)
+ return(-1);
+ }
+
+ /*
+ * At this point, we are committed to removing the labelled info,
+ * if it still exists.
+ */
+ old_label_list = labelcl_info->label_list;
+
+ /*
+ * Find matching info name.
+ */
+ for (i=0; i<num_labels; i++) {
+ if (!strcmp(info_name, old_label_list[i].name)) {
+ label_desc_found = old_label_list[i].desc;
+ label_info_found = old_label_list[i].info;
+ goto found;
+ }
+		if (i < num_labels-1) /* avoid walking off the end of the new list */
+ new_label_list[i] = old_label_list[i]; /* structure copy */
+ }
+
+ /* The named info doesn't exist. */
+ if (new_label_list)
+ kfree(new_label_list);
+
+ return(-1);
+
+found:
+ /* Finish up rest of labelled info */
+ for (i=i+1; i<num_labels; i++)
+ new_label_list[i-1] = old_label_list[i]; /* structure copy */
+
+	labelcl_info->num_labels = num_labels-1;
+ labelcl_info->label_list = new_label_list;
+
+ kfree(old_label_list);
+
+ if (info != NULL)
+ *info = label_info_found;
+
+ if (info_desc != NULL)
+ *info_desc = label_desc_found;
+
+ return(0);
+}
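The copy-on-remove scheme used by labelcl_info_remove_LBL() (allocate an array one slot smaller, copy entries around the match, free the old array, decrement the count) can be sketched in userspace. The toy_label type and function names are simplified stand-ins for label_info_t:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for label_info_t. */
struct toy_label {
	const char *name;
	long info;
};

/* Returns the new (smaller) array, NULL when the last entry was
 * removed, or the old list unchanged when the name is absent.
 * On success *countp is decremented and *infop gets the removed info. */
static struct toy_label *toy_remove(struct toy_label *list, int *countp,
				    const char *name, long *infop)
{
	int n = *countp, i, j;
	struct toy_label *newlist = NULL;

	for (i = 0; i < n; i++)
		if (!strcmp(list[i].name, name))
			break;
	if (i == n)
		return list;		/* not found: leave list untouched */

	if (infop)
		*infop = list[i].info;

	if (n > 1) {
		newlist = malloc(sizeof(*newlist) * (n - 1));
		if (!newlist)
			return list;	/* allocation failed: keep old list */
		for (j = 0; j < i; j++)
			newlist[j] = list[j];
		for (j = i + 1; j < n; j++)
			newlist[j - 1] = list[j];
	}
	free(list);
	*countp = n - 1;
	return newlist;
}
```

The price of this scheme is a full copy per removal; it is acceptable here because label lists are short and removals rare.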
+
+
+/*
+ * labelcl_info_replace_LBL - Replace an existing label entry with the
+ * given new information.
+ *
+ * Label entry must exist.
+ */
+int
+labelcl_info_replace_LBL(vertex_hdl_t de,
+ char *info_name,
+ arb_info_desc_t info_desc,
+ arbitrary_info_t info,
+ arb_info_desc_t *old_info_desc,
+ arbitrary_info_t *old_info)
+{
+ labelcl_info_t *labelcl_info = NULL;
+ int num_labels;
+ label_info_t *label_list;
+ int i;
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ num_labels = labelcl_info->num_labels;
+ if (num_labels == 0) {
+ return(-1);
+ }
+
+ if (info_name == NULL)
+ return(-1);
+
+ label_list = labelcl_info->label_list;
+
+ /*
+ * Verify that information under info_name already exists.
+ */
+ for (i=0; i<num_labels; i++)
+ if (!strcmp(info_name, label_list[i].name)) {
+ if (old_info != NULL)
+ *old_info = label_list[i].info;
+
+ if (old_info_desc != NULL)
+ *old_info_desc = label_list[i].desc;
+
+ label_list[i].info = info;
+ label_list[i].desc = info_desc;
+
+ return(0);
+ }
+
+
+ return(-1);
+}
+
+/*
+ * labelcl_info_get_LBL - Retrieve and return the information for the
+ * given label entry.
+ */
+int
+labelcl_info_get_LBL(vertex_hdl_t de,
+ char *info_name,
+ arb_info_desc_t *info_desc,
+ arbitrary_info_t *info)
+{
+ labelcl_info_t *labelcl_info = NULL;
+ int num_labels;
+ label_info_t *label_list;
+ int i;
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ num_labels = labelcl_info->num_labels;
+ if (num_labels == 0) {
+ return(-1);
+ }
+
+ label_list = labelcl_info->label_list;
+
+ /*
+ * Find information under info_name.
+ */
+ for (i=0; i<num_labels; i++)
+ if (!strcmp(info_name, label_list[i].name)) {
+ if (info != NULL)
+ *info = label_list[i].info;
+ if (info_desc != NULL)
+ *info_desc = label_list[i].desc;
+
+ return(0);
+ }
+
+ return(-1);
+}
+
+/*
+ * labelcl_info_get_next_LBL - returns the next label entry on the list.
+ */
+int
+labelcl_info_get_next_LBL(vertex_hdl_t de,
+ char *buffer,
+ arb_info_desc_t *info_descp,
+ arbitrary_info_t *infop,
+ labelcl_info_place_t *placeptr)
+{
+ labelcl_info_t *labelcl_info = NULL;
+ uint which_info;
+ label_info_t *label_list;
+
+ if ((buffer == NULL) && (infop == NULL))
+ return(-1);
+
+ if (placeptr == NULL)
+ return(-1);
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ which_info = *placeptr;
+
+ if (which_info >= labelcl_info->num_labels) {
+ return(-1);
+ }
+
+ label_list = (label_info_t *) labelcl_info->label_list;
+
+ if (buffer != NULL)
+ strcpy(buffer, label_list[which_info].name);
+
+ if (infop)
+ *infop = label_list[which_info].info;
+
+ if (info_descp)
+ *info_descp = label_list[which_info].desc;
+
+ *placeptr = which_info + 1;
+
+ return(0);
+}
+
+
+int
+labelcl_info_replace_IDX(vertex_hdl_t de,
+ int index,
+ arbitrary_info_t info,
+ arbitrary_info_t *old_info)
+{
+ arbitrary_info_t *info_list_IDX;
+ labelcl_info_t *labelcl_info = NULL;
+
+ if (de == NULL) {
+ printk(KERN_ALERT "labelcl: NULL handle given.\n");
+ return(-1);
+ }
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL) {
+ printk(KERN_ALERT "labelcl: Entry %p does not have info pointer.\n", (void *)de);
+ return(-1);
+ }
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ if ( (index < 0) || (index >= HWGRAPH_NUM_INDEX_INFO) )
+ return(-1);
+
+ /*
+ * Replace information at the appropriate index in this vertex with
+ * the new info.
+ */
+ info_list_IDX = labelcl_info->IDX_list;
+ if (old_info != NULL)
+ *old_info = info_list_IDX[index];
+ info_list_IDX[index] = info;
+
+ return(0);
+
+}
+
+/*
+ * labelcl_info_connectpt_set - Sets the connectpt.
+ */
+int
+labelcl_info_connectpt_set(hwgfs_handle_t de,
+ hwgfs_handle_t connect_de)
+{
+ arbitrary_info_t old_info;
+ int rv;
+
+ rv = labelcl_info_replace_IDX(de, HWGRAPH_CONNECTPT,
+ (arbitrary_info_t) connect_de, &old_info);
+
+ if (rv) {
+ return(rv);
+ }
+
+ return(0);
+}
+
+
+/*
+ * labelcl_info_get_IDX - Returns the information pointed at by index.
+ *
+ */
+int
+labelcl_info_get_IDX(vertex_hdl_t de,
+ int index,
+ arbitrary_info_t *info)
+{
+ arbitrary_info_t *info_list_IDX;
+ labelcl_info_t *labelcl_info = NULL;
+
+ if (de == NULL)
+ return(-1);
+
+ labelcl_info = hwgfs_get_info(de);
+ if (labelcl_info == NULL)
+ return(-1);
+
+ if (labelcl_info->hwcl_magic != LABELCL_MAGIC)
+ return(-1);
+
+ if ( (index < 0) || (index >= HWGRAPH_NUM_INDEX_INFO) )
+ return(-1);
+
+ /*
+ * Return information at the appropriate index in this vertex.
+ */
+ info_list_IDX = labelcl_info->IDX_list;
+ if (info != NULL)
+ *info = info_list_IDX[index];
+
+ return(0);
+}
+
+/*
+ * labelcl_info_connectpt_get - Retrieve the connect point for a device entry.
+ */
+hwgfs_handle_t
+labelcl_info_connectpt_get(hwgfs_handle_t de)
+{
+ int rv;
+ arbitrary_info_t info;
+
+ rv = labelcl_info_get_IDX(de, HWGRAPH_CONNECTPT, &info);
+ if (rv)
+ return(NULL);
+
+ return((hwgfs_handle_t) info);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * Mostly shamelessly copied from Linus Torvalds' ramfs and thus
+ * Copyright (C) 2000 Linus Torvalds.
+ * 2000 Transmeta Corp.
+ */
+
+#include <linux/module.h>
+#include <linux/backing-dev.h>
+#include <linux/fs.h>
+#include <linux/pagemap.h>
+#include <linux/init.h>
+#include <linux/string.h>
+#include <asm/uaccess.h>
+
+/* some random number */
+#define HWGFS_MAGIC 0x12061983
+
+static struct super_operations hwgfs_ops;
+static struct address_space_operations hwgfs_aops;
+static struct file_operations hwgfs_file_operations;
+static struct inode_operations hwgfs_file_inode_operations;
+static struct inode_operations hwgfs_dir_inode_operations;
+
+static struct backing_dev_info hwgfs_backing_dev_info = {
+ .ra_pages = 0, /* No readahead */
+ .memory_backed = 1, /* Does not contribute to dirty memory */
+};
+
+static struct inode *hwgfs_get_inode(struct super_block *sb, int mode, dev_t dev)
+{
+ struct inode * inode = new_inode(sb);
+
+ if (inode) {
+ inode->i_mode = mode;
+ inode->i_uid = current->fsuid;
+ inode->i_gid = current->fsgid;
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_blocks = 0;
+ inode->i_mapping->a_ops = &hwgfs_aops;
+ inode->i_mapping->backing_dev_info = &hwgfs_backing_dev_info;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ switch (mode & S_IFMT) {
+ default:
+ init_special_inode(inode, mode, dev);
+ break;
+ case S_IFREG:
+ inode->i_op = &hwgfs_file_inode_operations;
+ inode->i_fop = &hwgfs_file_operations;
+ break;
+ case S_IFDIR:
+ inode->i_op = &hwgfs_dir_inode_operations;
+ inode->i_fop = &simple_dir_operations;
+ inode->i_nlink++;
+ break;
+ case S_IFLNK:
+ inode->i_op = &page_symlink_inode_operations;
+ break;
+ }
+ }
+ return inode;
+}
+
+static int hwgfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
+{
+ struct inode * inode = hwgfs_get_inode(dir->i_sb, mode, dev);
+ int error = -ENOSPC;
+
+ if (inode) {
+ d_instantiate(dentry, inode);
+ dget(dentry); /* Extra count - pin the dentry in core */
+ error = 0;
+ }
+ return error;
+}
+
+static int hwgfs_mkdir(struct inode * dir, struct dentry * dentry, int mode)
+{
+ return hwgfs_mknod(dir, dentry, mode | S_IFDIR, 0);
+}
+
+static int hwgfs_create(struct inode *dir, struct dentry *dentry, int mode, struct nameidata *unused)
+{
+ return hwgfs_mknod(dir, dentry, mode | S_IFREG, 0);
+}
+
+static int hwgfs_symlink(struct inode * dir, struct dentry *dentry, const char * symname)
+{
+ struct inode *inode;
+ int error = -ENOSPC;
+
+ inode = hwgfs_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0);
+ if (inode) {
+ int l = strlen(symname)+1;
+ error = page_symlink(inode, symname, l);
+ if (!error) {
+ d_instantiate(dentry, inode);
+ dget(dentry);
+ } else
+ iput(inode);
+ }
+ return error;
+}
+
+static struct address_space_operations hwgfs_aops = {
+ .readpage = simple_readpage,
+ .prepare_write = simple_prepare_write,
+ .commit_write = simple_commit_write
+};
+
+static struct file_operations hwgfs_file_operations = {
+ .read = generic_file_read,
+ .write = generic_file_write,
+ .mmap = generic_file_mmap,
+ .fsync = simple_sync_file,
+ .sendfile = generic_file_sendfile,
+};
+
+static struct inode_operations hwgfs_file_inode_operations = {
+ .getattr = simple_getattr,
+};
+
+static struct inode_operations hwgfs_dir_inode_operations = {
+ .create = hwgfs_create,
+ .lookup = simple_lookup,
+ .link = simple_link,
+ .unlink = simple_unlink,
+ .symlink = hwgfs_symlink,
+ .mkdir = hwgfs_mkdir,
+ .rmdir = simple_rmdir,
+ .mknod = hwgfs_mknod,
+ .rename = simple_rename,
+};
+
+static struct super_operations hwgfs_ops = {
+ .statfs = simple_statfs,
+ .drop_inode = generic_delete_inode,
+};
+
+static int hwgfs_fill_super(struct super_block * sb, void * data, int silent)
+{
+ struct inode * inode;
+ struct dentry * root;
+
+ sb->s_blocksize = PAGE_CACHE_SIZE;
+ sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
+ sb->s_magic = HWGFS_MAGIC;
+ sb->s_op = &hwgfs_ops;
+ inode = hwgfs_get_inode(sb, S_IFDIR | 0755, 0);
+ if (!inode)
+ return -ENOMEM;
+
+ root = d_alloc_root(inode);
+ if (!root) {
+ iput(inode);
+ return -ENOMEM;
+ }
+ sb->s_root = root;
+ return 0;
+}
+
+static struct super_block *hwgfs_get_sb(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *data)
+{
+ return get_sb_single(fs_type, flags, data, hwgfs_fill_super);
+}
+
+static struct file_system_type hwgfs_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "hwgfs",
+ .get_sb = hwgfs_get_sb,
+ .kill_sb = kill_litter_super,
+};
+
+struct vfsmount *hwgfs_vfsmount;
+
+int __init init_hwgfs_fs(void)
+{
+ int error;
+
+ error = register_filesystem(&hwgfs_fs_type);
+ if (error)
+ return error;
+
+ hwgfs_vfsmount = kern_mount(&hwgfs_fs_type);
+ if (IS_ERR(hwgfs_vfsmount))
+ goto fail;
+ return 0;
+
+fail:
+ unregister_filesystem(&hwgfs_fs_type);
+ return PTR_ERR(hwgfs_vfsmount);
+}
+
+static void __exit exit_hwgfs_fs(void)
+{
+ unregister_filesystem(&hwgfs_fs_type);
+}
+
+MODULE_LICENSE("GPL");
+
+module_init(init_hwgfs_fs)
+module_exit(exit_hwgfs_fs)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <asm/sn/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/driver.h>
+#include <asm/param.h>
+#include <asm/sn/pio.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/xtalk/xtalkaddrs.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+
+extern xtalk_provider_t hub_provider;
+
+static int force_fire_and_forget = 1;
+static int ignore_conveyor_override;
+
+
+/*
+ * Implementation of hub iobus operations.
+ *
+ * Hub provides a crosstalk "iobus" on IP27 systems. These routines
+ * provide a platform-specific implementation of xtalk used by all xtalk
+ * cards on IP27 systems.
+ *
+ * Called from corresponding xtalk_* routines.
+ */
+
+
+/* PIO MANAGEMENT */
+/* For mapping system virtual address space to xtalk space on a specified widget */
+
+/*
+ * Setup pio structures needed for a particular hub.
+ */
+static void
+hub_pio_init(vertex_hdl_t hubv)
+{
+ xwidgetnum_t widget;
+ hubinfo_t hubinfo;
+ nasid_t nasid;
+ int bigwin;
+ hub_piomap_t hub_piomap;
+
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+
+ /* Initialize small window piomaps for this hub */
+ for (widget=0; widget <= HUB_WIDGET_ID_MAX; widget++) {
+ hub_piomap = hubinfo_swin_piomap_get(hubinfo, (int)widget);
+ hub_piomap->hpio_xtalk_info.xp_target = widget;
+ hub_piomap->hpio_xtalk_info.xp_xtalk_addr = 0;
+ hub_piomap->hpio_xtalk_info.xp_mapsz = SWIN_SIZE;
+ hub_piomap->hpio_xtalk_info.xp_kvaddr = (caddr_t)NODE_SWIN_BASE(nasid, widget);
+ hub_piomap->hpio_hub = hubv;
+ hub_piomap->hpio_flags = HUB_PIOMAP_IS_VALID;
+ }
+
+ /* Initialize big window piomaps for this hub */
+ for (bigwin=0; bigwin < HUB_NUM_BIG_WINDOW; bigwin++) {
+ hub_piomap = hubinfo_bwin_piomap_get(hubinfo, bigwin);
+ hub_piomap->hpio_xtalk_info.xp_mapsz = BWIN_SIZE;
+ hub_piomap->hpio_hub = hubv;
+ hub_piomap->hpio_holdcnt = 0;
+ hub_piomap->hpio_flags = HUB_PIOMAP_IS_BIGWINDOW;
+ IIO_ITTE_DISABLE(nasid, bigwin);
+ }
+ hub_set_piomode(nasid, HUB_PIO_CONVEYOR);
+
+ spin_lock_init(&hubinfo->h_bwlock);
+ init_waitqueue_head(&hubinfo->h_bwwait);
+}
+
+/*
+ * Create a caddr_t-to-xtalk_addr mapping.
+ *
+ * Use a small window if possible (that's the usual case), but
+ * manage big windows if needed. Big window mappings can be
+ * either FIXED or UNFIXED -- we keep at least 1 big window available
+ * for UNFIXED mappings.
+ *
+ * Returns an opaque pointer-sized type which can be passed to
+ * other hub_pio_* routines on success, or NULL if the request
+ * cannot be satisfied.
+ */
+/* ARGSUSED */
+hub_piomap_t
+hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* map for this xtalk_addr range */
+ size_t byte_count,
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned flags) /* defined in sys/pio.h */
+{
+ xwidget_info_t widget_info = xwidget_info_get(dev);
+ xwidgetnum_t widget = xwidget_info_id_get(widget_info);
+ vertex_hdl_t hubv = xwidget_info_master_get(widget_info);
+ hubinfo_t hubinfo;
+ hub_piomap_t bw_piomap;
+ int bigwin, free_bw_index;
+ nasid_t nasid;
+ volatile hubreg_t junk;
+ caddr_t kvaddr;
+#ifdef PIOMAP_UNC_ACC_SPACE
+ uint64_t addr;
+#endif
+
+ /* sanity check */
+ if (byte_count_max > byte_count)
+ return NULL;
+
+ hubinfo_get(hubv, &hubinfo);
+
+ /* If xtalk_addr range is mapped by a small window, we don't have
+ * to do much
+ */
+ if (xtalk_addr + byte_count <= SWIN_SIZE) {
+ hub_piomap_t piomap;
+
+ piomap = hubinfo_swin_piomap_get(hubinfo, (int)widget);
+#ifdef PIOMAP_UNC_ACC_SPACE
+ if (flags & PIOMAP_UNC_ACC) {
+ addr = (uint64_t)piomap->hpio_xtalk_info.xp_kvaddr;
+ addr |= PIOMAP_UNC_ACC_SPACE;
+ piomap->hpio_xtalk_info.xp_kvaddr = (caddr_t)addr;
+ }
+#endif
+ return piomap;
+ }
+
+ /* We need to use a big window mapping. */
+
+ /*
+ * TBD: Allow requests that would consume multiple big windows --
+ * split the request up and use multiple mapping entries.
+ * For now, reject requests that span big windows.
+ */
+ if ((xtalk_addr % BWIN_SIZE) + byte_count > BWIN_SIZE)
+ return NULL;
+
+
+ /* Round xtalk address down for big window alignment */
+ xtalk_addr = xtalk_addr & ~(BWIN_SIZE-1);
+
+ /*
+ * Check to see if an existing big window mapping will suffice.
+ */
+tryagain:
+ free_bw_index = -1;
+ spin_lock(&hubinfo->h_bwlock);
+ for (bigwin=0; bigwin < HUB_NUM_BIG_WINDOW; bigwin++) {
+ bw_piomap = hubinfo_bwin_piomap_get(hubinfo, bigwin);
+
+ /* If mapping is not valid, skip it */
+ if (!(bw_piomap->hpio_flags & HUB_PIOMAP_IS_VALID)) {
+ free_bw_index = bigwin;
+ continue;
+ }
+
+ /*
+ * If mapping is UNFIXED, skip it. We don't allow sharing
+ * of UNFIXED mappings, because this would allow starvation.
+ */
+ if (!(bw_piomap->hpio_flags & HUB_PIOMAP_IS_FIXED))
+ continue;
+
+ if ( xtalk_addr == bw_piomap->hpio_xtalk_info.xp_xtalk_addr &&
+ widget == bw_piomap->hpio_xtalk_info.xp_target) {
+ bw_piomap->hpio_holdcnt++;
+ spin_unlock(&hubinfo->h_bwlock);
+ return bw_piomap;
+ }
+ }
+
+ /*
+ * None of the existing big window mappings will work for us --
+ * we need to establish a new mapping.
+ */
+
+ /* Ensure that we don't consume all big windows with FIXED mappings */
+ if (flags & PIOMAP_FIXED) {
+ if (hubinfo->h_num_big_window_fixed < HUB_NUM_BIG_WINDOW-1) {
+ ASSERT(free_bw_index >= 0);
+ hubinfo->h_num_big_window_fixed++;
+ } else {
+ bw_piomap = NULL;
+ goto done;
+ }
+ } else /* PIOMAP_UNFIXED */ {
+ if (free_bw_index < 0) {
+ if (flags & PIOMAP_NOSLEEP) {
+ bw_piomap = NULL;
+ goto done;
+ } else {
+ DECLARE_WAITQUEUE(wait, current);
+
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ add_wait_queue_exclusive(&hubinfo->h_bwwait, &wait);
+ spin_unlock(&hubinfo->h_bwlock);
+ schedule();
+ remove_wait_queue(&hubinfo->h_bwwait, &wait);
+ goto tryagain;
+ }
+ }
+ }
+
+
+ /* OK! Allocate big window free_bw_index for this mapping. */
+ /*
+ * The code below does a PIO write to setup an ITTE entry.
+ * We need to prevent other CPUs from seeing our updated memory
+ * shadow of the ITTE (in the piomap) until the ITTE entry is
+ * actually set up; otherwise, another CPU might attempt a PIO
+ * prematurely.
+ *
+ * Also, the only way we can know that an entry has been received
+ * by the hub and can be used by future PIO reads/writes is by
+ * reading back the ITTE entry after writing it.
+ *
+ * For these two reasons, we PIO read back the ITTE entry after
+ * we write it.
+ */
+
+ nasid = hubinfo->h_nasid;
+ IIO_ITTE_PUT(nasid, free_bw_index, HUB_PIO_MAP_TO_MEM, widget, xtalk_addr);
+ junk = HUB_L(IIO_ITTE_GET(nasid, free_bw_index));
+
+ bw_piomap = hubinfo_bwin_piomap_get(hubinfo, free_bw_index);
+ bw_piomap->hpio_xtalk_info.xp_dev = dev;
+ bw_piomap->hpio_xtalk_info.xp_target = widget;
+ bw_piomap->hpio_xtalk_info.xp_xtalk_addr = xtalk_addr;
+ kvaddr = (caddr_t)NODE_BWIN_BASE(nasid, free_bw_index);
+#ifdef PIOMAP_UNC_ACC_SPACE
+ if (flags & PIOMAP_UNC_ACC) {
+ addr = (uint64_t)kvaddr;
+ addr |= PIOMAP_UNC_ACC_SPACE;
+ kvaddr = (caddr_t)addr;
+ }
+#endif
+ bw_piomap->hpio_xtalk_info.xp_kvaddr = kvaddr;
+ bw_piomap->hpio_holdcnt++;
+ bw_piomap->hpio_bigwin_num = free_bw_index;
+
+ if (flags & PIOMAP_FIXED)
+ bw_piomap->hpio_flags |= HUB_PIOMAP_IS_VALID | HUB_PIOMAP_IS_FIXED;
+ else
+ bw_piomap->hpio_flags |= HUB_PIOMAP_IS_VALID;
+
+done:
+ spin_unlock(&hubinfo->h_bwlock);
+ return bw_piomap;
+}
+
+/*
+ * hub_piomap_free destroys a caddr_t-to-xtalk pio mapping and frees
+ * any associated mapping resources.
+ *
+ * If this piomap was handled with a small window, or if it was handled
+ * in a big window that's still in use by someone else, then there's
+ * nothing to do. On the other hand, if this mapping was handled
+ * with a big window, AND if we were the final user of that mapping,
+ * then destroy the mapping.
+ */
+void
+hub_piomap_free(hub_piomap_t hub_piomap)
+{
+ vertex_hdl_t hubv;
+ hubinfo_t hubinfo;
+ nasid_t nasid;
+
+ /*
+ * Small windows are permanently mapped to corresponding widgets,
+ * so there are no resources to free.
+ */
+ if (!(hub_piomap->hpio_flags & HUB_PIOMAP_IS_BIGWINDOW))
+ return;
+
+ ASSERT(hub_piomap->hpio_flags & HUB_PIOMAP_IS_VALID);
+ ASSERT(hub_piomap->hpio_holdcnt > 0);
+
+ hubv = hub_piomap->hpio_hub;
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+
+ spin_lock(&hubinfo->h_bwlock);
+
+ /*
+ * If this is the last hold on this mapping, free it.
+ */
+ if (--hub_piomap->hpio_holdcnt == 0) {
+ IIO_ITTE_DISABLE(nasid, hub_piomap->hpio_bigwin_num);
+
+ if (hub_piomap->hpio_flags & HUB_PIOMAP_IS_FIXED) {
+ hub_piomap->hpio_flags &= ~(HUB_PIOMAP_IS_VALID | HUB_PIOMAP_IS_FIXED);
+ hubinfo->h_num_big_window_fixed--;
+ ASSERT(hubinfo->h_num_big_window_fixed >= 0);
+ } else
+ hub_piomap->hpio_flags &= ~HUB_PIOMAP_IS_VALID;
+
+ wake_up(&hubinfo->h_bwwait);
+ }
+
+ spin_unlock(&hubinfo->h_bwlock);
+}
+
+/*
+ * Establish a mapping to a given xtalk address range using the resources
+ * allocated earlier.
+ */
+caddr_t
+hub_piomap_addr(hub_piomap_t hub_piomap, /* mapping resources */
+ iopaddr_t xtalk_addr, /* map for this xtalk address */
+ size_t byte_count) /* map this many bytes */
+{
+ /* Verify that range can be mapped using the specified piomap */
+ if (xtalk_addr < hub_piomap->hpio_xtalk_info.xp_xtalk_addr)
+ return 0;
+
+ if (xtalk_addr + byte_count >
+ ( hub_piomap->hpio_xtalk_info.xp_xtalk_addr +
+ hub_piomap->hpio_xtalk_info.xp_mapsz))
+ return 0;
+
+ if (hub_piomap->hpio_flags & HUB_PIOMAP_IS_VALID)
+ return hub_piomap->hpio_xtalk_info.xp_kvaddr +
+ (xtalk_addr % hub_piomap->hpio_xtalk_info.xp_mapsz);
+ else
+ return 0;
+}
+
+
+/*
+ * Driver indicates that it's done with PIO's from an earlier piomap_addr.
+ */
+/* ARGSUSED */
+void
+hub_piomap_done(hub_piomap_t hub_piomap) /* done with these mapping resources */
+{
+ /* Nothing to do */
+}
+
+
+/*
+ * For translations that require no mapping resources, supply a kernel virtual
+ * address that maps to the specified xtalk address range.
+ */
+/* ARGSUSED */
+caddr_t
+hub_piotrans_addr( vertex_hdl_t dev, /* translate to this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* Crosstalk address */
+ size_t byte_count, /* map this many bytes */
+ unsigned flags) /* (currently unused) */
+{
+ xwidget_info_t widget_info = xwidget_info_get(dev);
+ xwidgetnum_t widget = xwidget_info_id_get(widget_info);
+ vertex_hdl_t hubv = xwidget_info_master_get(widget_info);
+ hub_piomap_t hub_piomap;
+ hubinfo_t hubinfo;
+ caddr_t addr;
+
+ hubinfo_get(hubv, &hubinfo);
+
+ if (xtalk_addr + byte_count <= SWIN_SIZE) {
+ hub_piomap = hubinfo_swin_piomap_get(hubinfo, (int)widget);
+ addr = hub_piomap_addr(hub_piomap, xtalk_addr, byte_count);
+#ifdef PIOMAP_UNC_ACC_SPACE
+ if (flags & PIOMAP_UNC_ACC) {
+ uint64_t iaddr;
+ iaddr = (uint64_t)addr;
+ iaddr |= PIOMAP_UNC_ACC_SPACE;
+ addr = (caddr_t)iaddr;
+ }
+#endif
+ return addr;
+ } else
+ return 0;
+}
+
+
+/* DMA MANAGEMENT */
+/* Mapping from crosstalk space to system physical space */
+
+
+/*
+ * Allocate resources needed to set up DMA mappings up to a specified size
+ * on a specified adapter.
+ *
+ * We don't actually use the adapter ID for anything. It's just the adapter
+ * that the lower level driver plans to use for DMA.
+ */
+/* ARGSUSED */
+hub_dmamap_t
+hub_dmamap_alloc( vertex_hdl_t dev, /* set up mappings for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned flags) /* defined in dma.h */
+{
+ hub_dmamap_t dmamap;
+ xwidget_info_t widget_info = xwidget_info_get(dev);
+ xwidgetnum_t widget = xwidget_info_id_get(widget_info);
+ vertex_hdl_t hubv = xwidget_info_master_get(widget_info);
+
+ dmamap = kmalloc(sizeof(struct hub_dmamap_s), GFP_ATOMIC);
+ if (!dmamap)
+ return NULL;
+ dmamap->hdma_xtalk_info.xd_dev = dev;
+ dmamap->hdma_xtalk_info.xd_target = widget;
+ dmamap->hdma_hub = hubv;
+ dmamap->hdma_flags = HUB_DMAMAP_IS_VALID;
+ if (flags & XTALK_FIXED)
+ dmamap->hdma_flags |= HUB_DMAMAP_IS_FIXED;
+
+ return dmamap;
+}
+
+/*
+ * Destroy a DMA mapping from crosstalk space to system address space.
+ * There is no actual mapping hardware to destroy, but we at least mark
+ * the dmamap INVALID and free the space that it took.
+ */
+void
+hub_dmamap_free(hub_dmamap_t hub_dmamap)
+{
+ hub_dmamap->hdma_flags &= ~HUB_DMAMAP_IS_VALID;
+ kfree(hub_dmamap);
+}
+
+/*
+ * Establish a DMA mapping using the resources allocated in a previous dmamap_alloc.
+ * Return an appropriate crosstalk address range that maps to the specified physical
+ * address range.
+ */
+/* ARGSUSED */
+extern iopaddr_t
+hub_dmamap_addr( hub_dmamap_t dmamap, /* use these mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count) /* map this many bytes */
+{
+ vertex_hdl_t vhdl;
+
+ ASSERT(dmamap->hdma_flags & HUB_DMAMAP_IS_VALID);
+
+ if (dmamap->hdma_flags & HUB_DMAMAP_USED) {
+ /* If the map is FIXED, re-use is OK. */
+ if (!(dmamap->hdma_flags & HUB_DMAMAP_IS_FIXED)) {
+ char name[MAXDEVNAME];
+ vhdl = dmamap->hdma_xtalk_info.xd_dev;
+ printk(KERN_WARNING "%s: hub_dmamap_addr re-uses dmamap.\n", vertex_to_name(vhdl, name, MAXDEVNAME));
+ }
+ } else {
+ dmamap->hdma_flags |= HUB_DMAMAP_USED;
+ }
+
+ /* There isn't actually any DMA mapping hardware on the hub. */
+ return (PHYS_TO_DMA(paddr));
+}
+
+/*
+ * Driver indicates that it has completed whatever DMA it may have started
+ * after an earlier dmamap_addr call.
+ */
+void
+hub_dmamap_done(hub_dmamap_t hub_dmamap) /* done with these mapping resources */
+{
+ vertex_hdl_t vhdl;
+
+ if (hub_dmamap->hdma_flags & HUB_DMAMAP_USED) {
+ hub_dmamap->hdma_flags &= ~HUB_DMAMAP_USED;
+ } else {
+ /* If the map is FIXED, re-done is OK. */
+ if (!(hub_dmamap->hdma_flags & HUB_DMAMAP_IS_FIXED)) {
+ char name[MAXDEVNAME];
+ vhdl = hub_dmamap->hdma_xtalk_info.xd_dev;
+ printk(KERN_WARNING "%s: hub_dmamap_done already done with dmamap\n", vertex_to_name(vhdl, name, MAXDEVNAME));
+ }
+ }
+}
+
+/*
+ * Translate a single system physical address into a crosstalk address.
+ */
+/* ARGSUSED */
+iopaddr_t
+hub_dmatrans_addr( vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned flags) /* defined in dma.h */
+{
+ return (PHYS_TO_DMA(paddr));
+}
+
+/*ARGSUSED*/
+void
+hub_dmamap_drain( hub_dmamap_t map)
+{
+ /* XXX- flush caches, if cache coherency WAR is needed */
+}
+
+/*ARGSUSED*/
+void
+hub_dmaaddr_drain( vertex_hdl_t vhdl,
+ paddr_t addr,
+ size_t bytes)
+{
+ /* XXX- flush caches, if cache coherency WAR is needed */
+}
+
+
+/* CONFIGURATION MANAGEMENT */
+
+/*
+ * Perform initializations that allow this hub to start crosstalk support.
+ */
+void
+hub_provider_startup(vertex_hdl_t hubv)
+{
+ hubinfo_t hubinfo;
+
+ hubinfo_get(hubv, &hubinfo);
+ hub_pio_init(hubv);
+ intr_init_vecblk(nasid_to_cnodeid(hubinfo->h_nasid));
+}
+
+/*
+ * Shutdown crosstalk support from a hub.
+ */
+void
+hub_provider_shutdown(vertex_hdl_t hub)
+{
+ /* TBD */
+ xtalk_provider_unregister(hub);
+}
+
+/*
+ * Check that an address is in the real small window widget 0 space
+ * or else in the big window we're using to emulate small window 0
+ * in the kernel.
+ */
+int
+hub_check_is_widget0(void *addr)
+{
+ nasid_t nasid = NASID_GET(addr);
+
+ if (((unsigned long)addr >= RAW_NODE_SWIN_BASE(nasid, 0)) &&
+ ((unsigned long)addr < RAW_NODE_SWIN_BASE(nasid, 1)))
+ return 1;
+ return 0;
+}
+
+
+/*
+ * Check that two addresses use the same widget
+ */
+int
+hub_check_window_equiv(void *addra, void *addrb)
+{
+ if (hub_check_is_widget0(addra) && hub_check_is_widget0(addrb))
+ return 1;
+
+ /* XXX - Assume this is really a small window address */
+ if (WIDGETID_GET((unsigned long)addra) ==
+ WIDGETID_GET((unsigned long)addrb))
+ return 1;
+
+ return 0;
+}
+
+
+/*
+ * hub_setup_prb(nasid, prbnum, credits, conveyor)
+ *
+ * Put a PRB into fire-and-forget mode if conveyor isn't set. Otherwise,
+ * put it into conveyor belt mode with the specified number of credits.
+ */
+void
+hub_setup_prb(nasid_t nasid, int prbnum, int credits, int conveyor)
+{
+ iprb_t prb;
+ int prb_offset;
+
+ if (force_fire_and_forget && !ignore_conveyor_override)
+ if (conveyor == HUB_PIO_CONVEYOR)
+ conveyor = HUB_PIO_FIRE_N_FORGET;
+
+ /*
+ * Get the current register value.
+ */
+ prb_offset = IIO_IOPRB(prbnum);
+ prb.iprb_regval = REMOTE_HUB_L(nasid, prb_offset);
+
+ /*
+ * Clear out some fields.
+ */
+ prb.iprb_ovflow = 1;
+ prb.iprb_bnakctr = 0;
+ prb.iprb_anakctr = 0;
+
+ /*
+ * Enable or disable fire-and-forget mode.
+ */
+ prb.iprb_ff = ((conveyor == HUB_PIO_CONVEYOR) ? 0 : 1);
+
+ /*
+ * Set the appropriate number of PIO credits for the widget.
+ */
+ prb.iprb_xtalkctr = credits;
+
+ /*
+ * Store the new value to the register.
+ */
+ REMOTE_HUB_S(nasid, prb_offset, prb.iprb_regval);
+}
+
+/*
+ * hub_set_piomode()
+ *
+ * Put the hub into either "PIO conveyor belt" mode or "fire-and-forget"
+ * mode. To do this, we have to make absolutely sure that no PIOs
+ * are in progress so we turn off access to all widgets for the duration
+ * of the function.
+ *
+ * XXX - This code should really check what kind of widget we're talking
+ * to. Bridges can only handle three requests, but XG will do more.
+ * How many can crossbow handle to widget 0? We're assuming 1.
+ *
+ * XXX - There is a bug in the crossbow that link reset PIOs do not
+ * return write responses. The easiest solution to this problem is to
+ * leave widget 0 (xbow) in fire-and-forget mode at all times. This
+ * only affects PIOs to xbow registers, which should be rare.
+ */
+void
+hub_set_piomode(nasid_t nasid, int conveyor)
+{
+ hubreg_t ii_iowa;
+ int direct_connect;
+ hubii_wcr_t ii_wcr;
+ int prbnum;
+
+ ASSERT(nasid_to_cnodeid(nasid) != INVALID_CNODEID);
+
+ ii_iowa = REMOTE_HUB_L(nasid, IIO_OUTWIDGET_ACCESS);
+ REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, 0);
+
+ ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid, IIO_WCR);
+ direct_connect = ii_wcr.iwcr_dir_con;
+
+ if (direct_connect) {
+ /*
+ * Assume a bridge here.
+ */
+ hub_setup_prb(nasid, 0, 3, conveyor);
+ } else {
+ /*
+ * Assume a crossbow here.
+ */
+ hub_setup_prb(nasid, 0, 1, conveyor);
+ }
+
+ for (prbnum = HUB_WIDGET_ID_MIN; prbnum <= HUB_WIDGET_ID_MAX; prbnum++) {
+ /*
+ * XXX - Here's where we should take the widget type into
+ * account when assigning credits.
+ */
+ /* Set up this widget's PRB with the requested PIO mode */
+ hub_setup_prb(nasid, prbnum, 3, conveyor);
+ }
+
+ REMOTE_HUB_S(nasid, IIO_OUTWIDGET_ACCESS, ii_iowa);
+}
+/* Interface to allow special drivers to set hub-specific
+ * device flags.
+ * Return 0 on failure, 1 on success
+ */
+int
+hub_widget_flags_set(nasid_t nasid,
+ xwidgetnum_t widget_num,
+ hub_widget_flags_t flags)
+{
+
+ ASSERT((flags & HUB_WIDGET_FLAGS) == flags);
+
+ if (flags & HUB_PIO_CONVEYOR) {
+ hub_setup_prb(nasid,widget_num,
+ 3,HUB_PIO_CONVEYOR); /* set the PRB in conveyor
+ * belt mode with 3 credits
+ */
+ } else if (flags & HUB_PIO_FIRE_N_FORGET) {
+ hub_setup_prb(nasid,widget_num,
+ 3,HUB_PIO_FIRE_N_FORGET); /* set the PRB in fire
+ * and forget mode
+ */
+ }
+
+ return 1;
+}
+
+/*
+ * A pointer to this structure hangs off of every hub hwgraph vertex.
+ * The generic xtalk layer may indirect through it to get to this specific
+ * crosstalk bus provider.
+ */
+xtalk_provider_t hub_provider = {
+ .piomap_alloc = (xtalk_piomap_alloc_f *) hub_piomap_alloc,
+ .piomap_free = (xtalk_piomap_free_f *) hub_piomap_free,
+ .piomap_addr = (xtalk_piomap_addr_f *) hub_piomap_addr,
+ .piomap_done = (xtalk_piomap_done_f *) hub_piomap_done,
+ .piotrans_addr = (xtalk_piotrans_addr_f *) hub_piotrans_addr,
+
+ .dmamap_alloc = (xtalk_dmamap_alloc_f *) hub_dmamap_alloc,
+ .dmamap_free = (xtalk_dmamap_free_f *) hub_dmamap_free,
+ .dmamap_addr = (xtalk_dmamap_addr_f *) hub_dmamap_addr,
+ .dmamap_done = (xtalk_dmamap_done_f *) hub_dmamap_done,
+ .dmatrans_addr = (xtalk_dmatrans_addr_f *) hub_dmatrans_addr,
+ .dmamap_drain = (xtalk_dmamap_drain_f *) hub_dmamap_drain,
+ .dmaaddr_drain = (xtalk_dmaaddr_drain_f *) hub_dmaaddr_drain,
+
+ .intr_alloc = (xtalk_intr_alloc_f *) hub_intr_alloc,
+ .intr_alloc_nothd = (xtalk_intr_alloc_f *) hub_intr_alloc_nothd,
+ .intr_free = (xtalk_intr_free_f *) hub_intr_free,
+ .intr_connect = (xtalk_intr_connect_f *) hub_intr_connect,
+ .intr_disconnect = (xtalk_intr_disconnect_f *) hub_intr_disconnect,
+ .provider_startup = (xtalk_provider_startup_f *) hub_provider_startup,
+ .provider_shutdown = (xtalk_provider_shutdown_f *) hub_provider_shutdown,
+};
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 io routines.
+
+obj-y += pci.o pci_dma.o pci_bus_cvlink.o iomv.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+#include <asm/sn/simulator.h>
+#include <asm/sn/pda.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/sn2/shub_mmr.h>
+
+/**
+ * sn_io_addr - convert an in/out port to an i/o address
+ * @port: port to convert
+ *
+ * Legacy in/out instructions are converted to ld/st instructions
+ * on IA64. This routine will convert a port number into a valid
+ * SN i/o address. Used by sn_in*() and sn_out*().
+ */
+void *
+sn_io_addr(unsigned long port)
+{
+ if (!IS_RUNNING_ON_SIMULATOR()) {
+ /* On sn2, legacy I/O ports don't point at anything */
+ if (port < 64*1024)
+ return NULL;
+ return (void *)(port | __IA64_UNCACHED_OFFSET);
+ } else {
+ /* but the simulator uses them... */
+ unsigned long io_base;
+ unsigned long addr;
+
+ /*
+ * word align port, but need more than 10 bits
+ * for accessing registers in bedrock local block
+ * (so we don't do port&0xfff)
+ */
+ if ((port >= 0x1f0 && port <= 0x1f7) ||
+ port == 0x3f6 || port == 0x3f7) {
+ io_base = (0xc000000fcc000000 | ((unsigned long)get_nasid() << 38));
+ addr = io_base | ((port >> 2) << 12) | (port & 0xfff);
+ } else {
+ addr = __ia64_get_io_port_base() | ((port >> 2) << 2);
+ }
+ return(void *) addr;
+ }
+}
+
+EXPORT_SYMBOL(sn_io_addr);
+
+/**
+ * sn_mmiob - I/O space memory barrier
+ *
+ * Acts as a memory mapped I/O barrier for platforms that queue writes to
+ * I/O space. This ensures that subsequent writes to I/O space arrive after
+ * all previous writes. For most ia64 platforms, this is a simple
+ * 'mf.a' instruction. For other platforms, mmiob() may have to read
+ * a chipset register to ensure ordering.
+ *
+ * On SN2, we wait for the PIO_WRITE_STATUS SHub register to clear.
+ * See PV 871084 for details of the workaround for the zero-value case.
+ *
+ */
+void
+sn_mmiob (void)
+{
+ while ((*(volatile unsigned long *)pda->pio_write_status_addr &
+ SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK) !=
+ SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK)
+ cpu_relax();
+}
+EXPORT_SYMBOL(sn_mmiob);
--- /dev/null
+/*
+ * SNI64 specific PCI support for SNI IO.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1997, 1998, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#include <asm/sn/hcl.h>
+#include <asm/sn/pci/pcibr_private.h>
+
+/*
+ * These routines are only used during sn_pci_init for probing each bus, and
+ * can probably be removed with a little more cleanup now that the SAL routines
+ * work on sn2.
+ */
+
+extern vertex_hdl_t devfn_to_vertex(unsigned char bus, unsigned char devfn);
+
+int sn_read_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *val)
+{
+ unsigned long res = 0;
+ vertex_hdl_t device_vertex;
+
+ device_vertex = devfn_to_vertex(bus->number, devfn);
+
+ if (!device_vertex)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ res = pciio_config_get(device_vertex, (unsigned)where, size);
+ *val = (u32)res;
+ return PCIBIOS_SUCCESSFUL;
+}
+
+int sn_write_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 val)
+{
+ vertex_hdl_t device_vertex;
+
+ device_vertex = devfn_to_vertex(bus->number, devfn);
+
+ if (!device_vertex)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ pciio_config_set(device_vertex, (unsigned)where, size, (uint64_t)val);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+struct pci_ops sn_pci_ops = {
+ .read = sn_read_config,
+ .write = sn_write_config,
+};
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/vmalloc.h>
+#include <linux/slab.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/pci/pci_bus_cvlink.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/simulator.h>
+
+extern int bridge_rev_b_data_check_disable;
+
+vertex_hdl_t busnum_to_pcibr_vhdl[MAX_PCI_XWIDGET];
+nasid_t busnum_to_nid[MAX_PCI_XWIDGET];
+void * busnum_to_atedmamaps[MAX_PCI_XWIDGET];
+unsigned char num_bridges;
+static int done_probing;
+extern irqpda_t *irqpdaindr;
+
+static int pci_bus_map_create(struct pcibr_list_s *softlistp, moduleid_t io_moduleid);
+vertex_hdl_t devfn_to_vertex(unsigned char busnum, unsigned int devfn);
+
+extern void register_pcibr_intr(int irq, pcibr_intr_t intr);
+
+static struct sn_flush_device_list *sn_dma_flush_init(unsigned long start,
+ unsigned long end,
+ int idx, int pin, int slot);
+extern int cbrick_type_get_nasid(nasid_t);
+extern void ioconfig_bus_new_entries(void);
+extern void ioconfig_get_busnum(char *, int *);
+extern int iomoduleid_get(nasid_t);
+extern int pcibr_widget_to_bus(vertex_hdl_t);
+extern int isIO9(int);
+
+#define IS_OPUS(nasid) (cbrick_type_get_nasid(nasid) == MODULE_OPUSBRICK)
+#define IS_ALTIX(nasid) (cbrick_type_get_nasid(nasid) == MODULE_CBRICK)
+
+/*
+ * Init the provider asic for a given device
+ */
+
+static inline void __init
+set_pci_provider(struct sn_device_sysdata *device_sysdata)
+{
+ pciio_info_t pciio_info = pciio_info_get(device_sysdata->vhdl);
+
+ device_sysdata->pci_provider = pciio_info_pops_get(pciio_info);
+}
+
+/*
+ * pci_bus_cvlink_init() - To be called once during initialization before
+ * SGI IO Infrastructure init is called.
+ */
+int
+pci_bus_cvlink_init(void)
+{
+
+ extern int ioconfig_bus_init(void);
+
+ memset(busnum_to_pcibr_vhdl, 0x0, sizeof(vertex_hdl_t) * MAX_PCI_XWIDGET);
+ memset(busnum_to_nid, 0x0, sizeof(nasid_t) * MAX_PCI_XWIDGET);
+
+ memset(busnum_to_atedmamaps, 0x0, sizeof(void *) * MAX_PCI_XWIDGET);
+
+ num_bridges = 0;
+
+ return ioconfig_bus_init();
+}
+
+/*
+ * pci_bus_to_vertex() - Given a logical Linux bus number, return the
+ * associated pci bus vertex from the SGI IO Infrastructure.
+ */
+static inline vertex_hdl_t
+pci_bus_to_vertex(unsigned char busnum)
+{
+
+ vertex_hdl_t pci_bus = NULL;
+
+
+ /*
+ * First get the xwidget vertex.
+ */
+ pci_bus = busnum_to_pcibr_vhdl[busnum];
+ return(pci_bus);
+}
+
+/*
+ * devfn_to_vertex() - returns the vertex of the device given the bus, slot,
+ * and function numbers.
+ */
+vertex_hdl_t
+devfn_to_vertex(unsigned char busnum, unsigned int devfn)
+{
+
+ int slot = 0;
+ int func = 0;
+ char name[16];
+ vertex_hdl_t pci_bus = NULL;
+ vertex_hdl_t device_vertex = (vertex_hdl_t)NULL;
+
+ /*
+ * Go get the pci bus vertex.
+ */
+ pci_bus = pci_bus_to_vertex(busnum);
+ if (!pci_bus) {
+ /*
+ * During probing, the Linux pci code invents non-existent
+ * bus numbers and pci_dev structures and tries to access
+ * them to determine existence. Don't complain during probing.
+ */
+ if (done_probing)
+ printk("devfn_to_vertex: Invalid bus number %d given.\n", busnum);
+ return(NULL);
+ }
+
+
+ /*
+ * Go get the slot&function vertex.
+ * Should call pciio_slot_func_to_name() when ready.
+ */
+ slot = PCI_SLOT(devfn);
+ func = PCI_FUNC(devfn);
+
+ /*
+ * For a non-multifunction card, the device name looks like:
+ * ../pci/1, ../pci/2, etc.
+ */
+ if (func == 0) {
+ sprintf(name, "%d", slot);
+ if (hwgraph_traverse(pci_bus, name, &device_vertex) ==
+ GRAPH_SUCCESS) {
+ if (device_vertex) {
+ return(device_vertex);
+ }
+ }
+ }
+
+ /*
+ * This may be a multifunction card. Its names look like:
+ * ../pci/1a, ../pci/1b, etc.
+ */
+ sprintf(name, "%d%c", slot, 'a'+func);
+ if (hwgraph_traverse(pci_bus, name, &device_vertex) != GRAPH_SUCCESS) {
+ if (!device_vertex) {
+ return(NULL);
+ }
+ }
+
+ return(device_vertex);
+}
+
+/*
+ * sn_alloc_pci_sysdata() - Allocate the pci_controller structure that the
+ * Linux PCI infrastructure expects as the pci_dev and pci_bus sysdata.
+ */
+static struct pci_controller *
+sn_alloc_pci_sysdata(void)
+{
+ struct pci_controller *pci_sysdata;
+
+ pci_sysdata = kmalloc(sizeof(*pci_sysdata), GFP_KERNEL);
+ if (!pci_sysdata)
+ return NULL;
+
+ memset(pci_sysdata, 0, sizeof(*pci_sysdata));
+ return pci_sysdata;
+}
+
+/*
+ * sn_pci_fixup_bus() - This routine sets up a bus's resources
+ * consistent with the Linux PCI abstraction layer.
+ */
+static int __init
+sn_pci_fixup_bus(struct pci_bus *bus)
+{
+ struct pci_controller *pci_sysdata;
+ struct sn_widget_sysdata *widget_sysdata;
+
+ pci_sysdata = sn_alloc_pci_sysdata();
+ if (!pci_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_bus(): Unable to "
+ "allocate memory for pci_sysdata\n");
+ return -ENOMEM;
+ }
+ widget_sysdata = kmalloc(sizeof(struct sn_widget_sysdata),
+ GFP_KERNEL);
+ if (!widget_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_bus(): Unable to "
+ "allocate memory for widget_sysdata\n");
+ kfree(pci_sysdata);
+ return -ENOMEM;
+ }
+
+ widget_sysdata->vhdl = pci_bus_to_vertex(bus->number);
+ pci_sysdata->platform_data = (void *)widget_sysdata;
+ bus->sysdata = pci_sysdata;
+ return 0;
+}
+
+
+/*
+ * sn_pci_fixup_slot() - This routine sets up a slot's resources
+ * consistent with the Linux PCI abstraction layer. Resources acquired
+ * from our PCI provider include PIO maps to BAR space and interrupt
+ * objects.
+ */
+static int
+sn_pci_fixup_slot(struct pci_dev *dev)
+{
+ extern int bit_pos_to_irq(int);
+ unsigned int irq;
+ int idx;
+ u16 cmd;
+ vertex_hdl_t vhdl;
+ unsigned long size;
+ struct pci_controller *pci_sysdata;
+ struct sn_device_sysdata *device_sysdata;
+ pciio_intr_line_t lines = 0;
+ vertex_hdl_t device_vertex;
+ pciio_provider_t *pci_provider;
+ pciio_intr_t intr_handle;
+
+ /* Allocate a controller structure */
+ pci_sysdata = sn_alloc_pci_sysdata();
+ if (!pci_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_slot: Unable to "
+ "allocate memory for pci_sysdata\n");
+ return -ENOMEM;
+ }
+
+ /* Set the device vertex */
+ device_sysdata = kmalloc(sizeof(struct sn_device_sysdata), GFP_KERNEL);
+ if (!device_sysdata) {
+ printk(KERN_WARNING "sn_pci_fixup_slot: Unable to "
+ "allocate memory for device_sysdata\n");
+ kfree(pci_sysdata);
+ return -ENOMEM;
+ }
+
+ device_sysdata->vhdl = devfn_to_vertex(dev->bus->number, dev->devfn);
+ pci_sysdata->platform_data = (void *) device_sysdata;
+ dev->sysdata = pci_sysdata;
+ set_pci_provider(device_sysdata);
+
+ pci_read_config_word(dev, PCI_COMMAND, &cmd);
+
+ /*
+ * Set the resource addresses correctly. The assumption here
+ * is that the addresses in the resource structure have been
+ * read from the card and were set in the card by our
+ * Infrastructure. NOTE: PIC and TIOCP don't have big-window
+ * support for PCI I/O space, so by mapping the I/O space
+ * first we will attempt to use Device(x) registers for I/O
+ * BARs (which can't use big windows like MEM BARs can).
+ */
+ vhdl = device_sysdata->vhdl;
+
+ /* Allocate the IORESOURCE_IO space first */
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ unsigned long start, end, addr;
+
+ device_sysdata->pio_map[idx] = NULL;
+
+ if (!(dev->resource[idx].flags & IORESOURCE_IO))
+ continue;
+
+ start = dev->resource[idx].start;
+ end = dev->resource[idx].end;
+ size = end - start;
+ if (!size)
+ continue;
+
+ addr = (unsigned long)pciio_pio_addr(vhdl, 0,
+ PCIIO_SPACE_WIN(idx), 0, size,
+ &device_sysdata->pio_map[idx], 0);
+
+ if (!addr) {
+ dev->resource[idx].start = 0;
+ dev->resource[idx].end = 0;
+ printk("sn_pci_fixup(): pio map failure for "
+ "%s bar%d\n", dev->slot_name, idx);
+ } else {
+ addr |= __IA64_UNCACHED_OFFSET;
+ dev->resource[idx].start = addr;
+ dev->resource[idx].end = addr + size;
+ }
+
+ if (dev->resource[idx].flags & IORESOURCE_IO)
+ cmd |= PCI_COMMAND_IO;
+ }
+
+ /* Allocate the IORESOURCE_MEM space next */
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ unsigned long start, end, addr;
+
+ if ((dev->resource[idx].flags & IORESOURCE_IO))
+ continue;
+
+ start = dev->resource[idx].start;
+ end = dev->resource[idx].end;
+ size = end - start;
+ if (!size)
+ continue;
+
+ addr = (unsigned long)pciio_pio_addr(vhdl, 0,
+ PCIIO_SPACE_WIN(idx), 0, size,
+ &device_sysdata->pio_map[idx], 0);
+
+ if (!addr) {
+ dev->resource[idx].start = 0;
+ dev->resource[idx].end = 0;
+ printk("sn_pci_fixup(): pio map failure for "
+ "%s bar%d\n", dev->slot_name, idx);
+ } else {
+ addr |= __IA64_UNCACHED_OFFSET;
+ dev->resource[idx].start = addr;
+ dev->resource[idx].end = addr + size;
+ }
+
+ if (dev->resource[idx].flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+
+ /*
+ * Assign addresses to the ROMs, but don't enable them yet
+ * Also note that we only map display card ROMs due to PIO mapping
+ * space scarcity.
+ */
+ if ((dev->class >> 16) == PCI_BASE_CLASS_DISPLAY) {
+ unsigned long addr;
+ size = dev->resource[PCI_ROM_RESOURCE].end -
+ dev->resource[PCI_ROM_RESOURCE].start;
+
+ if (size) {
+ addr = (unsigned long) pciio_pio_addr(vhdl, 0,
+ PCIIO_SPACE_ROM,
+ 0, size, 0, PIOMAP_FIXED);
+ if (!addr) {
+ dev->resource[PCI_ROM_RESOURCE].start = 0;
+ dev->resource[PCI_ROM_RESOURCE].end = 0;
+ printk("sn_pci_fixup(): ROM pio map failure "
+ "for %s\n", dev->slot_name);
+ } else {
+ addr |= __IA64_UNCACHED_OFFSET;
+ dev->resource[PCI_ROM_RESOURCE].start = addr;
+ dev->resource[PCI_ROM_RESOURCE].end = addr + size;
+ if (dev->resource[PCI_ROM_RESOURCE].flags & IORESOURCE_MEM)
+ cmd |= PCI_COMMAND_MEMORY;
+ }
+ }
+
+ /*
+ * Update the Command Word on the Card.
+ */
+ cmd |= PCI_COMMAND_MASTER; /* If the device doesn't support bus
+ * mastering, the bit is dropped; no harm */
+ pci_write_config_word(dev, PCI_COMMAND, cmd);
+
+ pci_read_config_byte(dev, PCI_INTERRUPT_PIN, (unsigned char *)&lines);
+ device_vertex = device_sysdata->vhdl;
+ pci_provider = device_sysdata->pci_provider;
+ device_sysdata->intr_handle = NULL;
+
+ if (!lines)
+ return 0;
+
+ irqpdaindr->curr = dev;
+
+ intr_handle = (pci_provider->intr_alloc)(device_vertex, NULL, lines, device_vertex);
+ if (intr_handle == NULL) {
+ printk(KERN_WARNING "sn_pci_fixup: pcibr_intr_alloc() failed\n");
+ kfree(pci_sysdata);
+ kfree(device_sysdata);
+ return -ENOMEM;
+ }
+
+ device_sysdata->intr_handle = intr_handle;
+ irq = intr_handle->pi_irq;
+ irqpdaindr->device_dev[irq] = dev;
+ (pci_provider->intr_connect)(intr_handle, (intr_func_t)0, (intr_arg_t)0);
+ dev->irq = irq;
+
+ register_pcibr_intr(irq, (pcibr_intr_t)intr_handle);
+
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ int ibits = ((pcibr_intr_t)intr_handle)->bi_ibits;
+ int i;
+
+ size = dev->resource[idx].end -
+ dev->resource[idx].start;
+ if (size == 0) continue;
+
+ for (i=0; i<8; i++) {
+ if (ibits & (1 << i) ) {
+ extern pcibr_info_t pcibr_info_get(vertex_hdl_t);
+ device_sysdata->dma_flush_list =
+ sn_dma_flush_init(dev->resource[idx].start,
+ dev->resource[idx].end,
+ idx,
+ i,
+ PCIBR_INFO_SLOT_GET_EXT(pcibr_info_get(device_sysdata->vhdl)));
+ }
+ }
+ }
+ return 0;
+}
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+
+void
+sn_dma_flush_clear(struct sn_flush_device_list *dma_flush_list,
+ unsigned long start, unsigned long end)
+{
+
+ int i;
+
+ dma_flush_list->pin = -1;
+ dma_flush_list->bus = -1;
+ dma_flush_list->slot = -1;
+
+ for (i = 0; i < PCI_ROM_RESOURCE; i++)
+ if ((dma_flush_list->bar_list[i].start == start) &&
+ (dma_flush_list->bar_list[i].end == end)) {
+ dma_flush_list->bar_list[i].start = 0;
+ dma_flush_list->bar_list[i].end = 0;
+ break;
+ }
+
+}
+
+/*
+ * sn_pci_unfixup_slot() - This routine frees a slot's resources
+ * consistent with the Linux PCI abstraction layer. Resources released
+ * back to our PCI provider include PIO maps to BAR space and interrupt
+ * objects.
+ */
+void
+sn_pci_unfixup_slot(struct pci_dev *dev)
+{
+ struct sn_device_sysdata *device_sysdata;
+ vertex_hdl_t vhdl;
+ pciio_intr_t intr_handle;
+ unsigned int irq;
+ unsigned long size;
+ int idx;
+
+ device_sysdata = SN_DEVICE_SYSDATA(dev);
+
+ vhdl = device_sysdata->vhdl;
+
+ if (device_sysdata->dma_flush_list)
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ size = dev->resource[idx].end -
+ dev->resource[idx].start;
+ if (size == 0) continue;
+
+ sn_dma_flush_clear(device_sysdata->dma_flush_list,
+ dev->resource[idx].start,
+ dev->resource[idx].end);
+ }
+
+ intr_handle = device_sysdata->intr_handle;
+ if (intr_handle) {
+ extern void unregister_pcibr_intr(int, pcibr_intr_t);
+ irq = intr_handle->pi_irq;
+ irqpdaindr->device_dev[irq] = NULL;
+ unregister_pcibr_intr(irq, (pcibr_intr_t) intr_handle);
+ pciio_intr_disconnect(intr_handle);
+ pciio_intr_free(intr_handle);
+ }
+
+ for (idx = 0; idx < PCI_ROM_RESOURCE; idx++) {
+ if (device_sysdata->pio_map[idx]) {
+ pciio_piomap_done (device_sysdata->pio_map[idx]);
+ pciio_piomap_free (device_sysdata->pio_map[idx]);
+ }
+ }
+
+}
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+
+struct sn_flush_nasid_entry flush_nasid_list[MAX_NASIDS];
+
+/*
+ * Initialize the data structures used for flushing write buffers after a PIO read.
+ * The theory: take an unused interrupt pin and associate it with a pin that is in
+ * use. After a PIO read, force an interrupt on the unused pin, forcing a write
+ * buffer flush on the in-use pin. This prevents the race condition between PIO
+ * read responses and DMA writes.
+ */
+static struct sn_flush_device_list *
+sn_dma_flush_init(unsigned long start, unsigned long end, int idx, int pin, int slot)
+{
+ nasid_t nasid;
+ unsigned long dnasid;
+ int wid_num;
+ int bus;
+ struct sn_flush_device_list *p;
+ void *b;
+ int bwin;
+ int i;
+
+ nasid = NASID_GET(start);
+ wid_num = SWIN_WIDGETNUM(start);
+ bus = (start >> 23) & 0x1;
+ bwin = BWIN_WINDOWNUM(start);
+
+ if (flush_nasid_list[nasid].widget_p == NULL) {
+ flush_nasid_list[nasid].widget_p = (struct sn_flush_device_list **)kmalloc((HUB_WIDGET_ID_MAX+1) *
+ sizeof(struct sn_flush_device_list *), GFP_KERNEL);
+ if (!flush_nasid_list[nasid].widget_p) {
+ printk(KERN_WARNING "sn_dma_flush_init: Cannot allocate memory for nasid list\n");
+ return NULL;
+ }
+ memset(flush_nasid_list[nasid].widget_p, 0, (HUB_WIDGET_ID_MAX+1) * sizeof(struct sn_flush_device_list *));
+ }
+ if (bwin > 0) {
+ int itte_index = bwin - 1;
+ unsigned long itte;
+
+ itte = HUB_L(IIO_ITTE_GET(nasid, itte_index));
+ flush_nasid_list[nasid].iio_itte[bwin] = itte;
+ wid_num = (itte >> IIO_ITTE_WIDGET_SHIFT)
+ & IIO_ITTE_WIDGET_MASK;
+ bus = itte & IIO_ITTE_OFFSET_MASK;
+ if (bus == 0x4 || bus == 0x8) {
+ bus = 0;
+ } else {
+ bus = 1;
+ }
+ }
+
+ /* If it's IO9, bus 1, we don't care about slots 1 and 4, since
+ * those are the IOC4 slots and we don't flush them.
+ */
+ if (isIO9(nasid) && bus == 0 && (slot == 1 || slot == 4)) {
+ return NULL;
+ }
+ if (flush_nasid_list[nasid].widget_p[wid_num] == NULL) {
+ flush_nasid_list[nasid].widget_p[wid_num] = (struct sn_flush_device_list *)kmalloc(
+ DEV_PER_WIDGET * sizeof (struct sn_flush_device_list), GFP_KERNEL);
+ if (!flush_nasid_list[nasid].widget_p[wid_num]) {
+ printk(KERN_WARNING "sn_dma_flush_init: Cannot allocate memory for nasid sub-list\n");
+ return NULL;
+ }
+ memset(flush_nasid_list[nasid].widget_p[wid_num], 0,
+ DEV_PER_WIDGET * sizeof (struct sn_flush_device_list));
+ p = &flush_nasid_list[nasid].widget_p[wid_num][0];
+ for (i=0; i<DEV_PER_WIDGET;i++) {
+ p->bus = -1;
+ p->pin = -1;
+ p->slot = -1;
+ p++;
+ }
+ }
+
+ p = &flush_nasid_list[nasid].widget_p[wid_num][0];
+ for (i=0;i<DEV_PER_WIDGET; i++) {
+ if (p->pin == pin && p->bus == bus && p->slot == slot) break;
+ if (p->pin < 0) {
+ p->pin = pin;
+ p->bus = bus;
+ p->slot = slot;
+ break;
+ }
+ p++;
+ }
+
+ for (i=0; i<PCI_ROM_RESOURCE; i++) {
+ if (p->bar_list[i].start == 0) {
+ p->bar_list[i].start = start;
+ p->bar_list[i].end = end;
+ break;
+ }
+ }
+ b = (void *)(NODE_SWIN_BASE(nasid, wid_num) | (bus << 23) );
+
+ /* If it's IO9, then slot 2 maps to slot 7 and slot 6 maps to slot 8.
+ * Seeing this is non-trivial. By drawing pictures, reading manuals, and
+ * talking to the hardware folks, we can see that on IO9 bus 1, slots 7 and 8
+ * are always unused. Further, since we short-circuit slots 1, 3, and 4 above,
+ * we only have to worry about the case when there is a card in slot 2. A
+ * multifunction card will also appear to be in slot 6 (from an interrupt
+ * point of view). That's the most we'll have to worry about. A four-function
+ * card will overload the interrupt lines in slots 2 and 6.
+ * We also need to special-case the 12160 device in slot 3. Fortunately, we
+ * have a spare interrupt line for pin 4, so we use that for the 12160.
+ * All other buses have slots 3, 4, 7, and 8 unused. Since we can only see
+ * slots 1 and 2 and slots 5 and 6 coming through here for those buses (this
+ * is true only on Pxbricks with 2 physical slots per bus), we just need to
+ * add 2 to the slot number to find an unused slot.
+ * We have convinced ourselves that we will never see a case where two
+ * different cards in two different slots share an interrupt line, so there
+ * is no need to special-case this.
+ */
+
+ if (isIO9(nasid) && ( (IS_ALTIX(nasid) && wid_num == 0xc)
+ || (IS_OPUS(nasid) && wid_num == 0xf) )
+ && bus == 0) {
+ if (pin == 1) {
+ p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, 6);
+ pcireg_bridge_intr_device_bit_set(b, (1<<18));
+ dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
+ pcireg_bridge_intr_addr_set(b, 6, ((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
+ (dnasid << 36) | (0xfUL << 48)));
+ } else if (pin == 2) { /* 12160 SCSI device in IO9 */
+ p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, 4);
+ pcireg_bridge_intr_device_bit_set(b, (2<<12));
+ dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
+ pcireg_bridge_intr_addr_set(b, 4,
+ ((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
+ (dnasid << 36) | (0xfUL << 48)));
+ } else { /* slot == 6 */
+ p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, 7);
+ pcireg_bridge_intr_device_bit_set(b, (5<<21));
+ dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
+ pcireg_bridge_intr_addr_set(b, 7,
+ ((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
+ (dnasid << 36) | (0xfUL << 48)));
+ }
+ } else {
+ p->force_int_addr = (unsigned long)pcireg_bridge_force_always_addr_get(b, (pin +2));
+ pcireg_bridge_intr_device_bit_set(b, (pin << (pin * 3)));
+ dnasid = NASID_GET(virt_to_phys(&p->flush_addr));
+ pcireg_bridge_intr_addr_set(b, (pin + 2),
+ ((virt_to_phys(&p->flush_addr) & 0xfffffffff) |
+ (dnasid << 36) | (0xfUL << 48)));
+ }
+ return p;
+}
+
+
+/*
+ * linux_bus_cvlink() - Creates a link between a Linux PCI bus number
+ * and the actual hardware component that it represents:
+ * /dev/hw/linux/busnum/0 -> ../../../hw/module/001c01/slab/0/Ibrick/xtalk/15/pci
+ *
+ * The bus vertex, when passed to devfs_generate_path(), returns:
+ * hw/module/001c01/slab/0/Ibrick/xtalk/15/pci
+ * hw/module/001c01/slab/1/Pbrick/xtalk/12/pci-x/0
+ * hw/module/001c01/slab/1/Pbrick/xtalk/12/pci-x/1
+ */
+void
+linux_bus_cvlink(void)
+{
+ char name[8];
+ int index;
+
+ for (index=0; index < MAX_PCI_XWIDGET; index++) {
+ if (!busnum_to_pcibr_vhdl[index])
+ continue;
+
+ sprintf(name, "%x", index);
+ (void) hwgraph_edge_add(linux_busnum, busnum_to_pcibr_vhdl[index],
+ name);
+ }
+}
+
+/*
+ * pci_bus_map_create() - Called by pci_bus_to_hcl_cvlink() to finish the job.
+ *
+ * Linux PCI Bus numbers are assigned from lowest module_id numbers
+ * (rack/slot etc.)
+ */
+static int
+pci_bus_map_create(struct pcibr_list_s *softlistp, moduleid_t moduleid)
+{
+
+ int basebus_num, bus_number;
+ vertex_hdl_t pci_bus = softlistp->bl_vhdl;
+ char moduleid_str[16];
+
+ memset(moduleid_str, 0, 16);
+ format_module_id(moduleid_str, moduleid, MODULE_FORMAT_BRIEF);
+ (void) ioconfig_get_busnum((char *)moduleid_str, &basebus_num);
+
+ /*
+ * Assign the correct bus number and also the nasid of this
+ * pci Xwidget.
+ */
+ bus_number = basebus_num + pcibr_widget_to_bus(pci_bus);
+#ifdef DEBUG
+ {
+ char hwpath[MAXDEVNAME] = "\0";
+ extern int hwgraph_vertex_name_get(vertex_hdl_t, char *, uint);
+
+ pcibr_soft_t pcibr_soft = softlistp->bl_soft;
+ hwgraph_vertex_name_get(pci_bus, hwpath, MAXDEVNAME);
+ printk("%s:\n\tbus_num %d, basebus_num %d, brick_bus %d, "
+ "bus_vhdl 0x%lx, brick_type %d\n", hwpath, bus_number,
+ basebus_num, pcibr_widget_to_bus(pci_bus),
+ (uint64_t)pci_bus, pcibr_soft->bs_bricktype);
+ }
+#endif
+ busnum_to_pcibr_vhdl[bus_number] = pci_bus;
+
+ /*
+ * Preassign the DMA maps needed for 32-bit page-mapped DMA.
+ */
+ busnum_to_atedmamaps[bus_number] = (void *) vmalloc(
+ sizeof(struct pcibr_dmamap_s)*MAX_ATE_MAPS);
+ if (!busnum_to_atedmamaps[bus_number]) {
+ printk(KERN_WARNING "pci_bus_map_create: Cannot allocate memory for ate maps\n");
+ return -1;
+ }
+ memset(busnum_to_atedmamaps[bus_number], 0x0,
+ sizeof(struct pcibr_dmamap_s) * MAX_ATE_MAPS);
+ return(0);
+}
+
+/*
+ * pci_bus_to_hcl_cvlink() - This routine is called after SGI IO Infrastructure
+ * initialization has completed to set up the mappings between PCI BRIDGE
+ * ASIC and logical pci bus numbers.
+ *
+ * Must be called before pci_init() is invoked.
+ */
+int
+pci_bus_to_hcl_cvlink(void)
+{
+ int i;
+ extern pcibr_list_p pcibr_list;
+
+ for (i = 0; i < nummodules; i++) {
+ struct pcibr_list_s *softlistp = pcibr_list;
+ struct pcibr_list_s *first_in_list = NULL;
+ struct pcibr_list_s *last_in_list = NULL;
+
+ /* Walk the list of pcibr_soft structs looking for matches */
+ while (softlistp) {
+ struct pcibr_soft_s *pcibr_soft = softlistp->bl_soft;
+ moduleid_t moduleid;
+
+ /* Is this PCI bus associated with this moduleid? */
+ moduleid = NODE_MODULEID(
+ nasid_to_cnodeid(pcibr_soft->bs_nasid));
+ if (sn_modules[i]->id == moduleid) {
+ struct pcibr_list_s *new_element;
+
+ new_element = kmalloc(sizeof(struct pcibr_list_s), GFP_KERNEL);
+ if (new_element == NULL) {
+ printk(KERN_WARNING "%s: Couldn't allocate memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ new_element->bl_soft = softlistp->bl_soft;
+ new_element->bl_vhdl = softlistp->bl_vhdl;
+ new_element->bl_next = NULL;
+
+ /* list empty so just put it on the list */
+ if (first_in_list == NULL) {
+ first_in_list = new_element;
+ last_in_list = new_element;
+ softlistp = softlistp->bl_next;
+ continue;
+ }
+
+ /*
+ * BASEIO IObricks attached to a module have
+ * a higher priority than non-BASEIO IObricks
+ * when it comes to persistent pci bus
+ * numbering, so put them on the front of the
+ * list.
+ */
+ if (isIO9(pcibr_soft->bs_nasid)) {
+ new_element->bl_next = first_in_list;
+ first_in_list = new_element;
+ } else {
+ last_in_list->bl_next = new_element;
+ last_in_list = new_element;
+ }
+ }
+ softlistp = softlistp->bl_next;
+ }
+
+ /*
+ * We now have a list of all the pci bridges associated with
+ * the module_id, sn_modules[i]. Call pci_bus_map_create() for
+ * each pci bridge
+ */
+ softlistp = first_in_list;
+ while (softlistp) {
+ moduleid_t iobrick;
+ struct pcibr_list_s *next = softlistp->bl_next;
+ iobrick = iomoduleid_get(softlistp->bl_soft->bs_nasid);
+ pci_bus_map_create(softlistp, iobrick);
+ kfree(softlistp);
+ softlistp = next;
+ }
+ }
+
+ /*
+ * Create the Linux PCI bus number vertex link.
+ */
+ (void)linux_bus_cvlink();
+ (void)ioconfig_bus_new_entries();
+
+ return(0);
+}
+
+/*
+ * Ugly hack to get PCI setup until we have a proper ACPI namespace.
+ */
+
+#define PCI_BUSES_TO_SCAN 256
+
+extern struct pci_ops sn_pci_ops;
+int __init
+sn_pci_init (void)
+{
+ int i = 0;
+ struct pci_controller *controller;
+ struct list_head *ln;
+ struct pci_bus *pci_bus = NULL;
+ struct pci_dev *pci_dev = NULL;
+ int ret;
+#ifdef CONFIG_PROC_FS
+ extern void register_sn_procfs(void);
+#endif
+ extern void sgi_master_io_infr_init(void);
+ extern void sn_init_cpei_timer(void);
+
+
+ if (!ia64_platform_is("sn2") || IS_RUNNING_ON_SIMULATOR())
+ return 0;
+
+ /*
+ * This is needed to avoid bounce limit checks in the blk layer
+ */
+ ia64_max_iommu_merge_mask = ~PAGE_MASK;
+
+ /*
+ * set pci_raw_ops, etc.
+ */
+ sgi_master_io_infr_init();
+
+ sn_init_cpei_timer();
+
+#ifdef CONFIG_PROC_FS
+ register_sn_procfs();
+#endif
+
+ controller = kmalloc(sizeof(struct pci_controller), GFP_KERNEL);
+ if (!controller) {
+ printk(KERN_WARNING "cannot allocate PCI controller\n");
+ return 0;
+ }
+
+ memset(controller, 0, sizeof(struct pci_controller));
+
+ for (i = 0; i < PCI_BUSES_TO_SCAN; i++)
+ if (pci_bus_to_vertex(i))
+ pci_scan_bus(i, &sn_pci_ops, controller);
+
+ done_probing = 1;
+
+ /*
+ * Initialize the pci bus vertex in the pci_bus struct.
+ */
+ for( ln = pci_root_buses.next; ln != &pci_root_buses; ln = ln->next) {
+ pci_bus = pci_bus_b(ln);
+ ret = sn_pci_fixup_bus(pci_bus);
+ if ( ret ) {
+ printk(KERN_WARNING
+ "sn_pci_init: sn_pci_fixup_bus failed: error %d\n",
+ ret);
+ return 0;
+ }
+ }
+
+ /*
+ * set the root start and end so that drivers calling check_region()
+ * won't see a conflict
+ */
+ ioport_resource.start = 0xc000000000000000;
+ ioport_resource.end = 0xcfffffffffffffff;
+
+ /*
+ * Set the root start and end for Mem Resource.
+ */
+ iomem_resource.start = 0;
+ iomem_resource.end = 0xffffffffffffffff;
+
+ /*
+ * Initialize the device vertex in the pci_dev struct.
+ */
+ while ((pci_dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, pci_dev)) != NULL) {
+ ret = sn_pci_fixup_slot(pci_dev);
+ if ( ret ) {
+ printk(KERN_WARNING
+ "sn_pci_init: sn_pci_fixup_slot failed: error %d\n",
+ ret);
+ return 0;
+ }
+ }
+
+ return 0;
+}
+
+subsys_initcall(sn_pci_init);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000,2002-2003 Silicon Graphics, Inc. All rights reserved.
+ *
+ * Routines for PCI DMA mapping. See Documentation/DMA-mapping.txt for
+ * a description of how these routines should be used.
+ */
+
+#include <linux/module.h>
+#include <asm/sn/pci/pci_bus_cvlink.h>
+
+/*
+ * For ATE allocations
+ */
+pciio_dmamap_t get_free_pciio_dmamap(vertex_hdl_t);
+void free_pciio_dmamap(pcibr_dmamap_t);
+static struct pcibr_dmamap_s *find_sn_dma_map(dma_addr_t, unsigned char);
+void sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction);
+
+/*
+ * Topology data
+ */
+extern vertex_hdl_t busnum_to_pcibr_vhdl[];
+extern nasid_t busnum_to_nid[];
+extern void * busnum_to_atedmamaps[];
+
+/**
+ * get_free_pciio_dmamap - find and allocate an ATE
+ * @pci_bus: PCI bus to get an entry for
+ *
+ * Finds and allocates an ATE on the PCI bus specified
+ * by @pci_bus.
+ */
+pciio_dmamap_t
+get_free_pciio_dmamap(vertex_hdl_t pci_bus)
+{
+ int i;
+ struct pcibr_dmamap_s *sn_dma_map = NULL;
+
+ /*
+ * Find the ATE maps allocated for this bus.
+ */
+ for (i = 0; i < MAX_PCI_XWIDGET; i++) {
+ if (busnum_to_pcibr_vhdl[i] == pci_bus) {
+ sn_dma_map = busnum_to_atedmamaps[i];
+ }
+ }
+
+ /*
+ * Now get a free dmamap entry from this list.
+ */
+ for (i = 0; i < MAX_ATE_MAPS; i++, sn_dma_map++) {
+ if (!sn_dma_map->bd_dma_addr) {
+ sn_dma_map->bd_dma_addr = -1;
+ return (pciio_dmamap_t) sn_dma_map;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * free_pciio_dmamap - free an ATE
+ * @dma_map: ATE to free
+ *
+ * Frees the ATE specified by @dma_map.
+ */
+void
+free_pciio_dmamap(pcibr_dmamap_t dma_map)
+{
+ dma_map->bd_dma_addr = 0;
+}
+
+/**
+ * find_sn_dma_map - find an ATE associated with @dma_addr and @busnum
+ * @dma_addr: DMA address to look for
+ * @busnum: PCI bus to look on
+ *
+ * Finds the ATE associated with @dma_addr and @busnum.
+ */
+static struct pcibr_dmamap_s *
+find_sn_dma_map(dma_addr_t dma_addr, unsigned char busnum)
+{
+
+ struct pcibr_dmamap_s *sn_dma_map = NULL;
+ int i;
+
+ sn_dma_map = busnum_to_atedmamaps[busnum];
+
+ for (i = 0; i < MAX_ATE_MAPS; i++, sn_dma_map++) {
+ if (sn_dma_map->bd_dma_addr == dma_addr) {
+ return sn_dma_map;
+ }
+ }
+
+ return NULL;
+}
+
+/**
+ * sn_pci_alloc_consistent - allocate memory for coherent DMA
+ * @hwdev: device to allocate for
+ * @size: size of the region
+ * @dma_handle: DMA (bus) address
+ *
+ * pci_alloc_consistent() returns a pointer to a memory region suitable for
+ * coherent DMA traffic to/from a PCI device. On SN platforms, this means
+ * that @dma_handle will have the %PCIIO_DMA_CMD flag set.
+ *
+ * This interface is usually used for "command" streams (e.g. the command
+ * queue for a SCSI controller). See Documentation/DMA-mapping.txt for
+ * more information.
+ *
+ * Also known as platform_pci_alloc_consistent() by the IA64 machvec code.
+ */
+void *
+sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size, dma_addr_t *dma_handle)
+{
+ void *cpuaddr;
+ vertex_hdl_t vhdl;
+ struct sn_device_sysdata *device_sysdata;
+ unsigned long phys_addr;
+ pcibr_dmamap_t dma_map = 0;
+
+ /*
+ * Get hwgraph vertex for the device
+ */
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
+ vhdl = device_sysdata->vhdl;
+
+ /*
+ * Allocate the memory.
+ * FIXME: We should be doing alloc_pages_node for the node closest
+ * to the PCI device.
+ */
+ if (!(cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size))))
+ return NULL;
+
+ memset(cpuaddr, 0x0, size);
+
+ /* physical addr. of the memory we just got */
+ phys_addr = __pa(cpuaddr);
+
+ /*
+ * 64 bit address translations should never fail.
+ * 32 bit translations can fail if there are insufficient mapping
+ * resources and the direct map is already wired to a different
+ * 2GB range.
+ * 32 bit translations can also return a > 32 bit address, because
+ * pcibr_dmatrans_addr ignores a missing PCIIO_DMA_A64 flag on
+ * PCI-X buses.
+ */
+ if (hwdev->dev.coherent_dma_mask == ~0UL)
+ *dma_handle = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size,
+ PCIIO_DMA_CMD | PCIIO_DMA_A64);
+ else {
+ dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_CMD |
+ MINIMAL_ATE_FLAG(phys_addr, size));
+ if (dma_map) {
+ *dma_handle = (dma_addr_t)
+ pcibr_dmamap_addr(dma_map, phys_addr, size);
+ dma_map->bd_dma_addr = *dma_handle;
+ }
+ else {
+ *dma_handle = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size,
+ PCIIO_DMA_CMD);
+ }
+ }
+
+ if (!*dma_handle || *dma_handle > hwdev->dev.coherent_dma_mask) {
+ if (dma_map) {
+ pcibr_dmamap_done(dma_map);
+ pcibr_dmamap_free(dma_map);
+ }
+ free_pages((unsigned long) cpuaddr, get_order(size));
+ return NULL;
+ }
+
+ return cpuaddr;
+}
+
+/**
+ * sn_pci_free_consistent - free memory associated with coherent DMAable region
+ * @hwdev: device to free for
+ * @size: size to free
+ * @vaddr: kernel virtual address to free
+ * @dma_handle: DMA address associated with this region
+ *
+ * Frees the memory allocated by pci_alloc_consistent(). Also known
+ * as platform_pci_free_consistent() by the IA64 machvec code.
+ */
+void
+sn_pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr, dma_addr_t dma_handle)
+{
+ struct pcibr_dmamap_s *dma_map = NULL;
+
+ /*
+ * Get the sn_dma_map entry.
+ */
+ if (IS_PCI32_MAPPED(dma_handle))
+ dma_map = find_sn_dma_map(dma_handle, hwdev->bus->number);
+
+ /*
+ * and free it if necessary...
+ */
+ if (dma_map) {
+ pcibr_dmamap_done(dma_map);
+ pcibr_dmamap_free(dma_map);
+ }
+ free_pages((unsigned long) vaddr, get_order(size));
+}
+
+/**
+ * sn_pci_map_sg - map a scatter-gather list for DMA
+ * @hwdev: device to map for
+ * @sg: scatterlist to map
+ * @nents: number of entries
+ * @direction: direction of the DMA transaction
+ *
+ * Maps each entry of @sg for DMA. Also known as platform_pci_map_sg by the
+ * IA64 machvec code.
+ */
+int
+sn_pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ int i;
+ vertex_hdl_t vhdl;
+ unsigned long phys_addr;
+ struct sn_device_sysdata *device_sysdata;
+ pcibr_dmamap_t dma_map;
+ struct scatterlist *saved_sg = sg;
+ unsigned dma_flag;
+
+ /* can't go anywhere w/o a direction in life */
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ /*
+ * Get the hwgraph vertex for the device
+ */
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
+ vhdl = device_sysdata->vhdl;
+
+ /*
+ * A 64 bit DMA mask can always use direct translations.
+ * PCI:
+ * a 32 bit DMA mask might be able to use direct, otherwise use a dma map.
+ * PCI-X:
+ * only a 64 bit DMA mask is supported; both direct and dma map will fail.
+ */
+ if (hwdev->dma_mask == ~0UL)
+ dma_flag = PCIIO_DMA_DATA | PCIIO_DMA_A64;
+ else
+ dma_flag = PCIIO_DMA_DATA;
+
+ /*
+ * Setup a DMA address for each entry in the
+ * scatterlist.
+ */
+ for (i = 0; i < nents; i++, sg++) {
+ phys_addr = __pa((unsigned long)page_address(sg->page) + sg->offset);
+ sg->dma_address = pcibr_dmatrans_addr(vhdl, NULL, phys_addr,
+ sg->length, dma_flag);
+ if (sg->dma_address) {
+ sg->dma_length = sg->length;
+ continue;
+ }
+
+ dma_map = pcibr_dmamap_alloc(vhdl, NULL, sg->length,
+ PCIIO_DMA_DATA|MINIMAL_ATE_FLAG(phys_addr, sg->length));
+ if (!dma_map) {
+ printk(KERN_ERR "sn_pci_map_sg: Unable to allocate "
+ "any more 32-bit page map entries.\n");
+ /*
+ * We will need to free all previously allocated entries.
+ */
+ if (i > 0) {
+ sn_pci_unmap_sg(hwdev, saved_sg, i, direction);
+ }
+ return (0);
+ }
+
+ sg->dma_address = pcibr_dmamap_addr(dma_map, phys_addr, sg->length);
+ sg->dma_length = sg->length;
+ dma_map->bd_dma_addr = sg->dma_address;
+ }
+
+ return nents;
+
+}
+
+/**
+ * sn_pci_unmap_sg - unmap a scatter-gather list
+ * @hwdev: device to unmap
+ * @sg: scatterlist to unmap
+ * @nents: number of scatterlist entries
+ * @direction: DMA direction
+ *
+ * Unmap a set of streaming-mode DMA translations. The CPU read rules
+ * for calls here are the same as for pci_unmap_single() below. Also
+ * known as platform_pci_unmap_sg() by the IA64 machvec code.
+ */
+void
+sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ int i;
+ struct pcibr_dmamap_s *dma_map;
+
+ /* can't go anywhere w/o a direction in life */
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ for (i = 0; i < nents; i++, sg++){
+
+ if (IS_PCI32_MAPPED(sg->dma_address)) {
+ dma_map = find_sn_dma_map(sg->dma_address, hwdev->bus->number);
+ if (dma_map) {
+ pcibr_dmamap_done(dma_map);
+ pcibr_dmamap_free(dma_map);
+ }
+ }
+
+ sg->dma_address = (dma_addr_t)NULL;
+ sg->dma_length = 0;
+ }
+}
+
+/**
+ * sn_pci_map_single - map a single region for DMA
+ * @hwdev: device to map for
+ * @ptr: kernel virtual address of the region to map
+ * @size: size of the region
+ * @direction: DMA direction
+ *
+ * Map the region pointed to by @ptr for DMA and return the
+ * DMA address. Also known as platform_pci_map_single() by
+ * the IA64 machvec code.
+ *
+ * We map this to the one step pcibr_dmamap_trans interface rather than
+ * the two step pcibr_dmamap_alloc/pcibr_dmamap_addr because we have
+ * no way of saving the dmamap handle from the alloc to later free
+ * (which is pretty much unacceptable).
+ *
+ * TODO: simplify our interface;
+ * get rid of dev_desc and vhdl (seems redundant given a pci_dev);
+ * figure out how to save dmamap handle so can use two step.
+ */
+dma_addr_t
+sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
+{
+ vertex_hdl_t vhdl;
+ dma_addr_t dma_addr;
+ unsigned long phys_addr;
+ struct sn_device_sysdata *device_sysdata;
+ pcibr_dmamap_t dma_map = NULL;
+ unsigned dma_flag;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ /*
+ * find vertex for the device
+ */
+ device_sysdata = SN_DEVICE_SYSDATA(hwdev);
+ vhdl = device_sysdata->vhdl;
+
+ phys_addr = __pa(ptr);
+ /*
+ * A 64 bit DMA mask can always use direct translations.
+ * PCI:
+ * a 32 bit DMA mask might be able to use direct, otherwise use a dma map.
+ * PCI-X:
+ * only a 64 bit DMA mask is supported; both direct and dma map will fail.
+ */
+ if (hwdev->dma_mask == ~0UL)
+ dma_flag = PCIIO_DMA_DATA | PCIIO_DMA_A64;
+ else
+ dma_flag = PCIIO_DMA_DATA;
+
+ dma_addr = pcibr_dmatrans_addr(vhdl, NULL, phys_addr, size, dma_flag);
+ if (dma_addr)
+ return dma_addr;
+
+ /*
+ * It's a 32 bit card and we cannot do direct mapping so
+ * let's use the PMU instead.
+ */
+ dma_map = pcibr_dmamap_alloc(vhdl, NULL, size, PCIIO_DMA_DATA |
+ MINIMAL_ATE_FLAG(phys_addr, size));
+
+ /* PMU out of entries */
+ if (!dma_map)
+ return 0;
+
+ dma_addr = (dma_addr_t) pcibr_dmamap_addr(dma_map, phys_addr, size);
+ dma_map->bd_dma_addr = dma_addr;
+
+ return dma_addr;
+}
+
+/**
+ * sn_pci_unmap_single - unmap a region used for DMA
+ * @hwdev: device to unmap
+ * @dma_addr: DMA address to unmap
+ * @size: size of region
+ * @direction: DMA direction
+ *
+ * Unmaps the region pointed to by @dma_addr. Also known as
+ * platform_pci_unmap_single() by the IA64 machvec code.
+ */
+void
+sn_pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size, int direction)
+{
+ struct pcibr_dmamap_s *dma_map = NULL;
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ /*
+ * Get the sn_dma_map entry.
+ */
+ if (IS_PCI32_MAPPED(dma_addr))
+ dma_map = find_sn_dma_map(dma_addr, hwdev->bus->number);
+
+ /*
+ * and free it if necessary...
+ */
+ if (dma_map) {
+ pcibr_dmamap_done(dma_map);
+ pcibr_dmamap_free(dma_map);
+ }
+}
+
+/**
+ * sn_pci_dma_sync_single_* - make sure all DMAs or CPU accesses
+ * have completed
+ * @hwdev: device to sync
+ * @dma_handle: DMA address to sync
+ * @size: size of region
+ * @direction: DMA direction
+ *
+ * This routine is supposed to sync the DMA region specified
+ * by @dma_handle into the 'coherence domain'. We do not need to do
+ * anything on our platform.
+ */
+void
+sn_pci_dma_sync_single_for_cpu(struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
+{
+ return;
+}
+
+void
+sn_pci_dma_sync_single_for_device(struct pci_dev *hwdev, dma_addr_t dma_handle, size_t size, int direction)
+{
+ return;
+}
+
+/**
+ * sn_pci_dma_sync_sg_* - make sure all DMAs or CPU accesses have completed
+ * @hwdev: device to sync
+ * @sg: scatterlist to sync
+ * @nents: number of entries in the scatterlist
+ * @direction: DMA direction
+ *
+ * This routine is supposed to sync the DMA regions specified
+ * by @sg into the 'coherence domain'. We do not need to do anything
+ * on our platform.
+ */
+void
+sn_pci_dma_sync_sg_for_cpu(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ return;
+}
+
+void
+sn_pci_dma_sync_sg_for_device(struct pci_dev *hwdev, struct scatterlist *sg, int nents, int direction)
+{
+ return;
+}
+
+/**
+ * sn_dma_supported - test a DMA mask
+ * @hwdev: device to test
+ * @mask: DMA mask to test
+ *
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly. For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function. Of course, SN only supports devices that have 32 or more
+ * address bits when using the PMU. We could theoretically support <32 bit
+ * cards using direct mapping, but we'll worry about that later--on the off
+ * chance that someone actually wants to use such a card.
+ */
+int
+sn_pci_dma_supported(struct pci_dev *hwdev, u64 mask)
+{
+ if (mask < 0xffffffff)
+ return 0;
+ return 1;
+}
+
+/*
+ * New generic DMA routines just wrap sn2 PCI routines until we
+ * support other bus types (if ever).
+ */
+
+int
+sn_dma_supported(struct device *dev, u64 mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_dma_supported(to_pci_dev(dev), mask);
+}
+EXPORT_SYMBOL(sn_dma_supported);
+
+int
+sn_dma_set_mask(struct device *dev, u64 dma_mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ if (!sn_dma_supported(dev, dma_mask))
+ return 0;
+
+ *dev->dma_mask = dma_mask;
+ return 1;
+}
+EXPORT_SYMBOL(sn_dma_set_mask);
+
+void *
+sn_dma_alloc_coherent(struct device *dev, size_t size, dma_addr_t *dma_handle,
+ int flag)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_alloc_consistent(to_pci_dev(dev), size, dma_handle);
+}
+EXPORT_SYMBOL(sn_dma_alloc_coherent);
+
+void
+sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t dma_handle)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_free_consistent(to_pci_dev(dev), size, cpu_addr, dma_handle);
+}
+EXPORT_SYMBOL(sn_dma_free_coherent);
+
+dma_addr_t
+sn_dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_map_single(to_pci_dev(dev), cpu_addr, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_map_single);
+
+void
+sn_dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_unmap_single(to_pci_dev(dev), dma_addr, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_unmap_single);
+
+dma_addr_t
+sn_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return pci_map_page(to_pci_dev(dev), page, offset, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_map_page);
+
+void
+sn_dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ pci_unmap_page(to_pci_dev(dev), dma_address, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_unmap_page);
+
+int
+sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_map_sg(to_pci_dev(dev), sg, nents, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_map_sg);
+
+void
+sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_unmap_sg(to_pci_dev(dev), sg, nhwentries, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_unmap_sg);
+
+void
+sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_dma_sync_single_for_cpu(to_pci_dev(dev), dma_handle, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_sync_single_for_cpu);
+
+void
+sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_dma_sync_single_for_device(to_pci_dev(dev), dma_handle, size, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_sync_single_for_device);
+
+void
+sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_dma_sync_sg_for_cpu(to_pci_dev(dev), sg, nelems, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_sync_sg_for_cpu);
+
+void
+sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_dma_sync_sg_for_device(to_pci_dev(dev), sg, nelems, (int)direction);
+}
+EXPORT_SYMBOL(sn_dma_sync_sg_for_device);
+
+int
+sn_dma_mapping_error(dma_addr_t dma_addr)
+{
+ /*
+ * We can only run out of page mapping entries, so if there's
+ * an error, tell the caller to try again later.
+ */
+ if (!dma_addr)
+ return -EAGAIN;
+ return 0;
+}
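The check above encodes a simple contract: a zero DMA handle means the map ran out of page mapping entries and the caller should retry later. A standalone userspace sketch of that contract (the `toy_` names are illustrative stand-ins, not kernel API):

```c
#include <errno.h>

/* Stand-in for dma_addr_t; illustrative only. */
typedef unsigned long toy_dma_addr_t;

/* Mirror of the sn_dma_mapping_error() contract: the only failure mode
 * is exhaustion of page mapping entries, reported as a zero handle. */
static int toy_mapping_error(toy_dma_addr_t dma_addr)
{
	return dma_addr ? 0 : -EAGAIN;	/* tell the caller to try again later */
}
```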
+
+EXPORT_SYMBOL(sn_dma_mapping_error);
+EXPORT_SYMBOL(sn_pci_unmap_single);
+EXPORT_SYMBOL(sn_pci_map_single);
+EXPORT_SYMBOL(sn_pci_dma_sync_single_for_cpu);
+EXPORT_SYMBOL(sn_pci_dma_sync_single_for_device);
+EXPORT_SYMBOL(sn_pci_dma_sync_sg_for_cpu);
+EXPORT_SYMBOL(sn_pci_dma_sync_sg_for_device);
+EXPORT_SYMBOL(sn_pci_map_sg);
+EXPORT_SYMBOL(sn_pci_unmap_sg);
+EXPORT_SYMBOL(sn_pci_alloc_consistent);
+EXPORT_SYMBOL(sn_pci_free_consistent);
+EXPORT_SYMBOL(sn_pci_dma_supported);
+
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 io routines.
+
+obj-y += sgi_io_init.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/smp.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/pda.h>
+
+extern void init_all_devices(void);
+extern void klhwg_add_all_modules(vertex_hdl_t);
+extern void klhwg_add_all_nodes(vertex_hdl_t);
+
+extern int init_hcl(void);
+extern vertex_hdl_t hwgraph_root;
+extern void io_module_init(void);
+extern int pci_bus_to_hcl_cvlink(void);
+
+nasid_t console_nasid = (nasid_t)-1;
+char master_baseio_wid;
+
+nasid_t master_baseio_nasid;
+nasid_t master_nasid = INVALID_NASID; /* This is the partition master nasid */
+
+/*
+ * per_hub_init
+ *
+ * This code is executed once for each Hub chip.
+ */
+static void __init
+per_hub_init(cnodeid_t cnode)
+{
+ nasid_t nasid;
+ nodepda_t *npdap;
+ ii_icmr_u_t ii_icmr;
+ ii_ibcr_u_t ii_ibcr;
+ ii_ilcsr_u_t ii_ilcsr;
+
+ nasid = cnodeid_to_nasid(cnode);
+
+ ASSERT(nasid != INVALID_NASID);
+ ASSERT(nasid_to_cnodeid(nasid) == cnode);
+
+ npdap = NODEPDA(cnode);
+
+ /* Disable the request and reply errors. */
+ REMOTE_HUB_S(nasid, IIO_IWEIM, 0xC000);
+
+ /*
+ * Set the total number of CRBs that can be used.
+ */
+ ii_icmr.ii_icmr_regval = 0x0;
+ ii_icmr.ii_icmr_fld_s.i_c_cnt = 0xf;
+ if (enable_shub_wars_1_1()) {
+ /* Set bit one of ICMR to prevent II from sending interrupt for II bug. */
+ ii_icmr.ii_icmr_regval |= 0x1;
+ }
+ REMOTE_HUB_S(nasid, IIO_ICMR, ii_icmr.ii_icmr_regval);
+
+ /*
+ * Set the number of CRBs that both of the BTEs combined
+ * can use minus 1.
+ */
+ ii_ibcr.ii_ibcr_regval = 0x0;
+ ii_ilcsr.ii_ilcsr_regval = REMOTE_HUB_L(nasid, IIO_LLP_CSR);
+ if (ii_ilcsr.ii_ilcsr_fld_s.i_llp_stat & LNK_STAT_WORKING) {
+ ii_ibcr.ii_ibcr_fld_s.i_count = 0x8;
+ } else {
+ /*
+ * if the LLP is down, there is no attached I/O, so
+ * give BTE all the CRBs.
+ */
+ ii_ibcr.ii_ibcr_fld_s.i_count = 0x14;
+ }
+ REMOTE_HUB_S(nasid, IIO_IBCR, ii_ibcr.ii_ibcr_regval);
+
+ /*
+ * Set CRB timeout to be 10ms.
+ */
+ REMOTE_HUB_S(nasid, IIO_ICTP, 0xffffff);
+ REMOTE_HUB_S(nasid, IIO_ICTO, 0xff);
+
+ /* Initialize error interrupts for this hub. */
+ hub_error_init(cnode);
+}
+
+/*
+ * This routine is responsible for the setup of all the IRIX hwgraph style
+ * stuff that's been pulled into linux. It's called by sn_pci_find_bios which
+ * is called just before the generic Linux PCI layer does its probing (by
+ * platform_pci_fixup aka sn_pci_fixup).
+ *
+ * It is very IMPORTANT that this call is only made by the Master CPU!
+ *
+ */
+
+void __init
+sgi_master_io_infr_init(void)
+{
+ cnodeid_t cnode;
+
+ if (init_hcl() < 0) { /* Sets up the hwgraph compatibility layer */
+ printk("sgi_master_io_infr_init: Cannot init hcl\n");
+ return;
+ }
+
+ /*
+ * Initialize platform-dependent vertices in the hwgraph:
+ * module
+ * node
+ * cpu
+ * memory
+ * slot
+ * hub
+ * router
+ * xbow
+ */
+
+ io_module_init(); /* Used to be called module_init(). */
+ klhwg_add_all_modules(hwgraph_root);
+ klhwg_add_all_nodes(hwgraph_root);
+
+ for (cnode = 0; cnode < numionodes; cnode++)
+ per_hub_init(cnode);
+
+ /*
+ *
+ * Our IO Infrastructure drivers are in place ..
+ * Initialize the whole IO Infrastructure .. xwidget/device probes.
+ *
+ */
+ init_all_devices();
+ pci_bus_to_hcl_cvlink();
+}
+
+inline int
+check_nasid_equiv(nasid_t nasida, nasid_t nasidb)
+{
+ if ((nasida == nasidb)
+ || (nasida == NODEPDA(nasid_to_cnodeid(nasidb))->xbow_peer))
+ return 1;
+ else
+ return 0;
+}
+
+int
+is_master_baseio_nasid_widget(nasid_t test_nasid, xwidgetnum_t test_wid)
+{
+ /*
+ * If the widget numbers are different, we're not the master.
+ */
+ if (test_wid != (xwidgetnum_t) master_baseio_wid) {
+ return 0;
+ }
+
+ /*
+ * If the NASIDs are the same or equivalent, we're the master.
+ */
+ if (check_nasid_equiv(test_nasid, master_baseio_nasid)) {
+ return 1;
+ } else {
+ return 0;
+ }
+}
--- /dev/null
+# arch/ia64/sn/io/sn2/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 specific io routines.
+#
+
+obj-y += pcibr/ ml_SN_intr.o shub_intr.o shuberror.o shub.o bte_error.o \
+ pic.o geo_op.o l1_command.o klconflib.o klgraph.o ml_SN_init.o \
+ ml_iograph.o module.o pciio.o xbow.o xtalk.o shubio.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/smp.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/ioerror.h>
+#include <asm/sn/sn2/shubio.h>
+#include <asm/sn/bte.h>
+
+
+/*
+ * Bte error handling is done in two parts. The first captures
+ * any crb related errors. Since there can be multiple crbs per
+ * interface and multiple interfaces active, we need to wait until
+ * all active crbs are completed. This is the first job of the
+ * second part error handler. When all bte related CRBs are cleanly
+ * completed, it resets the interfaces and gets them ready for new
+ * transfers to be queued.
+ */
+
+
+void bte_error_handler(unsigned long);
+
+
+/*
+ * First part error handler. This is called whenever any error CRB interrupt
+ * is generated by the II.
+ */
+void
+bte_crb_error_handler(vertex_hdl_t hub_v, int btenum,
+ int crbnum, ioerror_t * ioe, int bteop)
+{
+ hubinfo_t hinfo;
+ struct bteinfo_s *bte;
+
+
+ hubinfo_get(hub_v, &hinfo);
+ bte = &hinfo->h_nodepda->bte_if[btenum];
+
+ /*
+ * The caller has already figured out the error type, we save that
+ * in the bte handle structure for the thread exercising the
+ * interface to consume.
+ */
+ bte->bh_error = ioe->ie_errortype + BTEFAIL_OFFSET;
+ bte->bte_error_count++;
+
+ BTE_PRINTK(("Got an error on cnode %d bte %d: HW error type 0x%x\n",
+ bte->bte_cnode, bte->bte_num, ioe->ie_errortype));
+ bte_error_handler((unsigned long) hinfo->h_nodepda);
+}
+
+
+/*
+ * Second part error handler. Wait until all BTE related CRBs are completed
+ * and then reset the interfaces.
+ */
+void
+bte_error_handler(unsigned long _nodepda)
+{
+ struct nodepda_s *err_nodepda = (struct nodepda_s *) _nodepda;
+ spinlock_t *recovery_lock = &err_nodepda->bte_recovery_lock;
+ struct timer_list *recovery_timer = &err_nodepda->bte_recovery_timer;
+ nasid_t nasid;
+ int i;
+ int valid_crbs;
+ unsigned long irq_flags;
+ volatile u64 *notify;
+ bte_result_t bh_error;
+ ii_imem_u_t imem; /* II IMEM Register */
+ ii_icrb0_d_u_t icrbd; /* II CRB Register D */
+ ii_ibcr_u_t ibcr;
+ ii_icmr_u_t icmr;
+ ii_ieclr_u_t ieclr;
+
+
+ BTE_PRINTK(("bte_error_handler(%p) - %d\n", err_nodepda,
+ smp_processor_id()));
+
+ spin_lock_irqsave(recovery_lock, irq_flags);
+
+ if ((err_nodepda->bte_if[0].bh_error == BTE_SUCCESS) &&
+ (err_nodepda->bte_if[1].bh_error == BTE_SUCCESS)) {
+ BTE_PRINTK(("eh:%p:%d Nothing to do.\n", err_nodepda,
+ smp_processor_id()));
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+ return;
+ }
+ /*
+ * Lock all interfaces on this node to prevent new transfers
+ * from being queued.
+ */
+ for (i = 0; i < BTES_PER_NODE; i++) {
+ if (err_nodepda->bte_if[i].cleanup_active) {
+ continue;
+ }
+ spin_lock(&err_nodepda->bte_if[i].spinlock);
+ BTE_PRINTK(("eh:%p:%d locked %d\n", err_nodepda,
+ smp_processor_id(), i));
+ err_nodepda->bte_if[i].cleanup_active = 1;
+ }
+
+ /* Determine information about our hub */
+ nasid = cnodeid_to_nasid(err_nodepda->bte_if[0].bte_cnode);
+
+
+ /*
+ * A BTE transfer can use multiple CRBs. We need to make sure
+ * that all the BTE CRBs are complete (or timed out) before
+ * attempting to clean up the error. Resetting the BTE while
+ * there are still BTE CRBs active will hang the BTE.
+ * We should look at all the CRBs to see if they are allocated
+ * to the BTE and see if they are still active. When none
+ * are active, we can continue with the cleanup.
+ *
+ * We also want to make sure that the local NI port is up.
+ * When a router resets, the NI port can go down while it
+ * goes through the LLP handshake, but then comes back up.
+ */
+ icmr.ii_icmr_regval = REMOTE_HUB_L(nasid, IIO_ICMR);
+ if (icmr.ii_icmr_fld_s.i_crb_mark != 0) {
+ /*
+ * There are errors which still need to be cleaned up by
+ * hubiio_crb_error_handler
+ */
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
+ BTE_PRINTK(("eh:%p:%d Marked Giving up\n", err_nodepda,
+ smp_processor_id()));
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+ return;
+ }
+ if (icmr.ii_icmr_fld_s.i_crb_vld != 0) {
+
+ valid_crbs = icmr.ii_icmr_fld_s.i_crb_vld;
+
+ for (i = 0; i < IIO_NUM_CRBS; i++) {
+ if (!((1 << i) & valid_crbs)) {
+ /* This crb was not marked as valid, ignore */
+ continue;
+ }
+ icrbd.ii_icrb0_d_regval =
+ REMOTE_HUB_L(nasid, IIO_ICRB_D(i));
+ if (icrbd.d_bteop) {
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
+ BTE_PRINTK(("eh:%p:%d Valid %d, Giving up\n",
+ err_nodepda, smp_processor_id(), i));
+ spin_unlock_irqrestore(recovery_lock,
+ irq_flags);
+ return;
+ }
+ }
+ }
+
+
+ BTE_PRINTK(("eh:%p:%d Cleaning up\n", err_nodepda,
+ smp_processor_id()));
+ /* Reenable both bte interfaces */
+ imem.ii_imem_regval = REMOTE_HUB_L(nasid, IIO_IMEM);
+ imem.ii_imem_fld_s.i_b0_esd = imem.ii_imem_fld_s.i_b1_esd = 1;
+ REMOTE_HUB_S(nasid, IIO_IMEM, imem.ii_imem_regval);
+
+ /* Clear IBLS0/1 error bits */
+ ieclr.ii_ieclr_regval = 0;
+ if (err_nodepda->bte_if[0].bh_error != BTE_SUCCESS)
+ ieclr.ii_ieclr_fld_s.i_e_bte_0 = 1;
+ if (err_nodepda->bte_if[1].bh_error != BTE_SUCCESS)
+ ieclr.ii_ieclr_fld_s.i_e_bte_1 = 1;
+ REMOTE_HUB_S(nasid, IIO_IECLR, ieclr.ii_ieclr_regval);
+
+ /* Reinitialize both BTE state machines. */
+ ibcr.ii_ibcr_regval = REMOTE_HUB_L(nasid, IIO_IBCR);
+ ibcr.ii_ibcr_fld_s.i_soft_reset = 1;
+ REMOTE_HUB_S(nasid, IIO_IBCR, ibcr.ii_ibcr_regval);
+
+
+ for (i = 0; i < BTES_PER_NODE; i++) {
+ bh_error = err_nodepda->bte_if[i].bh_error;
+ if (bh_error != BTE_SUCCESS) {
+ /* There is an error which needs to be notified */
+ notify = err_nodepda->bte_if[i].most_rcnt_na;
+ BTE_PRINTK(("cnode %d bte %d error=0x%lx\n",
+ err_nodepda->bte_if[i].bte_cnode,
+ err_nodepda->bte_if[i].bte_num,
+ IBLS_ERROR | (u64) bh_error));
+ *notify = IBLS_ERROR | bh_error;
+ err_nodepda->bte_if[i].bh_error = BTE_SUCCESS;
+ }
+
+ err_nodepda->bte_if[i].cleanup_active = 0;
+ BTE_PRINTK(("eh:%p:%d Unlocked %d\n", err_nodepda,
+ smp_processor_id(), i));
+ spin_unlock(&err_nodepda->bte_if[i].spinlock);
+ }
+
+ del_timer(recovery_timer);
+
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+}
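The recovery path above walks the `i_crb_vld` bitmask and inspects only the CRBs marked valid. That scan pattern, reduced to a standalone sketch (TOY_NUM_CRBS stands in for IIO_NUM_CRBS):

```c
#define TOY_NUM_CRBS 15	/* stand-in for IIO_NUM_CRBS */

/* Count the CRBs marked valid in the bitmask, visiting set bits only,
 * the same way bte_error_handler() scans i_crb_vld. */
static int count_valid_crbs(unsigned int valid_crbs)
{
	int i, n = 0;

	for (i = 0; i < TOY_NUM_CRBS; i++) {
		if (!((1u << i) & valid_crbs))
			continue;	/* this CRB was not marked valid */
		n++;
	}
	return n;
}
```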
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+/*
+ * @doc file m:hwcfg
+ * DESCRIPTION:
+ *
+ * This file contains routines for manipulating and generating
+ * Geographic IDs. They are in a file by themselves since they have
+ * no dependencies on other modules.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <asm/smp.h>
+#include <asm/irq.h>
+#include <asm/hw_irq.h>
+#include <asm/sn/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/sn2/shub_mmr_t.h>
+#include <asm/sn/sn2/shubio.h>
+#include <asm/sal.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/module.h>
+#include <asm/sn/geo.h>
+
+/********** Global functions and data (visible outside the module) ***********/
+
+/*
+ * @doc gf:geo_module
+ *
+ * moduleid_t geo_module(geoid_t g)
+ *
+ * DESCRIPTION:
+ *
+ * Return the moduleid component of a geoid.
+ *
+ * INTERNALS:
+ *
+ * Return INVALID_MODULE for an invalid geoid. Otherwise extract the
+ * moduleid from the structure, and return it.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+moduleid_t
+geo_module(geoid_t g)
+{
+ if (g.any.type == GEO_TYPE_INVALID)
+ return INVALID_MODULE;
+ else
+ return g.any.module;
+}
+
+
+/*
+ * @doc gf:geo_slab
+ *
+ * slabid_t geo_slab(geoid_t g)
+ *
+ * DESCRIPTION:
+ *
+ * Return the slabid component of a geoid.
+ *
+ * INTERNALS:
+ *
+ * Return INVALID_SLAB for an invalid geoid. Otherwise extract the
+ * slabid from the structure, and return it.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+slabid_t
+geo_slab(geoid_t g)
+{
+ if (g.any.type == GEO_TYPE_INVALID)
+ return INVALID_SLAB;
+ else
+ return g.any.slab;
+}
+
+
+/*
+ * @doc gf:geo_type
+ *
+ * geo_type_t geo_type(geoid_t g)
+ *
+ * DESCRIPTION:
+ *
+ * Return the type component of a geoid.
+ *
+ * INTERNALS:
+ *
+ * Extract the type from the structure, and return it.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+geo_type_t
+geo_type(geoid_t g)
+{
+ return g.any.type;
+}
+
+
+/*
+ * @doc gf:geo_valid
+ *
+ * int geo_valid(geoid_t g)
+ *
+ * DESCRIPTION:
+ *
+ * Return nonzero if g has a valid geoid type.
+ *
+ * INTERNALS:
+ *
+ * Test the type against GEO_TYPE_INVALID, and return the result.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+int
+geo_valid(geoid_t g)
+{
+ return g.any.type != GEO_TYPE_INVALID;
+}
+
+
+/*
+ * @doc gf:geo_cmp
+ *
+ * int geo_cmp(geoid_t g0, geoid_t g1)
+ *
+ * DESCRIPTION:
+ *
+ * Compare two geoid_t values, from the coarsest field to the finest.
+ * The comparison should be consistent with the physical locations of
+ * of the hardware named by the geoids.
+ *
+ * INTERNALS:
+ *
+ * First compare the module, then the slab, type, and type-specific fields.
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+int
+geo_cmp(geoid_t g0, geoid_t g1)
+{
+ int rv;
+
+ /* Compare the common fields */
+ rv = MODULE_CMP(geo_module(g0), geo_module(g1));
+ if (rv != 0)
+ return rv;
+
+ rv = geo_slab(g0) - geo_slab(g1);
+ if (rv != 0)
+ return rv;
+
+ /* Within a slab, sort by type */
+ rv = geo_type(g0) - geo_type(g1);
+ if (rv != 0)
+ return rv;
+
+ switch(geo_type(g0)) {
+ case GEO_TYPE_CPU:
+ rv = g0.cpu.slice - g1.cpu.slice;
+ break;
+
+ case GEO_TYPE_IOCARD:
+ rv = g0.pcicard.bus - g1.pcicard.bus;
+ if (rv) break;
+ rv = SLOTNUM_GETSLOT(g0.pcicard.slot) -
+ SLOTNUM_GETSLOT(g1.pcicard.slot);
+ break;
+
+ case GEO_TYPE_MEM:
+ rv = g0.mem.membus - g1.mem.membus;
+ if (rv) break;
+ rv = g0.mem.memslot - g1.mem.memslot;
+ break;
+
+ default:
+ rv = 0;
+ }
+
+ return rv;
+}
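The coarse-to-fine ordering geo_cmp() implements (module first, then slab, then the finer fields) can be sketched standalone; `struct toy_geo` is a simplified stand-in for geoid_t:

```c
/* Simplified stand-in for geoid_t, keeping only three nesting levels. */
struct toy_geo {
	int module;	/* coarsest field */
	int slab;
	int slice;	/* finest field, e.g. a CPU slice */
};

/* Compare the coarsest field first; finer fields only break ties. */
static int toy_geo_cmp(struct toy_geo a, struct toy_geo b)
{
	if (a.module != b.module)
		return a.module - b.module;
	if (a.slab != b.slab)
		return a.slab - b.slab;
	return a.slice - b.slice;
}
```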
+
+
+/*
+ * @doc gf:geo_new
+ *
+ * geoid_t geo_new(geo_type_t type, ...)
+ *
+ * DESCRIPTION:
+ *
+ * Generate a new geoid_t value of the given type from its components.
+ * Expected calling sequences:
+ * \@itemize \@bullet
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_INVALID)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_MODULE, moduleid_t m)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_NODE, moduleid_t m, slabid_t s)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_RTR, moduleid_t m, slabid_t s)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_IOCNTL, moduleid_t m, slabid_t s)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_IOCARD, moduleid_t m, slabid_t s, char bus, slotid_t slot)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_CPU, moduleid_t m, slabid_t s, char slice)\}
+ * \@item
+ * \@code\{geo_new(GEO_TYPE_MEM, moduleid_t m, slabid_t s, char membus, char slot)\}
+ * \@end itemize
+ *
+ * Invalid types return a GEO_TYPE_INVALID geoid_t.
+ *
+ * INTERNALS:
+ *
+ * Use the type to determine which fields to expect. Write the fields into
+ * a new geoid_t and return it. Note: scalars smaller than an "int" are
+ * promoted to "int" by the "..." operator, so we need extra casts on "char",
+ * "slotid_t", and "slabid_t".
+ *
+ * ORIGIN:
+ *
+ * New for SN2
+ */
+
+geoid_t
+geo_new(geo_type_t type, ...)
+{
+ va_list al;
+ geoid_t g;
+ memset(&g, 0, sizeof(g));
+
+ va_start(al, type);
+
+ /* Make sure the type is sane */
+ if (type >= GEO_TYPE_MAX)
+ type = GEO_TYPE_INVALID;
+
+ g.any.type = type;
+ if (type == GEO_TYPE_INVALID)
+ goto done; /* invalid geoids have no components at all */
+
+ g.any.module = va_arg(al, moduleid_t);
+ if (type == GEO_TYPE_MODULE)
+ goto done;
+
+ g.any.slab = (slabid_t)va_arg(al, int);
+
+ /* Some types have additional components */
+ switch(type) {
+ case GEO_TYPE_CPU:
+ g.cpu.slice = (char)va_arg(al, int);
+ break;
+
+ case GEO_TYPE_IOCARD:
+ g.pcicard.bus = (char)va_arg(al, int);
+ g.pcicard.slot = (slotid_t)va_arg(al, int);
+ break;
+
+ case GEO_TYPE_MEM:
+ g.mem.membus = (char)va_arg(al, int);
+ g.mem.memslot = (char)va_arg(al, int);
+ break;
+
+ default:
+ break;
+ }
+
+ done:
+ va_end(al);
+ return g;
+}
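The INTERNALS note above (scalars narrower than "int" are promoted by the "..." operator) is easy to get wrong; a minimal standalone illustration of why geo_new() reads promoted arguments with `va_arg(al, int)` and casts back:

```c
#include <stdarg.h>

/* A char argument reaches va_arg as int, so it must be read as int and
 * cast back -- va_arg(al, char) would be undefined behavior. */
static char first_char_arg(int count, ...)
{
	va_list al;
	char c;

	va_start(al, count);
	c = (char)va_arg(al, int);
	va_end(al);
	return c;
}
```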
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#include <linux/types.h>
+#include <linux/ctype.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/nodepda.h>
+#include <asm/sn/module.h>
+#include <asm/sn/router.h>
+#include <asm/sn/xtalk/xbow.h>
+#include <asm/sn/ksys/l1.h>
+
+
+#undef DEBUG_KLGRAPH
+#ifdef DEBUG_KLGRAPH
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif /* DEBUG_KLGRAPH */
+
+extern int numionodes;
+
+lboard_t *root_lboard[MAX_COMPACT_NODES];
+static int hasmetarouter;
+
+
+char brick_types[MAX_BRICK_TYPES + 1] = "crikxdpn%#=vo^34567890123456789...";
+
+lboard_t *
+find_lboard_any(lboard_t *start, unsigned char brd_type)
+{
+ /* Search all boards stored on this node. */
+ while (start) {
+ if (start->brd_type == brd_type)
+ return start;
+ start = KLCF_NEXT_ANY(start);
+ }
+
+ /* Didn't find it. */
+ return (lboard_t *)NULL;
+}
+
+lboard_t *
+find_lboard_nasid(lboard_t *start, nasid_t nasid, unsigned char brd_type)
+{
+
+ while (start) {
+ if ((start->brd_type == brd_type) &&
+ (start->brd_nasid == nasid))
+ return start;
+
+ if (numionodes == numnodes)
+ start = KLCF_NEXT_ANY(start);
+ else
+ start = KLCF_NEXT(start);
+ }
+
+ /* Didn't find it. */
+ return (lboard_t *)NULL;
+}
+
+lboard_t *
+find_lboard_class_any(lboard_t *start, unsigned char brd_type)
+{
+ /* Search all boards stored on this node. */
+ while (start) {
+ if (KLCLASS(start->brd_type) == KLCLASS(brd_type))
+ return start;
+ start = KLCF_NEXT_ANY(start);
+ }
+
+ /* Didn't find it. */
+ return (lboard_t *)NULL;
+}
+
+lboard_t *
+find_lboard_class_nasid(lboard_t *start, nasid_t nasid, unsigned char brd_type)
+{
+ /* Search all boards stored on this node. */
+ while (start) {
+ if (KLCLASS(start->brd_type) == KLCLASS(brd_type) &&
+ (start->brd_nasid == nasid))
+ return start;
+
+ if (numionodes == numnodes)
+ start = KLCF_NEXT_ANY(start);
+ else
+ start = KLCF_NEXT(start);
+ }
+
+ /* Didn't find it. */
+ return (lboard_t *)NULL;
+}
+
+
+
+klinfo_t *
+find_component(lboard_t *brd, klinfo_t *kli, unsigned char struct_type)
+{
+ int index, j;
+
+ if (kli == (klinfo_t *)NULL) {
+ index = 0;
+ } else {
+ for (j = 0; j < KLCF_NUM_COMPS(brd); j++) {
+ if (kli == KLCF_COMP(brd, j))
+ break;
+ }
+ index = j;
+ if (index == KLCF_NUM_COMPS(brd)) {
+ DBG("find_component: Bad pointer: 0x%p\n", kli);
+ return (klinfo_t *)NULL;
+ }
+ index++; /* next component */
+ }
+
+ for (; index < KLCF_NUM_COMPS(brd); index++) {
+ kli = KLCF_COMP(brd, index);
+ DBG("find_component: brd %p kli %p request type = 0x%x kli type 0x%x\n", brd, kli, kli->struct_type, KLCF_COMP_TYPE(kli));
+ if (KLCF_COMP_TYPE(kli) == struct_type)
+ return kli;
+ }
+
+ /* Didn't find it. */
+ return (klinfo_t *)NULL;
+}
+
+klinfo_t *
+find_first_component(lboard_t *brd, unsigned char struct_type)
+{
+ return find_component(brd, (klinfo_t *)NULL, struct_type);
+}
+
+lboard_t *
+find_lboard_modslot(lboard_t *start, geoid_t geoid)
+{
+ /* Search all boards stored on this node. */
+ while (start) {
+ if (geo_cmp(start->brd_geoid, geoid))
+ return start;
+ start = KLCF_NEXT(start);
+ }
+
+ /* Didn't find it. */
+ return (lboard_t *)NULL;
+}
+
+/*
+ * Convert a NIC name to a name for use in the hardware graph.
+ */
+void
+nic_name_convert(char *old_name, char *new_name)
+{
+ int i;
+ char c;
+ char *compare_ptr;
+
+ if ((old_name[0] == '\0') || (old_name[1] == '\0')) {
+ strcpy(new_name, EDGE_LBL_XWIDGET);
+ } else {
+ for (i = 0; i < strlen(old_name); i++) {
+ c = old_name[i];
+
+ if (isalpha(c))
+ new_name[i] = tolower(c);
+ else if (isdigit(c))
+ new_name[i] = c;
+ else
+ new_name[i] = '_';
+ }
+ new_name[i] = '\0';
+ }
+
+ /* XXX -
+ * Since a bunch of boards made it out with weird names like
+ * IO6-fibbbed and IO6P2, we need to look for IO6 in a name and
+ * replace it with "baseio" to avoid confusion in the field.
+ * We also have to make sure we don't report media_io instead of
+ * baseio.
+ */
+
+ /* Skip underscores at the beginning of the name */
+ for (compare_ptr = new_name; (*compare_ptr) == '_'; compare_ptr++)
+ ;
+
+ /*
+ * Check for some names we need to replace. Early boards
+ * had junk following the name so check only the first
+ * characters.
+ */
+ if (!strncmp(compare_ptr, "io6", 3) ||
+ !strncmp(compare_ptr, "mio", 3) ||
+ !strncmp(compare_ptr, "media_io", 8))
+ strcpy(new_name, "baseio");
+ else if (!strncmp(compare_ptr, "divo", 4))
+ strcpy(new_name, "divo");
+}
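The character-mapping loop in nic_name_convert() (lower-case alphabetics, keep digits, everything else becomes '_') in a standalone, testable form; toy_sanitize is an illustrative name:

```c
#include <ctype.h>

/* Same mapping as the loop in nic_name_convert(): lower-case letters,
 * digits kept as-is, any other character replaced by '_'. */
static void toy_sanitize(const char *old_name, char *new_name)
{
	int i;

	for (i = 0; old_name[i] != '\0'; i++) {
		unsigned char c = (unsigned char)old_name[i];

		if (isalpha(c))
			new_name[i] = (char)tolower(c);
		else if (isdigit(c))
			new_name[i] = (char)c;
		else
			new_name[i] = '_';
	}
	new_name[i] = '\0';
}
```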
+
+/*
+ * get_actual_nasid
+ *
+ * Completely disabled brds have their klconfig on
+ * some other nasid as they have no memory. But their
+ * actual nasid is hidden in the klconfig. Use this
+ * routine to get it. Works for normal boards too.
+ */
+nasid_t
+get_actual_nasid(lboard_t *brd)
+{
+ klhub_t *hub;
+
+ if (!brd)
+ return INVALID_NASID;
+
+ /* find out if we are a completely disabled brd. */
+ hub = (klhub_t *)find_first_component(brd, KLSTRUCT_HUB);
+ if (!hub)
+ return INVALID_NASID;
+ if (!(hub->hub_info.flags & KLINFO_ENABLE)) /* disabled node brd */
+ return hub->hub_info.physid;
+ else
+ return brd->brd_nasid;
+}
+
+int
+xbow_port_io_enabled(nasid_t nasid, int link)
+{
+ lboard_t *brd;
+ klxbow_t *xbow_p;
+
+ /*
+ * look for boards that might contain an xbow or xbridge
+ */
+ brd = find_lboard_nasid((lboard_t *)KL_CONFIG_INFO(nasid), nasid, KLTYPE_IOBRICK_XBOW);
+ if (brd == NULL) return 0;
+
+ if ((xbow_p = (klxbow_t *)find_component(brd, NULL, KLSTRUCT_XBOW))
+ == NULL)
+ return 0;
+
+ if (!XBOW_PORT_TYPE_IO(xbow_p, link) || !XBOW_PORT_IS_ENABLED(xbow_p, link))
+ return 0;
+
+ return 1;
+}
+
+void
+board_to_path(lboard_t *brd, char *path)
+{
+ moduleid_t modnum;
+ char *board_name;
+ char buffer[16];
+
+ ASSERT(brd);
+
+ switch (KLCLASS(brd->brd_type)) {
+
+ case KLCLASS_NODE:
+ board_name = EDGE_LBL_NODE;
+ break;
+ case KLCLASS_ROUTER:
+ if (brd->brd_type == KLTYPE_META_ROUTER) {
+ board_name = EDGE_LBL_META_ROUTER;
+ hasmetarouter++;
+ } else if (brd->brd_type == KLTYPE_REPEATER_ROUTER) {
+ board_name = EDGE_LBL_REPEATER_ROUTER;
+ hasmetarouter++;
+ } else
+ board_name = EDGE_LBL_ROUTER;
+ break;
+ case KLCLASS_MIDPLANE:
+ board_name = EDGE_LBL_MIDPLANE;
+ break;
+ case KLCLASS_IO:
+ board_name = EDGE_LBL_IO;
+ break;
+ case KLCLASS_IOBRICK:
+ if (brd->brd_type == KLTYPE_PXBRICK)
+ board_name = EDGE_LBL_PXBRICK;
+ else if (brd->brd_type == KLTYPE_IXBRICK)
+ board_name = EDGE_LBL_IXBRICK;
+ else if (brd->brd_type == KLTYPE_OPUSBRICK)
+ board_name = EDGE_LBL_OPUSBRICK;
+ else if (brd->brd_type == KLTYPE_CGBRICK)
+ board_name = EDGE_LBL_CGBRICK;
+ else
+ board_name = EDGE_LBL_IOBRICK;
+ break;
+ default:
+ board_name = EDGE_LBL_UNKNOWN;
+ }
+
+ modnum = geo_module(brd->brd_geoid);
+ memset(buffer, 0, 16);
+ format_module_id(buffer, modnum, MODULE_FORMAT_BRIEF);
+ sprintf(path, EDGE_LBL_MODULE "/%s/" EDGE_LBL_SLAB "/%d/%s", buffer, geo_slab(brd->brd_geoid), board_name);
+}
+
+#define MHZ 1000000
+
+/*
+ * Get the serial number of the main component of a board
+ * Returns 0 if a valid serial number is found
+ * 1 otherwise.
+ * Assumptions: Nic manufacturing string has the following format
+ * *Serial:<serial_number>;*
+ */
+static int
+component_serial_number_get(lboard_t *board,
+ klconf_off_t mfg_nic_offset,
+ char *serial_number,
+ char *key_pattern)
+{
+
+ char *mfg_nic_string;
+ char *serial_string,*str;
+ int i;
+ char *serial_pattern = "Serial:";
+
+ /* We have an error on a null mfg nic offset */
+ if (!mfg_nic_offset)
+ return(1);
+ /* Get the hub's manufacturing nic information
+ * which is in the form of a pre-formatted string
+ */
+ mfg_nic_string =
+ (char *)NODE_OFFSET_TO_K0(NASID_GET(board),
+ mfg_nic_offset);
+ /* There is no manufacturing nic info */
+ if (!mfg_nic_string)
+ return(1);
+
+ str = mfg_nic_string;
+ /* Look for the key pattern first (if it is specified)
+ * and then print the serial number corresponding to that.
+ */
+ if (strcmp(key_pattern,"") &&
+ !(str = strstr(mfg_nic_string,key_pattern)))
+ return(1);
+
+ /* There is no serial number info in the manufacturing
+ * nic info
+ */
+ if (!(serial_string = strstr(str,serial_pattern)))
+ return(1);
+
+ serial_string = serial_string + strlen(serial_pattern);
+ /* Copy the serial number information from the klconfig */
+ i = 0;
+ while (serial_string[i] != ';') {
+ serial_number[i] = serial_string[i];
+ i++;
+ }
+ serial_number[i] = 0;
+
+ return(0);
+}
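The `*Serial:<serial_number>;*` scan performed above, as a standalone sketch with an explicit output bound added (extract_serial is an illustrative name, not kernel API):

```c
#include <stddef.h>
#include <string.h>

/* Copy the characters between "Serial:" and ';' into out.
 * Returns 0 on success, 1 if no ';'-terminated serial field is found. */
static int extract_serial(const char *mfg_nic_string, char *out, size_t outlen)
{
	const char *p = strstr(mfg_nic_string, "Serial:");
	size_t i = 0;

	if (!p)
		return 1;	/* no serial number info present */
	p += strlen("Serial:");
	while (p[i] != ';' && p[i] != '\0' && i + 1 < outlen) {
		out[i] = p[i];
		i++;
	}
	out[i] = '\0';
	return p[i] == ';' ? 0 : 1;
}
```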
+/*
+ * Get the serial number of a board
+ * Returns 0 if a valid serial number is found
+ * 1 otherwise.
+ */
+
+int
+board_serial_number_get(lboard_t *board,char *serial_number)
+{
+ ASSERT(board && serial_number);
+ if (!board || !serial_number)
+ return(1);
+
+ strcpy(serial_number,"");
+ switch(KLCLASS(board->brd_type)) {
+ case KLCLASS_CPU: { /* Node board */
+ klhub_t *hub;
+
+ /* Get the hub component information */
+ hub = (klhub_t *)find_first_component(board,
+ KLSTRUCT_HUB);
+ /* If we don't have a hub component on an IP27
+ * then we have a weird klconfig.
+ */
+ if (!hub)
+ return(1);
+ /* Get the serial number information from
+ * the hub's manufacturing nic info
+ */
+ if (component_serial_number_get(board,
+ hub->hub_mfg_nic,
+ serial_number,
+ "IP37"))
+ return(1);
+ break;
+ }
+ case KLCLASS_IO: { /* IO board */
+ klbri_t *bridge;
+
+ /* Get the bridge component information */
+ bridge = (klbri_t *)find_first_component(board,
+ KLSTRUCT_BRI);
+ /* If we don't have a bridge component on an IO board
+ * then we have a weird klconfig.
+ */
+ if (!bridge)
+ return(1);
+ /* Get the serial number information from
+ * the bridge's manufacturing nic info
+ */
+ if (component_serial_number_get(board,
+ bridge->bri_mfg_nic,
+ serial_number, ""))
+ return(1);
+ break;
+ }
+ case KLCLASS_ROUTER: { /* Router board */
+ klrou_t *router;
+
+ /* Get the router component information */
+ router = (klrou_t *)find_first_component(board,
+ KLSTRUCT_ROU);
+ /* If we don't have a router component on a router board
+ * then we have a weird klconfig.
+ */
+ if (!router)
+ return(1);
+ /* Get the serial number information from
+ * the router's manufacturing nic info
+ */
+ if (component_serial_number_get(board,
+ router->rou_mfg_nic,
+ serial_number,
+ ""))
+ return(1);
+ break;
+ }
+ case KLCLASS_GFX: { /* Gfx board */
+ klgfx_t *graphics;
+
+ /* Get the graphics component information */
+ graphics = (klgfx_t *)find_first_component(board, KLSTRUCT_GFX);
+ /* If we don't have a gfx component on a gfx board
+ * then we have a weird klconfig.
+ */
+ if (!graphics)
+ return(1);
+ /* Get the serial number information from
+ * the graphics board's manufacturing nic info
+ */
+ if (component_serial_number_get(board,
+ graphics->gfx_mfg_nic,
+ serial_number,
+ ""))
+ return(1);
+ break;
+ }
+ default:
+ strcpy(serial_number,"");
+ break;
+ }
+ return(0);
+}
+
+/*
+ * Format a module id for printing.
+ *
+ * There are three possible formats:
+ *
+ * MODULE_FORMAT_BRIEF is the brief 6-character format, including
+ * the actual brick-type as recorded in the
+ * moduleid_t, eg. 002c15 for a C-brick, or
+ * 101#17 for a PX-brick.
+ *
+ * MODULE_FORMAT_LONG is the hwgraph format, eg. rack/002/bay/15
+ * or rack/101/bay/17 (note that the brick
+ * type does not appear in this format).
+ *
+ * MODULE_FORMAT_LCD is like MODULE_FORMAT_BRIEF, except that it
+ * ensures that the module id provided appears
+ * exactly as it would on the LCD display of
+ * the corresponding brick, eg. still 002c15
+ * for a C-brick, but 101p17 for a PX-brick.
+ */
+void
+format_module_id(char *buffer, moduleid_t m, int fmt)
+{
+ int rack, position;
+ unsigned char brickchar;
+
+ rack = MODULE_GET_RACK(m);
+ ASSERT(MODULE_GET_BTYPE(m) < MAX_BRICK_TYPES);
+ brickchar = MODULE_GET_BTCHAR(m);
+
+ if (fmt == MODULE_FORMAT_LCD) {
+ /* Be sure we use the same brick type character as displayed
+ * on the brick's LCD
+ */
+ switch (brickchar)
+ {
+ case L1_BRICKTYPE_PX:
+ brickchar = L1_BRICKTYPE_P;
+ break;
+
+ case L1_BRICKTYPE_IX:
+ brickchar = L1_BRICKTYPE_I;
+ break;
+ }
+ }
+
+ position = MODULE_GET_BPOS(m);
+
+ if ((fmt == MODULE_FORMAT_BRIEF) || (fmt == MODULE_FORMAT_LCD)) {
+ /* Brief module number format, eg. 002c15 */
+
+ /* Decompress the rack number */
+ *buffer++ = '0' + RACK_GET_CLASS(rack);
+ *buffer++ = '0' + RACK_GET_GROUP(rack);
+ *buffer++ = '0' + RACK_GET_NUM(rack);
+
+ /* Add the brick type */
+ *buffer++ = brickchar;
+ }
+ else if (fmt == MODULE_FORMAT_LONG) {
+ /* Fuller hwgraph format, eg. rack/002/bay/15 */
+
+ strcpy(buffer, EDGE_LBL_RACK "/"); buffer += strlen(buffer);
+
+ *buffer++ = '0' + RACK_GET_CLASS(rack);
+ *buffer++ = '0' + RACK_GET_GROUP(rack);
+ *buffer++ = '0' + RACK_GET_NUM(rack);
+
+ strcpy(buffer, "/" EDGE_LBL_RPOS "/"); buffer += strlen(buffer);
+ }
+
+ /* Add the bay position, using at least two digits */
+ if (position < 10)
+ *buffer++ = '0';
+ sprintf(buffer, "%d", position);
+
+}
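The MODULE_FORMAT_BRIEF layout described in the comment above can be modeled in isolation. This is an illustrative sketch only: it assumes the rack digits and bay position have already been extracted, where the real code uses the RACK_GET_CLASS(), RACK_GET_GROUP(), RACK_GET_NUM() and MODULE_GET_BPOS() macros on a moduleid_t; the function name here is made up.

```c
#include <stdio.h>
#include <assert.h>

/* Hypothetical standalone rendering of the brief 6-character module id,
 * e.g. "002c15" for a C-brick in rack 2, bay 15. */
static void brief_module_id(char *buf, int rack_class, int rack_group,
                            int rack_num, char brickchar, int position)
{
    /* Decompressed rack number, three digits */
    *buf++ = (char)('0' + rack_class);
    *buf++ = (char)('0' + rack_group);
    *buf++ = (char)('0' + rack_num);
    *buf++ = brickchar;                 /* brick type, e.g. 'c' */
    sprintf(buf, "%02d", position);     /* bay position, at least two digits */
}
```

For rack 002, brick type 'c', bay 15 this yields "002c15", matching the example in the comment above.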
+
+int
+cbrick_type_get_nasid(nasid_t nasid)
+{
+ moduleid_t module;
+ int t;
+
+ module = iomoduleid_get(nasid);
+ if (module < 0 ) {
+ return MODULE_CBRICK;
+ }
+ t = MODULE_GET_BTYPE(module);
+ if ((char)t == 'o') {
+ return MODULE_OPUSBRICK;
+ } else {
+ return MODULE_CBRICK;
+ }
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+/*
+ * klgraph.c-
+ * This file specifies the interface between the kernel and the PROM's
+ * configuration data structures.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/sn_private.h>
+
+/* #define KLGRAPH_DEBUG 1 */
+#ifdef KLGRAPH_DEBUG
+#define GRPRINTF(x) printk x
+#else
+#define GRPRINTF(x)
+#endif
+
+void mark_cpuvertex_as_cpu(vertex_hdl_t vhdl, cpuid_t cpuid);
+
+
+/* ARGSUSED */
+static void __init
+klhwg_add_hub(vertex_hdl_t node_vertex, klhub_t *hub, cnodeid_t cnode)
+{
+ vertex_hdl_t myhubv;
+ vertex_hdl_t hub_mon;
+ int rc;
+ extern struct file_operations shub_mon_fops;
+
+ hwgraph_path_add(node_vertex, EDGE_LBL_HUB, &myhubv);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__,__LINE__, myhubv, NULL, "Created path for hub vertex for Shub node.\n");
+
+ rc = device_master_set(myhubv, node_vertex);
+ if (rc) {
+ printk("klhwg_add_hub: Unable to create hub vertex.\n");
+ return;
+ }
+ hub_mon = hwgraph_register(myhubv, EDGE_LBL_PERFMON,
+ 0, 0, 0, 0,
+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0,
+ &shub_mon_fops, (void *)(long)cnode);
+}
+
+/* ARGSUSED */
+static void __init
+klhwg_add_disabled_cpu(vertex_hdl_t node_vertex, cnodeid_t cnode, klcpu_t *cpu, slotid_t slot)
+{
+ vertex_hdl_t my_cpu;
+ char name[120];
+ cpuid_t cpu_id;
+ nasid_t nasid;
+
+ nasid = cnodeid_to_nasid(cnode);
+ cpu_id = nasid_slice_to_cpuid(nasid, cpu->cpu_info.physid);
+ if(cpu_id != -1){
+ snprintf(name, 120, "%s/%s/%c", EDGE_LBL_DISABLED, EDGE_LBL_CPU, 'a' + cpu->cpu_info.physid);
+ (void) hwgraph_path_add(node_vertex, name, &my_cpu);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__,__LINE__, my_cpu, NULL, "Created path for disabled cpu slice.\n");
+
+ mark_cpuvertex_as_cpu(my_cpu, cpu_id);
+ device_master_set(my_cpu, node_vertex);
+ return;
+ }
+}
+
+/* ARGSUSED */
+static void __init
+klhwg_add_cpu(vertex_hdl_t node_vertex, cnodeid_t cnode, klcpu_t *cpu)
+{
+ vertex_hdl_t my_cpu, cpu_dir;
+ char name[120];
+ cpuid_t cpu_id;
+ nasid_t nasid;
+
+ nasid = cnodeid_to_nasid(cnode);
+ cpu_id = nasid_slice_to_cpuid(nasid, cpu->cpu_info.physid);
+
+ snprintf(name, 120, "%s/%d/%c",
+ EDGE_LBL_CPUBUS,
+ 0,
+ 'a' + cpu->cpu_info.physid);
+
+ (void) hwgraph_path_add(node_vertex, name, &my_cpu);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__,__LINE__, my_cpu, NULL, "Created path for active cpu slice.\n");
+
+ mark_cpuvertex_as_cpu(my_cpu, cpu_id);
+ device_master_set(my_cpu, node_vertex);
+
+ /* Add an alias under the node's CPU directory */
+ if (hwgraph_edge_get(node_vertex, EDGE_LBL_CPU, &cpu_dir) == GRAPH_SUCCESS) {
+ snprintf(name, 120, "%c", 'a' + cpu->cpu_info.physid);
+ (void) hwgraph_edge_add(cpu_dir, my_cpu, name);
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__,__LINE__, cpu_dir, my_cpu, "Created %s from vhdl1 to vhdl2.\n", name);
+ }
+}
+
+
+static void __init
+klhwg_add_xbow(cnodeid_t cnode, nasid_t nasid)
+{
+ lboard_t *brd;
+ klxbow_t *xbow_p;
+ nasid_t hub_nasid;
+ cnodeid_t hub_cnode;
+ int widgetnum;
+ vertex_hdl_t xbow_v, hubv;
+ /*REFERENCED*/
+ graph_error_t err;
+
+ if (!(brd = find_lboard_nasid((lboard_t *)KL_CONFIG_INFO(nasid),
+ nasid, KLTYPE_IOBRICK_XBOW)))
+ return;
+
+ if (KL_CONFIG_DUPLICATE_BOARD(brd))
+ return;
+
+ if ((xbow_p = (klxbow_t *)find_component(brd, NULL, KLSTRUCT_XBOW))
+ == NULL)
+ return;
+
+ for (widgetnum = HUB_WIDGET_ID_MIN; widgetnum <= HUB_WIDGET_ID_MAX; widgetnum++) {
+ if (!XBOW_PORT_TYPE_HUB(xbow_p, widgetnum))
+ continue;
+
+ hub_nasid = XBOW_PORT_NASID(xbow_p, widgetnum);
+ if (hub_nasid == INVALID_NASID) {
+ printk(KERN_WARNING "hub widget %d has invalid nasid, skipping xbow graph\n", widgetnum);
+ continue;
+ }
+
+ hub_cnode = nasid_to_cnodeid(hub_nasid);
+
+ if (hub_cnode == INVALID_CNODEID) {
+ continue;
+ }
+
+ hubv = cnodeid_to_vertex(hub_cnode);
+
+ err = hwgraph_path_add(hubv, EDGE_LBL_XTALK, &xbow_v);
+ if (err != GRAPH_SUCCESS) {
+ if (err == GRAPH_DUP)
+ printk(KERN_WARNING "klhwg_add_xbow: Check for "
+ "working routers and router links!");
+
+ printk("klhwg_add_xbow: Failed to add "
+ "edge: vertex 0x%p to vertex 0x%p, "
+ "error %d\n",
+ (void *)hubv, (void *)xbow_v, err);
+ return;
+ }
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, xbow_v, NULL, "Created path for xtalk.\n");
+
+ xswitch_vertex_init(xbow_v);
+
+ NODEPDA(hub_cnode)->xbow_vhdl = xbow_v;
+
+ /*
+ * XXX - This won't work if we ever hook up two hubs
+ * by crosstown through a crossbow.
+ */
+ if (hub_nasid != nasid) {
+ NODEPDA(hub_cnode)->xbow_peer = nasid;
+ NODEPDA(nasid_to_cnodeid(nasid))->xbow_peer =
+ hub_nasid;
+ }
+ }
+}
+
+
+/* ARGSUSED */
+static void __init
+klhwg_add_node(vertex_hdl_t hwgraph_root, cnodeid_t cnode)
+{
+ nasid_t nasid;
+ lboard_t *brd;
+ klhub_t *hub;
+ vertex_hdl_t node_vertex = NULL;
+ char path_buffer[100];
+ int rv;
+ char *s;
+ int board_disabled = 0;
+ klcpu_t *cpu;
+ vertex_hdl_t cpu_dir;
+
+ nasid = cnodeid_to_nasid(cnode);
+ brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+ ASSERT(brd);
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(brd, path_buffer);
+ rv = hwgraph_path_add(hwgraph_root, path_buffer, &node_vertex);
+ if (rv != GRAPH_SUCCESS) {
+ printk("Node vertex creation failed. Path == %s\n", path_buffer);
+ return;
+ }
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, node_vertex, NULL, "Created path for SHUB node.\n");
+ hub = (klhub_t *)find_first_component(brd, KLSTRUCT_HUB);
+ ASSERT(hub);
+ board_disabled = !(hub->hub_info.flags & KLINFO_ENABLE);
+
+ if(!board_disabled) {
+ mark_nodevertex_as_node(node_vertex, cnode);
+ s = dev_to_name(node_vertex, path_buffer, sizeof(path_buffer));
+ NODEPDA(cnode)->hwg_node_name =
+ kmalloc(strlen(s) + 1, GFP_KERNEL);
+ if (NODEPDA(cnode)->hwg_node_name == NULL) {
+ printk("%s: no memory\n", __FUNCTION__);
+ return;
+ }
+ strcpy(NODEPDA(cnode)->hwg_node_name, s);
+ hubinfo_set(node_vertex, NODEPDA(cnode)->pdinfo);
+ NODEPDA(cnode)->slotdesc = brd->brd_slot;
+ NODEPDA(cnode)->geoid = brd->brd_geoid;
+ NODEPDA(cnode)->module = module_lookup(geo_module(brd->brd_geoid));
+ klhwg_add_hub(node_vertex, hub, cnode);
+ }
+
+ /*
+ * If there's at least 1 CPU, add a "cpu" directory to represent
+ * the collection of all CPUs attached to this node.
+ */
+ cpu = (klcpu_t *)find_first_component(brd, KLSTRUCT_CPU);
+ if (cpu) {
+ graph_error_t rv;
+
+ rv = hwgraph_path_add(node_vertex, EDGE_LBL_CPU, &cpu_dir);
+ if (rv != GRAPH_SUCCESS) {
+ printk("klhwg_add_node: Cannot create CPU directory\n");
+ return;
+ }
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, cpu_dir, NULL, "Created cpu directory on SHUB node.\n");
+
+ }
+
+ while (cpu) {
+ cpuid_t cpu_id;
+ cpu_id = nasid_slice_to_cpuid(nasid,cpu->cpu_info.physid);
+ if (cpu_online(cpu_id))
+ klhwg_add_cpu(node_vertex, cnode, cpu);
+ else
+ klhwg_add_disabled_cpu(node_vertex, cnode, cpu, brd->brd_slot);
+
+ cpu = (klcpu_t *)
+ find_component(brd, (klinfo_t *)cpu, KLSTRUCT_CPU);
+ }
+}
+
+
+/* ARGSUSED */
+static void __init
+klhwg_add_all_routers(vertex_hdl_t hwgraph_root)
+{
+ nasid_t nasid;
+ cnodeid_t cnode;
+ lboard_t *brd;
+ vertex_hdl_t node_vertex;
+ char path_buffer[100];
+ int rv;
+
+ for (cnode = 0; cnode < numnodes; cnode++) {
+ nasid = cnodeid_to_nasid(cnode);
+ brd = find_lboard_class_any((lboard_t *)KL_CONFIG_INFO(nasid),
+ KLTYPE_ROUTER);
+
+ if (!brd)
+ /* No routers stored in this node's memory */
+ continue;
+
+ do {
+ ASSERT(brd);
+
+ /* Don't add duplicate boards. */
+ if (brd->brd_flags & DUPLICATE_BOARD)
+ continue;
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(brd, path_buffer);
+
+ /* Add the router */
+ rv = hwgraph_path_add(hwgraph_root, path_buffer, &node_vertex);
+ if (rv != GRAPH_SUCCESS) {
+ printk("Router vertex creation "
+ "failed. Path == %s\n", path_buffer);
+ return;
+ }
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, node_vertex, NULL, "Created router path.\n");
+
+ /* Find the rest of the routers stored on this node. */
+ } while ( (brd = find_lboard_class_any(KLCF_NEXT_ANY(brd),
+ KLTYPE_ROUTER)) );
+ }
+
+}
+
+/* ARGSUSED */
+static void __init
+klhwg_connect_one_router(vertex_hdl_t hwgraph_root, lboard_t *brd,
+ cnodeid_t cnode, nasid_t nasid)
+{
+ klrou_t *router;
+ char path_buffer[50];
+ char dest_path[50];
+ vertex_hdl_t router_hndl;
+ vertex_hdl_t dest_hndl;
+ int rc;
+ int port;
+ lboard_t *dest_brd;
+
+ /* Don't add duplicate boards. */
+ if (brd->brd_flags & DUPLICATE_BOARD) {
+ return;
+ }
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(brd, path_buffer);
+
+ rc = hwgraph_traverse(hwgraph_root, path_buffer, &router_hndl);
+
+ if (rc != GRAPH_SUCCESS) {
+ printk(KERN_WARNING "Can't find router: %s", path_buffer);
+ return;
+ }
+
+ /* We don't know what to do with multiple router components */
+ if (brd->brd_numcompts != 1) {
+ printk("klhwg_connect_one_router: %d cmpts on router\n",
+ brd->brd_numcompts);
+ return;
+ }
+
+
+ /* Convert component 0 to klrou_t ptr */
+ router = (klrou_t *)NODE_OFFSET_TO_K0(NASID_GET(brd),
+ brd->brd_compts[0]);
+
+ for (port = 1; port <= MAX_ROUTER_PORTS; port++) {
+ /* See if the port's active */
+ if (router->rou_port[port].port_nasid == INVALID_NASID) {
+ GRPRINTF(("klhwg_connect_one_router: port %d inactive.\n",
+ port));
+ continue;
+ }
+ if (nasid_to_cnodeid(router->rou_port[port].port_nasid)
+ == INVALID_CNODEID) {
+ continue;
+ }
+
+ dest_brd = (lboard_t *)NODE_OFFSET_TO_K0(
+ router->rou_port[port].port_nasid,
+ router->rou_port[port].port_offset);
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(dest_brd, dest_path);
+
+ rc = hwgraph_traverse(hwgraph_root, dest_path, &dest_hndl);
+
+ if (rc != GRAPH_SUCCESS) {
+ if (KL_CONFIG_DUPLICATE_BOARD(dest_brd))
+ continue;
+ printk("Can't find router: %s", dest_path);
+ return;
+ }
+
+ sprintf(dest_path, "%d", port);
+
+ rc = hwgraph_edge_add(router_hndl, dest_hndl, dest_path);
+
+ if (rc == GRAPH_DUP) {
+ GRPRINTF(("Skipping port %d. nasid %d %s/%s\n",
+ port, router->rou_port[port].port_nasid,
+ path_buffer, dest_path));
+ continue;
+ }
+
+ if (rc != GRAPH_SUCCESS) {
+ printk("Can't create edge: %s/%s to vertex 0x%p error 0x%x\n",
+ path_buffer, dest_path, (void *)dest_hndl, rc);
+ return;
+ }
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, router_hndl, dest_hndl, "Created edge %s from vhdl1 to vhdl2.\n", dest_path);
+
+ }
+}
+
+
+static void __init
+klhwg_connect_routers(vertex_hdl_t hwgraph_root)
+{
+ nasid_t nasid;
+ cnodeid_t cnode;
+ lboard_t *brd;
+
+ for (cnode = 0; cnode < numnodes; cnode++) {
+ nasid = cnodeid_to_nasid(cnode);
+ brd = find_lboard_class_any((lboard_t *)KL_CONFIG_INFO(nasid),
+ KLTYPE_ROUTER);
+
+ if (!brd)
+ continue;
+
+ do {
+
+ nasid = cnodeid_to_nasid(cnode);
+
+ klhwg_connect_one_router(hwgraph_root, brd,
+ cnode, nasid);
+
+ /* Find the rest of the routers stored on this node. */
+ } while ( (brd = find_lboard_class_any(KLCF_NEXT_ANY(brd), KLTYPE_ROUTER)) );
+ }
+}
+
+
+
+static void __init
+klhwg_connect_hubs(vertex_hdl_t hwgraph_root)
+{
+ nasid_t nasid;
+ cnodeid_t cnode;
+ lboard_t *brd;
+ klhub_t *hub;
+ lboard_t *dest_brd;
+ vertex_hdl_t hub_hndl;
+ vertex_hdl_t dest_hndl;
+ char path_buffer[50];
+ char dest_path[50];
+ graph_error_t rc;
+ int port;
+
+ for (cnode = 0; cnode < numionodes; cnode++) {
+ nasid = cnodeid_to_nasid(cnode);
+
+ brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+
+ hub = (klhub_t *)find_first_component(brd, KLSTRUCT_HUB);
+ ASSERT(hub);
+
+ for (port = 1; port <= MAX_NI_PORTS; port++) {
+ if (hub->hub_port[port].port_nasid == INVALID_NASID) {
+ continue; /* Port not active */
+ }
+
+ if (nasid_to_cnodeid(hub->hub_port[port].port_nasid) == INVALID_CNODEID)
+ continue;
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(brd, path_buffer);
+ rc = hwgraph_traverse(hwgraph_root, path_buffer, &hub_hndl);
+
+ if (rc != GRAPH_SUCCESS)
+ printk(KERN_WARNING "Can't find hub: %s", path_buffer);
+
+ dest_brd = (lboard_t *)NODE_OFFSET_TO_K0(
+ hub->hub_port[port].port_nasid,
+ hub->hub_port[port].port_offset);
+
+ /* Generate a hardware graph path for this board. */
+ board_to_path(dest_brd, dest_path);
+
+ rc = hwgraph_traverse(hwgraph_root, dest_path, &dest_hndl);
+
+ if (rc != GRAPH_SUCCESS) {
+ if (KL_CONFIG_DUPLICATE_BOARD(dest_brd))
+ continue;
+ printk("Can't find board: %s", dest_path);
+ return;
+ } else {
+ char buf[1024];
+
+ rc = hwgraph_path_add(hub_hndl, EDGE_LBL_INTERCONNECT, &hub_hndl);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, hub_hndl, NULL, "Created link path.\n");
+
+ sprintf(buf,"%s/%s",path_buffer,EDGE_LBL_INTERCONNECT);
+ rc = hwgraph_traverse(hwgraph_root, buf, &hub_hndl);
+ sprintf(buf,"%d",port);
+ rc = hwgraph_edge_add(hub_hndl, dest_hndl, buf);
+
+ if (rc != GRAPH_SUCCESS) {
+ printk("Can't create edge: %s/%s to vertex 0x%p, error 0x%x\n",
+ path_buffer, dest_path, (void *)dest_hndl, rc);
+ return;
+ }
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, hub_hndl, dest_hndl, "Created edge %s from vhdl1 to vhdl2.\n", buf);
+ }
+ }
+ }
+}
+
+void __init
+klhwg_add_all_modules(vertex_hdl_t hwgraph_root)
+{
+ cmoduleid_t cm;
+ char name[128];
+ vertex_hdl_t vhdl;
+ vertex_hdl_t module_vhdl;
+ int rc;
+ char buffer[16];
+
+ /* Add devices under each module */
+
+ for (cm = 0; cm < nummodules; cm++) {
+ /* Use module as module vertex fastinfo */
+
+ memset(buffer, 0, 16);
+ format_module_id(buffer, sn_modules[cm]->id, MODULE_FORMAT_BRIEF);
+ sprintf(name, EDGE_LBL_MODULE "/%s", buffer);
+
+ rc = hwgraph_path_add(hwgraph_root, name, &module_vhdl);
+ ASSERT(rc == GRAPH_SUCCESS);
+ rc = rc;
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, module_vhdl, NULL, "Created module path.\n");
+
+ hwgraph_fastinfo_set(module_vhdl, (arbitrary_info_t) sn_modules[cm]);
+
+ /* Add system controller */
+ sprintf(name,
+ EDGE_LBL_MODULE "/%s/" EDGE_LBL_L1,
+ buffer);
+
+ rc = hwgraph_path_add(hwgraph_root, name, &vhdl);
+ ASSERT_ALWAYS(rc == GRAPH_SUCCESS);
+ rc = rc;
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, vhdl, NULL, "Created L1 path.\n");
+
+ hwgraph_info_add_LBL(vhdl, INFO_LBL_ELSC,
+ (arbitrary_info_t)1);
+
+ }
+}
+
+void __init
+klhwg_add_all_nodes(vertex_hdl_t hwgraph_root)
+{
+ cnodeid_t cnode;
+
+ for (cnode = 0; cnode < numionodes; cnode++) {
+ klhwg_add_node(hwgraph_root, cnode);
+ }
+
+ for (cnode = 0; cnode < numionodes; cnode++) {
+ klhwg_add_xbow(cnode, cnodeid_to_nasid(cnode));
+ }
+
+ /*
+ * As for router hardware inventory information, we set this
+ * up in router.c.
+ */
+
+ klhwg_add_all_routers(hwgraph_root);
+ klhwg_connect_routers(hwgraph_root);
+ klhwg_connect_hubs(hwgraph_root);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/router.h>
+#include <asm/sn/module.h>
+#include <asm/sn/ksys/l1.h>
+#include <asm/sn/nodepda.h>
+#include <asm/sn/clksupport.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/sn_sal.h>
+#include <linux/ctype.h>
+
+/* elsc_display_line writes up to 12 characters to either the top or bottom
+ * line of the L1 display. line points to a buffer containing the message
+ * to be displayed. The zero-based line number is specified by lnum (so
+ * lnum == 0 specifies the top line and lnum == 1 specifies the bottom).
+ * Lines longer than 12 characters, or line numbers not less than
+ * L1_DISPLAY_LINES, cause elsc_display_line to return an error.
+ */
+int elsc_display_line(nasid_t nasid, char *line, int lnum)
+{
+ return 0;
+}
+
+
+/*
+ * iobrick routines
+ */
+
+/* iobrick_rack_bay_type_get fills in the three int * arguments with the
+ * rack number, bay number and brick type of the L1 being addressed. Note
+ * that if the L1 operation fails and this function returns an error value,
+ * garbage may be written to brick_type.
+ */
+
+
+int iobrick_rack_bay_type_get( nasid_t nasid, uint *rack,
+ uint *bay, uint *brick_type )
+{
+ int result = 0;
+
+ if ( ia64_sn_sysctl_iobrick_module_get(nasid, &result) )
+ return( ELSC_ERROR_CMD_SEND );
+
+ *rack = (result & MODULE_RACK_MASK) >> MODULE_RACK_SHFT;
+ *bay = (result & MODULE_BPOS_MASK) >> MODULE_BPOS_SHFT;
+ *brick_type = (result & MODULE_BTYPE_MASK) >> MODULE_BTYPE_SHFT;
+ return 0;
+}
+
+
+int iomoduleid_get(nasid_t nasid)
+{
+ int result = 0;
+
+ if ( ia64_sn_sysctl_iobrick_module_get(nasid, &result) )
+ return( ELSC_ERROR_CMD_SEND );
+
+ return result;
+}
+
+int
+iobrick_type_get_nasid(nasid_t nasid)
+{
+ uint rack, bay, type;
+ int t, ret;
+ extern char brick_types[];
+
+ if ((ret = iobrick_rack_bay_type_get(nasid, &rack, &bay, &type)) < 0) {
+ return ret;
+ }
+
+ /* convert brick_type to lower case */
+ if ((type >= 'A') && (type <= 'Z'))
+ type = type - 'A' + 'a';
+
+ /* convert to a module.h brick type */
+ for( t = 0; t < MAX_BRICK_TYPES; t++ ) {
+ if( brick_types[t] == type ) {
+ return t;
+ }
+ }
+
+ return -1; /* unknown brick */
+}
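The table scan in iobrick_type_get_nasid() above can be sketched on its own. This is an assumption-laden model: the table contents below are made up for illustration, whereas the real code scans the kernel's brick_types[] array of MAX_BRICK_TYPES entries.

```c
#include <ctype.h>
#include <assert.h>

/* Hypothetical model: lower-case the L1-reported brick character,
 * then linearly scan a brick-type table for its index. */
static int brick_index(int type, const char *table, int ntypes)
{
    int t;

    type = tolower(type);           /* L1 may report 'C' or 'c' */
    for (t = 0; t < ntypes; t++)
        if (table[t] == type)
            return t;
    return -1;                      /* unknown brick */
}
```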
+
+/*
+ * given a L1 bricktype, return a bricktype string. This string is the
+ * string that will be used in the hwpath for I/O bricks
+ */
+char *
+iobrick_L1bricktype_to_name(int type)
+{
+ switch (type)
+ {
+ default:
+ return("Unknown");
+
+ case L1_BRICKTYPE_PX:
+ return(EDGE_LBL_PXBRICK);
+
+ case L1_BRICKTYPE_OPUS:
+ return(EDGE_LBL_OPUSBRICK);
+
+ case L1_BRICKTYPE_IX:
+ return(EDGE_LBL_IXBRICK);
+
+ case L1_BRICKTYPE_C:
+ return("Cbrick");
+
+ case L1_BRICKTYPE_R:
+ return("Rbrick");
+
+ case L1_BRICKTYPE_CHI_CG:
+ return(EDGE_LBL_CGBRICK);
+ }
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/bootmem.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/simulator.h>
+
+int maxcpus;
+
+extern xwidgetnum_t hub_widget_id(nasid_t);
+
+/* XXX - Move the meat of this to intr.c ? */
+/*
+ * Set up the platform-dependent fields in the nodepda.
+ */
+void init_platform_nodepda(nodepda_t *npda, cnodeid_t node)
+{
+ hubinfo_t hubinfo;
+ nasid_t nasid;
+
+ /* Allocate per-node platform-dependent data */
+
+ nasid = cnodeid_to_nasid(node);
+ if (node >= numnodes) /* Headless/memless IO nodes */
+ hubinfo = (hubinfo_t)alloc_bootmem_node(NODE_DATA(0), sizeof(struct hubinfo_s));
+ else
+ hubinfo = (hubinfo_t)alloc_bootmem_node(NODE_DATA(node), sizeof(struct hubinfo_s));
+
+ npda->pdinfo = (void *)hubinfo;
+ hubinfo->h_nodepda = npda;
+ hubinfo->h_cnodeid = node;
+
+ spin_lock_init(&hubinfo->h_crblock);
+
+ npda->xbow_peer = INVALID_NASID;
+
+ /*
+ * Initialize the linked list of
+ * router info pointers to the dependent routers
+ */
+ npda->npda_rip_first = NULL;
+
+ /*
+ * npda_rip_last always points to the place
+ * where the next element is to be inserted
+ * into the list
+ */
+ npda->npda_rip_last = &npda->npda_rip_first;
+ npda->geoid.any.type = GEO_TYPE_INVALID;
+
+ init_MUTEX_LOCKED(&npda->xbow_sema); /* init it locked? */
+}
+
+void
+init_platform_hubinfo(nodepda_t **nodepdaindr)
+{
+ cnodeid_t cnode;
+ hubinfo_t hubinfo;
+ nodepda_t *npda;
+ extern int numionodes;
+
+ if (IS_RUNNING_ON_SIMULATOR())
+ return;
+ for (cnode = 0; cnode < numionodes; cnode++) {
+ npda = nodepdaindr[cnode];
+ hubinfo = (hubinfo_t)npda->pdinfo;
+ hubinfo->h_nasid = cnodeid_to_nasid(cnode);
+ hubinfo->h_widgetid = hub_widget_id(hubinfo->h_nasid);
+ }
+}
+
+void
+update_node_information(cnodeid_t cnodeid)
+{
+ nodepda_t *npda = NODEPDA(cnodeid);
+ nodepda_router_info_t *npda_rip;
+
+ /* Go through the list of router info
+ * structures and copy some frequently
+ * accessed info from the info hanging
+ * off the corresponding router vertices
+ */
+ npda_rip = npda->npda_rip_first;
+ while(npda_rip) {
+ if (npda_rip->router_infop) {
+ npda_rip->router_portmask =
+ npda_rip->router_infop->ri_portmask;
+ npda_rip->router_slot =
+ npda_rip->router_infop->ri_slotnum;
+ } else {
+ /* No router, no ports. */
+ npda_rip->router_portmask = 0;
+ }
+ npda_rip = npda_rip->router_next;
+ }
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <asm/smp.h>
+#include <asm/irq.h>
+#include <asm/hw_irq.h>
+#include <asm/topology.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/sn2/shub_mmr_t.h>
+#include <asm/sn/sn2/shubio.h>
+#include <asm/sal.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/sn2/shub_mmr.h>
+#include <asm/sn/pda.h>
+
+extern irqpda_t *irqpdaindr;
+extern cnodeid_t master_node_get(vertex_hdl_t vhdl);
+extern nasid_t master_nasid;
+
+/* Initialize some shub registers for interrupts, both IO and error. */
+void intr_init_vecblk(cnodeid_t node)
+{
+ int nasid = cnodeid_to_nasid(node);
+ sh_ii_int0_config_u_t ii_int_config;
+ cpuid_t cpu;
+ cpuid_t cpu0, cpu1;
+ sh_ii_int0_enable_u_t ii_int_enable;
+ sh_int_node_id_config_u_t node_id_config;
+ sh_local_int5_config_u_t local5_config;
+ sh_local_int5_enable_u_t local5_enable;
+
+ if (is_headless_node(node) ) {
+ struct ia64_sal_retval ret_stuff;
+ int cnode;
+
+ /* retarget all interrupts on this node to the master node. */
+ node_id_config.sh_int_node_id_config_regval = 0;
+ node_id_config.sh_int_node_id_config_s.node_id = master_nasid;
+ node_id_config.sh_int_node_id_config_s.id_sel = 1;
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_INT_NODE_ID_CONFIG),
+ node_id_config.sh_int_node_id_config_regval);
+ cnode = nasid_to_cnodeid(master_nasid);
+ cpu = first_cpu(node_to_cpumask(cnode));
+ cpu = cpu_physical_id(cpu);
+ SAL_CALL(ret_stuff, SN_SAL_REGISTER_CE, nasid, cpu, master_nasid,0,0,0,0);
+ if (ret_stuff.status < 0)
+ printk("%s: SN_SAL_REGISTER_CE SAL_CALL failed\n",__FUNCTION__);
+ } else {
+ cpu = first_cpu(node_to_cpumask(node));
+ cpu = cpu_physical_id(cpu);
+ }
+
+ /* Get the physical id's of the cpu's on this node. */
+ cpu0 = nasid_slice_to_cpu_physical_id(nasid, 0);
+ cpu1 = nasid_slice_to_cpu_physical_id(nasid, 2);
+
+ HUB_S( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_PI_ERROR_MASK), 0);
+ HUB_S( (unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_PI_CRBP_ERROR_MASK), 0);
+
+ /* Config and enable UART interrupt, all nodes. */
+ local5_config.sh_local_int5_config_regval = 0;
+ local5_config.sh_local_int5_config_s.idx = SGI_UART_VECTOR;
+ local5_config.sh_local_int5_config_s.pid = cpu;
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT5_CONFIG),
+ local5_config.sh_local_int5_config_regval);
+
+ local5_enable.sh_local_int5_enable_regval = 0;
+ local5_enable.sh_local_int5_enable_s.uart_int = 1;
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_LOCAL_INT5_ENABLE),
+ local5_enable.sh_local_int5_enable_regval);
+
+
+ /* The II_INT_CONFIG register for cpu 0. */
+ ii_int_config.sh_ii_int0_config_regval = 0;
+ ii_int_config.sh_ii_int0_config_s.type = 0;
+ ii_int_config.sh_ii_int0_config_s.agt = 0;
+ ii_int_config.sh_ii_int0_config_s.pid = cpu0;
+ ii_int_config.sh_ii_int0_config_s.base = 0;
+
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT0_CONFIG),
+ ii_int_config.sh_ii_int0_config_regval);
+
+
+ /* The II_INT_CONFIG register for cpu 1. */
+ ii_int_config.sh_ii_int0_config_regval = 0;
+ ii_int_config.sh_ii_int0_config_s.type = 0;
+ ii_int_config.sh_ii_int0_config_s.agt = 0;
+ ii_int_config.sh_ii_int0_config_s.pid = cpu1;
+ ii_int_config.sh_ii_int0_config_s.base = 0;
+
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT1_CONFIG),
+ ii_int_config.sh_ii_int0_config_regval);
+
+
+ /* Enable interrupts for II_INT0 and 1. */
+ ii_int_enable.sh_ii_int0_enable_regval = 0;
+ ii_int_enable.sh_ii_int0_enable_s.ii_enable = 1;
+
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT0_ENABLE),
+ ii_int_enable.sh_ii_int0_enable_regval);
+ HUB_S((unsigned long *)GLOBAL_MMR_ADDR(nasid, SH_II_INT1_ENABLE),
+ ii_int_enable.sh_ii_int0_enable_regval);
+}
+
+static int intr_reserve_level(cpuid_t cpu, int bit)
+{
+ irqpda_t *irqs = irqpdaindr;
+ int min_shared;
+ int i;
+
+ if (bit < 0) {
+ for (i = IA64_SN2_FIRST_DEVICE_VECTOR; i <= IA64_SN2_LAST_DEVICE_VECTOR; i++) {
+ if (irqs->irq_flags[i] == 0) {
+ bit = i;
+ break;
+ }
+ }
+ }
+
+ if (bit < 0) { /* ran out of irqs. Have to share. This will be rare. */
+ min_shared = 256;
+ for (i=IA64_SN2_FIRST_DEVICE_VECTOR; i < IA64_SN2_LAST_DEVICE_VECTOR; i++) {
+ /* Share with the same device class */
+ /* XXX: gross layering violation.. */
+ if (irqpdaindr->device_dev[i] &&
+ irqpdaindr->curr->vendor == irqpdaindr->device_dev[i]->vendor &&
+ irqpdaindr->curr->device == irqpdaindr->device_dev[i]->device &&
+ irqpdaindr->share_count[i] < min_shared) {
+ min_shared = irqpdaindr->share_count[i];
+ bit = i;
+ }
+ }
+
+ min_shared = 256;
+ if (bit < 0) { /* didn't find a matching device, just pick one. This will be */
+ /* exceptionally rare. */
+ for (i=IA64_SN2_FIRST_DEVICE_VECTOR; i < IA64_SN2_LAST_DEVICE_VECTOR; i++) {
+ if (irqpdaindr->share_count[i] < min_shared) {
+ min_shared = irqpdaindr->share_count[i];
+ bit = i;
+ }
+ }
+ }
+ irqpdaindr->share_count[bit]++;
+ }
+
+ if (!(irqs->irq_flags[bit] & SN2_IRQ_SHARED)) {
+ if (irqs->irq_flags[bit] & SN2_IRQ_RESERVED)
+ return -1;
+ irqs->num_irq_used++;
+ }
+
+ irqs->irq_flags[bit] |= SN2_IRQ_RESERVED;
+ return bit;
+}
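The vector-selection policy in intr_reserve_level() above reduces to: prefer a completely unused vector; when none is free, share the vector with the lowest share count. A minimal standalone sketch of that policy, with names that only loosely mirror the irqpda fields:

```c
#include <assert.h>

#define NVEC 8

/* Illustrative model of the reservation policy: flags[i] == 0 means
 * vector i is unused; share_count[i] counts devices already sharing it. */
static int choose_vector(const int flags[NVEC], const int share_count[NVEC])
{
    int i, bit = -1, min_shared = 256;

    for (i = 0; i < NVEC; i++)
        if (flags[i] == 0)
            return i;               /* first unused vector wins */

    for (i = 0; i < NVEC; i++)      /* all in use: pick least-shared */
        if (share_count[i] < min_shared) {
            min_shared = share_count[i];
            bit = i;
        }
    return bit;
}
```

The real code additionally prefers sharing within the same PCI vendor/device class before falling back to the globally least-shared vector.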
+
+void intr_unreserve_level(cpuid_t cpu,
+ int bit)
+{
+ irqpda_t *irqs = irqpdaindr;
+
+ if (irqs->irq_flags[bit] & SN2_IRQ_RESERVED) {
+ irqs->num_irq_used--;
+ irqs->irq_flags[bit] &= ~SN2_IRQ_RESERVED;
+ }
+}
+
+int intr_connect_level(cpuid_t cpu, int bit)
+{
+ irqpda_t *irqs = irqpdaindr;
+
+ if (!(irqs->irq_flags[bit] & SN2_IRQ_SHARED) &&
+ (irqs->irq_flags[bit] & SN2_IRQ_CONNECTED))
+ return -1;
+
+ irqs->irq_flags[bit] |= SN2_IRQ_CONNECTED;
+ return bit;
+}
+
+int intr_disconnect_level(cpuid_t cpu, int bit)
+{
+ irqpda_t *irqs = irqpdaindr;
+
+ if (!(irqs->irq_flags[bit] & SN2_IRQ_CONNECTED))
+ return -1;
+ irqs->irq_flags[bit] &= ~SN2_IRQ_CONNECTED;
+ return bit;
+}
+
+/*
+ * Choose a cpu on this node.
+ *
+ * We choose the one with the least number of int's assigned to it.
+ */
+static cpuid_t intr_cpu_choose_from_node(cnodeid_t cnode)
+{
+ cpuid_t cpu, best_cpu = CPU_NONE;
+ int slice, min_count = 1000;
+
+ for (slice = CPUS_PER_NODE - 1; slice >= 0; slice--) {
+ int intrs;
+
+ cpu = cnode_slice_to_cpuid(cnode, slice);
+ if (cpu == NR_CPUS)
+ continue;
+ if (!cpu_online(cpu))
+ continue;
+
+ intrs = pdacpu(cpu)->sn_num_irqs;
+
+ if (min_count > intrs) {
+ min_count = intrs;
+ best_cpu = cpu;
+ if (enable_shub_wars_1_1()) {
+ /*
+ * Rather than finding the best cpu, always
+ * return the first cpu. This forces all
+ * interrupts to the same cpu
+ */
+ break;
+ }
+ }
+ }
+ if (best_cpu != CPU_NONE)
+ pdacpu(best_cpu)->sn_num_irqs++;
+ return best_cpu;
+}
+
+/*
+ * We couldn't put it on the closest node. Try to find another one.
+ * Do a stupid round-robin assignment of the node.
+ */
+static cpuid_t intr_cpu_choose_node(void)
+{
+ static cnodeid_t last_node = -1; /* XXX: racy */
+ cnodeid_t candidate_node;
+ cpuid_t cpuid;
+
+ if (last_node >= numnodes)
+ last_node = 0;
+
+ for (candidate_node = last_node + 1; candidate_node != last_node;
+ candidate_node++) {
+ if (candidate_node == numnodes)
+ candidate_node = 0;
+ cpuid = intr_cpu_choose_from_node(candidate_node);
+ if (cpuid != CPU_NONE)
+ return cpuid;
+ }
+
+ return CPU_NONE;
+}
+
+/*
+ * Find the node to assign for this interrupt.
+ *
+ * SN2 + pcibr addressing limitation:
+ * Due to this limitation, all interrupts from a given bridge must
+ * go to the same node. The interrupt must also be targeted at
+ * the same processor. This limitation does not exist on PIC.
+ * But, the processor limitation will stay. The limitation will be
+ * similar to the bedrock/xbridge limit regarding PI's
+ */
+cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit)
+{
+ cpuid_t cpuid;
+ vertex_hdl_t pconn_vhdl;
+ pcibr_soft_t pcibr_soft;
+ int bit;
+
+ /* XXX: gross layering violation.. */
+ if (hwgraph_edge_get(dev, EDGE_LBL_PCI, &pconn_vhdl) == GRAPH_SUCCESS) {
+ pcibr_soft = pcibr_soft_get(pconn_vhdl);
+ if (pcibr_soft && pcibr_soft->bsi_err_intr) {
+ /*
+ * The cpu was chosen already when we assigned
+ * the error interrupt.
+ */
+ cpuid = ((hub_intr_t)pcibr_soft->bsi_err_intr)->i_cpuid;
+ goto done;
+ }
+ }
+
+ /*
+ * Need to choose one. Try the controlling c-brick first.
+ */
+ cpuid = intr_cpu_choose_from_node(master_node_get(dev));
+ if (cpuid == CPU_NONE)
+ cpuid = intr_cpu_choose_node();
+
+ done:
+ if (cpuid != CPU_NONE) {
+ bit = intr_reserve_level(cpuid, req_bit);
+ if (bit >= 0) {
+ *resp_bit = bit;
+ return cpuid;
+ }
+ }
+
+ printk("Cannot target interrupt to cpu (%ld).\n", cpuid);
+ return CPU_NONE;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/ctype.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/xtalk/xtalkaddrs.h>
+#include <asm/sn/ksys/l1.h>
+
+/* #define IOGRAPH_DEBUG */
+#ifdef IOGRAPH_DEBUG
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif /* IOGRAPH_DEBUG */
+
+/* At most 2 hubs can be connected to an xswitch */
+#define NUM_XSWITCH_VOLUNTEER 2
+
+/*
+ * Track which hubs have volunteered to manage devices hanging off of
+ * a Crosstalk Switch (e.g. xbow). This structure is allocated,
+ * initialized, and hung off the xswitch vertex early on, when that
+ * vertex is created.
+ */
+typedef struct xswitch_vol_s {
+ struct semaphore xswitch_volunteer_mutex;
+ int xswitch_volunteer_count;
+ vertex_hdl_t xswitch_volunteer[NUM_XSWITCH_VOLUNTEER];
+} *xswitch_vol_t;
+
+void
+xswitch_vertex_init(vertex_hdl_t xswitch)
+{
+ xswitch_vol_t xvolinfo;
+ int rc;
+
+ xvolinfo = kmalloc(sizeof(struct xswitch_vol_s), GFP_KERNEL);
+ if (!xvolinfo) {
+ printk(KERN_WARNING "xswitch_vertex_init(): Unable to "
+ "allocate memory\n");
+ return;
+ }
+ memset(xvolinfo, 0, sizeof(struct xswitch_vol_s));
+ init_MUTEX(&xvolinfo->xswitch_volunteer_mutex);
+ rc = hwgraph_info_add_LBL(xswitch, INFO_LBL_XSWITCH_VOL,
+ (arbitrary_info_t)xvolinfo);
+ ASSERT(rc == GRAPH_SUCCESS); rc = rc;
+}
+
+
+/*
+ * When assignment of hubs to widgets is complete, we no longer need the
+ * xswitch volunteer structure hanging around. Destroy it.
+ */
+static void
+xswitch_volunteer_delete(vertex_hdl_t xswitch)
+{
+ xswitch_vol_t xvolinfo;
+ int rc;
+
+ rc = hwgraph_info_remove_LBL(xswitch,
+ INFO_LBL_XSWITCH_VOL,
+ (arbitrary_info_t *)&xvolinfo);
+	if (rc == GRAPH_SUCCESS && xvolinfo != NULL)
+		kfree(xvolinfo);
+}
+/*
+ * A Crosstalk master volunteers to manage xwidgets on the specified xswitch.
+ */
+/* ARGSUSED */
+static void
+volunteer_for_widgets(vertex_hdl_t xswitch, vertex_hdl_t master)
+{
+ xswitch_vol_t xvolinfo = NULL;
+ vertex_hdl_t hubv;
+ hubinfo_t hubinfo;
+
+ (void)hwgraph_info_get_LBL(xswitch,
+ INFO_LBL_XSWITCH_VOL,
+ (arbitrary_info_t *)&xvolinfo);
+ if (xvolinfo == NULL) {
+ if (!is_headless_node_vertex(master)) {
+ char name[MAXDEVNAME];
+ printk(KERN_WARNING
+ "volunteer for widgets: vertex %s has no info label",
+ vertex_to_name(xswitch, name, MAXDEVNAME));
+ }
+ return;
+ }
+
+ down(&xvolinfo->xswitch_volunteer_mutex);
+ ASSERT(xvolinfo->xswitch_volunteer_count < NUM_XSWITCH_VOLUNTEER);
+ xvolinfo->xswitch_volunteer[xvolinfo->xswitch_volunteer_count] = master;
+ xvolinfo->xswitch_volunteer_count++;
+
+ /*
+ * if dual ported, make the lowest widgetid always be
+ * xswitch_volunteer[0].
+ */
+ if (xvolinfo->xswitch_volunteer_count == NUM_XSWITCH_VOLUNTEER) {
+ hubv = xvolinfo->xswitch_volunteer[0];
+ hubinfo_get(hubv, &hubinfo);
+ if (hubinfo->h_widgetid != XBOW_HUBLINK_LOW) {
+ xvolinfo->xswitch_volunteer[0] =
+ xvolinfo->xswitch_volunteer[1];
+ xvolinfo->xswitch_volunteer[1] = hubv;
+ }
+ }
+ up(&xvolinfo->xswitch_volunteer_mutex);
+}
+
+extern int xbow_port_io_enabled(nasid_t nasid, int widgetnum);
+
+/*
+ * Assign all the xwidgets hanging off the specified xswitch to the
+ * Crosstalk masters that have volunteered for xswitch duty.
+ */
+/* ARGSUSED */
+static void
+assign_widgets_to_volunteers(vertex_hdl_t xswitch, vertex_hdl_t hubv)
+{
+ xswitch_info_t xswitch_info;
+ xswitch_vol_t xvolinfo = NULL;
+ xwidgetnum_t widgetnum;
+ int num_volunteer;
+ nasid_t nasid;
+ hubinfo_t hubinfo;
+ extern int iobrick_type_get_nasid(nasid_t);
+
+
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+
+ xswitch_info = xswitch_info_get(xswitch);
+ ASSERT(xswitch_info != NULL);
+
+ (void)hwgraph_info_get_LBL(xswitch,
+ INFO_LBL_XSWITCH_VOL,
+ (arbitrary_info_t *)&xvolinfo);
+ if (xvolinfo == NULL) {
+ if (!is_headless_node_vertex(hubv)) {
+ char name[MAXDEVNAME];
+ printk(KERN_WARNING
+		       "assign_widgets_to_volunteers: vertex %s has "
+		       "no info label",
+ vertex_to_name(xswitch, name, MAXDEVNAME));
+ }
+ return;
+ }
+
+ num_volunteer = xvolinfo->xswitch_volunteer_count;
+ ASSERT(num_volunteer > 0);
+
+ /* Assign master hub for xswitch itself. */
+ if (HUB_WIDGET_ID_MIN > 0) {
+ hubv = xvolinfo->xswitch_volunteer[0];
+ xswitch_info_master_assignment_set(xswitch_info, (xwidgetnum_t)0, hubv);
+ }
+
+ /*
+ * TBD: Use administrative information to alter assignment of
+ * widgets to hubs.
+ */
+ for (widgetnum=HUB_WIDGET_ID_MIN; widgetnum <= HUB_WIDGET_ID_MAX; widgetnum++) {
+ int i;
+
+ if (!xbow_port_io_enabled(nasid, widgetnum))
+ continue;
+
+ /*
+ * If this is the master IO board, assign it to the same
+ * hub that owned it in the prom.
+ */
+ if (is_master_baseio_nasid_widget(nasid, widgetnum)) {
+ extern nasid_t snia_get_master_baseio_nasid(void);
+ for (i=0; i<num_volunteer; i++) {
+ hubv = xvolinfo->xswitch_volunteer[i];
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+ if (nasid == snia_get_master_baseio_nasid())
+ goto do_assignment;
+ }
+ printk("Nasid == %d, console nasid == %d",
+ nasid, snia_get_master_baseio_nasid());
+ nasid = 0;
+ }
+
+ /*
+ * Assuming that we're dual-hosted and that PCI cards
+ * are naturally placed left-to-right, alternate PCI
+ * buses across both Cbricks. For Pbricks, and Ibricks,
+ * io_brick_map_widget() returns the PCI bus number
+ * associated with the given brick type and widget number.
+ * For Xbricks, it returns the XIO slot number.
+ */
+
+ i = 0;
+ if (num_volunteer > 1) {
+ int bt;
+
+ bt = iobrick_type_get_nasid(nasid);
+ if (bt >= 0) {
+ i = io_brick_map_widget(bt, widgetnum) & 1;
+ }
+ }
+
+ hubv = xvolinfo->xswitch_volunteer[i];
+
+do_assignment:
+ /*
+ * At this point, we want to make hubv the master of widgetnum.
+ */
+ xswitch_info_master_assignment_set(xswitch_info, widgetnum, hubv);
+ }
+
+ xswitch_volunteer_delete(xswitch);
+}
+
+/*
+ * Probe to see if this hub's xtalk link is active. If so,
+ * return the Crosstalk Identification of the widget that we talk to.
+ * This is called before any of the Crosstalk infrastructure for
+ * this hub is set up. It's usually called on the node that we're
+ * probing, but not always.
+ *
+ * TBD: Prom code should actually do this work, and pass through
+ * hwid for our use.
+ */
+static void
+early_probe_for_widget(vertex_hdl_t hubv, xwidget_hwid_t hwid)
+{
+ nasid_t nasid;
+ hubinfo_t hubinfo;
+ hubreg_t llp_csr_reg;
+ widgetreg_t widget_id;
+ int result = 0;
+
+ hwid->part_num = XWIDGET_PART_NUM_NONE;
+ hwid->rev_num = XWIDGET_REV_NUM_NONE;
+ hwid->mfg_num = XWIDGET_MFG_NUM_NONE;
+
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+
+ llp_csr_reg = REMOTE_HUB_L(nasid, IIO_LLP_CSR);
+ if (!(llp_csr_reg & IIO_LLP_CSR_IS_UP))
+ return;
+
+ /* Read the Cross-Talk Widget Id on the other end */
+ result = snia_badaddr_val((volatile void *)
+ (RAW_NODE_SWIN_BASE(nasid, 0x0) + WIDGET_ID),
+ 4, (void *) &widget_id);
+
+ if (result == 0) { /* Found something connected */
+ hwid->part_num = XWIDGET_PART_NUM(widget_id);
+ hwid->rev_num = XWIDGET_REV_NUM(widget_id);
+ hwid->mfg_num = XWIDGET_MFG_NUM(widget_id);
+
+ /* TBD: link reset */
+ } else {
+
+ hwid->part_num = XWIDGET_PART_NUM_NONE;
+ hwid->rev_num = XWIDGET_REV_NUM_NONE;
+ hwid->mfg_num = XWIDGET_MFG_NUM_NONE;
+ }
+}
+
+/*
+ * io_xswitch_widget_init
+ *
+ */
+
+static void
+io_xswitch_widget_init(vertex_hdl_t xswitchv,
+ vertex_hdl_t hubv,
+ xwidgetnum_t widgetnum)
+{
+ xswitch_info_t xswitch_info;
+ xwidgetnum_t hub_widgetid;
+ vertex_hdl_t widgetv;
+ cnodeid_t cnode;
+ widgetreg_t widget_id;
+ nasid_t nasid, peer_nasid;
+ struct xwidget_hwid_s hwid;
+ hubinfo_t hubinfo;
+ /*REFERENCED*/
+ int rc;
+ char pathname[128];
+ lboard_t *board = NULL;
+ char buffer[16];
+ char bt;
+ moduleid_t io_module;
+ slotid_t get_widget_slotnum(int xbow, int widget);
+
+ DBG("\nio_xswitch_widget_init: hubv 0x%p, xswitchv 0x%p, widgetnum 0x%x\n", hubv, xswitchv, widgetnum);
+
+ /*
+ * Verify that xswitchv is indeed an attached xswitch.
+ */
+ xswitch_info = xswitch_info_get(xswitchv);
+ ASSERT(xswitch_info != NULL);
+
+ hubinfo_get(hubv, &hubinfo);
+ nasid = hubinfo->h_nasid;
+ cnode = nasid_to_cnodeid(nasid);
+ hub_widgetid = hubinfo->h_widgetid;
+
+ /*
+ * Check that the widget is an io widget and is enabled
+ * on this nasid or the `peer' nasid. The peer nasid
+ * is the other hub/bedrock connected to the xbow.
+ */
+ peer_nasid = NODEPDA(cnode)->xbow_peer;
+ if (peer_nasid == INVALID_NASID)
+ /* If I don't have a peer, use myself. */
+ peer_nasid = nasid;
+ if (!xbow_port_io_enabled(nasid, widgetnum) &&
+ !xbow_port_io_enabled(peer_nasid, widgetnum)) {
+ return;
+ }
+
+ if (xswitch_info_link_ok(xswitch_info, widgetnum)) {
+ char name[4];
+ lboard_t dummy;
+
+
+ /*
+ * If the current hub is not supposed to be the master
+ * for this widgetnum, then skip this widget.
+ */
+ if (xswitch_info_master_assignment_get(xswitch_info,
+ widgetnum) != hubv) {
+ return;
+ }
+
+ board = find_lboard_class_nasid( (lboard_t *)KL_CONFIG_INFO(nasid),
+ nasid, KLCLASS_IOBRICK);
+ if (!board && NODEPDA(cnode)->xbow_peer != INVALID_NASID) {
+ board = find_lboard_class_nasid(
+ (lboard_t *)KL_CONFIG_INFO( NODEPDA(cnode)->xbow_peer),
+ NODEPDA(cnode)->xbow_peer, KLCLASS_IOBRICK);
+ }
+
+ if (board) {
+ DBG("io_xswitch_widget_init: Found KLTYPE_IOBRICK Board 0x%p brd_type 0x%x\n", board, board->brd_type);
+ } else {
+ DBG("io_xswitch_widget_init: FIXME did not find IOBOARD\n");
+ board = &dummy;
+ }
+
+
+ /* Copy over the nodes' geoid info */
+ {
+ lboard_t *brd;
+
+ brd = find_lboard_any((lboard_t *)KL_CONFIG_INFO(nasid), KLTYPE_SNIA);
+ if ( brd != (lboard_t *)0 ) {
+ board->brd_geoid = brd->brd_geoid;
+ }
+ }
+
+ /*
+ * Make sure we really want to say xbrick, pbrick,
+ * etc. rather than XIO, graphics, etc.
+ */
+
+ memset(buffer, 0, 16);
+ format_module_id(buffer, geo_module(board->brd_geoid), MODULE_FORMAT_BRIEF);
+
+ sprintf(pathname, EDGE_LBL_MODULE "/%s/" EDGE_LBL_SLAB "/%d" "/%s" "/%s/%d",
+ buffer,
+ geo_slab(board->brd_geoid),
+ (board->brd_type == KLTYPE_PXBRICK) ? EDGE_LBL_PXBRICK :
+ (board->brd_type == KLTYPE_IXBRICK) ? EDGE_LBL_IXBRICK :
+ (board->brd_type == KLTYPE_CGBRICK) ? EDGE_LBL_CGBRICK :
+ (board->brd_type == KLTYPE_OPUSBRICK) ? EDGE_LBL_OPUSBRICK : "?brick",
+ EDGE_LBL_XTALK, widgetnum);
+
+ DBG("io_xswitch_widget_init: path= %s\n", pathname);
+ rc = hwgraph_path_add(hwgraph_root, pathname, &widgetv);
+
+ ASSERT(rc == GRAPH_SUCCESS);
+
+		/* This is needed to let user programs map the
+		 * module and slot numbers to the corresponding widget
+		 * numbers on the crossbow.
+		 */
+ device_master_set(hwgraph_connectpt_get(widgetv), hubv);
+ sprintf(name, "%d", widgetnum);
+ DBG("io_xswitch_widget_init: FIXME hwgraph_edge_add %s xswitchv 0x%p, widgetv 0x%p\n", name, xswitchv, widgetv);
+ rc = hwgraph_edge_add(xswitchv, widgetv, name);
+
+ /*
+ * crosstalk switch code tracks which
+ * widget is attached to each link.
+ */
+ xswitch_info_vhdl_set(xswitch_info, widgetnum, widgetv);
+
+ /*
+ * Peek at the widget to get its crosstalk part and
+ * mfgr numbers, then present it to the generic xtalk
+ * bus provider to have its driver attach routine
+ * called (or not).
+ */
+ widget_id = XWIDGET_ID_READ(nasid, widgetnum);
+ hwid.part_num = XWIDGET_PART_NUM(widget_id);
+ hwid.rev_num = XWIDGET_REV_NUM(widget_id);
+ hwid.mfg_num = XWIDGET_MFG_NUM(widget_id);
+
+ (void)xwidget_register(&hwid, widgetv, widgetnum,
+ hubv, hub_widgetid);
+
+ io_module = iomoduleid_get(nasid);
+ if (io_module >= 0) {
+ char buffer[16];
+ vertex_hdl_t to, from;
+ char *brick_name;
+ extern char *iobrick_L1bricktype_to_name(int type);
+
+
+ memset(buffer, 0, 16);
+ format_module_id(buffer, geo_module(board->brd_geoid), MODULE_FORMAT_BRIEF);
+
+ if ( isupper(MODULE_GET_BTCHAR(io_module)) ) {
+ bt = tolower(MODULE_GET_BTCHAR(io_module));
+ }
+ else {
+ bt = MODULE_GET_BTCHAR(io_module);
+ }
+
+ brick_name = iobrick_L1bricktype_to_name(bt);
+
+ /* Add a helper vertex so xbow monitoring
+ * can identify the brick type. It's simply
+ * an edge from the widget 0 vertex to the
+ * brick vertex.
+ */
+
+ sprintf(pathname, EDGE_LBL_HW "/" EDGE_LBL_MODULE "/%s/"
+ EDGE_LBL_SLAB "/%d/"
+ EDGE_LBL_NODE "/" EDGE_LBL_XTALK "/"
+ "0",
+ buffer, geo_slab(board->brd_geoid));
+ from = hwgraph_path_to_vertex(pathname);
+ ASSERT_ALWAYS(from);
+ sprintf(pathname, EDGE_LBL_HW "/" EDGE_LBL_MODULE "/%s/"
+ EDGE_LBL_SLAB "/%d/"
+ "%s",
+ buffer, geo_slab(board->brd_geoid), brick_name);
+
+ to = hwgraph_path_to_vertex(pathname);
+ ASSERT_ALWAYS(to);
+ rc = hwgraph_edge_add(from, to,
+ EDGE_LBL_INTERCONNECT);
+ if (rc != -EEXIST && rc != GRAPH_SUCCESS) {
+ printk("%s: Unable to establish link"
+ " for xbmon.", pathname);
+ }
+ }
+
+ }
+}
+
+
+static void
+io_init_xswitch_widgets(vertex_hdl_t xswitchv, cnodeid_t cnode)
+{
+ xwidgetnum_t widgetnum;
+
+ DBG("io_init_xswitch_widgets: xswitchv 0x%p for cnode %d\n", xswitchv, cnode);
+
+ for (widgetnum = HUB_WIDGET_ID_MIN; widgetnum <= HUB_WIDGET_ID_MAX;
+ widgetnum++) {
+ io_xswitch_widget_init(xswitchv,
+ cnodeid_to_vertex(cnode),
+ widgetnum);
+ }
+}
+
+/*
+ * Initialize all I/O on the specified node.
+ */
+static void
+io_init_node(cnodeid_t cnodeid)
+{
+ /*REFERENCED*/
+ vertex_hdl_t hubv, switchv, widgetv;
+ struct xwidget_hwid_s hwid;
+ hubinfo_t hubinfo;
+ int is_xswitch;
+ nodepda_t *npdap;
+ struct semaphore *peer_sema = 0;
+ uint32_t widget_partnum;
+
+ npdap = NODEPDA(cnodeid);
+
+ /*
+ * Get the "top" vertex for this node's hardware
+ * graph; it will carry the per-hub hub-specific
+ * data, and act as the crosstalk provider master.
+	 * Its canonical path is probably something of the
+ * form /hw/module/%M/slot/%d/node
+ */
+ hubv = cnodeid_to_vertex(cnodeid);
+ DBG("io_init_node: Initialize IO for cnode %d hubv(node) 0x%p npdap 0x%p\n", cnodeid, hubv, npdap);
+
+ ASSERT(hubv != GRAPH_VERTEX_NONE);
+
+ /*
+ * attach our hub_provider information to hubv,
+ * so we can use it as a crosstalk provider "master"
+ * vertex.
+ */
+ xtalk_provider_register(hubv, &hub_provider);
+ xtalk_provider_startup(hubv);
+
+ /*
+ * If nothing connected to this hub's xtalk port, we're done.
+ */
+ early_probe_for_widget(hubv, &hwid);
+ if (hwid.part_num == XWIDGET_PART_NUM_NONE) {
+ DBG("**** io_init_node: Node's 0x%p hub widget has XWIDGET_PART_NUM_NONE ****\n", hubv);
+ return;
+ /* NOTREACHED */
+ }
+
+ /*
+ * Create a vertex to represent the crosstalk bus
+ * attached to this hub, and a vertex to be used
+ * as the connect point for whatever is out there
+ * on the other side of our crosstalk connection.
+ *
+ * Crosstalk Switch drivers "climb up" from their
+ * connection point to try and take over the switch
+ * point.
+ *
+	 * Of course, the edges and vertices may already
+ * exist, in which case our net effect is just to
+ * associate the "xtalk_" driver with the connection
+ * point for the device.
+ */
+
+ (void)hwgraph_path_add(hubv, EDGE_LBL_XTALK, &switchv);
+
+ DBG("io_init_node: Created 'xtalk' entry to '../node/' xtalk vertex 0x%p\n", switchv);
+
+ ASSERT(switchv != GRAPH_VERTEX_NONE);
+
+ (void)hwgraph_edge_add(hubv, switchv, EDGE_LBL_IO);
+
+ DBG("io_init_node: Created symlink 'io' from ../node/io to ../node/xtalk \n");
+
+ /*
+ * We need to find the widget id and update the basew_id field
+	 * accordingly. In particular, SN00 has a directly connected
+	 * bridge, and hence its widget id is not 0.
+ */
+ widget_partnum = (((*(volatile int32_t *)(NODE_SWIN_BASE
+ (cnodeid_to_nasid(cnodeid), 0) +
+ WIDGET_ID))) & WIDGET_PART_NUM)
+ >> WIDGET_PART_NUM_SHFT;
+
+ if ((widget_partnum == XBOW_WIDGET_PART_NUM) ||
+ (widget_partnum == XXBOW_WIDGET_PART_NUM) ||
+ (widget_partnum == PXBOW_WIDGET_PART_NUM) ) {
+ /*
+ * Xbow control register does not have the widget ID field.
+ * So, hard code the widget ID to be zero.
+ */
+ DBG("io_init_node: Found XBOW widget_partnum= 0x%x\n", widget_partnum);
+ npdap->basew_id = 0;
+
+ } else {
+ void *bridge;
+
+ bridge = (void *)NODE_SWIN_BASE(cnodeid_to_nasid(cnodeid), 0);
+ npdap->basew_id = pcireg_bridge_control_get(bridge) & WIDGET_WIDGET_ID;
+
+ printk(" ****io_init_node: Unknown Widget Part Number 0x%x Widget ID 0x%x attached to Hubv 0x%p ****\n", widget_partnum, npdap->basew_id, (void *)hubv);
+ return;
+ }
+ {
+ char widname[10];
+ sprintf(widname, "%x", npdap->basew_id);
+ (void)hwgraph_path_add(switchv, widname, &widgetv);
+ DBG("io_init_node: Created '%s' to '..node/xtalk/' vertex 0x%p\n", widname, widgetv);
+ ASSERT(widgetv != GRAPH_VERTEX_NONE);
+ }
+
+ nodepda->basew_xc = widgetv;
+
+ is_xswitch = xwidget_hwid_is_xswitch(&hwid);
+
+ /*
+ * Try to become the master of the widget. If this is an xswitch
+ * with multiple hubs connected, only one will succeed. Mastership
+ * of an xswitch is used only when touching registers on that xswitch.
+ * The slave xwidgets connected to the xswitch can be owned by various
+ * masters.
+ */
+ if (device_master_set(widgetv, hubv) == 0) {
+
+ /* Only one hub (thread) per Crosstalk device or switch makes
+ * it to here.
+ */
+
+ /*
+ * Initialize whatever xwidget is hanging off our hub.
+ * Whatever it is, it's accessible through widgetnum 0.
+ */
+ hubinfo_get(hubv, &hubinfo);
+
+ (void)xwidget_register(&hwid, widgetv, npdap->basew_id, hubv, hubinfo->h_widgetid);
+
+ /*
+ * Special handling for Crosstalk Switches (e.g. xbow).
+ * We need to do things in roughly the following order:
+ * 1) Initialize xswitch hardware (done above)
+ * 2) Determine which hubs are available to be widget masters
+ * 3) Discover which links are active from the xswitch
+ * 4) Assign xwidgets hanging off the xswitch to hubs
+ * 5) Initialize all xwidgets on the xswitch
+ */
+
+ volunteer_for_widgets(switchv, hubv);
+
+ /* If there's someone else on this crossbow, recognize him */
+ if (npdap->xbow_peer != INVALID_NASID) {
+ nodepda_t *peer_npdap = NODEPDA(nasid_to_cnodeid(npdap->xbow_peer));
+ peer_sema = &peer_npdap->xbow_sema;
+ volunteer_for_widgets(switchv, peer_npdap->node_vertex);
+ }
+
+ assign_widgets_to_volunteers(switchv, hubv);
+
+ /* Signal that we're done */
+ if (peer_sema) {
+ up(peer_sema);
+ }
+
+ }
+ else {
+ /* Wait 'til master is done assigning widgets. */
+ down(&npdap->xbow_sema);
+ }
+
+	/* Now both nodes can safely initialize widgets */
+ io_init_xswitch_widgets(switchv, cnodeid);
+
+ DBG("\nio_init_node: DONE INITIALIZED ALL I/O FOR CNODEID %d\n\n", cnodeid);
+}
+
+#include <asm/sn/ioerror_handling.h>
+
+/*
+ * Initialize all I/O devices. Starting closest to nodes, probe and
+ * initialize outward.
+ */
+void
+init_all_devices(void)
+{
+ cnodeid_t cnodeid, active;
+
+ active = 0;
+ for (cnodeid = 0; cnodeid < numionodes; cnodeid++) {
+ DBG("init_all_devices: Calling io_init_node() for cnode %d\n", cnodeid);
+ io_init_node(cnodeid);
+
+ DBG("init_all_devices: Done io_init_node() for cnode %d\n", cnodeid);
+ }
+
+ for (cnodeid = 0; cnodeid < numnodes; cnodeid++) {
+ /*
+ * Update information generated by IO init.
+ */
+ update_node_information(cnodeid);
+ }
+}
+
+static
+struct io_brick_map_s io_brick_tab[] = {
+
+/* PXbrick widget number to PCI bus number map */
+ { MODULE_PXBRICK, /* PXbrick type */
+ /* PCI Bus # Widget # */
+ { 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */
+ 0, /* 0x8 */
+ 0, /* 0x9 */
+ 0, 0, /* 0xa - 0xb */
+ 1, /* 0xc */
+ 5, /* 0xd */
+ 0, /* 0xe */
+ 3 /* 0xf */
+ }
+ },
+
+/* OPUSbrick widget number to PCI bus number map */
+ { MODULE_OPUSBRICK, /* OPUSbrick type */
+ /* PCI Bus # Widget # */
+ { 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */
+ 0, /* 0x8 */
+ 0, /* 0x9 */
+ 0, 0, /* 0xa - 0xb */
+ 0, /* 0xc */
+ 0, /* 0xd */
+ 0, /* 0xe */
+ 1 /* 0xf */
+ }
+ },
+
+/* IXbrick widget number to PCI bus number map */
+ { MODULE_IXBRICK, /* IXbrick type */
+ /* PCI Bus # Widget # */
+ { 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */
+ 0, /* 0x8 */
+ 0, /* 0x9 */
+ 0, 0, /* 0xa - 0xb */
+ 1, /* 0xc */
+ 5, /* 0xd */
+ 0, /* 0xe */
+ 3 /* 0xf */
+ }
+ },
+
+/* CG brick widget number to PCI bus number map */
+ { MODULE_CGBRICK, /* CG brick */
+ /* PCI Bus # Widget # */
+ { 0, 0, 0, 0, 0, 0, 0, 0, /* 0x0 - 0x7 */
+ 0, /* 0x8 */
+ 0, /* 0x9 */
+ 0, 1, /* 0xa - 0xb */
+ 0, /* 0xc */
+ 0, /* 0xd */
+ 0, /* 0xe */
+ 0 /* 0xf */
+ }
+ },
+};
+
+/*
+ * Use the brick's type to map a widget number to a meaningful int
+ */
+int
+io_brick_map_widget(int brick_type, int widget_num)
+{
+ int num_bricks, i;
+
+ /* Calculate number of bricks in table */
+ num_bricks = sizeof(io_brick_tab)/sizeof(io_brick_tab[0]);
+
+ /* Look for brick prefix in table */
+ for (i = 0; i < num_bricks; i++) {
+ if (brick_type == io_brick_tab[i].ibm_type)
+ return io_brick_tab[i].ibm_map_wid[widget_num];
+ }
+
+ return 0;
+
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/string.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/io.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/xtalk/xbow.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/module.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xswitch.h>
+#include <asm/sn/nodepda.h>
+#include <asm/sn/sn_cpuid.h>
+
+
+/* #define LDEBUG 1 */
+
+#ifdef LDEBUG
+#define DPRINTF printk
+#define printf printk
+#else
+#define DPRINTF(x...)
+#endif
+
+module_t *sn_modules[MODULE_MAX];
+int nummodules;
+
+#define SN00_SERIAL_FUDGE 0x3b1af409d513c2
+#define SN0_SERIAL_FUDGE 0x6e
+
+
+static void __init
+encode_str_serial(const char *src, char *dest)
+{
+ int i;
+
+ for (i = 0; i < MAX_SERIAL_NUM_SIZE; i++) {
+
+ dest[i] = src[MAX_SERIAL_NUM_SIZE/2 +
+ ((i%2) ? ((i/2 * -1) - 1) : (i/2))] +
+ SN0_SERIAL_FUDGE;
+ }
+}
+
+module_t * __init
+module_lookup(moduleid_t id)
+{
+ int i;
+
+ for (i = 0; i < nummodules; i++)
+ if (sn_modules[i]->id == id) {
+ DPRINTF("module_lookup: found m=0x%p\n", sn_modules[i]);
+ return sn_modules[i];
+ }
+
+ return NULL;
+}
+
+/*
+ * module_add_node
+ *
+ * The first time a new module number is seen, a module structure is
+ * inserted into the module list in order sorted by module number
+ * and the structure is initialized.
+ *
+ * The node number is added to the list of nodes in the module.
+ */
+static module_t * __init
+module_add_node(geoid_t geoid, cnodeid_t cnodeid)
+{
+ module_t *m;
+ int i;
+ char buffer[16];
+ moduleid_t moduleid;
+ slabid_t slab_number;
+
+ memset(buffer, 0, 16);
+ moduleid = geo_module(geoid);
+ format_module_id(buffer, moduleid, MODULE_FORMAT_BRIEF);
+ DPRINTF("module_add_node: moduleid=%s node=%d\n", buffer, cnodeid);
+
+ if ((m = module_lookup(moduleid)) == 0) {
+ m = kmalloc(sizeof (module_t), GFP_KERNEL);
+ ASSERT_ALWAYS(m);
+ memset(m, 0 , sizeof(module_t));
+
+ for (slab_number = 0; slab_number <= MAX_SLABS; slab_number++) {
+ m->nodes[slab_number] = -1;
+ }
+
+ m->id = moduleid;
+ spin_lock_init(&m->lock);
+
+ /* Insert in sorted order by module number */
+
+ for (i = nummodules; i > 0 && sn_modules[i - 1]->id > moduleid; i--)
+ sn_modules[i] = sn_modules[i - 1];
+
+ sn_modules[i] = m;
+ nummodules++;
+ }
+
+ /*
+ * Save this information in the correct slab number of the node in the
+ * module.
+ */
+ slab_number = geo_slab(geoid);
+ DPRINTF("slab number added 0x%x\n", slab_number);
+
+ if (m->nodes[slab_number] != -1) {
+ printk("module_add_node .. slab previously found\n");
+ return NULL;
+ }
+
+ m->nodes[slab_number] = cnodeid;
+ m->geoid[slab_number] = geoid;
+
+ return m;
+}
+
+static int __init
+module_probe_snum(module_t *m, nasid_t host_nasid, nasid_t nasid)
+{
+ lboard_t *board;
+ klmod_serial_num_t *comp;
+ char serial_number[16];
+
+ /*
+ * record brick serial number
+ */
+ board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(host_nasid), host_nasid, KLTYPE_SNIA);
+
+ if (! board || KL_CONFIG_DUPLICATE_BOARD(board))
+ {
+ return 0;
+ }
+
+ board_serial_number_get( board, serial_number );
+ if( serial_number[0] != '\0' ) {
+ encode_str_serial( serial_number, m->snum.snum_str );
+ m->snum_valid = 1;
+ }
+
+ board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid),
+ nasid, KLTYPE_IOBRICK_XBOW);
+
+ if (! board || KL_CONFIG_DUPLICATE_BOARD(board))
+ return 0;
+
+ comp = GET_SNUM_COMP(board);
+
+ if (comp) {
+ if (comp->snum.snum_str[0] != '\0') {
+ memcpy(m->sys_snum, comp->snum.snum_str,
+ MAX_SERIAL_NUM_SIZE);
+ m->sys_snum_valid = 1;
+ }
+ }
+
+ if (m->sys_snum_valid)
+ return 1;
+ else {
+ DPRINTF("Invalid serial number for module %d, "
+ "possible missing or invalid NIC.", m->id);
+ return 0;
+ }
+}
+
+void __init
+io_module_init(void)
+{
+ cnodeid_t node;
+ lboard_t *board;
+ nasid_t nasid;
+ int nserial;
+ module_t *m;
+ extern int numionodes;
+
+ DPRINTF("*******module_init\n");
+
+ nserial = 0;
+
+ /*
+ * First pass just scan for compute node boards KLTYPE_SNIA.
+ * We do not support memoryless compute nodes.
+ */
+ for (node = 0; node < numnodes; node++) {
+ nasid = cnodeid_to_nasid(node);
+ board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid), nasid, KLTYPE_SNIA);
+ ASSERT(board);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, NULL, NULL, "Found Shub lboard 0x%lx nasid 0x%x cnode 0x%x \n", (unsigned long)board, (int)nasid, (int)node);
+
+ m = module_add_node(board->brd_geoid, node);
+ if (! m->snum_valid && module_probe_snum(m, nasid, nasid))
+ nserial++;
+ }
+
+ /*
+	 * Second scan: look for headless/memless boards hosted by compute nodes.
+ */
+ for (node = numnodes; node < numionodes; node++) {
+ nasid_t nasid;
+ char serial_number[16];
+
+ nasid = cnodeid_to_nasid(node);
+ board = find_lboard_nasid((lboard_t *) KL_CONFIG_INFO(nasid),
+ nasid, KLTYPE_SNIA);
+ ASSERT(board);
+
+ HWGRAPH_DEBUG(__FILE__, __FUNCTION__, __LINE__, NULL, NULL, "Found headless/memless lboard 0x%lx node %d nasid %d cnode %d\n", (unsigned long)board, node, (int)nasid, (int)node);
+
+ m = module_add_node(board->brd_geoid, node);
+
+ /*
+ * Get and initialize the serial number.
+ */
+ board_serial_number_get( board, serial_number );
+ if( serial_number[0] != '\0' ) {
+ encode_str_serial( serial_number, m->snum.snum_str );
+ m->snum_valid = 1;
+ nserial++;
+ }
+ }
+}
--- /dev/null
+# arch/ia64/sn/io/sn2/pcibr/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 specific pci bridge routines.
+#
+
+obj-y += pcibr_ate.o pcibr_config.o \
+ pcibr_dvr.o pcibr_hints.o \
+ pcibr_intr.o pcibr_rrb.o \
+ pcibr_slot.o pcibr_error.o \
+ pcibr_reg.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+/*
+ * functions
+ */
+int pcibr_ate_alloc(pcibr_soft_t, int, struct resource *);
+void pcibr_ate_free(pcibr_soft_t, int, int, struct resource *);
+bridge_ate_t pcibr_flags_to_ate(pcibr_soft_t, unsigned);
+bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int);
+void ate_write(pcibr_soft_t, int, int, bridge_ate_t);
+
+int pcibr_invalidate_ate; /* by default don't invalidate ATE on free */
+
+/*
+ * Allocate "count" contiguous Bridge Address Translation Entries
+ * on the specified bridge to be used for PCI to XTALK mappings.
+ * Indices in the rm map range from 1..num_entries. Indices returned
+ * to the caller range from 0..num_entries-1.
+ *
+ * Return the start index on success, -1 on failure.
+ */
+int
+pcibr_ate_alloc(pcibr_soft_t pcibr_soft, int count, struct resource *res)
+{
+ int status = 0;
+ unsigned long flag;
+
+ memset(res, 0, sizeof(struct resource));
+ flag = pcibr_lock(pcibr_soft);
+ status = allocate_resource( &pcibr_soft->bs_int_ate_resource, res,
+ count, pcibr_soft->bs_int_ate_resource.start,
+ pcibr_soft->bs_int_ate_resource.end, 1,
+ NULL, NULL);
+ if (status) {
+ /* Failed to allocate */
+ pcibr_unlock(pcibr_soft, flag);
+ return -1;
+ }
+
+ /* Save the resource for freeing */
+ pcibr_unlock(pcibr_soft, flag);
+
+ return res->start;
+}
+
+void
+pcibr_ate_free(pcibr_soft_t pcibr_soft, int index, int count, struct resource *res)
+{
+
+ bridge_ate_t ate;
+ int status = 0;
+ unsigned long flags;
+
+ if (pcibr_invalidate_ate) {
+ /* For debugging purposes, clear the valid bit in the ATE */
+ ate = *pcibr_ate_addr(pcibr_soft, index);
+ ate_write(pcibr_soft, index, count, (ate & ~ATE_V));
+ }
+
+ flags = pcibr_lock(pcibr_soft);
+ status = release_resource(res);
+ pcibr_unlock(pcibr_soft, flags);
+ if (status)
+ BUG(); /* Ouch .. */
+
+}
+
+/*
+ * Convert PCI-generic software flags and Bridge-specific software flags
+ * into Bridge-specific Address Translation Entry attribute bits.
+ */
+bridge_ate_t
+pcibr_flags_to_ate(pcibr_soft_t pcibr_soft, unsigned flags)
+{
+ bridge_ate_t attributes;
+
+ /* default if nothing specified:
+ * NOBARRIER
+ * NOPREFETCH
+ * NOPRECISE
+ * COHERENT
+ * Plus the valid bit
+ */
+ attributes = ATE_CO | ATE_V;
+
+ /* Generic macro flags
+ */
+ if (flags & PCIIO_DMA_DATA) { /* standard data channel */
+ attributes &= ~ATE_BAR; /* no barrier */
+ attributes |= ATE_PREF; /* prefetch on */
+ }
+ if (flags & PCIIO_DMA_CMD) { /* standard command channel */
+ attributes |= ATE_BAR; /* barrier bit on */
+ attributes &= ~ATE_PREF; /* disable prefetch */
+ }
+ /* Generic detail flags
+ */
+ if (flags & PCIIO_PREFETCH)
+ attributes |= ATE_PREF;
+ if (flags & PCIIO_NOPREFETCH)
+ attributes &= ~ATE_PREF;
+
+ /* Provider-specific flags
+ */
+ if (flags & PCIBR_BARRIER)
+ attributes |= ATE_BAR;
+ if (flags & PCIBR_NOBARRIER)
+ attributes &= ~ATE_BAR;
+
+ if (flags & PCIBR_PREFETCH)
+ attributes |= ATE_PREF;
+ if (flags & PCIBR_NOPREFETCH)
+ attributes &= ~ATE_PREF;
+
+ if (flags & PCIBR_PRECISE)
+ attributes |= ATE_PREC;
+ if (flags & PCIBR_NOPRECISE)
+ attributes &= ~ATE_PREC;
+
+ /* In PCI-X mode, Prefetch & Precise not supported */
+ if (IS_PCIX(pcibr_soft)) {
+ attributes &= ~(ATE_PREC | ATE_PREF);
+ }
+
+ return (attributes);
+}
+
+/*
+ * Setup an Address Translation Entry as specified. Use either the Bridge
+ * internal maps or the external map RAM, as appropriate.
+ */
+bridge_ate_p
+pcibr_ate_addr(pcibr_soft_t pcibr_soft,
+ int ate_index)
+{
+ if (ate_index < pcibr_soft->bs_int_ate_size) {
+ return (pcireg_int_ate_addr(pcibr_soft, ate_index));
+ } else {
+		printk(KERN_WARNING "pcibr_ate_addr(): INVALID ate_index 0x%x\n", ate_index);
+ return (bridge_ate_p)0;
+ }
+}
+
+/*
+ * Write the ATE.
+ */
+void
+ate_write(pcibr_soft_t pcibr_soft, int ate_index, int count, bridge_ate_t ate)
+{
+ while (count-- > 0) {
+ if (ate_index < pcibr_soft->bs_int_ate_size) {
+ pcireg_int_ate_set(pcibr_soft, ate_index, ate);
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP, pcibr_soft->bs_vhdl,
+ "ate_write(): ate_index=0x%x, ate=0x%lx\n",
+ ate_index, (uint64_t)ate));
+ } else {
+			printk(KERN_WARNING "ate_write(): INVALID ate_index 0x%x\n", ate_index);
+ return;
+ }
+ ate_index++;
+ ate += IOPGSIZE;
+ }
+
+ pcireg_tflush_get(pcibr_soft); /* wait until Bridge PIO complete */
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+extern pcibr_info_t pcibr_info_get(vertex_hdl_t);
+
+uint64_t pcibr_config_get(vertex_hdl_t, unsigned, unsigned);
+uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned);
+void pcibr_config_set(vertex_hdl_t, unsigned, unsigned, uint64_t);
+void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t);
+
+/*
+ * fancy snia bit twiddling....
+ */
+#define CBP(b,r) (((volatile uint8_t *) b)[(r)])
+#define CSP(b,r) (((volatile uint16_t *) b)[((r)/2)])
+#define CWP(b,r) (((volatile uint32_t *) b)[(r)/4])
+
+/*
+ * Return a config space address for given slot / func / offset. Note the
+ * returned pointer is a 32bit word (ie. cfg_p) aligned pointer pointing to
+ * the 32bit word that contains the "offset" byte.
+ */
+cfg_p
+pcibr_func_config_addr(pcibr_soft_t soft, pciio_bus_t bus, pciio_slot_t slot,
+ pciio_function_t func, int offset)
+{
+ /*
+ * Type 1 config space
+ */
+ if (bus > 0) {
+ pcireg_type1_cntr_set(soft, ((bus << 16) | (slot << 11)));
+ return (pcireg_type1_cfg_addr(soft, func, offset));
+ }
+
+ /*
+ * Type 0 config space
+ */
+ return (pcireg_type0_cfg_addr(soft, slot, func, offset));
+}
+
+/*
+ * Return config space address for given slot / offset. Note the returned
+ * pointer is a 32-bit word (i.e. cfg_p) aligned pointer pointing to the
+ * 32-bit word that contains the "offset" byte.
+ */
+cfg_p
+pcibr_slot_config_addr(pcibr_soft_t soft, pciio_slot_t slot, int offset)
+{
+ return pcibr_func_config_addr(soft, 0, slot, 0, offset);
+}
+
+/*
+ * Set config space data for given slot / func / offset
+ */
+void
+pcibr_func_config_set(pcibr_soft_t soft, pciio_slot_t slot,
+ pciio_function_t func, int offset, unsigned val)
+{
+ cfg_p cfg_base;
+
+ cfg_base = pcibr_func_config_addr(soft, 0, slot, func, 0);
+ do_pcibr_config_set(cfg_base, offset, sizeof(unsigned), val);
+}
+
+int pcibr_config_debug = 0;
+
+cfg_p
+pcibr_config_addr(vertex_hdl_t conn,
+ unsigned reg)
+{
+ pcibr_info_t pcibr_info;
+ pciio_bus_t pciio_bus;
+ pciio_slot_t pciio_slot;
+ pciio_function_t pciio_func;
+ cfg_p cfgbase = (cfg_p)0;
+ pciio_info_t pciio_info;
+
+ pciio_info = pciio_info_get(conn);
+ pcibr_info = pcibr_info_get(conn);
+
+ /*
+ * Determine the PCI bus/slot/func to generate a config address for.
+ */
+
+ if (pciio_info_type1_get(pciio_info)) {
+ /*
+ * Conn is a vhdl which uses TYPE 1 addressing explicitly passed
+ * in reg.
+ */
+ pciio_bus = PCI_TYPE1_BUS(reg);
+ pciio_slot = PCI_TYPE1_SLOT(reg);
+ pciio_func = PCI_TYPE1_FUNC(reg);
+
+ ASSERT(pciio_bus != 0);
+ } else {
+ /*
+ * Conn is directly connected to the host bus. PCI bus number is
+ * hardcoded to 0 (even though it may have a logical bus number != 0)
+ * and slot/function are derived from the pcibr_info_t associated
+ * with the device.
+ */
+ pciio_bus = 0;
+
+ pciio_slot = PCIBR_INFO_SLOT_GET_INT(pcibr_info);
+ if (pciio_slot == PCIIO_SLOT_NONE)
+ pciio_slot = PCI_TYPE1_SLOT(reg);
+
+ pciio_func = pcibr_info->f_func;
+ if (pciio_func == PCIIO_FUNC_NONE)
+ pciio_func = PCI_TYPE1_FUNC(reg);
+ }
+
+ cfgbase = pcibr_func_config_addr((pcibr_soft_t) pcibr_info->f_mfast,
+ pciio_bus, pciio_slot, pciio_func, 0);
+
+ return cfgbase;
+}
+
+uint64_t
+pcibr_config_get(vertex_hdl_t conn,
+ unsigned reg,
+ unsigned size)
+{
+ return do_pcibr_config_get(pcibr_config_addr(conn, reg),
+ PCI_TYPE1_REG(reg), size);
+}
+
+uint64_t
+do_pcibr_config_get(cfg_p cfgbase,
+ unsigned reg,
+ unsigned size)
+{
+ unsigned value;
+
+ value = CWP(cfgbase, reg);
+ if (reg & 3)
+ value >>= 8 * (reg & 3);
+ if (size < 4)
+ value &= (1 << (8 * size)) - 1;
+ return value;
+}
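The sub-word extraction in do_pcibr_config_get() can be illustrated with a standalone sketch (not part of the driver): pull a 1-, 2-, or 4-byte field out of the aligned 32-bit config word that contains it. A little-endian byte order within the word is assumed here.

```c
#include <assert.h>
#include <stdint.h>

/* Standalone sketch of the extraction logic in do_pcibr_config_get():
 * the bridge hands back the whole aligned 32-bit word; the target
 * byte(s) are shifted down and masked to the requested width. */
static uint64_t cfg_extract(uint32_t word, unsigned reg, unsigned size)
{
	uint64_t value = word;

	if (reg & 3)			/* shift the target byte(s) down */
		value >>= 8 * (reg & 3);
	if (size < 4)			/* mask off bytes beyond the field */
		value &= (1u << (8 * size)) - 1;
	return value;
}
```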
+
+void
+pcibr_config_set(vertex_hdl_t conn,
+ unsigned reg,
+ unsigned size,
+ uint64_t value)
+{
+ do_pcibr_config_set(pcibr_config_addr(conn, reg),
+ PCI_TYPE1_REG(reg), size, value);
+}
+
+void
+do_pcibr_config_set(cfg_p cfgbase,
+ unsigned reg,
+ unsigned size,
+ uint64_t value)
+{
+ switch (size) {
+ case 1:
+ CBP(cfgbase, reg) = value;
+ break;
+ case 2:
+ if (reg & 1) {
+ CBP(cfgbase, reg) = value;
+ CBP(cfgbase, reg + 1) = value >> 8;
+ } else
+ CSP(cfgbase, reg) = value;
+ break;
+ case 3:
+ if (reg & 1) {
+ CBP(cfgbase, reg) = value;
+ CSP(cfgbase, (reg + 1)) = value >> 8;
+ } else {
+ CSP(cfgbase, reg) = value;
+ CBP(cfgbase, reg + 2) = value >> 16;
+ }
+ break;
+ case 4:
+ CWP(cfgbase, reg) = value;
+ break;
+ }
+}
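The unaligned-write splitting in do_pcibr_config_set() can be sketched in isolation (hypothetical helper, not driver code): a 16-bit store at an odd offset becomes two byte stores, since the bridge only supports naturally aligned config-space accesses. The aligned branch assumes a little-endian host, matching the byte-store branch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the size-2 case of do_pcibr_config_set(): split an odd-offset
 * 16-bit store into two byte stores, otherwise store it whole. */
static void cfg_set16(uint8_t *base, unsigned reg, uint16_t value)
{
	if (reg & 1) {				/* odd offset: split */
		base[reg]     = value & 0xff;
		base[reg + 1] = value >> 8;
	} else {				/* aligned: single store */
		memcpy(&base[reg], &value, 2);	/* stands in for CSP() */
	}
}
```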
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <linux/string.h>
+#include <linux/interrupt.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+#include <asm/sn/prio.h>
+#include <asm/sn/sn_private.h>
+
+/*
+ * global variables to toggle the different levels of pcibr debugging.
+ * -pcibr_debug_mask is the mask of the different types of debugging
+ * you want to enable. See sys/PCI/pcibr_private.h
+ * -pcibr_debug_module is the module you want to trace. By default
+ * all modules are traced. The format is something like "001c10".
+ * -pcibr_debug_widget is the widget you want to trace. For TIO
+ * based bricks use the corelet id.
+ * -pcibr_debug_slot is the pci slot you want to trace.
+ */
+uint32_t pcibr_debug_mask; /* 0x00000000 to disable */
+static char *pcibr_debug_module = "all"; /* 'all' for all modules */
+static int pcibr_debug_widget = -1; /* '-1' for all widgets */
+static int pcibr_debug_slot = -1; /* '-1' for all slots */
+
+
+#if PCIBR_SOFT_LIST
+pcibr_list_p pcibr_list;
+#endif
+
+extern char *pci_space[];
+
+/* =====================================================================
+ * Function Table of Contents
+ *
+ * The order of functions in this file has stopped
+ * making much sense. We might want to take a look
+ * at it some time and bring back some sanity, or
+ * perhaps bust this file into smaller chunks.
+ */
+
+extern void do_pcibr_rrb_free_all(pcibr_soft_t, pciio_slot_t);
+extern void do_pcibr_rrb_autoalloc(pcibr_soft_t, int, int, int);
+extern void pcibr_rrb_alloc_more(pcibr_soft_t pcibr_soft, int slot,
+ int vchan, int more_rrbs);
+
+extern int pcibr_wrb_flush(vertex_hdl_t);
+extern int pcibr_rrb_alloc(vertex_hdl_t, int *, int *);
+
+extern void pcibr_rrb_flush(vertex_hdl_t);
+
+static int pcibr_try_set_device(pcibr_soft_t, pciio_slot_t, unsigned, uint64_t);
+void pcibr_release_device(pcibr_soft_t, pciio_slot_t, uint64_t);
+
+extern iopaddr_t pcibr_bus_addr_alloc(pcibr_soft_t, pciio_win_info_t,
+ pciio_space_t, int, int, int);
+extern int hwgraph_vertex_name_get(vertex_hdl_t vhdl, char *buf,
+ uint buflen);
+
+int pcibr_detach(vertex_hdl_t);
+void pcibr_directmap_init(pcibr_soft_t);
+int pcibr_pcix_rbars_calc(pcibr_soft_t);
+extern int pcibr_ate_alloc(pcibr_soft_t, int, struct resource *);
+extern void pcibr_ate_free(pcibr_soft_t, int, int, struct resource *);
+extern pciio_dmamap_t get_free_pciio_dmamap(vertex_hdl_t);
+extern void free_pciio_dmamap(pcibr_dmamap_t);
+extern int pcibr_widget_to_bus(vertex_hdl_t pcibr_vhdl);
+
+extern void ate_write(pcibr_soft_t, int, int, bridge_ate_t);
+
+pcibr_info_t pcibr_info_get(vertex_hdl_t);
+
+static iopaddr_t pcibr_addr_pci_to_xio(vertex_hdl_t, pciio_slot_t, pciio_space_t, iopaddr_t, size_t, unsigned);
+
+pcibr_piomap_t pcibr_piomap_alloc(vertex_hdl_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, size_t, unsigned);
+void pcibr_piomap_free(pcibr_piomap_t);
+caddr_t pcibr_piomap_addr(pcibr_piomap_t, iopaddr_t, size_t);
+void pcibr_piomap_done(pcibr_piomap_t);
+caddr_t pcibr_piotrans_addr(vertex_hdl_t, device_desc_t, pciio_space_t, iopaddr_t, size_t, unsigned);
+iopaddr_t pcibr_piospace_alloc(vertex_hdl_t, device_desc_t, pciio_space_t, size_t, size_t);
+void pcibr_piospace_free(vertex_hdl_t, pciio_space_t, iopaddr_t, size_t);
+
+static iopaddr_t pcibr_flags_to_d64(unsigned, pcibr_soft_t);
+extern bridge_ate_t pcibr_flags_to_ate(pcibr_soft_t, unsigned);
+
+pcibr_dmamap_t pcibr_dmamap_alloc(vertex_hdl_t, device_desc_t, size_t, unsigned);
+void pcibr_dmamap_free(pcibr_dmamap_t);
+extern bridge_ate_p pcibr_ate_addr(pcibr_soft_t, int);
+static iopaddr_t pcibr_addr_xio_to_pci(pcibr_soft_t, iopaddr_t, size_t);
+iopaddr_t pcibr_dmamap_addr(pcibr_dmamap_t, paddr_t, size_t);
+void pcibr_dmamap_done(pcibr_dmamap_t);
+cnodeid_t pcibr_get_dmatrans_node(vertex_hdl_t);
+iopaddr_t pcibr_dmatrans_addr(vertex_hdl_t, device_desc_t, paddr_t, size_t, unsigned);
+void pcibr_dmamap_drain(pcibr_dmamap_t);
+void pcibr_dmaaddr_drain(vertex_hdl_t, paddr_t, size_t);
+iopaddr_t pcibr_dmamap_pciaddr_get(pcibr_dmamap_t);
+
+void pcibr_provider_startup(vertex_hdl_t);
+void pcibr_provider_shutdown(vertex_hdl_t);
+
+int pcibr_reset(vertex_hdl_t);
+pciio_endian_t pcibr_endian_set(vertex_hdl_t, pciio_endian_t, pciio_endian_t);
+int pcibr_device_flags_set(vertex_hdl_t, pcibr_device_flags_t);
+
+extern int pcibr_slot_info_free(vertex_hdl_t,pciio_slot_t);
+extern int pcibr_slot_detach(vertex_hdl_t, pciio_slot_t, int,
+ char *, int *);
+
+pciio_businfo_t pcibr_businfo_get(vertex_hdl_t);
+
+/* =====================================================================
+ * Device(x) register management
+ */
+
+/* pcibr_try_set_device: attempt to modify Device(x)
+ * for the specified slot on the specified bridge
+ * as requested in flags, limited to the specified
+ * bits. Returns which BRIDGE bits were in conflict,
+ * or ZERO if everything went OK.
+ *
+ * Caller MUST hold pcibr_lock when calling this function.
+ */
+static int
+pcibr_try_set_device(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ unsigned flags,
+ uint64_t mask)
+{
+ pcibr_soft_slot_t slotp;
+ uint64_t old;
+ uint64_t new;
+ uint64_t chg;
+ uint64_t bad;
+ uint64_t badpmu;
+ uint64_t badd32;
+ uint64_t badd64;
+ uint64_t fix;
+ unsigned long s;
+
+ slotp = &pcibr_soft->bs_slot[slot];
+
+ s = pcibr_lock(pcibr_soft);
+
+ old = slotp->bss_device;
+
+ /* figure out what the desired
+ * Device(x) bits are based on
+ * the flags specified.
+ */
+
+ new = old;
+
+ /* Currently, we inherit anything that
+ * the new caller has not specified in
+ * one way or another, unless we take
+ * action here to not inherit.
+ *
+ * This is needed for the "swap" stuff,
+ * since it could have been set via
+ * pcibr_endian_set -- although note that
+ * any explicit PCIBR_BYTE_STREAM or
+ * PCIBR_WORD_VALUES will freely override
+ * the effect of that call (and vice
+ * versa, no protection either way).
+ *
+ * I want to get rid of pcibr_endian_set
+ * in favor of tracking DMA endianness
+ * using the flags specified when DMA
+ * channels are created.
+ */
+
+#define BRIDGE_DEV_WRGA_BITS (BRIDGE_DEV_PMU_WRGA_EN | BRIDGE_DEV_DIR_WRGA_EN)
+#define BRIDGE_DEV_SWAP_BITS (BRIDGE_DEV_SWAP_PMU | BRIDGE_DEV_SWAP_DIR)
+
+ /* Do not use Barrier, Write Gather,
+ * or Prefetch unless asked.
+ * Leave everything else as it
+ * was from the last time.
+ */
+ new = new
+ & ~BRIDGE_DEV_BARRIER
+ & ~BRIDGE_DEV_WRGA_BITS
+ & ~BRIDGE_DEV_PREF
+ ;
+
+ /* Generic macro flags
+ */
+ if (flags & PCIIO_DMA_DATA) {
+ new = (new
+ & ~BRIDGE_DEV_BARRIER) /* barrier off */
+ | BRIDGE_DEV_PREF; /* prefetch on */
+
+ }
+ if (flags & PCIIO_DMA_CMD) {
+ new = ((new
+ & ~BRIDGE_DEV_PREF) /* prefetch off */
+ & ~BRIDGE_DEV_WRGA_BITS) /* write gather off */
+ | BRIDGE_DEV_BARRIER; /* barrier on */
+ }
+ /* Generic detail flags
+ */
+ if (flags & PCIIO_WRITE_GATHER)
+ new |= BRIDGE_DEV_WRGA_BITS;
+ if (flags & PCIIO_NOWRITE_GATHER)
+ new &= ~BRIDGE_DEV_WRGA_BITS;
+
+ if (flags & PCIIO_PREFETCH)
+ new |= BRIDGE_DEV_PREF;
+ if (flags & PCIIO_NOPREFETCH)
+ new &= ~BRIDGE_DEV_PREF;
+
+ if (flags & PCIBR_WRITE_GATHER)
+ new |= BRIDGE_DEV_WRGA_BITS;
+ if (flags & PCIBR_NOWRITE_GATHER)
+ new &= ~BRIDGE_DEV_WRGA_BITS;
+
+ if (flags & PCIIO_BYTE_STREAM)
+ new |= BRIDGE_DEV_SWAP_DIR;
+ if (flags & PCIIO_WORD_VALUES)
+ new &= ~BRIDGE_DEV_SWAP_DIR;
+
+ /* Provider-specific flags
+ */
+ if (flags & PCIBR_PREFETCH)
+ new |= BRIDGE_DEV_PREF;
+ if (flags & PCIBR_NOPREFETCH)
+ new &= ~BRIDGE_DEV_PREF;
+
+ if (flags & PCIBR_PRECISE)
+ new |= BRIDGE_DEV_PRECISE;
+ if (flags & PCIBR_NOPRECISE)
+ new &= ~BRIDGE_DEV_PRECISE;
+
+ if (flags & PCIBR_BARRIER)
+ new |= BRIDGE_DEV_BARRIER;
+ if (flags & PCIBR_NOBARRIER)
+ new &= ~BRIDGE_DEV_BARRIER;
+
+ if (flags & PCIBR_64BIT)
+ new |= BRIDGE_DEV_DEV_SIZE;
+ if (flags & PCIBR_NO64BIT)
+ new &= ~BRIDGE_DEV_DEV_SIZE;
+
+ /*
+ * PIC BRINGUP WAR (PV# 855271):
+ * Allow setting BRIDGE_DEV_VIRTUAL_EN on PIC iff we're a 64-bit
+ * device. The bit is only intended for 64-bit devices and, on
+ * PIC, can cause problems for 32-bit devices.
+ */
+ if (mask == BRIDGE_DEV_D64_BITS &&
+ PCIBR_WAR_ENABLED(PV855271, pcibr_soft)) {
+ if (flags & PCIBR_VCHAN1) {
+ new |= BRIDGE_DEV_VIRTUAL_EN;
+ mask |= BRIDGE_DEV_VIRTUAL_EN;
+ }
+ }
+
+ /* PIC BRINGUP WAR (PV# 878674): Don't allow 64bit PIO accesses */
+ if ((flags & PCIBR_64BIT) &&
+ PCIBR_WAR_ENABLED(PV878674, pcibr_soft)) {
+ new &= ~(1ull << 22);
+ }
+
+ chg = old ^ new; /* what are we changing, */
+ chg &= mask; /* of the interesting bits */
+
+ if (chg) {
+
+ badd32 = slotp->bss_d32_uctr ? (BRIDGE_DEV_D32_BITS & chg) : 0;
+ badpmu = slotp->bss_pmu_uctr ? (XBRIDGE_DEV_PMU_BITS & chg) : 0;
+ badd64 = slotp->bss_d64_uctr ? (XBRIDGE_DEV_D64_BITS & chg) : 0;
+ bad = badpmu | badd32 | badd64;
+
+ if (bad) {
+
+ /* some conflicts can be resolved by
+ * forcing the bit on. this may cause
+ * some performance degradation in
+ * the stream(s) that want the bit off,
+ * but the alternative is not allowing
+ * the new stream at all.
+ */
+ if ( (fix = bad & (BRIDGE_DEV_PRECISE |
+ BRIDGE_DEV_BARRIER)) ) {
+ bad &= ~fix;
+ /* don't change these bits if
+ * they are already set in "old"
+ */
+ chg &= ~(fix & old);
+ }
+ /* some conflicts can be resolved by
+ * forcing the bit off. this may cause
+ * some performance degradation in
+ * the stream(s) that want the bit on,
+ * but the alternative is not allowing
+ * the new stream at all.
+ */
+ if ( (fix = bad & (BRIDGE_DEV_WRGA_BITS |
+ BRIDGE_DEV_PREF)) ) {
+ bad &= ~fix;
+ /* don't change these bits if
+ * we wanted to turn them on.
+ */
+ chg &= ~(fix & new);
+ }
+ /* conflicts in other bits mean
+ * we can not establish this DMA
+ * channel while the other(s) are
+ * still present.
+ */
+ if (bad) {
+ pcibr_unlock(pcibr_soft, s);
+ PCIBR_DEBUG((PCIBR_DEBUG_DEVREG, pcibr_soft->bs_vhdl,
+ "pcibr_try_set_device: mod blocked by 0x%x\n", bad));
+ return bad;
+ }
+ }
+ }
+ if (mask == BRIDGE_DEV_PMU_BITS)
+ slotp->bss_pmu_uctr++;
+ if (mask == BRIDGE_DEV_D32_BITS)
+ slotp->bss_d32_uctr++;
+ if (mask == BRIDGE_DEV_D64_BITS)
+ slotp->bss_d64_uctr++;
+
+ /* the value we want to write is the
+ * original value, with the bits for
+ * our selected changes flipped, and
+ * with any disabled features turned off.
+ */
+ new = old ^ chg; /* only change what we want to change */
+
+ if (slotp->bss_device == new) {
+ pcibr_unlock(pcibr_soft, s);
+ return 0;
+ }
+
+ pcireg_device_set(pcibr_soft, slot, new);
+ slotp->bss_device = new;
+ pcireg_tflush_get(pcibr_soft); /* wait until Bridge PIO complete */
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DEVREG, pcibr_soft->bs_vhdl,
+ "pcibr_try_set_device: Device(%d): 0x%x\n", slot, new));
+ return 0;
+}
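The masked update at the heart of pcibr_try_set_device() (`chg = old ^ new; chg &= mask; new = old ^ chg`) can be shown as a minimal sketch: only bits selected by `mask` may move from the old value toward the wanted value, and everything else in the Device(x) register is left as it was.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the masked Device(x) update in pcibr_try_set_device():
 * flip only the interesting bits that actually differ. */
static uint64_t masked_update(uint64_t old, uint64_t want, uint64_t mask)
{
	uint64_t chg = (old ^ want) & mask;	/* differing bits we may change */

	return old ^ chg;			/* flip only those bits */
}
```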
+
+void
+pcibr_release_device(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ uint64_t mask)
+{
+ pcibr_soft_slot_t slotp;
+ unsigned long s;
+
+ slotp = &pcibr_soft->bs_slot[slot];
+
+ s = pcibr_lock(pcibr_soft);
+
+ if (mask == BRIDGE_DEV_PMU_BITS)
+ slotp->bss_pmu_uctr--;
+ if (mask == BRIDGE_DEV_D32_BITS)
+ slotp->bss_d32_uctr--;
+ if (mask == BRIDGE_DEV_D64_BITS)
+ slotp->bss_d64_uctr--;
+
+ pcibr_unlock(pcibr_soft, s);
+}
+
+
+/* =====================================================================
+ * Bridge (pcibr) "Device Driver" entry points
+ */
+
+
+static int
+pcibr_mmap(struct file * file, struct vm_area_struct * vma)
+{
+ vertex_hdl_t pcibr_vhdl = file->f_dentry->d_fsdata;
+ pcibr_soft_t pcibr_soft;
+ void *bridge;
+ unsigned long phys_addr;
+ int error = 0;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ bridge = pcibr_soft->bs_base;
+ phys_addr = (unsigned long)bridge & ~0xc000000000000000; /* Mask out the Uncache bits */
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_flags |= VM_RESERVED | VM_IO;
+ error = io_remap_page_range(vma, phys_addr, vma->vm_start,
+ vma->vm_end - vma->vm_start,
+ vma->vm_page_prot);
+ return error;
+}
+
+/*
+ * This is the file operation table for the pcibr driver.
+ * As each of the functions is implemented, put the
+ * appropriate function name below.
+ */
+static int pcibr_mmap(struct file * file, struct vm_area_struct * vma);
+struct file_operations pcibr_fops = {
+ .owner = THIS_MODULE,
+ .mmap = pcibr_mmap,
+};
+
+
+/* This is special case code used by grio. There are plans to make
+ * this a bit more general in the future, but till then this should
+ * be sufficient.
+ */
+pciio_slot_t
+pcibr_device_slot_get(vertex_hdl_t dev_vhdl)
+{
+ char devname[MAXDEVNAME];
+ vertex_hdl_t tdev;
+ pciio_info_t pciio_info;
+ pciio_slot_t slot = PCIIO_SLOT_NONE;
+
+ vertex_to_name(dev_vhdl, devname, MAXDEVNAME);
+
+ /* run back along the canonical path
+ * until we find a PCI connection point.
+ */
+ tdev = hwgraph_connectpt_get(dev_vhdl);
+ while (tdev != GRAPH_VERTEX_NONE) {
+ pciio_info = pciio_info_chk(tdev);
+ if (pciio_info) {
+ slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ break;
+ }
+ hwgraph_vertex_unref(tdev);
+ tdev = hwgraph_connectpt_get(tdev);
+ }
+ hwgraph_vertex_unref(tdev);
+
+ return slot;
+}
+
+pcibr_info_t
+pcibr_info_get(vertex_hdl_t vhdl)
+{
+ return (pcibr_info_t) pciio_info_get(vhdl);
+}
+
+pcibr_info_t
+pcibr_device_info_new(
+ pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ pciio_function_t rfunc,
+ pciio_vendor_id_t vendor,
+ pciio_device_id_t device)
+{
+ pcibr_info_t pcibr_info;
+ pciio_function_t func;
+ int ibit;
+
+ func = (rfunc == PCIIO_FUNC_NONE) ? 0 : rfunc;
+
+ /*
+ * Create a pciio_info_s for this device. pciio_device_info_new()
+ * will set the c_slot (which is supposed to represent the external
+ * slot (i.e. the slot number silk-screened on the back of the I/O
+ * brick)). So for PIC we need to adjust this "internal slot" num
+ * passed into us, into its external representation. See comment
+ * for the PCIBR_DEVICE_TO_SLOT macro for more information.
+ */
+ pcibr_info = kmalloc(sizeof (*(pcibr_info)), GFP_KERNEL);
+ if ( !pcibr_info ) {
+ return NULL;
+ }
+ memset(pcibr_info, 0, sizeof (*(pcibr_info)));
+
+ pciio_device_info_new(&pcibr_info->f_c, pcibr_soft->bs_vhdl,
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot),
+ rfunc, vendor, device);
+ pcibr_info->f_dev = slot;
+
+ /* Set PCI bus number */
+ pcibr_info->f_bus = pcibr_widget_to_bus(pcibr_soft->bs_vhdl);
+
+ if (slot != PCIIO_SLOT_NONE) {
+
+ /*
+ * Currently favored mapping from PCI
+ * slot number and INTA/B/C/D to Bridge
+ * PCI Interrupt Bit Number:
+ *
+ * SLOT A B C D
+ * 0 0 4 0 4
+ * 1 1 5 1 5
+ * 2 2 6 2 6
+ * 3 3 7 3 7
+ * 4 4 0 4 0
+ * 5 5 1 5 1
+ * 6 6 2 6 2
+ * 7 7 3 7 3
+ *
+ * XXX- allow pcibr_hints to override default
+ * XXX- allow ADMIN to override pcibr_hints
+ */
+ for (ibit = 0; ibit < 4; ++ibit)
+ pcibr_info->f_ibit[ibit] =
+ (slot + 4 * ibit) & 7;
+
+ /*
+ * Record the info in the sparse func info space.
+ */
+ if (func < pcibr_soft->bs_slot[slot].bss_ninfo)
+ pcibr_soft->bs_slot[slot].bss_infos[func] = pcibr_info;
+ }
+ return pcibr_info;
+}
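The slot/INTx-to-bridge-interrupt-bit mapping tabulated in the comment inside pcibr_device_info_new() reduces to one expression, shown here as a standalone sketch (hypothetical helper name):

```c
#include <assert.h>

/* Sketch of the mapping used in pcibr_device_info_new(): for a given
 * PCI slot and interrupt pin (0=INTA .. 3=INTD), compute the Bridge
 * PCI interrupt bit number, reproducing the table in the comment. */
static int slot_intx_to_ibit(int slot, int intx)
{
	return (slot + 4 * intx) & 7;
}
```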
+
+
+/*
+ * pcibr_device_unregister
+ * This frees up any hardware resources reserved for this PCI device
+ * and removes any PCI infrastructural information setup for it.
+ * This is usually used at the time of shutting down of the PCI card.
+ */
+int
+pcibr_device_unregister(vertex_hdl_t pconn_vhdl)
+{
+ pciio_info_t pciio_info;
+ vertex_hdl_t pcibr_vhdl;
+ pciio_slot_t slot;
+ pcibr_soft_t pcibr_soft;
+ int count_vchan0, count_vchan1;
+ unsigned long s;
+ int error_call;
+ int error = 0;
+
+ pciio_info = pciio_info_get(pconn_vhdl);
+
+ pcibr_vhdl = pciio_info_master_get(pciio_info);
+ slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ /* Clear all the hardware xtalk resources for this device */
+ xtalk_widgetdev_shutdown(pcibr_soft->bs_conn, slot);
+
+ /* Flush all the rrbs */
+ pcibr_rrb_flush(pconn_vhdl);
+
+ /*
+ * If the RRB configuration for this slot has changed, set it
+ * back to the boot-time default
+ */
+ if (pcibr_soft->bs_rrb_valid_dflt[slot][VCHAN0] >= 0) {
+
+ s = pcibr_lock(pcibr_soft);
+
+ pcibr_soft->bs_rrb_res[slot] = pcibr_soft->bs_rrb_res[slot] +
+ pcibr_soft->bs_rrb_valid[slot][VCHAN0] +
+ pcibr_soft->bs_rrb_valid[slot][VCHAN1] +
+ pcibr_soft->bs_rrb_valid[slot][VCHAN2] +
+ pcibr_soft->bs_rrb_valid[slot][VCHAN3];
+
+ /* Free the rrbs allocated to this slot, both the normal & virtual */
+ do_pcibr_rrb_free_all(pcibr_soft, slot);
+
+ count_vchan0 = pcibr_soft->bs_rrb_valid_dflt[slot][VCHAN0];
+ count_vchan1 = pcibr_soft->bs_rrb_valid_dflt[slot][VCHAN1];
+
+ pcibr_unlock(pcibr_soft, s);
+
+ pcibr_rrb_alloc(pconn_vhdl, &count_vchan0, &count_vchan1);
+
+ }
+
+ /* Flush the write buffers !! */
+ error_call = pcibr_wrb_flush(pconn_vhdl);
+
+ if (error_call)
+ error = error_call;
+
+ /* Clear the information specific to the slot */
+ error_call = pcibr_slot_info_free(pcibr_vhdl, slot);
+
+ if (error_call)
+ error = error_call;
+
+ return error;
+
+}
+
+/*
+ * pcibr_driver_reg_callback
+ * CDL will call this function for each device found in the PCI
+ * registry that matches the vendor/device IDs supported by
+ * the driver being registered. The device's connection vertex
+ * and the driver's attach function return status enable the
+ * slot's device status to be set.
+ */
+void
+pcibr_driver_reg_callback(vertex_hdl_t pconn_vhdl,
+ int key1, int key2, int error)
+{
+ pciio_info_t pciio_info;
+ pcibr_info_t pcibr_info;
+ vertex_hdl_t pcibr_vhdl;
+ pciio_slot_t slot;
+ pcibr_soft_t pcibr_soft;
+
+ /* Do not set slot status for vendor/device ID wildcard drivers */
+ if ((key1 == -1) || (key2 == -1))
+ return;
+
+ pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_info = pcibr_info_get(pconn_vhdl);
+
+ pcibr_vhdl = pciio_info_master_get(pciio_info);
+ slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pcibr_info->f_att_det_error = error;
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+
+ if (error) {
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT;
+ } else {
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT;
+ }
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+}
+
+/*
+ * pcibr_driver_unreg_callback
+ * CDL will call this function for each device found in the PCI
+ * registry that matches the vendor/device IDs supported by
+ * the driver being unregistered. The device's connection vertex
+ * and the driver's detach function return status enable the
+ * slot's device status to be set.
+ */
+void
+pcibr_driver_unreg_callback(vertex_hdl_t pconn_vhdl,
+ int key1, int key2, int error)
+{
+ pciio_info_t pciio_info;
+ pcibr_info_t pcibr_info;
+ vertex_hdl_t pcibr_vhdl;
+ pciio_slot_t slot;
+ pcibr_soft_t pcibr_soft;
+
+ /* Do not set slot status for vendor/device ID wildcard drivers */
+ if ((key1 == -1) || (key2 == -1))
+ return;
+
+ pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_info = pcibr_info_get(pconn_vhdl);
+
+ pcibr_vhdl = pciio_info_master_get(pciio_info);
+ slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pcibr_info->f_att_det_error = error;
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+
+ if (error) {
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT;
+ } else {
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT;
+ }
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+}
+
+/*
+ * pcibr_detach:
+ * Detach the bridge device from the hwgraph after cleaning out all the
+ * underlying vertices.
+ */
+
+int
+pcibr_detach(vertex_hdl_t xconn)
+{
+ pciio_slot_t slot;
+ vertex_hdl_t pcibr_vhdl;
+ pcibr_soft_t pcibr_soft;
+ unsigned long s;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DETACH, xconn, "pcibr_detach\n"));
+
+ /* Get the bridge vertex from its xtalk connection point */
+ if (hwgraph_traverse(xconn, EDGE_LBL_PCI, &pcibr_vhdl) != GRAPH_SUCCESS)
+ return 1;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ /* Disable the interrupts from the bridge */
+ s = pcibr_lock(pcibr_soft);
+ pcireg_intr_enable_set(pcibr_soft, 0);
+ pcibr_unlock(pcibr_soft, s);
+
+ /* Detach all the PCI devices talking to this bridge */
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ pcibr_slot_detach(pcibr_vhdl, slot, 0, (char *)NULL, (int *)NULL);
+ }
+
+ /* Unregister the no-slot connection point */
+ pciio_device_info_unregister(pcibr_vhdl,
+ &(pcibr_soft->bs_noslot_info->f_c));
+
+ kfree(pcibr_soft->bs_name);
+
+ /* Disconnect the error interrupt and free the xtalk resources
+ * associated with it.
+ */
+ xtalk_intr_disconnect(pcibr_soft->bsi_err_intr);
+ xtalk_intr_free(pcibr_soft->bsi_err_intr);
+
+ /* Clear the software state maintained by the bridge driver for this
+ * bridge.
+ */
+ kfree(pcibr_soft);
+
+ /* Remove the Bridge revision labelled info */
+ (void)hwgraph_info_remove_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, NULL);
+
+ return 0;
+}
+
+
+/*
+ * Set the Bridge's 32-bit PCI to XTalk Direct Map register to the most useful
+ * value we can determine. Note that we must use a single xid for all of:
+ * -direct-mapped 32-bit DMA accesses
+ * -direct-mapped 64-bit DMA accesses
+ * -DMA accesses through the PMU
+ * -interrupts
+ * This is the only way to guarantee that completion interrupts will reach a
+ * CPU after all DMA data has reached memory.
+ */
+void
+pcibr_directmap_init(pcibr_soft_t pcibr_soft)
+{
+ paddr_t paddr;
+ iopaddr_t xbase;
+ uint64_t diroff;
+ cnodeid_t cnodeid = 0; /* XXX: need an API to select the diroff node */
+ nasid_t nasid;
+
+ nasid = cnodeid_to_nasid(cnodeid);
+ paddr = NODE_OFFSET(nasid) + 0;
+
+ /* Assume that if we ask for a DMA mapping to zero the XIO host will
+ * transmute this into a request for the lowest hunk of memory.
+ */
+ xbase = xtalk_dmatrans_addr(pcibr_soft->bs_conn, 0, paddr, PAGE_SIZE, 0);
+
+ diroff = xbase >> BRIDGE_DIRMAP_OFF_ADDRSHFT;
+ pcireg_dirmap_diroff_set(pcibr_soft, diroff);
+ pcireg_dirmap_wid_set(pcibr_soft, pcibr_soft->bs_mxid);
+ pcibr_soft->bs_dir_xport = pcibr_soft->bs_mxid;
+ if (xbase == (512 << 20)) { /* 512Meg */
+ pcireg_dirmap_add512_set(pcibr_soft);
+ pcibr_soft->bs_dir_xbase = (512 << 20);
+ } else {
+ pcireg_dirmap_add512_clr(pcibr_soft);
+ pcibr_soft->bs_dir_xbase = diroff << BRIDGE_DIRMAP_OFF_ADDRSHFT;
+ }
+}
+
+
+int
+pcibr_asic_rev(vertex_hdl_t pconn_vhdl)
+{
+ vertex_hdl_t pcibr_vhdl;
+ int rc;
+ arbitrary_info_t ainfo;
+
+ if (GRAPH_SUCCESS !=
+ hwgraph_traverse(pconn_vhdl, EDGE_LBL_MASTER, &pcibr_vhdl))
+ return -1;
+
+ rc = hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &ainfo);
+
+ /*
+ * Any hwgraph function that returns a vertex handle will implicitly
+ * increment that vertex's reference count. The caller must explicitly
+ * decrement the vertex's reference count after the last reference to
+ * that vertex.
+ *
+ * Decrement reference count incremented by call to hwgraph_traverse().
+ *
+ */
+ hwgraph_vertex_unref(pcibr_vhdl);
+
+ if (rc != GRAPH_SUCCESS)
+ return -1;
+
+ return (int) ainfo;
+}
+
+/* =====================================================================
+ * PIO MANAGEMENT
+ */
+
+static iopaddr_t
+pcibr_addr_pci_to_xio(vertex_hdl_t pconn_vhdl,
+ pciio_slot_t slot,
+ pciio_space_t space,
+ iopaddr_t pci_addr,
+ size_t req_size,
+ unsigned flags)
+{
+ pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl);
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ unsigned bar; /* which BASE reg on device is decoding */
+ iopaddr_t xio_addr = XIO_NOWHERE;
+ iopaddr_t base = 0;
+ iopaddr_t limit = 0;
+
+ pciio_space_t wspace; /* which space device is decoding */
+ iopaddr_t wbase; /* base of device decode on PCI */
+ size_t wsize; /* size of device decode on PCI */
+
+ int try; /* DevIO(x) window scanning order control */
+ int maxtry, halftry;
+ int win; /* which DevIO(x) window is being used */
+ pciio_space_t mspace; /* target space for devio(x) register */
+ iopaddr_t mbase; /* base of devio(x) mapped area on PCI */
+ size_t msize; /* size of devio(x) mapped area on PCI */
+ size_t mmask; /* addr bits stored in Device(x) */
+
+ unsigned long s;
+
+ s = pcibr_lock(pcibr_soft);
+
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ slot = pcibr_soft->bs_slot[slot].host_slot;
+ pcibr_info = pcibr_soft->bs_slot[slot].bss_infos[0];
+
+ /*
+ * Special case for dual-slot pci devices such as ioc3 on IP27
+ * baseio. In these cases, pconn_vhdl should never be for a pci
+ * function on a subordinate PCI bus, so we can safely reset pciio_info
+ * to be the info struct embedded in pcibr_info. Failure to do this
+ * results in using a bogus pciio_info_t for calculations done later
+ * in this routine.
+ */
+
+ pciio_info = &pcibr_info->f_c;
+ }
+ if (space == PCIIO_SPACE_NONE)
+ goto done;
+
+ if (space == PCIIO_SPACE_CFG) {
+ /*
+ * Usually, the first mapping
+ * established to a PCI device
+ * is to its config space.
+ *
+ * In any case, we definitely
+ * do NOT need to worry about
+ * PCI BASE registers, and
+ * MUST NOT attempt to point
+ * the DevIO(x) window at
+ * this access ...
+ */
+ if (((flags & PCIIO_BYTE_STREAM) == 0) &&
+ ((pci_addr + req_size) <= BRIDGE_TYPE0_CFG_FUNC_OFF))
+ xio_addr = pci_addr + PCIBR_TYPE0_CFG_DEV(pcibr_soft, slot);
+
+ goto done;
+ }
+ if (space == PCIIO_SPACE_ROM) {
+ /* PIO to the Expansion Rom.
+ * Driver is responsible for
+ * enabling and disabling
+ * decodes properly.
+ */
+ wbase = pciio_info->c_rbase;
+ wsize = pciio_info->c_rsize;
+
+ /*
+ * While the driver should know better
+ * than to attempt to map more space
+ * than the device is decoding, he might
+ * do it; better to bail out here.
+ */
+ if ((pci_addr + req_size) > wsize)
+ goto done;
+
+ pci_addr += wbase;
+ space = PCIIO_SPACE_MEM;
+ }
+ /*
+ * reduce window mappings to raw
+ * space mappings (maybe allocating
+ * windows), and try for DevIO(x)
+ * usage (setting it if it is available).
+ */
+ bar = space - PCIIO_SPACE_WIN0;
+ if (bar < 6) {
+ wspace = pciio_info->c_window[bar].w_space;
+ if (wspace == PCIIO_SPACE_NONE)
+ goto done;
+
+ /* get PCI base and size */
+ wbase = pciio_info->c_window[bar].w_base;
+ wsize = pciio_info->c_window[bar].w_size;
+
+ /*
+ * While the driver should know better
+ * than to attempt to map more space
+ * than the device is decoding, he might
+ * do it; better to bail out here.
+ */
+ if ((pci_addr + req_size) > wsize)
+ goto done;
+
+ /* shift from window relative to
+ * decoded space relative.
+ */
+ pci_addr += wbase;
+ space = wspace;
+ } else
+ bar = -1;
+
+ /* Scan all the DevIO(x) windows twice looking for one
+ * that can satisfy our request. The first time through,
+ * only look at assigned windows; the second time, also
+ * look at PCIIO_SPACE_NONE windows. Arrange the order
+ * so we always look at our own window first.
+ *
+ * We will not attempt to satisfy a single request
+ * by concatenating multiple windows.
+ */
+ maxtry = PCIBR_NUM_SLOTS(pcibr_soft) * 2;
+ halftry = PCIBR_NUM_SLOTS(pcibr_soft) - 1;
+ for (try = 0; try < maxtry; ++try) {
+ uint64_t devreg;
+ unsigned offset;
+
+ /* calculate win based on slot, attempt, and max possible
+ devices on bus */
+ win = (try + slot) % PCIBR_NUM_SLOTS(pcibr_soft);
+
+ /* If this DevIO(x) mapping area can provide
+ * a mapping to this address, use it.
+ */
+ msize = (win < 2) ? 0x200000 : 0x100000;
+ mmask = -msize;
+ if (space != PCIIO_SPACE_IO)
+ mmask &= 0x3FFFFFFF;
+
+ offset = pci_addr & (msize - 1);
+
+ /* If this window can't possibly handle that request,
+ * go on to the next window.
+ */
+ if (((pci_addr & (msize - 1)) + req_size) > msize)
+ continue;
+
+ devreg = pcibr_soft->bs_slot[win].bss_device;
+
+ /* Is this window "nailed down"?
+ * If not, maybe we can use it.
+ * (only check this the second time through)
+ */
+ mspace = pcibr_soft->bs_slot[win].bss_devio.bssd_space;
+ if ((try > halftry) && (mspace == PCIIO_SPACE_NONE)) {
+
+ /* If this is the primary DevIO(x) window
+ * for some other device, skip it.
+ */
+ if ((win != slot) &&
+ (PCIIO_VENDOR_ID_NONE !=
+ pcibr_soft->bs_slot[win].bss_vendor_id))
+ continue;
+
+ /* It's a free window, and we fit in it.
+ * Set up Device(win) to our taste.
+ */
+ mbase = pci_addr & mmask;
+
+ /* check that we would really get from
+ * here to there.
+ */
+ if ((mbase | offset) != pci_addr)
+ continue;
+
+ devreg &= ~BRIDGE_DEV_OFF_MASK;
+ if (space != PCIIO_SPACE_IO)
+ devreg |= BRIDGE_DEV_DEV_IO_MEM;
+ else
+ devreg &= ~BRIDGE_DEV_DEV_IO_MEM;
+ devreg |= (mbase >> 20) & BRIDGE_DEV_OFF_MASK;
+
+ /* default is WORD_VALUES.
+ * if you specify both,
+ * operation is undefined.
+ */
+ if (flags & PCIIO_BYTE_STREAM)
+ devreg |= BRIDGE_DEV_DEV_SWAP;
+ else
+ devreg &= ~BRIDGE_DEV_DEV_SWAP;
+
+ if (pcibr_soft->bs_slot[win].bss_device != devreg) {
+ pcireg_device_set(pcibr_soft, win, devreg);
+ pcibr_soft->bs_slot[win].bss_device = devreg;
+ pcireg_tflush_get(pcibr_soft);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DEVREG, pconn_vhdl,
+ "pcibr_addr_pci_to_xio: Device(%d): 0x%x\n",
+ win, devreg));
+ }
+ pcibr_soft->bs_slot[win].bss_devio.bssd_space = space;
+ pcibr_soft->bs_slot[win].bss_devio.bssd_base = mbase;
+ xio_addr = PCIBR_BRIDGE_DEVIO(pcibr_soft, win) + (pci_addr - mbase);
+
+ /* Increment this DevIO's use count */
+ pcibr_soft->bs_slot[win].bss_devio.bssd_ref_cnt++;
+
+ /* Save the DevIO register index used to access this BAR */
+ if (bar != -1)
+ pcibr_info->f_window[bar].w_devio_index = win;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_addr_pci_to_xio: map to space %s [0x%lx..0x%lx] "
+ "for slot %d allocates DevIO(%d) Device(%d) set to %lx\n",
+ pci_space[space], pci_addr, pci_addr + req_size - 1,
+ slot, win, win, devreg));
+
+ goto done;
+ } /* endif DevIO(x) not pointed */
+ mbase = pcibr_soft->bs_slot[win].bss_devio.bssd_base;
+
+ /* Now check for request incompat with DevIO(x)
+ */
+ if ((mspace != space) ||
+ (pci_addr < mbase) ||
+ ((pci_addr + req_size) > (mbase + msize)) ||
+ ((flags & PCIIO_BYTE_STREAM) && !(devreg & BRIDGE_DEV_DEV_SWAP)) ||
+ (!(flags & PCIIO_BYTE_STREAM) && (devreg & BRIDGE_DEV_DEV_SWAP)))
+ continue;
+
+ /* DevIO(x) window is pointed at PCI space
+ * that includes our target. Calculate the
+ * final XIO address, release the lock and
+ * return.
+ */
+ xio_addr = PCIBR_BRIDGE_DEVIO(pcibr_soft, win) + (pci_addr - mbase);
+
+ /* Increment this DevIO's use count */
+ pcibr_soft->bs_slot[win].bss_devio.bssd_ref_cnt++;
+
+ /* Save the DevIO register index used to access this BAR */
+ if (bar != -1)
+ pcibr_info->f_window[bar].w_devio_index = win;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_addr_pci_to_xio: map to space %s [0x%lx..0x%lx] "
+ "for slot %d uses DevIO(%d)\n", pci_space[space],
+ pci_addr, pci_addr + req_size - 1, slot, win));
+ goto done;
+ }
+
+ switch (space) {
+ /*
+ * Accesses to device decode
+ * areas that do not fit
+ * within the DevIO(x) space are
+ * modified to be accesses via
+ * the direct mapping areas.
+ *
+ * If necessary, drivers can
+ * explicitly ask for mappings
+ * into these address spaces,
+ * but this should never be needed.
+ */
+ case PCIIO_SPACE_MEM: /* "mem space" */
+ case PCIIO_SPACE_MEM32: /* "mem, use 32-bit-wide bus" */
+ if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 0)) { /* PIC bus 0 */
+ base = PICBRIDGE0_PCI_MEM32_BASE;
+ limit = PICBRIDGE0_PCI_MEM32_LIMIT;
+ } else if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 1)) { /* PIC bus 1 */
+ base = PICBRIDGE1_PCI_MEM32_BASE;
+ limit = PICBRIDGE1_PCI_MEM32_LIMIT;
+ } else {
+ printk(KERN_WARNING "pcibr_addr_pci_to_xio(): unknown bridge type\n");
+ return (iopaddr_t)0;
+ }
+
+ if ((pci_addr + base + req_size - 1) <= limit)
+ xio_addr = pci_addr + base;
+ break;
+
+ case PCIIO_SPACE_MEM64: /* "mem, use 64-bit-wide bus" */
+ if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 0)) { /* PIC bus 0 */
+ base = PICBRIDGE0_PCI_MEM64_BASE;
+ limit = PICBRIDGE0_PCI_MEM64_LIMIT;
+ } else if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 1)) { /* PIC bus 1 */
+ base = PICBRIDGE1_PCI_MEM64_BASE;
+ limit = PICBRIDGE1_PCI_MEM64_LIMIT;
+ } else {
+ printk(KERN_WARNING "pcibr_addr_pci_to_xio(): unknown bridge type\n");
+ return (iopaddr_t)0;
+ }
+
+ if ((pci_addr + base + req_size - 1) <= limit)
+ xio_addr = pci_addr + base;
+ break;
+
+ case PCIIO_SPACE_IO: /* "i/o space" */
+ /*
+ * PIC bridges do not support big-window aliases into PCI I/O space
+ */
+ xio_addr = XIO_NOWHERE;
+ break;
+ }
+
+ /* Check that "Direct PIO" byteswapping matches,
+ * try to change it if it does not.
+ */
+ if (xio_addr != XIO_NOWHERE) {
+ unsigned bst; /* nonzero to set bytestream */
+ unsigned *bfp; /* addr of record of how swapper is set */
+ uint64_t swb; /* which control bit to mung */
+ unsigned bfo; /* current swapper setting */
+ unsigned bfn; /* desired swapper setting */
+
+ bfp = ((space == PCIIO_SPACE_IO)
+ ? (&pcibr_soft->bs_pio_end_io)
+ : (&pcibr_soft->bs_pio_end_mem));
+
+ bfo = *bfp;
+
+ bst = flags & PCIIO_BYTE_STREAM;
+
+ bfn = bst ? PCIIO_BYTE_STREAM : PCIIO_WORD_VALUES;
+
+ if (bfn == bfo) { /* we already match. */
+ ;
+ } else if (bfo != 0) { /* we have a conflict. */
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_addr_pci_to_xio: swap conflict in %s, "
+ "was%s%s, want%s%s\n", pci_space[space],
+ bfo & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "",
+ bfo & PCIIO_WORD_VALUES ? " WORD_VALUES" : "",
+ bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "",
+ bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""));
+ xio_addr = XIO_NOWHERE;
+ } else { /* OK to make the change. */
+ swb = (space == PCIIO_SPACE_IO) ? 0: BRIDGE_CTRL_MEM_SWAP;
+ if (bst) {
+ pcireg_control_bit_set(pcibr_soft, swb);
+ } else {
+ pcireg_control_bit_clr(pcibr_soft, swb);
+ }
+
+ *bfp = bfn; /* record the assignment */
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_addr_pci_to_xio: swap for %s set to%s%s\n",
+ pci_space[space],
+ bfn & PCIIO_BYTE_STREAM ? " BYTE_STREAM" : "",
+ bfn & PCIIO_WORD_VALUES ? " WORD_VALUES" : ""));
+ }
+ }
+ done:
+ pcibr_unlock(pcibr_soft, s);
+ return xio_addr;
+}
+
+/*ARGSUSED6 */
+pcibr_piomap_t
+pcibr_piomap_alloc(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ iopaddr_t pci_addr,
+ size_t req_size,
+ size_t req_size_max,
+ unsigned flags)
+{
+ pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl);
+ pciio_info_t pciio_info = &pcibr_info->f_c;
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+
+ pcibr_piomap_t *mapptr;
+ pcibr_piomap_t maplist;
+ pcibr_piomap_t pcibr_piomap;
+ iopaddr_t xio_addr;
+ xtalk_piomap_t xtalk_piomap;
+ unsigned long s;
+
+ /* Make sure that the req sizes are non-zero */
+ if ((req_size < 1) || (req_size_max < 1)) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piomap_alloc: req_size | req_size_max < 1\n"));
+ return NULL;
+ }
+
+ /*
+ * Code to translate slot/space/addr
+ * into xio_addr is common between
+ * this routine and pcibr_piotrans_addr.
+ */
+ xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags);
+
+ if (xio_addr == XIO_NOWHERE) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piomap_alloc: xio_addr == XIO_NOWHERE\n"));
+ return NULL;
+ }
+
+ /* Check the piomap list to see if there is already an allocated
+ * piomap entry but not in use. If so use that one. Otherwise
+ * allocate a new piomap entry and add it to the piomap list
+ */
+ mapptr = &(pcibr_info->f_piomap);
+
+ s = pcibr_lock(pcibr_soft);
+ for (pcibr_piomap = *mapptr;
+ pcibr_piomap != NULL;
+ pcibr_piomap = pcibr_piomap->bp_next) {
+ if (pcibr_piomap->bp_mapsz == 0)
+ break;
+ }
+
+ if (pcibr_piomap)
+ mapptr = NULL;
+ else {
+ pcibr_unlock(pcibr_soft, s);
+ pcibr_piomap = kmalloc(sizeof (*(pcibr_piomap)), GFP_KERNEL);
+ if ( !pcibr_piomap ) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piomap_alloc: malloc fails\n"));
+ return NULL;
+ }
+ memset(pcibr_piomap, 0, sizeof (*(pcibr_piomap)));
+ }
+
+ pcibr_piomap->bp_dev = pconn_vhdl;
+ pcibr_piomap->bp_slot = PCIBR_DEVICE_TO_SLOT(pcibr_soft, pciio_slot);
+ pcibr_piomap->bp_flags = flags;
+ pcibr_piomap->bp_space = space;
+ pcibr_piomap->bp_pciaddr = pci_addr;
+ pcibr_piomap->bp_mapsz = req_size;
+ pcibr_piomap->bp_soft = pcibr_soft;
+ pcibr_piomap->bp_toc = ATOMIC_INIT(0);
+
+ if (mapptr) {
+ s = pcibr_lock(pcibr_soft);
+ maplist = *mapptr;
+ pcibr_piomap->bp_next = maplist;
+ *mapptr = pcibr_piomap;
+ }
+ pcibr_unlock(pcibr_soft, s);
+
+
+ if (pcibr_piomap) {
+ xtalk_piomap =
+ xtalk_piomap_alloc(xconn_vhdl, 0,
+ xio_addr,
+ req_size, req_size_max,
+ flags & PIOMAP_FLAGS);
+ if (xtalk_piomap) {
+ pcibr_piomap->bp_xtalk_addr = xio_addr;
+ pcibr_piomap->bp_xtalk_pio = xtalk_piomap;
+ } else {
+ pcibr_piomap->bp_mapsz = 0;
+ pcibr_piomap = 0;
+ }
+ }
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piomap_alloc: map=0x%lx\n", pcibr_piomap));
+
+ return pcibr_piomap;
+}
+
+/*ARGSUSED */
+void
+pcibr_piomap_free(pcibr_piomap_t pcibr_piomap)
+{
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pcibr_piomap->bp_dev,
+ "pcibr_piomap_free: map=0x%lx\n", pcibr_piomap));
+
+ xtalk_piomap_free(pcibr_piomap->bp_xtalk_pio);
+ pcibr_piomap->bp_xtalk_pio = 0;
+ pcibr_piomap->bp_mapsz = 0;
+}
+
+/*ARGSUSED */
+caddr_t
+pcibr_piomap_addr(pcibr_piomap_t pcibr_piomap,
+ iopaddr_t pci_addr,
+ size_t req_size)
+{
+ caddr_t addr;
+ addr = xtalk_piomap_addr(pcibr_piomap->bp_xtalk_pio,
+ pcibr_piomap->bp_xtalk_addr +
+ pci_addr - pcibr_piomap->bp_pciaddr,
+ req_size);
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pcibr_piomap->bp_dev,
+ "pcibr_piomap_addr: map=0x%lx, addr=0x%lx\n",
+ pcibr_piomap, addr));
+
+ return addr;
+}
+
+/*ARGSUSED */
+void
+pcibr_piomap_done(pcibr_piomap_t pcibr_piomap)
+{
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pcibr_piomap->bp_dev,
+ "pcibr_piomap_done: map=0x%lx\n", pcibr_piomap));
+ xtalk_piomap_done(pcibr_piomap->bp_xtalk_pio);
+}
+
+/*ARGSUSED */
+caddr_t
+pcibr_piotrans_addr(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ iopaddr_t pci_addr,
+ size_t req_size,
+ unsigned flags)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+
+ iopaddr_t xio_addr;
+ caddr_t addr;
+
+ xio_addr = pcibr_addr_pci_to_xio(pconn_vhdl, pciio_slot, space, pci_addr, req_size, flags);
+
+ if (xio_addr == XIO_NOWHERE) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIODIR, pconn_vhdl,
+ "pcibr_piotrans_addr: xio_addr == XIO_NOWHERE\n"));
+ return NULL;
+ }
+
+ addr = xtalk_piotrans_addr(xconn_vhdl, 0, xio_addr, req_size, flags & PIOMAP_FLAGS);
+ PCIBR_DEBUG((PCIBR_DEBUG_PIODIR, pconn_vhdl,
+ "pcibr_piotrans_addr: xio_addr=0x%lx, addr=0x%lx\n",
+ xio_addr, addr));
+ return addr;
+}
+
+/*
+ * PIO Space allocation and management.
+ * Allocate and Manage the PCI PIO space (mem and io space)
+ * This routine is fairly simplistic at this time, and
+ * does only trivial management of allocation and freeing.
+ * The current scheme is prone to fragmentation;
+ * it should eventually be changed to use bitmaps.
+ */
+
+/*ARGSUSED */
+iopaddr_t
+pcibr_piospace_alloc(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ size_t req_size,
+ size_t alignment)
+{
+ pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl);
+ pciio_info_t pciio_info = &pcibr_info->f_c;
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+
+ pciio_piospace_t piosp;
+ unsigned long s;
+
+ iopaddr_t start_addr;
+ size_t align_mask;
+
+ /*
+ * Check for proper alignment
+ */
+ ASSERT(alignment >= PAGE_SIZE);
+ ASSERT((alignment & (alignment - 1)) == 0);
+
+ align_mask = alignment - 1;
+ s = pcibr_lock(pcibr_soft);
+
+ /*
+ * First look if a previously allocated chunk exists.
+ */
+ piosp = pcibr_info->f_piospace;
+ if (piosp) {
+ /*
+ * Look through the list for a right sized free chunk.
+ */
+ do {
+ if (piosp->free &&
+ (piosp->space == space) &&
+ (piosp->count >= req_size) &&
+ !(piosp->start & align_mask)) {
+ piosp->free = 0;
+ pcibr_unlock(pcibr_soft, s);
+ return piosp->start;
+ }
+ piosp = piosp->next;
+ } while (piosp);
+ }
+ ASSERT(!piosp);
+
+ /*
+ * Allocate PCI bus address, usually for the Universe chip driver;
+ * do not pass window info since the actual PCI bus address
+ * space will never be freed. The space may be reused after it
+ * is logically released by pcibr_piospace_free().
+ */
+ switch (space) {
+ case PCIIO_SPACE_IO:
+ start_addr = pcibr_bus_addr_alloc(pcibr_soft, NULL,
+ PCIIO_SPACE_IO,
+ 0, req_size, alignment);
+ break;
+
+ case PCIIO_SPACE_MEM:
+ case PCIIO_SPACE_MEM32:
+ start_addr = pcibr_bus_addr_alloc(pcibr_soft, NULL,
+ PCIIO_SPACE_MEM32,
+ 0, req_size, alignment);
+ break;
+
+ default:
+ ASSERT(0);
+ pcibr_unlock(pcibr_soft, s);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piospace_alloc: unknown space %d\n", space));
+ return 0;
+ }
+
+ /*
+ * If the allocation failed (e.g. the request was too big), reject it.
+ */
+ if (!start_addr) {
+ pcibr_unlock(pcibr_soft, s);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piospace_alloc: request 0x%lx too big\n", req_size));
+ return 0;
+ }
+
+ piosp = kmalloc(sizeof (*(piosp)), GFP_KERNEL);
+ if ( !piosp ) {
+ pcibr_unlock(pcibr_soft, s);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piospace_alloc: malloc fails\n"));
+ return 0;
+ }
+ memset(piosp, 0, sizeof (*(piosp)));
+
+ piosp->free = 0;
+ piosp->space = space;
+ piosp->start = start_addr;
+ piosp->count = req_size;
+ piosp->next = pcibr_info->f_piospace;
+ pcibr_info->f_piospace = piosp;
+
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piospace_alloc: piosp=0x%lx\n", piosp));
+
+ return start_addr;
+}
+
+#define ERR_MSG "!Device %s freeing size (0x%lx) different than allocated (0x%lx)"
+/*ARGSUSED */
+void
+pcibr_piospace_free(vertex_hdl_t pconn_vhdl,
+ pciio_space_t space,
+ iopaddr_t pciaddr,
+ size_t req_size)
+{
+ pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast;
+ pciio_piospace_t piosp;
+ unsigned long s;
+ char name[1024];
+
+ /*
+ * Look through the bridge data structures for the pciio_piospace_t
+ * structure corresponding to 'pciaddr'
+ */
+ s = pcibr_lock(pcibr_soft);
+ piosp = pcibr_info->f_piospace;
+ while (piosp) {
+ /*
+ * Piospace free can only be for the complete
+ * chunk, not for parts of it.
+ */
+ if (piosp->start == pciaddr) {
+ if (piosp->count == req_size)
+ break;
+ /*
+ * Improper size passed for freeing.
+ * Print a message and break.
+ */
+ hwgraph_vertex_name_get(pconn_vhdl, name, 1024);
+ printk(KERN_WARNING "pcibr_piospace_free: error\n");
+ printk(KERN_WARNING "Device %s freeing size (0x%lx) different than allocated (0x%lx)\n",
+ name, req_size, piosp->count);
+ printk(KERN_WARNING "Freeing 0x%lx instead\n", piosp->count);
+ break;
+ }
+ piosp = piosp->next;
+ }
+
+ if (!piosp) {
+ printk(KERN_WARNING
+ "pcibr_piospace_free: Address 0x%lx size 0x%lx - No match\n",
+ pciaddr, req_size);
+ pcibr_unlock(pcibr_soft, s);
+ return;
+ }
+ piosp->free = 1;
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_PIOMAP, pconn_vhdl,
+ "pcibr_piospace_free: piosp=0x%lx\n", piosp));
+ return;
+}
+
+/* =====================================================================
+ * DMA MANAGEMENT
+ *
+ * The Bridge ASIC provides three methods of doing
+ * DMA: via a "direct map" register available in
+ * 32-bit PCI space (which selects a contiguous 2G
+ * address space on some other widget), via
+ * "direct" addressing via 64-bit PCI space (all
+ * destination information comes from the PCI
+ * address, including transfer attributes), and via
+ * a "mapped" region that allows a bunch of
+ * different small mappings to be established with
+ * the PMU.
+ *
+ * For efficiency, we most prefer to use the 32-bit
+ * direct mapping facility, since it requires no
+ * resource allocations. The advantage of using the
+ * PMU over the 64-bit direct is that single-cycle
+ * PCI addressing can be used; the advantage of
+ * using 64-bit direct over PMU addressing is that
+ * we do not have to allocate entries in the PMU.
+ */
+
+/*
+ * Convert PCI-generic software flags and Bridge-specific software flags
+ * into Bridge-specific Direct Map attribute bits.
+ */
+static iopaddr_t
+pcibr_flags_to_d64(unsigned flags, pcibr_soft_t pcibr_soft)
+{
+ iopaddr_t attributes = 0;
+
+ /* Sanity check: Bridge only allows use of VCHAN1 via 64-bit addrs */
+#ifdef LATER
+ ASSERT_ALWAYS(!(flags & PCIBR_VCHAN1) || (flags & PCIIO_DMA_A64));
+#endif
+
+ /* Generic macro flags
+ */
+ if (flags & PCIIO_DMA_DATA) { /* standard data channel */
+ attributes &= ~PCI64_ATTR_BAR; /* no barrier bit */
+ attributes |= PCI64_ATTR_PREF; /* prefetch on */
+ }
+ if (flags & PCIIO_DMA_CMD) { /* standard command channel */
+ attributes |= PCI64_ATTR_BAR; /* barrier bit on */
+ attributes &= ~PCI64_ATTR_PREF; /* disable prefetch */
+ }
+ /* Generic detail flags
+ */
+ if (flags & PCIIO_PREFETCH)
+ attributes |= PCI64_ATTR_PREF;
+ if (flags & PCIIO_NOPREFETCH)
+ attributes &= ~PCI64_ATTR_PREF;
+
+ /* the swap bit is in the address attributes for xbridge */
+ if (flags & PCIIO_BYTE_STREAM)
+ attributes |= PCI64_ATTR_SWAP;
+ if (flags & PCIIO_WORD_VALUES)
+ attributes &= ~PCI64_ATTR_SWAP;
+
+ /* Provider-specific flags
+ */
+ if (flags & PCIBR_BARRIER)
+ attributes |= PCI64_ATTR_BAR;
+ if (flags & PCIBR_NOBARRIER)
+ attributes &= ~PCI64_ATTR_BAR;
+
+ if (flags & PCIBR_PREFETCH)
+ attributes |= PCI64_ATTR_PREF;
+ if (flags & PCIBR_NOPREFETCH)
+ attributes &= ~PCI64_ATTR_PREF;
+
+ if (flags & PCIBR_PRECISE)
+ attributes |= PCI64_ATTR_PREC;
+ if (flags & PCIBR_NOPRECISE)
+ attributes &= ~PCI64_ATTR_PREC;
+
+ if (flags & PCIBR_VCHAN1)
+ attributes |= PCI64_ATTR_VIRTUAL;
+ if (flags & PCIBR_VCHAN0)
+ attributes &= ~PCI64_ATTR_VIRTUAL;
+
+ /* PIC in PCI-X mode only supports barrier & swap */
+ if (IS_PCIX(pcibr_soft)) {
+ attributes &= (PCI64_ATTR_BAR | PCI64_ATTR_SWAP);
+ }
+
+ return attributes;
+}
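The ordering in pcibr_flags_to_d64() matters: the generic channel flags establish defaults, and the more specific detail and provider flags tested later override them. The following is a hedged standalone sketch of that override pattern; the bit values (ATTR_BAR, ATTR_PREF) and flag values (F_*) are invented stand-ins for the real PCI64_ATTR_* and PCIIO_* constants.

```c
#include <assert.h>
#include <stdint.h>

/* Invented bit positions, for illustration only. */
#define ATTR_BAR  (1ULL << 0)
#define ATTR_PREF (1ULL << 1)

#define F_DMA_DATA   0x01u
#define F_DMA_CMD    0x02u
#define F_PREFETCH   0x04u
#define F_NOPREFETCH 0x08u

/* Mirrors the set/clear ordering above: later, more specific flags
 * override the defaults implied by earlier, generic ones. */
static uint64_t flags_to_attrs(unsigned flags)
{
    uint64_t attrs = 0;

    if (flags & F_DMA_DATA)          /* data channel: prefetch on, no barrier */
        attrs |= ATTR_PREF;
    if (flags & F_DMA_CMD)           /* command channel: barrier on, prefetch off */
        attrs = (attrs | ATTR_BAR) & ~ATTR_PREF;
    if (flags & F_PREFETCH)          /* explicit detail flags win last */
        attrs |= ATTR_PREF;
    if (flags & F_NOPREFETCH)
        attrs &= ~ATTR_PREF;
    return attrs;
}
```

Because each test simply sets or clears its bit, an explicit PREFETCH can re-enable prefetch after a CMD channel turned it off, just as in the real routine.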
+
+/*ARGSUSED */
+pcibr_dmamap_t
+pcibr_dmamap_alloc(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ size_t req_size_max,
+ unsigned flags)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+ pciio_slot_t slot;
+ xwidgetnum_t xio_port;
+
+ xtalk_dmamap_t xtalk_dmamap;
+ pcibr_dmamap_t pcibr_dmamap;
+ int ate_count;
+ int ate_index;
+ int vchan = VCHAN0;
+ unsigned long s;
+
+ /* merge in forced flags */
+ flags |= pcibr_soft->bs_dma_flags;
+
+ /*
+ * On SNIA64, these maps are pre-allocated because pcibr_dmamap_alloc()
+ * can be called within an interrupt thread.
+ */
+ s = pcibr_lock(pcibr_soft);
+ pcibr_dmamap = (pcibr_dmamap_t)get_free_pciio_dmamap(pcibr_soft->bs_vhdl);
+ pcibr_unlock(pcibr_soft, s);
+
+ if (!pcibr_dmamap)
+ return 0;
+
+ xtalk_dmamap = xtalk_dmamap_alloc(xconn_vhdl, dev_desc, req_size_max,
+ flags & DMAMAP_FLAGS);
+ if (!xtalk_dmamap) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP, pconn_vhdl,
+ "pcibr_dmamap_alloc: xtalk_dmamap_alloc failed\n"));
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+ }
+ xio_port = pcibr_soft->bs_mxid;
+ slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+
+ pcibr_dmamap->bd_dev = pconn_vhdl;
+ pcibr_dmamap->bd_slot = PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot);
+ pcibr_dmamap->bd_soft = pcibr_soft;
+ pcibr_dmamap->bd_xtalk = xtalk_dmamap;
+ pcibr_dmamap->bd_max_size = req_size_max;
+ pcibr_dmamap->bd_xio_port = xio_port;
+
+ if (flags & PCIIO_DMA_A64) {
+ if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D64_BITS)) {
+ iopaddr_t pci_addr;
+ int have_rrbs;
+ int min_rrbs;
+
+ /* Device is capable of A64 operations,
+ * and the attributes of the DMA are
+ * consistent with any previous DMA
+ * mappings using shared resources.
+ */
+
+ pci_addr = pcibr_flags_to_d64(flags, pcibr_soft);
+
+ pcibr_dmamap->bd_flags = flags;
+ pcibr_dmamap->bd_xio_addr = 0;
+ pcibr_dmamap->bd_pci_addr = pci_addr;
+
+ /* If in PCI mode, make sure we have an RRB (or two).
+ */
+ if (IS_PCI(pcibr_soft) &&
+ !(pcibr_soft->bs_rrb_fixed & (1 << slot))) {
+ if (flags & PCIBR_VCHAN1)
+ vchan = VCHAN1;
+ have_rrbs = pcibr_soft->bs_rrb_valid[slot][vchan];
+ if (have_rrbs < 2) {
+ if (pci_addr & PCI64_ATTR_PREF)
+ min_rrbs = 2;
+ else
+ min_rrbs = 1;
+ if (have_rrbs < min_rrbs)
+ pcibr_rrb_alloc_more(pcibr_soft, slot, vchan,
+ min_rrbs - have_rrbs);
+ }
+ }
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmamap_alloc: using direct64, map=0x%lx\n",
+ pcibr_dmamap));
+ return pcibr_dmamap;
+ }
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmamap_alloc: unable to use direct64\n"));
+
+ /* PIC in PCI-X mode only supports 64-bit direct mapping so
+ * don't fall thru and try 32-bit direct mapping or 32-bit
+ * page mapping
+ */
+ if (IS_PCIX(pcibr_soft)) {
+ xtalk_dmamap_free(xtalk_dmamap);
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+ }
+
+ flags &= ~PCIIO_DMA_A64;
+ }
+ if (flags & PCIIO_FIXED) {
+ /* warning: mappings may fail later,
+ * if direct32 can't get to the address.
+ */
+ if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_D32_BITS)) {
+ /* User desires DIRECT A32 operations,
+ * and the attributes of the DMA are
+ * consistent with any previous DMA
+ * mappings using shared resources.
+ * Mapping calls may fail if target
+ * is outside the direct32 range.
+ */
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmamap_alloc: using direct32, map=0x%lx\n",
+ pcibr_dmamap));
+ pcibr_dmamap->bd_flags = flags;
+ pcibr_dmamap->bd_xio_addr = pcibr_soft->bs_dir_xbase;
+ pcibr_dmamap->bd_pci_addr = PCI32_DIRECT_BASE;
+ return pcibr_dmamap;
+ }
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmamap_alloc: unable to use direct32\n"));
+
+ /* If the user demands FIXED and we can't
+ * give it to him, fail.
+ */
+ xtalk_dmamap_free(xtalk_dmamap);
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+ }
+ /*
+ * Allocate Address Translation Entries from the mapping RAM.
+ * Unless the PCIBR_NO_ATE_ROUNDUP flag is specified,
+ * the maximum number of ATEs is based on the worst-case
+ * scenario, where the requested target starts in the
+ * last byte of an I/O page; thus, mapping IOPGSIZE+2
+ * bytes can end up requiring three ATEs.
+ */
+ if (!(flags & PCIBR_NO_ATE_ROUNDUP)) {
+ ate_count = IOPG((IOPGSIZE - 1) /* worst case start offset */
+ +req_size_max /* max mapping bytes */
+ - 1) + 1; /* round UP */
+ } else { /* assume requested target is page aligned */
+ ate_count = IOPG(req_size_max /* max mapping bytes */
+ - 1) + 1; /* round UP */
+ }
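The worst-case rounding above can be checked in isolation: a request may start at the last byte of an I/O page, so the page count is computed on the request padded by IOPGSIZE - 1. Below is a small sketch assuming a 16 KB IOPGSIZE and IOPG(x) = x / IOPGSIZE (illustrative values only; the real definitions live elsewhere in the kernel), which reproduces the comment's claim that mapping IOPGSIZE + 2 bytes needs three ATEs.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed values: a 16 KB I/O page is only an illustrative choice. */
#define IOPGSIZE 16384
#define IOPG(x)  ((x) / IOPGSIZE)

/* Worst case: the mapping may start at the last byte of an I/O page,
 * so pad the request by (IOPGSIZE - 1) before rounding up. */
static int ate_count_worst_case(size_t req_size_max)
{
    return IOPG((IOPGSIZE - 1) + req_size_max - 1) + 1;
}

/* Page-aligned assumption (PCIBR_NO_ATE_ROUNDUP): no padding needed. */
static int ate_count_aligned(size_t req_size_max)
{
    return IOPG(req_size_max - 1) + 1;
}
```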
+
+ ate_index = pcibr_ate_alloc(pcibr_soft, ate_count, &pcibr_dmamap->resource);
+
+ if (ate_index != -1) {
+ if (!pcibr_try_set_device(pcibr_soft, slot, flags, BRIDGE_DEV_PMU_BITS)) {
+ bridge_ate_t ate_proto;
+ int have_rrbs;
+ int min_rrbs;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP, pconn_vhdl,
+ "pcibr_dmamap_alloc: using PMU, ate_index=%d, "
+ "pcibr_dmamap=0x%lx\n", ate_index, pcibr_dmamap));
+
+ ate_proto = pcibr_flags_to_ate(pcibr_soft, flags);
+
+ pcibr_dmamap->bd_flags = flags;
+ pcibr_dmamap->bd_pci_addr =
+ PCI32_MAPPED_BASE + IOPGSIZE * ate_index;
+
+ if (flags & PCIIO_BYTE_STREAM)
+ ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr);
+ /*
+ * If swap was set in bss_device in pcibr_endian_set()
+ * we need to change the address bit.
+ */
+ if (pcibr_soft->bs_slot[slot].bss_device &
+ BRIDGE_DEV_SWAP_PMU)
+ ATE_SWAP_ON(pcibr_dmamap->bd_pci_addr);
+ if (flags & PCIIO_WORD_VALUES)
+ ATE_SWAP_OFF(pcibr_dmamap->bd_pci_addr);
+ pcibr_dmamap->bd_xio_addr = 0;
+ pcibr_dmamap->bd_ate_ptr = pcibr_ate_addr(pcibr_soft, ate_index);
+ pcibr_dmamap->bd_ate_index = ate_index;
+ pcibr_dmamap->bd_ate_count = ate_count;
+ pcibr_dmamap->bd_ate_proto = ate_proto;
+
+ /* Make sure we have an RRB (or two).
+ */
+ if (!(pcibr_soft->bs_rrb_fixed & (1 << slot))) {
+ have_rrbs = pcibr_soft->bs_rrb_valid[slot][vchan];
+ if (have_rrbs < 2) {
+ if (ate_proto & ATE_PREF)
+ min_rrbs = 2;
+ else
+ min_rrbs = 1;
+ if (have_rrbs < min_rrbs)
+ pcibr_rrb_alloc_more(pcibr_soft, slot, vchan,
+ min_rrbs - have_rrbs);
+ }
+ }
+ return pcibr_dmamap;
+ }
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP, pconn_vhdl,
+ "pcibr_dmamap_alloc: PMU use failed, ate_index=%d\n",
+ ate_index));
+
+ pcibr_ate_free(pcibr_soft, ate_index, ate_count, &pcibr_dmamap->resource);
+ }
+ /* total failure: sorry, you just can't
+ * get from here to there that way.
+ */
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP, pconn_vhdl,
+ "pcibr_dmamap_alloc: complete failure.\n"));
+ xtalk_dmamap_free(xtalk_dmamap);
+ free_pciio_dmamap(pcibr_dmamap);
+ return 0;
+}
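The RRB top-up logic that appears twice in pcibr_dmamap_alloc() follows one rule: a prefetching DMA stream wants at least two read response buffers, a non-prefetching one at least a single buffer, and only the shortfall is requested from pcibr_rrb_alloc_more(). A standalone sketch of just that rule (rrb_shortfall is an invented helper name, not a function in this driver):

```c
#include <assert.h>

/* Prefetching streams want 2 RRBs; others get by with 1.
 * Returns how many additional RRBs to request, never negative. */
static int rrb_shortfall(int have_rrbs, int prefetch)
{
    int min_rrbs = prefetch ? 2 : 1;

    return (have_rrbs < min_rrbs) ? (min_rrbs - have_rrbs) : 0;
}
```

Note that, as in the driver, a slot that already holds two or more RRBs never triggers an allocation regardless of the prefetch attribute.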
+
+/*ARGSUSED */
+void
+pcibr_dmamap_free(pcibr_dmamap_t pcibr_dmamap)
+{
+ pcibr_soft_t pcibr_soft = pcibr_dmamap->bd_soft;
+ pciio_slot_t slot = PCIBR_SLOT_TO_DEVICE(pcibr_soft,
+ pcibr_dmamap->bd_slot);
+
+ xtalk_dmamap_free(pcibr_dmamap->bd_xtalk);
+
+ if (pcibr_dmamap->bd_flags & PCIIO_DMA_A64) {
+ pcibr_release_device(pcibr_soft, slot, BRIDGE_DEV_D64_BITS);
+ }
+ if (pcibr_dmamap->bd_ate_count) {
+ pcibr_ate_free(pcibr_dmamap->bd_soft,
+ pcibr_dmamap->bd_ate_index,
+ pcibr_dmamap->bd_ate_count,
+ &pcibr_dmamap->resource);
+ pcibr_release_device(pcibr_soft, slot, XBRIDGE_DEV_PMU_BITS);
+ }
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP, pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_free: pcibr_dmamap=0x%lx\n", pcibr_dmamap));
+
+ free_pciio_dmamap(pcibr_dmamap);
+}
+
+/*
+ * pcibr_addr_xio_to_pci: given a PIO range, hand
+ * back the corresponding base PCI MEM address;
+ * this is used to short-circuit DMA requests that
+ * loop back onto this PCI bus.
+ */
+static iopaddr_t
+pcibr_addr_xio_to_pci(pcibr_soft_t soft,
+ iopaddr_t xio_addr,
+ size_t req_size)
+{
+ iopaddr_t xio_lim = xio_addr + req_size - 1;
+ iopaddr_t pci_addr;
+ pciio_slot_t slot;
+
+ if (IS_PIC_BUSNUM_SOFT(soft, 0)) {
+ if ((xio_addr >= PICBRIDGE0_PCI_MEM32_BASE) &&
+ (xio_lim <= PICBRIDGE0_PCI_MEM32_LIMIT)) {
+ pci_addr = xio_addr - PICBRIDGE0_PCI_MEM32_BASE;
+ return pci_addr;
+ }
+ if ((xio_addr >= PICBRIDGE0_PCI_MEM64_BASE) &&
+ (xio_lim <= PICBRIDGE0_PCI_MEM64_LIMIT)) {
+ pci_addr = xio_addr - PICBRIDGE0_PCI_MEM64_BASE;
+ return pci_addr;
+ }
+ } else if (IS_PIC_BUSNUM_SOFT(soft, 1)) {
+ if ((xio_addr >= PICBRIDGE1_PCI_MEM32_BASE) &&
+ (xio_lim <= PICBRIDGE1_PCI_MEM32_LIMIT)) {
+ pci_addr = xio_addr - PICBRIDGE1_PCI_MEM32_BASE;
+ return pci_addr;
+ }
+ if ((xio_addr >= PICBRIDGE1_PCI_MEM64_BASE) &&
+ (xio_lim <= PICBRIDGE1_PCI_MEM64_LIMIT)) {
+ pci_addr = xio_addr - PICBRIDGE1_PCI_MEM64_BASE;
+ return pci_addr;
+ }
+ } else {
+ printk(KERN_WARNING "pcibr_addr_xio_to_pci(): unknown bridge type\n");
+ return (iopaddr_t)0;
+ }
+ for (slot = soft->bs_min_slot; slot < PCIBR_NUM_SLOTS(soft); ++slot)
+ if ((xio_addr >= PCIBR_BRIDGE_DEVIO(soft, slot)) &&
+ (xio_lim < PCIBR_BRIDGE_DEVIO(soft, slot + 1))) {
+ uint64_t dev;
+
+ dev = soft->bs_slot[slot].bss_device;
+ pci_addr = dev & BRIDGE_DEV_OFF_MASK;
+ pci_addr <<= BRIDGE_DEV_OFF_ADDR_SHFT;
+ pci_addr += xio_addr - PCIBR_BRIDGE_DEVIO(soft, slot);
+ return (dev & BRIDGE_DEV_DEV_IO_MEM) ? pci_addr : PCI_NOWHERE;
+ }
+ return 0;
+}
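For the big-window cases, pcibr_addr_xio_to_pci() is the inverse of the direct-map translation: check that the whole request lies inside an aperture, then subtract the aperture base. A self-contained sketch of one such window check, with invented MEM32_BASE/MEM32_LIMIT values and an assumed XIO_NOWHERE sentinel (the real PICBRIDGE constants differ):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative aperture; the real PICBRIDGE*_PCI_MEM32_* values differ. */
#define MEM32_BASE  0x40000000ULL
#define MEM32_LIMIT 0xBFFFFFFFULL
#define XIO_NOWHERE ((uint64_t)-1)   /* assumed sentinel */

/* If [xio_addr, xio_addr + size - 1] lies inside the bridge's MEM32
 * aperture, the PCI address is just the offset from the aperture base. */
static uint64_t xio_to_pci_mem32(uint64_t xio_addr, uint64_t size)
{
    uint64_t xio_lim = xio_addr + size - 1;

    if (xio_addr >= MEM32_BASE && xio_lim <= MEM32_LIMIT)
        return xio_addr - MEM32_BASE;
    return XIO_NOWHERE;
}
```

As in the driver, a request whose last byte spills past the limit is rejected outright rather than partially translated.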
+
+/*ARGSUSED */
+iopaddr_t
+pcibr_dmamap_addr(pcibr_dmamap_t pcibr_dmamap,
+ paddr_t paddr,
+ size_t req_size)
+{
+ pcibr_soft_t pcibr_soft;
+ iopaddr_t xio_addr;
+ xwidgetnum_t xio_port;
+ iopaddr_t pci_addr;
+ unsigned flags;
+
+ ASSERT(pcibr_dmamap != NULL);
+ ASSERT(req_size > 0);
+ ASSERT(req_size <= pcibr_dmamap->bd_max_size);
+
+ pcibr_soft = pcibr_dmamap->bd_soft;
+
+ flags = pcibr_dmamap->bd_flags;
+
+ xio_addr = xtalk_dmamap_addr(pcibr_dmamap->bd_xtalk, paddr, req_size);
+ if (XIO_PACKED(xio_addr)) {
+ xio_port = XIO_PORT(xio_addr);
+ xio_addr = XIO_ADDR(xio_addr);
+ } else
+ xio_port = pcibr_dmamap->bd_xio_port;
+
+ /* If this DMA is to an address that
+ * refers back to this Bridge chip,
+ * reduce it back to the correct
+ * PCI MEM address.
+ */
+ if (xio_port == pcibr_soft->bs_xid) {
+ pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size);
+ } else if (flags & PCIIO_DMA_A64) {
+ /* A64 DMA:
+ * always use 64-bit direct mapping,
+ * which always works.
+ * Device(x) was set up during
+ * dmamap allocation.
+ */
+
+ /* attributes are already bundled up into bd_pci_addr.
+ */
+ pci_addr = pcibr_dmamap->bd_pci_addr
+ | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT)
+ | xio_addr;
+
+ /* Bridge Hardware WAR #482836:
+ * If the transfer is not cache aligned
+ * and the Bridge Rev is <= B, force
+ * prefetch to be off.
+ */
+ if (flags & PCIBR_NOPREFETCH)
+ pci_addr &= ~PCI64_ATTR_PREF;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR,
+ pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_addr: (direct64): wanted paddr [0x%lx..0x%lx] "
+ "XIO port 0x%x offset 0x%lx, returning PCI 0x%lx\n",
+ paddr, paddr + req_size - 1, xio_port, xio_addr, pci_addr));
+
+ } else if (flags & PCIIO_FIXED) {
+ /* A32 direct DMA:
+ * always use 32-bit direct mapping,
+ * which may fail.
+ * Device(x) was set up during
+ * dmamap allocation.
+ */
+
+ if (xio_port != pcibr_soft->bs_dir_xport)
+ pci_addr = 0; /* wrong DIDN */
+ else if (xio_addr < pcibr_dmamap->bd_xio_addr)
+ pci_addr = 0; /* out of range */
+ else if ((xio_addr + req_size) >
+ (pcibr_dmamap->bd_xio_addr + BRIDGE_DMA_DIRECT_SIZE))
+ pci_addr = 0; /* out of range */
+ else
+ pci_addr = pcibr_dmamap->bd_pci_addr +
+ xio_addr - pcibr_dmamap->bd_xio_addr;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP | PCIBR_DEBUG_DMADIR,
+ pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_addr (direct32): wanted paddr [0x%lx..0x%lx] "
+ "XIO port 0x%x offset 0x%lx, returning PCI 0x%lx\n",
+ paddr, paddr + req_size - 1, xio_port, xio_addr, pci_addr));
+
+ } else {
+ iopaddr_t offset = IOPGOFF(xio_addr);
+ bridge_ate_t ate_proto = pcibr_dmamap->bd_ate_proto;
+ int ate_count = IOPG(offset + req_size - 1) + 1;
+ int ate_index = pcibr_dmamap->bd_ate_index;
+ bridge_ate_t ate;
+
+ ate = ate_proto | (xio_addr - offset);
+ ate |= (xio_port << ATE_TIDSHIFT);
+
+ pci_addr = pcibr_dmamap->bd_pci_addr + offset;
+
+ /* Fill in our mapping registers
+ * with the appropriate xtalk data,
+ * and hand back the PCI address.
+ */
+
+ ASSERT(ate_count > 0);
+ if (ate_count <= pcibr_dmamap->bd_ate_count) {
+ ate_write(pcibr_soft, ate_index, ate_count, ate);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP, pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_addr (PMU) : wanted paddr "
+ "[0x%lx..0x%lx] returning PCI 0x%lx\n",
+ paddr, paddr + req_size - 1, pci_addr));
+
+ } else {
+ /* The number of ATE's required is greater than the number
+ * allocated for this map. One way this can happen is if
+ * pcibr_dmamap_alloc() was called with the PCIBR_NO_ATE_ROUNDUP
+ * flag, and then when that map is used (right now), the
+ * target address tells us we really did need to roundup.
+ * The other possibility is that the map is just plain too
+ * small to handle the requested target area.
+ */
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMAMAP, pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_addr (PMU) : wanted paddr "
+ "[0x%lx..0x%lx] ate_count 0x%x bd_ate_count 0x%x "
+ "ATE's required > number allocated\n",
+ paddr, paddr + req_size - 1,
+ ate_count, pcibr_dmamap->bd_ate_count));
+ pci_addr = 0;
+ }
+
+ }
+ return pci_addr;
+}
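The PMU branch of pcibr_dmamap_addr() builds each ATE from three pieces: the protocol/attribute bits, the page-aligned XIO address, and the target widget ID shifted into place. A hedged sketch of that composition, assuming a 16 KB I/O page and an invented ATE_TIDSHIFT of 8 (the real bit layout may differ):

```c
#include <assert.h>
#include <stdint.h>

#define IOPGSIZE     16384ULL            /* assumed I/O page size */
#define IOPGOFF(x)   ((x) & (IOPGSIZE - 1))
#define ATE_TIDSHIFT 8                   /* invented bit position */

/* Combine attribute bits, the page-aligned XIO address, and the
 * XIO target port into one translation entry; the low-order page
 * offset is carried in the PCI address instead, not in the ATE. */
static uint64_t make_ate(uint64_t proto, uint64_t xio_addr, unsigned xio_port)
{
    uint64_t offset = IOPGOFF(xio_addr);

    return proto | (xio_addr - offset) | ((uint64_t)xio_port << ATE_TIDSHIFT);
}
```

With these assumed field positions the three components occupy disjoint bits, so OR-ing them together is lossless, which is what lets the driver write one prototype ATE per page and only vary the address.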
+
+/*ARGSUSED */
+void
+pcibr_dmamap_done(pcibr_dmamap_t pcibr_dmamap)
+{
+ xtalk_dmamap_done(pcibr_dmamap->bd_xtalk);
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMAMAP, pcibr_dmamap->bd_dev,
+ "pcibr_dmamap_done: pcibr_dmamap=0x%lx\n", pcibr_dmamap));
+}
+
+
+/*
+ * For each bridge, the DIR_OFF value in the Direct Mapping Register
+ * determines the PCI to Crosstalk memory mapping to be used for all
+ * 32-bit Direct Mapping memory accesses. This mapping can be to any
+ * node in the system. This function will return that compact node id.
+ */
+
+/*ARGSUSED */
+cnodeid_t
+pcibr_get_dmatrans_node(vertex_hdl_t pconn_vhdl)
+{
+
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+
+ return nasid_to_cnodeid(NASID_GET(pcibr_soft->bs_dir_xbase));
+}
+
+/*ARGSUSED */
+iopaddr_t
+pcibr_dmatrans_addr(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ paddr_t paddr,
+ size_t req_size,
+ unsigned flags)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[pciio_slot];
+
+ xwidgetnum_t xio_port;
+ iopaddr_t xio_addr;
+ iopaddr_t pci_addr;
+
+ int have_rrbs;
+ int min_rrbs;
+ int vchan = VCHAN0;
+
+ /* merge in forced flags */
+ flags |= pcibr_soft->bs_dma_flags;
+
+ xio_addr = xtalk_dmatrans_addr(xconn_vhdl, 0, paddr, req_size,
+ flags & DMAMAP_FLAGS);
+ if (!xio_addr) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xtalk_dmatrans_addr failed with 0x%lx\n",
+ paddr, paddr + req_size - 1, xio_addr));
+ return 0;
+ }
+ /*
+ * find which XIO port this goes to.
+ */
+ if (XIO_PACKED(xio_addr)) {
+ if (xio_addr == XIO_NOWHERE) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xtalk_dmatrans_addr failed with XIO_NOWHERE\n",
+ paddr, paddr + req_size - 1));
+ return 0;
+ }
+ xio_port = XIO_PORT(xio_addr);
+ xio_addr = XIO_ADDR(xio_addr);
+
+ } else
+ xio_port = pcibr_soft->bs_mxid;
+
+ /*
+ * If this DMA comes back to us,
+ * return the PCI MEM address on
+ * which it would land, or NULL
+ * if the target is something
+ * on bridge other than PCI MEM.
+ */
+ if (xio_port == pcibr_soft->bs_xid) {
+ pci_addr = pcibr_addr_xio_to_pci(pcibr_soft, xio_addr, req_size);
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, pci_addr=0x%lx\n",
+ paddr, paddr + req_size - 1, xio_port, pci_addr));
+ return pci_addr;
+ }
+ /* If the caller can use A64, try to
+ * satisfy the request with the 64-bit
+ * direct map. This can fail if the
+ * configuration bits in Device(x)
+ * conflict with our flags.
+ */
+
+ if (flags & PCIIO_DMA_A64) {
+ pci_addr = slotp->bss_d64_base;
+ if (!(flags & PCIBR_VCHAN1))
+ flags |= PCIBR_VCHAN0;
+ if ((pci_addr != PCIBR_D64_BASE_UNSET) &&
+ (flags == slotp->bss_d64_flags)) {
+
+ pci_addr |= xio_addr |
+ ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT);
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, direct64: pci_addr=0x%lx\n",
+ paddr, paddr + req_size - 1, xio_port, pci_addr));
+ return pci_addr;
+ }
+ if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D64_BITS)) {
+ pci_addr = pcibr_flags_to_d64(flags, pcibr_soft);
+ slotp->bss_d64_flags = flags;
+ slotp->bss_d64_base = pci_addr;
+ pci_addr |= xio_addr
+ | ((uint64_t) xio_port << PCI64_ATTR_TARG_SHFT);
+
+ /* If in PCI mode, make sure we have an RRB (or two).
+ */
+ if (IS_PCI(pcibr_soft) &&
+ !(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) {
+ if (flags & PCIBR_VCHAN1)
+ vchan = VCHAN1;
+ have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot][vchan];
+ if (have_rrbs < 2) {
+ if (pci_addr & PCI64_ATTR_PREF)
+ min_rrbs = 2;
+ else
+ min_rrbs = 1;
+ if (have_rrbs < min_rrbs)
+ pcibr_rrb_alloc_more(pcibr_soft, pciio_slot, vchan,
+ min_rrbs - have_rrbs);
+ }
+ }
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, direct64: pci_addr=0x%lx, "
+ "new flags: 0x%x\n", paddr, paddr + req_size - 1,
+ xio_port, pci_addr, (uint64_t) flags));
+ return pci_addr;
+ }
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, Unable to set direct64 Device(x) bits\n",
+ paddr, paddr + req_size - 1, xio_port));
+
+ /* PIC only supports 64-bit direct mapping in PCI-X mode */
+ if (IS_PCIX(pcibr_soft)) {
+ return 0;
+ }
+
+ /* our flags conflict with Device(x). try direct32*/
+ flags = flags & ~(PCIIO_DMA_A64 | PCIBR_VCHAN0);
+ } else {
+ /* BUS in PCI-X mode only supports 64-bit direct mapping */
+ if (IS_PCIX(pcibr_soft)) {
+ return 0;
+ }
+ }
+ /* Try to satisfy the request with the 32-bit direct
+ * map. This can fail if the configuration bits in
+ * Device(x) conflict with our flags, or if the
+ * target address is outside where DIR_OFF points.
+ */
+ {
+ size_t map_size = 1ULL << 31;
+ iopaddr_t xio_base = pcibr_soft->bs_dir_xbase;
+ iopaddr_t offset = xio_addr - xio_base;
+ iopaddr_t endoff = req_size + offset;
+
+ if ((req_size > map_size) ||
+ (xio_addr < xio_base) ||
+ (xio_port != pcibr_soft->bs_dir_xport) ||
+ (endoff > map_size)) {
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, xio region outside direct32 target\n",
+ paddr, paddr + req_size - 1, xio_port));
+ } else {
+ pci_addr = slotp->bss_d32_base;
+ if ((pci_addr != PCIBR_D32_BASE_UNSET) &&
+ (flags == slotp->bss_d32_flags)) {
+
+ pci_addr |= offset;
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx],"
+ " xio_port=0x%x, direct32: pci_addr=0x%lx\n",
+ paddr, paddr + req_size - 1, xio_port, pci_addr));
+
+ return pci_addr;
+ }
+ if (!pcibr_try_set_device(pcibr_soft, pciio_slot, flags, BRIDGE_DEV_D32_BITS)) {
+
+ pci_addr = PCI32_DIRECT_BASE;
+ slotp->bss_d32_flags = flags;
+ slotp->bss_d32_base = pci_addr;
+ pci_addr |= offset;
+
+ /* Make sure we have an RRB (or two).
+ */
+ if (!(pcibr_soft->bs_rrb_fixed & (1 << pciio_slot))) {
+ have_rrbs = pcibr_soft->bs_rrb_valid[pciio_slot][vchan];
+ if (have_rrbs < 2) {
+ if (slotp->bss_device & BRIDGE_DEV_PREF)
+ min_rrbs = 2;
+ else
+ min_rrbs = 1;
+ if (have_rrbs < min_rrbs)
+ pcibr_rrb_alloc_more(pcibr_soft, pciio_slot,
+ vchan, min_rrbs - have_rrbs);
+ }
+ }
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx],"
+ " xio_port=0x%x, direct32: pci_addr=0x%lx, "
+ "new flags: 0x%x\n", paddr, paddr + req_size - 1,
+ xio_port, pci_addr, (uint64_t) flags));
+
+ return pci_addr;
+ }
+ /* our flags conflict with Device(x).
+ */
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, Unable to set direct32 Device(x) bits\n",
+ paddr, paddr + req_size - 1, xio_port));
+ }
+ }
+
+ PCIBR_DEBUG((PCIBR_DEBUG_DMADIR, pconn_vhdl,
+ "pcibr_dmatrans_addr: wanted paddr [0x%lx..0x%lx], "
+ "xio_port=0x%x, No acceptable PCI address found\n",
+ paddr, paddr + req_size - 1, xio_port));
+
+ return 0;
+}
+
+void
+pcibr_dmamap_drain(pcibr_dmamap_t map)
+{
+ xtalk_dmamap_drain(map->bd_xtalk);
+}
+
+void
+pcibr_dmaaddr_drain(vertex_hdl_t pconn_vhdl,
+ paddr_t paddr,
+ size_t bytes)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+
+ xtalk_dmaaddr_drain(xconn_vhdl, paddr, bytes);
+}
+
+/*
+ * Get the starting PCIbus address out of the given DMA map.
+ * This function is intended for close friends of the PCI bridge code,
+ * since it relies on the fact that the starting address of the map is
+ * fixed at allocation time in the current PCI bridge implementation.
+ */
+iopaddr_t
+pcibr_dmamap_pciaddr_get(pcibr_dmamap_t pcibr_dmamap)
+{
+ return pcibr_dmamap->bd_pci_addr;
+}
+
+/* =====================================================================
+ * CONFIGURATION MANAGEMENT
+ */
+/*ARGSUSED */
+void
+pcibr_provider_startup(vertex_hdl_t pcibr)
+{
+}
+
+/*ARGSUSED */
+void
+pcibr_provider_shutdown(vertex_hdl_t pcibr)
+{
+}
+
+int
+pcibr_reset(vertex_hdl_t conn)
+{
+ BUG();
+ return -1;
+}
+
+pciio_endian_t
+pcibr_endian_set(vertex_hdl_t pconn_vhdl,
+ pciio_endian_t device_end,
+ pciio_endian_t desired_end)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ uint64_t devreg;
+ unsigned long s;
+
+ /*
+ * Bridge supports hardware swapping; so we can always
+ * arrange for the caller's desired endianness.
+ */
+
+ s = pcibr_lock(pcibr_soft);
+ devreg = pcibr_soft->bs_slot[pciio_slot].bss_device;
+ if (device_end != desired_end)
+ devreg |= BRIDGE_DEV_SWAP_BITS;
+ else
+ devreg &= ~BRIDGE_DEV_SWAP_BITS;
+
+ /* NOTE- if we ever put SWAP bits
+ * onto the disabled list, we will
+ * have to change the logic here.
+ */
+ if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) {
+ pcireg_device_set(pcibr_soft, pciio_slot, devreg);
+ pcibr_soft->bs_slot[pciio_slot].bss_device = devreg;
+ pcireg_tflush_get(pcibr_soft);
+ }
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DEVREG, pconn_vhdl,
+ "pcibr_endian_set: Device(%d): 0x%x\n",
+ pciio_slot, devreg));
+
+ return desired_end;
+}
+
+/*
+ * Interfaces to allow special (e.g. SGI) drivers to set/clear
+ * Bridge-specific device flags. Many flags are modified through
+ * PCI-generic interfaces; we don't allow them to be directly
+ * manipulated here. Only flags that at this point seem pretty
+ * Bridge-specific can be set through these special interfaces.
+ * We may add more flags as the need arises, or remove flags and
+ * create PCI-generic interfaces as the need arises.
+ *
+ * Returns 0 on failure, 1 on success
+ */
+int
+pcibr_device_flags_set(vertex_hdl_t pconn_vhdl,
+ pcibr_device_flags_t flags)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ uint64_t set = 0;
+ uint64_t clr = 0;
+
+ ASSERT((flags & PCIBR_DEVICE_FLAGS) == flags);
+
+ if (flags & PCIBR_WRITE_GATHER)
+ set |= BRIDGE_DEV_PMU_WRGA_EN;
+ if (flags & PCIBR_NOWRITE_GATHER)
+ clr |= BRIDGE_DEV_PMU_WRGA_EN;
+
+ if (flags & PCIBR_PREFETCH)
+ set |= BRIDGE_DEV_PREF;
+ if (flags & PCIBR_NOPREFETCH)
+ clr |= BRIDGE_DEV_PREF;
+
+ if (flags & PCIBR_PRECISE)
+ set |= BRIDGE_DEV_PRECISE;
+ if (flags & PCIBR_NOPRECISE)
+ clr |= BRIDGE_DEV_PRECISE;
+
+ if (flags & PCIBR_BARRIER)
+ set |= BRIDGE_DEV_BARRIER;
+ if (flags & PCIBR_NOBARRIER)
+ clr |= BRIDGE_DEV_BARRIER;
+
+ if (flags & PCIBR_64BIT)
+ set |= BRIDGE_DEV_DEV_SIZE;
+ if (flags & PCIBR_NO64BIT)
+ clr |= BRIDGE_DEV_DEV_SIZE;
+
+ /* PIC BRINGUP WAR (PV# 878674): Don't allow 64bit PIO accesses */
+ if ((flags & PCIBR_64BIT) && PCIBR_WAR_ENABLED(PV878674, pcibr_soft)) {
+ set &= ~BRIDGE_DEV_DEV_SIZE;
+ }
+
+ if (set || clr) {
+ uint64_t devreg;
+ unsigned long s;
+
+ s = pcibr_lock(pcibr_soft);
+ devreg = pcibr_soft->bs_slot[pciio_slot].bss_device;
+ devreg = (devreg & ~clr) | set;
+ if (pcibr_soft->bs_slot[pciio_slot].bss_device != devreg) {
+ pcireg_device_set(pcibr_soft, pciio_slot, devreg);
+ pcibr_soft->bs_slot[pciio_slot].bss_device = devreg;
+ pcireg_tflush_get(pcibr_soft);
+ }
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DEVREG, pconn_vhdl,
+ "pcibr_device_flags_set: Device(%d): 0x%x\n",
+ pciio_slot, devreg));
+ }
+ return 1;
+}
+
+/*
+ * PIC has 16 RBARs per bus; meaning it can have a total of 16 outstanding
+ * split transactions. If the functions on the bus have requested a total
+ * of 16 or less, then we can give them what they requested (ie. 100%).
+ * Otherwise we have to make sure each function can get at least one buffer
+ * and then divide the rest of the buffers up among the functions as ``A
+ * PERCENTAGE OF WHAT THEY REQUESTED'' (i.e. 0% - 100% of a function's
+ * pcix_type0_status.max_out_split). This percentage does not include the
+ * one RBAR that all functions get by default.
+ */
+int
+pcibr_pcix_rbars_calc(pcibr_soft_t pcibr_soft)
+{
+ /* 'percent_allowed' is the percentage of requested RBARs that functions
+ * are allowed, ***less the 1 RBAR that all functions get by default***
+ */
+ int percent_allowed;
+
+ if (pcibr_soft->bs_pcix_num_funcs) {
+ if (pcibr_soft->bs_pcix_num_funcs > NUM_RBAR) {
+ printk(KERN_WARNING
+ "%s: Must oversubscribe Read Buffer Attribute Registers"
+ " (RBAR). Bus has %d RBARs but %d funcs need them.\n",
+ pcibr_soft->bs_name, NUM_RBAR, pcibr_soft->bs_pcix_num_funcs);
+ percent_allowed = 0;
+ } else {
+ percent_allowed = (((NUM_RBAR-pcibr_soft->bs_pcix_num_funcs)*100) /
+ pcibr_soft->bs_pcix_split_tot);
+
+ /* +1 to percentage to solve rounding errors that occur because
+ * we're not doing fractional math. (ie. ((3 * 66%) / 100) = 1)
+ * but should be "2" if doing true fractional math. NOTE: Since
+ * the greatest number of outstanding transactions a function
+ * can request is 32, this "+1" will always work (i.e. we won't
+ * accidentally oversubscribe the RBARs because of this rounding
+ * of the percentage).
+ */
+ percent_allowed=(percent_allowed > 100) ? 100 : percent_allowed+1;
+ }
+ } else {
+ return -ENODEV;
+ }
+
+ return percent_allowed;
+}
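The percentage computation above is easy to check in isolation. Below is a minimal, hypothetical user-space sketch of the same arithmetic; the `rbar_percent_allowed` name and the `-1` stand-in for `-ENODEV` are assumptions for illustration, since the kernel function operates on `pcibr_soft` state instead:

```c
#include <assert.h>

#define NUM_RBAR 16	/* PIC has 16 RBARs per bus */

/* Hypothetical stand-alone version of the percentage math in
 * pcibr_pcix_rbars_calc(): every function gets one RBAR by default,
 * and the remaining RBARs are granted as a percentage of each
 * function's requested split-transaction total.  The +1 compensates
 * for integer-division rounding loss and cannot oversubscribe,
 * because no function may request more than 32 outstanding splits.
 */
static int rbar_percent_allowed(int num_funcs, int split_tot)
{
	int percent;

	if (num_funcs <= 0)
		return -1;	/* stand-in for -ENODEV */
	if (num_funcs > NUM_RBAR)
		return 0;	/* must oversubscribe: default RBARs only */

	percent = ((NUM_RBAR - num_funcs) * 100) / split_tot;
	return (percent > 100) ? 100 : percent + 1;
}
```

For example, 4 functions requesting 16 splits in total leaves 12 spare RBARs, i.e. 75% of the requested splits, bumped to 76% by the rounding guard.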
+
+/*
+ * pcibr_debug() is used to print pcibr debug messages to the console. A
+ * user enables tracing by setting the following global variables:
+ *
+ * pcibr_debug_mask   - Bitmask of what to trace; see pcibr_private.h
+ * pcibr_debug_module - Module to trace. 'all' means trace all modules
+ * pcibr_debug_widget - Widget to trace. '-1' means trace all widgets
+ * pcibr_debug_slot   - Slot to trace. '-1' means trace all slots
+ *
+ * 'type' is the type of debugging that the current PCIBR_DEBUG macro is
+ * tracing. 'vhdl' (which can be NULL) is the vhdl associated with the
+ * debug statement. If there is a 'vhdl' associated with this debug
+ * statement, it is parsed to obtain the module, widget, and slot. If the
+ * globals above match the PCIBR_DEBUG params, then the debug info in the
+ * parameter 'format' is sent to the console.
+ */
+void
+pcibr_debug(uint32_t type, vertex_hdl_t vhdl, char *format, ...)
+{
+ char hwpath[MAXDEVNAME] = "\0";
+ char copy_of_hwpath[MAXDEVNAME];
+ char *buffer;
+ char *module = "all";
+ short widget = -1;
+ short slot = -1;
+ va_list ap;
+
+ if (pcibr_debug_mask & type) {
+ if (vhdl) {
+ if (!hwgraph_vertex_name_get(vhdl, hwpath, MAXDEVNAME)) {
+ char *cp;
+
+ if (strcmp(module, pcibr_debug_module)) {
+ /* use a copy */
+ (void)strcpy(copy_of_hwpath, hwpath);
+ cp = strstr(copy_of_hwpath, "/" EDGE_LBL_MODULE "/");
+ if (cp) {
+ cp += strlen("/" EDGE_LBL_MODULE "/");
+ module = strsep(&cp, "/");
+ }
+ }
+ if (pcibr_debug_widget != -1) {
+ cp = strstr(hwpath, "/" EDGE_LBL_XTALK "/");
+ if (cp) {
+ cp += strlen("/" EDGE_LBL_XTALK "/");
+ widget = simple_strtoul(cp, NULL, 0);
+ }
+ }
+ if (pcibr_debug_slot != -1) {
+ cp = strstr(hwpath, "/" EDGE_LBL_PCIX_0 "/");
+ if (!cp) {
+ cp = strstr(hwpath, "/" EDGE_LBL_PCIX_1 "/");
+ }
+ if (cp) {
+ cp += strlen("/" EDGE_LBL_PCIX_0 "/");
+ slot = simple_strtoul(cp, NULL, 0);
+ }
+ }
+ }
+ }
+ if ((vhdl == NULL) ||
+ (!strcmp(module, pcibr_debug_module) &&
+ (widget == pcibr_debug_widget) &&
+ (slot == pcibr_debug_slot))) {
+
+ buffer = kmalloc(1024, GFP_KERNEL);
+ if (buffer) {
+ printk("PCIBR_DEBUG<%d>\t: %s :", smp_processor_id(), hwpath);
+ /*
+ * KERN_MSG translates to this 3 line sequence. Since
+ * we have a variable length argument list, we need to
+ * call KERN_MSG this way rather than directly
+ */
+ va_start(ap, format);
+ memset(buffer, 0, 1024);
+ vsnprintf(buffer, 1024, format, ap);
+ va_end(ap);
+ printk("%s", buffer);
+ kfree(buffer);
+ }
+ }
+ }
+}
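The hardware-path parsing in pcibr_debug() can be illustrated with a small, hypothetical user-space sketch. The path layout ("/module/<name>/xtalk/<widget>/pcix_0/<slot>") and the helper name are assumptions for illustration only; the kernel code uses the EDGE_LBL_* constants, strsep() and simple_strtoul() instead:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of pcibr_debug()'s path parsing: pick the
 * module name, widget number, and slot number out of a hardware
 * graph path by searching for the edge labels.  Fields that are
 * not present are left at their "match anything" defaults.
 */
static void parse_hwpath(const char *hwpath, char *module, size_t modlen,
			 int *widget, int *slot)
{
	char copy[256];
	char *cp;
	size_t n;

	strncpy(copy, hwpath, sizeof(copy) - 1);
	copy[sizeof(copy) - 1] = '\0';

	module[0] = '\0';
	*widget = -1;
	*slot = -1;

	cp = strstr(copy, "/module/");
	if (cp) {
		cp += strlen("/module/");
		n = strcspn(cp, "/");	/* module name ends at next '/' */
		if (n >= modlen)
			n = modlen - 1;
		memcpy(module, cp, n);
		module[n] = '\0';
	}
	cp = strstr(copy, "/xtalk/");
	if (cp)
		*widget = (int)strtoul(cp + strlen("/xtalk/"), NULL, 0);
	cp = strstr(copy, "/pcix_0/");
	if (!cp)
		cp = strstr(copy, "/pcix_1/");
	if (cp)	/* both labels have the same length */
		*slot = (int)strtoul(cp + strlen("/pcix_0/"), NULL, 0);
}
```

Like the kernel code, the sketch skips the bus-number distinction beyond trying both pcix_0 and pcix_1 edge labels, which works because the two labels are the same length.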
+
+/*
+ * given a xconn_vhdl and a bus number under that widget, return a
+ * bridge_t pointer.
+ */
+void *
+pcibr_bridge_ptr_get(vertex_hdl_t widget_vhdl, int bus_num)
+{
+ void *bridge;
+
+ bridge = (void *)xtalk_piotrans_addr(widget_vhdl, 0, 0,
+ sizeof(bridge), 0);
+
+ /* PIC ASIC has two bridges (ie. two buses) under a single widget */
+ if (bus_num == 1) {
+ bridge = (void *)((char *)bridge + PIC_BUS1_OFFSET);
+ }
+ return bridge;
+}
+
+
+int
+isIO9(nasid_t nasid)
+{
+ lboard_t *brd = (lboard_t *)KL_CONFIG_INFO(nasid);
+
+ while (brd) {
+ if (brd->brd_flags & LOCAL_MASTER_IO6) {
+ return 1;
+ }
+ if (numionodes == numnodes)
+ brd = KLCF_NEXT_ANY(brd);
+ else
+ brd = KLCF_NEXT(brd);
+ }
+ /* if it's dual ported, check the peer also */
+ nasid = NODEPDA(nasid_to_cnodeid(nasid))->xbow_peer;
+ if (nasid < 0) return 0;
+ brd = (lboard_t *)KL_CONFIG_INFO(nasid);
+ while (brd) {
+ if (brd->brd_flags & LOCAL_MASTER_IO6) {
+ return 1;
+ }
+ if (numionodes == numnodes)
+ brd = KLCF_NEXT_ANY(brd);
+ else
+ brd = KLCF_NEXT(brd);
+
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+
+extern int hubii_check_widget_disabled(nasid_t, int);
+
+
+/* =====================================================================
+ * ERROR HANDLING
+ */
+
+#ifdef DEBUG
+#ifdef ERROR_DEBUG
+#define BRIDGE_PIOERR_TIMEOUT 100 /* Timeout with ERROR_DEBUG defined */
+#else
+#define BRIDGE_PIOERR_TIMEOUT 40 /* Timeout in debug mode */
+#endif
+#else
+#define BRIDGE_PIOERR_TIMEOUT 1 /* Timeout in non-debug mode */
+#endif
+
+#ifdef DEBUG
+#ifdef ERROR_DEBUG
+uint64_t bridge_errors_to_dump = ~BRIDGE_ISR_INT_MSK;
+#else
+uint64_t bridge_errors_to_dump = BRIDGE_ISR_ERROR_DUMP;
+#endif
+#else
+uint64_t bridge_errors_to_dump = BRIDGE_ISR_ERROR_FATAL |
+ BRIDGE_ISR_PCIBUS_PIOERR;
+#endif
+
+int pcibr_pioerr_dump = 1; /* always dump pio errors */
+
+/*
+ * register values
+ * map between numeric values and symbolic values
+ */
+struct reg_values {
+ unsigned long long rv_value;
+ char *rv_name;
+};
+
+/*
+ * register descriptors are used for formatted prints of register values
+ * rd_mask and rd_shift must be defined, other entries may be null
+ */
+struct reg_desc {
+ unsigned long long rd_mask; /* mask to extract field */
+ int rd_shift; /* shift for extracted value, - >>, + << */
+ char *rd_name; /* field name */
+ char *rd_format; /* format to print field */
+ struct reg_values *rd_values; /* symbolic names of values */
+};
+
+/* Crosstalk Packet Types */
+static struct reg_values xtalk_cmd_pactyp[] =
+{
+ {0x0, "RdReq"},
+ {0x1, "RdResp"},
+ {0x2, "WrReqWithResp"},
+ {0x3, "WrResp"},
+ {0x4, "WrReqNoResp"},
+ {0x5, "Reserved(5)"},
+ {0x6, "FetchAndOp"},
+ {0x7, "Reserved(7)"},
+ {0x8, "StoreAndOp"},
+ {0x9, "Reserved(9)"},
+ {0xa, "Reserved(a)"},
+ {0xb, "Reserved(b)"},
+ {0xc, "Reserved(c)"},
+ {0xd, "Reserved(d)"},
+ {0xe, "SpecialReq"},
+ {0xf, "SpecialResp"},
+ {0}
+};
+
+static struct reg_desc xtalk_cmd_bits[] =
+{
+ {WIDGET_DIDN, -28, "DIDN", "%x"},
+ {WIDGET_SIDN, -24, "SIDN", "%x"},
+ {WIDGET_PACTYP, -20, "PACTYP", 0, xtalk_cmd_pactyp},
+ {WIDGET_TNUM, -15, "TNUM", "%x"},
+ {WIDGET_COHERENT, 0, "COHERENT"},
+ {WIDGET_DS, 0, "DS"},
+ {WIDGET_GBR, 0, "GBR"},
+ {WIDGET_VBPM, 0, "VBPM"},
+ {WIDGET_ERROR, 0, "ERROR"},
+ {WIDGET_BARRIER, 0, "BARRIER"},
+ {0}
+};
+
+#define F(s,n) { 1l<<(s),-(s), n }
+
+char *pci_space[] = {"NONE",
+ "ROM",
+ "IO",
+ "",
+ "MEM",
+ "MEM32",
+ "MEM64",
+ "CFG",
+ "WIN0",
+ "WIN1",
+ "WIN2",
+ "WIN3",
+ "WIN4",
+ "WIN5",
+ "",
+ "BAD"};
+
+static char *pcibr_isr_errs[] =
+{
+ "", "", "", "", "", "", "", "",
+ "08: Reserved Bit 08",
+ "09: PCI to Crosstalk read request timeout",
+ "10: PCI retry operation count exhausted.",
+ "11: PCI bus device select timeout",
+ "12: PCI device reported parity error",
+ "13: PCI Address/Cmd parity error ",
+ "14: PCI Bridge detected parity error",
+ "15: PCI abort condition",
+ "16: Reserved Bit 16",
+ "17: LLP Transmitter Retry count wrapped", /* PIC ONLY */
+ "18: LLP Transmitter side required Retry", /* PIC ONLY */
+ "19: LLP Receiver retry count wrapped", /* PIC ONLY */
+ "20: LLP Receiver check bit error", /* PIC ONLY */
+ "21: LLP Receiver sequence number error", /* PIC ONLY */
+ "22: Request packet overflow",
+ "23: Request operation not supported by bridge",
+ "24: Request packet has invalid address for bridge widget",
+ "25: Incoming request xtalk command word error bit set or invalid sideband",
+ "26: Incoming response xtalk command word error bit set or invalid sideband",
+ "27: Framing error, request cmd data size does not match actual",
+ "28: Framing error, response cmd data size does not match actual",
+ "29: Unexpected response arrived",
+ "30: PMU Access Fault",
+ "31: Reserved Bit 31",
+ "32: PCI-X address or attribute cycle parity error",
+ "33: PCI-X data cycle parity error",
+ "34: PCI-X master timeout (ie. master abort)",
+ "35: PCI-X pio retry counter exhausted",
+ "36: PCI-X SERR",
+ "37: PCI-X PERR",
+ "38: PCI-X target abort",
+ "39: PCI-X read request timeout",
+ "40: PCI / PCI-X device requesting arbitration error",
+ "41: internal RAM parity error",
+ "42: PCI-X unexpected completion cycle to master",
+ "43: PCI-X split completion timeout",
+ "44: PCI-X split completion error message",
+ "45: PCI-X split completion message parity error",
+};
+
+/*
+ * print_register() allows formatted printing of bit fields. individual
+ * bit fields are described by a struct reg_desc, multiple bit fields within
+ * a single word can be described by multiple reg_desc structures.
+ * %r outputs a string of the format "<bit field descriptions>"
+ * %R outputs a string of the format "0x%x<bit field descriptions>"
+ *
+ * The fields in a reg_desc are:
+ * unsigned long long rd_mask; An appropriate mask to isolate the bit field
+ * within a word, and'ed with val
+ *
+ * int rd_shift; A shift amount to be done to the isolated
+ * bit field, done before printing the isolated
+ * bit field with rd_format and before searching
+ * for symbolic value names in rd_values
+ *
+ * char *rd_name; If non-null, a bit field name to label any
+ * output from rd_format or searching rd_values.
+ * If neither rd_format nor rd_values is non-null,
+ * rd_name is printed only if the isolated
+ * bit field is non-zero.
+ *
+ * char *rd_format; If non-null, the shifted bit field value
+ * is printed using this format.
+ *
+ * struct reg_values *rd_values; If non-null, a pointer to a table
+ * matching numeric values with symbolic names.
+ * rd_values are searched and the symbolic
+ * value is printed if a match is found, if no
+ * match is found "???" is printed.
+ *
+ */
+
+static void
+print_register(unsigned long long reg, struct reg_desc *addr)
+{
+ register struct reg_desc *rd;
+ register struct reg_values *rv;
+ unsigned long long field;
+ int any;
+
+ printk("<");
+ any = 0;
+ for (rd = addr; rd->rd_mask; rd++) {
+ field = reg & rd->rd_mask;
+ field = (rd->rd_shift > 0) ? field << rd->rd_shift : field >> -rd->rd_shift;
+ if (any && (rd->rd_format || rd->rd_values || (rd->rd_name && field)))
+ printk(",");
+ if (rd->rd_name) {
+ if (rd->rd_format || rd->rd_values || field) {
+ printk("%s", rd->rd_name);
+ any = 1;
+ }
+ if (rd->rd_format || rd->rd_values) {
+ printk("=");
+ any = 1;
+ }
+ }
+ /* You can have any format so long as it is %x */
+ if (rd->rd_format) {
+ printk("%llx", field);
+ any = 1;
+ if (rd->rd_values)
+ printk(":");
+ }
+ if (rd->rd_values) {
+ any = 1;
+ for (rv = rd->rd_values; rv->rv_name; rv++) {
+ if (field == rv->rv_value) {
+ printk("%s", rv->rv_name);
+ break;
+ }
+ }
+ if (rv->rv_name == NULL)
+ printk("???");
+ }
+ }
+ printk(">\n");
+}
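The core of print_register() is the mask-and-shift extraction driven by reg_desc: rd_mask isolates the field and a negative rd_shift means "shift right". A minimal stand-alone sketch of just that step (the field masks below are invented for illustration, not the real WIDGET_* definitions):

```c
#include <assert.h>

/* Hypothetical reduced form of struct reg_desc: just the mask that
 * isolates a bit field and the shift applied to it (negative means
 * shift right, as in print_register() above).
 */
struct field_desc {
	unsigned long long mask;
	int shift;
};

/* Isolate and normalize one bit field, exactly as the loop in
 * print_register() does before formatting or name lookup. */
static unsigned long long extract_field(unsigned long long reg,
					const struct field_desc *fd)
{
	unsigned long long field = reg & fd->mask;

	return (fd->shift > 0) ? field << fd->shift : field >> -fd->shift;
}
```

With a 4-bit field at bits 23:20 (shift -28 for one at bits 31:28), a register value of 0x00400000 yields field value 0x4, which the value table would then map to a symbolic name such as "WrReqNoResp".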
+
+
+/*
+ * display memory directory state
+ */
+static void
+pcibr_show_dir_state(paddr_t paddr, char *prefix)
+{
+#ifdef PCIBR_LATER
+ int state;
+ uint64_t vec_ptr;
+ hubreg_t elo;
+ extern char *dir_state_str[];
+ extern void get_dir_ent(paddr_t, int *, uint64_t *, hubreg_t *);
+
+ get_dir_ent(paddr, &state, &vec_ptr, &elo);
+
+ printk("%saddr 0x%lx: state 0x%x owner 0x%lx (%s)\n",
+ prefix, (uint64_t)paddr, state, (uint64_t)vec_ptr,
+ dir_state_str[state]);
+#endif /* PCIBR_LATER */
+}
+
+
+void
+print_bridge_errcmd(pcibr_soft_t pcibr_soft, uint32_t cmdword, char *errtype)
+{
+ printk(
+ "\t Bridge %sError Command Word Register ", errtype);
+ print_register(cmdword, xtalk_cmd_bits);
+}
+
+
+/*
+ * Dump relevant error information for Bridge error interrupts.
+ */
+/*ARGSUSED */
+void
+pcibr_error_dump(pcibr_soft_t pcibr_soft)
+{
+ uint64_t int_status;
+ uint64_t mult_int;
+ uint64_t bit;
+ int i;
+
+ int_status = (pcireg_intr_status_get(pcibr_soft) & ~BRIDGE_ISR_INT_MSK);
+
+ if (!int_status) {
+ /* No error bits set */
+ return;
+ }
+
+ /* Check if dumping the same error information multiple times */
+ if ( pcibr_soft->bs_errinfo.bserr_intstat == int_status )
+ return;
+ pcibr_soft->bs_errinfo.bserr_intstat = int_status;
+
+ printk(KERN_ALERT "PCI BRIDGE ERROR: int_status is 0x%lx for %s\n"
+ " Dumping relevant %s registers for each bit set...\n",
+ int_status, pcibr_soft->bs_name,
+ "PIC");
+
+ for (i = PCIBR_ISR_ERR_START; i < 64; i++) {
+ bit = 1ull << i;
+
+ /* A number of int_status bits are only valid for PIC's bus0 */
+ if ((pcibr_soft->bs_busnum != 0) &&
+ ((bit == BRIDGE_ISR_UNSUPPORTED_XOP) ||
+ (bit == BRIDGE_ISR_LLP_REC_SNERR) ||
+ (bit == BRIDGE_ISR_LLP_REC_CBERR) ||
+ (bit == BRIDGE_ISR_LLP_RCTY) ||
+ (bit == BRIDGE_ISR_LLP_TX_RETRY) ||
+ (bit == BRIDGE_ISR_LLP_TCTY))) {
+ continue;
+ }
+
+ if (int_status & bit) {
+ printk("\t%s\n", pcibr_isr_errs[i]);
+
+ switch (bit) {
+
+ case PIC_ISR_INT_RAM_PERR: /* bit41 INT_RAM_PERR */
+ /* XXX: should breakdown meaning of bits in reg */
+ printk("\t Internal RAM Parity Error: 0x%lx\n",
+ pcireg_parity_err_get(pcibr_soft));
+ break;
+
+ case PIC_ISR_PCIX_ARB_ERR: /* bit40 PCI_X_ARB_ERR */
+ /* XXX: should breakdown meaning of bits in reg */
+ printk("\t Arbitration Reg: 0x%lx\n",
+ pcireg_arbitration_get(pcibr_soft));
+ break;
+
+ case PIC_ISR_PCIX_REQ_TOUT: /* bit39 PCI_X_REQ_TOUT */
+ /* XXX: should breakdown meaning of attribute bit */
+ printk(
+ "\t PCI-X DMA Request Error Address Reg: 0x%lx\n"
+ "\t PCI-X DMA Request Error Attribute Reg: 0x%lx\n",
+ pcireg_pcix_req_err_addr_get(pcibr_soft),
+ pcireg_pcix_req_err_attr_get(pcibr_soft));
+ break;
+
+ case PIC_ISR_PCIX_SPLIT_MSG_PE: /* bit45 PCI_X_SPLIT_MES_PE */
+ case PIC_ISR_PCIX_SPLIT_EMSG: /* bit44 PCI_X_SPLIT_EMESS */
+ case PIC_ISR_PCIX_SPLIT_TO: /* bit43 PCI_X_SPLIT_TO */
+ /* XXX: should breakdown meaning of attribute bit */
+ printk(
+ "\t PCI-X Split Request Address Reg: 0x%lx\n"
+ "\t PCI-X Split Request Attribute Reg: 0x%lx\n",
+ pcireg_pcix_pio_split_addr_get(pcibr_soft),
+ pcireg_pcix_pio_split_attr_get(pcibr_soft));
+ /* FALL THRU */
+
+ case PIC_ISR_PCIX_UNEX_COMP: /* bit42 PCI_X_UNEX_COMP */
+ case PIC_ISR_PCIX_TABORT: /* bit38 PCI_X_TABORT */
+ case PIC_ISR_PCIX_PERR: /* bit37 PCI_X_PERR */
+ case PIC_ISR_PCIX_SERR: /* bit36 PCI_X_SERR */
+ case PIC_ISR_PCIX_MRETRY: /* bit35 PCI_X_MRETRY */
+ case PIC_ISR_PCIX_MTOUT: /* bit34 PCI_X_MTOUT */
+ case PIC_ISR_PCIX_DA_PARITY: /* bit33 PCI_X_DA_PARITY */
+ case PIC_ISR_PCIX_AD_PARITY: /* bit32 PCI_X_AD_PARITY */
+ /* XXX: should breakdown meaning of attribute bit */
+ printk(
+ "\t PCI-X Bus Error Address Reg: 0x%lx\n"
+ "\t PCI-X Bus Error Attribute Reg: 0x%lx\n"
+ "\t PCI-X Bus Error Data Reg: 0x%lx\n",
+ pcireg_pcix_bus_err_addr_get(pcibr_soft),
+ pcireg_pcix_bus_err_attr_get(pcibr_soft),
+ pcireg_pcix_bus_err_data_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_PAGE_FAULT: /* bit30 PMU_PAGE_FAULT */
+ printk("\t Map Fault Address Reg: 0x%lx\n",
+ pcireg_map_fault_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_UNEXP_RESP: /* bit29 UNEXPECTED_RESP */
+ print_bridge_errcmd(pcibr_soft,
+ pcireg_linkside_err_get(pcibr_soft), "Aux ");
+
+ /* PIC in PCI-X mode, dump the PCIX DMA Request registers */
+ if (IS_PCIX(pcibr_soft)) {
+ /* XXX: should breakdown meaning of attr bit */
+ printk(
+ "\t PCI-X DMA Request Error Addr Reg: 0x%lx\n"
+ "\t PCI-X DMA Request Error Attr Reg: 0x%lx\n",
+ pcireg_pcix_req_err_addr_get(pcibr_soft),
+ pcireg_pcix_req_err_attr_get(pcibr_soft));
+ }
+ break;
+
+ case BRIDGE_ISR_BAD_XRESP_PKT: /* bit28 BAD_RESP_PACKET */
+ case BRIDGE_ISR_RESP_XTLK_ERR: /* bit26 RESP_XTALK_ERROR */
+ print_bridge_errcmd(pcibr_soft,
+ pcireg_linkside_err_get(pcibr_soft), "Aux ");
+
+ /* In PCI-X mode, the DMA Request Error registers are valid;
+ * in PCI mode, the Response Buffer Address registers are valid.
+ */
+ if (IS_PCIX(pcibr_soft)) {
+ /* XXX: should breakdown meaning of attribute bit */
+ printk(
+ "\t PCI-X DMA Request Error Addr Reg: 0x%lx\n"
+ "\t PCI-X DMA Request Error Attribute Reg: 0x%lx\n",
+ pcireg_pcix_req_err_addr_get(pcibr_soft),
+ pcireg_pcix_req_err_attr_get(pcibr_soft));
+ } else {
+ printk(
+ "\t Bridge Response Buf Error Addr Reg: 0x%lx\n"
+ "\t dev-num %d buff-num %d addr 0x%lx\n",
+ pcireg_resp_err_get(pcibr_soft),
+ (int)pcireg_resp_err_dev_get(pcibr_soft),
+ (int)pcireg_resp_err_buf_get(pcibr_soft),
+ pcireg_resp_err_addr_get(pcibr_soft));
+ if (bit == BRIDGE_ISR_RESP_XTLK_ERR) {
+ /* display memory directory associated with cacheline */
+ pcibr_show_dir_state(
+ pcireg_resp_err_get(pcibr_soft), "\t ");
+ }
+ }
+ break;
+
+ case BRIDGE_ISR_BAD_XREQ_PKT: /* bit27 BAD_XREQ_PACKET */
+ case BRIDGE_ISR_REQ_XTLK_ERR: /* bit25 REQ_XTALK_ERROR */
+ case BRIDGE_ISR_INVLD_ADDR: /* bit24 INVALID_ADDRESS */
+ print_bridge_errcmd(pcibr_soft,
+ pcireg_cmdword_err_get(pcibr_soft), "");
+ printk(
+ "\t Bridge Error Address Register: 0x%lx\n"
+ "\t Bridge Error Address: 0x%lx\n",
+ pcireg_bus_err_get(pcibr_soft),
+ pcireg_bus_err_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_UNSUPPORTED_XOP: /* bit23 UNSUPPORTED_XOP */
+ print_bridge_errcmd(pcibr_soft,
+ pcireg_linkside_err_get(pcibr_soft), "Aux ");
+ printk("\t Address Holding Link Side Error Reg: 0x%lx\n",
+ pcireg_linkside_err_addr_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_XREQ_FIFO_OFLOW: /* bit22 XREQ_FIFO_OFLOW */
+ print_bridge_errcmd(pcibr_soft,
+ pcireg_linkside_err_get(pcibr_soft), "Aux ");
+ printk("\t Address Holding Link Side Error Reg: 0x%lx\n",
+ pcireg_linkside_err_addr_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_PCI_ABORT: /* bit15 PCI_ABORT */
+ case BRIDGE_ISR_PCI_PARITY: /* bit14 PCI_PARITY */
+ case BRIDGE_ISR_PCI_SERR: /* bit13 PCI_SERR */
+ case BRIDGE_ISR_PCI_PERR: /* bit12 PCI_PERR */
+ case BRIDGE_ISR_PCI_MST_TIMEOUT: /* bit11 PCI_MASTER_TOUT */
+ case BRIDGE_ISR_PCI_RETRY_CNT: /* bit10 PCI_RETRY_CNT */
+ printk("\t PCI Error Address Register: 0x%lx\n"
+ "\t PCI Error Address: 0x%lx\n",
+ pcireg_pci_bus_addr_get(pcibr_soft),
+ pcireg_pci_bus_addr_addr_get(pcibr_soft));
+ break;
+
+ case BRIDGE_ISR_XREAD_REQ_TIMEOUT: /* bit09 XREAD_REQ_TOUT */
+ printk("\t Bridge Response Buf Error Addr Reg: 0x%lx\n"
+ "\t dev-num %d buff-num %d addr 0x%lx\n",
+ pcireg_resp_err_get(pcibr_soft),
+ (int)pcireg_resp_err_dev_get(pcibr_soft),
+ (int)pcireg_resp_err_buf_get(pcibr_soft),
+ pcireg_resp_err_addr_get(pcibr_soft));
+ break;
+ }
+ }
+ }
+
+ mult_int = pcireg_intr_multiple_get(pcibr_soft);
+
+ if (mult_int & ~BRIDGE_ISR_INT_MSK) {
+ printk(" %s Multiple Interrupt Register is 0x%lx\n",
+ pcibr_soft->bs_asic_name, mult_int);
+ for (i = PCIBR_ISR_ERR_START; i < 64; i++) {
+ if (mult_int & (1ull << i))
+ printk( "\t%s\n", pcibr_isr_errs[i]);
+ }
+ }
+}
+
+/* pcibr_pioerr_check():
+ * Check to see if this pcibr has a PCI PIO
+ * TIMEOUT error; if so, bump the timeout-count
+ * on any piomaps that could cover the address.
+ */
+static void
+pcibr_pioerr_check(pcibr_soft_t soft)
+{
+ uint64_t int_status;
+ iopaddr_t pci_addr;
+ pciio_slot_t slot;
+ pcibr_piomap_t map;
+ iopaddr_t base;
+ size_t size;
+ unsigned win;
+ int func;
+
+ int_status = pcireg_intr_status_get(soft);
+
+ if (int_status & BRIDGE_ISR_PCIBUS_PIOERR) {
+ pci_addr = pcireg_pci_bus_addr_get(soft);
+
+ slot = PCIBR_NUM_SLOTS(soft);
+ while (slot-- > 0) {
+ int nfunc = soft->bs_slot[slot].bss_ninfo;
+ pcibr_info_h pcibr_infoh = soft->bs_slot[slot].bss_infos;
+
+ for (func = 0; func < nfunc; func++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[func];
+
+ if (!pcibr_info)
+ continue;
+
+ for (map = pcibr_info->f_piomap;
+ map != NULL; map = map->bp_next) {
+ base = map->bp_pciaddr;
+ size = map->bp_mapsz;
+ win = map->bp_space - PCIIO_SPACE_WIN(0);
+ if (win < 6)
+ base += soft->bs_slot[slot].bss_window[win].bssw_base;
+ else if (map->bp_space == PCIIO_SPACE_ROM)
+ base += pcibr_info->f_rbase;
+ if ((pci_addr >= base) && (pci_addr < (base + size)))
+ atomic_inc(&map->bp_toc);
+ }
+ }
+ }
+ }
+}
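The covering-map check in pcibr_pioerr_check() boils down to folding the window or ROM base into each map's PCI base and then testing a half-open interval. A minimal standalone sketch of that matching logic (toy types and names, not the driver's):

```c
#include <stdint.h>

/* Toy piomap: a PCI address range plus a timeout counter (invented names). */
struct toy_piomap {
    uint64_t base;          /* absolute PCI base after window/ROM adjustment */
    uint64_t size;          /* size of the mapped range in bytes */
    int      timeout_count; /* stands in for the driver's bp_toc */
};

/* Bump the timeout count on every map whose half-open range
 * [base, base + size) covers the faulting PCI address; return
 * how many maps matched. */
int toy_pioerr_check(struct toy_piomap *maps, int nmaps, uint64_t pci_addr)
{
    int hits = 0;
    for (int i = 0; i < nmaps; i++) {
        if (pci_addr >= maps[i].base &&
            pci_addr < maps[i].base + maps[i].size) {
            maps[i].timeout_count++;
            hits++;
        }
    }
    return hits;
}
```

As in the driver, every matching map gets its counter bumped, and the half-open interval keeps abutting maps from double-counting a boundary address.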
+
+/*
+ * PCI Bridge Error interrupt handler.
+ * This gets invoked whenever a PCI bridge sends an error interrupt.
+ * Primarily this serves two purposes:
+ * - If an error can be handled (typically a PIO read/write
+ * error), we try to do it silently.
+ * - If an error cannot be handled, we die violently.
+ * Interrupt due to PIO errors:
+ * - The bridge sends an interrupt whenever a PCI operation
+ * done by the bridge as the master fails. The operation could
+ * be either a PIO read or a PIO write.
+ * A PIO read error also triggers a bus error, so we primarily
+ * ignore this interrupt in that context.
+ * For PIO write errors, this interrupt is the only indication,
+ * and we have to handle the error with the info from here.
+ *
+ * So, at interrupt time there is no way to distinguish whether
+ * the error was due to a read or a write!
+ */
+
+irqreturn_t
+pcibr_error_intr_handler(int irq, void *arg, struct pt_regs *ep)
+{
+ pcibr_soft_t pcibr_soft;
+ void *bridge;
+ uint64_t int_status;
+ uint64_t err_status;
+ int i;
+ uint64_t disable_errintr_mask = 0;
+ nasid_t nasid;
+
+
+#if PCIBR_SOFT_LIST
+ /*
+ * Defensive code for linked pcibr_soft structs
+ */
+ {
+ extern pcibr_list_p pcibr_list;
+ pcibr_list_p entry;
+
+ entry = pcibr_list;
+ while (1) {
+ if (entry == NULL) {
+ printk("pcibr_error_intr_handler: (0x%lx) is not a pcibr_soft!",
+ (uint64_t)arg);
+ return IRQ_NONE;
+ }
+ if ((intr_arg_t) entry->bl_soft == arg)
+ break;
+ entry = entry->bl_next;
+ }
+ }
+#endif /* PCIBR_SOFT_LIST */
+ pcibr_soft = (pcibr_soft_t) arg;
+ bridge = pcibr_soft->bs_base;
+
+ /*
+ * pcibr_error_intr_handler gets invoked whenever bridge encounters
+ * an error situation, and the interrupt for that error is enabled.
+ * This routine decides if the error is fatal or not, and takes
+ * action accordingly.
+ *
+ * In the case of PIO read/write timeouts, there is no way
+ * to know if it was a read or write request that timed out.
+ * If the error was due to a "read", a bus error will also occur
+ * and the bus error handling code takes care of it.
+ * If the error is due to a "write", the error is currently logged
+ * by this routine. For SN1 and SN0, if fire-and-forget mode is
+ * disabled, a write error response xtalk packet will be sent to
+ * the II, which will cause an II error interrupt. No write error
+ * recovery actions of any kind currently take place at the pcibr
+ * layer! (e.g., no panic on unrecovered write error)
+ *
+ * Prior to reading the Bridge int_status register we need to ensure
+ * that there are no error bits set in the lower layers (hubii)
+ * that have disabled PIO access to the widget. If so, there is nothing
+ * we can do until the bits clear, so we set up a timeout and try again
+ * later.
+ */
+
+ nasid = NASID_GET(bridge);
+ if (hubii_check_widget_disabled(nasid, pcibr_soft->bs_xid)) {
+ DECLARE_WAIT_QUEUE_HEAD(wq);
+ sleep_on_timeout(&wq, BRIDGE_PIOERR_TIMEOUT*HZ ); /* sleep */
+ pcibr_soft->bs_errinfo.bserr_toutcnt++;
+ /* Let's go recursive */
+ return(pcibr_error_intr_handler(irq, arg, ep));
+ }
+
+ int_status = pcireg_intr_status_get(pcibr_soft);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ERROR, pcibr_soft->bs_conn,
+ "pcibr_error_intr_handler: int_status=0x%lx\n", int_status));
+
+ /* int_status is which bits we have to clear;
+ * err_status is the bits we haven't handled yet.
+ */
+ err_status = int_status;
+
+ if (!(int_status & ~BRIDGE_ISR_INT_MSK)) {
+ /*
+ * No error bit is set.
+ */
+ return IRQ_HANDLED;
+ }
+ /*
+ * If we have a PCIBUS_PIOERR, bump the timeout counts on any
+ * piomaps that could cover the failing address.
+ */
+ if (int_status & BRIDGE_ISR_PCIBUS_PIOERR) {
+ pcibr_pioerr_check(pcibr_soft);
+ }
+
+ if (err_status) {
+ struct bs_errintr_stat_s *bs_estat ;
+ bs_estat = &pcibr_soft->bs_errintr_stat[PCIBR_ISR_ERR_START];
+
+ for (i = PCIBR_ISR_ERR_START; i < 64; i++, bs_estat++) {
+ if (err_status & (1ull << i)) {
+ uint32_t errrate = 0;
+ uint32_t errcount = 0;
+ uint32_t errinterval = 0, current_tick = 0;
+ int llp_tx_retry_errors = 0;
+ int is_llp_tx_retry_intr = 0;
+
+ bs_estat->bs_errcount_total++;
+
+ current_tick = jiffies;
+ errinterval = (current_tick - bs_estat->bs_lasterr_timestamp);
+ errcount = (bs_estat->bs_errcount_total -
+ bs_estat->bs_lasterr_snapshot);
+
+ /* LLP interrupt errors are only valid on BUS0 of the PIC */
+ if (pcibr_soft->bs_busnum == 0)
+ is_llp_tx_retry_intr = (BRIDGE_ISR_LLP_TX_RETRY==(1ull << i));
+
+ /* Check for the divide by zero condition while
+ * calculating the error rates.
+ */
+
+ if (errinterval) {
+ errrate = errcount / errinterval;
+ /* If we are able to calculate an error rate
+ * for an LLP transmitter retry interrupt, check
+ * whether the error rate is nonzero and we have seen
+ * a certain minimum number of errors.
+ *
+ * NOTE : errcount is compared to
+ * PCIBR_ERRTIME_THRESHOLD to make sure that we are not
+ * seeing cases like x error interrupts per y ticks for
+ * very low x, y (x > y), which could result in a
+ * rate > 100/tick.
+ */
+ if (is_llp_tx_retry_intr &&
+ errrate &&
+ (errcount >= PCIBR_ERRTIME_THRESHOLD)) {
+ llp_tx_retry_errors = 1;
+ }
+ } else {
+ errrate = 0;
+ /* Since we are not able to calculate the
+ * error rate, check whether we exceeded a certain
+ * minimum number of errors for LLP transmitter
+ * retries. Note that this can only happen
+ * within the first tick after the last snapshot.
+ */
+ if (is_llp_tx_retry_intr &&
+ (errcount >= PCIBR_ERRINTR_DISABLE_LEVEL)) {
+ llp_tx_retry_errors = 1;
+ }
+ }
+
+ /*
+ * If a non-zero error rate (which is equivalent to
+ * at least 100 errors/tick) for the LLP transmitter
+ * retry interrupt was seen, check if we should print
+ * a warning message.
+ */
+
+ if (llp_tx_retry_errors) {
+ static uint32_t last_printed_rate;
+
+ if (errrate > last_printed_rate) {
+ last_printed_rate = errrate;
+ /* Print the warning only if the error rate
+ * for the transmitter retry interrupt
+ * exceeded the previously printed rate.
+ */
+ printk(KERN_WARNING
+ "%s: %s, Excessive error interrupts : %d/tick\n",
+ pcibr_soft->bs_name,
+ pcibr_isr_errs[i],
+ errrate);
+
+ }
+ /*
+ * Update snapshot, and time
+ */
+ bs_estat->bs_lasterr_timestamp = current_tick;
+ bs_estat->bs_lasterr_snapshot =
+ bs_estat->bs_errcount_total;
+
+ }
+ /*
+ * If the error rate is high enough, print the error rate.
+ */
+ if (errinterval > PCIBR_ERRTIME_THRESHOLD) {
+
+ if (errrate > PCIBR_ERRRATE_THRESHOLD) {
+ printk(KERN_NOTICE "%s: %s, Error rate %d/tick",
+ pcibr_soft->bs_name,
+ pcibr_isr_errs[i],
+ errrate);
+ /*
+ * Update snapshot, and time
+ */
+ bs_estat->bs_lasterr_timestamp = current_tick;
+ bs_estat->bs_lasterr_snapshot =
+ bs_estat->bs_errcount_total;
+ }
+ }
+ /* PIC BRINGUP WAR (PV# 856155):
+ * Don't disable PCI_X_ARB_ERR interrupts; we need the
+ * interrupt in order to clear the DEV_BROKE bits in the
+ * b_arb register to re-enable the device.
+ */
+ if (!(err_status & PIC_ISR_PCIX_ARB_ERR) &&
+ PCIBR_WAR_ENABLED(PV856155, pcibr_soft)) {
+
+ if (bs_estat->bs_errcount_total > PCIBR_ERRINTR_DISABLE_LEVEL) {
+ /*
+ * We have seen a fairly large number of errors of
+ * this type. Let's disable the interrupt. But flash
+ * a message about the interrupt being disabled.
+ */
+ printk(KERN_NOTICE
+ "%s Disabling error interrupt type %s. Error count %d",
+ pcibr_soft->bs_name,
+ pcibr_isr_errs[i],
+ bs_estat->bs_errcount_total);
+ disable_errintr_mask |= (1ull << i);
+ }
+ } /* PIC: WAR for PV 856155 end-of-if */
+ }
+ }
+ }
+
+ if (disable_errintr_mask) {
+ unsigned long s;
+ /*
+ * Disable some high-frequency errors, as they
+ * could eat up too much CPU time.
+ */
+ s = pcibr_lock(pcibr_soft);
+ pcireg_intr_enable_bit_clr(pcibr_soft, disable_errintr_mask);
+ pcibr_unlock(pcibr_soft, s);
+ }
+ /*
+ * If we leave the PROM cacheable, T5 might
+ * try to do a cache line sized writeback to it,
+ * which will cause a BRIDGE_ISR_INVLD_ADDR.
+ */
+ if ((err_status & BRIDGE_ISR_INVLD_ADDR) &&
+ (0x00C00000 == (pcireg_bus_err_get(pcibr_soft) & 0xFFFFFFFFFFC00000)) &&
+ (0x00402000 == (0x00F07F00 & pcireg_cmdword_err_get(pcibr_soft)))) {
+ err_status &= ~BRIDGE_ISR_INVLD_ADDR;
+ }
+ /*
+ * pcibr_pioerr_dump is a systune that may be used to suppress
+ * printing bridge registers for interrupts generated by PIO errors.
+ * Some customers do early probes and expect a lot of failed
+ * PIOs.
+ */
+ if (!pcibr_pioerr_dump) {
+ bridge_errors_to_dump &= ~BRIDGE_ISR_PCIBUS_PIOERR;
+ } else {
+ bridge_errors_to_dump |= BRIDGE_ISR_PCIBUS_PIOERR;
+ }
+
+ /* Dump/Log Bridge error interrupt info */
+ if (err_status & bridge_errors_to_dump) {
+ printk("BRIDGE ERR_STATUS 0x%lx\n", err_status);
+ pcibr_error_dump(pcibr_soft);
+ }
+
+ /* PIC BRINGUP WAR (PV# 867308):
+ * Make BRIDGE_ISR_LLP_REC_SNERR & BRIDGE_ISR_LLP_REC_CBERR fatal errors
+ * so we know we've hit the problem defined in PV 867308 that we believe
+ * has only been seen in simulation
+ */
+ if (PCIBR_WAR_ENABLED(PV867308, pcibr_soft) &&
+ (err_status & (BRIDGE_ISR_LLP_REC_SNERR | BRIDGE_ISR_LLP_REC_CBERR))) {
+ printk("BRIDGE ERR_STATUS 0x%lx\n", err_status);
+ pcibr_error_dump(pcibr_soft);
+ /* machine_error_dump(""); */
+ panic("PCI Bridge Error interrupt killed the system");
+ }
+
+ if (err_status & BRIDGE_ISR_ERROR_FATAL) {
+ panic("PCI Bridge Error interrupt killed the system");
+ /*NOTREACHED */
+ }
+
+
+ /*
+ * We can't return without re-enabling the interrupt, since
+ * it would cause problems for devices like IOC3 (lost
+ * interrupts?). So just clean up the interrupt, and
+ * use the saved values later.
+ *
+ * PIC doesn't require groups of interrupts to be cleared...
+ */
+ pcireg_intr_reset_set(pcibr_soft, (int_status | BRIDGE_IRR_MULTI_CLR));
+
+ /* PIC BRINGUP WAR (PV# 856155):
+ * On a PCI_X_ARB_ERR error interrupt clear the DEV_BROKE bits from
+ * the b_arb register to re-enable the device.
+ */
+ if ((err_status & PIC_ISR_PCIX_ARB_ERR) &&
+ PCIBR_WAR_ENABLED(PV856155, pcibr_soft)) {
+ pcireg_arbitration_bit_set(pcibr_soft, (0xf << 20));
+ }
+
+ /* Zero out bserr_intstat field */
+ pcibr_soft->bs_errinfo.bserr_intstat = 0;
+ return IRQ_HANDLED;
+}
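The throttling logic in the handler above derives an errors-per-tick rate from jiffies deltas and guards the divide-by-zero case where all errors land within a single tick. A standalone sketch of just that computation (function and parameter names invented):

```c
#include <stdint.h>

/* Compute an error rate in errors/tick from the running total, the
 * count at the last snapshot, and the ticks elapsed since that
 * snapshot. When the elapsed time is zero (all errors arrived within
 * one tick), report a rate of zero, as the handler above does, and
 * let the caller fall back to an absolute-count threshold instead. */
uint32_t toy_error_rate(uint32_t total, uint32_t snapshot,
                        uint32_t elapsed_ticks)
{
    uint32_t errcount = total - snapshot;

    return elapsed_ticks ? errcount / elapsed_ticks : 0;
}
```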
+
+/*
+ * pcibr_addr_toslot
+ * Given 'pciaddr', find out which slot this address is
+ * allocated to, and return the slot number.
+ * While we have the info handy, construct the
+ * function number, space code and offset as well.
+ *
+ * NOTE: if this routine is called, we don't know whether
+ * the address is in CFG, MEM, or I/O space. We have to guess.
+ * This will be the case on PIO stores, where the only way
+ * we have of getting the address is to check the Bridge, which
+ * stores the PCI address but not the space and not the xtalk
+ * address (from which we could get it).
+ */
+static int
+pcibr_addr_toslot(pcibr_soft_t pcibr_soft,
+ iopaddr_t pciaddr,
+ pciio_space_t *spacep,
+ iopaddr_t *offsetp,
+ pciio_function_t *funcp)
+{
+ int s, f = 0, w;
+ iopaddr_t base;
+ size_t size;
+ pciio_piospace_t piosp;
+
+ /*
+ * Check if the address is in config space
+ */
+
+ if ((pciaddr >= BRIDGE_CONFIG_BASE) && (pciaddr < BRIDGE_CONFIG_END)) {
+
+ if (pciaddr >= BRIDGE_CONFIG1_BASE)
+ pciaddr -= BRIDGE_CONFIG1_BASE;
+ else
+ pciaddr -= BRIDGE_CONFIG_BASE;
+
+ s = pciaddr / BRIDGE_CONFIG_SLOT_SIZE;
+ pciaddr %= BRIDGE_CONFIG_SLOT_SIZE;
+
+ if (funcp) {
+ f = pciaddr / 0x100;
+ pciaddr %= 0x100;
+ }
+ if (spacep)
+ *spacep = PCIIO_SPACE_CFG;
+ if (offsetp)
+ *offsetp = pciaddr;
+ if (funcp)
+ *funcp = f;
+
+ return s;
+ }
+ for (s = pcibr_soft->bs_min_slot; s < PCIBR_NUM_SLOTS(pcibr_soft); ++s) {
+ int nf = pcibr_soft->bs_slot[s].bss_ninfo;
+ pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos;
+
+ for (f = 0; f < nf; f++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[f];
+
+ if (!pcibr_info)
+ continue;
+ for (w = 0; w < 6; w++) {
+ if (pcibr_info->f_window[w].w_space
+ == PCIIO_SPACE_NONE) {
+ continue;
+ }
+ base = pcibr_info->f_window[w].w_base;
+ size = pcibr_info->f_window[w].w_size;
+
+ if ((pciaddr >= base) && (pciaddr < (base + size))) {
+ if (spacep)
+ *spacep = PCIIO_SPACE_WIN(w);
+ if (offsetp)
+ *offsetp = pciaddr - base;
+ if (funcp)
+ *funcp = f;
+ return s;
+ } /* endif match */
+ } /* next window */
+ } /* next func */
+ } /* next slot */
+
+ /*
+ * Check if the address was allocated as part of the
+ * pcibr_piospace_alloc calls.
+ */
+ for (s = pcibr_soft->bs_min_slot; s < PCIBR_NUM_SLOTS(pcibr_soft); ++s) {
+ int nf = pcibr_soft->bs_slot[s].bss_ninfo;
+ pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[s].bss_infos;
+
+ for (f = 0; f < nf; f++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[f];
+
+ if (!pcibr_info)
+ continue;
+ piosp = pcibr_info->f_piospace;
+ while (piosp) {
+ if ((piosp->start <= pciaddr) &&
+ ((piosp->count + piosp->start) > pciaddr)) {
+ if (spacep)
+ *spacep = piosp->space;
+ if (offsetp)
+ *offsetp = pciaddr - piosp->start;
+ return s;
+ } /* endif match */
+ piosp = piosp->next;
+ } /* next piosp */
+ } /* next func */
+ } /* next slot */
+
+ /*
+ * Some other random address on the PCI bus ...
+ * we have no way of knowing whether this was
+ * a MEM or I/O access; so, for now, we just
+ * assume that the low 1G is MEM, the next
+ * 3G is I/O, and anything above the 4G limit
+ * is obviously MEM.
+ */
+
+ if (spacep)
+ *spacep = ((pciaddr < (1ul << 30)) ? PCIIO_SPACE_MEM :
+ (pciaddr < (4ul << 30)) ? PCIIO_SPACE_IO :
+ PCIIO_SPACE_MEM);
+ if (offsetp)
+ *offsetp = pciaddr;
+
+ return PCIIO_SLOT_NONE;
+
+}
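The config-space branch of pcibr_addr_toslot() decodes slot, function, and register offset with two divisions: each slot owns a fixed-size config window, and each function owns 0x100 bytes within it. A self-contained sketch of the same arithmetic (the per-slot window size here is an assumed stand-in for BRIDGE_CONFIG_SLOT_SIZE):

```c
#include <stdint.h>

#define TOY_CFG_SLOT_SIZE 0x1000 /* assumed per-slot config window size */

/* Decode a type-0 config-space offset into slot, function, and
 * register offset, mirroring the arithmetic in pcibr_addr_toslot():
 * divide by the per-slot window size to get the slot, then by 0x100
 * (one function's worth of config registers) to get the function. */
void toy_cfg_decode(uint64_t cfg_off, int *slot, int *func, uint64_t *reg)
{
    *slot = (int)(cfg_off / TOY_CFG_SLOT_SIZE);
    cfg_off %= TOY_CFG_SLOT_SIZE;
    *func = (int)(cfg_off / 0x100);
    *reg  = cfg_off % 0x100;
}
```

The 0x100-bytes-per-function step matches conventional PCI config layout (256 bytes of configuration space per function).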
+
+void
+pcibr_error_cleanup(pcibr_soft_t pcibr_soft, int error_code)
+{
+ uint64_t clr_bits = BRIDGE_IRR_ALL_CLR;
+
+ ASSERT(error_code & IOECODE_PIO);
+ error_code = error_code;
+
+ pcireg_intr_reset_set(pcibr_soft, clr_bits);
+
+ pcireg_tflush_get(pcibr_soft); /* flushbus */
+}
+
+
+/*
+ * pcibr_error_extract
+ * Given the pcibr vertex handle, find out which slot
+ * the bridge status error address (from the pcibr_soft info
+ * hanging off the vertex) is allocated to, and return
+ * the slot number.
+ * While we have the info handy, construct the
+ * space code and offset as well.
+ *
+ * NOTE: if this routine is called, we don't know whether
+ * the address is in CFG, MEM, or I/O space. We have to guess.
+ * This will be the case on PIO stores, where the only way
+ * we have of getting the address is to check the Bridge, which
+ * stores the PCI address but not the space and not the xtalk
+ * address (from which we could get it).
+ *
+ * XXX- this interface has no way to return the function
+ * number on a multifunction card, even though that data
+ * is available.
+ */
+
+pciio_slot_t
+pcibr_error_extract(vertex_hdl_t pcibr_vhdl,
+ pciio_space_t *spacep,
+ iopaddr_t *offsetp)
+{
+ pcibr_soft_t pcibr_soft = 0;
+ iopaddr_t bserr_addr;
+ pciio_slot_t slot = PCIIO_SLOT_NONE;
+ arbitrary_info_t rev;
+
+ /* Do a sanity check as to whether we really got a
+ * bridge vertex handle.
+ */
+ if (hwgraph_info_get_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV, &rev) !=
+ GRAPH_SUCCESS)
+ return(slot);
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ if (pcibr_soft) {
+ bserr_addr = pcireg_pci_bus_addr_get(pcibr_soft);
+ slot = pcibr_addr_toslot(pcibr_soft, bserr_addr,
+ spacep, offsetp, NULL);
+ }
+ return slot;
+}
+
+/*ARGSUSED */
+void
+pcibr_device_disable(pcibr_soft_t pcibr_soft, int devnum)
+{
+ /*
+ * XXX
+ * Device failed to handle error. Take steps to
+ * disable this device ? HOW TO DO IT ?
+ *
+ * If there are any Read response buffers associated
+ * with this device, it's time to get them back!!
+ *
+ * We can disassociate any interrupt level associated
+ * with this device, and disable that interrupt level
+ *
+ * For now it's just a place holder
+ */
+}
+
+/*
+ * pcibr_pioerror
+ * Handle a PIO error that happened at the bridge pointed to by pcibr_soft.
+ *
+ * Queries the bus interface attached to see if the device driver
+ * mapping the device number that caused the error can handle the
+ * situation. If so, it will clean up any error and return,
+ * indicating the error was handled. If the device driver is unable
+ * to handle the error, it expects the bus interface to disable that
+ * device, and we take any steps needed here to take away any resources
+ * associated with this device.
+ *
+ * A note about slots:
+ *
+ * PIC-based bridges use zero-based device numbering when mapping devices
+ * to internal registers. However, the physical slots are numbered using a
+ * one-based scheme because in PCI-X, device 0 is reserved (see comments
+ * in pcibr_private.h for a better description).
+ *
+ * When building up the hwgraph, we use the external (one-based) numbering
+ * scheme when numbering slot components so that the hwgraph more accurately
+ * reflects what is silkscreened on the bricks.
+ *
+ * Since pciio_error_handler() needs to ultimately be able to do a hwgraph
+ * lookup, the ioerror that gets built up in pcibr_pioerror() encodes the
+ * external (one-based) slot number. However, loops in pcibr_pioerror()
+ * which attempt to translate the virtual address into the correct
+ * PCI physical address use the device (zero-based) numbering when
+ * walking through bridge structures.
+ *
+ * To that end, pcibr_pioerror() uses "device" to denote the
+ * zero-based device number, and "external_slot" to denote the corresponding
+ * one-based slot number. Loop counters (e.g., cs) are always device-based.
+ */
+
+/* BEM_ADD_IOE doesn't dump the whole ioerror, it just
+ * decodes the PCI specific portions -- we count on our
+ * callers to dump the raw IOE data.
+ */
+#define BEM_ADD_IOE(ioe) \
+ do { \
+ if (IOERROR_FIELDVALID(ioe, busspace)) { \
+ iopaddr_t spc; \
+ iopaddr_t win; \
+ short widdev; \
+ iopaddr_t busaddr; \
+ \
+ IOERROR_GETVALUE(spc, ioe, busspace); \
+ win = spc - PCIIO_SPACE_WIN(0); \
+ IOERROR_GETVALUE(busaddr, ioe, busaddr); \
+ IOERROR_GETVALUE(widdev, ioe, widgetdev); \
+ \
+ switch (spc) { \
+ case PCIIO_SPACE_CFG: \
+ printk("\tPCI Slot %d Func %d CFG space Offset 0x%lx\n",\
+ pciio_widgetdev_slot_get(widdev), \
+ pciio_widgetdev_func_get(widdev), \
+ busaddr); \
+ break; \
+ case PCIIO_SPACE_IO: \
+ printk("\tPCI I/O space Offset 0x%lx\n", busaddr); \
+ break; \
+ case PCIIO_SPACE_MEM: \
+ case PCIIO_SPACE_MEM32: \
+ case PCIIO_SPACE_MEM64: \
+ printk("\tPCI MEM space Offset 0x%lx\n", busaddr); \
+ break; \
+ default: \
+ if (win < 6) { \
+ printk("\tPCI Slot %d Func %d Window %ld Offset 0x%lx\n",\
+ pciio_widgetdev_slot_get(widdev), \
+ pciio_widgetdev_func_get(widdev), \
+ win, \
+ busaddr); \
+ } \
+ break; \
+ } \
+ } \
+ } while (0)
+
+/*ARGSUSED */
+int
+pcibr_pioerror(
+ pcibr_soft_t pcibr_soft,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioe)
+{
+ int retval = IOERROR_HANDLED;
+
+ vertex_hdl_t pcibr_vhdl = pcibr_soft->bs_vhdl;
+ iopaddr_t bad_xaddr;
+
+ pciio_space_t raw_space; /* raw PCI space */
+ iopaddr_t raw_paddr; /* raw PCI address */
+
+ pciio_space_t space; /* final PCI space */
+ pciio_slot_t device; /* final PCI device if appropriate */
+ pciio_slot_t external_slot;/* external slot for device */
+ pciio_function_t func; /* final PCI func, if appropriate */
+ iopaddr_t offset; /* final PCI offset */
+
+ int cs, cw, cf;
+ pciio_space_t wx;
+ iopaddr_t wb;
+ size_t ws;
+ iopaddr_t wl;
+
+
+ /*
+ * We expect to have an "xtalkaddr" coming in,
+ * and need to construct the slot/space/offset.
+ */
+
+ IOERROR_GETVALUE(bad_xaddr, ioe, xtalkaddr);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ERROR_HDLR, pcibr_soft->bs_conn,
+ "pcibr_pioerror: pcibr_soft=0x%lx, bad_xaddr=0x%lx\n",
+ pcibr_soft, bad_xaddr));
+
+ device = PCIIO_SLOT_NONE;
+ func = PCIIO_FUNC_NONE;
+ raw_space = PCIIO_SPACE_NONE;
+ raw_paddr = 0;
+
+ if ((bad_xaddr >= PCIBR_BUS_TYPE0_CFG_DEV(pcibr_soft, 0)) &&
+ (bad_xaddr < PCIBR_TYPE1_CFG(pcibr_soft))) {
+ raw_paddr = bad_xaddr - PCIBR_BUS_TYPE0_CFG_DEV(pcibr_soft, 0);
+ device = raw_paddr / BRIDGE_CONFIG_SLOT_SIZE;
+ raw_paddr = raw_paddr % BRIDGE_CONFIG_SLOT_SIZE;
+ raw_space = PCIIO_SPACE_CFG;
+ }
+ if ((bad_xaddr >= PCIBR_TYPE1_CFG(pcibr_soft)) &&
+ (bad_xaddr < (PCIBR_TYPE1_CFG(pcibr_soft) + 0x1000))) {
+ /* Type 1 config space:
+ * slot and function numbers not known.
+ * Perhaps we can read them back?
+ */
+ raw_paddr = bad_xaddr - PCIBR_TYPE1_CFG(pcibr_soft);
+ raw_space = PCIIO_SPACE_CFG;
+ }
+ if ((bad_xaddr >= PCIBR_BRIDGE_DEVIO(pcibr_soft, 0)) &&
+ (bad_xaddr < PCIBR_BRIDGE_DEVIO(pcibr_soft, BRIDGE_DEV_CNT))) {
+ int x;
+
+ raw_paddr = bad_xaddr - PCIBR_BRIDGE_DEVIO(pcibr_soft, 0);
+ x = raw_paddr / BRIDGE_DEVIO_OFF;
+ raw_paddr %= BRIDGE_DEVIO_OFF;
+ /* first two devio windows are double-sized */
+ if ((x == 1) || (x == 3))
+ raw_paddr += BRIDGE_DEVIO_OFF;
+ if (x > 0)
+ x--;
+ if (x > 1)
+ x--;
+ /* x is which devio reg; no guarantee
+ * PCI slot x will be responding.
+ * We still need to figure out who decodes
+ * space/offset on the bus.
+ */
+ raw_space = pcibr_soft->bs_slot[x].bss_devio.bssd_space;
+ if (raw_space == PCIIO_SPACE_NONE) {
+ /* Someone got an error because they
+ * accessed the PCI bus via a DevIO(x)
+ * window that pcibr has not yet assigned
+ * to any specific PCI address. It is
+ * quite possible that the Device(x)
+ * register has been changed since they
+ * made their access, but we will give it
+ * our best decode shot.
+ */
+ raw_space = pcibr_soft->bs_slot[x].bss_device
+ & BRIDGE_DEV_DEV_IO_MEM
+ ? PCIIO_SPACE_MEM
+ : PCIIO_SPACE_IO;
+ raw_paddr +=
+ (pcibr_soft->bs_slot[x].bss_device &
+ BRIDGE_DEV_OFF_MASK) <<
+ BRIDGE_DEV_OFF_ADDR_SHFT;
+ } else
+ raw_paddr += pcibr_soft->bs_slot[x].bss_devio.bssd_base;
+ }
+
+ if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 0)) {
+ if ((bad_xaddr >= PICBRIDGE0_PCI_MEM32_BASE) &&
+ (bad_xaddr <= PICBRIDGE0_PCI_MEM32_LIMIT)) {
+ raw_space = PCIIO_SPACE_MEM32;
+ raw_paddr = bad_xaddr - PICBRIDGE0_PCI_MEM32_BASE;
+ }
+ if ((bad_xaddr >= PICBRIDGE0_PCI_MEM64_BASE) &&
+ (bad_xaddr <= PICBRIDGE0_PCI_MEM64_LIMIT)) {
+ raw_space = PCIIO_SPACE_MEM64;
+ raw_paddr = bad_xaddr - PICBRIDGE0_PCI_MEM64_BASE;
+ }
+ } else if (IS_PIC_BUSNUM_SOFT(pcibr_soft, 1)) {
+ if ((bad_xaddr >= PICBRIDGE1_PCI_MEM32_BASE) &&
+ (bad_xaddr <= PICBRIDGE1_PCI_MEM32_LIMIT)) {
+ raw_space = PCIIO_SPACE_MEM32;
+ raw_paddr = bad_xaddr - PICBRIDGE1_PCI_MEM32_BASE;
+ }
+ if ((bad_xaddr >= PICBRIDGE1_PCI_MEM64_BASE) &&
+ (bad_xaddr <= PICBRIDGE1_PCI_MEM64_LIMIT)) {
+ raw_space = PCIIO_SPACE_MEM64;
+ raw_paddr = bad_xaddr - PICBRIDGE1_PCI_MEM64_BASE;
+ }
+ } else {
+ printk("pcibr_pioerror(): unknown bridge type\n");
+ return IOERROR_UNHANDLED;
+ }
+ space = raw_space;
+ offset = raw_paddr;
+
+ if ((device == PCIIO_SLOT_NONE) && (space != PCIIO_SPACE_NONE)) {
+ /* we've got a space/offset but not which
+ * PCI slot decodes it. Check through our
+ * notions of which devices decode where.
+ *
+ * Yes, this "duplicates" some logic in
+ * pcibr_addr_toslot; the difference is,
+ * this code knows which space we are in,
+ * and can reliably tell what is
+ * going on (no guessing).
+ */
+
+ for (cs = pcibr_soft->bs_min_slot;
+ (cs < PCIBR_NUM_SLOTS(pcibr_soft)) &&
+ (device == PCIIO_SLOT_NONE); cs++) {
+ int nf = pcibr_soft->bs_slot[cs].bss_ninfo;
+ pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos;
+
+ for (cf = 0; (cf < nf) && (device == PCIIO_SLOT_NONE); cf++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[cf];
+
+ if (!pcibr_info)
+ continue;
+ for (cw = 0; (cw < 6) && (device == PCIIO_SLOT_NONE); ++cw) {
+ if (((wx = pcibr_info->f_window[cw].w_space) != PCIIO_SPACE_NONE) &&
+ ((wb = pcibr_info->f_window[cw].w_base) != 0) &&
+ ((ws = pcibr_info->f_window[cw].w_size) != 0) &&
+ ((wl = wb + ws) > wb) &&
+ ((wb <= offset) && (wl > offset))) {
+ /* MEM, MEM32 and MEM64 need to
+ * compare as equal ...
+ */
+ if ((wx == space) ||
+ (((wx == PCIIO_SPACE_MEM) ||
+ (wx == PCIIO_SPACE_MEM32) ||
+ (wx == PCIIO_SPACE_MEM64)) &&
+ ((space == PCIIO_SPACE_MEM) ||
+ (space == PCIIO_SPACE_MEM32) ||
+ (space == PCIIO_SPACE_MEM64)))) {
+ device = cs;
+ func = cf;
+ space = PCIIO_SPACE_WIN(cw);
+ offset -= wb;
+ } /* endif window space match */
+ } /* endif window valid and addr match */
+ } /* next window unless slot set */
+ } /* next func unless slot set */
+ } /* next slot unless slot set */
+ /* XXX- if slot is still -1, no PCI devices are
+ * decoding here using their standard PCI BASE
+ * registers. This would be a really good place
+ * to cross-coordinate with the pciio PCI
+ * address space allocation routines, to find
+ * out if this address is "allocated" by any of
+ * our subsidiary devices.
+ */
+ }
+ /* Scan all piomap records on this PCI bus to update
+ * the TimeOut Counters on all matching maps. If we
+ * don't already know the slot number, take it from
+ * the first matching piomap. Note that we have to
+ * compare maps against raw_space and raw_paddr
+ * since space and offset could already be
+ * window-relative.
+ *
+ * There is a chance that one CPU could update
+ * through this path, and another CPU could also
+ * update due to an interrupt. Closing this hole
+ * would only result in the possibility of some
+ * errors never getting logged at all, and since the
+ * use for bp_toc is as a logical test rather than a
+ * strict count, the excess counts are not a
+ * problem.
+ */
+ for (cs = pcibr_soft->bs_min_slot;
+ cs < PCIBR_NUM_SLOTS(pcibr_soft); ++cs) {
+ int nf = pcibr_soft->bs_slot[cs].bss_ninfo;
+ pcibr_info_h pcibr_infoh = pcibr_soft->bs_slot[cs].bss_infos;
+
+ for (cf = 0; cf < nf; cf++) {
+ pcibr_info_t pcibr_info = pcibr_infoh[cf];
+ pcibr_piomap_t map;
+
+ if (!pcibr_info)
+ continue;
+
+ for (map = pcibr_info->f_piomap;
+ map != NULL; map = map->bp_next) {
+ wx = map->bp_space;
+ wb = map->bp_pciaddr;
+ ws = map->bp_mapsz;
+ cw = wx - PCIIO_SPACE_WIN(0);
+ if (cw >= 0 && cw < 6) {
+ wb += pcibr_soft->bs_slot[cs].bss_window[cw].bssw_base;
+ wx = pcibr_soft->bs_slot[cs].bss_window[cw].bssw_space;
+ }
+ if (wx == PCIIO_SPACE_ROM) {
+ wb += pcibr_info->f_rbase;
+ wx = PCIIO_SPACE_MEM;
+ }
+ if ((wx == PCIIO_SPACE_MEM32) ||
+ (wx == PCIIO_SPACE_MEM64))
+ wx = PCIIO_SPACE_MEM;
+ wl = wb + ws;
+ if ((wx == raw_space) && (raw_paddr >= wb) && (raw_paddr < wl)) {
+ atomic_inc(&map->bp_toc);
+ if (device == PCIIO_SLOT_NONE) {
+ device = cs;
+ func = cf;
+ space = map->bp_space;
+ if (cw >= 0 && cw < 6)
+ offset -= pcibr_soft->bs_slot[device].bss_window[cw].bssw_base;
+ }
+
+ break;
+ }
+ }
+ }
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ERROR_HDLR, pcibr_soft->bs_conn,
+ "pcibr_pioerror: space=%d, offset=0x%lx, dev=0x%x, func=0x%x\n",
+ space, offset, device, func));
+
+ if (space != PCIIO_SPACE_NONE) {
+ if (device != PCIIO_SLOT_NONE) {
+ external_slot = PCIBR_DEVICE_TO_SLOT(pcibr_soft, device);
+
+ if (func != PCIIO_FUNC_NONE)
+ IOERROR_SETVALUE(ioe, widgetdev,
+ pciio_widgetdev_create(external_slot,func));
+ else
+ IOERROR_SETVALUE(ioe, widgetdev,
+ pciio_widgetdev_create(external_slot,0));
+ }
+ IOERROR_SETVALUE(ioe, busspace, space);
+ IOERROR_SETVALUE(ioe, busaddr, offset);
+ }
+ if (mode == MODE_DEVPROBE) {
+ /*
+ * During probing, we don't really care what the
+ * error is. Clean up the error in Bridge, notify
+ * subsidiary devices, and return success.
+ */
+ pcibr_error_cleanup(pcibr_soft, error_code);
+
+ /* if appropriate, give the error handler for this slot
+ * a shot at this probe access as well.
+ */
+ return (device == PCIIO_SLOT_NONE) ? IOERROR_HANDLED :
+ pciio_error_handler(pcibr_vhdl, error_code, mode, ioe);
+ }
+ /*
+ * If we don't know what "PCI SPACE" the access
+ * was targeting, we may have problems at the
+ * Bridge itself. Don't touch any bridge registers,
+ * and do complain loudly.
+ */
+
+ if (space == PCIIO_SPACE_NONE) {
+ printk("XIO Bus Error at %s\n"
+ "\taccess to XIO bus offset 0x%lx\n"
+ "\tdoes not correspond to any PCI address\n",
+ pcibr_soft->bs_name, bad_xaddr);
+
+ /* caller will dump contents of ioe struct */
+ return IOERROR_XTALKLEVEL;
+ }
+
+ /*
+ * Actual PCI error handling situation.
+ * This typically happens when a user-level process accesses
+ * PCI space and causes some error.
+ *
+ * Due to the PCI Bridge implementation, we get two indications
+ * of a read error: an interrupt and a bus error.
+ * We would like to handle read errors in the bus error context,
+ * but the interrupt comes and goes before bus error
+ * handling can make much progress. (NOTE: the interrupt does
+ * come in _after_ bus error processing starts, but it has
+ * completed by the time the bus error code reaches PCI PIO
+ * error handling.)
+ * Similarly, a write error results in just an interrupt,
+ * and error handling has to be done at interrupt level.
+ * There is no way to distinguish at interrupt time whether an
+ * error interrupt is due to a read or a write error.
+ */
+
+ /* We know the xtalk addr, the raw PCI bus space,
+ * the raw PCI bus address, the decoded PCI bus
+ * space, the offset within that space, and the
+ * decoded PCI slot (which may be "PCIIO_SLOT_NONE" if no slot
+ * is known to be involved).
+ */
+
+ /*
+ * Hand the error off to the handler registered
+ * for the slot that should have decoded the error,
+ * or to generic PCI handling (if pciio decides that
+ * such is appropriate).
+ */
+ retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe);
+
+ if (retval != IOERROR_HANDLED) {
+
+ /* Generate a generic message for IOERROR_UNHANDLED
+ * since the subsidiary handlers were silent, and
+ * did no recovery.
+ */
+ if (retval == IOERROR_UNHANDLED) {
+ retval = IOERROR_PANIC;
+
+ /* we may or may not want to print some of this,
+ * depending on debug level and which error code.
+ */
+
+ printk(KERN_ALERT
+ "PIO Error on PCI Bus %s",
+ pcibr_soft->bs_name);
+ BEM_ADD_IOE(ioe);
+ }
+
+ /*
+ * Since the error could not be handled at a lower level,
+ * the error data logged has not been cleared.
+ * Clean up the errors, and
+ * re-enable the bridge to interrupt on error conditions.
+ * NOTE: Whether we get the interrupt on PCI_ABORT or not
+ * depends on the INT_ENABLE register. This write just makes sure
+ * that if the interrupt was enabled, we do get the interrupt.
+ *
+ * CAUTION: Resetting bit BRIDGE_IRR_PCI_GRP_CLR acknowledges
+ * a group of interrupts. If, while handling this error,
+ * some other error has occurred, it would be
+ * implicitly cleared by this write.
+ * We need a way to ensure we don't inadvertently clear some
+ * other errors.
+ */
+ if (IOERROR_FIELDVALID(ioe, widgetdev)) {
+ short widdev;
+ IOERROR_GETVALUE(widdev, ioe, widgetdev);
+ external_slot = pciio_widgetdev_slot_get(widdev);
+ device = PCIBR_SLOT_TO_DEVICE(pcibr_soft, external_slot);
+ pcibr_device_disable(pcibr_soft, device);
+ }
+ if (mode == MODE_DEVUSERERROR)
+ pcibr_error_cleanup(pcibr_soft, error_code);
+ }
+ return retval;
+}
+
+/*
+ * pcibr_dmard_error
+ * Some error was identified in a DMA transaction.
+ * This routine will identify the <device, address> that caused the error,
+ * and try to invoke the appropriate bus service to handle this.
+ */
+
+int
+pcibr_dmard_error(
+ pcibr_soft_t pcibr_soft,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioe)
+{
+ vertex_hdl_t pcibr_vhdl = pcibr_soft->bs_vhdl;
+ int retval = 0;
+ int bufnum, device;
+
+ /*
+ * In case of DMA errors, the bridge should have logged the
+ * address that caused the error.
+ * Look up the address in the bridge error registers, and
+ * take appropriate action.
+ */
+ {
+ short tmp;
+ IOERROR_GETVALUE(tmp, ioe, widgetnum);
+ ASSERT(tmp == pcibr_soft->bs_xid);
+ }
+
+ /*
+ * read error log registers
+ */
+ bufnum = pcireg_resp_err_buf_get(pcibr_soft);
+ device = pcireg_resp_err_dev_get(pcibr_soft);
+ IOERROR_SETVALUE(ioe, widgetdev, pciio_widgetdev_create(device, 0));
+ IOERROR_SETVALUE(ioe, busaddr, pcireg_resp_err_get(pcibr_soft));
+
+ /*
+ * We need to ensure that the xtalk address in ioe
+ * maps to the PCI error address read from the bridge.
+ * How do we convert a PCI address back to an xtalk address?
+ * (Better idea: convert the xtalk address to a PCI address
+ * and then do the compare!)
+ */
+
+ retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe);
+ if (retval != IOERROR_HANDLED) {
+ short tmp;
+ IOERROR_GETVALUE(tmp, ioe, widgetdev);
+ pcibr_device_disable(pcibr_soft, pciio_widgetdev_slot_get(tmp));
+ }
+
+ /*
+ * Re-enable bridge to interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR
+ * NOTE: Whether we get the interrupt on BRIDGE_IRR_RESP_BUF_GRP_CLR or
+ * not is dependent on the INT_ENABLE register. This write just makes sure
+ * that if the interrupt was enabled, we do get the interrupt.
+ */
+ pcireg_intr_reset_set(pcibr_soft, BRIDGE_IRR_RESP_BUF_GRP_CLR);
+
+ /*
+ * Also, release the "bufnum" back to buffer pool that could be re-used.
+ * This is done by "disabling" the buffer for a moment, then restoring
+ * the original assignment.
+ */
+
+ {
+ uint64_t rrb_reg;
+ uint64_t mask;
+
+ rrb_reg = pcireg_rrb_get(pcibr_soft, (bufnum & 1));
+ mask = 0xF << ((bufnum >> 1) * 4);
+ pcireg_rrb_set(pcibr_soft, (bufnum & 1), (rrb_reg & ~mask));
+ pcireg_rrb_set(pcibr_soft, (bufnum & 1), rrb_reg);
+ }
+
+ return retval;
+}
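The "disable for a moment, then restore" trick above hinges on how the mask is built: each response buffer owns a 4-bit field, with even-numbered buffers in one RRB register and odd-numbered buffers in the other. A minimal standalone sketch of that sequence (names are illustrative, not kernel symbols):

```c
#include <assert.h>
#include <stdint.h>

/* Model the two writes pcibr_dmard_error() performs to release a
 * response buffer: clear the buffer's 4-bit field, then write the
 * original register value back.  bufnum's low bit selects the
 * register; the remaining bits select the nibble within it.
 */
static uint64_t rrb_release_sequence(uint64_t rrb_reg, int bufnum,
                                     uint64_t *first_write)
{
    uint64_t mask = (uint64_t)0xF << ((bufnum >> 1) * 4);

    *first_write = rrb_reg & ~mask;   /* first write: field disabled */
    return rrb_reg;                   /* second write: field restored */
}
```

For bufnum 5, the nibble at bits 11:8 of the odd-buffer register is cleared and then restored; the register ends up unchanged, but the hardware sees the buffer momentarily unassigned.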
+
+/*
+ * pcibr_dmawr_error:
+ * Handle a dma write error caused by a device attached to this bridge.
+ *
+ * ioe has the widgetnum, widgetdev, and memaddr fields updated
+ * But we don't know the PCI address that corresponds to "memaddr"
+ * nor do we know which device driver is generating this address.
+ *
+ * There is no easy way to find out the PCI address(es) that map
+ * to a specific system memory address. The bus handling code is
+ * not much help either, since it doesn't keep track of the DMA
+ * mappings that have been handed out.
+ * So it's a dead end at this time.
+ *
+ * If translation is available, we could invoke the error handling
+ * interface of the device driver.
+ */
+/*ARGSUSED */
+int
+pcibr_dmawr_error(
+ pcibr_soft_t pcibr_soft,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioe)
+{
+ vertex_hdl_t pcibr_vhdl = pcibr_soft->bs_vhdl;
+ int retval;
+
+ retval = pciio_error_handler(pcibr_vhdl, error_code, mode, ioe);
+
+ if (retval != IOERROR_HANDLED) {
+ short tmp;
+
+ IOERROR_GETVALUE(tmp, ioe, widgetdev);
+ pcibr_device_disable(pcibr_soft, pciio_widgetdev_slot_get(tmp));
+ }
+ return retval;
+}
+
+/*
+ * Bridge error handler.
+ * Interface for handling all errors that involve the bridge in some way.
+ *
+ * This normally gets called from the xtalk error handler.
+ * ioe has a different set of fields set depending on the error that
+ * was encountered, so we have a bit field indicating which of the
+ * fields are valid.
+ *
+ * NOTE: This routine could be operating in interrupt context, so
+ * don't try to sleep here (till interrupt threads work!!)
+ */
+int
+pcibr_error_handler(
+ error_handler_arg_t einfo,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioe)
+{
+ pcibr_soft_t pcibr_soft;
+ int retval = IOERROR_BADERRORCODE;
+
+ pcibr_soft = (pcibr_soft_t) einfo;
+
+ ASSERT(pcibr_soft != NULL);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ERROR_HDLR, pcibr_soft->bs_conn,
+ "pcibr_error_handler: pcibr_soft=0x%lx, error_code=0x%x\n",
+ pcibr_soft, error_code));
+
+#if DEBUG && ERROR_DEBUG
+ printk( "%s: pcibr_error_handler\n", pcibr_soft->bs_name);
+#endif
+
+ if (error_code & IOECODE_PIO)
+ retval = pcibr_pioerror(pcibr_soft, error_code, mode, ioe);
+
+ if (error_code & IOECODE_DMA) {
+ if (error_code & IOECODE_READ) {
+ /*
+ * DMA read error occurs when a device attached to the bridge
+ * tries to read some data from system memory, and this
+ * either results in a timeout or access error.
+ * First case is indicated by the bit "XREAD_REQ_TOUT"
+ * and second case by "RESP_XTALK_ERROR" bit in bridge error
+ * interrupt status register.
+ *
+ * pcibr_error_intr_handler would get invoked first, and it has
+ * the responsibility of calling pcibr_error_handler with
+ * suitable parameters.
+ */
+
+ retval = pcibr_dmard_error(pcibr_soft, error_code, MODE_DEVERROR, ioe);
+ }
+ if (error_code & IOECODE_WRITE) {
+ /*
+ * A device attached to this bridge has been generating
+ * bad DMA writes. Find the offending device and
+ * slap its wrist.
+ */
+
+ retval = pcibr_dmawr_error(pcibr_soft, error_code, MODE_DEVERROR, ioe);
+ }
+ }
+ return retval;
+
+}
+
+/*
+ * PIC has 2 busses under a single widget so pcibr_attach2 registers this
+ * wrapper function rather than pcibr_error_handler() for PIC. It's up to
+ * this wrapper to call pcibr_error_handler() with the correct pcibr_soft
+ * struct (ie. the pcibr_soft struct for the bus that saw the error).
+ *
+ * NOTE: this wrapper function is only registered for PIC ASICs and will
+ * only be called for a PIC
+ */
+int
+pcibr_error_handler_wrapper(
+ error_handler_arg_t einfo,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioe)
+{
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) einfo;
+ int pio_retval = -1;
+ int dma_retval = -1;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ERROR_HDLR, pcibr_soft->bs_conn,
+ "pcibr_error_handler_wrapper: pcibr_soft=0x%lx, "
+ "error_code=0x%x\n", pcibr_soft, error_code));
+
+ /*
+ * It is possible that both IOECODE_PIO and IOECODE_DMA, and both
+ * IOECODE_READ and IOECODE_WRITE, could be set in error_code, so we
+ * must process all of them. Since we are a wrapper for pcibr_error_handler(), and
+ * will be calling it several times within this routine, we turn off the
+ * error_code bits we don't want it to be processing during that call.
+ */
+ /*
+ * If the error was the result of a PIO, the PIO address tells us
+ * which bus on the PIC saw the error.
+ */
+
+ if (error_code & IOECODE_PIO) {
+ iopaddr_t bad_xaddr;
+ /*
+ * PIC bus0 PIO space 0x000000 - 0x7fffff or 0x40000000 - 0xbfffffff
+ * bus1 PIO space 0x800000 - 0xffffff or 0xc0000000 - 0x13fffffff
+ */
+ IOERROR_GETVALUE(bad_xaddr, ioe, xtalkaddr);
+ if ((bad_xaddr <= 0x7fffff) ||
+ ((bad_xaddr >= 0x40000000) && (bad_xaddr <= 0xbfffffff))) {
+ /* bus 0 saw the error */
+ pio_retval = pcibr_error_handler((error_handler_arg_t)pcibr_soft,
+ (error_code & ~IOECODE_DMA), mode, ioe);
+ } else if (((bad_xaddr >= 0x800000) && (bad_xaddr <= 0xffffff)) ||
+ ((bad_xaddr >= 0xc0000000) && (bad_xaddr <= 0x13fffffff))) {
+ /* bus 1 saw the error */
+ pcibr_soft = pcibr_soft->bs_peers_soft;
+ if (!pcibr_soft) {
+#if DEBUG
+ printk(KERN_WARNING "pcibr_error_handler: "
+ "bs_peers_soft==NULL. bad_xaddr= 0x%lx mode= 0x%lx\n",
+ bad_xaddr, mode);
+#endif
+ pio_retval = IOERROR_HANDLED;
+ } else
+ pio_retval= pcibr_error_handler((error_handler_arg_t)pcibr_soft,
+ (error_code & ~IOECODE_DMA), mode, ioe);
+ } else {
+ printk(KERN_WARNING "pcibr_error_handler_wrapper(): IOECODE_PIO: "
+ "saw an invalid pio address: 0x%lx\n", bad_xaddr);
+ pio_retval = IOERROR_UNHANDLED;
+ }
+ }
+
+ /*
+ * If the error was the result of a DMA write, tnum tells us which
+ * bus on the PIC saw the error.
+ */
+ if ((error_code & IOECODE_DMA) && (error_code & IOECODE_WRITE)) {
+ short tmp;
+ /*
+ * For DMA writes [X]Bridge encodes the TNUM field of a Xtalk
+ * packet like this:
+ * bits value
+ * 4:3 10b
+ * 2:0 device number
+ *
+ * BUT PIC needs the bus number so it does this:
+ * bits value
+ * 4:3 10b
+ * 2 busnumber
+ * 1:0 device number
+ *
+ * Pull out the bus number from `tnum' and reset the `widgetdev'
+ * since when hubiio_crb_error_handler() set `widgetdev' it had
+ * no idea if it was a PIC or a BRIDGE ASIC so it set it based
+ * off bits 2:0
+ */
+ IOERROR_GETVALUE(tmp, ioe, tnum);
+ IOERROR_SETVALUE(ioe, widgetdev, (tmp & 0x3));
+ if ((tmp & 0x4) == 0) {
+ /* bus 0 saw the error. */
+ dma_retval = pcibr_error_handler((error_handler_arg_t)pcibr_soft,
+ (error_code & ~(IOECODE_PIO|IOECODE_READ)), mode, ioe);
+ } else {
+ /* bus 1 saw the error */
+ pcibr_soft = pcibr_soft->bs_peers_soft;
+ dma_retval = pcibr_error_handler((error_handler_arg_t)pcibr_soft,
+ (error_code & ~(IOECODE_PIO|IOECODE_READ)), mode, ioe);
+ }
+ }
+
+ /*
+ * If the error was a result of a DMA READ, XXX ???
+ */
+ if ((error_code & IOECODE_DMA) && (error_code & IOECODE_READ)) {
+ /*
+ * A DMA Read error will result in a BRIDGE_ISR_RESP_XTLK_ERR
+ * or BRIDGE_ISR_BAD_XRESP_PKT bridge error interrupt which
+ * are fatal interrupts (ie. BRIDGE_ISR_ERROR_FATAL) causing
+ * pcibr_error_intr_handler() to panic the system. So is the
+ * error handler even going to get called??? It appears that
+ * the pcibr_dmard_error() attempts to clear the interrupts
+ * so pcibr_error_intr_handler() won't see them, but there
+ * appears to be nothing to prevent pcibr_error_intr_handler()
+ * from running before pcibr_dmard_error() has a chance to
+ * clear the interrupt.
+ *
+ * Since we'll be panicking anyway, don't bother handling the
+ * error for now until we can fix this race condition mentioned
+ * above.
+ */
+ dma_retval = IOERROR_UNHANDLED;
+ }
+
+ /* XXX: pcibr_error_handler() should probably do the same thing; it
+ * overwrites its return value as it processes the different
+ * "error_code"s.
+ */
+ if ((pio_retval == -1) && (dma_retval == -1)) {
+ return IOERROR_BADERRORCODE;
+ } else if ((dma_retval != IOERROR_HANDLED) && (dma_retval != -1)) {
+ return dma_retval;
+ } else if ((pio_retval != IOERROR_HANDLED) && (pio_retval != -1)) {
+ return pio_retval;
+ } else {
+ return IOERROR_HANDLED;
+ }
+}
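The wrapper's two decode rules are easy to get wrong, so they are worth restating in isolation: the PIO address ranges from the comment above select bus 0 or bus 1, and for DMA writes PIC packs the bus number into TNUM bit 2 with the device in bits 1:0 (Bridge used bits 2:0 for the device alone). A hedged standalone sketch (function names are illustrative, not kernel symbols; the address ranges and bit layout are taken from the comments above):

```c
#include <assert.h>
#include <stdint.h>

/* Map a PIC PIO xtalk address to the bus that saw it.
 * bus 0: 0x000000-0x7fffff  or 0x40000000-0xbfffffff
 * bus 1: 0x800000-0xffffff  or 0xc0000000-0x13fffffff
 * Returns 0 or 1, or -1 for an address outside both PIO spaces.
 */
static int pic_pio_bus(uint64_t xaddr)
{
    if (xaddr <= 0x7fffffULL ||
        (xaddr >= 0x40000000ULL && xaddr <= 0xbfffffffULL))
        return 0;
    if ((xaddr >= 0x800000ULL && xaddr <= 0xffffffULL) ||
        (xaddr >= 0xc0000000ULL && xaddr <= 0x13fffffffULL))
        return 1;
    return -1;
}

/* Decode a PIC DMA-write TNUM: bit 2 is the bus, bits 1:0 the device. */
static void pic_tnum_decode(int tnum, int *bus, int *dev)
{
    *dev = tnum & 0x3;
    *bus = (tnum & 0x4) >> 2;
}
```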
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+pcibr_hints_t pcibr_hints_get(vertex_hdl_t, int);
+void pcibr_hints_fix_rrbs(vertex_hdl_t);
+void pcibr_hints_dualslot(vertex_hdl_t, pciio_slot_t, pciio_slot_t);
+void pcibr_hints_intr_bits(vertex_hdl_t, pcibr_intr_bits_f *);
+void pcibr_set_rrb_callback(vertex_hdl_t, rrb_alloc_funct_t);
+void pcibr_hints_handsoff(vertex_hdl_t);
+void pcibr_hints_subdevs(vertex_hdl_t, pciio_slot_t, uint64_t);
+
+pcibr_hints_t
+pcibr_hints_get(vertex_hdl_t xconn_vhdl, int alloc)
+{
+ arbitrary_info_t ainfo = 0;
+ graph_error_t rv;
+ pcibr_hints_t hint;
+
+ rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo);
+
+ if (alloc && (rv != GRAPH_SUCCESS)) {
+
+ hint = kmalloc(sizeof (*(hint)), GFP_KERNEL);
+ if ( !hint ) {
+ printk(KERN_WARNING "pcibr_hints_get(): unable to allocate "
+ "memory\n");
+ goto abnormal_exit;
+ }
+ memset(hint, 0, sizeof (*(hint)));
+
+ hint->rrb_alloc_funct = NULL;
+ hint->ph_intr_bits = NULL;
+ rv = hwgraph_info_add_LBL(xconn_vhdl,
+ INFO_LBL_PCIBR_HINTS,
+ (arbitrary_info_t) hint);
+ if (rv != GRAPH_SUCCESS)
+ goto abnormal_exit;
+
+ rv = hwgraph_info_get_LBL(xconn_vhdl, INFO_LBL_PCIBR_HINTS, &ainfo);
+
+ if (rv != GRAPH_SUCCESS)
+ goto abnormal_exit;
+
+ if (ainfo != (arbitrary_info_t) hint)
+ goto abnormal_exit;
+ }
+ return (pcibr_hints_t) ainfo;
+
+abnormal_exit:
+ kfree(hint);
+ return NULL;
+
+}
+
+void
+pcibr_hints_fix_some_rrbs(vertex_hdl_t xconn_vhdl, unsigned mask)
+{
+ pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1);
+
+ if (hint)
+ hint->ph_rrb_fixed = mask;
+ else
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_fix_some_rrbs: pcibr_hints_get failed\n"));
+}
+
+void
+pcibr_hints_fix_rrbs(vertex_hdl_t xconn_vhdl)
+{
+ pcibr_hints_fix_some_rrbs(xconn_vhdl, 0xFF);
+}
+
+void
+pcibr_hints_dualslot(vertex_hdl_t xconn_vhdl,
+ pciio_slot_t host,
+ pciio_slot_t guest)
+{
+ pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1);
+
+ if (hint)
+ hint->ph_host_slot[guest] = host + 1;
+ else
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_dualslot: pcibr_hints_get failed\n"));
+}
+
+void
+pcibr_hints_intr_bits(vertex_hdl_t xconn_vhdl,
+ pcibr_intr_bits_f *xxx_intr_bits)
+{
+ pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1);
+
+ if (hint)
+ hint->ph_intr_bits = xxx_intr_bits;
+ else
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_intr_bits: pcibr_hints_get failed\n"));
+}
+
+void
+pcibr_set_rrb_callback(vertex_hdl_t xconn_vhdl, rrb_alloc_funct_t rrb_alloc_funct)
+{
+ pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1);
+
+ if (hint)
+ hint->rrb_alloc_funct = rrb_alloc_funct;
+}
+
+void
+pcibr_hints_handsoff(vertex_hdl_t xconn_vhdl)
+{
+ pcibr_hints_t hint = pcibr_hints_get(xconn_vhdl, 1);
+
+ if (hint)
+ hint->ph_hands_off = 1;
+ else
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_handsoff: pcibr_hints_get failed\n"));
+}
+
+void
+pcibr_hints_subdevs(vertex_hdl_t xconn_vhdl,
+ pciio_slot_t slot,
+ uint64_t subdevs)
+{
+ arbitrary_info_t ainfo = 0;
+ char sdname[16];
+ vertex_hdl_t pconn_vhdl = GRAPH_VERTEX_NONE;
+
+ snprintf(sdname, sizeof(sdname), "%s/%d", EDGE_LBL_PCI, slot);
+ (void) hwgraph_path_add(xconn_vhdl, sdname, &pconn_vhdl);
+ if (pconn_vhdl == GRAPH_VERTEX_NONE) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_subdevs: hwgraph_path_add failed\n"));
+ return;
+ }
+ hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo);
+ if (ainfo == 0) {
+ uint64_t *subdevp;
+
+ subdevp = kmalloc(sizeof (*(subdevp)), GFP_KERNEL);
+ if (!subdevp) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_subdevs: subdev ptr alloc failed\n"));
+ return;
+ }
+ memset(subdevp, 0, sizeof (*(subdevp)));
+ *subdevp = subdevs;
+ hwgraph_info_add_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, (arbitrary_info_t) subdevp);
+ hwgraph_info_get_LBL(pconn_vhdl, INFO_LBL_SUBDEVS, &ainfo);
+ if (ainfo == (arbitrary_info_t) subdevp)
+ return;
+ kfree(subdevp);
+ if (ainfo == (arbitrary_info_t) NULL) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_subdevs: null subdevs ptr\n"));
+ return;
+ }
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, xconn_vhdl,
+ "pcibr_hints_subdevs: dup subdev add_LBL\n"));
+ }
+ *(uint64_t *) ainfo = subdevs;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/arch.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+
+#ifdef __ia64
+inline int
+compare_and_swap_ptr(void **location, void *old_ptr, void *new_ptr)
+{
+ /* FIXME - compare_and_swap_ptr NOT ATOMIC */
+ if (*location == old_ptr) {
+ *location = new_ptr;
+ return 1;
+ }
+ else
+ return 0;
+}
+#endif
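The FIXME above flags that this fallback compare_and_swap_ptr() is not actually atomic. A sketch of an atomic equivalent using the `__atomic` builtins available in modern GCC/Clang (an assumption on my part; this 2003-era driver could not have used them, and the name below is illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Atomic pointer compare-and-swap: if *location == old_ptr, store
 * new_ptr and return 1; otherwise return 0.  The strong variant is
 * used so there are no spurious failures.
 */
static int compare_and_swap_ptr_atomic(void **location, void *old_ptr,
                                       void *new_ptr)
{
    return __atomic_compare_exchange_n(location, &old_ptr, new_ptr,
                                       0 /* strong */,
                                       __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}
```

This mirrors the call sites below: the first caller to swap NULL for its pointer wins, and losers see 0 and pick up the winner's value instead.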
+
+unsigned int pcibr_intr_bits(pciio_info_t info, pciio_intr_line_t lines, int nslots);
+pcibr_intr_t pcibr_intr_alloc(vertex_hdl_t, device_desc_t, pciio_intr_line_t, vertex_hdl_t);
+void pcibr_intr_free(pcibr_intr_t);
+void pcibr_setpciint(xtalk_intr_t);
+int pcibr_intr_connect(pcibr_intr_t, intr_func_t, intr_arg_t);
+void pcibr_intr_disconnect(pcibr_intr_t);
+
+vertex_hdl_t pcibr_intr_cpu_get(pcibr_intr_t);
+
+extern pcibr_info_t pcibr_info_get(vertex_hdl_t);
+
+/* =====================================================================
+ * INTERRUPT MANAGEMENT
+ */
+
+unsigned int
+pcibr_intr_bits(pciio_info_t info,
+ pciio_intr_line_t lines, int nslots)
+{
+ pciio_slot_t slot = PCIBR_INFO_SLOT_GET_INT(info);
+ unsigned bbits = 0;
+
+ /*
+ * Currently favored mapping from PCI
+ * slot number and INTA/B/C/D to Bridge
+ * PCI Interrupt Bit Number:
+ *
+ * SLOT A B C D
+ * 0 0 4 0 4
+ * 1 1 5 1 5
+ * 2 2 6 2 6
+ * 3 3 7 3 7
+ * 4 4 0 4 0
+ * 5 5 1 5 1
+ * 6 6 2 6 2
+ * 7 7 3 7 3
+ */
+
+ if (slot < nslots) {
+ if (lines & (PCIIO_INTR_LINE_A| PCIIO_INTR_LINE_C))
+ bbits |= 1 << slot;
+ if (lines & (PCIIO_INTR_LINE_B| PCIIO_INTR_LINE_D))
+ bbits |= 1 << (slot ^ 4);
+ }
+ return bbits;
+}
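The table in pcibr_intr_bits() reduces to two rules: INTA/INTC land on bit `slot`, INTB/INTD on bit `slot ^ 4`. A self-contained restatement (the 1/2/4/8 line encoding is an assumption standing in for PCIIO_INTR_LINE_A..D):

```c
#include <assert.h>

/* Compute the Bridge PCI interrupt bits for a slot, per the mapping
 * table above.  lines: bit 0 = INTA, 1 = INTB, 2 = INTC, 3 = INTD.
 */
static unsigned bridge_int_bits(int slot, unsigned lines, int nslots)
{
    unsigned bbits = 0;

    if (slot < nslots) {
        if (lines & (1 | 4))          /* INTA or INTC -> bit "slot"   */
            bbits |= 1u << slot;
        if (lines & (2 | 8))          /* INTB or INTD -> bit "slot^4" */
            bbits |= 1u << (slot ^ 4);
    }
    return bbits;
}
```

Checking a few rows of the table: slot 1 INTA gives bit 1, slot 1 INTB gives bit 5, slot 5 INTD gives bit 1, matching the "A B C D" columns above.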
+
+
+/*
+ * On SN systems there is a race condition between a PIO read response
+ * and DMA's. In rare cases, the read response may beat the DMA, causing
+ * the driver to think that data in memory is complete and meaningful.
+ * This code eliminates that race.
+ * This routine is called by the PIO read routines after doing the read.
+ * This routine then forces a fake interrupt on another line, which
+ * is logically associated with the slot that the PIO is addressed to.
+ * (see sn_dma_flush_init() )
+ * It then spins while watching the memory location that the interrupt
+ * is targeted to. When the interrupt response arrives, we are sure
+ * that the DMA has landed in memory and it is safe for the driver
+ * to proceed.
+ */
+
+extern struct sn_flush_nasid_entry flush_nasid_list[MAX_NASIDS];
+
+void
+sn_dma_flush(unsigned long addr)
+{
+ nasid_t nasid;
+ int wid_num;
+ struct sn_flush_device_list *p;
+ int i,j;
+ int bwin;
+ unsigned long flags;
+
+ nasid = NASID_GET(addr);
+ wid_num = SWIN_WIDGETNUM(addr);
+ bwin = BWIN_WINDOWNUM(addr);
+
+ if (flush_nasid_list[nasid].widget_p == NULL) return;
+ if (bwin > 0) {
+ unsigned long itte = flush_nasid_list[nasid].iio_itte[bwin];
+
+ wid_num = (itte >> IIO_ITTE_WIDGET_SHIFT) &
+ IIO_ITTE_WIDGET_MASK;
+ }
+ if (flush_nasid_list[nasid].widget_p[wid_num] == NULL) return;
+ p = &flush_nasid_list[nasid].widget_p[wid_num][0];
+
+ /* find a matching BAR */
+
+ for (i=0; i<DEV_PER_WIDGET;i++) {
+ for (j=0; j<PCI_ROM_RESOURCE;j++) {
+ if (p->bar_list[j].start == 0) break;
+ if (addr >= p->bar_list[j].start && addr <= p->bar_list[j].end) break;
+ }
+ if (j < PCI_ROM_RESOURCE && p->bar_list[j].start != 0) break;
+ p++;
+ }
+
+ /* if no matching BAR, return without doing anything. */
+
+ if (i == DEV_PER_WIDGET) return;
+
+ spin_lock_irqsave(&p->flush_lock, flags);
+
+ p->flush_addr = 0;
+
+ /* force an interrupt. */
+
+ *(volatile uint32_t *)(p->force_int_addr) = 1;
+
+ /* wait for the interrupt to come back. */
+
+ while (p->flush_addr != 0x10f);
+
+ /* okay, everything is synched up. */
+ spin_unlock_irqrestore(&p->flush_lock, flags);
+}
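The BAR search in sn_dma_flush() walks each device under the widget, stopping at the first zero `start` in a device's BAR list. A miniature version with made-up sizes (the real code uses DEV_PER_WIDGET and PCI_ROM_RESOURCE, and a flat device array rather than a 2-D one):

```c
#include <assert.h>
#include <stdint.h>

#define NDEV 2   /* stand-in for DEV_PER_WIDGET   */
#define NBAR 3   /* stand-in for PCI_ROM_RESOURCE */

struct bar { uint64_t start, end; };

/* Return the index of the device whose BAR range covers addr, or -1
 * if no device claims it (in which case sn_dma_flush() just returns).
 * A zero 'start' terminates a device's BAR list.
 */
static int find_dev_for_addr(struct bar bars[NDEV][NBAR], uint64_t addr)
{
    int i, j;

    for (i = 0; i < NDEV; i++) {
        for (j = 0; j < NBAR; j++) {
            if (bars[i][j].start == 0)
                break;                /* end of this device's BARs */
            if (addr >= bars[i][j].start && addr <= bars[i][j].end)
                return i;             /* matching BAR found */
        }
    }
    return -1;
}
```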
+
+EXPORT_SYMBOL(sn_dma_flush);
+
+/*
+ * There are end cases where a deadlock can occur if interrupt
+ * processing completes and the Bridge b_int_status bit is still set.
+ *
+ * One scenario is if a second PCI interrupt occurs within 60ns of
+ * the previous interrupt being cleared. In this case the Bridge
+ * does not detect the transition, the Bridge b_int_status bit
+ * remains set, and because no transition was detected no interrupt
+ * packet is sent to the Hub/Heart.
+ *
+ * A second scenario is possible when a b_int_status bit is being
+ * shared by multiple devices:
+ * Device #1 generates interrupt
+ * Bridge b_int_status bit set
+ * Device #2 generates interrupt
+ * interrupt processing begins
+ * ISR for device #1 runs and
+ * clears interrupt
+ * Device #1 generates interrupt
+ * ISR for device #2 runs and
+ * clears interrupt
+ * (b_int_status bit still set)
+ * interrupt processing completes
+ *
+ * Interrupt processing is now complete, but an interrupt is still
+ * outstanding for Device #1. But because there was no transition of
+ * the b_int_status bit, no interrupt packet will be generated and
+ * a deadlock will occur.
+ *
+ * To avoid these deadlock situations, this function is used
+ * to check if a specific Bridge b_int_status bit is set, and if so,
+ * cause the setting of the corresponding interrupt bit.
+ *
+ * On XBridge (SN1) and PIC (SN2), we do this by writing the
+ * appropriate Bridge Force Interrupt register.
+ */
+void
+pcibr_force_interrupt(pcibr_intr_t intr)
+{
+ unsigned bit;
+ unsigned bits;
+ pcibr_soft_t pcibr_soft = intr->bi_soft;
+
+ bits = intr->bi_ibits;
+ for (bit = 0; bit < 8; bit++) {
+ if (bits & (1 << bit)) {
+
+ PCIBR_DEBUG((PCIBR_DEBUG_INTR, pcibr_soft->bs_vhdl,
+ "pcibr_force_interrupt: bit=0x%x\n", bit));
+
+ pcireg_force_intr_set(pcibr_soft, bit);
+ }
+ }
+}
+
+/*ARGSUSED */
+pcibr_intr_t
+pcibr_intr_alloc(vertex_hdl_t pconn_vhdl,
+ device_desc_t dev_desc,
+ pciio_intr_line_t lines,
+ vertex_hdl_t owner_dev)
+{
+ pcibr_info_t pcibr_info = pcibr_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pcibr_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pcibr_info->f_mfast;
+ vertex_hdl_t xconn_vhdl = pcibr_soft->bs_conn;
+ int is_threaded = 0;
+
+ xtalk_intr_t *xtalk_intr_p;
+ pcibr_intr_t *pcibr_intr_p;
+ pcibr_intr_list_t *intr_list_p;
+
+ unsigned pcibr_int_bits;
+ unsigned pcibr_int_bit;
+ xtalk_intr_t xtalk_intr = (xtalk_intr_t)0;
+ hub_intr_t hub_intr;
+ pcibr_intr_t pcibr_intr;
+ pcibr_intr_list_t intr_entry;
+ pcibr_intr_list_t intr_list;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "pcibr_intr_alloc: %s%s%s%s%s\n",
+ !(lines & 15) ? " No INTs?" : "",
+ lines & 1 ? " INTA" : "",
+ lines & 2 ? " INTB" : "",
+ lines & 4 ? " INTC" : "",
+ lines & 8 ? " INTD" : ""));
+
+ pcibr_intr = kmalloc(sizeof (*(pcibr_intr)), GFP_KERNEL);
+ if (!pcibr_intr)
+ return NULL;
+ memset(pcibr_intr, 0, sizeof (*(pcibr_intr)));
+
+ pcibr_intr->bi_dev = pconn_vhdl;
+ pcibr_intr->bi_lines = lines;
+ pcibr_intr->bi_soft = pcibr_soft;
+ pcibr_intr->bi_ibits = 0; /* bits will be added below */
+ pcibr_intr->bi_func = 0; /* unset until connect */
+ pcibr_intr->bi_arg = 0; /* unset until connect */
+ pcibr_intr->bi_flags = is_threaded ? 0 : PCIIO_INTR_NOTHREAD;
+ pcibr_intr->bi_mustruncpu = CPU_NONE;
+ pcibr_intr->bi_ibuf.ib_in = 0;
+ pcibr_intr->bi_ibuf.ib_out = 0;
+ spin_lock_init(&pcibr_intr->bi_ibuf.ib_lock);
+
+ pcibr_int_bits = pcibr_soft->bs_intr_bits((pciio_info_t)pcibr_info,
+ lines, PCIBR_NUM_SLOTS(pcibr_soft));
+
+ /*
+ * For each PCI interrupt line requested, figure
+ * out which Bridge PCI Interrupt Line it maps
+ * to, and make sure there are xtalk resources
+ * allocated for it.
+ */
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "pcibr_intr_alloc: pcibr_int_bits: 0x%x\n", pcibr_int_bits));
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit ++) {
+ if (pcibr_int_bits & (1 << pcibr_int_bit)) {
+ xtalk_intr_p = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr;
+
+ xtalk_intr = *xtalk_intr_p;
+
+ if (xtalk_intr == NULL) {
+ /*
+ * This xtalk_intr_alloc is constrained for two reasons:
+ * 1) Normal interrupts and error interrupts need to be delivered
+ * through a single xtalk target widget so that there aren't any
+ * ordering problems with DMA, completion interrupts, and error
+ * interrupts. (Use of xconn_vhdl forces this.)
+ *
+ * 2) On SN1, addressing constraints on SN1 and Bridge force
+ * us to use a single PI number for all interrupts from a
+ * single Bridge. (SN1-specific code forces this).
+ */
+
+ /*
+ * All code dealing with threaded PCI interrupt handlers
+ * is located at the pcibr level. Because of this,
+ * we always want the lower layers (hub/heart_intr_alloc,
+ * intr_level_connect) to treat us as non-threaded so we
+ * don't set up a duplicate threaded environment. We make
+ * this happen by calling a special xtalk interface.
+ */
+ xtalk_intr = xtalk_intr_alloc_nothd(xconn_vhdl, dev_desc,
+ owner_dev);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "pcibr_intr_alloc: xtalk_intr=0x%lx\n", xtalk_intr));
+
+ /* both an assert and a runtime check on this:
+ * we need to check in non-DEBUG kernels, and
+ * the ASSERT gets us more information when
+ * we use DEBUG kernels.
+ */
+ ASSERT(xtalk_intr != NULL);
+ if (xtalk_intr == NULL) {
+ /* it is quite possible that our
+ * xtalk_intr_alloc failed because
+ * someone else got there first,
+ * and we can find their results
+ * in xtalk_intr_p.
+ */
+ if (!*xtalk_intr_p) {
+ printk(KERN_ALERT "pcibr_intr_alloc %s: "
+ "unable to get xtalk interrupt resources\n",
+ pcibr_soft->bs_name);
+ /* yes, we leak resources here. */
+ return 0;
+ }
+ } else if (compare_and_swap_ptr((void **) xtalk_intr_p, NULL, xtalk_intr)) {
+ /*
+ * now tell the bridge which slot is
+ * using this interrupt line.
+ */
+ pcireg_intr_device_bit_clr(pcibr_soft,
+ BRIDGE_INT_DEV_MASK(pcibr_int_bit));
+ pcireg_intr_device_bit_set(pcibr_soft,
+ (pciio_slot << BRIDGE_INT_DEV_SHFT(pcibr_int_bit)));
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "bridge intr bit %d clears my wrb\n",
+ pcibr_int_bit));
+ } else {
+ /* someone else got one allocated first;
+ * free the one we just created, and
+ * retrieve the one they allocated.
+ */
+ xtalk_intr_free(xtalk_intr);
+ xtalk_intr = *xtalk_intr_p;
+ }
+ }
+
+ pcibr_intr->bi_ibits |= 1 << pcibr_int_bit;
+
+ intr_entry = kmalloc(sizeof (*(intr_entry)), GFP_KERNEL);
+ if ( !intr_entry ) {
+ printk(KERN_ALERT "pcibr_intr_alloc %s: "
+ "unable to get memory\n",
+ pcibr_soft->bs_name);
+ return 0;
+ }
+ memset(intr_entry, 0, sizeof (*(intr_entry)));
+
+ intr_entry->il_next = NULL;
+ intr_entry->il_intr = pcibr_intr;
+ intr_entry->il_soft = pcibr_soft;
+ intr_entry->il_slot = pciio_slot;
+ intr_list_p =
+ &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "Bridge bit 0x%x wrap=0x%lx\n", pcibr_int_bit,
+ &(pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap)));
+
+ if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) {
+ /* we are the first interrupt on this bridge bit.
+ */
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "INT 0x%x (bridge bit %d) allocated [FIRST]\n",
+ pcibr_int_bits, pcibr_int_bit));
+ continue;
+ }
+ intr_list = *intr_list_p;
+ pcibr_intr_p = &intr_list->il_intr;
+ if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) {
+ /* first entry on list was erased,
+ * and we replaced it, so we
+ * don't need our intr_entry.
+ */
+ kfree(intr_entry);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "INT 0x%x (bridge bit %d) replaces erased first\n",
+ pcibr_int_bits, pcibr_int_bit));
+ continue;
+ }
+ intr_list_p = &intr_list->il_next;
+ if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) {
+ /* we are the new second interrupt on this bit.
+ */
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared = 1;
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "INT 0x%x (bridge bit %d) is new SECOND\n",
+ pcibr_int_bits, pcibr_int_bit));
+ continue;
+ }
+ while (1) {
+ pcibr_intr_p = &intr_list->il_intr;
+ if (compare_and_swap_ptr((void **) pcibr_intr_p, NULL, pcibr_intr)) {
+ /* an entry on list was erased,
+ * and we replaced it, so we
+ * don't need our intr_entry.
+ */
+ kfree(intr_entry);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "INT 0x%x (bridge bit %d) replaces erase Nth\n",
+ pcibr_int_bits, pcibr_int_bit));
+ break;
+ }
+ intr_list_p = &intr_list->il_next;
+ if (compare_and_swap_ptr((void **) intr_list_p, NULL, intr_entry)) {
+ /* entry appended to share list
+ */
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "INT 0x%x (bridge bit %d) is new Nth\n",
+ pcibr_int_bits, pcibr_int_bit));
+ break;
+ }
+ /* step to next record in chain
+ */
+ intr_list = *intr_list_p;
+ }
+ }
+ }
+
+ hub_intr = (hub_intr_t)xtalk_intr;
+ pcibr_intr->bi_irq = hub_intr->i_bit;
+ pcibr_intr->bi_cpu = hub_intr->i_cpuid;
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pconn_vhdl,
+ "pcibr_intr_alloc complete: pcibr_intr=0x%lx\n", pcibr_intr));
+ return pcibr_intr;
+}
+
+/*ARGSUSED */
+void
+pcibr_intr_free(pcibr_intr_t pcibr_intr)
+{
+ unsigned pcibr_int_bits = pcibr_intr->bi_ibits;
+ pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft;
+ unsigned pcibr_int_bit;
+ pcibr_intr_list_t intr_list;
+ int intr_shared;
+ xtalk_intr_t *xtalk_intrp;
+
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++) {
+ if (pcibr_int_bits & (1 << pcibr_int_bit)) {
+ for (intr_list =
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_list;
+ intr_list != NULL;
+ intr_list = intr_list->il_next)
+ if (compare_and_swap_ptr((void **) &intr_list->il_intr,
+ pcibr_intr,
+ NULL)) {
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC,
+ pcibr_intr->bi_dev,
+ "pcibr_intr_free: cleared hdlr from bit 0x%x\n",
+ pcibr_int_bit));
+ }
+ /* If this interrupt line is not being shared between multiple
+ * devices, release the xtalk interrupt resources.
+ */
+ intr_shared =
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared;
+ xtalk_intrp = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr;
+
+ if ((!intr_shared) && (*xtalk_intrp)) {
+
+ xtalk_intr_free(*xtalk_intrp);
+ *xtalk_intrp = 0;
+
+ /* Clear the PCI device interrupt to bridge interrupt pin
+ * mapping.
+ */
+ pcireg_intr_device_bit_clr(pcibr_soft,
+ BRIDGE_INT_DEV_MASK(pcibr_int_bit));
+ }
+ }
+ }
+ kfree(pcibr_intr);
+}
+
+void
+pcibr_setpciint(xtalk_intr_t xtalk_intr)
+{
+ iopaddr_t addr;
+ xtalk_intr_vector_t vect;
+ vertex_hdl_t vhdl;
+ int bus_num;
+ int pcibr_int_bit;
+ void *bridge;
+
+ addr = xtalk_intr_addr_get(xtalk_intr);
+ vect = xtalk_intr_vector_get(xtalk_intr);
+ vhdl = xtalk_intr_dev_get(xtalk_intr);
+
+ /* bus and int_bits are stored in sfarg, bus bit3, int_bits bit2:0 */
+ pcibr_int_bit = *((int *)xtalk_intr_sfarg_get(xtalk_intr)) & 0x7;
+ bus_num = ((*((int *)xtalk_intr_sfarg_get(xtalk_intr)) & 0x8) >> 3);
+
+ bridge = pcibr_bridge_ptr_get(vhdl, bus_num);
+ pcireg_bridge_intr_addr_vect_set(bridge, pcibr_int_bit, vect);
+ pcireg_bridge_intr_addr_addr_set(bridge, pcibr_int_bit, addr);
+}
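pcibr_setpciint() unpacks bus and interrupt bit from a single int, matching the `(bs_busnum << 3) | pcibr_int_bit` packing done at connect time below. The round trip in isolation (helper names are illustrative):

```c
#include <assert.h>

/* Pack bus number (bit 3) and bridge interrupt bit (bits 2:0) into
 * the sfarg int, and unpack it the way pcibr_setpciint() does.
 */
static int sfarg_encode(int bus, int int_bit)
{
    return (bus << 3) | (int_bit & 0x7);
}

static void sfarg_decode(int sfarg, int *bus, int *int_bit)
{
    *int_bit = sfarg & 0x7;
    *bus = (sfarg & 0x8) >> 3;
}
```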
+
+/*ARGSUSED */
+int
+pcibr_intr_connect(pcibr_intr_t pcibr_intr, intr_func_t intr_func, intr_arg_t intr_arg)
+{
+ pcibr_soft_t pcibr_soft;
+ unsigned pcibr_int_bits;
+ unsigned pcibr_int_bit;
+ unsigned long s;
+
+ if (pcibr_intr == NULL)
+ return -1;
+
+ pcibr_soft = pcibr_intr->bi_soft;
+ pcibr_int_bits = pcibr_intr->bi_ibits;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_intr->bi_dev,
+ "pcibr_intr_connect: intr_func=0x%lx, intr_arg=0x%lx\n",
+ intr_func, intr_arg));
+
+ pcibr_intr->bi_func = intr_func;
+ pcibr_intr->bi_arg = intr_arg;
+ *((volatile unsigned *)&pcibr_intr->bi_flags) |= PCIIO_INTR_CONNECTED;
+
+ /*
+ * For each PCI interrupt line requested, figure
+ * out which Bridge PCI Interrupt Line it maps
+ * to, and make sure there are xtalk resources
+ * allocated for it.
+ */
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++)
+ if (pcibr_int_bits & (1 << pcibr_int_bit)) {
+ pcibr_intr_wrap_t intr_wrap;
+ xtalk_intr_t xtalk_intr;
+ void *int_addr;
+
+ xtalk_intr = pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr;
+ intr_wrap = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap;
+
+ /*
+ * If this interrupt line is being shared and the connect has
+ * already been done, no need to do it again.
+ */
+ if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected)
+ continue;
+
+
+ /*
+ * Use the pcibr wrapper function to handle all Bridge interrupts
+ * regardless of whether the interrupt line is shared or not.
+ */
+ int_addr = pcireg_intr_addr_addr(pcibr_soft, pcibr_int_bit);
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_int_bit =
+ ((pcibr_soft->bs_busnum << 3) | pcibr_int_bit);
+ xtalk_intr_connect(xtalk_intr,
+ NULL,
+ (intr_arg_t) intr_wrap,
+ (xtalk_intr_setfunc_t) pcibr_setpciint,
+ &pcibr_soft->bs_intr[pcibr_int_bit].bsi_int_bit);
+
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 1;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_intr->bi_dev,
+ "pcibr_setpciint: int_addr=0x%lx, *int_addr=0x%lx, "
+ "pcibr_int_bit=0x%x\n", int_addr,
+ pcireg_intr_addr_get(pcibr_soft, pcibr_int_bit),
+ pcibr_int_bit));
+ }
+
+ s = pcibr_lock(pcibr_soft);
+ pcireg_intr_enable_bit_set(pcibr_soft, pcibr_int_bits);
+ pcireg_tflush_get(pcibr_soft);
+ pcibr_unlock(pcibr_soft, s);
+
+ return 0;
+}
+
+/*ARGSUSED */
+void
+pcibr_intr_disconnect(pcibr_intr_t pcibr_intr)
+{
+ pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft;
+ unsigned pcibr_int_bits = pcibr_intr->bi_ibits;
+ unsigned pcibr_int_bit;
+ pcibr_intr_wrap_t intr_wrap;
+ unsigned long s;
+
+ /* Stop calling the function. Now.
+ */
+ *((volatile unsigned *)&pcibr_intr->bi_flags) &= ~PCIIO_INTR_CONNECTED;
+ pcibr_intr->bi_func = 0;
+ pcibr_intr->bi_arg = 0;
+ /*
+ * For each PCI interrupt line requested, figure
+ * out which Bridge PCI Interrupt Line it maps
+ * to, and disconnect the interrupt.
+ */
+
+ /* don't disable interrupts for lines that
+ * are shared between devices.
+ */
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++)
+ if ((pcibr_int_bits & (1 << pcibr_int_bit)) &&
+ (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared))
+ pcibr_int_bits &= ~(1 << pcibr_int_bit);
+ if (!pcibr_int_bits)
+ return;
+
+ s = pcibr_lock(pcibr_soft);
+ pcireg_intr_enable_bit_clr(pcibr_soft, pcibr_int_bits);
+ pcireg_tflush_get(pcibr_soft); /* wait until Bridge PIO complete */
+ pcibr_unlock(pcibr_soft, s);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_intr->bi_dev,
+ "pcibr_intr_disconnect: disabled int_bits=0x%x\n",
+ pcibr_int_bits));
+
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++)
+ if (pcibr_int_bits & (1 << pcibr_int_bit)) {
+
+ /* if the interrupt line is now shared,
+ * do not disconnect it.
+ */
+ if (pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared)
+ continue;
+
+ xtalk_intr_disconnect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr);
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_connected = 0;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_intr->bi_dev,
+ "pcibr_intr_disconnect: disconnect int_bits=0x%x\n",
+ pcibr_int_bits));
+
+ /* if we are sharing the interrupt line,
+ * connect us up; this closes the hole
+ * where another pcibr_intr_alloc()
+ * was in progress as we disconnected.
+ */
+ intr_wrap = &pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap;
+ if (!pcibr_soft->bs_intr[pcibr_int_bit].bsi_pcibr_intr_wrap.iw_shared)
+ continue;
+
+ pcibr_soft->bs_intr[pcibr_int_bit].bsi_int_bit =
+ ((pcibr_soft->bs_busnum << 3) | pcibr_int_bit);
+ xtalk_intr_connect(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr,
+ NULL,
+ (intr_arg_t) intr_wrap,
+ (xtalk_intr_setfunc_t) pcibr_setpciint,
+ &pcibr_soft->bs_intr[pcibr_int_bit].bsi_int_bit);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_intr->bi_dev,
+ "pcibr_intr_disconnect: now-sharing int_bit=0x%x\n",
+ pcibr_int_bit));
+ }
+}
+
+/*ARGSUSED */
+vertex_hdl_t
+pcibr_intr_cpu_get(pcibr_intr_t pcibr_intr)
+{
+ pcibr_soft_t pcibr_soft = pcibr_intr->bi_soft;
+ unsigned pcibr_int_bits = pcibr_intr->bi_ibits;
+ unsigned pcibr_int_bit;
+
+ for (pcibr_int_bit = 0; pcibr_int_bit < 8; pcibr_int_bit++)
+ if (pcibr_int_bits & (1 << pcibr_int_bit))
+ return xtalk_intr_cpu_get(pcibr_soft->bs_intr[pcibr_int_bit].bsi_xtalk_intr);
+ return 0;
+}
+
+/* =====================================================================
+ * INTERRUPT HANDLING
+ */
+void
+pcibr_clearwidint(pcibr_soft_t pcibr_soft)
+{
+ pcireg_intr_dst_set(pcibr_soft, 0);
+}
+
+
+void
+pcibr_setwidint(xtalk_intr_t intr)
+{
+ xwidgetnum_t targ = xtalk_intr_target_get(intr);
+ iopaddr_t addr = xtalk_intr_addr_get(intr);
+ xtalk_intr_vector_t vect = xtalk_intr_vector_get(intr);
+
+ pcibr_soft_t bridge = (pcibr_soft_t)xtalk_intr_sfarg_get(intr);
+
+ pcireg_intr_dst_target_id_set(bridge, targ);
+ pcireg_intr_dst_addr_set(bridge, addr);
+ pcireg_intr_host_err_set(bridge, vect);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+
+/*
+ * Identification Register Access -- Read Only 0000_0000
+ */
+static uint64_t
+__pcireg_id_get(pic_t *bridge)
+{
+ return bridge->p_wid_id;
+}
+
+uint64_t
+pcireg_bridge_id_get(void *ptr)
+{
+ return __pcireg_id_get((pic_t *)ptr);
+}
+
+uint64_t
+pcireg_id_get(pcibr_soft_t ptr)
+{
+ return __pcireg_id_get((pic_t *)ptr->bs_base);
+}
+
+
+
+/*
+ * Address Bus Side Holding Register Access -- Read Only 0000_0010
+ */
+uint64_t
+pcireg_bus_err_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_err;
+}
+
+
+/*
+ * Control Register Access -- Read/Write 0000_0020
+ */
+static uint64_t
+__pcireg_control_get(pic_t *bridge)
+{
+ return bridge->p_wid_control;
+}
+
+uint64_t
+pcireg_bridge_control_get(void *ptr)
+{
+ return __pcireg_control_get((pic_t *)ptr);
+}
+
+uint64_t
+pcireg_control_get(pcibr_soft_t ptr)
+{
+ return __pcireg_control_get((pic_t *)ptr->bs_base);
+}
+
+
+void
+pcireg_control_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ /* WAR for PV 439897 & 454474. Add a readback of the control
+ * register. Lock to protect against MP accesses to this
+ * register along with other write-only registers (see PVs).
+ * This register isn't accessed in the "hot path", so the splhi
+ * shouldn't be a bottleneck.
+ */
+
+ bridge->p_wid_control = val;
+ bridge->p_wid_control; /* WAR */
+}
+
+
+void
+pcireg_control_bit_clr(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ /* WAR for PV 439897 & 454474. Add a readback of the control
+ * register. Lock to protect against MP accesses to this
+ * register along with other write-only registers (see PVs).
+ * This register isn't accessed in the "hot path", so the splhi
+ * shouldn't be a bottleneck.
+ */
+
+ bridge->p_wid_control &= ~bits;
+ bridge->p_wid_control; /* WAR */
+}
+
+
+void
+pcireg_control_bit_set(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ /* WAR for PV 439897 & 454474. Add a readback of the control
+ * register. Lock to protect against MP accesses to this
+ * register along with other write-only registers (see PVs).
+ * This register isn't accessed in the "hot path", so the splhi
+ * shouldn't be a bottleneck.
+ */
+
+ bridge->p_wid_control |= bits;
+ bridge->p_wid_control; /* WAR */
+}
+
+/*
+ * Bus Speed (from control register); -- Read Only access 0000_0020
+ * 0x0 == 33MHz, 0x1 == 66MHz, 0x2 == 100MHz, 0x3 == 133MHz
+ */
+uint64_t
+pcireg_speed_get(pcibr_soft_t ptr)
+{
+ uint64_t speedbits;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ speedbits = bridge->p_wid_control & PIC_CTRL_PCI_SPEED;
+ return (speedbits >> 4);
+}
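As a standalone illustration of the encoding noted in the comment above, the 2-bit field returned by pcireg_speed_get() maps to a clock rate like this. This is a sketch for exposition only, not driver code; the 0..3 to MHz table comes from the comment, and the helper name is hypothetical:

```c
#include <stdint.h>
#include <assert.h>

/* Map the 2-bit PCI bus speed field (as returned by pcireg_speed_get())
 * to MHz, per the register comment: 0 -> 33, 1 -> 66, 2 -> 100, 3 -> 133. */
static unsigned int pci_speed_mhz(uint64_t speed_field)
{
	static const unsigned int mhz[4] = { 33, 66, 100, 133 };

	return mhz[speed_field & 0x3];
}
```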
+
+/*
+ * Bus Mode (ie. PCIX or PCI) (from Status register); 0000_0008
+ * 0x0 == PCI, 0x1 == PCI-X
+ */
+uint64_t
+pcireg_mode_get(pcibr_soft_t ptr)
+{
+ uint64_t pcix_active_bit;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ pcix_active_bit = bridge->p_wid_stat & PIC_STAT_PCIX_ACTIVE;
+ return (pcix_active_bit >> PIC_STAT_PCIX_ACTIVE_SHFT);
+}
+
+void
+pcireg_req_timeout_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_wid_req_timeout = val;
+}
+
+/*
+ * Interrupt Destination Addr Register Access -- Read/Write 0000_0038
+ */
+
+void
+pcireg_intr_dst_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_wid_int = val;
+}
+
+/*
+ * Intr Destination Addr Reg Access (target_id) -- Read/Write 0000_0038
+ */
+uint64_t
+pcireg_intr_dst_target_id_get(pcibr_soft_t ptr)
+{
+ uint64_t tid_bits;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ tid_bits = (bridge->p_wid_int & PIC_INTR_DEST_TID);
+ return (tid_bits >> PIC_INTR_DEST_TID_SHFT);
+}
+
+void
+pcireg_intr_dst_target_id_set(pcibr_soft_t ptr, uint64_t target_id)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_wid_int &= ~PIC_INTR_DEST_TID;
+ bridge->p_wid_int |=
+ ((target_id << PIC_INTR_DEST_TID_SHFT) & PIC_INTR_DEST_TID);
+}
+
+/*
+ * Intr Destination Addr Register Access (addr) -- Read/Write 0000_0038
+ */
+uint64_t
+pcireg_intr_dst_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_int & PIC_XTALK_ADDR_MASK;
+}
+
+void
+pcireg_intr_dst_addr_set(pcibr_soft_t ptr, uint64_t addr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_wid_int &= ~PIC_XTALK_ADDR_MASK;
+ bridge->p_wid_int |= (addr & PIC_XTALK_ADDR_MASK);
+}
+
+/*
+ * Cmd Word Holding Bus Side Error Register Access -- Read Only 0000_0040
+ */
+uint64_t
+pcireg_cmdword_err_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_err_cmdword;
+}
+
+/*
+ * PCI/PCIX Target Flush Register Access -- Read Only 0000_0050
+ */
+uint64_t
+pcireg_tflush_get(pcibr_soft_t ptr)
+{
+ uint64_t ret = 0;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ret = bridge->p_wid_tflush;
+
+ /* Read of the Target Flush should always return zero */
+ ASSERT_ALWAYS(ret == 0);
+ return ret;
+}
+
+/*
+ * Cmd Word Holding Link Side Error Register Access -- Read Only 0000_0058
+ */
+uint64_t
+pcireg_linkside_err_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_aux_err;
+}
+
+/*
+ * PCI Response Buffer Address Holding Register -- Read Only 0000_0068
+ */
+uint64_t
+pcireg_resp_err_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_resp;
+}
+
+/*
+ * PCI Resp Buffer Address Holding Reg (Address) -- Read Only 0000_0068
+ */
+uint64_t
+pcireg_resp_err_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_resp & PIC_RSP_BUF_ADDR;
+}
+
+/*
+ * PCI Resp Buffer Address Holding Register (Buffer)-- Read Only 0000_0068
+ */
+uint64_t
+pcireg_resp_err_buf_get(pcibr_soft_t ptr)
+{
+ uint64_t bufnum_bits;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bufnum_bits = (bridge->p_wid_resp_upper & PIC_RSP_BUF_NUM);
+ return (bufnum_bits >> PIC_RSP_BUF_NUM_SHFT);
+}
+
+/*
+ * PCI Resp Buffer Address Holding Register (Device)-- Read Only 0000_0068
+ */
+uint64_t
+pcireg_resp_err_dev_get(pcibr_soft_t ptr)
+{
+ uint64_t devnum_bits;
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ devnum_bits = (bridge->p_wid_resp_upper & PIC_RSP_BUF_DEV_NUM);
+ return (devnum_bits >> PIC_RSP_BUF_DEV_NUM_SHFT);
+}
+
+/*
+ * Address Holding Register Link Side Errors -- Read Only 0000_0078
+ */
+uint64_t
+pcireg_linkside_err_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_wid_addr_lkerr;
+}
+
+void
+pcireg_dirmap_wid_set(pcibr_soft_t ptr, uint64_t target)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_dir_map &= ~PIC_DIRMAP_WID;
+ bridge->p_dir_map |=
+ ((target << PIC_DIRMAP_WID_SHFT) & PIC_DIRMAP_WID);
+}
+
+void
+pcireg_dirmap_diroff_set(pcibr_soft_t ptr, uint64_t dir_off)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_dir_map &= ~PIC_DIRMAP_DIROFF;
+ bridge->p_dir_map |= (dir_off & PIC_DIRMAP_DIROFF);
+}
+
+void
+pcireg_dirmap_add512_set(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_dir_map |= PIC_DIRMAP_ADD512;
+}
+
+void
+pcireg_dirmap_add512_clr(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_dir_map &= ~PIC_DIRMAP_ADD512;
+}
+
+/*
+ * PCI Page Map Fault Address Register Access -- Read Only 0000_0090
+ */
+uint64_t
+pcireg_map_fault_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_map_fault;
+}
+
+/*
+ * Arbitration Register Access -- Read/Write 0000_00A0
+ */
+uint64_t
+pcireg_arbitration_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_arb;
+}
+
+void
+pcireg_arbitration_bit_set(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_arb |= bits;
+}
+
+/*
+ * Internal Ram Parity Error Register Access -- Read Only 0000_00B0
+ */
+uint64_t
+pcireg_parity_err_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_ate_parity_err;
+}
+
+/*
+ * Type 1 Configuration Register Access -- Read/Write 0000_00C8
+ */
+void
+pcireg_type1_cntr_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_pci_cfg = val;
+}
+
+/*
+ * PCI Bus Error Lower Addr Holding Reg Access -- Read Only 0000_00D8
+ */
+uint64_t
+pcireg_pci_bus_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pci_err;
+}
+
+/*
+ * PCI Bus Error Addr Holding Reg Access (Address) -- Read Only 0000_00D8
+ */
+uint64_t
+pcireg_pci_bus_addr_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pci_err & PIC_XTALK_ADDR_MASK;
+}
+
+/*
+ * Interrupt Status Register Access -- Read Only 0000_0100
+ */
+uint64_t
+pcireg_intr_status_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_int_status;
+}
+
+/*
+ * Interrupt Enable Register Access -- Read/Write 0000_0108
+ */
+uint64_t
+pcireg_intr_enable_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_int_enable;
+}
+
+void
+pcireg_intr_enable_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_enable = val;
+}
+
+void
+pcireg_intr_enable_bit_clr(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_enable &= ~bits;
+}
+
+void
+pcireg_intr_enable_bit_set(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_enable |= bits;
+}
+
+/*
+ * Interrupt Reset Register Access -- Write Only 0000_0110
+ */
+void
+pcireg_intr_reset_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_rst_stat = val;
+}
+
+void
+pcireg_intr_mode_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_mode = val;
+}
+
+void
+pcireg_intr_device_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_device = val;
+}
+
+static void
+__pcireg_intr_device_bit_set(pic_t *bridge, uint64_t bits)
+{
+ bridge->p_int_device |= bits;
+}
+
+void
+pcireg_bridge_intr_device_bit_set(void *ptr, uint64_t bits)
+{
+ __pcireg_intr_device_bit_set((pic_t *)ptr, bits);
+}
+
+void
+pcireg_intr_device_bit_set(pcibr_soft_t ptr, uint64_t bits)
+{
+ __pcireg_intr_device_bit_set((pic_t *)ptr->bs_base, bits);
+}
+
+void
+pcireg_intr_device_bit_clr(pcibr_soft_t ptr, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_device &= ~bits;
+}
+
+/*
+ * Host Error Interrupt Field Register Access -- Read/Write 0000_0128
+ */
+void
+pcireg_intr_host_err_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_int_host_err = val;
+}
+
+/*
+ * Interrupt Host Address Register -- Read/Write 0000_0130 - 0000_0168
+ */
+uint64_t
+pcireg_intr_addr_get(pcibr_soft_t ptr, int int_n)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_int_addr[int_n];
+}
+
+static void
+__pcireg_intr_addr_set(pic_t *bridge, int int_n, uint64_t val)
+{
+ bridge->p_int_addr[int_n] = val;
+}
+
+void
+pcireg_bridge_intr_addr_set(void *ptr, int int_n, uint64_t val)
+{
+ __pcireg_intr_addr_set((pic_t *)ptr, int_n, val);
+}
+
+void
+pcireg_intr_addr_set(pcibr_soft_t ptr, int int_n, uint64_t val)
+{
+ __pcireg_intr_addr_set((pic_t *)ptr->bs_base, int_n, val);
+}
+
+void *
+pcireg_intr_addr_addr(pcibr_soft_t ptr, int int_n)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return (void *)&(bridge->p_int_addr[int_n]);
+}
+
+static void
+__pcireg_intr_addr_vect_set(pic_t *bridge, int int_n, uint64_t vect)
+{
+ bridge->p_int_addr[int_n] &= ~PIC_HOST_INTR_FLD;
+ bridge->p_int_addr[int_n] |=
+ ((vect << PIC_HOST_INTR_FLD_SHFT) & PIC_HOST_INTR_FLD);
+}
+
+void
+pcireg_bridge_intr_addr_vect_set(void *ptr, int int_n, uint64_t vect)
+{
+ __pcireg_intr_addr_vect_set((pic_t *)ptr, int_n, vect);
+}
+
+void
+pcireg_intr_addr_vect_set(pcibr_soft_t ptr, int int_n, uint64_t vect)
+{
+ __pcireg_intr_addr_vect_set((pic_t *)ptr->bs_base, int_n, vect);
+}
+
+
+
+/*
+ * Intr Host Address Register (int_addr) -- Read/Write 0000_0130 - 0000_0168
+ */
+static void
+__pcireg_intr_addr_addr_set(pic_t *bridge, int int_n, uint64_t addr)
+{
+ bridge->p_int_addr[int_n] &= ~PIC_HOST_INTR_ADDR;
+ bridge->p_int_addr[int_n] |= (addr & PIC_HOST_INTR_ADDR);
+}
+
+void
+pcireg_bridge_intr_addr_addr_set(void *ptr, int int_n, uint64_t addr)
+{
+ __pcireg_intr_addr_addr_set((pic_t *)ptr, int_n, addr);
+}
+
+void
+pcireg_intr_addr_addr_set(pcibr_soft_t ptr, int int_n, uint64_t addr)
+{
+ __pcireg_intr_addr_addr_set((pic_t *)ptr->bs_base, int_n, addr);
+}
+
+/*
+ * Multiple Interrupt Register Access -- Read Only 0000_0178
+ */
+uint64_t
+pcireg_intr_multiple_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_mult_int;
+}
+
+/*
+ * Force Always Intr Register Access -- Write Only 0000_0180 - 0000_01B8
+ */
+static void *
+__pcireg_force_always_addr_get(pic_t *bridge, int int_n)
+{
+ return (void *)&(bridge->p_force_always[int_n]);
+}
+
+void *
+pcireg_bridge_force_always_addr_get(void *ptr, int int_n)
+{
+ return __pcireg_force_always_addr_get((pic_t *)ptr, int_n);
+}
+
+void *
+pcireg_force_always_addr_get(pcibr_soft_t ptr, int int_n)
+{
+ return __pcireg_force_always_addr_get((pic_t *)ptr->bs_base, int_n);
+}
+
+/*
+ * Force Interrupt Register Access -- Write Only 0000_01C0 - 0000_01F8
+ */
+void
+pcireg_force_intr_set(pcibr_soft_t ptr, int int_n)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_force_pin[int_n] = 1;
+}
+
+/*
+ * Device(x) Register Access -- Read/Write 0000_0200 - 0000_0218
+ */
+uint64_t
+pcireg_device_get(pcibr_soft_t ptr, int device)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ASSERT_ALWAYS((device >= 0) && (device <= 3));
+ return bridge->p_device[device];
+}
+
+void
+pcireg_device_set(pcibr_soft_t ptr, int device, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ASSERT_ALWAYS((device >= 0) && (device <= 3));
+ bridge->p_device[device] = val;
+}
+
+/*
+ * Device(x) Write Buffer Flush Reg Access -- Read Only 0000_0240 - 0000_0258
+ */
+uint64_t
+pcireg_wrb_flush_get(pcibr_soft_t ptr, int device)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+ uint64_t ret = 0;
+
+ ASSERT_ALWAYS((device >= 0) && (device <= 3));
+ ret = bridge->p_wr_req_buf[device];
+
+ /* Read of the Write Buffer Flush should always return zero */
+ ASSERT_ALWAYS(ret == 0);
+ return ret;
+}
+
+/*
+ * Even/Odd RRB Register Access -- Read/Write 0000_0280 - 0000_0288
+ */
+uint64_t
+pcireg_rrb_get(pcibr_soft_t ptr, int even_odd)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_rrb_map[even_odd];
+}
+
+void
+pcireg_rrb_set(pcibr_soft_t ptr, int even_odd, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_rrb_map[even_odd] = val;
+}
+
+void
+pcireg_rrb_bit_set(pcibr_soft_t ptr, int even_odd, uint64_t bits)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_rrb_map[even_odd] |= bits;
+}
+
+/*
+ * RRB Status Register Access -- Read Only 0000_0290
+ */
+uint64_t
+pcireg_rrb_status_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_resp_status;
+}
+
+/*
+ * RRB Clear Register Access -- Write Only 0000_0298
+ */
+void
+pcireg_rrb_clear_set(pcibr_soft_t ptr, uint64_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ bridge->p_resp_clear = val;
+}
+
+/*
+ * PCIX Bus Error Address Register Access -- Read Only 0000_0600
+ */
+uint64_t
+pcireg_pcix_bus_err_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_bus_err_addr;
+}
+
+/*
+ * PCIX Bus Error Attribute Register Access -- Read Only 0000_0608
+ */
+uint64_t
+pcireg_pcix_bus_err_attr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_bus_err_attr;
+}
+
+/*
+ * PCIX Bus Error Data Register Access -- Read Only 0000_0610
+ */
+uint64_t
+pcireg_pcix_bus_err_data_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_bus_err_data;
+}
+
+/*
+ * PCIX PIO Split Request Address Register Access -- Read Only 0000_0618
+ */
+uint64_t
+pcireg_pcix_pio_split_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_pio_split_addr;
+}
+
+/*
+ * PCIX PIO Split Request Attribute Register Access -- Read Only 0000_0620
+ */
+uint64_t
+pcireg_pcix_pio_split_attr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_pio_split_attr;
+}
+
+/*
+ * PCIX DMA Request Error Attribute Register Access -- Read Only 0000_0628
+ */
+uint64_t
+pcireg_pcix_req_err_attr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_dma_req_err_attr;
+}
+
+/*
+ * PCIX DMA Request Error Address Register Access -- Read Only 0000_0630
+ */
+uint64_t
+pcireg_pcix_req_err_addr_get(pcibr_soft_t ptr)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ return bridge->p_pcix_dma_req_err_addr;
+}
+
+/*
+ * Type 0 Configuration Space Access -- Read/Write
+ */
+cfg_p
+pcireg_type0_cfg_addr(pcibr_soft_t ptr, uint8_t slot, uint8_t func, int off)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ /* Type 0 Config space accesses on PIC are 1-4, not 0-3 since
+ * it is a PCIX Bridge. See sys/PCI/pic.h for explanation.
+ */
+ slot++;
+ ASSERT_ALWAYS(((int) slot >= 1) && ((int) slot <= 4));
+ return &(bridge->p_type0_cfg_dev[slot].f[func].l[(off / 4)]);
+}
+
+/*
+ * Type 1 Configuration Space Access -- Read/Write
+ */
+cfg_p
+pcireg_type1_cfg_addr(pcibr_soft_t ptr, uint8_t func, int offset)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ /*
+ * Return a config space address for the given func/offset.
+ * Note the returned ptr is a 32-bit-word-aligned ptr (ie. cfg_p)
+ * pointing to the 32-bit word that contains the "offset" byte.
+ */
+ return &(bridge->p_type1_cfg.f[func].l[(offset / 4)]);
+}
+
+/*
+ * Internal ATE SSRAM Access -- Read/Write
+ */
+bridge_ate_t
+pcireg_int_ate_get(pcibr_soft_t ptr, int ate_index)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ASSERT_ALWAYS((ate_index >= 0) && (ate_index < 1024));
+ return bridge->p_int_ate_ram[ate_index];
+}
+
+void
+pcireg_int_ate_set(pcibr_soft_t ptr, int ate_index, bridge_ate_t val)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ASSERT_ALWAYS((ate_index >= 0) && (ate_index < 1024));
+ bridge->p_int_ate_ram[ate_index] = (picate_t) val;
+}
+
+bridge_ate_p
+pcireg_int_ate_addr(pcibr_soft_t ptr, int ate_index)
+{
+ pic_t *bridge = (pic_t *)ptr->bs_base;
+
+ ASSERT_ALWAYS((ate_index >= 0) && (ate_index < 1024));
+ return &(bridge->p_int_ate_ram[ate_index]);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+
+void pcibr_rrb_alloc_init(pcibr_soft_t, int, int, int);
+void pcibr_rrb_alloc_more(pcibr_soft_t, int, int, int);
+
+int pcibr_wrb_flush(vertex_hdl_t);
+int pcibr_rrb_alloc(vertex_hdl_t, int *, int *);
+int pcibr_rrb_check(vertex_hdl_t, int *, int *, int *, int *);
+int pcibr_alloc_all_rrbs(vertex_hdl_t, int, int, int, int,
+ int, int, int, int, int);
+void pcibr_rrb_flush(vertex_hdl_t);
+int pcibr_slot_initial_rrb_alloc(vertex_hdl_t,pciio_slot_t);
+
+void pcibr_rrb_debug(char *, pcibr_soft_t);
+
+
+/*
+ * RRB Management
+ *
+ * All the do_pcibr_rrb_ routines manipulate the Read Response Buffer (rrb)
+ * registers within the Bridge. Two 32-bit registers (b_rrb_map[2], also known
+ * as the b_even_resp & b_odd_resp registers) are used to allocate the 16
+ * rrbs to devices. The b_even_resp register represents even-numbered devices,
+ * and b_odd_resp represents odd-numbered devices. Each rrb is represented by
+ * 4 bits within a register.
+ * BRIDGE & XBRIDGE: 1 enable bit, 1 virtual channel bit, 2 device bits
+ * PIC: 1 enable bit, 2 virtual channel bits, 1 device bit
+ * PIC has 4 devices per bus, and 4 virtual channels (1 normal & 3 virtual)
+ * per device. BRIDGE & XBRIDGE have 8 devices per bus and 2 virtual
+ * channels (1 normal & 1 virtual) per device. See the BRIDGE and PIC ASIC
+ * Programmers Reference guides for more information.
+ */
+
+#define RRB_MASK (0xf) /* mask a single rrb within reg */
+#define RRB_SIZE (4) /* sizeof rrb within reg (bits) */
+
+#define RRB_ENABLE_BIT (0x8) /* [BRIDGE | PIC]_RRB_EN */
+#define NUM_PDEV_BITS (1)
+#define NUMBER_VCHANNELS (4)
+#define SLOT_2_PDEV(slot) ((slot) >> 1)
+#define SLOT_2_RRB_REG(slot) ((slot) & 0x1)
+
+#define RRB_VALID(rrb) (0x00010000 << (rrb))
+#define RRB_INUSE(rrb) (0x00000001 << (rrb))
+#define RRB_CLEAR(rrb) (0x00000001 << (rrb))
+
+/* validate that the slot and virtual channel are valid */
+#define VALIDATE_SLOT_n_VCHAN(s, v) \
+ (((((s) != PCIIO_SLOT_NONE) && ((s) <= (pciio_slot_t)3)) && \
+ (((v) >= 0) && ((v) <= 3))) ? 1 : 0)
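As a standalone sketch of the 4-bit PIC RRB field layout described in the comment above (1 enable bit, 2 virtual channel bits, 1 pdev bit), the compose/decode helpers below mirror the macros in this file. They are illustrative only, not driver code; the helper names are hypothetical:

```c
#include <stdint.h>
#include <assert.h>

#define RRB_ENABLE_BIT	0x8	/* same value as the driver macro above */
#define NUM_PDEV_BITS	1

/* Build one 4-bit PIC rrb field from a vchan (0-3) and pdev (0-1). */
static uint16_t rrb_compose(int vchan, int pdev)
{
	return RRB_ENABLE_BIT | (vchan << NUM_PDEV_BITS) | (pdev & 0x1);
}

static int rrb_vchan(uint16_t rrb)   { return (rrb >> NUM_PDEV_BITS) & 0x3; }
static int rrb_pdev(uint16_t rrb)    { return rrb & 0x1; }
static int rrb_enabled(uint16_t rrb) { return (rrb & RRB_ENABLE_BIT) != 0; }
```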
+
+/*
+ * Count how many RRBs are marked valid for the specified PCI slot
+ * and virtual channel. Return the count.
+ */
+static int
+do_pcibr_rrb_count_valid(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ int vchan)
+{
+ uint64_t tmp;
+ uint16_t enable_bit, vchan_bits, pdev_bits, rrb_bits;
+ int rrb_index, cnt=0;
+
+ if (!VALIDATE_SLOT_n_VCHAN(slot, vchan)) {
+ printk(KERN_WARNING "do_pcibr_rrb_count_valid() invalid slot/vchan [%d/%d]\n", slot, vchan);
+ return 0;
+ }
+
+ enable_bit = RRB_ENABLE_BIT;
+ vchan_bits = vchan << NUM_PDEV_BITS;
+ pdev_bits = SLOT_2_PDEV(slot);
+ rrb_bits = enable_bit | vchan_bits | pdev_bits;
+
+ tmp = pcireg_rrb_get(pcibr_soft, SLOT_2_RRB_REG(slot));
+
+ for (rrb_index = 0; rrb_index < 8; rrb_index++) {
+ if ((tmp & RRB_MASK) == rrb_bits)
+ cnt++;
+ tmp = (tmp >> RRB_SIZE);
+ }
+ return cnt;
+}
+
+
+/*
+ * Count how many RRBs are available to be allocated to the specified
+ * slot. Return the count.
+ */
+static int
+do_pcibr_rrb_count_avail(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot)
+{
+ uint64_t tmp;
+ uint16_t enable_bit;
+ int rrb_index, cnt=0;
+
+ if (!VALIDATE_SLOT_n_VCHAN(slot, 0)) {
+ printk(KERN_WARNING "do_pcibr_rrb_count_avail() invalid slot/vchan\n");
+ return 0;
+ }
+
+ enable_bit = RRB_ENABLE_BIT;
+
+ tmp = pcireg_rrb_get(pcibr_soft, SLOT_2_RRB_REG(slot));
+
+ for (rrb_index = 0; rrb_index < 8; rrb_index++) {
+ if ((tmp & enable_bit) != enable_bit)
+ cnt++;
+ tmp = (tmp >> RRB_SIZE);
+ }
+ return cnt;
+}
+
+
+/*
+ * Allocate some additional RRBs for the specified slot and the specified
+ * virtual channel. Returns -1 if there were insufficient free RRBs to
+ * satisfy the request, or 0 if the request was fulfilled.
+ *
+ * Note that if a request can be partially filled, it will be, even if
+ * we return failure.
+ */
+static int
+do_pcibr_rrb_alloc(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ int vchan,
+ int more)
+{
+ uint64_t reg, tmp = 0;
+ uint16_t enable_bit, vchan_bits, pdev_bits, rrb_bits;
+ int rrb_index;
+
+ if (!VALIDATE_SLOT_n_VCHAN(slot, vchan)) {
+ printk(KERN_WARNING "do_pcibr_rrb_alloc() invalid slot/vchan\n");
+ return -1;
+ }
+
+ enable_bit = RRB_ENABLE_BIT;
+ vchan_bits = vchan << NUM_PDEV_BITS;
+ pdev_bits = SLOT_2_PDEV(slot);
+ rrb_bits = enable_bit | vchan_bits | pdev_bits;
+
+ reg = tmp = pcireg_rrb_get(pcibr_soft, SLOT_2_RRB_REG(slot));
+
+ for (rrb_index = 0; ((rrb_index < 8) && (more > 0)); rrb_index++) {
+ if ((tmp & enable_bit) != enable_bit) {
+ /* clear the rrb and OR in the new rrb into 'reg' */
+ reg = reg & ~(RRB_MASK << (RRB_SIZE * rrb_index));
+ reg = reg | (rrb_bits << (RRB_SIZE * rrb_index));
+ more--;
+ }
+ tmp = (tmp >> RRB_SIZE);
+ }
+
+ pcireg_rrb_set(pcibr_soft, SLOT_2_RRB_REG(slot), reg);
+ return (more ? -1 : 0);
+}
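The nibble-scan used by do_pcibr_rrb_alloc() above can be shown in isolation: walk the eight 4-bit rrb fields of a map register, claiming each disabled one until the request is met. This is a sketch under the register layout given by the macros earlier in this file, not driver code, and `rrb_claim` is a hypothetical name:

```c
#include <stdint.h>
#include <assert.h>

#define RRB_MASK	0xf	/* values mirror the driver macros above */
#define RRB_SIZE	4
#define RRB_ENABLE_BIT	0x8

/* Claim up to 'want' disabled rrb fields in *reg, writing 'rrb_bits'
 * into each claimed field. Returns 0 if fully satisfied, -1 if not
 * (a partial allocation is still written back, as in the driver). */
static int rrb_claim(uint64_t *reg, uint16_t rrb_bits, int want)
{
	uint64_t tmp = *reg;
	int i;

	for (i = 0; i < 8 && want > 0; i++) {
		if ((tmp & RRB_ENABLE_BIT) != RRB_ENABLE_BIT) {
			*reg &= ~((uint64_t)RRB_MASK << (RRB_SIZE * i));
			*reg |= ((uint64_t)rrb_bits << (RRB_SIZE * i));
			want--;
		}
		tmp >>= RRB_SIZE;
	}
	return want ? -1 : 0;
}
```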
+
+/*
+ * Wait for the specified rrb to have no outstanding XIO pkts
+ * and for all data to be drained. Mark the rrb as no longer being
+ * valid.
+ */
+static void
+do_pcibr_rrb_clear(pcibr_soft_t pcibr_soft, int rrb)
+{
+ uint64_t status;
+
+ /* bridge_lock must be held; this RRB must be disabled. */
+
+ /* wait until RRB has no outstanding XIO packets. */
+ status = pcireg_rrb_status_get(pcibr_soft);
+ while (status & RRB_INUSE(rrb)) {
+ status = pcireg_rrb_status_get(pcibr_soft);
+ }
+
+ /* if the RRB has data, drain it. */
+ if (status & RRB_VALID(rrb)) {
+ pcireg_rrb_clear_set(pcibr_soft, RRB_CLEAR(rrb));
+
+ /* wait until RRB is no longer valid. */
+ status = pcireg_rrb_status_get(pcibr_soft);
+ while (status & RRB_VALID(rrb)) {
+ status = pcireg_rrb_status_get(pcibr_soft);
+ }
+ }
+}
+
+
+/*
+ * Release some of the RRBs that have been allocated for the specified
+ * slot. Returns zero for success, or negative if it was unable to free
+ * that many RRBs.
+ *
+ * Note that if a request can be partially fulfilled, it will be, even
+ * if we return failure.
+ */
+static int
+do_pcibr_rrb_free(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot,
+ int vchan,
+ int less)
+{
+ uint64_t reg, tmp = 0, clr = 0;
+ uint16_t enable_bit, vchan_bits, pdev_bits, rrb_bits;
+ int rrb_index;
+
+ if (!VALIDATE_SLOT_n_VCHAN(slot, vchan)) {
+ printk(KERN_WARNING "do_pcibr_rrb_free() invalid slot/vchan\n");
+ return -1;
+ }
+
+ enable_bit = RRB_ENABLE_BIT;
+ vchan_bits = vchan << NUM_PDEV_BITS;
+ pdev_bits = SLOT_2_PDEV(slot);
+ rrb_bits = enable_bit | vchan_bits | pdev_bits;
+
+ reg = tmp = pcireg_rrb_get(pcibr_soft, SLOT_2_RRB_REG(slot));
+
+ for (rrb_index = 0; ((rrb_index < 8) && (less > 0)); rrb_index++) {
+ if ((tmp & RRB_MASK) == rrb_bits) {
+ /*
+ * The old do_pcibr_rrb_free() code clears only the enable bit.
+ * Arguably the whole rrb should be cleared, i.e.:
+ * reg = reg & ~(RRB_MASK << (RRB_SIZE * rrb_index));
+ * but to stay compatible with the old code we only clear enable.
+ */
+ reg = reg & ~(RRB_ENABLE_BIT << (RRB_SIZE * rrb_index));
+ clr = clr | (enable_bit << (RRB_SIZE * rrb_index));
+ less--;
+ }
+ tmp = (tmp >> RRB_SIZE);
+ }
+
+ pcireg_rrb_set(pcibr_soft, SLOT_2_RRB_REG(slot), reg);
+
+ /* call do_pcibr_rrb_clear() for all the rrbs we've freed */
+ for (rrb_index = 0; rrb_index < 8; rrb_index++) {
+ int evn_odd = SLOT_2_RRB_REG(slot);
+ if (clr & (enable_bit << (RRB_SIZE * rrb_index)))
+ do_pcibr_rrb_clear(pcibr_soft, (2 * rrb_index) + evn_odd);
+ }
+
+ return (less ? -1 : 0);
+}
+
+/*
+ * Flush the specified rrb by calling do_pcibr_rrb_clear(). This
+ * routine is just a wrapper to make sure the rrb is disabled
+ * before calling do_pcibr_rrb_clear().
+ */
+static void
+do_pcibr_rrb_flush(pcibr_soft_t pcibr_soft, int rrbn)
+{
+ uint64_t rrbv;
+ int shft = (RRB_SIZE * (rrbn >> 1));
+ uint64_t ebit = RRB_ENABLE_BIT << shft;
+
+ rrbv = pcireg_rrb_get(pcibr_soft, (rrbn & 1));
+ if (rrbv & ebit) {
+ pcireg_rrb_set(pcibr_soft, (rrbn & 1), (rrbv & ~ebit));
+ }
+
+ do_pcibr_rrb_clear(pcibr_soft, rrbn);
+
+ if (rrbv & ebit) {
+ pcireg_rrb_set(pcibr_soft, (rrbn & 1), rrbv);
+ }
+}
+
+/*
+ * free all the rrbs (both the normal and virtual channels) for the
+ * specified slot.
+ */
+void
+do_pcibr_rrb_free_all(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot)
+{
+ int vchan;
+ int vchan_total = NUMBER_VCHANNELS;
+
+ /* pretend we own all 8 rrbs and just ignore the return value */
+ for (vchan = 0; vchan < vchan_total; vchan++) {
+ do_pcibr_rrb_free(pcibr_soft, slot, vchan, 8);
+ pcibr_soft->bs_rrb_valid[slot][vchan] = 0;
+ }
+}
+
+
+/*
+ * Initialize a slot with a given number of RRBs. (this routine
+ * will also give back RRBs if the slot has more than we want).
+ */
+void
+pcibr_rrb_alloc_init(pcibr_soft_t pcibr_soft,
+ int slot,
+ int vchan,
+ int init_rrbs)
+{
+ int had = pcibr_soft->bs_rrb_valid[slot][vchan];
+ int have = had;
+ int added = 0;
+
+ for (added = 0; have < init_rrbs; ++added, ++have) {
+ if (pcibr_soft->bs_rrb_res[slot] > 0)
+ pcibr_soft->bs_rrb_res[slot]--;
+ else if (pcibr_soft->bs_rrb_avail[slot & 1] > 0)
+ pcibr_soft->bs_rrb_avail[slot & 1]--;
+ else
+ break;
+ if (do_pcibr_rrb_alloc(pcibr_soft, slot, vchan, 1) < 0)
+ break;
+
+ pcibr_soft->bs_rrb_valid[slot][vchan]++;
+ }
+
+ /* Free any extra RRBs that the slot may have allocated to it */
+ while (have > init_rrbs) {
+ pcibr_soft->bs_rrb_avail[slot & 1]++;
+ pcibr_soft->bs_rrb_valid[slot][vchan]--;
+ do_pcibr_rrb_free(pcibr_soft, slot, vchan, 1);
+ added--;
+ have--;
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_soft->bs_vhdl,
+ "pcibr_rrb_alloc_init: had %d, added/removed %d, "
+ "(of requested %d) RRBs "
+ "to slot %d, vchan %d\n", had, added, init_rrbs,
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), vchan));
+
+ pcibr_rrb_debug("pcibr_rrb_alloc_init", pcibr_soft);
+}
+
+
+/*
+ * Allocate more RRBs to a given slot (if the RRBs are available).
+ */
+void
+pcibr_rrb_alloc_more(pcibr_soft_t pcibr_soft,
+ int slot,
+ int vchan,
+ int more_rrbs)
+{
+ int added;
+
+ for (added = 0; added < more_rrbs; ++added) {
+ if (pcibr_soft->bs_rrb_res[slot] > 0)
+ pcibr_soft->bs_rrb_res[slot]--;
+ else if (pcibr_soft->bs_rrb_avail[slot & 1] > 0)
+ pcibr_soft->bs_rrb_avail[slot & 1]--;
+ else
+ break;
+ if (do_pcibr_rrb_alloc(pcibr_soft, slot, vchan, 1) < 0)
+ break;
+
+ pcibr_soft->bs_rrb_valid[slot][vchan]++;
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_soft->bs_vhdl,
+ "pcibr_rrb_alloc_more: added %d (of %d requested) RRBs "
+ "to slot %d, vchan %d\n", added, more_rrbs,
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), vchan));
+
+ pcibr_rrb_debug("pcibr_rrb_alloc_more", pcibr_soft);
+}
+
+
+/*
+ * Flush all the RRBs assigned to the specified connection point.
+ */
+void
+pcibr_rrb_flush(vertex_hdl_t pconn_vhdl)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t)pciio_info_mfast_get(pciio_info);
+ pciio_slot_t slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+
+ uint64_t tmp;
+ uint16_t enable_bit, pdev_bits, rrb_bits, rrb_mask;
+ int rrb_index;
+ unsigned long s;
+
+ enable_bit = RRB_ENABLE_BIT;
+ pdev_bits = SLOT_2_PDEV(slot);
+ rrb_bits = enable_bit | pdev_bits;
+ rrb_mask = enable_bit | ((NUM_PDEV_BITS << 1) - 1);
+
+ tmp = pcireg_rrb_get(pcibr_soft, SLOT_2_RRB_REG(slot));
+
+ s = pcibr_lock(pcibr_soft);
+ for (rrb_index = 0; rrb_index < 8; rrb_index++) {
+ int evn_odd = SLOT_2_RRB_REG(slot);
+ if ((tmp & rrb_mask) == rrb_bits)
+ do_pcibr_rrb_flush(pcibr_soft, (2 * rrb_index) + evn_odd);
+ tmp = (tmp >> RRB_SIZE);
+ }
+ pcibr_unlock(pcibr_soft, s);
+}
+
+
+/*
+ * Device driver interface to flush the write buffers for a specified
+ * device hanging off the bridge.
+ */
+int
+pcibr_wrb_flush(vertex_hdl_t pconn_vhdl)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+
+ pcireg_wrb_flush_get(pcibr_soft, pciio_slot);
+
+ return 0;
+}
+
+/*
+ * Device driver interface to request RRBs for a specified device
+ * hanging off a Bridge. The driver requests the total number of
+ * RRBs it would like for the normal channel (vchan0) and for the
+ * "virtual channel" (vchan1). The actual number allocated to each
+ * channel is returned.
+ *
+ * If we cannot allocate at least one RRB to a channel that needs
+ * at least one, return -1 (failure). Otherwise, satisfy the request
+ * as best we can and return 0.
+ */
+int
+pcibr_rrb_alloc(vertex_hdl_t pconn_vhdl,
+ int *count_vchan0,
+ int *count_vchan1)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+ pciio_slot_t pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info);
+ pcibr_soft_t pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info);
+ int desired_vchan0;
+ int desired_vchan1;
+ int orig_vchan0;
+ int orig_vchan1;
+ int delta_vchan0;
+ int delta_vchan1;
+ int final_vchan0;
+ int final_vchan1;
+ int avail_rrbs;
+ int res_rrbs;
+ int vchan_total;
+ int vchan;
+ unsigned long s;
+ int error;
+
+ /*
+ * TBD: temper request with admin info about RRB allocation,
+ * and according to demand from other devices on this Bridge.
+ *
+ * One way of doing this would be to allocate two RRBs
+ * for each device on the bus, before any drivers start
+ * asking for extras. This has the weakness that one
+ * driver might not give back an "extra" RRB until after
+ * another driver has already failed to get one that
+ * it wanted.
+ */
+
+ s = pcibr_lock(pcibr_soft);
+
+ vchan_total = NUMBER_VCHANNELS;
+
+ /* Save the boot-time RRB configuration for this slot */
+ if (pcibr_soft->bs_rrb_valid_dflt[pciio_slot][VCHAN0] < 0) {
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ pcibr_soft->bs_rrb_valid_dflt[pciio_slot][vchan] =
+ pcibr_soft->bs_rrb_valid[pciio_slot][vchan];
+ pcibr_soft->bs_rrb_res_dflt[pciio_slot] =
+ pcibr_soft->bs_rrb_res[pciio_slot];
+
+ }
+
+ /* How many RRBs do we own? */
+ orig_vchan0 = pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN0];
+ orig_vchan1 = pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN1];
+
+ /* How many RRBs do we want? */
+ desired_vchan0 = count_vchan0 ? *count_vchan0 : orig_vchan0;
+ desired_vchan1 = count_vchan1 ? *count_vchan1 : orig_vchan1;
+
+ /* How many RRBs are free? */
+ avail_rrbs = pcibr_soft->bs_rrb_avail[pciio_slot & 1]
+ + pcibr_soft->bs_rrb_res[pciio_slot];
+
+ /* Figure desired deltas */
+ delta_vchan0 = desired_vchan0 - orig_vchan0;
+ delta_vchan1 = desired_vchan1 - orig_vchan1;
+
+ /* Trim back deltas to something
+ * that we can actually meet, by
+ * decreasing the ending allocation
+ * for whichever channel wants
+ * more RRBs. If both want the same
+ * number, cut the second channel.
+ * NOTE: do not change the allocation for
+ * a channel that was passed as NULL.
+ */
+ while ((delta_vchan0 + delta_vchan1) > avail_rrbs) {
+ if (count_vchan0 &&
+ (!count_vchan1 ||
+ ((orig_vchan0 + delta_vchan0) >
+ (orig_vchan1 + delta_vchan1))))
+ delta_vchan0--;
+ else
+ delta_vchan1--;
+ }
+
+ /* Figure final RRB allocations
+ */
+ final_vchan0 = orig_vchan0 + delta_vchan0;
+ final_vchan1 = orig_vchan1 + delta_vchan1;
+
+ /* If either channel wants RRBs but our actions
+ * would leave it with none, declare an error,
+ * but DO NOT change any RRB allocations.
+ */
+ if ((desired_vchan0 && !final_vchan0) ||
+ (desired_vchan1 && !final_vchan1)) {
+
+ error = -1;
+
+ } else {
+
+ /* Commit the allocations: free, then alloc.
+ */
+ if (delta_vchan0 < 0)
+ do_pcibr_rrb_free(pcibr_soft, pciio_slot, VCHAN0, -delta_vchan0);
+ if (delta_vchan1 < 0)
+ do_pcibr_rrb_free(pcibr_soft, pciio_slot, VCHAN1, -delta_vchan1);
+
+ if (delta_vchan0 > 0)
+ do_pcibr_rrb_alloc(pcibr_soft, pciio_slot, VCHAN0, delta_vchan0);
+ if (delta_vchan1 > 0)
+ do_pcibr_rrb_alloc(pcibr_soft, pciio_slot, VCHAN1, delta_vchan1);
+
+ /* Return final values to caller.
+ */
+ if (count_vchan0)
+ *count_vchan0 = final_vchan0;
+ if (count_vchan1)
+ *count_vchan1 = final_vchan1;
+
+ /* prevent automatic changes to this slot's RRBs
+ */
+ pcibr_soft->bs_rrb_fixed |= 1 << pciio_slot;
+
+ /* Track the actual allocations, release
+ * any further reservations, and update the
+ * number of available RRBs.
+ */
+
+ pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN0] = final_vchan0;
+ pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN1] = final_vchan1;
+ pcibr_soft->bs_rrb_avail[pciio_slot & 1] =
+ pcibr_soft->bs_rrb_avail[pciio_slot & 1]
+ + pcibr_soft->bs_rrb_res[pciio_slot]
+ - delta_vchan0
+ - delta_vchan1;
+ pcibr_soft->bs_rrb_res[pciio_slot] = 0;
+
+ /*
+ * Reserve enough RRBs so this slot's RRB configuration can be
+ * reset to its boot-time default following a hot-plug shut-down
+ */
+ res_rrbs = (pcibr_soft->bs_rrb_res_dflt[pciio_slot] -
+ pcibr_soft->bs_rrb_res[pciio_slot]);
+ for (vchan = 0; vchan < vchan_total; vchan++) {
+ res_rrbs += (pcibr_soft->bs_rrb_valid_dflt[pciio_slot][vchan] -
+ pcibr_soft->bs_rrb_valid[pciio_slot][vchan]);
+ }
+
+ if (res_rrbs > 0) {
+ pcibr_soft->bs_rrb_res[pciio_slot] = res_rrbs;
+ pcibr_soft->bs_rrb_avail[pciio_slot & 1] =
+ pcibr_soft->bs_rrb_avail[pciio_slot & 1]
+ - res_rrbs;
+ }
+
+ pcibr_rrb_debug("pcibr_rrb_alloc", pcibr_soft);
+
+ error = 0;
+ }
+
+ pcibr_unlock(pcibr_soft, s);
+
+ return error;
+}
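+
+/*
+ * Illustrative (hypothetical) driver usage of pcibr_rrb_alloc(); the
+ * variable names below are examples only:
+ *
+ *	int want_vchan0 = 4;	/- RRBs desired on the normal channel
+ *	int want_vchan1 = 0;	/- none needed on the virtual channel
+ *
+ *	if (pcibr_rrb_alloc(pconn_vhdl, &want_vchan0, &want_vchan1) < 0) {
+ *		/- a channel that needed RRBs would have been left with none
+ *	} else {
+ *		/- want_vchan0/want_vchan1 now hold the counts actually
+ *		/- granted, which may be less than requested
+ *	}
+ */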
+
+/*
+ * Device driver interface to check the current state
+ * of the RRB allocations.
+ *
+ * pconn_vhdl is your PCI connection point (specifies which
+ * PCI bus and which slot).
+ *
+ * count_vchan0 points to where to return the number of RRBs
+ * assigned to the primary DMA channel, used by all DMA
+ * that does not explicitly ask for the alternate virtual
+ * channel.
+ *
+ * count_vchan1 points to where to return the number of RRBs
+ * assigned to the secondary DMA channel, used when
+ * PCIBR_VCHAN1 and PCIIO_DMA_A64 are specified.
+ *
+ * count_reserved points to where to return the number of RRBs
+ * that have been automatically reserved for your device at
+ * startup, but which have not been assigned to a
+ * channel. RRBs must be assigned to a channel to be used;
+ * this can be done either with an explicit pcibr_rrb_alloc
+ * call, or automatically by the infrastructure when a DMA
+ * translation is constructed. Any call to pcibr_rrb_alloc
+ * will release any unassigned reserved RRBs back to the
+ * free pool.
+ *
+ * count_pool points to where to return the number of RRBs
+ * that are currently unassigned and unreserved. This
+ * number can (and will) change as other drivers make calls
+ * to pcibr_rrb_alloc, or automatically allocate RRBs for
+ * DMA beyond their initial reservation.
+ *
+ * NULL may be passed for any of the return value pointers
+ * the caller is not interested in.
+ *
+ * The return value is "0" if all went well, or "-1" if
+ * there is a problem. Additionally, if the wrong vertex
+ * is passed in, one of the subsidiary support functions
+ * could panic with a "bad pciio fingerprint."
+ */
+
+int
+pcibr_rrb_check(vertex_hdl_t pconn_vhdl,
+ int *count_vchan0,
+ int *count_vchan1,
+ int *count_reserved,
+ int *count_pool)
+{
+ pciio_info_t pciio_info;
+ pciio_slot_t pciio_slot;
+ pcibr_soft_t pcibr_soft;
+ unsigned long s;
+ int error = -1;
+
+ if ((pciio_info = pciio_info_get(pconn_vhdl)) &&
+ (pcibr_soft = (pcibr_soft_t) pciio_info_mfast_get(pciio_info)) &&
+ ((pciio_slot = PCIBR_INFO_SLOT_GET_INT(pciio_info)) < PCIBR_NUM_SLOTS(pcibr_soft))) {
+
+ s = pcibr_lock(pcibr_soft);
+
+ if (count_vchan0)
+ *count_vchan0 =
+ pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN0];
+
+ if (count_vchan1)
+ *count_vchan1 =
+ pcibr_soft->bs_rrb_valid[pciio_slot][VCHAN1];
+
+ if (count_reserved)
+ *count_reserved =
+ pcibr_soft->bs_rrb_res[pciio_slot];
+
+ if (count_pool)
+ *count_pool =
+ pcibr_soft->bs_rrb_avail[pciio_slot & 1];
+
+ error = 0;
+
+ pcibr_unlock(pcibr_soft, s);
+ }
+ return error;
+}
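+
+/*
+ * Hypothetical example of querying the RRB state with the interface
+ * above (names are illustrative only):
+ *
+ *	int v0, v1, res, pool;
+ *	if (pcibr_rrb_check(pconn_vhdl, &v0, &v1, &res, &pool) == 0)
+ *		printk("vchan0=%d vchan1=%d reserved=%d pool=%d\n",
+ *		       v0, v1, res, pool);
+ */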
+
+/*
+ * pcibr_slot_initial_rrb_alloc
+ * Allocate a default number of RRBs for this slot, spread across
+ * its virtual channels. The amount is dictated by the RRB
+ * allocation strategy routine defined per platform.
+ */
+
+int
+pcibr_slot_initial_rrb_alloc(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ int vchan_total;
+ int vchan;
+ int chan[4];
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ /* How many RRBs are on this slot? */
+ vchan_total = NUMBER_VCHANNELS;
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ chan[vchan] = do_pcibr_rrb_count_valid(pcibr_soft, slot, vchan);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_vhdl,
+ "pcibr_slot_initial_rrb_alloc: slot %d started with %d+%d+%d+%d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot),
+ chan[VCHAN0], chan[VCHAN1], chan[VCHAN2], chan[VCHAN3]));
+
+ /* Do we really need any?
+ */
+ pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos;
+ pcibr_info = pcibr_infoh[0];
+ /*
+ * PIC BRINGUP WAR (PV# 856866, 859504, 861476, 861478):
+ * Don't free RRBs we allocated to device[2|3]--vchan3 as
+ * a WAR to those PVs mentioned above. In pcibr_attach2
+ * we allocate RRB0,8,1,9 to device[2|3]--vchan3.
+ */
+ if (PCIBR_WAR_ENABLED(PV856866, pcibr_soft) &&
+ (slot == 2 || slot == 3) &&
+ (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) &&
+ !pcibr_soft->bs_slot[slot].has_host) {
+
+ for (vchan = 0; vchan < 2; vchan++) {
+ do_pcibr_rrb_free(pcibr_soft, slot, vchan, 8);
+ pcibr_soft->bs_rrb_valid[slot][vchan] = 0;
+ }
+
+ pcibr_soft->bs_rrb_valid[slot][3] = chan[3];
+
+ return -ENODEV;
+ }
+
+ if ((pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE) &&
+ !pcibr_soft->bs_slot[slot].has_host) {
+ do_pcibr_rrb_free_all(pcibr_soft, slot);
+
+ /* Reserve RRBs for this empty slot for hot-plug */
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ pcibr_soft->bs_rrb_valid[slot][vchan] = 0;
+
+ return -ENODEV;
+ }
+
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ pcibr_soft->bs_rrb_valid[slot][vchan] = chan[vchan];
+
+ return 0;
+}
+
+
+/*
+ * pcibr_initial_rrb
+ * Assign an equal total number of RRBs to all candidate slots,
+ * where the total is the sum of the number of RRBs assigned to
+ * the normal channel, the number of RRBs assigned to the virtual
+ * channels, and the number of RRBs assigned as reserved.
+ *
+ * A candidate slot is any existing (populated or empty) slot.
+ * Empty SN1 slots need RRBs to support hot-plug operations.
+ */
+
+int
+pcibr_initial_rrb(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t first, pciio_slot_t last)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pciio_slot_t slot;
+ int rrb_total;
+ int vchan_total;
+ int vchan;
+ int have[2][3];
+ int res[2];
+ int eo;
+
+ have[0][0] = have[0][1] = have[0][2] = 0;
+ have[1][0] = have[1][1] = have[1][2] = 0;
+ res[0] = res[1] = 0;
+
+ vchan_total = NUMBER_VCHANNELS;
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Initial RRB management; give back RRBs in all non-existent slots */
+ pcibr_slot_initial_rrb_alloc(pcibr_vhdl, slot);
+
+ /* Base calculations only on existing slots */
+ if ((slot >= first) && (slot <= last)) {
+ rrb_total = 0;
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ rrb_total += pcibr_soft->bs_rrb_valid[slot][vchan];
+
+ if (rrb_total < 3)
+ have[slot & 1][rrb_total]++;
+ }
+ }
+
+ /* Initialize even/odd slot available RRB counts */
+ pcibr_soft->bs_rrb_avail[0] = do_pcibr_rrb_count_avail(pcibr_soft, 0);
+ pcibr_soft->bs_rrb_avail[1] = do_pcibr_rrb_count_avail(pcibr_soft, 1);
+
+ /*
+ * Calculate reserved RRBs for slots based on current RRB usage
+ */
+ for (eo = 0; eo < 2; eo++) {
+ if ((3 * have[eo][0] + 2 * have[eo][1] + have[eo][2]) <= pcibr_soft->bs_rrb_avail[eo])
+ res[eo] = 3;
+ else if ((2 * have[eo][0] + have[eo][1]) <= pcibr_soft->bs_rrb_avail[eo])
+ res[eo] = 2;
+ else if (have[eo][0] <= pcibr_soft->bs_rrb_avail[eo])
+ res[eo] = 1;
+ else
+ res[eo] = 0;
+
+ }
+
+ /* Assign reserved RRBs to existing slots */
+ for (slot = first; slot <= last; ++slot) {
+ int r;
+
+ if (pcibr_soft->bs_unused_slot & (1 << slot))
+ continue;
+
+ rrb_total = 0;
+ for (vchan = 0; vchan < vchan_total; vchan++)
+ rrb_total += pcibr_soft->bs_rrb_valid[slot][vchan];
+
+ r = res[slot & 1] - (rrb_total);
+
+ if (r > 0) {
+ pcibr_soft->bs_rrb_res[slot] = r;
+ pcibr_soft->bs_rrb_avail[slot & 1] -= r;
+ }
+ }
+
+ pcibr_rrb_debug("pcibr_initial_rrb", pcibr_soft);
+
+ return 0;
+
+}
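+
+/*
+ * Worked example of the reservation tiers above (numbers are
+ * illustrative): if the even slots comprise two slots with 0 RRBs
+ * and one slot with 1 RRB (have[0][0]=2, have[0][1]=1,
+ * have[0][2]=0), and 8 even RRBs are available, then
+ * 3*2 + 2*1 + 0 = 8 <= 8, so res[0] = 3 and each of those slots
+ * is reserved up to a total of 3 RRBs.
+ */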
+
+/*
+ * Dump the pcibr_soft_t RRB state variable
+ */
+void
+pcibr_rrb_debug(char *calling_func, pcibr_soft_t pcibr_soft)
+{
+ pciio_slot_t slot;
+
+ if (pcibr_debug_mask & PCIBR_DEBUG_RRB) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_soft->bs_vhdl,
+ "%s: rrbs available, even=%d, odd=%d\n", calling_func,
+ pcibr_soft->bs_rrb_avail[0], pcibr_soft->bs_rrb_avail[1]));
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_soft->bs_vhdl,
+ "\tslot\tvchan0\tvchan1\tvchan2\tvchan3\treserved\n"));
+
+ for (slot=0; slot < PCIBR_NUM_SLOTS(pcibr_soft); slot++) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RRB, pcibr_soft->bs_vhdl,
+ "\t %d\t %d\t %d\t %d\t %d\t %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot),
+ 0xFFF & pcibr_soft->bs_rrb_valid[slot][VCHAN0],
+ 0xFFF & pcibr_soft->bs_rrb_valid[slot][VCHAN1],
+ 0xFFF & pcibr_soft->bs_rrb_valid[slot][VCHAN2],
+ 0xFFF & pcibr_soft->bs_rrb_valid[slot][VCHAN3],
+ pcibr_soft->bs_rrb_res[slot]));
+ }
+ }
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/uaccess.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/sn_sal.h>
+
+extern pcibr_info_t pcibr_info_get(vertex_hdl_t);
+extern int pcibr_widget_to_bus(vertex_hdl_t pcibr_vhdl);
+extern pcibr_info_t pcibr_device_info_new(pcibr_soft_t, pciio_slot_t, pciio_function_t, pciio_vendor_id_t, pciio_device_id_t);
+extern int pcibr_slot_initial_rrb_alloc(vertex_hdl_t,pciio_slot_t);
+extern int pcibr_pcix_rbars_calc(pcibr_soft_t);
+
+extern char *pci_space[];
+
+int pcibr_slot_info_init(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot);
+int pcibr_slot_info_free(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot);
+int pcibr_slot_addr_space_init(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot);
+int pcibr_slot_pcix_rbar_init(pcibr_soft_t pcibr_soft, pciio_slot_t slot);
+int pcibr_slot_device_init(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot);
+int pcibr_slot_guest_info_init(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot);
+int pcibr_slot_call_device_attach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot, int drv_flags);
+int pcibr_slot_call_device_detach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot, int drv_flags);
+int pcibr_slot_detach(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot,
+ int drv_flags, char *l1_msg, int *sub_errorp);
+static int pcibr_probe_slot(pcibr_soft_t, cfg_p, unsigned int *);
+static int pcibr_probe_work(pcibr_soft_t pcibr_soft, void *addr, int len, void *valp);
+void pcibr_device_info_free(vertex_hdl_t, pciio_slot_t);
+iopaddr_t pcibr_bus_addr_alloc(pcibr_soft_t, pciio_win_info_t,
+ pciio_space_t, int, int, int);
+void pcibr_bus_addr_free(pciio_win_info_t);
+cfg_p pcibr_find_capability(cfg_p, unsigned);
+extern uint64_t do_pcibr_config_get(cfg_p, unsigned, unsigned);
+void do_pcibr_config_set(cfg_p, unsigned, unsigned, uint64_t);
+int pcibr_slot_pwr(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot, int up, char *err_msg);
+
+
+/*
+ * PCI-X Max Outstanding Split Transactions translation array and Max Memory
+ * Read Byte Count translation array, as defined in the PCI-X Specification.
+ * Section 7.2.3 & 7.2.4 of PCI-X Specification - rev 1.0
+ */
+#define MAX_SPLIT_TABLE 8
+#define MAX_READCNT_TABLE 4
+int max_splittrans_to_numbuf[MAX_SPLIT_TABLE] = {1, 2, 3, 4, 8, 12, 16, 32};
+int max_readcount_to_bufsize[MAX_READCNT_TABLE] = {512, 1024, 2048, 4096 };
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+
+/*
+ * PCI slot manipulation errors from the system controller, and their
+ * associated descriptions
+ */
+#define SYSCTL_REQERR_BASE (-106000)
+#define SYSCTL_PCI_ERROR_BASE (SYSCTL_REQERR_BASE - 100)
+#define SYSCTL_PCIX_ERROR_BASE (SYSCTL_REQERR_BASE - 3000)
+
+struct sysctl_pci_error_s {
+
+ int error;
+ char *msg;
+
+} sysctl_pci_errors[] = {
+
+#define SYSCTL_PCI_UNINITIALIZED (SYSCTL_PCI_ERROR_BASE - 0)
+ { SYSCTL_PCI_UNINITIALIZED, "module not initialized" },
+
+#define SYSCTL_PCI_UNSUPPORTED_BUS (SYSCTL_PCI_ERROR_BASE - 1)
+ { SYSCTL_PCI_UNSUPPORTED_BUS, "unsupported bus" },
+
+#define SYSCTL_PCI_UNSUPPORTED_SLOT (SYSCTL_PCI_ERROR_BASE - 2)
+ { SYSCTL_PCI_UNSUPPORTED_SLOT, "unsupported slot" },
+
+#define SYSCTL_PCI_POWER_NOT_OKAY (SYSCTL_PCI_ERROR_BASE - 3)
+ { SYSCTL_PCI_POWER_NOT_OKAY, "slot power not okay" },
+
+#define SYSCTL_PCI_CARD_NOT_PRESENT (SYSCTL_PCI_ERROR_BASE - 4)
+ { SYSCTL_PCI_CARD_NOT_PRESENT, "card not present" },
+
+#define SYSCTL_PCI_POWER_LIMIT (SYSCTL_PCI_ERROR_BASE - 5)
+ { SYSCTL_PCI_POWER_LIMIT, "power limit reached - some cards not powered up" },
+
+#define SYSCTL_PCI_33MHZ_ON_66MHZ (SYSCTL_PCI_ERROR_BASE - 6)
+ { SYSCTL_PCI_33MHZ_ON_66MHZ, "cannot add a 33 MHz card to an active 66 MHz bus" },
+
+#define SYSCTL_PCI_INVALID_ORDER (SYSCTL_PCI_ERROR_BASE - 7)
+ { SYSCTL_PCI_INVALID_ORDER, "invalid reset order" },
+
+#define SYSCTL_PCI_DOWN_33MHZ (SYSCTL_PCI_ERROR_BASE - 8)
+ { SYSCTL_PCI_DOWN_33MHZ, "cannot power down a 33 MHz card on an active bus" },
+
+#define SYSCTL_PCI_RESET_33MHZ (SYSCTL_PCI_ERROR_BASE - 9)
+ { SYSCTL_PCI_RESET_33MHZ, "cannot reset a 33 MHz card on an active bus" },
+
+#define SYSCTL_PCI_SLOT_NOT_UP (SYSCTL_PCI_ERROR_BASE - 10)
+ { SYSCTL_PCI_SLOT_NOT_UP, "cannot reset a slot that is not powered up" },
+
+#define SYSCTL_PCIX_UNINITIALIZED (SYSCTL_PCIX_ERROR_BASE - 0)
+ { SYSCTL_PCIX_UNINITIALIZED, "module not initialized" },
+
+#define SYSCTL_PCIX_UNSUPPORTED_BUS (SYSCTL_PCIX_ERROR_BASE - 1)
+ { SYSCTL_PCIX_UNSUPPORTED_BUS, "unsupported bus" },
+
+#define SYSCTL_PCIX_UNSUPPORTED_SLOT (SYSCTL_PCIX_ERROR_BASE - 2)
+ { SYSCTL_PCIX_UNSUPPORTED_SLOT, "unsupported slot" },
+
+#define SYSCTL_PCIX_POWER_NOT_OKAY (SYSCTL_PCIX_ERROR_BASE - 3)
+ { SYSCTL_PCIX_POWER_NOT_OKAY, "slot power not okay" },
+
+#define SYSCTL_PCIX_CARD_NOT_PRESENT (SYSCTL_PCIX_ERROR_BASE - 4)
+ { SYSCTL_PCIX_CARD_NOT_PRESENT, "card not present" },
+
+#define SYSCTL_PCIX_POWER_LIMIT (SYSCTL_PCIX_ERROR_BASE - 5)
+ { SYSCTL_PCIX_POWER_LIMIT, "power limit reached - some cards not powered up" },
+
+#define SYSCTL_PCIX_33MHZ_ON_66MHZ (SYSCTL_PCIX_ERROR_BASE - 6)
+ { SYSCTL_PCIX_33MHZ_ON_66MHZ, "cannot add a 33 MHz card to an active 66 MHz bus" },
+
+#define SYSCTL_PCIX_PCI_ON_PCIX (SYSCTL_PCIX_ERROR_BASE - 7)
+ { SYSCTL_PCIX_PCI_ON_PCIX, "cannot add a PCI card to an active PCIX bus" },
+
+#define SYSCTL_PCIX_ANYTHING_ON_133MHZ (SYSCTL_PCIX_ERROR_BASE - 8)
+ { SYSCTL_PCIX_ANYTHING_ON_133MHZ, "cannot add any card to an active 133MHz PCIX bus" },
+
+#define SYSCTL_PCIX_X66MHZ_ON_X100MHZ (SYSCTL_PCIX_ERROR_BASE - 9)
+ { SYSCTL_PCIX_X66MHZ_ON_X100MHZ, "cannot add a PCIX 66MHz card to an active 100MHz PCIX bus" },
+
+#define SYSCTL_PCIX_INVALID_ORDER (SYSCTL_PCIX_ERROR_BASE - 10)
+ { SYSCTL_PCIX_INVALID_ORDER, "invalid reset order" },
+
+#define SYSCTL_PCIX_DOWN_33MHZ (SYSCTL_PCIX_ERROR_BASE - 11)
+ { SYSCTL_PCIX_DOWN_33MHZ, "cannot power down a 33 MHz card on an active bus" },
+
+#define SYSCTL_PCIX_RESET_33MHZ (SYSCTL_PCIX_ERROR_BASE - 12)
+ { SYSCTL_PCIX_RESET_33MHZ, "cannot reset a 33 MHz card on an active bus" },
+
+#define SYSCTL_PCIX_SLOT_NOT_UP (SYSCTL_PCIX_ERROR_BASE - 13)
+ { SYSCTL_PCIX_SLOT_NOT_UP, "cannot reset a slot that is not powered up" },
+
+#define SYSCTL_PCIX_INVALID_BUS_SETTING (SYSCTL_PCIX_ERROR_BASE - 14)
+ { SYSCTL_PCIX_INVALID_BUS_SETTING, "invalid bus type/speed selection (PCIX<66MHz, PCI>66MHz)" },
+
+#define SYSCTL_PCIX_INVALID_DEPENDENT_SLOT (SYSCTL_PCIX_ERROR_BASE - 15)
+ { SYSCTL_PCIX_INVALID_DEPENDENT_SLOT, "invalid dependent slot in PCI slot configuration" },
+
+#define SYSCTL_PCIX_SHARED_IDSELECT (SYSCTL_PCIX_ERROR_BASE - 16)
+ { SYSCTL_PCIX_SHARED_IDSELECT, "cannot enable two slots sharing the same IDSELECT" },
+
+#define SYSCTL_PCIX_SLOT_DISABLED (SYSCTL_PCIX_ERROR_BASE - 17)
+ { SYSCTL_PCIX_SLOT_DISABLED, "slot is disabled" },
+
+}; /* end sysctl_pci_errors[] */
+
+/*
+ * look up an error message for PCI operations that fail
+ */
+static void
+sysctl_pci_error_lookup(int error, char *err_msg)
+{
+ int i;
+ struct sysctl_pci_error_s *e = sysctl_pci_errors;
+
+ for (i = 0;
+ i < (sizeof(sysctl_pci_errors) / sizeof(*e));
+ i++, e++ )
+ {
+ if (e->error == error)
+ {
+ strcpy(err_msg, e->msg);
+ return;
+ }
+ }
+
+ sprintf(err_msg, "unrecognized PCI error type");
+}
+
+/*
+ * pcibr_slot_attach
+ * This is a placeholder routine to keep track of all the
+ * slot-specific initialization that needs to be done.
+ * This is usually called when we want to initialize a new
+ * PCI card on the bus.
+ */
+int
+pcibr_slot_attach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot,
+ int drv_flags,
+ char *l1_msg,
+ int *sub_errorp)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ int error;
+
+ if (!(pcibr_soft->bs_slot[slot].slot_status & PCI_SLOT_POWER_ON)) {
+ uint64_t speed;
+ uint64_t mode;
+
+ /* Power-up the slot */
+ error = pcibr_slot_pwr(pcibr_vhdl, slot, PCI_REQ_SLOT_POWER_ON, l1_msg);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_L1_ERR);
+ } else {
+ pcibr_soft->bs_slot[slot].slot_status &= ~PCI_SLOT_POWER_MASK;
+ pcibr_soft->bs_slot[slot].slot_status |= PCI_SLOT_POWER_ON;
+ }
+
+ /* The speed/mode of the bus may have changed due to the hotplug */
+ speed = pcireg_speed_get(pcibr_soft);
+ mode = pcireg_mode_get(pcibr_soft);
+ pcibr_soft->bs_bridge_mode = ((speed << 1) | mode);
+
+ /*
+ * Allow cards like the Alteon Gigabit Ethernet Adapter to complete
+ * on-card initialization following the slot reset
+ */
+ set_current_state (TASK_INTERRUPTIBLE);
+ schedule_timeout (HZ);
+
+ /* Find out what is out there */
+ error = pcibr_slot_info_init(pcibr_vhdl, slot);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_INFO_INIT_ERR);
+ }
+
+ /* Set up the address space for this slot in the PCI land */
+
+ error = pcibr_slot_addr_space_init(pcibr_vhdl, slot);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_ADDR_INIT_ERR);
+ }
+
+ /* Allocate the PCI-X Read Buffer Attribute Registers (RBARs)*/
+ if (IS_PCIX(pcibr_soft)) {
+ int tmp_slot;
+
+ /* Recalculate the RBARs for all the devices on the bus. Only
+ * return an error if we fail for the given 'slot'.
+ */
+ pcibr_soft->bs_pcix_rbar_inuse = 0;
+ pcibr_soft->bs_pcix_rbar_avail = NUM_RBAR;
+ pcibr_soft->bs_pcix_rbar_percent_allowed =
+ pcibr_pcix_rbars_calc(pcibr_soft);
+ for (tmp_slot = pcibr_soft->bs_min_slot;
+ tmp_slot < PCIBR_NUM_SLOTS(pcibr_soft); ++tmp_slot) {
+ if (tmp_slot == slot)
+ continue; /* skip this 'slot', we do it below */
+ (void)pcibr_slot_pcix_rbar_init(pcibr_soft, tmp_slot);
+ }
+
+ error = pcibr_slot_pcix_rbar_init(pcibr_soft, slot);
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_RBAR_ALLOC_ERR);
+ }
+ }
+
+ /* Setup the device register */
+ error = pcibr_slot_device_init(pcibr_vhdl, slot);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_DEV_INIT_ERR);
+ }
+
+ /* Setup host/guest relations */
+ error = pcibr_slot_guest_info_init(pcibr_vhdl, slot);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_GUEST_INIT_ERR);
+ }
+
+ /* Initial RRB management */
+ error = pcibr_slot_initial_rrb_alloc(pcibr_vhdl, slot);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ return(PCI_SLOT_RRB_ALLOC_ERR);
+ }
+
+ }
+
+ /* Call the device attach */
+ error = pcibr_slot_call_device_attach(pcibr_vhdl, slot, drv_flags);
+
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ if (error == EUNATCH)
+ return(PCI_NO_DRIVER);
+ else
+ return(PCI_SLOT_DRV_ATTACH_ERR);
+ }
+
+ return(0);
+}
+
+/*
+ * pcibr_slot_enable
+ * Enable the PCI slot for a hot-plug insert.
+ */
+int
+pcibr_slot_enable(vertex_hdl_t pcibr_vhdl, struct pcibr_slot_enable_req_s *req_p)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pciio_slot_t slot = req_p->req_device;
+ int error = 0;
+
+ /* Make sure that we are dealing with a bridge device vertex */
+ if (!pcibr_soft) {
+ return(PCI_NOT_A_BRIDGE);
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HOTPLUG, pcibr_vhdl,
+ "pcibr_slot_enable: pcibr_soft=0x%lx, slot=%d, req_p=0x%lx\n",
+ pcibr_soft, slot, req_p));
+
+ /* Check for the valid slot */
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return(PCI_NOT_A_SLOT);
+
+ if (pcibr_soft->bs_slot[slot].slot_status & PCI_SLOT_ENABLE_CMPLT) {
+ error = PCI_SLOT_ALREADY_UP;
+ goto enable_unlock;
+ }
+
+ error = pcibr_slot_attach(pcibr_vhdl, slot, 0,
+ req_p->req_resp.resp_l1_msg,
+ &req_p->req_resp.resp_sub_errno);
+
+ req_p->req_resp.resp_l1_msg[PCI_L1_QSIZE] = '\0';
+
+ enable_unlock:
+
+ return(error);
+}
+
+/*
+ * pcibr_slot_disable
+ * Disable the PCI slot for a hot-plug removal.
+ */
+int
+pcibr_slot_disable(vertex_hdl_t pcibr_vhdl, struct pcibr_slot_disable_req_s *req_p)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pciio_slot_t slot = req_p->req_device;
+ int error = 0;
+ pciio_slot_t tmp_slot;
+
+ /* Make sure that we are dealing with a bridge device vertex */
+ if (!pcibr_soft) {
+ return(PCI_NOT_A_BRIDGE);
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HOTPLUG, pcibr_vhdl,
+ "pcibr_slot_disable: pcibr_soft=0x%lx, slot=%d, req_p=0x%lx\n",
+ pcibr_soft, slot, req_p));
+
+ /* Check for valid slot */
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return(PCI_NOT_A_SLOT);
+
+ if ((pcibr_soft->bs_slot[slot].slot_status & PCI_SLOT_DISABLE_CMPLT) ||
+ ((pcibr_soft->bs_slot[slot].slot_status & PCI_SLOT_STATUS_MASK) == 0)) {
+ error = PCI_SLOT_ALREADY_DOWN;
+ /*
+ * RJR - Should we invoke an L1 slot power-down command just in case
+ * a previous shut-down failed to power-down the slot?
+ */
+ goto disable_unlock;
+ }
+
+ /* Do not allow the last 33 MHz card to be removed */
+ if (IS_33MHZ(pcibr_soft)) {
+ for (tmp_slot = pcibr_soft->bs_first_slot;
+ tmp_slot <= pcibr_soft->bs_last_slot; tmp_slot++)
+ if (tmp_slot != slot)
+ if (pcibr_soft->bs_slot[tmp_slot].slot_status & PCI_SLOT_POWER_ON) {
+ error++;
+ break;
+ }
+ if (!error) {
+ error = PCI_EMPTY_33MHZ;
+ goto disable_unlock;
+ }
+ }
+
+ if (req_p->req_action == PCI_REQ_SLOT_ELIGIBLE)
+ return(0);
+
+ error = pcibr_slot_detach(pcibr_vhdl, slot, 1,
+ req_p->req_resp.resp_l1_msg,
+ &req_p->req_resp.resp_sub_errno);
+
+ req_p->req_resp.resp_l1_msg[PCI_L1_QSIZE] = '\0';
+
+ disable_unlock:
+
+ return(error);
+}
+
+/*
+ * pcibr_slot_pwr
+ * Power-up or power-down a PCI slot. This routine makes calls to
+ * the L1 system controller driver, which requires an "external"
+ * slot number.
+ */
+int
+pcibr_slot_pwr(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot,
+ int up,
+ char *err_msg)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ nasid_t nasid;
+ u64 connection_type;
+ int rv;
+
+ nasid = NASID_GET(pcibr_soft->bs_base);
+ connection_type = SAL_SYSCTL_IO_XTALK;
+
+ rv = (int) ia64_sn_sysctl_iobrick_pci_op
+ (nasid,
+ connection_type,
+ (u64) pcibr_widget_to_bus(pcibr_vhdl),
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot),
+ (up ? SAL_SYSCTL_PCI_POWER_UP : SAL_SYSCTL_PCI_POWER_DOWN));
+
+ if (!rv) {
+ /* everything's okay; no error message */
+ *err_msg = '\0';
+ }
+ else {
+ /* there was a problem; look up an appropriate error message */
+ sysctl_pci_error_lookup(rv, err_msg);
+ }
+ return rv;
+}
+
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+
+/*
+ * pcibr_slot_info_init
+ * Probe for this slot and see if it is populated.
+ * If it is populated, initialize the generic PCI infrastructural
+ * information associated with this particular PCI device.
+ */
+int
+pcibr_slot_info_init(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ cfg_p cfgw;
+ unsigned idword;
+ unsigned pfail;
+ unsigned idwords[8];
+ pciio_vendor_id_t vendor;
+ pciio_device_id_t device;
+ unsigned htype;
+ unsigned lt_time;
+ int nbars;
+ cfg_p wptr;
+ cfg_p pcix_cap;
+ int win;
+ pciio_space_t space;
+ int nfunc;
+ pciio_function_t rfunc;
+ int func;
+ vertex_hdl_t conn_vhdl;
+ pcibr_soft_slot_t slotp;
+ uint64_t device_reg;
+
+ /* Get the basic software information required to proceed */
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ /* If we have a host slot (e.g. the IOC3 has 2 PCI slots and the
+ * initialization is done by the host slot), then we are done.
+ */
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ return 0;
+ }
+
+ /* Try to read the device-id/vendor-id from the config space */
+ cfgw = pcibr_slot_config_addr(pcibr_soft, slot, 0);
+
+ if (pcibr_probe_slot(pcibr_soft, cfgw, &idword))
+ return -ENODEV;
+
+ slotp = &pcibr_soft->bs_slot[slot];
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ slotp->slot_status |= SLOT_POWER_UP;
+#endif
+
+ vendor = 0xFFFF & idword;
+ device = 0xFFFF & (idword >> 16);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_PROBE, pcibr_vhdl,
+ "pcibr_slot_info_init: slot=%d, vendor=0x%x, device=0x%x\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), vendor, device));
+
+ /* If the vendor id is not valid then the slot is not populated
+ * and we are done.
+ */
+ if (vendor == 0xFFFF)
+ return -ENODEV;
+
+ htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1);
+ nfunc = 1;
+ rfunc = PCIIO_FUNC_NONE;
+ pfail = 0;
+
+ /* NOTE: if a card claims to be multifunction
+ * but only responds to config space 0, treat
+ * it as a single-function card.
+ */
+
+ if (htype & 0x80) { /* MULTIFUNCTION */
+ for (func = 1; func < 8; ++func) {
+ cfgw = pcibr_func_config_addr(pcibr_soft, 0, slot, func, 0);
+ if (pcibr_probe_slot(pcibr_soft, cfgw, &idwords[func])) {
+ pfail |= 1 << func;
+ continue;
+ }
+ vendor = 0xFFFF & idwords[func];
+ if (vendor == 0xFFFF) {
+ pfail |= 1 << func;
+ continue;
+ }
+ nfunc = func + 1;
+ rfunc = 0;
+ }
+ cfgw = pcibr_slot_config_addr(pcibr_soft, slot, 0);
+ }
+ pcibr_infoh = kmalloc(nfunc*sizeof (*(pcibr_infoh)), GFP_KERNEL);
+ if ( !pcibr_infoh ) {
+ return -ENOMEM;
+ }
+ memset(pcibr_infoh, 0, nfunc*sizeof (*(pcibr_infoh)));
+
+ pcibr_soft->bs_slot[slot].bss_ninfo = nfunc;
+ pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh;
+
+ for (func = 0; func < nfunc; ++func) {
+ unsigned cmd_reg;
+
+ if (func) {
+ if (pfail & (1 << func))
+ continue;
+
+ idword = idwords[func];
+ cfgw = pcibr_func_config_addr(pcibr_soft, 0, slot, func, 0);
+
+ device = 0xFFFF & (idword >> 16);
+ htype = do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1);
+ rfunc = func;
+ }
+ htype &= 0x7f;
+ if (htype != 0x00) {
+ printk(KERN_WARNING
+ "%s pcibr: pci slot %d func %d has strange header type 0x%x\n",
+ pcibr_soft->bs_name, slot, func, htype);
+ nbars = 2;
+ } else {
+ nbars = PCI_CFG_BASE_ADDRS;
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_CONFIG, pcibr_vhdl,
+ "pcibr_slot_info_init: slot=%d, func=%d, cfgw=0x%lx\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft,slot), func, cfgw));
+
+ /*
+ * If the latency timer has already been set, by prom or by the
+ * card itself, use that value. Otherwise look at the device's
+ * 'min_gnt' and attempt to calculate a latency time.
+ *
+ * NOTE: For now if the device is on the 'real time' arbitration
+ * ring we don't set the latency timer.
+ *
+ * WAR: SGI's IOC3 and RAD devices target abort if you write a
+ * single byte into their config space, so don't set the Latency
+ * Timer for these devices.
+ */
+
+ lt_time = do_pcibr_config_get(cfgw, PCI_CFG_LATENCY_TIMER, 1);
+ device_reg = pcireg_device_get(pcibr_soft, slot);
+ if ((lt_time == 0) && !(device_reg & BRIDGE_DEV_RT)) {
+ unsigned min_gnt;
+ unsigned min_gnt_mult;
+
+ /* 'min_gnt' indicates how long a burst period a device
+ * needs, in increments of 250ns. But the latency timer is in
+ * PCI clock cycles, so a conversion is needed.
+ */
+ min_gnt = do_pcibr_config_get(cfgw, PCI_MIN_GNT, 1);
+
+ if (IS_133MHZ(pcibr_soft))
+ min_gnt_mult = 32; /* 250ns @ 133MHz in clocks */
+ else if (IS_100MHZ(pcibr_soft))
+ min_gnt_mult = 24; /* 250ns @ 100MHz in clocks */
+ else if (IS_66MHZ(pcibr_soft))
+ min_gnt_mult = 16; /* 250ns @ 66MHz, in clocks */
+ else
+ min_gnt_mult = 8; /* 250ns @ 33MHz, in clocks */
+
+ if ((min_gnt != 0) && ((min_gnt * min_gnt_mult) < 256))
+ lt_time = (min_gnt * min_gnt_mult);
+ else
+ lt_time = 4 * min_gnt_mult; /* 1 microsecond */
+
+ do_pcibr_config_set(cfgw, PCI_CFG_LATENCY_TIMER, 1, lt_time);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_CONFIG, pcibr_vhdl,
+ "pcibr_slot_info_init: set Latency Timer for slot=%d, "
+ "func=%d, to 0x%x\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func, lt_time));
+ }
+
+
+ /* In our architecture the setting of the cacheline size isn't
+ * beneficial for cards in PCI mode, but in PCI-X mode devices
+ * can optionally use the cacheline size value for internal
+ * device optimizations (see 7.1.5 of the PCI-X v1.0 spec).
+ * NOTE: cacheline size is in doubleword increments.
+ */
+ if (IS_PCIX(pcibr_soft)) {
+ if (!do_pcibr_config_get(cfgw, PCI_CFG_CACHE_LINE, 1)) {
+ do_pcibr_config_set(cfgw, PCI_CFG_CACHE_LINE, 1, 0x20);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_CONFIG, pcibr_vhdl,
+ "pcibr_slot_info_init: set CacheLine for slot=%d, "
+ "func=%d, to 0x20\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func));
+ }
+ }
+
+ /* Get the PCI-X capability if running in PCI-X mode. If the func
+ * doesn't have a PCI-X capability, allocate a PCIIO_VENDOR_ID_NONE
+ * pcibr_info struct so the device driver for that function is not
+ * called.
+ */
+ if (IS_PCIX(pcibr_soft)) {
+ if (!(pcix_cap = pcibr_find_capability(cfgw, PCI_CAP_PCIX))) {
+ printk(KERN_WARNING
+ "%s: Bus running in PCI-X mode, but card in slot %d, "
+ "func %d is not PCI-X capable\n",
+ pcibr_soft->bs_name, slot, func);
+ pcibr_device_info_new(pcibr_soft, slot, PCIIO_FUNC_NONE,
+ PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE);
+ continue;
+ }
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_CONFIG, pcibr_vhdl,
+ "pcibr_slot_info_init: PCI-X capability at 0x%lx for "
+ "slot=%d, func=%d\n",
+ pcix_cap, PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func));
+ } else {
+ pcix_cap = NULL;
+ }
+
+ pcibr_info = pcibr_device_info_new
+ (pcibr_soft, slot, rfunc, vendor, device);
+
+ /* Keep a running total of the number of PCI-X functions on the bus
+ * and the number of max outstanding split transactions that they
+ * have requested. NOTE: "pcix_cap != NULL" implies IS_PCIX()
+ */
+ pcibr_info->f_pcix_cap = (cap_pcix_type0_t *)pcix_cap;
+ if (pcibr_info->f_pcix_cap) {
+ int max_out; /* max outstanding splittrans from status reg */
+
+ pcibr_soft->bs_pcix_num_funcs++;
+ max_out = pcibr_info->f_pcix_cap->pcix_type0_status.max_out_split;
+ pcibr_soft->bs_pcix_split_tot += max_splittrans_to_numbuf[max_out];
+ }
+
+ conn_vhdl = pciio_device_info_register(pcibr_vhdl, &pcibr_info->f_c);
+ if (func == 0)
+ slotp->slot_conn = conn_vhdl;
+
+ cmd_reg = do_pcibr_config_get(cfgw, PCI_CFG_COMMAND, 4);
+
+ wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4;
+
+ for (win = 0; win < nbars; ++win) {
+ iopaddr_t base, mask, code;
+ size_t size;
+
+ /*
+ * GET THE BASE & SIZE OF THIS WINDOW:
+ *
+ * The low two or four bits of the BASE register
+ * determines which address space we are in; the
+ * rest is a base address. BASE registers
+ * determine windows that are power-of-two sized
+ * and naturally aligned, so we can get the size
+ * of a window by writing all-ones to the
+ * register, reading it back, and seeing which
+ * bits are used for decode; the least
+ * significant nonzero bit is also the size of
+ * the window.
+ *
+ * WARNING: someone may already have allocated
+ * some PCI space to this window, and in fact
+ * PIO may be in process at this very moment
+ * from another processor (or even from this
+ * one, if we get interrupted)! So, if the BASE
+ * already has a nonzero address, be generous
+ * and use the LSBit of that address as the
+ * size; this could overstate the window size.
+ * Usually, when one card is set up, all are set
+ * up; so, since we don't complain about
+ * overlapping windows, we are ok.
+ *
+ * UNFORTUNATELY, some cards do not clear their
+ * BASE registers on reset. I have two heuristics
+ * that can detect such cards: first, if the
+ * decode enable is turned off for the space
+ * that the window uses, we can disregard the
+ * initial value; second, if the address is
+ * outside the range that we use, we can disregard
+ * it as well.
+ *
+ * This is looking very PCI generic. Except for
+ * knowing how many slots and where their config
+ * spaces are, this window loop and the next one
+ * could probably be shared with other PCI host
+ * adapters. It would be interesting to see if
+ * this could be pushed up into pciio, when we
+ * start supporting more PCI providers.
+ */
+ base = do_pcibr_config_get(wptr, (win * 4), 4);
+
+ if (base & PCI_BA_IO_SPACE) {
+ /* BASE is in I/O space. */
+ space = PCIIO_SPACE_IO;
+ mask = -4;
+ code = base & 3;
+ base = base & mask;
+ if (base == 0) {
+ ; /* not assigned */
+ } else if (!(cmd_reg & PCI_CMD_IO_SPACE)) {
+ base = 0; /* decode not enabled */
+ }
+ } else {
+ /* BASE is in MEM space. */
+ space = PCIIO_SPACE_MEM;
+ mask = -16;
+ code = base & PCI_BA_MEM_LOCATION; /* extract BAR type */
+ base = base & mask;
+ if (base == 0) {
+ ; /* not assigned */
+ } else if (!(cmd_reg & PCI_CMD_MEM_SPACE)) {
+ base = 0; /* decode not enabled */
+ } else if (base & 0xC0000000) {
+ base = 0; /* outside permissible range */
+ } else if ((code == PCI_BA_MEM_64BIT) &&
+ (do_pcibr_config_get(wptr, ((win + 1)*4), 4) != 0)) {
+ base = 0; /* outside permissible range */
+ }
+ }
+
+ if (base != 0) { /* estimate size */
+ pciio_space_t tmp_space = space;
+ iopaddr_t tmp_base;
+
+ size = base & -base;
+
+ /*
+ * Reserve this space in the relevant address map. Don't
+ * care about the return code from pcibr_bus_addr_alloc().
+ */
+
+ if (space == PCIIO_SPACE_MEM && code != PCI_BA_MEM_1MEG) {
+ tmp_space = PCIIO_SPACE_MEM32;
+ }
+
+ tmp_base = pcibr_bus_addr_alloc(pcibr_soft,
+ &pcibr_info->f_window[win],
+ tmp_space,
+ base, size, 0);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_info_init: slot=%d, func=%d win %d "
+ "reserving space %s [0x%lx..0x%lx], tmp_base 0x%lx\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func, win,
+ pci_space[tmp_space], (uint64_t)base,
+ (uint64_t)(base + size - 1), (uint64_t)tmp_base));
+ } else { /* calculate size */
+ do_pcibr_config_set(wptr, (win * 4), 4, ~0); /* write 1's */
+ size = do_pcibr_config_get(wptr, (win * 4), 4); /* read back */
+ size &= mask; /* keep addr */
+ size &= -size; /* keep lsbit */
+ if (size == 0)
+ continue;
+ }
+
+ pcibr_info->f_window[win].w_space = space;
+ pcibr_info->f_window[win].w_base = base;
+ pcibr_info->f_window[win].w_size = size;
+
+ if (code == PCI_BA_MEM_64BIT) {
+ win++; /* skip upper half */
+ do_pcibr_config_set(wptr, (win * 4), 4, 0); /* must be zero */
+ }
+ } /* next win */
+ } /* next func */
+
+ return 0;
+}
+
+/*
+ * pcibr_find_capability
+ * Walk the list of capabilities (if it exists) looking for
+ * the requested capability. Return a cfg_p pointer to the
+ * capability if found, else return NULL
+ */
+cfg_p
+pcibr_find_capability(cfg_p cfgw,
+ unsigned capability)
+{
+ unsigned cap_nxt;
+ unsigned cap_id;
+ int defend_against_circular_linkedlist = 0;
+
+ /* Check to see if there is a capabilities pointer in the cfg header */
+ if (!(do_pcibr_config_get(cfgw, PCI_CFG_STATUS, 2) & PCI_STAT_CAP_LIST)) {
+ return NULL;
+ }
+
+ /*
+ * Read the capabilities head pointer from the configuration header.
+ * Capabilities are stored as a linked list in the upper 48 dwords of
+ * config space and are dword aligned. (Note: the spec states that the
+ * two least significant bits of the next pointer must be ignored, so
+ * we mask with 0xfc.)
+ */
+ cap_nxt = (do_pcibr_config_get(cfgw, PCI_CAPABILITIES_PTR, 1) & 0xfc);
+
+ while (cap_nxt && (defend_against_circular_linkedlist <= 48)) {
+ cap_id = do_pcibr_config_get(cfgw, cap_nxt, 1);
+ if (cap_id == capability) {
+ return (cfg_p)((char *)cfgw + cap_nxt);
+ }
+ cap_nxt = (do_pcibr_config_get(cfgw, cap_nxt+1, 1) & 0xfc);
+ defend_against_circular_linkedlist++;
+ }
+
+ return NULL;
+}
+
+/*
+ * pcibr_slot_info_free
+ * Remove all the PCI infrastructural information associated
+ * with a particular PCI device.
+ */
+int
+pcibr_slot_info_free(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ int nfunc;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ nfunc = pcibr_soft->bs_slot[slot].bss_ninfo;
+
+ pcibr_device_info_free(pcibr_vhdl, slot);
+
+ pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos;
+ kfree(pcibr_infoh);
+ pcibr_soft->bs_slot[slot].bss_ninfo = 0;
+
+ return 0;
+}
+
+/*
+ * pcibr_slot_pcix_rbar_init
+ * Allocate RBARs to the PCI-X functions on a given device
+ */
+int
+pcibr_slot_pcix_rbar_init(pcibr_soft_t pcibr_soft,
+ pciio_slot_t slot)
+{
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ int nfunc;
+ int func;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ if ((nfunc = pcibr_soft->bs_slot[slot].bss_ninfo) < 1)
+ return -EINVAL;
+
+ if (!(pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos))
+ return -EINVAL;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RBAR, pcibr_soft->bs_vhdl,
+ "pcibr_slot_pcix_rbar_init for slot %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot)));
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RBAR, pcibr_soft->bs_vhdl,
+ "\tslot/func\trequested\tgiven\tinuse\tavail\n"));
+
+ for (func = 0; func < nfunc; ++func) {
+ cap_pcix_type0_t *pcix_cap_p;
+ cap_pcix_stat_reg_t *pcix_statreg_p;
+ cap_pcix_cmd_reg_t *pcix_cmdreg_p;
+ int num_rbar;
+
+ if (!(pcibr_info = pcibr_infoh[func]))
+ continue;
+
+ if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE)
+ continue;
+
+ if (!(pcix_cap_p = pcibr_info->f_pcix_cap))
+ continue;
+
+ pcix_statreg_p = &pcix_cap_p->pcix_type0_status;
+ pcix_cmdreg_p = &pcix_cap_p->pcix_type0_command;
+
+ /* If there are enough RBARs to satisfy the number of "max outstanding
+ * transactions" each function requested (bs_pcix_rbar_percent_allowed
+ * is 100%), then give each function what it requested; otherwise give
+ * the functions a percentage of what they requested.
+ */
+ if (pcibr_soft->bs_pcix_rbar_percent_allowed >= 100) {
+ pcix_cmdreg_p->max_split = pcix_statreg_p->max_out_split;
+ num_rbar = max_splittrans_to_numbuf[pcix_cmdreg_p->max_split];
+ pcibr_soft->bs_pcix_rbar_inuse += num_rbar;
+ pcibr_soft->bs_pcix_rbar_avail -= num_rbar;
+ pcix_cmdreg_p->max_mem_read_cnt = pcix_statreg_p->max_mem_read_cnt;
+ } else {
+ int index; /* index into max_splittrans_to_numbuf table */
+ int max_out; /* max outstanding transactions given to func */
+
+ /* Calculate the percentage of RBARs this function can have.
+ * NOTE: Every function gets at least 1 RBAR (thus the "+1").
+ * bs_pcix_rbar_percent_allowed is the percentage of what was
+ * requested, less the 1 RBAR that every function automatically
+ * gets.
+ */
+ max_out = ((max_splittrans_to_numbuf[pcix_statreg_p->max_out_split]
+ * pcibr_soft->bs_pcix_rbar_percent_allowed) / 100) + 1;
+
+ /* Round down the newly calculated max_out to a valid number in
+ * max_splittrans_to_numbuf[].
+ */
+ for (index = 0; index < MAX_SPLIT_TABLE-1; index++)
+ if (max_splittrans_to_numbuf[index + 1] > max_out)
+ break;
+
+ pcix_cmdreg_p->max_split = index;
+ num_rbar = max_splittrans_to_numbuf[pcix_cmdreg_p->max_split];
+ pcibr_soft->bs_pcix_rbar_inuse += num_rbar;
+ pcibr_soft->bs_pcix_rbar_avail -= num_rbar;
+ pcix_cmdreg_p->max_mem_read_cnt = pcix_statreg_p->max_mem_read_cnt;
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_RBAR, pcibr_soft->bs_vhdl,
+ "\t %d/%d \t %d \t %d \t %d \t %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func,
+ max_splittrans_to_numbuf[pcix_statreg_p->max_out_split],
+ max_splittrans_to_numbuf[pcix_cmdreg_p->max_split],
+ pcibr_soft->bs_pcix_rbar_inuse,
+ pcibr_soft->bs_pcix_rbar_avail));
+ }
+ return 0;
+}
+
+int as_debug = 0;
+/*
+ * pcibr_slot_addr_space_init
+ * Reserve chunks of PCI address space as required by
+ * the base registers in the card.
+ */
+int
+pcibr_slot_addr_space_init(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ iopaddr_t mask;
+ int nbars;
+ int nfunc;
+ int func;
+ int win;
+ int rc = 0;
+ int align = 0;
+ int align_slot;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ /* Allocate address space for windows that have not been
+ * previously assigned.
+ */
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ return 0;
+ }
+
+ nfunc = pcibr_soft->bs_slot[slot].bss_ninfo;
+ if (nfunc < 1)
+ return -EINVAL;
+
+ pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos;
+ if (!pcibr_infoh)
+ return -EINVAL;
+
+ /*
+ * Try to make the DevIO windows not
+ * overlap by pushing the "io" and "hi"
+ * allocation areas up to the next one
+ * or two megabyte bound. This also
+ * keeps them from being zero.
+ *
+ * DO NOT do this with "pci_lo" since
+ * the entire "lo" area is only a
+ * megabyte, total ...
+ */
+ align_slot = (slot < 2) ? 0x200000 : 0x100000;
+
+ for (func = 0; func < nfunc; ++func) {
+ cfg_p cfgw;
+ cfg_p wptr;
+ pciio_space_t space;
+ iopaddr_t base;
+ size_t size;
+ unsigned pci_cfg_cmd_reg;
+ unsigned pci_cfg_cmd_reg_add = 0;
+
+ pcibr_info = pcibr_infoh[func];
+
+ if (!pcibr_info)
+ continue;
+
+ if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE)
+ continue;
+
+ cfgw = pcibr_func_config_addr(pcibr_soft, 0, slot, func, 0);
+ wptr = cfgw + PCI_CFG_BASE_ADDR_0 / 4;
+
+ if ((do_pcibr_config_get(cfgw, PCI_CFG_HEADER_TYPE, 1) & 0x7f) != 0)
+ nbars = 2;
+ else
+ nbars = PCI_CFG_BASE_ADDRS;
+
+ for (win = 0; win < nbars; ++win) {
+ space = pcibr_info->f_window[win].w_space;
+ base = pcibr_info->f_window[win].w_base;
+ size = pcibr_info->f_window[win].w_size;
+
+ if (size < 1)
+ continue;
+
+ if (base >= size) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_addr_space_init: slot=%d, "
+ "func=%d win %d is in space %s [0x%lx..0x%lx], "
+ "allocated by prom\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func, win,
+ pci_space[space], (uint64_t)base,
+ (uint64_t)(base + size - 1)));
+
+ continue; /* already allocated */
+ }
+
+ align = (win) ? size : align_slot;
+
+ if (align < PAGE_SIZE)
+ align = PAGE_SIZE; /* i.e. 0x4000 with 16KB pages */
+
+ switch (space) {
+ case PCIIO_SPACE_IO:
+ base = pcibr_bus_addr_alloc(pcibr_soft,
+ &pcibr_info->f_window[win],
+ PCIIO_SPACE_IO,
+ 0, size, align);
+ if (!base)
+ rc = ENOSPC;
+ break;
+
+ case PCIIO_SPACE_MEM:
+ if ((do_pcibr_config_get(wptr, (win * 4), 4) &
+ PCI_BA_MEM_LOCATION) == PCI_BA_MEM_1MEG) {
+
+ /* allocate from 20-bit PCI space */
+ base = pcibr_bus_addr_alloc(pcibr_soft,
+ &pcibr_info->f_window[win],
+ PCIIO_SPACE_MEM,
+ 0, size, align);
+ if (!base)
+ rc = ENOSPC;
+ } else {
+ /* allocate from 32-bit or 64-bit PCI space */
+ base = pcibr_bus_addr_alloc(pcibr_soft,
+ &pcibr_info->f_window[win],
+ PCIIO_SPACE_MEM32,
+ 0, size, align);
+ if (!base)
+ rc = ENOSPC;
+ }
+ break;
+
+ default:
+ base = 0;
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_addr_space_init: slot=%d, window %d "
+ "had bad space code %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft,slot), win, space));
+ }
+ pcibr_info->f_window[win].w_base = base;
+ do_pcibr_config_set(wptr, (win * 4), 4, base);
+
+ if (base >= size) {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_addr_space_init: slot=%d, func=%d. win %d "
+ "is in space %s [0x%lx..0x%lx], allocated by pcibr\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func, win,
+ pci_space[space], (uint64_t)base,
+ (uint64_t)(base + size - 1)));
+ } else {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_addr_space_init: slot=%d, func=%d, win %d, "
+ "unable to alloc 0x%lx in space %s\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func, win,
+ (uint64_t)size, pci_space[space]));
+ }
+ } /* next base */
+
+ /*
+ * Allocate space for the EXPANSION ROM
+ */
+ base = size = 0;
+ {
+ wptr = cfgw + PCI_EXPANSION_ROM / 4;
+ do_pcibr_config_set(wptr, 0, 4, 0xFFFFF000);
+ mask = do_pcibr_config_get(wptr, 0, 4);
+ if (mask & 0xFFFFF000) {
+ size = mask & -mask;
+ base = pcibr_bus_addr_alloc(pcibr_soft,
+ &pcibr_info->f_rwindow,
+ PCIIO_SPACE_MEM32,
+ 0, size, align);
+ if (!base)
+ rc = ENOSPC;
+ else {
+ do_pcibr_config_set(wptr, 0, 4, base);
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_BAR, pcibr_vhdl,
+ "pcibr_slot_addr_space_init: slot=%d, func=%d, "
+ "ROM in [0x%X..0x%X], allocated by pcibr\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot),
+ func, base, base + size - 1));
+ }
+ }
+ }
+ pcibr_info->f_rbase = base;
+ pcibr_info->f_rsize = size;
+
+ /*
+ * if necessary, update the board's
+ * command register to enable decoding
+ * in the windows we added.
+ *
+ * There are some bits we always want to
+ * be sure are set.
+ */
+ pci_cfg_cmd_reg_add |= PCI_CMD_IO_SPACE;
+
+ /*
+ * The Adaptec 1160 FC Controller WAR #767995:
+ * The part incorrectly ignores the upper 32 bits of a 64-bit
+ * address when decoding references to its registers, so to
+ * keep it from responding to a bus cycle that it shouldn't,
+ * we only use I/O space to get at its registers. Don't
+ * enable memory space accesses on that PCI device.
+ */
+ #define FCADP_VENDID 0x9004 /* Adaptec Vendor ID from fcadp.h */
+ #define FCADP_DEVID 0x1160 /* Adaptec 1160 Device ID from fcadp.h */
+
+ if ((pcibr_info->f_vendor != FCADP_VENDID) ||
+ (pcibr_info->f_device != FCADP_DEVID))
+ pci_cfg_cmd_reg_add |= PCI_CMD_MEM_SPACE;
+
+ pci_cfg_cmd_reg_add |= PCI_CMD_BUS_MASTER;
+
+ pci_cfg_cmd_reg = do_pcibr_config_get(cfgw, PCI_CFG_COMMAND, 4);
+ pci_cfg_cmd_reg &= 0xFFFF;
+ if (pci_cfg_cmd_reg_add & ~pci_cfg_cmd_reg)
+ do_pcibr_config_set(cfgw, PCI_CFG_COMMAND, 4,
+ pci_cfg_cmd_reg | pci_cfg_cmd_reg_add);
+ } /* next func */
+ return rc;
+}
+
+/*
+ * pcibr_slot_device_init
+ * Setup the device register in the bridge for this PCI slot.
+ */
+
+int
+pcibr_slot_device_init(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ uint64_t devreg;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ /*
+ * Adjustments to Device(x) and init of bss_device shadow
+ */
+ devreg = pcireg_device_get(pcibr_soft, slot);
+ devreg &= ~BRIDGE_DEV_PAGE_CHK_DIS;
+
+ /*
+ * Enable virtual channels by default (exception: see PIC WAR below)
+ */
+ devreg |= BRIDGE_DEV_VIRTUAL_EN;
+
+ /*
+ * PIC WAR. PV# 855271: Disable virtual channels in the PIC since
+ * it can cause problems with 32-bit devices. We'll set the bit in
+ * pcibr_try_set_device() iff we're 64-bit and requesting virtual
+ * channels.
+ */
+ if (PCIBR_WAR_ENABLED(PV855271, pcibr_soft)) {
+ devreg &= ~BRIDGE_DEV_VIRTUAL_EN;
+ }
+ devreg |= BRIDGE_DEV_COH;
+
+ pcibr_soft->bs_slot[slot].bss_device = devreg;
+ pcireg_device_set(pcibr_soft, slot, devreg);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DEVREG, pcibr_vhdl,
+ "pcibr_slot_device_init: Device(%d): 0x%x\n",
+ slot, devreg));
+ return 0;
+}
+
+/*
+ * pcibr_slot_guest_info_init
+ * Setup the host/guest relations for a PCI slot.
+ */
+int
+pcibr_slot_guest_info_init(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ pcibr_soft_slot_t slotp;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ slotp = &pcibr_soft->bs_slot[slot];
+
+ /* Create info and vertices for guest slots;
+ * for compatibility macros, create info
+ * even for unpopulated slots (but do not
+ * build vertices for them).
+ */
+ if (pcibr_soft->bs_slot[slot].bss_ninfo < 1) {
+ pcibr_infoh = kmalloc(sizeof (*(pcibr_infoh)), GFP_KERNEL);
+ if ( !pcibr_infoh ) {
+ return -ENOMEM;
+ }
+ memset(pcibr_infoh, 0, sizeof (*(pcibr_infoh)));
+
+ pcibr_soft->bs_slot[slot].bss_ninfo = 1;
+ pcibr_soft->bs_slot[slot].bss_infos = pcibr_infoh;
+
+ pcibr_info = pcibr_device_info_new
+ (pcibr_soft, slot, PCIIO_FUNC_NONE,
+ PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE);
+
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ slotp->slot_conn = pciio_device_info_register
+ (pcibr_vhdl, &pcibr_info->f_c);
+ }
+ }
+
+ /* generate host/guest relations
+ */
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ int host = pcibr_soft->bs_slot[slot].host_slot;
+ pcibr_soft_slot_t host_slotp = &pcibr_soft->bs_slot[host];
+
+ hwgraph_edge_add(slotp->slot_conn,
+ host_slotp->slot_conn,
+ EDGE_LBL_HOST);
+
+ /* XXX- only gives us one guest edge per
+ * host. If/when we have a host with more than
+ * one guest, we will need to figure out how
+ * the host finds all its guests, and sorts
+ * out which one is which.
+ */
+ hwgraph_edge_add(host_slotp->slot_conn,
+ slotp->slot_conn,
+ EDGE_LBL_GUEST);
+ }
+
+ return 0;
+}
+
+
+/*
+ * pcibr_slot_call_device_attach
+ * This calls the associated driver attach routine for the PCI
+ * card in this slot.
+ */
+int
+pcibr_slot_call_device_attach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot,
+ int drv_flags)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ int func;
+ vertex_hdl_t xconn_vhdl, conn_vhdl;
+ int nfunc;
+ int error_func;
+ int error_slot = 0;
+ int error = ENODEV;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ if (pcibr_soft->bs_slot[slot].has_host) {
+ return -EPERM;
+ }
+
+ xconn_vhdl = pcibr_soft->bs_conn;
+
+ nfunc = pcibr_soft->bs_slot[slot].bss_ninfo;
+ pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos;
+
+ for (func = 0; func < nfunc; ++func) {
+
+ pcibr_info = pcibr_infoh[func];
+
+ if (!pcibr_info)
+ continue;
+
+ if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE)
+ continue;
+
+ conn_vhdl = pcibr_info->f_vertex;
+
+ error_func = pciio_device_attach(conn_vhdl, drv_flags);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DEV_ATTACH, pcibr_vhdl,
+ "pcibr_slot_call_device_attach: slot=%d, func=%d "
+ "drv_flags=0x%x, pciio_device_attach returned %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func,
+ drv_flags, error_func));
+ pcibr_info->f_att_det_error = error_func;
+
+ if (error_func)
+ error_slot = error_func;
+
+ error = error_slot;
+
+ } /* next func */
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ if (error) {
+ if ((error != ENODEV) && (error != EUNATCH) && (error != EPERM)) {
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_INCMPLT;
+ }
+ } else {
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_STARTUP_CMPLT;
+ }
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+ return error;
+}
+
+/*
+ * pcibr_slot_call_device_detach
+ * This calls the associated driver detach routine for the PCI
+ * card in this slot.
+ */
+int
+pcibr_slot_call_device_detach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot,
+ int drv_flags)
+{
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_h pcibr_infoh;
+ pcibr_info_t pcibr_info;
+ int func;
+ vertex_hdl_t conn_vhdl = GRAPH_VERTEX_NONE;
+ int nfunc;
+ int error_func;
+ int error_slot = 0;
+ int error = ENODEV;
+
+ pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+
+ if (!pcibr_soft)
+ return -EINVAL;
+
+ if (!PCIBR_VALID_SLOT(pcibr_soft, slot))
+ return -EINVAL;
+
+ if (pcibr_soft->bs_slot[slot].has_host)
+ return -EPERM;
+
+ nfunc = pcibr_soft->bs_slot[slot].bss_ninfo;
+ pcibr_infoh = pcibr_soft->bs_slot[slot].bss_infos;
+
+ for (func = 0; func < nfunc; ++func) {
+
+ pcibr_info = pcibr_infoh[func];
+
+ if (!pcibr_info)
+ continue;
+
+ if (pcibr_info->f_vendor == PCIIO_VENDOR_ID_NONE)
+ continue;
+
+ if (IS_PCIX(pcibr_soft) && pcibr_info->f_pcix_cap) {
+ int max_out;
+
+ pcibr_soft->bs_pcix_num_funcs--;
+ max_out = pcibr_info->f_pcix_cap->pcix_type0_status.max_out_split;
+ pcibr_soft->bs_pcix_split_tot -= max_splittrans_to_numbuf[max_out];
+ }
+
+ conn_vhdl = pcibr_info->f_vertex;
+
+ error_func = pciio_device_detach(conn_vhdl, drv_flags);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_DEV_DETACH, pcibr_vhdl,
+ "pcibr_slot_call_device_detach: slot=%d, func=%d "
+ "drv_flags=0x%x, pciio_device_detach returned %d\n",
+ PCIBR_DEVICE_TO_SLOT(pcibr_soft, slot), func,
+ drv_flags, error_func));
+
+ pcibr_info->f_att_det_error = error_func;
+
+ if (error_func)
+ error_slot = error_func;
+
+ error = error_slot;
+
+ } /* next func */
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ if (error) {
+ if ((error != ENODEV) && (error != EUNATCH) && (error != EPERM)) {
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_INCMPLT;
+ }
+ } else {
+ if (conn_vhdl != GRAPH_VERTEX_NONE)
+ pcibr_device_unregister(conn_vhdl);
+ pcibr_soft->bs_slot[slot].slot_status &= ~SLOT_STATUS_MASK;
+ pcibr_soft->bs_slot[slot].slot_status |= SLOT_SHUTDOWN_CMPLT;
+ }
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+ return error;
+}
+
+
+
+/*
+ * pcibr_slot_detach
+ * This is a placeholder routine to keep track of all the
+ * slot-specific freeing that needs to be done.
+ */
+int
+pcibr_slot_detach(vertex_hdl_t pcibr_vhdl,
+ pciio_slot_t slot,
+ int drv_flags,
+ char *l1_msg,
+ int *sub_errorp)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ int error;
+
+ /* Call the device detach function */
+ error = (pcibr_slot_call_device_detach(pcibr_vhdl, slot, drv_flags));
+ if (error) {
+ if (sub_errorp)
+ *sub_errorp = error;
+ if (l1_msg)
+ *l1_msg = '\0'; /* no L1 message to report for a detach failure */
+ return PCI_SLOT_DRV_DETACH_ERR;
+ }
+
+ /* Recalculate the RBARs for all the devices on the bus since we've
+ * just freed some up and some of the devices could use them.
+ */
+ if (IS_PCIX(pcibr_soft)) {
+ int tmp_slot;
+
+ pcibr_soft->bs_pcix_rbar_inuse = 0;
+ pcibr_soft->bs_pcix_rbar_avail = NUM_RBAR;
+ pcibr_soft->bs_pcix_rbar_percent_allowed =
+ pcibr_pcix_rbars_calc(pcibr_soft);
+
+ for (tmp_slot = pcibr_soft->bs_min_slot;
+ tmp_slot < PCIBR_NUM_SLOTS(pcibr_soft); ++tmp_slot)
+ (void)pcibr_slot_pcix_rbar_init(pcibr_soft, tmp_slot);
+ }
+
+ return 0;
+
+}
+
+/*
+ * pcibr_probe_slot: read a config space word
+ * while trapping any errors; return zero if
+ * all went OK, or nonzero if there was an error.
+ * The value read, if any, is passed back
+ * through the valp parameter.
+ */
+static int
+pcibr_probe_slot(pcibr_soft_t pcibr_soft,
+ cfg_p cfg,
+ unsigned *valp)
+{
+ return pcibr_probe_work(pcibr_soft, (void *)cfg, 4, (void *)valp);
+}
+
+/*
+ * Probe an offset within a piomap with errors disabled.
+ * len must be 1, 2, 4, or 8. The probed address must be a multiple of
+ * len.
+ *
+ * Returns: 0 if the offset was probed and put valid data in valp
+ * -1 if there was a usage error such as improper alignment
+ * or out of bounds offset/len combination. In this
+ * case, the map was not probed
+ * 1 if the offset was probed but resulted in an error
+ * such as device not responding, bus error, etc.
+ */
+
+int
+pcibr_piomap_probe(pcibr_piomap_t piomap, off_t offset, int len, void *valp)
+{
+ if (offset + len > piomap->bp_mapsz) {
+ return -1;
+ }
+
+ return pcibr_probe_work(piomap->bp_soft,
+ piomap->bp_kvaddr + offset, len, valp);
+}
+
+static uint64_t
+pcibr_disable_mst_timeout(pcibr_soft_t pcibr_soft)
+{
+ uint64_t old_enable;
+ uint64_t new_enable;
+ uint64_t intr_bits;
+
+ intr_bits = PIC_ISR_PCI_MST_TIMEOUT
+ | PIC_ISR_PCIX_MTOUT | PIC_ISR_PCIX_SPLIT_EMSG;
+ old_enable = pcireg_intr_enable_get(pcibr_soft);
+ pcireg_intr_enable_bit_clr(pcibr_soft, intr_bits);
+ new_enable = pcireg_intr_enable_get(pcibr_soft);
+
+ if (old_enable == new_enable) {
+ return 0; /* was already disabled */
+ } else {
+ return 1;
+ }
+}
+
+static int
+pcibr_enable_mst_timeout(pcibr_soft_t pcibr_soft)
+{
+ uint64_t old_enable;
+ uint64_t new_enable;
+ uint64_t intr_bits;
+
+ intr_bits = PIC_ISR_PCI_MST_TIMEOUT
+ | PIC_ISR_PCIX_MTOUT | PIC_ISR_PCIX_SPLIT_EMSG;
+ old_enable = pcireg_intr_enable_get(pcibr_soft);
+ pcireg_intr_enable_bit_set(pcibr_soft, intr_bits);
+ new_enable = pcireg_intr_enable_get(pcibr_soft);
+
+ if (old_enable == new_enable) {
+ return 0; /* was already enabled */
+ } else {
+ return 1;
+ }
+}
+
+/*
+ * pcibr_probe_work: probe an address of a given length
+ * while trapping any errors; return zero if
+ * all went OK, or nonzero if there was an error.
+ * The value read, if any, is passed back
+ * through the valp parameter.
+ */
+static int
+pcibr_probe_work(pcibr_soft_t pcibr_soft,
+ void *addr,
+ int len,
+ void *valp)
+{
+ int rv, changed;
+
+ /*
+ * Sanity checks ...
+ */
+
+ if (len != 1 && len != 2 && len != 4 && len != 8) {
+ return -1; /* invalid len */
+ }
+
+ if ((uint64_t)addr & (len-1)) {
+ return -1; /* invalid alignment */
+ }
+
+ changed = pcibr_disable_mst_timeout(pcibr_soft);
+
+ rv = snia_badaddr_val((void *)addr, len, valp);
+
+ /* Clear the int_view register in case it was set */
+ pcireg_intr_reset_set(pcibr_soft, BRIDGE_IRR_MULTI_CLR);
+
+ if (changed) {
+ pcibr_enable_mst_timeout(pcibr_soft);
+ }
+ return (rv ? 1 : 0); /* return 1 for snia_badaddr_val error, 0 if ok */
+}
+
+void
+pcibr_device_info_free(vertex_hdl_t pcibr_vhdl, pciio_slot_t slot)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ pcibr_info_t pcibr_info;
+ pciio_function_t func;
+ pcibr_soft_slot_t slotp = &pcibr_soft->bs_slot[slot];
+ cfg_p cfgw;
+ int nfunc = slotp->bss_ninfo;
+ int bar;
+ int devio_index;
+ unsigned long s;
+ unsigned cmd_reg;
+
+
+ for (func = 0; func < nfunc; func++) {
+ pcibr_info = slotp->bss_infos[func];
+
+ if (!pcibr_info)
+ continue;
+
+ s = pcibr_lock(pcibr_soft);
+
+ /* Disable memory and I/O BARs */
+ cfgw = pcibr_func_config_addr(pcibr_soft, 0, slot, func, 0);
+ cmd_reg = do_pcibr_config_get(cfgw, PCI_CFG_COMMAND, 4);
+ cmd_reg &= ~(PCI_CMD_MEM_SPACE | PCI_CMD_IO_SPACE);
+ do_pcibr_config_set(cfgw, PCI_CFG_COMMAND, 4, cmd_reg);
+
+ for (bar = 0; bar < PCI_CFG_BASE_ADDRS; bar++) {
+ if (pcibr_info->f_window[bar].w_space == PCIIO_SPACE_NONE)
+ continue;
+
+ /* Free the PCI bus space */
+ pcibr_bus_addr_free(&pcibr_info->f_window[bar]);
+
+ /* Get index of the DevIO(x) register used to access this BAR */
+ devio_index = pcibr_info->f_window[bar].w_devio_index;
+
+
+ /* On last use, clear the DevIO(x) used to access this BAR */
+ if (! --pcibr_soft->bs_slot[devio_index].bss_devio.bssd_ref_cnt) {
+ pcibr_soft->bs_slot[devio_index].bss_devio.bssd_space =
+ PCIIO_SPACE_NONE;
+ pcibr_soft->bs_slot[devio_index].bss_devio.bssd_base =
+ PCIBR_D32_BASE_UNSET;
+ pcibr_soft->bs_slot[devio_index].bss_device = 0;
+ }
+ }
+
+ /* Free the Expansion ROM PCI bus space */
+ if(pcibr_info->f_rbase && pcibr_info->f_rsize) {
+ pcibr_bus_addr_free(&pcibr_info->f_rwindow);
+ }
+
+ pcibr_unlock(pcibr_soft, s);
+
+ slotp->bss_infos[func] = 0;
+ pciio_device_info_unregister(pcibr_vhdl, &pcibr_info->f_c);
+ pciio_device_info_free(&pcibr_info->f_c);
+
+ kfree(pcibr_info);
+ }
+
+ /* Reset the mapping usage counters */
+ slotp->bss_pmu_uctr = 0;
+ slotp->bss_d32_uctr = 0;
+ slotp->bss_d64_uctr = 0;
+
+ /* Clear the Direct translation info */
+ slotp->bss_d64_base = PCIBR_D64_BASE_UNSET;
+ slotp->bss_d64_flags = 0;
+ slotp->bss_d32_base = PCIBR_D32_BASE_UNSET;
+ slotp->bss_d32_flags = 0;
+}
+
+
+iopaddr_t
+pcibr_bus_addr_alloc(pcibr_soft_t pcibr_soft, pciio_win_info_t win_info_p,
+ pciio_space_t space, int start, int size, int align)
+{
+ pciio_win_map_t win_map_p;
+ struct resource *root_resource = NULL;
+ iopaddr_t iopaddr = 0;
+
+ switch (space) {
+
+ case PCIIO_SPACE_IO:
+ win_map_p = &pcibr_soft->bs_io_win_map;
+ root_resource = &pcibr_soft->bs_io_win_root_resource;
+ break;
+
+ case PCIIO_SPACE_MEM:
+ win_map_p = &pcibr_soft->bs_swin_map;
+ root_resource = &pcibr_soft->bs_swin_root_resource;
+ break;
+
+ case PCIIO_SPACE_MEM32:
+ win_map_p = &pcibr_soft->bs_mem_win_map;
+ root_resource = &pcibr_soft->bs_mem_win_root_resource;
+ break;
+
+ default:
+ return 0;
+
+ }
+ iopaddr = pciio_device_win_alloc(root_resource,
+ win_info_p
+ ? &win_info_p->w_win_alloc
+ : NULL,
+ start, size, align);
+ return iopaddr;
+}
+
+
+void
+pcibr_bus_addr_free(pciio_win_info_t win_info_p)
+{
+ pciio_device_win_free(&win_info_p->w_win_alloc);
+}
+
+/*
+ * Given a vertex_hdl for the pcibr_vhdl, return the brick's bus number
+ * associated with that vertex_hdl. The actual mapping comes from the
+ * io_brick_tab[] array defined in ml/SN/iograph.c.
+ */
+int
+pcibr_widget_to_bus(vertex_hdl_t pcibr_vhdl)
+{
+ pcibr_soft_t pcibr_soft = pcibr_soft_get(pcibr_vhdl);
+ xwidgetnum_t widget = pcibr_soft->bs_xid;
+ int bricktype = pcibr_soft->bs_bricktype;
+ int bus;
+
+ if ((bus = io_brick_map_widget(bricktype, widget)) <= 0) {
+ printk(KERN_WARNING "pcibr_widget_to_bus() bad bricktype %d\n", bricktype);
+ return 0;
+ }
+
+ /* For PIC there are 2 busses per widget and pcibr_soft->bs_busnum
+ * will be 0 or 1. Add in the correct PIC bus offset.
+ */
+ bus += pcibr_soft->bs_busnum;
+ return bus;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <asm/sn/pci/pci_bus_cvlink.h>
+#include <asm/sn/simulator.h>
+
+char pciio_info_fingerprint[] = "pciio_info";
+
+/* =====================================================================
+ * PCI Generic Bus Provider
+ * Implement PCI provider operations. The pciio* layer provides a
+ * platform-independent interface for PCI devices. This layer
+ * switches among the possible implementations of a PCI adapter.
+ */
+
+/* =====================================================================
+ * Provider Function Location
+ *
+ * If there is more than one possible provider for
+ * this platform, we need to examine the master
+ * vertex of the current vertex for a provider
+ * function structure, and indirect through the
+ * appropriately named member.
+ */
+
+pciio_provider_t *
+pciio_to_provider_fns(vertex_hdl_t dev)
+{
+ pciio_info_t card_info;
+ pciio_provider_t *provider_fns;
+
+ /*
+ * We're called with two types of vertices, one is
+ * the bridge vertex (ends with "pci") and the other is the
+ * pci slot vertex (ends with "pci/[0-8]"). For the first type
+ * we need to get the provider from the PFUNCS label. For
+ * the second we get it from fastinfo/c_pops.
+ */
+ provider_fns = pciio_provider_fns_get(dev);
+ if (provider_fns == NULL) {
+ card_info = pciio_info_get(dev);
+ if (card_info != NULL) {
+ provider_fns = pciio_info_pops_get(card_info);
+ }
+ }
+
+ if (provider_fns == NULL) {
+ char devname[MAXDEVNAME];
+ panic("%s: provider_fns == NULL", vertex_to_name(dev, devname, MAXDEVNAME));
+ }
+ return provider_fns;
+
+}
+
+#define DEV_FUNC(dev,func) pciio_to_provider_fns(dev)->func
+#define CAST_PIOMAP(x) ((pciio_piomap_t)(x))
+#define CAST_DMAMAP(x) ((pciio_dmamap_t)(x))
+#define CAST_INTR(x) ((pciio_intr_t)(x))
+
+/*
+ * Many functions are not passed their vertex
+ * information directly; rather, they must
+ * dive through a resource map. These macros
+ * are available to coordinate this detail.
+ */
+#define PIOMAP_FUNC(map,func) DEV_FUNC((map)->pp_dev,func)
+#define DMAMAP_FUNC(map,func) DEV_FUNC((map)->pd_dev,func)
+#define INTR_FUNC(intr_hdl,func) DEV_FUNC((intr_hdl)->pi_dev,func)
+
+/* =====================================================================
+ * PIO MANAGEMENT
+ *
+ * For mapping system virtual address space to
+ * pciio space on a specified card
+ */
+
+pciio_piomap_t
+pciio_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* CFG, MEM, IO, or a device-decoded window */
+ iopaddr_t addr, /* lowest address (or offset in window) */
+ size_t byte_count, /* size of region containing our mappings */
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned flags)
+{ /* defined in sys/pio.h */
+ return (pciio_piomap_t) DEV_FUNC(dev, piomap_alloc)
+ (dev, dev_desc, space, addr, byte_count, byte_count_max, flags);
+}
+
+void
+pciio_piomap_free(pciio_piomap_t pciio_piomap)
+{
+ PIOMAP_FUNC(pciio_piomap, piomap_free)
+ (CAST_PIOMAP(pciio_piomap));
+}
+
+caddr_t
+pciio_piomap_addr(pciio_piomap_t pciio_piomap, /* mapping resources */
+ iopaddr_t pciio_addr, /* map for this pciio address */
+ size_t byte_count)
+{ /* map this many bytes */
+ pciio_piomap->pp_kvaddr = PIOMAP_FUNC(pciio_piomap, piomap_addr)
+ (CAST_PIOMAP(pciio_piomap), pciio_addr, byte_count);
+
+ return pciio_piomap->pp_kvaddr;
+}
+
+void
+pciio_piomap_done(pciio_piomap_t pciio_piomap)
+{
+ PIOMAP_FUNC(pciio_piomap, piomap_done)
+ (CAST_PIOMAP(pciio_piomap));
+}
+
+caddr_t
+pciio_piotrans_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* CFG, MEM, IO, or a device-decoded window */
+ iopaddr_t addr, /* starting address (or offset in window) */
+ size_t byte_count, /* map this many bytes */
+ unsigned flags)
+{ /* (currently unused) */
+ return DEV_FUNC(dev, piotrans_addr)
+ (dev, dev_desc, space, addr, byte_count, flags);
+}
+
+caddr_t
+pciio_pio_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* CFG, MEM, IO, or a device-decoded window */
+ iopaddr_t addr, /* starting address (or offset in window) */
+ size_t byte_count, /* map this many bytes */
+ pciio_piomap_t *mapp, /* where to return the map pointer */
+ unsigned flags)
+{ /* PIO flags */
+ pciio_piomap_t map = 0;
+ int errfree = 0;
+ caddr_t res;
+
+ if (mapp) {
+ map = *mapp; /* possible pre-allocated map */
+ *mapp = 0; /* record "no map used" */
+ }
+
+ res = pciio_piotrans_addr
+ (dev, dev_desc, space, addr, byte_count, flags);
+ if (res)
+ return res; /* pciio_piotrans worked */
+
+ if (!map) {
+ map = pciio_piomap_alloc
+ (dev, dev_desc, space, addr, byte_count, byte_count, flags);
+ if (!map)
+ return res; /* pciio_piomap_alloc failed */
+ errfree = 1;
+ }
+
+ res = pciio_piomap_addr
+ (map, addr, byte_count);
+ if (!res) {
+ if (errfree)
+ pciio_piomap_free(map);
+ return res; /* pciio_piomap_addr failed */
+ }
+ if (mapp)
+ *mapp = map; /* pass back map used */
+
+ return res; /* pciio_piomap_addr succeeded */
+}
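pciio_pio_addr() (and its DMA twin pciio_dma_addr() below) implement a try-the-fast-path-first pattern: attempt a resource-free direct translation, and only allocate and consume a mapping resource when that fails. A hypothetical, self-contained model of the control flow (all names and addresses here are invented for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Model of the pciio_pio_addr()/pciio_dma_addr() pattern: a cheap
 * direct translation is tried first; a mapping resource is consumed
 * only when direct translation is not possible. */

typedef struct { int in_use; } fake_map_t;

static void *direct_translate(unsigned long addr)
{
    /* pretend only low addresses can be translated directly */
    return addr < 0x1000 ? (void *)(addr + 0x100000UL) : NULL;
}

static void *map_translate(fake_map_t *map, unsigned long addr)
{
    map->in_use = 1;                      /* the slow path burns a map */
    return (void *)(addr + 0x200000UL);
}

static void *pio_addr_sketch(unsigned long addr, fake_map_t *map)
{
    void *res = direct_translate(addr);   /* fast path: no resources */
    if (res)
        return res;
    return map_translate(map, addr);      /* fall back to a mapping */
}
```

The real code adds one refinement the sketch omits: if it allocated the map itself and the mapping then fails, it frees the map again (the `errfree` flag), so a failed call leaks nothing.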
+
+iopaddr_t
+pciio_piospace_alloc(vertex_hdl_t dev, /* Device requiring space */
+ device_desc_t dev_desc, /* Device descriptor */
+ pciio_space_t space, /* MEM32/MEM64/IO */
+ size_t byte_count, /* Size of mapping */
+ size_t align)
+{ /* Alignment needed */
+ if (align < PAGE_SIZE)
+ align = PAGE_SIZE;
+ return DEV_FUNC(dev, piospace_alloc)
+ (dev, dev_desc, space, byte_count, align);
+}
+
+void
+pciio_piospace_free(vertex_hdl_t dev, /* Device freeing space */
+ pciio_space_t space, /* Type of space */
+ iopaddr_t pciaddr, /* starting address */
+ size_t byte_count)
+{ /* Range of address */
+ DEV_FUNC(dev, piospace_free)
+ (dev, space, pciaddr, byte_count);
+}
+
+/* =====================================================================
+ * DMA MANAGEMENT
+ *
+ * For mapping from pci space to system
+ * physical space.
+ */
+
+pciio_dmamap_t
+pciio_dmamap_alloc(vertex_hdl_t dev, /* set up mappings for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned flags)
+{ /* defined in dma.h */
+ return (pciio_dmamap_t) DEV_FUNC(dev, dmamap_alloc)
+ (dev, dev_desc, byte_count_max, flags);
+}
+
+void
+pciio_dmamap_free(pciio_dmamap_t pciio_dmamap)
+{
+ DMAMAP_FUNC(pciio_dmamap, dmamap_free)
+ (CAST_DMAMAP(pciio_dmamap));
+}
+
+iopaddr_t
+pciio_dmamap_addr(pciio_dmamap_t pciio_dmamap, /* use these mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count)
+{ /* map this many bytes */
+ return DMAMAP_FUNC(pciio_dmamap, dmamap_addr)
+ (CAST_DMAMAP(pciio_dmamap), paddr, byte_count);
+}
+
+void
+pciio_dmamap_done(pciio_dmamap_t pciio_dmamap)
+{
+ DMAMAP_FUNC(pciio_dmamap, dmamap_done)
+ (CAST_DMAMAP(pciio_dmamap));
+}
+
+iopaddr_t
+pciio_dmatrans_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned flags)
+{ /* defined in dma.h */
+ return DEV_FUNC(dev, dmatrans_addr)
+ (dev, dev_desc, paddr, byte_count, flags);
+}
+
+iopaddr_t
+pciio_dma_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ pciio_dmamap_t *mapp, /* map to use, then map we used */
+ unsigned flags)
+{ /* PIO flags */
+ pciio_dmamap_t map = 0;
+ int errfree = 0;
+ iopaddr_t res;
+
+ if (mapp) {
+ map = *mapp; /* possible pre-allocated map */
+ *mapp = 0; /* record "no map used" */
+ }
+
+ res = pciio_dmatrans_addr
+ (dev, dev_desc, paddr, byte_count, flags);
+ if (res)
+ return res; /* pciio_dmatrans worked */
+
+ if (!map) {
+ map = pciio_dmamap_alloc
+ (dev, dev_desc, byte_count, flags);
+ if (!map)
+ return res; /* pciio_dmamap_alloc failed */
+ errfree = 1;
+ }
+
+ res = pciio_dmamap_addr
+ (map, paddr, byte_count);
+ if (!res) {
+ if (errfree)
+ pciio_dmamap_free(map);
+ return res; /* pciio_dmamap_addr failed */
+ }
+ if (mapp)
+ *mapp = map; /* pass back map used */
+
+ return res; /* pciio_dmamap_addr succeeded */
+}
+
+void
+pciio_dmamap_drain(pciio_dmamap_t map)
+{
+ DMAMAP_FUNC(map, dmamap_drain)
+ (CAST_DMAMAP(map));
+}
+
+void
+pciio_dmaaddr_drain(vertex_hdl_t dev, paddr_t addr, size_t size)
+{
+ DEV_FUNC(dev, dmaaddr_drain)
+ (dev, addr, size);
+}
+
+/* =====================================================================
+ * INTERRUPT MANAGEMENT
+ *
+ * Allow crosstalk devices to establish interrupts
+ */
+
+/*
+ * Allocate resources required for an interrupt as specified in intr_desc.
+ * Return resource handle in intr_hdl.
+ */
+pciio_intr_t
+pciio_intr_alloc(vertex_hdl_t dev, /* which Crosstalk device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_intr_line_t lines, /* INTR line(s) to attach */
+ vertex_hdl_t owner_dev)
+{ /* owner of this interrupt */
+ return (pciio_intr_t) DEV_FUNC(dev, intr_alloc)
+ (dev, dev_desc, lines, owner_dev);
+}
+
+/*
+ * Free resources consumed by intr_alloc.
+ */
+void
+pciio_intr_free(pciio_intr_t intr_hdl)
+{
+ INTR_FUNC(intr_hdl, intr_free)
+ (CAST_INTR(intr_hdl));
+}
+
+/*
+ * Associate resources allocated with a previous pciio_intr_alloc call with the
+ * described handler, arg, name, etc.
+ *
+ * Returns 0 on success, returns <0 on failure.
+ */
+int
+pciio_intr_connect(pciio_intr_t intr_hdl,
+ intr_func_t intr_func, intr_arg_t intr_arg) /* pciio intr resource handle */
+{
+ return INTR_FUNC(intr_hdl, intr_connect)
+ (CAST_INTR(intr_hdl), intr_func, intr_arg);
+}
+
+/*
+ * Disassociate handler with the specified interrupt.
+ */
+void
+pciio_intr_disconnect(pciio_intr_t intr_hdl)
+{
+ INTR_FUNC(intr_hdl, intr_disconnect)
+ (CAST_INTR(intr_hdl));
+}
+
+/*
+ * Return a hwgraph vertex that represents the CPU currently
+ * targeted by an interrupt.
+ */
+vertex_hdl_t
+pciio_intr_cpu_get(pciio_intr_t intr_hdl)
+{
+ return INTR_FUNC(intr_hdl, intr_cpu_get)
+ (CAST_INTR(intr_hdl));
+}
+
+void
+pciio_slot_func_to_name(char *name,
+ pciio_slot_t slot,
+ pciio_function_t func)
+{
+ /*
+ * standard connection points:
+ *
+ * PCIIO_SLOT_NONE: .../pci/direct
+ * PCIIO_FUNC_NONE: .../pci/<SLOT> ie. .../pci/3
+ * multifunction: .../pci/<SLOT><FUNC> ie. .../pci/3c
+ */
+
+ if (slot == PCIIO_SLOT_NONE)
+ sprintf(name, EDGE_LBL_DIRECT);
+ else if (func == PCIIO_FUNC_NONE)
+ sprintf(name, "%d", slot);
+ else
+ sprintf(name, "%d%c", slot, 'a'+func);
+}
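The naming convention documented in pciio_slot_func_to_name() can be reproduced in a stand-alone sketch. `SLOT_NONE`/`FUNC_NONE` below stand in for the driver's `PCIIO_SLOT_NONE`/`PCIIO_FUNC_NONE` constants, and `slot_func_name` is a hypothetical bounded-buffer variant of the real function:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define SLOT_NONE  (-1)
#define FUNC_NONE  (-1)

/* Naming scheme for PCI connection points:
 *   no slot          -> "direct"
 *   slot only        -> "3"          (slot 3)
 *   slot + function  -> "3c"         (slot 3, function 2)      */
static void slot_func_name(char *name, size_t n, int slot, int func)
{
    if (slot == SLOT_NONE)
        snprintf(name, n, "direct");
    else if (func == FUNC_NONE)
        snprintf(name, n, "%d", slot);
    else
        snprintf(name, n, "%d%c", slot, 'a' + func);
}
```

Note that functions are encoded as letters (`'a' + func`), so slot 3 / function 0 is `"3a"`, which keeps the multifunction names distinct from plain slot numbers in the hwgraph.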
+
+/*
+ * pciio_cardinfo_get
+ *
+ * Get the pciio info structure corresponding to the
+ * specified PCI "slot" (we like it when the same index
+ * number is used for the PCI IDSEL, the REQ/GNT pair,
+ * and the interrupt line being used for INTA. We like
+ * it so much we call it the slot number).
+ */
+static pciio_info_t
+pciio_cardinfo_get(
+ vertex_hdl_t pciio_vhdl,
+ pciio_slot_t pci_slot)
+{
+ char namebuf[16];
+ pciio_info_t info = 0;
+ vertex_hdl_t conn;
+
+ pciio_slot_func_to_name(namebuf, pci_slot, PCIIO_FUNC_NONE);
+ if (GRAPH_SUCCESS ==
+ hwgraph_traverse(pciio_vhdl, namebuf, &conn)) {
+ info = pciio_info_chk(conn);
+ hwgraph_vertex_unref(conn);
+ }
+
+ return info;
+}
+
+
+/*
+ * pciio_error_handler:
+ * dispatch an error to the appropriate
+ * pciio connection point, or process
+ * it as a generic pci error.
+ * Yes, the first parameter is the
+ * provider vertex at the middle of
+ * the bus; we get to the pciio connect
+ * point using the ioerror widgetdev field.
+ *
+ * This function is called by the
+ * specific PCI provider, after it has figured
+ * out where on the PCI bus (including which slot,
+ * if it can tell) the error came from.
+ */
+/*ARGSUSED */
+int
+pciio_error_handler(
+ vertex_hdl_t pciio_vhdl,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioerror)
+{
+ pciio_info_t pciio_info;
+ vertex_hdl_t pconn_vhdl;
+ pciio_slot_t slot;
+
+ int retval;
+
+#if DEBUG && ERROR_DEBUG
+ printk("%v: pciio_error_handler\n", pciio_vhdl);
+#endif
+
+ IOERR_PRINTF(printk(KERN_NOTICE "%v: PCI Bus Error: Error code: %d Error mode: %d\n",
+ pciio_vhdl, error_code, mode));
+
+ /* If there is an error handler sitting on
+ * the "no-slot" connection point, give it
+ * first crack at the error. NOTE: it is
+ * quite possible that this function may
+ * do further refining of the ioerror.
+ */
+ pciio_info = pciio_cardinfo_get(pciio_vhdl, PCIIO_SLOT_NONE);
+ if (pciio_info && pciio_info->c_efunc) {
+ pconn_vhdl = pciio_info_dev_get(pciio_info);
+
+ retval = pciio_info->c_efunc
+ (pciio_info->c_einfo, error_code, mode, ioerror);
+ if (retval != IOERROR_UNHANDLED)
+ return retval;
+ }
+
+ /* Is the error associated with a particular slot?
+ */
+ if (IOERROR_FIELDVALID(ioerror, widgetdev)) {
+ short widgetdev;
+ /*
+ * NOTE :
+ * widgetdev is a 4-byte value encoded as slot in the higher-order
+ * 2 bytes and function in the lower-order 2 bytes.
+ */
+ IOERROR_GETVALUE(widgetdev, ioerror, widgetdev);
+ slot = pciio_widgetdev_slot_get(widgetdev);
+
+ /* If this slot has an error handler,
+ * deliver the error to it.
+ */
+ pciio_info = pciio_cardinfo_get(pciio_vhdl, slot);
+ if (pciio_info != NULL) {
+ if (pciio_info->c_efunc != NULL) {
+
+ pconn_vhdl = pciio_info_dev_get(pciio_info);
+
+ retval = pciio_info->c_efunc
+ (pciio_info->c_einfo, error_code, mode, ioerror);
+ if (retval != IOERROR_UNHANDLED)
+ return retval;
+ }
+ }
+ }
+
+ return (mode == MODE_DEVPROBE)
+ ? IOERROR_HANDLED /* probes are OK */
+ : IOERROR_UNHANDLED; /* otherwise, foo! */
+}
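The widgetdev encoding described in the comment above (slot in the upper 16 bits, function in the lower 16 bits) can be sketched as plain bit manipulation. These are hypothetical helpers mirroring what the driver's `pciio_widgetdev_slot_get()`-style accessors must do, not the driver's own definitions:

```c
#include <assert.h>
#include <stdint.h>

/* widgetdev packs slot in the upper 16 bits and function in the
 * lower 16 bits of a 32-bit value. */
static inline uint16_t widgetdev_slot(uint32_t widgetdev)
{
    return (uint16_t)(widgetdev >> 16);
}

static inline uint16_t widgetdev_func(uint32_t widgetdev)
{
    return (uint16_t)(widgetdev & 0xffff);
}

static inline uint32_t widgetdev_pack(uint16_t slot, uint16_t func)
{
    return ((uint32_t)slot << 16) | func;
}
```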
+
+/* =====================================================================
+ * CONFIGURATION MANAGEMENT
+ */
+
+/*
+ * Startup a crosstalk provider
+ */
+void
+pciio_provider_startup(vertex_hdl_t pciio_provider)
+{
+ DEV_FUNC(pciio_provider, provider_startup)
+ (pciio_provider);
+}
+
+/*
+ * Shutdown a crosstalk provider
+ */
+void
+pciio_provider_shutdown(vertex_hdl_t pciio_provider)
+{
+ DEV_FUNC(pciio_provider, provider_shutdown)
+ (pciio_provider);
+}
+
+/*
+ * Read value of configuration register
+ */
+uint64_t
+pciio_config_get(vertex_hdl_t dev,
+ unsigned reg,
+ unsigned size)
+{
+ uint64_t value = 0;
+ unsigned shift = 0;
+
+ /* handle accesses that cross words here,
+ * since that's common code between all
+ * possible providers.
+ */
+ while (size > 0) {
+ unsigned biw = 4 - (reg&3);
+ if (biw > size)
+ biw = size;
+
+ value |= DEV_FUNC(dev, config_get)
+ (dev, reg, biw) << shift;
+
+ shift += 8*biw;
+ reg += biw;
+ size -= biw;
+ }
+ return value;
+}
+
+/*
+ * Change value of configuration register
+ */
+void
+pciio_config_set(vertex_hdl_t dev,
+ unsigned reg,
+ unsigned size,
+ uint64_t value)
+{
+ /* handle accesses that cross words here,
+ * since that's common code between all
+ * possible providers.
+ */
+ while (size > 0) {
+ unsigned biw = 4 - (reg&3);
+ if (biw > size)
+ biw = size;
+
+ DEV_FUNC(dev, config_set)
+ (dev, reg, biw, value);
+ reg += biw;
+ size -= biw;
+ value >>= biw * 8;
+ }
+}
+
+/* =====================================================================
+ * GENERIC PCI SUPPORT FUNCTIONS
+ */
+
+/*
+ * Issue a hardware reset to a card.
+ */
+int
+pciio_reset(vertex_hdl_t dev)
+{
+ return DEV_FUNC(dev, reset) (dev);
+}
+
+/****** Generic pci slot information interfaces ******/
+
+pciio_info_t
+pciio_info_chk(vertex_hdl_t pciio)
+{
+ arbitrary_info_t ainfo = 0;
+
+ hwgraph_info_get_LBL(pciio, INFO_LBL_PCIIO, &ainfo);
+ return (pciio_info_t) ainfo;
+}
+
+pciio_info_t
+pciio_info_get(vertex_hdl_t pciio)
+{
+ pciio_info_t pciio_info;
+
+ pciio_info = (pciio_info_t) hwgraph_fastinfo_get(pciio);
+
+ if ((pciio_info != NULL) &&
+ (pciio_info->c_fingerprint != pciio_info_fingerprint)
+ && (pciio_info->c_fingerprint != NULL)) {
+
+ return((pciio_info_t)-1); /* Should panic .. */
+ }
+
+ return pciio_info;
+}
+
+void
+pciio_info_set(vertex_hdl_t pciio, pciio_info_t pciio_info)
+{
+ if (pciio_info != NULL)
+ pciio_info->c_fingerprint = pciio_info_fingerprint;
+ hwgraph_fastinfo_set(pciio, (arbitrary_info_t) pciio_info);
+
+ /* Also, mark this vertex as a PCI slot
+ * and use the pciio_info, so pciio_info_chk
+ * can work (and be fairly efficient).
+ */
+ hwgraph_info_add_LBL(pciio, INFO_LBL_PCIIO,
+ (arbitrary_info_t) pciio_info);
+}
+
+vertex_hdl_t
+pciio_info_dev_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_vertex);
+}
+
+/*ARGSUSED*/
+pciio_bus_t
+pciio_info_bus_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_bus);
+}
+
+pciio_slot_t
+pciio_info_slot_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_slot);
+}
+
+pciio_function_t
+pciio_info_function_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_func);
+}
+
+pciio_vendor_id_t
+pciio_info_vendor_id_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_vendor);
+}
+
+pciio_device_id_t
+pciio_info_device_id_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_device);
+}
+
+vertex_hdl_t
+pciio_info_master_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_master);
+}
+
+arbitrary_info_t
+pciio_info_mfast_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_mfast);
+}
+
+pciio_provider_t *
+pciio_info_pops_get(pciio_info_t pciio_info)
+{
+ return (pciio_info->c_pops);
+}
+
+/* =====================================================================
+ * GENERIC PCI INITIALIZATION FUNCTIONS
+ */
+
+/*
+ * pciioattach: called for each vertex in the graph
+ * that is a PCI provider.
+ */
+/*ARGSUSED */
+int
+pciio_attach(vertex_hdl_t pciio)
+{
+#if DEBUG && ATTACH_DEBUG
+ char devname[MAXDEVNAME];
+ printk("%s: pciio_attach\n", vertex_to_name(pciio, devname, MAXDEVNAME));
+#endif
+ return 0;
+}
+
+/*
+ * Associate a set of pciio_provider functions with a vertex.
+ */
+void
+pciio_provider_register(vertex_hdl_t provider, pciio_provider_t *pciio_fns)
+{
+ hwgraph_info_add_LBL(provider, INFO_LBL_PFUNCS, (arbitrary_info_t) pciio_fns);
+}
+
+/*
+ * Disassociate a set of pciio_provider functions from a vertex.
+ */
+void
+pciio_provider_unregister(vertex_hdl_t provider)
+{
+ arbitrary_info_t ainfo;
+
+ hwgraph_info_remove_LBL(provider, INFO_LBL_PFUNCS, (long *) &ainfo);
+}
+
+/*
+ * Obtain a pointer to the pciio_provider functions for a specified Crosstalk
+ * provider.
+ */
+pciio_provider_t *
+pciio_provider_fns_get(vertex_hdl_t provider)
+{
+ arbitrary_info_t ainfo = 0;
+
+ (void) hwgraph_info_get_LBL(provider, INFO_LBL_PFUNCS, &ainfo);
+ return (pciio_provider_t *) ainfo;
+}
+
+pciio_info_t
+pciio_device_info_new(
+ pciio_info_t pciio_info,
+ vertex_hdl_t master,
+ pciio_slot_t slot,
+ pciio_function_t func,
+ pciio_vendor_id_t vendor_id,
+ pciio_device_id_t device_id)
+{
+ if (!pciio_info) {
+ pciio_info = kmalloc(sizeof (*(pciio_info)), GFP_KERNEL);
+ if ( pciio_info )
+ memset(pciio_info, 0, sizeof (*(pciio_info)));
+ else {
+ printk(KERN_WARNING "pciio_device_info_new(): Unable to "
+ "allocate memory\n");
+ return NULL;
+ }
+ }
+ pciio_info->c_slot = slot;
+ pciio_info->c_func = func;
+ pciio_info->c_vendor = vendor_id;
+ pciio_info->c_device = device_id;
+ pciio_info->c_master = master;
+ pciio_info->c_mfast = hwgraph_fastinfo_get(master);
+ pciio_info->c_pops = pciio_provider_fns_get(master);
+ pciio_info->c_efunc = 0;
+ pciio_info->c_einfo = 0;
+
+ return pciio_info;
+}
+
+void
+pciio_device_info_free(pciio_info_t pciio_info)
+{
+ /* NOTE : pciio_info points at a structure embedded within the
+ * pcibr_info, not at separately heap-allocated memory, so we only
+ * clear it here; sizeof must be of the struct, not the pointer.
+ */
+ memset((char *)pciio_info, 0, sizeof(*pciio_info));
+}
+
+vertex_hdl_t
+pciio_device_info_register(
+ vertex_hdl_t connectpt, /* vertex at center of bus */
+ pciio_info_t pciio_info) /* details about the connectpt */
+{
+ char name[32];
+ vertex_hdl_t pconn;
+ int device_master_set(vertex_hdl_t, vertex_hdl_t);
+
+ pciio_slot_func_to_name(name,
+ pciio_info->c_slot,
+ pciio_info->c_func);
+
+ if (GRAPH_SUCCESS !=
+ hwgraph_path_add(connectpt, name, &pconn))
+ return pconn;
+
+ pciio_info->c_vertex = pconn;
+ pciio_info_set(pconn, pciio_info);
+
+ /*
+ * create link to our pci provider
+ */
+
+ device_master_set(pconn, pciio_info->c_master);
+ return pconn;
+}
+
+void
+pciio_device_info_unregister(vertex_hdl_t connectpt,
+ pciio_info_t pciio_info)
+{
+ char name[32];
+ vertex_hdl_t pconn = NULL;
+
+ if (!pciio_info)
+ return;
+
+ pciio_slot_func_to_name(name,
+ pciio_info->c_slot,
+ pciio_info->c_func);
+
+ /* detach the connect point from the graph before clearing its info */
+ hwgraph_edge_remove(connectpt, name, &pconn);
+ pciio_info_set(pconn, 0);
+
+ hwgraph_vertex_unref(pconn);
+ hwgraph_vertex_destroy(pconn);
+}
+
+/*ARGSUSED */
+int
+pciio_device_attach(vertex_hdl_t pconn,
+ int drv_flags)
+{
+ pciio_info_t pciio_info;
+ pciio_vendor_id_t vendor_id;
+ pciio_device_id_t device_id;
+
+
+ pciio_info = pciio_info_get(pconn);
+
+ vendor_id = pciio_info->c_vendor;
+ device_id = pciio_info->c_device;
+
+ /* we don't start attaching things until
+ * all the driver init routines (including
+ * pciio_init) have been called; so we
+ * can assume here that we have a registry.
+ */
+
+ return(cdl_add_connpt(vendor_id, device_id, pconn, drv_flags));
+}
+
+int
+pciio_device_detach(vertex_hdl_t pconn,
+ int drv_flags)
+{
+ return(0);
+}
+
+/*
+ * Allocate space from the specified PCI window mapping resource. On
+ * success record information about the allocation in the supplied window
+ * allocation cookie (if non-NULL) and return the address of the allocated
+ * window. On failure return NULL.
+ *
+ * The "size" parameter is usually taken from a PCI device's Base Address
+ * Register (BAR) decoder, so the allocation must be a multiple of, and
+ * aligned to, that size. The "align" parameter acts as a minimum-alignment
+ * constraint, reflecting system or device addressing restrictions such as
+ * the inability to share higher-level "windows" between devices. The
+ * returned PCI address allocation is a multiple of the alignment constraint
+ * in both alignment and size; thus the returned PCI address block is
+ * aligned to the maximum of the requested size and alignment.
+ */
+iopaddr_t
+pciio_device_win_alloc(struct resource *root_resource,
+ pciio_win_alloc_t win_alloc,
+ size_t start, size_t size, size_t align)
+{
+
+ struct resource *new_res;
+ int status;
+
+ new_res = (struct resource *) kmalloc( sizeof(struct resource), GFP_KERNEL);
+ if (!new_res)
+ return 0;
+
+ if (start > 0) {
+ status = allocate_resource( root_resource, new_res,
+ size, start /* Min start addr. */,
+ (start + size) - 1, 1,
+ NULL, NULL);
+ } else {
+ if (size > align)
+ align = size;
+ status = allocate_resource( root_resource, new_res,
+ size, align /* Min start addr. */,
+ root_resource->end, align,
+ NULL, NULL);
+ }
+
+ if (status) {
+ kfree(new_res);
+ return((iopaddr_t) NULL);
+ }
+
+ /*
+ * If a window allocation cookie has been supplied, use it to keep
+ * track of all the allocated space assigned to this window.
+ */
+ if (win_alloc) {
+ win_alloc->wa_resource = new_res;
+ win_alloc->wa_base = new_res->start;
+ win_alloc->wa_pages = size;
+ }
+
+ return new_res->start;
+}
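The alignment rule documented for pciio_device_win_alloc(), and implemented by the `if (size > align) align = size;` branch above, can be sketched on its own. Both helpers below are hypothetical illustrations; they assume, as PCI BAR sizes guarantee, that the effective alignment is a power of two:

```c
#include <assert.h>
#include <stddef.h>

/* When no start address is pinned, the effective alignment is the
 * larger of the requested size and the minimum alignment, so the
 * returned block is aligned to max(size, align). */
static size_t effective_align(size_t size, size_t align)
{
    return size > align ? size : align;
}

/* Round a free-space base up to the effective alignment (power of
 * two assumed, as for PCI BAR sizes). */
static unsigned long first_fit(unsigned long free_base,
                               size_t size, size_t align)
{
    size_t a = effective_align(size, align);
    return (free_base + a - 1) & ~(unsigned long)(a - 1);
}
```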
+
+/*
+ * Free the specified window allocation back into the PCI window mapping
+ * resource.
+ */
+void
+pciio_device_win_free(pciio_win_alloc_t win_alloc)
+{
+ int status;
+
+ if (win_alloc->wa_resource) {
+ status = release_resource(win_alloc->wa_resource);
+ if (!status)
+ kfree(win_alloc->wa_resource);
+ else
+ BUG();
+ }
+}
+
+/*
+ * pciio_error_register:
+ * arrange for a function to be called with
+ * a specified first parameter plus other
+ * information when an error is encountered
+ * and traced to the pci slot corresponding
+ * to the connection point pconn.
+ *
+ * may also be called with a null function
+ * pointer to "unregister" the error handler.
+ *
+ * NOTE: subsequent calls silently overwrite
+ * previous data for this vertex. We assume that
+ * cooperating drivers, well, cooperate ...
+ */
+void
+pciio_error_register(vertex_hdl_t pconn,
+ error_handler_f *efunc,
+ error_handler_arg_t einfo)
+{
+ pciio_info_t pciio_info;
+
+ pciio_info = pciio_info_get(pconn);
+ ASSERT(pciio_info != NULL);
+ pciio_info->c_efunc = efunc;
+ pciio_info->c_einfo = einfo;
+}
+
+/*
+ * Check whether any device has been found in this slot and return
+ * true or false accordingly. pconn_vhdl is the vertex for the slot.
+ */
+int
+pciio_slot_inuse(vertex_hdl_t pconn_vhdl)
+{
+ pciio_info_t pciio_info = pciio_info_get(pconn_vhdl);
+
+ ASSERT(pciio_info);
+ ASSERT(pciio_info->c_vertex == pconn_vhdl);
+ if (pciio_info->c_vendor) {
+ /*
+ * A non-zero vendor ID indicates that a board
+ * was found in this slot.
+ */
+ return 1;
+ }
+ return 0;
+}
+
+int
+pciio_info_type1_get(pciio_info_t pci_info)
+{
+ return (pci_info->c_type1);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/interrupt.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/pci/pci_defs.h>
+#include <asm/sn/pci/pic.h>
+#include <asm/sn/sn_private.h>
+
+extern struct file_operations pcibr_fops;
+extern pcibr_list_p pcibr_list;
+
+static int pic_attach2(vertex_hdl_t, void *, vertex_hdl_t,
+ int, pcibr_soft_t *);
+
+extern int isIO9(nasid_t);
+extern char *dev_to_name(vertex_hdl_t dev, char *buf, uint buflen);
+extern int pcibr_widget_to_bus(vertex_hdl_t pcibr_vhdl);
+extern pcibr_hints_t pcibr_hints_get(vertex_hdl_t, int);
+extern unsigned pcibr_intr_bits(pciio_info_t info,
+ pciio_intr_line_t lines, int nslots);
+extern void pcibr_setwidint(xtalk_intr_t);
+extern int pcibr_error_handler_wrapper(error_handler_arg_t, int,
+ ioerror_mode_t, ioerror_t *);
+extern void pcibr_error_intr_handler(intr_arg_t);
+extern void pcibr_directmap_init(pcibr_soft_t);
+extern int pcibr_slot_info_init(vertex_hdl_t,pciio_slot_t);
+extern int pcibr_slot_addr_space_init(vertex_hdl_t,pciio_slot_t);
+extern int pcibr_slot_device_init(vertex_hdl_t, pciio_slot_t);
+extern int pcibr_slot_pcix_rbar_init(pcibr_soft_t, pciio_slot_t);
+extern int pcibr_slot_guest_info_init(vertex_hdl_t,pciio_slot_t);
+extern int pcibr_slot_call_device_attach(vertex_hdl_t,
+ pciio_slot_t, int);
+extern void pcibr_rrb_alloc_init(pcibr_soft_t, int, int, int);
+extern int pcibr_pcix_rbars_calc(pcibr_soft_t);
+extern pcibr_info_t pcibr_device_info_new(pcibr_soft_t, pciio_slot_t,
+ pciio_function_t, pciio_vendor_id_t,
+ pciio_device_id_t);
+extern int pcibr_initial_rrb(vertex_hdl_t, pciio_slot_t,
+ pciio_slot_t);
+extern void xwidget_error_register(vertex_hdl_t, error_handler_f *,
+ error_handler_arg_t);
+extern void pcibr_clearwidint(pcibr_soft_t);
+
+
+
+/*
+ * copy xwidget_info_t from conn_v to peer_conn_v
+ */
+static int
+pic_bus1_widget_info_dup(vertex_hdl_t conn_v, vertex_hdl_t peer_conn_v,
+ cnodeid_t xbow_peer, char *peer_path)
+{
+ xwidget_info_t widget_info, peer_widget_info;
+ vertex_hdl_t peer_hubv;
+ hubinfo_t peer_hub_info;
+
+ /* get the peer hub's widgetid */
+ peer_hubv = NODEPDA(xbow_peer)->node_vertex;
+ peer_hub_info = NULL;
+ hubinfo_get(peer_hubv, &peer_hub_info);
+ if (peer_hub_info == NULL)
+ return 0;
+
+ if (hwgraph_info_get_LBL(conn_v, INFO_LBL_XWIDGET,
+ (arbitrary_info_t *)&widget_info) == GRAPH_SUCCESS) {
+ peer_widget_info = kmalloc(sizeof (*(peer_widget_info)), GFP_KERNEL);
+ if ( !peer_widget_info ) {
+ return -ENOMEM;
+ }
+ memset(peer_widget_info, 0, sizeof (*(peer_widget_info)));
+
+ peer_widget_info->w_fingerprint = widget_info_fingerprint;
+ peer_widget_info->w_vertex = peer_conn_v;
+ peer_widget_info->w_id = widget_info->w_id;
+ peer_widget_info->w_master = peer_hubv;
+ peer_widget_info->w_masterid = peer_hub_info->h_widgetid;
+ /* structure copy */
+ peer_widget_info->w_hwid = widget_info->w_hwid;
+ peer_widget_info->w_efunc = 0;
+ peer_widget_info->w_einfo = 0;
+ peer_widget_info->w_name = kmalloc(strlen(peer_path) + 1, GFP_KERNEL);
+ if (!peer_widget_info->w_name) {
+ kfree(peer_widget_info);
+ return -ENOMEM;
+ }
+ strcpy(peer_widget_info->w_name, peer_path);
+
+ if (hwgraph_info_add_LBL(peer_conn_v, INFO_LBL_XWIDGET,
+ (arbitrary_info_t)peer_widget_info) != GRAPH_SUCCESS) {
+ kfree(peer_widget_info->w_name);
+ kfree(peer_widget_info);
+ return 0;
+ }
+
+ xwidget_info_set(peer_conn_v, peer_widget_info);
+
+ return 1;
+ }
+
+ printk(KERN_WARNING "pic_bus1_widget_info_dup: "
+ "cannot get INFO_LBL_XWIDGET from 0x%lx\n", (uint64_t)conn_v);
+ return 0;
+}
+
+/*
+ * If this PIC is attached to two Cbricks ("dual-ported") then
+ * attach each bus to opposite Cbricks.
+ *
+ * If successful, return a new vertex suitable for attaching the PIC bus.
+ * If not successful, return zero and both buses will attach to the
+ * vertex passed into pic_attach().
+ */
+static vertex_hdl_t
+pic_bus1_redist(nasid_t nasid, vertex_hdl_t conn_v)
+{
+ cnodeid_t cnode = nasid_to_cnodeid(nasid);
+ cnodeid_t xbow_peer = -1;
+ char pathname[256], peer_path[256], tmpbuf[256];
+ char *p;
+ int rc;
+ vertex_hdl_t peer_conn_v, hubv;
+ int pos;
+ slabid_t slab;
+
+ if (NODEPDA(cnode)->xbow_peer >= 0) { /* if dual-ported */
+ /* create a path for this widget on the peer Cbrick */
+ /* pcibr widget hw/module/001c11/slab/0/Pbrick/xtalk/12 */
+ /* sprintf(pathname, "%v", conn_v); */
+ xbow_peer = nasid_to_cnodeid(NODEPDA(cnode)->xbow_peer);
+ pos = hwgfs_generate_path(conn_v, tmpbuf, 256);
+ strcpy(pathname, &tmpbuf[pos]);
+ p = pathname + strlen("hw/module/001c01/slab/0/");
+
+ memset(tmpbuf, 0, 16);
+ format_module_id(tmpbuf, geo_module((NODEPDA(xbow_peer))->geoid), MODULE_FORMAT_BRIEF);
+ slab = geo_slab((NODEPDA(xbow_peer))->geoid);
+ sprintf(peer_path, "module/%s/slab/%d/%s", tmpbuf, (int)slab, p);
+
+ /* Look for vertex for this widget on the peer Cbrick.
+ * Expect GRAPH_NOT_FOUND.
+ */
+ rc = hwgraph_traverse(hwgraph_root, peer_path, &peer_conn_v);
+ if (GRAPH_SUCCESS == rc)
+ printk("pic_attach: found unexpected vertex: 0x%lx\n",
+ (uint64_t)peer_conn_v);
+ else if (GRAPH_NOT_FOUND != rc) {
+ printk("pic_attach: hwgraph_traverse unexpectedly"
+ " returned 0x%x\n", rc);
+ } else {
+ /* try to add the widget vertex to the peer Cbrick */
+ rc = hwgraph_path_add(hwgraph_root, peer_path, &peer_conn_v);
+
+ if (GRAPH_SUCCESS != rc)
+ printk("pic_attach: hwgraph_path_add"
+ " failed with 0x%x\n", rc);
+ else {
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, conn_v,
+ "pic_bus1_redist: added vertex %v\n", peer_conn_v));
+
+ /* Now hang appropriate stuff off of the new
+ * vertex. We bail out if we cannot add something.
+ * In that case, we don't remove the newly added
+ * vertex but that should be safe and we don't
+ * really expect the additions to fail anyway.
+ */
+ if (!pic_bus1_widget_info_dup(conn_v, peer_conn_v,
+ xbow_peer, peer_path))
+ return 0;
+
+ hubv = cnodeid_to_vertex(xbow_peer);
+ ASSERT(hubv != GRAPH_VERTEX_NONE);
+ device_master_set(peer_conn_v, hubv);
+ xtalk_provider_register(hubv, &hub_provider);
+ xtalk_provider_startup(hubv);
+ return peer_conn_v;
+ }
+ }
+ }
+ return 0;
+}
+
+/*
+ * PIC has two buses under a single widget. pic_attach() calls pic_attach2()
+ * to attach each of those buses.
+ */
+int
+pic_attach(vertex_hdl_t conn_v)
+{
+ int rc;
+ void *bridge0, *bridge1 = (void *)0;
+ vertex_hdl_t pcibr_vhdl0, pcibr_vhdl1 = (vertex_hdl_t)0;
+ pcibr_soft_t bus0_soft, bus1_soft = (pcibr_soft_t)0;
+ vertex_hdl_t conn_v0, conn_v1, peer_conn_v;
+ int bricktype;
+ int iobrick_type_get_nasid(nasid_t nasid);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, conn_v, "pic_attach()\n"));
+
+ bridge0 = pcibr_bridge_ptr_get(conn_v, 0);
+ bridge1 = pcibr_bridge_ptr_get(conn_v, 1);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, conn_v,
+ "pic_attach: bridge0=0x%lx, bridge1=0x%lx\n",
+ bridge0, bridge1));
+
+ conn_v0 = conn_v1 = conn_v;
+
+ /* If dual-ported then split the two PIC buses across both Cbricks */
+ peer_conn_v = pic_bus1_redist(NASID_GET(bridge0), conn_v);
+ if (peer_conn_v)
+ conn_v1 = peer_conn_v;
+
+ /*
+ * Create the vertex for the PCI buses, which we
+ * will also use to hold the pcibr_soft and
+ * which will be the "master" vertex for all the
+ * pciio connection points we will hang off it.
+ * This needs to happen before we call nic_bridge_vertex_info
+ * as some of the *_vmc functions need access to the edges.
+ *
+ * Opening this vertex will provide access to
+ * the Bridge registers themselves.
+ */
+ bricktype = iobrick_type_get_nasid(NASID_GET(bridge0));
+ if ( bricktype == MODULE_CGBRICK ) {
+ rc = hwgraph_path_add(conn_v0, EDGE_LBL_AGP_0, &pcibr_vhdl0);
+ ASSERT(rc == GRAPH_SUCCESS);
+ rc = hwgraph_path_add(conn_v1, EDGE_LBL_AGP_1, &pcibr_vhdl1);
+ ASSERT(rc == GRAPH_SUCCESS);
+ } else {
+ rc = hwgraph_path_add(conn_v0, EDGE_LBL_PCIX_0, &pcibr_vhdl0);
+ ASSERT(rc == GRAPH_SUCCESS);
+ rc = hwgraph_path_add(conn_v1, EDGE_LBL_PCIX_1, &pcibr_vhdl1);
+ ASSERT(rc == GRAPH_SUCCESS);
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, conn_v,
+ "pic_attach: pcibr_vhdl0=0x%lx, pcibr_vhdl1=0x%lx\n",
+ pcibr_vhdl0, pcibr_vhdl1));
+
+ /* register pci provider array */
+ pciio_provider_register(pcibr_vhdl0, &pci_pic_provider);
+ pciio_provider_register(pcibr_vhdl1, &pci_pic_provider);
+
+ pciio_provider_startup(pcibr_vhdl0);
+ pciio_provider_startup(pcibr_vhdl1);
+
+ pic_attach2(conn_v0, bridge0, pcibr_vhdl0, 0, &bus0_soft);
+ pic_attach2(conn_v1, bridge1, pcibr_vhdl1, 1, &bus1_soft);
+
+ {
+ /* If we're dual-ported finish duplicating the peer info structure.
+ * The error handler and arg are done in pic_attach2().
+ */
+ xwidget_info_t info0, info1;
+ if (conn_v0 != conn_v1) { /* dual ported */
+ info0 = xwidget_info_get(conn_v0);
+ info1 = xwidget_info_get(conn_v1);
+ if (info1->w_efunc == (error_handler_f *)NULL)
+ info1->w_efunc = info0->w_efunc;
+ if (info1->w_einfo == (error_handler_arg_t)0)
+ info1->w_einfo = bus1_soft;
+ }
+ }
+
+ /* save a pointer to the PIC's other bus's soft struct */
+ bus0_soft->bs_peers_soft = bus1_soft;
+ bus1_soft->bs_peers_soft = bus0_soft;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, conn_v,
+ "pic_attach: bus0_soft=0x%lx, bus1_soft=0x%lx\n",
+ bus0_soft, bus1_soft));
+
+ return 0;
+}
+
+
+/*
+ * Attach a single PIC bus. Called once for each of the two buses by
+ * pic_attach() above.
+ */
+static int
+pic_attach2(vertex_hdl_t xconn_vhdl, void *bridge,
+ vertex_hdl_t pcibr_vhdl, int busnum, pcibr_soft_t *ret_softp)
+{
+ vertex_hdl_t ctlr_vhdl;
+ pcibr_soft_t pcibr_soft;
+ pcibr_info_t pcibr_info;
+ xwidget_info_t info;
+ xtalk_intr_t xtalk_intr;
+ pcibr_list_p self;
+ int entry, slot, ibit, i;
+ vertex_hdl_t noslot_conn;
+ char devnm[MAXDEVNAME], *s;
+ pcibr_hints_t pcibr_hints;
+ picreg_t id;
+ picreg_t int_enable;
+ picreg_t pic_ctrl_reg;
+
+ int iobrick_type_get_nasid(nasid_t nasid);
+ int iomoduleid_get(nasid_t nasid);
+ int irq;
+ int cpu;
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, pcibr_vhdl,
+ "pic_attach2: bridge=0x%lx, busnum=%d\n", bridge, busnum));
+
+ ctlr_vhdl = NULL;
+ ctlr_vhdl = hwgraph_register(pcibr_vhdl, EDGE_LBL_CONTROLLER, 0,
+ 0, 0, 0,
+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0,
+ (struct file_operations *)&pcibr_fops, (void *)pcibr_vhdl);
+ ASSERT(ctlr_vhdl != NULL);
+
+ id = pcireg_bridge_id_get(bridge);
+ hwgraph_info_add_LBL(pcibr_vhdl, INFO_LBL_PCIBR_ASIC_REV,
+ (arbitrary_info_t)XWIDGET_PART_REV_NUM(id));
+
+ /*
+ * Get the hint structure; if some NIC callback marked this vertex as
+ * "hands-off" then we just return here, before doing anything else.
+ */
+ pcibr_hints = pcibr_hints_get(xconn_vhdl, 0);
+
+ if (pcibr_hints && pcibr_hints->ph_hands_off)
+ return -1;
+
+ /* allocate soft structure to hang off the vertex. Link the new soft
+ * structure to the pcibr_list linked list
+ */
+ pcibr_soft = kmalloc(sizeof (*(pcibr_soft)), GFP_KERNEL);
+ if ( !pcibr_soft )
+ return -ENOMEM;
+
+ self = kmalloc(sizeof (*(self)), GFP_KERNEL);
+ if ( !self ) {
+ kfree(pcibr_soft);
+ return -ENOMEM;
+ }
+ memset(pcibr_soft, 0, sizeof (*(pcibr_soft)));
+ memset(self, 0, sizeof (*(self)));
+
+ self->bl_soft = pcibr_soft;
+ self->bl_vhdl = pcibr_vhdl;
+ self->bl_next = pcibr_list;
+ pcibr_list = self;
+
+ if (ret_softp)
+ *ret_softp = pcibr_soft;
+
+ pcibr_soft_set(pcibr_vhdl, pcibr_soft);
+
+ s = dev_to_name(pcibr_vhdl, devnm, MAXDEVNAME);
+ pcibr_soft->bs_name = kmalloc(strlen(s) + 1, GFP_KERNEL);
+ if (!pcibr_soft->bs_name)
+ return -ENOMEM;
+
+ strcpy(pcibr_soft->bs_name, s);
+
+ pcibr_soft->bs_conn = xconn_vhdl;
+ pcibr_soft->bs_vhdl = pcibr_vhdl;
+ pcibr_soft->bs_base = (void *)bridge;
+ pcibr_soft->bs_rev_num = XWIDGET_PART_REV_NUM(id);
+ pcibr_soft->bs_intr_bits = (pcibr_intr_bits_f *)pcibr_intr_bits;
+ pcibr_soft->bsi_err_intr = 0;
+ pcibr_soft->bs_min_slot = 0;
+ pcibr_soft->bs_max_slot = 3;
+ pcibr_soft->bs_busnum = busnum;
+ pcibr_soft->bs_bridge_type = PCIBR_BRIDGETYPE_PIC;
+ pcibr_soft->bs_int_ate_size = PIC_INTERNAL_ATES;
+ /* Make sure this is called after setting the bs_base and bs_bridge_type */
+ pcibr_soft->bs_bridge_mode = (pcireg_speed_get(pcibr_soft) << 1) |
+ pcireg_mode_get(pcibr_soft);
+
+ info = xwidget_info_get(xconn_vhdl);
+ pcibr_soft->bs_xid = xwidget_info_id_get(info);
+ pcibr_soft->bs_master = xwidget_info_master_get(info);
+ pcibr_soft->bs_mxid = xwidget_info_masterid_get(info);
+
+ strcpy(pcibr_soft->bs_asic_name, "PIC");
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, pcibr_vhdl,
+ "pic_attach2: pcibr_soft=0x%lx, mode=0x%x\n",
+ pcibr_soft, pcibr_soft->bs_bridge_mode));
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, pcibr_vhdl,
+ "pic_attach2: %s ASIC: rev %s (code=0x%x)\n",
+ pcibr_soft->bs_asic_name,
+ (IS_PIC_PART_REV_A(pcibr_soft->bs_rev_num)) ? "A" :
+ (IS_PIC_PART_REV_B(pcibr_soft->bs_rev_num)) ? "B" :
+ (IS_PIC_PART_REV_C(pcibr_soft->bs_rev_num)) ? "C" :
+ "unknown", pcibr_soft->bs_rev_num));
+
+ /* PV854845: Must clear write request buffer to avoid parity errors */
+ for (i=0; i < PIC_WR_REQ_BUFSIZE; i++) {
+ ((pic_t *)bridge)->p_wr_req_lower[i] = 0;
+ ((pic_t *)bridge)->p_wr_req_upper[i] = 0;
+ ((pic_t *)bridge)->p_wr_req_parity[i] = 0;
+ }
+
+ pcibr_soft->bs_nasid = NASID_GET(bridge);
+
+ pcibr_soft->bs_bricktype = iobrick_type_get_nasid(pcibr_soft->bs_nasid);
+ if (pcibr_soft->bs_bricktype < 0)
+ printk(KERN_WARNING "%s: bricktype was unknown by L1 (ret val = 0x%x)\n",
+ pcibr_soft->bs_name, pcibr_soft->bs_bricktype);
+
+ pcibr_soft->bs_moduleid = iomoduleid_get(pcibr_soft->bs_nasid);
+
+ if (pcibr_soft->bs_bricktype > 0) {
+ switch (pcibr_soft->bs_bricktype) {
+ case MODULE_PXBRICK:
+ case MODULE_IXBRICK:
+ case MODULE_OPUSBRICK:
+ pcibr_soft->bs_first_slot = 0;
+ pcibr_soft->bs_last_slot = 1;
+ pcibr_soft->bs_last_reset = 1;
+
+ /* Bus 1 of the IXBrick has an IO9, so there are 4 devices, not 2 */
+ if ((pcibr_widget_to_bus(pcibr_vhdl) == 1)
+ && isIO9(pcibr_soft->bs_nasid)) {
+ pcibr_soft->bs_last_slot = 3;
+ pcibr_soft->bs_last_reset = 3;
+ }
+ break;
+
+ case MODULE_CGBRICK:
+ pcibr_soft->bs_first_slot = 0;
+ pcibr_soft->bs_last_slot = 0;
+ pcibr_soft->bs_last_reset = 0;
+ break;
+
+ default:
+ printk(KERN_WARNING "%s: Unknown bricktype: 0x%x\n",
+ pcibr_soft->bs_name, pcibr_soft->bs_bricktype);
+ break;
+ }
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_ATTACH, pcibr_vhdl,
+ "pic_attach2: bricktype=%d, brickbus=%d, "
+ "slots %d-%d\n", pcibr_soft->bs_bricktype,
+ pcibr_widget_to_bus(pcibr_vhdl),
+ pcibr_soft->bs_first_slot, pcibr_soft->bs_last_slot));
+ }
+
+ /*
+ * Initialize bridge and bus locks
+ */
+ spin_lock_init(&pcibr_soft->bs_lock);
+
+ /*
+ * If we have one, process the hints structure.
+ */
+ if (pcibr_hints) {
+ unsigned rrb_fixed;
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_HINTS, pcibr_vhdl,
+ "pic_attach2: pcibr_hints=0x%lx\n", pcibr_hints));
+
+ rrb_fixed = pcibr_hints->ph_rrb_fixed;
+
+ pcibr_soft->bs_rrb_fixed = rrb_fixed;
+
+ if (pcibr_hints->ph_intr_bits)
+ pcibr_soft->bs_intr_bits = pcibr_hints->ph_intr_bits;
+
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ int hslot = pcibr_hints->ph_host_slot[slot] - 1;
+
+ if (hslot < 0) {
+ pcibr_soft->bs_slot[slot].host_slot = slot;
+ } else {
+ pcibr_soft->bs_slot[slot].has_host = 1;
+ pcibr_soft->bs_slot[slot].host_slot = hslot;
+ }
+ }
+ }
+
+ /*
+ * Set-up initial values for state fields
+ */
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ pcibr_soft->bs_slot[slot].bss_devio.bssd_space = PCIIO_SPACE_NONE;
+ pcibr_soft->bs_slot[slot].bss_devio.bssd_ref_cnt = 0;
+ pcibr_soft->bs_slot[slot].bss_d64_base = PCIBR_D64_BASE_UNSET;
+ pcibr_soft->bs_slot[slot].bss_d32_base = PCIBR_D32_BASE_UNSET;
+ pcibr_soft->bs_rrb_valid_dflt[slot][VCHAN0] = -1;
+ }
+
+ for (ibit = 0; ibit < 8; ++ibit) {
+ pcibr_soft->bs_intr[ibit].bsi_xtalk_intr = 0;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_soft = pcibr_soft;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_list = NULL;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_ibit = ibit;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_hdlrcnt = 0;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_shared = 0;
+ pcibr_soft->bs_intr[ibit].bsi_pcibr_intr_wrap.iw_connected = 0;
+ }
+
+
+ /*
+ * connect up our error handler. PIC has 2 busses (thus resulting in 2
+ * pcibr_soft structs under 1 widget), so only register a xwidget error
+ * handler for PIC's bus0. NOTE: for PIC pcibr_error_handler_wrapper()
+ * is a wrapper routine we register that will call the real error handler
+ * pcibr_error_handler() with the correct pcibr_soft struct.
+ */
+ if (busnum == 0) {
+ xwidget_error_register(xconn_vhdl,
+ pcibr_error_handler_wrapper, pcibr_soft);
+ }
+
+ /*
+ * Clear all pending interrupts. Assume all interrupts are from slot 3
+ * until otherwise set up.
+ */
+ pcireg_intr_reset_set(pcibr_soft, PIC_IRR_ALL_CLR);
+ pcireg_intr_device_set(pcibr_soft, 0x006db6db);
+
+ /* Setup the mapping register used for direct mapping */
+ pcibr_directmap_init(pcibr_soft);
+
+ /*
+ * Initialize the PIC's control register.
+ */
+ pic_ctrl_reg = pcireg_control_get(pcibr_soft);
+
+ /* Bridge's Requester ID: bus = busnum, dev = 0, func = 0 */
+ pic_ctrl_reg &= ~PIC_CTRL_BUS_NUM_MASK;
+ pic_ctrl_reg |= PIC_CTRL_BUS_NUM(busnum);
+ pic_ctrl_reg &= ~PIC_CTRL_DEV_NUM_MASK;
+ pic_ctrl_reg &= ~PIC_CTRL_FUN_NUM_MASK;
+
+ pic_ctrl_reg &= ~PIC_CTRL_NO_SNOOP;
+ pic_ctrl_reg &= ~PIC_CTRL_RELAX_ORDER;
+
+ /* enable parity checking on the PIC's internal RAM */
+ pic_ctrl_reg |= PIC_CTRL_PAR_EN_RESP;
+ pic_ctrl_reg |= PIC_CTRL_PAR_EN_ATE;
+
+ /* PIC BRINGUP WAR (PV# 862253): don't enable write request parity */
+ if (!PCIBR_WAR_ENABLED(PV862253, pcibr_soft)) {
+ pic_ctrl_reg |= PIC_CTRL_PAR_EN_REQ;
+ }
+
+ pic_ctrl_reg |= PIC_CTRL_PAGE_SIZE;
+
+ pcireg_control_set(pcibr_soft, pic_ctrl_reg);
+
+ /* Initialize internal mapping entries (ie. the ATEs) */
+ for (entry = 0; entry < pcibr_soft->bs_int_ate_size; entry++)
+ pcireg_int_ate_set(pcibr_soft, entry, 0);
+
+ pcibr_soft->bs_int_ate_resource.start = 0;
+ pcibr_soft->bs_int_ate_resource.end = pcibr_soft->bs_int_ate_size - 1;
+
+ /* Set up the PIC's error interrupt handler. */
+ xtalk_intr = xtalk_intr_alloc(xconn_vhdl, (device_desc_t)0, pcibr_vhdl);
+
+ ASSERT(xtalk_intr != NULL);
+
+ irq = ((hub_intr_t)xtalk_intr)->i_bit;
+ cpu = ((hub_intr_t)xtalk_intr)->i_cpuid;
+
+ intr_unreserve_level(cpu, irq);
+ ((hub_intr_t)xtalk_intr)->i_bit = SGI_PCIBR_ERROR;
+ xtalk_intr->xi_vector = SGI_PCIBR_ERROR;
+
+ pcibr_soft->bsi_err_intr = xtalk_intr;
+
+ /*
+ * On IP35 with XBridge, we do some extra checks in pcibr_setwidint
+ * in order to work around some addressing limitations. In order
+ * for that firewall to work properly, we need to make sure we
+ * start from a known clean state.
+ */
+ pcibr_clearwidint(pcibr_soft);
+
+ xtalk_intr_connect(xtalk_intr,
+ (intr_func_t) pcibr_error_intr_handler,
+ (intr_arg_t) pcibr_soft,
+ (xtalk_intr_setfunc_t) pcibr_setwidint,
+ (void *) pcibr_soft);
+
+ request_irq(SGI_PCIBR_ERROR, (void *)pcibr_error_intr_handler, SA_SHIRQ,
+ "PCIBR error", (intr_arg_t) pcibr_soft);
+
+ PCIBR_DEBUG_ALWAYS((PCIBR_DEBUG_INTR_ALLOC, pcibr_vhdl,
+ "pcibr_setwidint: target_id=0x%lx, int_addr=0x%lx\n",
+ pcireg_intr_dst_target_id_get(pcibr_soft),
+ pcireg_intr_dst_addr_get(pcibr_soft)));
+
+ /* now we can start handling error interrupts */
+ int_enable = pcireg_intr_enable_get(pcibr_soft);
+ int_enable |= PIC_ISR_ERRORS;
+
+ /* PIC BRINGUP WAR (PV# 856864 & 856865): allow the tnums that are
+ * locked out to be freed up sooner (by timing out) so that the
+ * read tnums are never completely used up.
+ */
+ if (PCIBR_WAR_ENABLED(PV856864, pcibr_soft)) {
+ int_enable &= ~PIC_ISR_PCIX_REQ_TOUT;
+ int_enable &= ~PIC_ISR_XREAD_REQ_TIMEOUT;
+
+ pcireg_req_timeout_set(pcibr_soft, 0x750);
+ }
+
+ pcireg_intr_enable_set(pcibr_soft, int_enable);
+ pcireg_intr_mode_set(pcibr_soft, 0); /* don't send 'clear interrupt' packets */
+ pcireg_tflush_get(pcibr_soft); /* wait until Bridge PIO complete */
+
+ /*
+ * PIC BRINGUP WAR (PV# 856866, 859504, 861476, 861478): Don't use
+ * RRB0, RRB8, RRB1, and RRB9. Assign them to DEVICE[2|3]--VCHAN3
+ * so they are not used. This works since there is currently no
+ * API to enable VCHAN3.
+ */
+ if (PCIBR_WAR_ENABLED(PV856866, pcibr_soft)) {
+ pcireg_rrb_bit_set(pcibr_soft, 0, 0x000f000f); /* even rrb reg */
+ pcireg_rrb_bit_set(pcibr_soft, 1, 0x000f000f); /* odd rrb reg */
+ }
+
+ /* PIC only supports 64-bit direct mapping in PCI-X mode. Since
+ * all PCI-X devices that initiate memory transactions must be
+ * capable of generating 64-bit addresses, we force 64-bit DMAs.
+ */
+ pcibr_soft->bs_dma_flags = 0;
+ if (IS_PCIX(pcibr_soft)) {
+ pcibr_soft->bs_dma_flags |= PCIIO_DMA_A64;
+ }
+
+ {
+
+ iopaddr_t prom_base_addr = pcibr_soft->bs_xid << 24;
+ int prom_base_size = 0x1000000;
+ int status;
+ struct resource *res;
+
+ /* Allocate resource maps based on bus page size; for I/O and memory
+ * space, free all pages except those in the base area and in the
+ * range set by the PROM.
+ *
+ * PROM creates BAR addresses in this format: 0x0ws00000 where w is
+ * the widget number and s is the device register offset for the slot.
+ */
+
+ /* Setup the Bus's PCI IO Root Resource. */
+ pcibr_soft->bs_io_win_root_resource.start = PCIBR_BUS_IO_BASE;
+ pcibr_soft->bs_io_win_root_resource.end = 0xffffffff;
+ res = (struct resource *) kmalloc( sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ panic("PCIBR:Unable to allocate resource structure\n");
+
+ /* Block off the range used by PROM. */
+ res->start = prom_base_addr;
+ res->end = prom_base_addr + (prom_base_size - 1);
+ status = request_resource(&pcibr_soft->bs_io_win_root_resource, res);
+ if (status)
+ panic("PCIBR:Unable to request_resource()\n");
+
+ /* Setup the Small Window Root Resource */
+ pcibr_soft->bs_swin_root_resource.start = PAGE_SIZE;
+ pcibr_soft->bs_swin_root_resource.end = 0x000FFFFF;
+
+ /* Setup the Bus's PCI Memory Root Resource */
+ pcibr_soft->bs_mem_win_root_resource.start = 0x200000;
+ pcibr_soft->bs_mem_win_root_resource.end = 0xffffffff;
+ res = (struct resource *) kmalloc( sizeof(struct resource), GFP_KERNEL);
+ if (!res)
+ panic("PCIBR:Unable to allocate resource structure\n");
+
+ /* Block off the range used by PROM. */
+ res->start = prom_base_addr;
+ res->end = prom_base_addr + (prom_base_size - 1);
+ status = request_resource(&pcibr_soft->bs_mem_win_root_resource, res);
+ if (status)
+ panic("PCIBR:Unable to request_resource()\n");
+
+ }
+
+
+ /* build "no-slot" connection point */
+ pcibr_info = pcibr_device_info_new(pcibr_soft, PCIIO_SLOT_NONE,
+ PCIIO_FUNC_NONE, PCIIO_VENDOR_ID_NONE, PCIIO_DEVICE_ID_NONE);
+ noslot_conn = pciio_device_info_register(pcibr_vhdl, &pcibr_info->f_c);
+
+ /* Store no slot connection point info for tearing it down during detach. */
+ pcibr_soft->bs_noslot_conn = noslot_conn;
+ pcibr_soft->bs_noslot_info = pcibr_info;
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Find out what is out there */
+ (void)pcibr_slot_info_init(pcibr_vhdl, slot);
+ }
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Set up the address space for this slot in the PCI land */
+ (void)pcibr_slot_addr_space_init(pcibr_vhdl, slot);
+ }
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Setup the device register */
+ (void)pcibr_slot_device_init(pcibr_vhdl, slot);
+ }
+
+ if (IS_PCIX(pcibr_soft)) {
+ pcibr_soft->bs_pcix_rbar_inuse = 0;
+ pcibr_soft->bs_pcix_rbar_avail = NUM_RBAR;
+ pcibr_soft->bs_pcix_rbar_percent_allowed =
+ pcibr_pcix_rbars_calc(pcibr_soft);
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Setup the PCI-X Read Buffer Attribute Registers (RBARs) */
+ (void)pcibr_slot_pcix_rbar_init(pcibr_soft, slot);
+ }
+ }
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Setup host/guest relations */
+ (void)pcibr_slot_guest_info_init(pcibr_vhdl, slot);
+ }
+
+ /* Handle initial RRB management */
+ pcibr_initial_rrb(pcibr_vhdl,
+ pcibr_soft->bs_first_slot, pcibr_soft->bs_last_slot);
+
+ /* Before any drivers get called that may want to re-allocate RRB's,
+ * let's get some special cases pre-allocated. Drivers may override
+ * these pre-allocations, but by doing pre-allocations now we're
+ * assured not to step all over what the driver intended.
+ */
+ if (pcibr_soft->bs_bricktype > 0) {
+ switch (pcibr_soft->bs_bricktype) {
+ case MODULE_PXBRICK:
+ case MODULE_IXBRICK:
+ case MODULE_OPUSBRICK:
+ /*
+ * If there is an IO9 on bus 1, allocate RRBs to all the IO9 devices
+ */
+ if ((pcibr_widget_to_bus(pcibr_vhdl) == 1) &&
+ (pcibr_soft->bs_slot[0].bss_vendor_id == 0x10A9) &&
+ (pcibr_soft->bs_slot[0].bss_device_id == 0x100A)) {
+ pcibr_rrb_alloc_init(pcibr_soft, 0, VCHAN0, 4);
+ pcibr_rrb_alloc_init(pcibr_soft, 1, VCHAN0, 4);
+ pcibr_rrb_alloc_init(pcibr_soft, 2, VCHAN0, 4);
+ pcibr_rrb_alloc_init(pcibr_soft, 3, VCHAN0, 4);
+ } else {
+ pcibr_rrb_alloc_init(pcibr_soft, 0, VCHAN0, 4);
+ pcibr_rrb_alloc_init(pcibr_soft, 1, VCHAN0, 4);
+ }
+ break;
+
+ case MODULE_CGBRICK:
+ pcibr_rrb_alloc_init(pcibr_soft, 0, VCHAN0, 8);
+ break;
+ } /* switch */
+ }
+
+
+ for (slot = pcibr_soft->bs_min_slot;
+ slot < PCIBR_NUM_SLOTS(pcibr_soft); ++slot) {
+ /* Call the device attach */
+ (void)pcibr_slot_call_device_attach(pcibr_vhdl, slot, 0);
+ }
+
+ pciio_device_attach(noslot_conn, 0);
+
+ return 0;
+}
+
+
+/*
+ * pci provider functions
+ *
+ * Most of these live in pcibr.c; if any PIC-specific versions are
+ * ever needed, this table is the place to hook them in.
+ */
+pciio_provider_t pci_pic_provider =
+{
+ PCIIO_ASIC_TYPE_PIC,
+
+ (pciio_piomap_alloc_f *) pcibr_piomap_alloc,
+ (pciio_piomap_free_f *) pcibr_piomap_free,
+ (pciio_piomap_addr_f *) pcibr_piomap_addr,
+ (pciio_piomap_done_f *) pcibr_piomap_done,
+ (pciio_piotrans_addr_f *) pcibr_piotrans_addr,
+ (pciio_piospace_alloc_f *) pcibr_piospace_alloc,
+ (pciio_piospace_free_f *) pcibr_piospace_free,
+
+ (pciio_dmamap_alloc_f *) pcibr_dmamap_alloc,
+ (pciio_dmamap_free_f *) pcibr_dmamap_free,
+ (pciio_dmamap_addr_f *) pcibr_dmamap_addr,
+ (pciio_dmamap_done_f *) pcibr_dmamap_done,
+ (pciio_dmatrans_addr_f *) pcibr_dmatrans_addr,
+ (pciio_dmamap_drain_f *) pcibr_dmamap_drain,
+ (pciio_dmaaddr_drain_f *) pcibr_dmaaddr_drain,
+
+ (pciio_intr_alloc_f *) pcibr_intr_alloc,
+ (pciio_intr_free_f *) pcibr_intr_free,
+ (pciio_intr_connect_f *) pcibr_intr_connect,
+ (pciio_intr_disconnect_f *) pcibr_intr_disconnect,
+ (pciio_intr_cpu_get_f *) pcibr_intr_cpu_get,
+
+ (pciio_provider_startup_f *) pcibr_provider_startup,
+ (pciio_provider_shutdown_f *) pcibr_provider_shutdown,
+ (pciio_reset_f *) pcibr_reset,
+ (pciio_endian_set_f *) pcibr_endian_set,
+ (pciio_config_get_f *) pcibr_config_get,
+ (pciio_config_set_f *) pcibr_config_set,
+
+ (pciio_error_extract_f *) pcibr_error_extract,
+
+ (pciio_driver_reg_callback_f *) pcibr_driver_reg_callback,
+ (pciio_driver_unreg_callback_f *) pcibr_driver_unreg_callback,
+ (pciio_device_unregister_f *) pcibr_device_unregister,
+};
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/seq_file.h>
+#include <linux/sched.h>
+#include <asm/smp.h>
+#include <asm/irq.h>
+#include <asm/hw_irq.h>
+#include <asm/system.h>
+#include <asm/sn/sgi.h>
+#include <asm/uaccess.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/sn2/shub_mmr.h>
+#include <asm/sn/sn2/shub_mmr_t.h>
+#include <asm/sal.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/sndrv.h>
+#include <asm/sn/sn2/shubio.h>
+
+#define SHUB_NUM_ECF_REGISTERS 8
+
+static uint32_t shub_perf_counts[SHUB_NUM_ECF_REGISTERS];
+
+static shubreg_t shub_perf_counts_regs[SHUB_NUM_ECF_REGISTERS] = {
+ SH_PERFORMANCE_COUNTER0,
+ SH_PERFORMANCE_COUNTER1,
+ SH_PERFORMANCE_COUNTER2,
+ SH_PERFORMANCE_COUNTER3,
+ SH_PERFORMANCE_COUNTER4,
+ SH_PERFORMANCE_COUNTER5,
+ SH_PERFORMANCE_COUNTER6,
+ SH_PERFORMANCE_COUNTER7
+};
+
+static inline void
+shub_mmr_write(cnodeid_t cnode, shubreg_t reg, uint64_t val)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+ volatile uint64_t *addr = (uint64_t *)(GLOBAL_MMR_ADDR(nasid, reg));
+
+ *addr = val;
+ __ia64_mf_a();
+}
+
+static inline void
+shub_mmr_write_iospace(cnodeid_t cnode, shubreg_t reg, uint64_t val)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+
+ REMOTE_HUB_S(nasid, reg, val);
+}
+
+static inline void
+shub_mmr_write32(cnodeid_t cnode, shubreg_t reg, uint32_t val)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+ volatile uint32_t *addr = (uint32_t *)(GLOBAL_MMR_ADDR(nasid, reg));
+
+ *addr = val;
+ __ia64_mf_a();
+}
+
+static inline uint64_t
+shub_mmr_read(cnodeid_t cnode, shubreg_t reg)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+ volatile uint64_t val;
+
+ val = *(uint64_t *)(GLOBAL_MMR_ADDR(nasid, reg));
+ __ia64_mf_a();
+
+ return val;
+}
+
+static inline uint64_t
+shub_mmr_read_iospace(cnodeid_t cnode, shubreg_t reg)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+
+ return REMOTE_HUB_L(nasid, reg);
+}
+
+static inline uint32_t
+shub_mmr_read32(cnodeid_t cnode, shubreg_t reg)
+{
+ int nasid = cnodeid_to_nasid(cnode);
+ volatile uint32_t val;
+
+ val = *(uint32_t *)(GLOBAL_MMR_ADDR(nasid, reg));
+ __ia64_mf_a();
+
+ return val;
+}
+
+static int
+reset_shub_stats(cnodeid_t cnode)
+{
+ int i;
+
+ for (i=0; i < SHUB_NUM_ECF_REGISTERS; i++) {
+ shub_perf_counts[i] = 0;
+ shub_mmr_write32(cnode, shub_perf_counts_regs[i], 0);
+ }
+ return 0;
+}
+
+static int
+configure_shub_stats(cnodeid_t cnode, unsigned long arg)
+{
+ uint64_t *p = (uint64_t *)arg;
+ uint64_t i;
+ uint64_t regcnt;
+ uint64_t regval[2];
+
+ if (copy_from_user((void *)&regcnt, p, sizeof(regcnt)))
+ return -EFAULT;
+
+ for (p++, i=0; i < regcnt; i++, p += 2) {
+ if (copy_from_user((void *)regval, (void *)p, sizeof(regval)))
+ return -EFAULT;
+ if (regval[0] & 0x7) {
+ printk(KERN_ERR "configure_shub_stats: unaligned address 0x%016lx\n", regval[0]);
+ return -EINVAL;
+ }
+ shub_mmr_write(cnode, (shubreg_t)regval[0], regval[1]);
+ }
+ return 0;
+}
+
+static int
+capture_shub_stats(cnodeid_t cnode, uint32_t *counts)
+{
+ int i;
+
+ for (i=0; i < SHUB_NUM_ECF_REGISTERS; i++) {
+ counts[i] = shub_mmr_read32(cnode, shub_perf_counts_regs[i]);
+ }
+ return 0;
+}
+
+static int
+shubstats_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ cnodeid_t cnode;
+ uint64_t longarg;
+ uint64_t intarg;
+ uint64_t regval[2];
+ int nasid;
+
+ cnode = (cnodeid_t)(u64)file->f_dentry->d_fsdata;
+ if (cnode < 0 || cnode >= numnodes)
+ return -ENODEV;
+
+ switch (cmd) {
+ case SNDRV_SHUB_CONFIGURE:
+ return configure_shub_stats(cnode, arg);
+
+ case SNDRV_SHUB_RESETSTATS:
+ reset_shub_stats(cnode);
+ break;
+
+ case SNDRV_SHUB_INFOSIZE:
+ longarg = sizeof(shub_perf_counts);
+ if (copy_to_user((void *)arg, &longarg, sizeof(longarg))) {
+ return -EFAULT;
+ }
+ break;
+
+ case SNDRV_SHUB_GETSTATS:
+ capture_shub_stats(cnode, shub_perf_counts);
+ if (copy_to_user((void *)arg, shub_perf_counts,
+ sizeof(shub_perf_counts))) {
+ return -EFAULT;
+ }
+ break;
+
+ case SNDRV_SHUB_GETNASID:
+ nasid = cnodeid_to_nasid(cnode);
+ if (copy_to_user((void *)arg, &nasid,
+ sizeof(nasid))) {
+ return -EFAULT;
+ }
+ break;
+
+ case SNDRV_SHUB_GETMMR32:
+ intarg = shub_mmr_read32(cnode, arg);
+ if (copy_to_user((void *)arg, &intarg,
+ sizeof(intarg))) {
+ return -EFAULT;
+ }
+ break;
+
+ case SNDRV_SHUB_GETMMR64:
+ case SNDRV_SHUB_GETMMR64_IO:
+ if (cmd == SNDRV_SHUB_GETMMR64)
+ longarg = shub_mmr_read(cnode, arg);
+ else
+ longarg = shub_mmr_read_iospace(cnode, arg);
+ if (copy_to_user((void *)arg, &longarg, sizeof(longarg)))
+ return -EFAULT;
+ break;
+
+ case SNDRV_SHUB_PUTMMR64:
+ case SNDRV_SHUB_PUTMMR64_IO:
+ if (copy_from_user((void *)regval, (void *)arg, sizeof(regval)))
+ return -EFAULT;
+ if (regval[0] & 0x7) {
+ printk(KERN_ERR "shubstats_ioctl: unaligned address 0x%016lx\n", regval[0]);
+ return -EINVAL;
+ }
+ if (cmd == SNDRV_SHUB_PUTMMR64)
+ shub_mmr_write(cnode, (shubreg_t)regval[0], regval[1]);
+ else
+ shub_mmr_write_iospace(cnode, (shubreg_t)regval[0], regval[1]);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+struct file_operations shub_mon_fops = {
+ .ioctl = shubstats_ioctl,
+};
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/sn/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/driver.h>
+#include <asm/param.h>
+#include <asm/sn/pio.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/io.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/xtalk/xtalkaddrs.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn2/shub_mmr.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+
+/* ARGSUSED */
+void
+hub_intr_init(vertex_hdl_t hubv)
+{
+}
+
+xwidgetnum_t
+hub_widget_id(nasid_t nasid)
+{
+ if (!(nasid & 1)) {
+ hubii_wcr_t ii_wcr; /* the control status register */
+ ii_wcr.wcr_reg_value = REMOTE_HUB_L(nasid,IIO_WCR);
+ return ii_wcr.wcr_fields_s.wcr_widget_id;
+ } else {
+ /* ICE does not have widget id. */
+ return(-1);
+ }
+}
+
+static hub_intr_t
+do_hub_intr_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ vertex_hdl_t owner_dev,
+ int uncond_nothread)
+{
+ cpuid_t cpu;
+ int vector;
+ hub_intr_t intr_hdl;
+ cnodeid_t cnode;
+ int cpuphys, slice;
+ int nasid;
+ iopaddr_t xtalk_addr;
+ struct xtalk_intr_s *xtalk_info;
+ xwidget_info_t xwidget_info;
+
+ cpu = intr_heuristic(dev, -1, &vector);
+ if (cpu == CPU_NONE) {
+ printk("Unable to allocate interrupt for 0x%p\n", (void *)owner_dev);
+ return(0);
+ }
+
+ cpuphys = cpu_physical_id(cpu);
+ slice = cpu_physical_id_to_slice(cpuphys);
+ nasid = cpu_physical_id_to_nasid(cpuphys);
+ cnode = cpuid_to_cnodeid(cpu);
+
+ if (slice) {
+ xtalk_addr = SH_II_INT1 | ((unsigned long)nasid << 36) | (1UL << 47);
+ } else {
+ xtalk_addr = SH_II_INT0 | ((unsigned long)nasid << 36) | (1UL << 47);
+ }
+
+ intr_hdl = kmalloc(sizeof(struct hub_intr_s), GFP_KERNEL);
+ ASSERT_ALWAYS(intr_hdl);
+ memset(intr_hdl, 0, sizeof(struct hub_intr_s));
+
+ xtalk_info = &intr_hdl->i_xtalk_info;
+ xtalk_info->xi_dev = dev;
+ xtalk_info->xi_vector = vector;
+ xtalk_info->xi_addr = xtalk_addr;
+
+ xwidget_info = xwidget_info_get(dev);
+ if (xwidget_info) {
+ xtalk_info->xi_target = xwidget_info_masterid_get(xwidget_info);
+ }
+
+ intr_hdl->i_cpuid = cpu;
+ intr_hdl->i_bit = vector;
+ intr_hdl->i_flags |= HUB_INTR_IS_ALLOCED;
+
+ return intr_hdl;
+}
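The Crosstalk interrupt address built inside do_hub_intr_alloc() is a plain bit composition: a per-slice interrupt register base, the target nasid shifted into bits 36 and up, and bit 47 set. A minimal user-space sketch of that composition, with `EX_SH_II_INT0`/`EX_SH_II_INT1` as illustrative stand-ins for the real SHub MMR values in `shub_mmr.h`:

```c
#include <stdint.h>

/* Hypothetical register bases standing in for SH_II_INT0/SH_II_INT1;
 * the real values live in asm/sn/sn2/shub_mmr.h. */
#define EX_SH_II_INT0 0x0000000110000380UL
#define EX_SH_II_INT1 0x0000000110000400UL

/* Compose the Crosstalk address an interrupt write is aimed at:
 * the per-slice interrupt register, the target nasid in bits 36..,
 * and bit 47 set, mirroring the slice check in do_hub_intr_alloc(). */
static uint64_t ex_intr_xtalk_addr(int nasid, int slice)
{
	uint64_t base = slice ? EX_SH_II_INT1 : EX_SH_II_INT0;

	return base | ((uint64_t)nasid << 36) | (1UL << 47);
}
```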
+
+hub_intr_t
+hub_intr_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ vertex_hdl_t owner_dev)
+{
+ return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 0));
+}
+
+hub_intr_t
+hub_intr_alloc_nothd(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ vertex_hdl_t owner_dev)
+{
+ return(do_hub_intr_alloc(dev, dev_desc, owner_dev, 1));
+}
+
+void
+hub_intr_free(hub_intr_t intr_hdl)
+{
+ cpuid_t cpu = intr_hdl->i_cpuid;
+ int vector = intr_hdl->i_bit;
+ xtalk_intr_t xtalk_info;
+
+ if (intr_hdl->i_flags & HUB_INTR_IS_CONNECTED) {
+ xtalk_info = &intr_hdl->i_xtalk_info;
+ xtalk_info->xi_dev = 0;
+ xtalk_info->xi_vector = 0;
+ xtalk_info->xi_addr = 0;
+ hub_intr_disconnect(intr_hdl);
+ }
+
+ if (intr_hdl->i_flags & HUB_INTR_IS_ALLOCED) {
+ kfree(intr_hdl);
+ }
+ intr_unreserve_level(cpu, vector);
+}
+
+int
+hub_intr_connect(hub_intr_t intr_hdl,
+ intr_func_t intr_func, /* xtalk intr handler */
+ void *intr_arg, /* arg to intr handler */
+ xtalk_intr_setfunc_t setfunc,
+ void *setfunc_arg)
+{
+ int rv;
+ cpuid_t cpu = intr_hdl->i_cpuid;
+ int vector = intr_hdl->i_bit;
+
+ ASSERT(intr_hdl->i_flags & HUB_INTR_IS_ALLOCED);
+
+ rv = intr_connect_level(cpu, vector);
+ if (rv < 0)
+ return rv;
+
+ intr_hdl->i_xtalk_info.xi_setfunc = setfunc;
+ intr_hdl->i_xtalk_info.xi_sfarg = setfunc_arg;
+
+ if (setfunc) {
+ (*setfunc)((xtalk_intr_t)intr_hdl);
+ }
+
+ intr_hdl->i_flags |= HUB_INTR_IS_CONNECTED;
+
+ return 0;
+}
+
+/*
+ * Disassociate handler with the specified interrupt.
+ */
+void
+hub_intr_disconnect(hub_intr_t intr_hdl)
+{
+ /*REFERENCED*/
+ int rv;
+ cpuid_t cpu = intr_hdl->i_cpuid;
+ int bit = intr_hdl->i_bit;
+ xtalk_intr_setfunc_t setfunc;
+
+ setfunc = intr_hdl->i_xtalk_info.xi_setfunc;
+
+ /* TBD: send disconnected interrupts somewhere harmless */
+ if (setfunc) (*setfunc)((xtalk_intr_t)intr_hdl);
+
+ rv = intr_disconnect_level(cpu, bit);
+ ASSERT(rv == 0);
+ intr_hdl->i_flags &= ~HUB_INTR_IS_CONNECTED;
+}
+
+/*
+ * Redirect an interrupt to another cpu.
+ */
+
+void
+sn_shub_redirect_intr(pcibr_intr_t intr, unsigned long cpu)
+{
+ unsigned long bit;
+ int cpuphys, slice;
+ nasid_t nasid;
+ unsigned long xtalk_addr;
+ int irq;
+ int i;
+ int old_cpu;
+ int new_cpu;
+
+ cpuphys = cpu_physical_id(cpu);
+ slice = cpu_physical_id_to_slice(cpuphys);
+ nasid = cpu_physical_id_to_nasid(cpuphys);
+
+ for (i = CPUS_PER_NODE - 1; i >= 0; i--) {
+ new_cpu = nasid_slice_to_cpuid(nasid, i);
+ if (new_cpu == NR_CPUS) {
+ continue;
+ }
+
+ if (!cpu_online(new_cpu)) {
+ continue;
+ }
+ break;
+ }
+
+ if (enable_shub_wars_1_1() && slice != i) {
+ printk("smp_affinity WARNING: SHUB 1.1 present: cannot target cpu %d, targeting cpu %d instead.\n",(int)cpu, new_cpu);
+ cpu = new_cpu;
+ slice = i;
+ }
+
+ if (slice) {
+ xtalk_addr = SH_II_INT1 | ((unsigned long)nasid << 36) | (1UL << 47);
+ } else {
+ xtalk_addr = SH_II_INT0 | ((unsigned long)nasid << 36) | (1UL << 47);
+ }
+
+ for (bit = 0; bit < 8; bit++) {
+ if (intr->bi_ibits & (1 << bit) ) {
+ /* Disable interrupts. */
+ pcireg_intr_enable_bit_clr(intr->bi_soft, bit);
+ /* Reset Host address (Interrupt destination) */
+ pcireg_intr_addr_addr_set(intr->bi_soft, bit, xtalk_addr);
+ /* Enable interrupt */
+ pcireg_intr_enable_bit_set(intr->bi_soft, bit);
+ /* Force an interrupt, just in case. */
+ pcireg_force_intr_set(intr->bi_soft, bit);
+ }
+ }
+ irq = intr->bi_irq;
+ old_cpu = intr->bi_cpu;
+ if (pdacpu(cpu)->sn_first_irq == 0 || pdacpu(cpu)->sn_first_irq > irq) {
+ pdacpu(cpu)->sn_first_irq = irq;
+ }
+ if (pdacpu(cpu)->sn_last_irq < irq) {
+ pdacpu(cpu)->sn_last_irq = irq;
+ }
+ pdacpu(old_cpu)->sn_num_irqs--;
+ pdacpu(cpu)->sn_num_irqs++;
+ intr->bi_cpu = (int)cpu;
+}
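The per-bit loop in sn_shub_redirect_intr() follows a fixed retarget sequence: disable the interrupt bit, rewrite its destination address, re-enable it, then force one interrupt in case an event was lost while disabled. A sketch of that sequence over a packed state word (bit 0 = enabled, bit 1 = forced, upper bits = destination address); all names here are illustrative, not the real pcireg_* API:

```c
#include <stdint.h>

/* State packed for easy checking: bit 0 = enabled, bit 1 = force
 * pending, bits 2.. = destination address. */
static uint64_t ex_retarget_bit(uint64_t state, uint64_t new_addr)
{
	state &= ~1UL;                            /* pcireg_intr_enable_bit_clr() */
	state = (state & 3UL) | (new_addr << 2);  /* pcireg_intr_addr_addr_set() */
	state |= 1UL;                             /* pcireg_intr_enable_bit_set() */
	state |= 2UL;                             /* pcireg_force_intr_set() */
	return state;
}
```

The ordering matters: the destination is only rewritten while the bit is disabled, and the final force covers any interrupt dropped during the window.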
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000,2002-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/irq.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/smp.h>
+#include <asm/delay.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/ioerror_handling.h>
+#include <asm/sn/ioerror.h>
+#include <asm/sn/sn2/shubio.h>
+#include <asm/sn/sn2/shub_mmr.h>
+#include <asm/sn/bte.h>
+
+extern void hubni_eint_init(cnodeid_t cnode);
+extern void hubii_eint_init(cnodeid_t cnode);
+extern irqreturn_t hubii_eint_handler (int irq, void *arg, struct pt_regs *ep);
+int hubiio_crb_error_handler(vertex_hdl_t hub_v, hubinfo_t hinfo);
+int hubiio_prb_error_handler(vertex_hdl_t hub_v, hubinfo_t hinfo);
+extern void bte_crb_error_handler(vertex_hdl_t hub_v, int btenum, int crbnum, ioerror_t *ioe, int bteop);
+void print_crb_fields(int crb_num, ii_icrb0_a_u_t icrba,
+ ii_icrb0_b_u_t icrbb, ii_icrb0_c_u_t icrbc,
+ ii_icrb0_d_u_t icrbd, ii_icrb0_e_u_t icrbe);
+
+extern int maxcpus;
+extern error_return_code_t error_state_set(vertex_hdl_t v,error_state_t new_state);
+
+#define HUB_ERROR_PERIOD (120 * HZ) /* 2 minutes */
+
+void
+hub_error_clear(nasid_t nasid)
+{
+ int i;
+
+ /*
+ * Make sure spurious write response errors are cleared
+ * (values are from hub_set_prb())
+ */
+ for (i = 0; i <= HUB_WIDGET_ID_MAX - HUB_WIDGET_ID_MIN + 1; i++) {
+ iprb_t prb;
+
+ prb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB_0 + (i * sizeof(hubreg_t)));
+
+ /* Clear out some fields */
+ prb.iprb_ovflow = 1;
+ prb.iprb_bnakctr = 0;
+ prb.iprb_anakctr = 0;
+
+ prb.iprb_xtalkctr = 3; /* approx. PIO credits for the widget */
+
+ REMOTE_HUB_S(nasid, IIO_IOPRB_0 + (i * sizeof(hubreg_t)), prb.iprb_regval);
+ }
+
+ REMOTE_HUB_S(nasid, IIO_IECLR, -1);
+
+}
+
+
+/*
+ * Function : hub_error_init
+ * Purpose : initialize the error handling requirements for a given hub.
+ * Parameters : cnode, the compact nodeid.
+ * Assumptions  : Called only once per hub, either by a local cpu or by a
+ *                remote cpu when this hub is headless (cpuless).

+ * Returns : None
+ */
+
+void
+hub_error_init(cnodeid_t cnode)
+{
+ nasid_t nasid;
+
+ nasid = cnodeid_to_nasid(cnode);
+ hub_error_clear(nasid);
+
+
+ /*
+ * Now setup the hub ii error interrupt handler.
+ */
+
+ hubii_eint_init(cnode);
+
+ return;
+}
+
+/*
+ * Function : hubii_eint_init
+ * Parameters : cnode
+ * Purpose : to initialize the hub iio error interrupt.
+ * Assumptions : Called once per hub, by the cpu which will ultimately
+ * handle this interrupt.
+ * Returns : None.
+ */
+
+void
+hubii_eint_init(cnodeid_t cnode)
+{
+ int bit, rv;
+ ii_iidsr_u_t hubio_eint;
+ hubinfo_t hinfo;
+ cpuid_t intr_cpu;
+ vertex_hdl_t hub_v;
+ int bit_pos_to_irq(int bit);
+ ii_ilcsr_u_t ilcsr;
+
+
+ hub_v = (vertex_hdl_t)cnodeid_to_vertex(cnode);
+ ASSERT_ALWAYS(hub_v);
+ hubinfo_get(hub_v, &hinfo);
+
+ ASSERT(hinfo);
+ ASSERT(hinfo->h_cnodeid == cnode);
+
+ ilcsr.ii_ilcsr_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ILCSR);
+ if ((ilcsr.ii_ilcsr_fld_s.i_llp_stat & 0x2) == 0) {
+ /*
+ * HUB II link is not up. Disable LLP. Clear old errors.
+ * Enable interrupts to handle BTE errors.
+ */
+ ilcsr.ii_ilcsr_fld_s.i_llp_en = 0;
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_ILCSR, ilcsr.ii_ilcsr_regval);
+ }
+
+ /* Select a possible interrupt target where there is a free interrupt
+ * bit and also reserve the interrupt bit for this IO error interrupt
+ */
+ intr_cpu = intr_heuristic(hub_v, SGI_II_ERROR, &bit);
+ if (intr_cpu == CPU_NONE) {
+		printk("hubii_eint_init: intr_heuristic failed, cnode %d\n", cnode);
+ return;
+ }
+
+ rv = intr_connect_level(intr_cpu, SGI_II_ERROR);
+ request_irq(SGI_II_ERROR, hubii_eint_handler, SA_SHIRQ, "SN_hub_error", (void *)hub_v);
+ irq_descp(bit)->status |= SN2_IRQ_PER_HUB;
+ ASSERT_ALWAYS(rv >= 0);
+ hubio_eint.ii_iidsr_regval = 0;
+ hubio_eint.ii_iidsr_fld_s.i_enable = 1;
+ hubio_eint.ii_iidsr_fld_s.i_level = bit;/* Take the least significant bits*/
+ hubio_eint.ii_iidsr_fld_s.i_node = cnodeid_to_nasid(cnode);
+ hubio_eint.ii_iidsr_fld_s.i_pi_id = cpuid_to_subnode(intr_cpu);
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, hubio_eint.ii_iidsr_regval);
+
+}
+
+
+/*ARGSUSED*/
+irqreturn_t
+hubii_eint_handler (int irq, void *arg, struct pt_regs *ep)
+{
+ vertex_hdl_t hub_v;
+ hubinfo_t hinfo;
+ ii_wstat_u_t wstat;
+ hubreg_t idsr;
+
+
+	/* Two levels of casting avoid a compiler warning. */
+ hub_v = (vertex_hdl_t)(long)(arg);
+ ASSERT(hub_v);
+
+ hubinfo_get(hub_v, &hinfo);
+
+ idsr = REMOTE_HUB_L(hinfo->h_nasid, IIO_ICMR);
+#if 0
+ if (idsr & 0x1) {
+		/* ICMR bit is set: we are getting into the "Spurious Interrupts" condition. */
+ printk("Cnode %d II has seen the ICMR condition\n", hinfo->h_cnodeid);
+ printk("***** Please file PV with the above messages *****\n");
+ /* panic("We have to panic to prevent further unknown states ..\n"); */
+ }
+#endif
+
+ /*
+ * Identify the reason for error.
+ */
+ wstat.ii_wstat_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_WSTAT);
+
+ if (wstat.ii_wstat_fld_s.w_crazy) {
+ char *reason;
+ /*
+ * We can do a couple of things here.
+ * Look at the fields TX_MX_RTY/XT_TAIL_TO/XT_CRD_TO to check
+ * which of these caused the CRAZY bit to be set.
+ * You may be able to check if the Link is up really.
+ */
+ if (wstat.ii_wstat_fld_s.w_tx_mx_rty)
+ reason = "Micro Packet Retry Timeout";
+ else if (wstat.ii_wstat_fld_s.w_xt_tail_to)
+ reason = "Crosstalk Tail Timeout";
+ else if (wstat.ii_wstat_fld_s.w_xt_crd_to)
+ reason = "Crosstalk Credit Timeout";
+ else {
+ hubreg_t hubii_imem;
+ /*
+ * Check if widget 0 has been marked as shutdown, or
+ * if BTE 0/1 has been marked.
+ */
+ hubii_imem = REMOTE_HUB_L(hinfo->h_nasid, IIO_IMEM);
+ if (hubii_imem & IIO_IMEM_W0ESD)
+ reason = "Hub Widget 0 has been Shutdown";
+ else if (hubii_imem & IIO_IMEM_B0ESD)
+ reason = "BTE 0 has been shutdown";
+ else if (hubii_imem & IIO_IMEM_B1ESD)
+ reason = "BTE 1 has been shutdown";
+ else reason = "Unknown";
+
+ }
+ /*
+ * Only print the II_ECRAZY message if there is an attached xbow.
+ */
+ if (NODEPDA(hinfo->h_cnodeid)->xbow_vhdl != 0) {
+ printk("Hub %d, cnode %d to Xtalk Link failed (II_ECRAZY) Reason: %s",
+ hinfo->h_nasid, hinfo->h_cnodeid, reason);
+ }
+ }
+
+
+ /*
+ * Before processing any interrupt related information, clear all
+ * error indication and reenable interrupts. This will prevent
+ * lost interrupts due to the interrupt handler scanning past a PRB/CRB
+ * which has not errored yet and then the PRB/CRB goes into error.
+ * Note, PRB errors are cleared individually.
+ */
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_IECLR, 0xff0000);
+ idsr = REMOTE_HUB_L(hinfo->h_nasid, IIO_IIDSR) & ~IIO_IIDSR_SENT_MASK;
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_IIDSR, idsr);
+
+
+ /*
+ * It's a toss as to which one among PRB/CRB to check first.
+ * Current decision is based on the severity of the errors.
+ * IO CRB errors tend to be more severe than PRB errors.
+ *
+ * It is possible for BTE errors to have been handled already, so we
+ * may not see any errors handled here.
+ */
+ (void)hubiio_crb_error_handler(hub_v, hinfo);
+ (void)hubiio_prb_error_handler(hub_v, hinfo);
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * Free the hub CRB "crbnum" which encountered an error.
+ * Assumption is, error handling was successfully done,
+ * and we now want to return the CRB back to Hub for normal usage.
+ *
+ * In order to free the CRB, all that's needed is to de-allocate it
+ *
+ * Assumption:
+ * No other processor is mucking around with the hub control register.
+ * So, upper layer has to single thread this.
+ */
+void
+hubiio_crb_free(hubinfo_t hinfo, int crbnum)
+{
+ ii_icrb0_b_u_t icrbb;
+
+ /*
+ * The hardware does NOT clear the mark bit, so it must get cleared
+ * here to be sure the error is not processed twice.
+ */
+ icrbb.ii_icrb0_b_regval = REMOTE_HUB_L(hinfo->h_nasid, IIO_ICRB_B(crbnum));
+ icrbb.b_mark = 0;
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_ICRB_B(crbnum), icrbb.ii_icrb0_b_regval);
+
+ /*
+ * Deallocate the register.
+ */
+
+ REMOTE_HUB_S(hinfo->h_nasid, IIO_ICDR, (IIO_ICDR_PND | crbnum));
+
+ /*
+ * Wait till hub indicates it's done.
+ */
+ while (REMOTE_HUB_L(hinfo->h_nasid, IIO_ICDR) & IIO_ICDR_PND)
+ udelay(1);
+
+}
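hubiio_crb_free() uses a common hardware handshake: write the pending bit plus the CRB number to the deallocation register, then spin (with udelay) until hardware clears the pending bit. A self-contained simulation of that pattern, with `EX_ICDR_PND` as an illustrative stand-in for `IIO_ICDR_PND` and a fake register in place of the REMOTE_HUB accessors:

```c
#define EX_ICDR_PND (1u << 4)       /* stand-in for IIO_ICDR_PND */

static unsigned int ex_icdr;        /* fake deallocation register */
static int ex_hw_cycles;            /* fake hardware: reads until it finishes */

static unsigned int ex_read_icdr(void)
{
	if (ex_hw_cycles > 0 && --ex_hw_cycles == 0)
		ex_icdr &= ~EX_ICDR_PND;         /* hardware completes */
	return ex_icdr;
}

/* Returns how many extra polls were needed after the initial write. */
static int ex_crb_dealloc(int crbnum, int hw_cycles)
{
	int spins = 0;

	ex_icdr = EX_ICDR_PND | (unsigned int)crbnum;
	ex_hw_cycles = hw_cycles;
	while (ex_read_icdr() & EX_ICDR_PND)     /* real code would udelay(1) */
		spins++;
	return spins;
}
```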
+
+
+/*
+ * Array of error names that get logged in CRBs
+ */
+char *hubiio_crb_errors[] = {
+ "Directory Error",
+ "CRB Poison Error",
+ "I/O Write Error",
+ "I/O Access Error",
+ "I/O Partial Write Error",
+ "I/O Partial Read Error",
+ "I/O Timeout Error",
+ "Xtalk Error Packet"
+};
+
+void
+print_crb_fields(int crb_num, ii_icrb0_a_u_t icrba,
+ ii_icrb0_b_u_t icrbb, ii_icrb0_c_u_t icrbc,
+ ii_icrb0_d_u_t icrbd, ii_icrb0_e_u_t icrbe)
+{
+	printk("CRB %d regA\n\t"
+	       "a_iow 0x%x\n\t"
+	       "a_valid 0x%x\n\t"
+	       "a_addr 0x%lx\n\t"
+	       "a_tnum 0x%x\n\t"
+	       "a_sidn 0x%x\n",
+ crb_num,
+ icrba.a_iow,
+ icrba.a_valid,
+ icrba.a_addr,
+ icrba.a_tnum,
+ icrba.a_sidn);
+ printk("CRB %d regB\n\t"
+ "b_imsgtype 0x%x\n\t"
+ "b_imsg 0x%x\n"
+ "\tb_use_old 0x%x\n\t"
+ "b_initiator 0x%x\n\t"
+ "b_exc 0x%x\n"
+ "\tb_ackcnt 0x%x\n\t"
+ "b_resp 0x%x\n\t"
+ "b_ack 0x%x\n"
+ "\tb_hold 0x%x\n\t"
+ "b_wb 0x%x\n\t"
+ "b_intvn 0x%x\n"
+ "\tb_stall_ib 0x%x\n\t"
+ "b_stall_int 0x%x\n"
+ "\tb_stall_bte_0 0x%x\n\t"
+ "b_stall_bte_1 0x%x\n"
+ "\tb_error 0x%x\n\t"
+ "b_lnetuce 0x%x\n\t"
+ "b_mark 0x%x\n\t"
+ "b_xerr 0x%x\n",
+ crb_num,
+ icrbb.b_imsgtype,
+ icrbb.b_imsg,
+ icrbb.b_use_old,
+ icrbb.b_initiator,
+ icrbb.b_exc,
+ icrbb.b_ackcnt,
+ icrbb.b_resp,
+ icrbb.b_ack,
+ icrbb.b_hold,
+ icrbb.b_wb,
+ icrbb.b_intvn,
+ icrbb.b_stall_ib,
+ icrbb.b_stall_int,
+ icrbb.b_stall_bte_0,
+ icrbb.b_stall_bte_1,
+ icrbb.b_error,
+ icrbb.b_lnetuce,
+ icrbb.b_mark,
+ icrbb.b_xerr);
+ printk("CRB %d regC\n\t"
+ "c_source 0x%x\n\t"
+ "c_xtsize 0x%x\n\t"
+ "c_cohtrans 0x%x\n\t"
+ "c_btenum 0x%x\n\t"
+ "c_gbr 0x%x\n\t"
+ "c_doresp 0x%x\n\t"
+ "c_barrop 0x%x\n\t"
+ "c_suppl 0x%x\n",
+ crb_num,
+ icrbc.c_source,
+ icrbc.c_xtsize,
+ icrbc.c_cohtrans,
+ icrbc.c_btenum,
+ icrbc.c_gbr,
+ icrbc.c_doresp,
+ icrbc.c_barrop,
+ icrbc.c_suppl);
+	printk("CRB %d regD\n\t"
+	       "d_bteaddr 0x%lx\n\t"
+	       "d_bteop 0x%x\n\t"
+	       "d_pripsc 0x%x\n\t"
+	       "d_pricnt 0x%x\n\t"
+	       "d_sleep 0x%x\n",
+ crb_num,
+ icrbd.d_bteaddr,
+ icrbd.d_bteop,
+ icrbd.d_pripsc,
+ icrbd.d_pricnt,
+ icrbd.d_sleep);
+	printk("CRB %d regE\n\t"
+	       "icrbe_timeout 0x%x\n\t"
+	       "icrbe_context 0x%x\n\t"
+	       "icrbe_toutvld 0x%x\n\t"
+	       "icrbe_ctxtvld 0x%x\n",
+ crb_num,
+ icrbe.icrbe_timeout,
+ icrbe.icrbe_context,
+ icrbe.icrbe_toutvld,
+ icrbe.icrbe_ctxtvld);
+}
+
+/*
+ * hubiio_crb_error_handler
+ *
+ * This routine gets invoked when a hub gets an error
+ * interrupt. So, the routine is running in interrupt context
+ * at error interrupt level.
+ * Action:
+ *	It's responsible for identifying ALL the CRBs that are marked
+ *	with error, and processing them.
+ *
+ *	If you find a CRB that's marked with error, map it to the
+ *	reason it caused the error, and invoke the appropriate error handler.
+ *
+ * XXX Be aware of the information in the context register.
+ *
+ * NOTE:
+ * Use REMOTE_HUB_* macro instead of LOCAL_HUB_* so that the interrupt
+ * handler can be run on any node. (not necessarily the node
+ * corresponding to the hub that encountered error).
+ */
+
+int
+hubiio_crb_error_handler(vertex_hdl_t hub_v, hubinfo_t hinfo)
+{
+ cnodeid_t cnode;
+ nasid_t nasid;
+ ii_icrb0_a_u_t icrba; /* II CRB Register A */
+ ii_icrb0_b_u_t icrbb; /* II CRB Register B */
+ ii_icrb0_c_u_t icrbc; /* II CRB Register C */
+ ii_icrb0_d_u_t icrbd; /* II CRB Register D */
+	ii_icrb0_e_u_t	icrbe;	/* II CRB Register E */
+ int i;
+ int num_errors = 0; /* Num of errors handled */
+ ioerror_t ioerror;
+ int rc;
+
+ nasid = hinfo->h_nasid;
+ cnode = nasid_to_cnodeid(nasid);
+
+ /*
+ * XXX - Add locking for any recovery actions
+ */
+ /*
+ * Scan through all CRBs in the Hub, and handle the errors
+ * in any of the CRBs marked.
+ */
+ for (i = 0; i < IIO_NUM_CRBS; i++) {
+ /* Check this crb entry to see if it is in error. */
+ icrbb.ii_icrb0_b_regval = REMOTE_HUB_L(nasid, IIO_ICRB_B(i));
+
+ if (icrbb.b_mark == 0) {
+ continue;
+ }
+
+ icrba.ii_icrb0_a_regval = REMOTE_HUB_L(nasid, IIO_ICRB_A(i));
+
+ IOERROR_INIT(&ioerror);
+
+ /* read other CRB error registers. */
+ icrbc.ii_icrb0_c_regval = REMOTE_HUB_L(nasid, IIO_ICRB_C(i));
+ icrbd.ii_icrb0_d_regval = REMOTE_HUB_L(nasid, IIO_ICRB_D(i));
+ icrbe.ii_icrb0_e_regval = REMOTE_HUB_L(nasid, IIO_ICRB_E(i));
+
+ IOERROR_SETVALUE(&ioerror,errortype,icrbb.b_ecode);
+
+ /* Check if this error is due to BTE operation,
+ * and handle it separately.
+ */
+ if (icrbd.d_bteop ||
+ ((icrbb.b_initiator == IIO_ICRB_INIT_BTE0 ||
+ icrbb.b_initiator == IIO_ICRB_INIT_BTE1) &&
+ (icrbb.b_imsgtype == IIO_ICRB_IMSGT_BTE ||
+ icrbb.b_imsgtype == IIO_ICRB_IMSGT_SN1NET))){
+
+ int bte_num;
+
+ if (icrbd.d_bteop)
+ bte_num = icrbc.c_btenum;
+ else /* b_initiator bit 2 gives BTE number */
+ bte_num = (icrbb.b_initiator & 0x4) >> 2;
+
+ hubiio_crb_free(hinfo, i);
+
+ bte_crb_error_handler(hub_v, bte_num,
+ i, &ioerror,
+ icrbd.d_bteop);
+ num_errors++;
+ continue;
+ }
+
+ /*
+ * XXX
+ * Assuming the only other error that would reach here is
+ * crosstalk errors.
+ * If CRB times out on a message from Xtalk, it changes
+ * the message type to CRB.
+ *
+ * If we get here due to other errors (SN0net/CRB)
+ * what's the action ?
+ */
+
+ /*
+ * Pick out the useful fields in CRB, and
+ * tuck them away into ioerror structure.
+ */
+ IOERROR_SETVALUE(&ioerror,xtalkaddr,icrba.a_addr << IIO_ICRB_ADDR_SHFT);
+ IOERROR_SETVALUE(&ioerror,widgetnum,icrba.a_sidn);
+
+
+ if (icrba.a_iow){
+ /*
+ * XXX We shouldn't really have BRIDGE-specific code
+ * here, but alas....
+ *
+ * The BRIDGE (or XBRIDGE) sets the upper bit of TNUM
+ * to indicate a WRITE operation. It sets the next
+ * bit to indicate an INTERRUPT operation. The bottom
+ * 3 bits of TNUM indicate which device was responsible.
+ */
+ IOERROR_SETVALUE(&ioerror,widgetdev,
+ TNUM_TO_WIDGET_DEV(icrba.a_tnum));
+ /*
+ * The encoding of TNUM (see comments above) is
+ * different for PIC. So we'll save TNUM here and
+ * deal with the differences later when we can
+ * determine if we're using a Bridge or the PIC.
+ *
+ * XXX: We may be able to remove saving the widgetdev
+ * above and just sort it out of TNUM later.
+ */
+ IOERROR_SETVALUE(&ioerror, tnum, icrba.a_tnum);
+
+ }
+ if (icrbb.b_error) {
+ /*
+ * CRB 'i' has some error. Identify the type of error,
+ * and try to handle it.
+ *
+ */
+ switch(icrbb.b_ecode) {
+ case IIO_ICRB_ECODE_PERR:
+ case IIO_ICRB_ECODE_WERR:
+ case IIO_ICRB_ECODE_AERR:
+ case IIO_ICRB_ECODE_PWERR:
+ case IIO_ICRB_ECODE_TOUT:
+ case IIO_ICRB_ECODE_XTERR:
+ printk("Shub II CRB %d: error %s on hub cnodeid: %d",
+ i, hubiio_crb_errors[icrbb.b_ecode], cnode);
+ /*
+			 * Any sort of write error is mostly due to
+			 * bad programming. (Note it's not a timeout.)
+ * So, invoke hub_iio_error_handler with
+ * appropriate information.
+ */
+ IOERROR_SETVALUE(&ioerror,errortype,icrbb.b_ecode);
+
+ /* Go through the error bit lookup phase */
+ if (error_state_set(hub_v, ERROR_STATE_LOOKUP) ==
+ ERROR_RETURN_CODE_CANNOT_SET_STATE)
+ return(IOERROR_UNHANDLED);
+ rc = hub_ioerror_handler(
+ hub_v,
+ DMA_WRITE_ERROR,
+ MODE_DEVERROR,
+ &ioerror);
+ if (rc == IOERROR_HANDLED) {
+ rc = hub_ioerror_handler(
+ hub_v,
+ DMA_WRITE_ERROR,
+ MODE_DEVREENABLE,
+ &ioerror);
+ }else {
+ printk("Unable to handle %s on hub %d",
+ hubiio_crb_errors[icrbb.b_ecode],
+ cnode);
+ /* panic; */
+ }
+ /* Go to Next error */
+ print_crb_fields(i, icrba, icrbb, icrbc,
+ icrbd, icrbe);
+ hubiio_crb_free(hinfo, i);
+ continue;
+ case IIO_ICRB_ECODE_PRERR:
+ case IIO_ICRB_ECODE_DERR:
+ printk("Shub II CRB %d: error %s on hub : %d",
+ i, hubiio_crb_errors[icrbb.b_ecode], cnode);
+ /* panic */
+ default:
+ printk("Shub II CRB error (code : %d) on hub : %d",
+ icrbb.b_ecode, cnode);
+ /* panic */
+ }
+ }
+ /*
+ * Error is not indicated via the errcode field
+ * Check other error indications in this register.
+ */
+ if (icrbb.b_xerr) {
+ printk("Shub II CRB %d: Xtalk Packet with error bit set to hub %d",
+ i, cnode);
+ /* panic */
+ }
+ if (icrbb.b_lnetuce) {
+			printk("Shub II CRB %d: Uncorrectable data error detected on data "
+			       "from NUMAlink to node %d",
+			       i, cnode);
+ /* panic */
+ }
+ print_crb_fields(i, icrba, icrbb, icrbc, icrbd, icrbe);
+
+ if (icrbb.b_error) {
+ /*
+ * CRB 'i' has some error. Identify the type of error,
+ * and try to handle it.
+ */
+ switch(icrbb.b_ecode) {
+ case IIO_ICRB_ECODE_PERR:
+ case IIO_ICRB_ECODE_WERR:
+ case IIO_ICRB_ECODE_AERR:
+ case IIO_ICRB_ECODE_PWERR:
+
+ printk("%s on hub cnodeid: %d",
+ hubiio_crb_errors[icrbb.b_ecode], cnode);
+ /*
+			 * Any sort of write error is mostly due to
+			 * bad programming. (Note it's not a timeout.)
+ * So, invoke hub_iio_error_handler with
+ * appropriate information.
+ */
+ IOERROR_SETVALUE(&ioerror,errortype,icrbb.b_ecode);
+
+ rc = hub_ioerror_handler(
+ hub_v,
+ DMA_WRITE_ERROR,
+ MODE_DEVERROR,
+ &ioerror);
+
+ if (rc == IOERROR_HANDLED) {
+ rc = hub_ioerror_handler(
+ hub_v,
+ DMA_WRITE_ERROR,
+ MODE_DEVREENABLE,
+ &ioerror);
+ ASSERT(rc == IOERROR_HANDLED);
+ }else {
+
+ panic("Unable to handle %s on hub %d",
+ hubiio_crb_errors[icrbb.b_ecode],
+ cnode);
+ /*NOTREACHED*/
+ }
+ /* Go to Next error */
+ hubiio_crb_free(hinfo, i);
+ continue;
+
+ case IIO_ICRB_ECODE_PRERR:
+
+ case IIO_ICRB_ECODE_TOUT:
+ case IIO_ICRB_ECODE_XTERR:
+
+ case IIO_ICRB_ECODE_DERR:
+ panic("Fatal %s on hub : %d",
+ hubiio_crb_errors[icrbb.b_ecode], cnode);
+ /*NOTREACHED*/
+
+ default:
+ panic("Fatal error (code : %d) on hub : %d",
+ icrbb.b_ecode, cnode);
+ /*NOTREACHED*/
+
+ }
+ } /* if (icrbb.b_error) */
+
+ /*
+ * Error is not indicated via the errcode field
+ * Check other error indications in this register.
+ */
+
+ if (icrbb.b_xerr) {
+ panic("Xtalk Packet with error bit set to hub %d",
+ cnode);
+ /*NOTREACHED*/
+ }
+
+ if (icrbb.b_lnetuce) {
+			panic("Uncorrectable data error detected on data "
+			      "from Craylink to node %d",
+			      cnode);
+ /*NOTREACHED*/
+ }
+
+ }
+ return num_errors;
+}
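The BTE-number recovery in hubiio_crb_error_handler() is worth isolating: when the CRB records a BTE op directly, the number sits in the btenum field; otherwise bit 2 of the initiator field distinguishes BTE0 from BTE1 (IIO_ICRB_INIT_BTE0 vs BTE1). A sketch with plain ints standing in for the ii_icrb0_* bitfields:

```c
/* Illustrative reduction of the BTE-number selection logic; the
 * parameter names mirror the CRB bitfields but are not the real types. */
static int ex_crb_bte_num(int d_bteop, int c_btenum, int b_initiator)
{
	if (d_bteop)
		return c_btenum;              /* BTE op: number recorded directly */
	return (b_initiator & 0x4) >> 2;      /* else bit 2 of the initiator */
}
```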
+
+/*
+ * hubii_check_widget_disabled
+ *
+ * Check if PIO access to the specified widget is disabled due
+ * to any II errors that are currently set.
+ *
+ * The specific error bits checked are:
+ * IPRBx register: SPUR_RD (51)
+ * SPUR_WR (50)
+ * RD_TO (49)
+ * ERROR (48)
+ *
+ * WSTAT register: CRAZY (32)
+ */
+
+int
+hubii_check_widget_disabled(nasid_t nasid, int wnum)
+{
+ iprb_t iprb;
+ ii_wstat_u_t wstat;
+
+ iprb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB(wnum));
+ if (iprb.iprb_regval & (IIO_PRB_SPUR_RD | IIO_PRB_SPUR_WR |
+ IIO_PRB_RD_TO | IIO_PRB_ERROR)) {
+#ifdef DEBUG
+ printk(KERN_WARNING "II error, IPRB%x=0x%lx\n", wnum, iprb.iprb_regval);
+#endif
+ return(1);
+ }
+
+ wstat.ii_wstat_regval = REMOTE_HUB_L(nasid, IIO_WSTAT);
+ if (wstat.ii_wstat_regval & IIO_WSTAT_ECRAZY) {
+#ifdef DEBUG
+ printk(KERN_WARNING "II error, WSTAT=0x%lx\n", wstat.ii_wstat_regval);
+#endif
+ return(1);
+ }
+ return(0);
+}
+
+/*ARGSUSED*/
+/*
+ * hubii_prb_handler
+ *	Handle the error reported in the PRB for widget number wnum.
+ * This typically happens on a PIO write error.
+ * There is nothing much we can do in this interrupt context for
+ *	PIO write errors. For example, the QL SCSI controller has the
+ *	habit of flaking out on PIO writes.
+ *	Print a message and try to continue for now.
+ *	Cleanup involves freeing the PRB register.
+ */
+static void
+hubii_prb_handler(vertex_hdl_t hub_v, hubinfo_t hinfo, int wnum)
+{
+ nasid_t nasid;
+
+ nasid = hinfo->h_nasid;
+ /*
+ * Clear error bit by writing to IECLR register.
+ */
+ REMOTE_HUB_S(nasid, IIO_IECLR, (1 << wnum));
+ /*
+ * PIO Write to Widget 'i' got into an error.
+ * Invoke hubiio_error_handler with this information.
+ */
+	printk("Hub nasid %d got a PIO Write error from widget %d, "
+	       "cleaning up and continuing\n", nasid, wnum);
+ /*
+ * XXX
+ * It may be necessary to adjust IO PRB counter
+ * to account for any lost credits.
+ */
+}
+
+int
+hubiio_prb_error_handler(vertex_hdl_t hub_v, hubinfo_t hinfo)
+{
+ int wnum;
+ nasid_t nasid;
+ int num_errors = 0;
+ iprb_t iprb;
+
+ nasid = hinfo->h_nasid;
+ /*
+ * Check if IPRB0 has any error first.
+ */
+ iprb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB(0));
+ if (iprb.iprb_error) {
+ num_errors++;
+ hubii_prb_handler(hub_v, hinfo, 0);
+ }
+ /*
+ * Look through PRBs 8 - F to see if any of them has error bit set.
+ * If true, invoke hub iio error handler for this widget.
+ */
+ for (wnum = HUB_WIDGET_ID_MIN; wnum <= HUB_WIDGET_ID_MAX; wnum++) {
+ iprb.iprb_regval = REMOTE_HUB_L(nasid, IIO_IOPRB(wnum));
+
+ if (!iprb.iprb_error)
+ continue;
+
+ num_errors++;
+ hubii_prb_handler(hub_v, hinfo, wnum);
+ }
+
+ return num_errors;
+}
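Note the scan order in hubiio_prb_error_handler(): IPRB0 is checked first, then only widgets HUB_WIDGET_ID_MIN..MAX (8..0xf); widgets 1..7 have no PRB and are never examined. A testable sketch with the per-widget error bits modeled as a bitmask, and `EX_WIDGET_MIN`/`EX_WIDGET_MAX` as stand-ins for the real limits:

```c
#define EX_WIDGET_MIN 8     /* stand-in for HUB_WIDGET_ID_MIN */
#define EX_WIDGET_MAX 0xf   /* stand-in for HUB_WIDGET_ID_MAX */

/* Counts errors the scan would handle: bit w of error_mask models
 * the iprb_error bit of widget w's PRB. */
static int ex_scan_prbs(unsigned int error_mask)
{
	int wnum, num_errors = 0;

	if (error_mask & 1u)                  /* IPRB0 is checked first */
		num_errors++;
	for (wnum = EX_WIDGET_MIN; wnum <= EX_WIDGET_MAX; wnum++)
		if (error_mask & (1u << wnum))
			num_errors++;
	return num_errors;
}
```

Errors flagged on widgets 1..7 in this model are deliberately ignored, matching the real scan range.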
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000,2002-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/smp.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/pci/pcibr_private.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/ioerror_handling.h>
+#include <asm/sn/ioerror.h>
+#include <asm/sn/sn2/shubio.h>
+
+
+error_state_t error_state_get(vertex_hdl_t v);
+error_return_code_t error_state_set(vertex_hdl_t v,error_state_t new_state);
+
+
+/*
+ * Get the xtalk provider function pointer for the
+ * specified hub.
+ */
+
+/*ARGSUSED*/
+int
+hub_xp_error_handler(
+ vertex_hdl_t hub_v,
+ nasid_t nasid,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioerror)
+{
+ /*REFERENCED*/
+ hubreg_t iio_imem;
+ vertex_hdl_t xswitch;
+ error_state_t e_state;
+ cnodeid_t cnode;
+
+ /*
+ * Before walking down to the next level, check if
+ * the I/O link is up. If it's been disabled by the
+ * hub ii for some reason, we can't even touch the
+ * widget registers.
+ */
+ iio_imem = REMOTE_HUB_L(nasid, IIO_IMEM);
+
+ if (!(iio_imem & (IIO_IMEM_B0ESD|IIO_IMEM_W0ESD))){
+ /*
+		 * IIO_IMEM_B0ESD getting set indicates an II shutdown
+		 * on HUB0 parts. Hopefully that's not true for
+		 * HUB1 parts.
+		 *
+		 * If either one of them is shut down, we can't
+		 * go any further.
+ */
+ return IOERROR_XTALKLEVEL;
+ }
+
+ /* Get the error state of the hub */
+ e_state = error_state_get(hub_v);
+
+ cnode = nasid_to_cnodeid(nasid);
+
+ xswitch = NODEPDA(cnode)->basew_xc;
+
+ /* Set the error state of the crosstalk device to that of
+ * hub.
+ */
+ if (error_state_set(xswitch , e_state) ==
+ ERROR_RETURN_CODE_CANNOT_SET_STATE)
+ return(IOERROR_UNHANDLED);
+
+ /* Clean the error state of the hub if we are in the action handling
+ * phase.
+ */
+ if (e_state == ERROR_STATE_ACTION)
+ (void)error_state_set(hub_v, ERROR_STATE_NONE);
+ /* hand the error off to the switch or the directly
+ * connected crosstalk device.
+ */
+ return xtalk_error_handler(xswitch,
+ error_code, mode, ioerror);
+
+}
+
+/*
+ * Check if the widget in error has been enabled for PIO accesses
+ */
+int
+is_widget_pio_enabled(ioerror_t *ioerror)
+{
+ cnodeid_t src_node;
+ nasid_t src_nasid;
+ hubreg_t ii_iowa;
+ xwidgetnum_t widget;
+ iopaddr_t p;
+
+ /* Get the node where the PIO error occurred */
+ IOERROR_GETVALUE(p,ioerror, srcnode);
+ src_node = p;
+ if (src_node == CNODEID_NONE)
+ return(0);
+
+ /* Get the nasid for the cnode */
+ src_nasid = cnodeid_to_nasid(src_node);
+ if (src_nasid == INVALID_NASID)
+ return(0);
+
+ /* Read the Outbound widget access register for this hub */
+ ii_iowa = REMOTE_HUB_L(src_nasid, IIO_IOWA);
+ IOERROR_GETVALUE(p,ioerror, widgetnum);
+ widget = p;
+
+ /* Check if the PIOs to the widget with PIO error have been
+ * enabled.
+ */
+ if (ii_iowa & IIO_IOWA_WIDGET(widget))
+ return(1);
+
+ return(0);
+}
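The heart of is_widget_pio_enabled() is a single bit test against the outbound widget access register: IIO_IOWA keeps one enable bit per widget. A sketch with `EX_IOWA_WIDGET` as an illustrative stand-in for the `IIO_IOWA_WIDGET(w)` macro:

```c
#include <stdint.h>

#define EX_IOWA_WIDGET(w) (1UL << (w))  /* stand-in for IIO_IOWA_WIDGET(w) */

/* Returns nonzero when PIOs to the given widget are enabled in the
 * (already read) outbound widget access register value. */
static int ex_widget_pio_enabled(uint64_t ii_iowa, int widget)
{
	return (ii_iowa & EX_IOWA_WIDGET(widget)) != 0;
}
```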
+
+/*
+ * Hub IO error handling.
+ *
+ * Gets invoked for different types of errors found at the hub.
+ * Typically this includes situations from bus error or due to
+ * an error interrupt (mostly generated at the hub).
+ */
+int
+hub_ioerror_handler(
+ vertex_hdl_t hub_v,
+ int error_code,
+ int mode,
+ struct io_error_s *ioerror)
+{
+ hubinfo_t hinfo; /* Hub info pointer */
+ nasid_t nasid;
+ int retval = 0;
+ /*REFERENCED*/
+ iopaddr_t p;
+ caddr_t cp;
+
+ hubinfo_get(hub_v, &hinfo);
+
+ if (!hinfo){
+ /* Print an error message and return */
+ goto end;
+ }
+ nasid = hinfo->h_nasid;
+
+ switch(error_code) {
+
+ case PIO_READ_ERROR:
+ /*
+ * Cpu got a bus error while accessing IO space.
+ * hubaddr field in ioerror structure should have
+ * the IO address that caused access error.
+ */
+
+ /*
+ * Identify if the physical address in hub_error_data
+ * corresponds to small/large window, and accordingly,
+ * get the xtalk address.
+ */
+
+ /*
+ * Evaluate the widget number and the widget address that
+ * caused the error. Use 'vaddr' if it's there.
+ * This is typically true either during probing
+ * or a kernel driver getting into trouble.
+ * Otherwise, use paddr to figure out widget details
+ * This is typically true for user mode bus errors while
+ * accessing I/O space.
+ */
+ IOERROR_GETVALUE(cp,ioerror,vaddr);
+ if (cp){
+ /*
+ * If neither in small window nor in large window range,
+ * outright reject it.
+ */
+ if (NODE_SWIN_ADDR(nasid, (paddr_t)cp)){
+ iopaddr_t hubaddr;
+ xwidgetnum_t widgetnum;
+ iopaddr_t xtalkaddr;
+
+ IOERROR_GETVALUE(p,ioerror,hubaddr);
+ hubaddr = p;
+ widgetnum = SWIN_WIDGETNUM(hubaddr);
+ xtalkaddr = SWIN_WIDGETADDR(hubaddr);
+ /*
+ * differentiate local register vs IO space access
+ */
+ IOERROR_SETVALUE(ioerror,widgetnum,widgetnum);
+ IOERROR_SETVALUE(ioerror,xtalkaddr,xtalkaddr);
+
+
+ } else if (NODE_BWIN_ADDR(nasid, (paddr_t)cp)){
+ /*
+ * Address corresponds to large window space.
+ * Convert it to xtalk address.
+ */
+ int bigwin;
+ hub_piomap_t bw_piomap;
+ xtalk_piomap_t xt_pmap = NULL;
+ iopaddr_t hubaddr;
+ xwidgetnum_t widgetnum;
+ iopaddr_t xtalkaddr;
+
+ IOERROR_GETVALUE(p,ioerror,hubaddr);
+ hubaddr = p;
+
+ /*
+ * Have to loop to find the correct xtalk_piomap
+			 * because they're not allocated on a one-to-one
+ * basis to the window number.
+ */
+ for (bigwin=0; bigwin < HUB_NUM_BIG_WINDOW; bigwin++) {
+ bw_piomap = hubinfo_bwin_piomap_get(hinfo,
+ bigwin);
+
+ if (bw_piomap->hpio_bigwin_num ==
+ (BWIN_WINDOWNUM(hubaddr) - 1)) {
+ xt_pmap = hub_piomap_xt_piomap(bw_piomap);
+ break;
+ }
+ }
+
+ ASSERT(xt_pmap);
+
+ widgetnum = xtalk_pio_target_get(xt_pmap);
+ xtalkaddr = xtalk_pio_xtalk_addr_get(xt_pmap) + BWIN_WIDGETADDR(hubaddr);
+
+ IOERROR_SETVALUE(ioerror,widgetnum,widgetnum);
+ IOERROR_SETVALUE(ioerror,xtalkaddr,xtalkaddr);
+
+ /*
+				 * Make sure that widgetnum does not map to the hub
+				 * register widget number, as we never use a
+				 * big window to access hub registers.
+ */
+ ASSERT(widgetnum != HUB_REGISTER_WIDGET);
+ }
+ } else if (IOERROR_FIELDVALID(ioerror,hubaddr)) {
+ iopaddr_t hubaddr;
+ xwidgetnum_t widgetnum;
+ iopaddr_t xtalkaddr;
+
+ IOERROR_GETVALUE(p,ioerror,hubaddr);
+ hubaddr = p;
+ if (BWIN_WINDOWNUM(hubaddr)){
+ int window = BWIN_WINDOWNUM(hubaddr) - 1;
+ hubreg_t itte;
+ itte = (hubreg_t)HUB_L(IIO_ITTE_GET(nasid, window));
+ widgetnum = (itte >> IIO_ITTE_WIDGET_SHIFT) &
+ IIO_ITTE_WIDGET_MASK;
+ xtalkaddr = (((itte >> IIO_ITTE_OFFSET_SHIFT) &
+ IIO_ITTE_OFFSET_MASK) <<
+ BWIN_SIZE_BITS) +
+ BWIN_WIDGETADDR(hubaddr);
+ } else {
+ widgetnum = SWIN_WIDGETNUM(hubaddr);
+ xtalkaddr = SWIN_WIDGETADDR(hubaddr);
+ }
+ IOERROR_SETVALUE(ioerror,widgetnum,widgetnum);
+ IOERROR_SETVALUE(ioerror,xtalkaddr,xtalkaddr);
+ } else {
+ IOERR_PRINTF(printk(
+ "hub_ioerror_handler: Invalid address passed"));
+
+ return IOERROR_INVALIDADDR;
+ }
+
+
+ IOERROR_GETVALUE(p,ioerror,widgetnum);
+ if ((p) == HUB_REGISTER_WIDGET) {
+ /*
+ * Error accessing a hub local register.
+ * This should happen mostly in SABLE mode.
+ */
+ retval = 0;
+ } else {
+ /* Make sure that the outbound widget access for this
+ * widget is enabled.
+ */
+ if (!is_widget_pio_enabled(ioerror)) {
+ return(IOERROR_HANDLED);
+ }
+
+
+ retval = hub_xp_error_handler(
+ hub_v, nasid, error_code, mode, ioerror);
+
+ }
+
+ IOERR_PRINTF(printk(
+ "hub_ioerror_handler:PIO_READ_ERROR return: %d",
+ retval));
+
+ break;
+
+ case PIO_WRITE_ERROR:
+ /*
+ * This hub received an interrupt indicating a widget
+ * attached to this hub got a timeout.
+ * The widgetnum field should be filled in to indicate the
+ * widget that caused the error.
+ *
+ * NOTE: This hub may have nothing to do with this error.
+ * We are here since the widget attached to the xbow
+ * gets its PIOs through this hub.
+ *
+ * There is nothing that can be done at this level.
+ * Just invoke the xtalk error handling mechanism.
+ */
+ IOERROR_GETVALUE(p,ioerror,widgetnum);
+ if ((p) == HUB_REGISTER_WIDGET) {
+ /* error on a hub register access: nothing to hand to the widget layer */
+ } else {
+ /* Make sure that the outbound widget access for this
+ * widget is enabled.
+ */
+
+ if (!is_widget_pio_enabled(ioerror)) {
+ return(IOERROR_HANDLED);
+ }
+
+ retval = hub_xp_error_handler(
+ hub_v, nasid, error_code, mode, ioerror);
+ }
+ break;
+
+ case DMA_READ_ERROR:
+ /*
+ * A DMA read error always generates an interrupt at the
+ * widget level, never at the hub level, so we never
+ * expect to get here.
+ */
+ ASSERT(0);
+ retval = IOERROR_UNHANDLED;
+ break;
+
+ case DMA_WRITE_ERROR:
+ /*
+ * A DMA write error is generated when a write by an I/O
+ * device could not be completed. The problem is that the
+ * device is totally unaware of the failure and would keep
+ * writing to system memory, so the hub sends an error
+ * interrupt on the first error and bitbuckets all further
+ * write transactions.
+ * Coming here indicates that the hub detected one such
+ * error, and we need to handle it.
+ *
+ * Hub interrupt handler would have extracted physaddr,
+ * widgetnum, and widgetdevice from the CRB
+ *
+ * There is nothing special to do here, since gathering
+ * data from CRBs is done elsewhere. Just pass the
+ * error to xtalk layer.
+ */
+ retval = hub_xp_error_handler(hub_v, nasid, error_code, mode,
+ ioerror);
+ break;
+
+ default:
+ ASSERT(0);
+ return IOERROR_BADERRORCODE;
+
+ }
+
+ /*
+ * If error was not handled, we may need to take certain action
+ * based on the error code.
+ * For example, after a PIO_READ_ERROR we may need to release
+ * the PIO Read entry table (entries are sticky after errors).
+ * Similarly for other cases.
+ *
+ * Further Action TBD
+ */
+end:
+ if (retval == IOERROR_HWGRAPH_LOOKUP) {
+ /*
+ * If we get errors very early, we can't traverse
+ * the path using hardware graph.
+ * To handle this situation we would need functions that
+ * don't depend on the hardware graph vertex, which would
+ * break the modularity of the existing code. Instead we
+ * print out the reason for not handling the error and
+ * return. On return, all the info collected is dumped,
+ * which should provide sufficient info to analyse the
+ * error.
+ */
+ printk("Unable to handle IO error: hardware graph not setup\n");
+ }
+
+ return retval;
+}
+
+#define INFO_LBL_ERROR_STATE "error_state"
+
+#define v_error_state_get(v,s) \
+(hwgraph_info_get_LBL(v,INFO_LBL_ERROR_STATE, (arbitrary_info_t *)&s))
+
+#define v_error_state_set(v,s,replace) \
+(replace ? \
+hwgraph_info_replace_LBL(v,INFO_LBL_ERROR_STATE,(arbitrary_info_t)s,0) :\
+hwgraph_info_add_LBL(v,INFO_LBL_ERROR_STATE, (arbitrary_info_t)s))
+
+
+#define v_error_state_clear(v) \
+(hwgraph_info_remove_LBL(v,INFO_LBL_ERROR_STATE,0))
+
+/*
+ * error_state_get
+ * Get the state of the vertex.
+ * Returns ERROR_STATE_NONE on failure
+ * current state otherwise
+ */
+error_state_t
+error_state_get(vertex_hdl_t v)
+{
+ error_state_t s;
+
+ /* Check if we have a valid hwgraph vertex */
+ if ( v == (vertex_hdl_t)0 )
+ return(ERROR_STATE_NONE);
+
+ /* Get the labelled info hanging off the vertex which corresponds
+ * to the state.
+ */
+ if (v_error_state_get(v, s) != GRAPH_SUCCESS) {
+ return(ERROR_STATE_NONE);
+ }
+ return(s);
+}
+
+
+/*
+ * error_state_set
+ * Set the state of the vertex
+ * Returns ERROR_RETURN_CODE_CANNOT_SET_STATE on failure
+ * ERROR_RETURN_CODE_SUCCESS otherwise
+ */
+error_return_code_t
+error_state_set(vertex_hdl_t v,error_state_t new_state)
+{
+ error_state_t old_state;
+ int replace = 1;
+
+ /* Check if we have a valid hwgraph vertex */
+ if ( v == (vertex_hdl_t)0 )
+ return(ERROR_RETURN_CODE_GENERAL_FAILURE);
+
+
+ /* This means that the error state needs to be cleaned */
+ if (new_state == ERROR_STATE_NONE) {
+ /* Make sure that we have an error state */
+ if (v_error_state_get(v,old_state) == GRAPH_SUCCESS)
+ v_error_state_clear(v);
+ return(ERROR_RETURN_CODE_SUCCESS);
+ }
+
+ /* Check if the state information has been set at least once
+ * for this vertex.
+ */
+ if (v_error_state_get(v,old_state) != GRAPH_SUCCESS)
+ replace = 0;
+
+ if (v_error_state_set(v,new_state,replace) != GRAPH_SUCCESS) {
+ return(ERROR_RETURN_CODE_CANNOT_SET_STATE);
+ }
+ return(ERROR_RETURN_CODE_SUCCESS);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/delay.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn2/sn_private.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/simulator.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/pci/pcibr_private.h>
+
+/* #define DEBUG 1 */
+/* #define XBOW_DEBUG 1 */
+
+#define kdebug 0
+
+
+/*
+ * This file supports the Xbow chip. Main functions: initialization,
+ * error handling.
+ */
+
+/*
+ * each vertex corresponding to an xbow chip
+ * has a "fastinfo" pointer pointing at one
+ * of these things.
+ */
+
+struct xbow_soft_s {
+ vertex_hdl_t conn; /* our connection point */
+ vertex_hdl_t vhdl; /* xbow's private vertex */
+ vertex_hdl_t busv; /* the xswitch vertex */
+ xbow_t *base; /* PIO pointer to crossbow chip */
+ char *name; /* hwgraph name */
+
+ xbow_link_status_t xbow_link_status[MAX_XBOW_PORTS];
+ widget_cfg_t *wpio[MAX_XBOW_PORTS]; /* cached PIO pointer */
+
+ /* Bandwidth allocation state. Bandwidth values are for the
+ * destination port since contention happens there.
+ * Implicit mapping from xbow ports (8..f) -> (0..7) array indices.
+ */
+ unsigned long long bw_hiwm[MAX_XBOW_PORTS]; /* hiwater mark values */
+ unsigned long long bw_cur_used[MAX_XBOW_PORTS]; /* bw used currently */
+};
+
+#define xbow_soft_set(v,i) hwgraph_fastinfo_set((v), (arbitrary_info_t)(i))
+#define xbow_soft_get(v) ((struct xbow_soft_s *)hwgraph_fastinfo_get((v)))
+
+/*
+ * Function Table of Contents
+ */
+
+int xbow_attach(vertex_hdl_t);
+
+int xbow_widget_present(xbow_t *, int);
+static int xbow_link_alive(xbow_t *, int);
+vertex_hdl_t xbow_widget_lookup(vertex_hdl_t, int);
+
+void xbow_intr_preset(void *, int, xwidgetnum_t, iopaddr_t, xtalk_intr_vector_t);
+static void xbow_setwidint(xtalk_intr_t);
+
+xswitch_reset_link_f xbow_reset_link;
+
+xswitch_provider_t xbow_provider =
+{
+ xbow_reset_link,
+};
+
+
+static int
+xbow_mmap(struct file * file, struct vm_area_struct * vma)
+{
+ unsigned long phys_addr;
+ int error;
+
+ phys_addr = (unsigned long)file->private_data & ~0xc000000000000000; /* Mask out the Uncache bits */
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ vma->vm_flags |= VM_RESERVED | VM_IO;
+ error = io_remap_page_range(vma, vma->vm_start, phys_addr,
+ vma->vm_end-vma->vm_start,
+ vma->vm_page_prot);
+ return(error);
+}
+
+/*
+ * This is the file operation table for the pcibr driver.
+ * As each of the functions are implemented, put the
+ * appropriate function name below.
+ */
+struct file_operations xbow_fops = {
+ .owner = THIS_MODULE,
+ .mmap = xbow_mmap,
+};
+
+#ifdef XBRIDGE_REGS_SIM
+/* xbow_set_simulated_regs: sets xbow regs as needed
+ * for powering through the boot
+ */
+void
+xbow_set_simulated_regs(xbow_t *xbow, int port)
+{
+ /*
+ * turn on link
+ */
+ xbow->xb_link(port).link_status = (1<<31);
+ /*
+ * and give it a live widget too
+ */
+ xbow->xb_link(port).link_aux_status = XB_AUX_STAT_PRESENT;
+ /*
+ * zero the link control reg
+ */
+ xbow->xb_link(port).link_control = 0x0;
+}
+#endif /* XBRIDGE_REGS_SIM */
+
+/*
+ * xbow_attach: the crosstalk provider has
+ * determined that there is a crossbow widget
+ * present, and has handed us the connection
+ * point for that vertex.
+ *
+ * We not only add our own vertex, but add
+ * some "xtalk switch" data to the switch
+ * vertex (at the connect point's parent) if
+ * it does not have any.
+ */
+
+/*ARGSUSED */
+int
+xbow_attach(vertex_hdl_t conn)
+{
+ /*REFERENCED */
+ vertex_hdl_t vhdl;
+ vertex_hdl_t busv;
+ xbow_t *xbow;
+ struct xbow_soft_s *soft;
+ int port;
+ xswitch_info_t info;
+ xtalk_intr_t intr_hdl;
+ char devnm[MAXDEVNAME], *s;
+ xbowreg_t id;
+ int rev;
+ int i;
+ int xbow_num;
+#if DEBUG && ATTACH_DEBUG
+ char name[MAXDEVNAME];
+#endif
+ static irqreturn_t xbow_errintr_handler(int, void *, struct pt_regs *);
+
+
+#if DEBUG && ATTACH_DEBUG
+ printk("%s: xbow_attach\n", vertex_to_name(conn, name, MAXDEVNAME));
+#endif
+
+ /*
+ * Get a PIO pointer to the base of the crossbow
+ * chip.
+ */
+#ifdef XBRIDGE_REGS_SIM
+ printk("xbow_attach: XBRIDGE_REGS_SIM FIXME: allocating %ld bytes for xbow_s\n", sizeof(xbow_t));
+ xbow = (xbow_t *) kmalloc(sizeof(xbow_t), GFP_KERNEL);
+ if (!xbow)
+ return -ENOMEM;
+ /*
+ * turn on ports e and f like in a real live ibrick
+ */
+ xbow_set_simulated_regs(xbow, 0xe);
+ xbow_set_simulated_regs(xbow, 0xf);
+#else
+ xbow = (xbow_t *) xtalk_piotrans_addr(conn, 0, 0, sizeof(xbow_t), 0);
+#endif /* XBRIDGE_REGS_SIM */
+
+ /*
+ * Locate the "switch" vertex: it is the parent
+ * of our connection point.
+ */
+ busv = hwgraph_connectpt_get(conn);
+#if DEBUG && ATTACH_DEBUG
+ printk("xbow_attach: Bus Vertex 0x%p, conn 0x%p, xbow register 0x%p wid= 0x%x\n", busv, conn, xbow, *(volatile u32 *)xbow);
+#endif
+
+ ASSERT(busv != GRAPH_VERTEX_NONE);
+
+ /*
+ * Create our private vertex, and connect our
+ * driver information to it. This makes it possible
+ * for diagnostic drivers to open the crossbow
+ * vertex for access to registers.
+ */
+
+ /*
+ * Register a xbow driver with hwgraph.
+ * file ops.
+ */
+ vhdl = hwgraph_register(conn, EDGE_LBL_XBOW, 0,
+ 0, 0, 0,
+ S_IFCHR | S_IRUSR | S_IWUSR | S_IRGRP, 0, 0,
+ (struct file_operations *)&xbow_fops, (void *)xbow);
+ if (!vhdl) {
+ printk(KERN_WARNING "xbow_attach: Unable to create char device for xbow conn %p\n",
+ (void *)conn);
+ }
+
+ /*
+ * Allocate the soft state structure and attach
+ * it to the xbow's vertex
+ */
+ soft = kmalloc(sizeof(*soft), GFP_KERNEL);
+ if (!soft)
+ return -ENOMEM;
+ soft->conn = conn;
+ soft->vhdl = vhdl;
+ soft->busv = busv;
+ soft->base = xbow;
+ /* does the universe really need another macro? */
+ /* xbow_soft_set(vhdl, (arbitrary_info_t) soft); */
+ /* hwgraph_fastinfo_set(vhdl, (arbitrary_info_t) soft); */
+
+#define XBOW_NUM_SUFFIX_FORMAT "[xbow# %d]"
+
+ /* Add xbow number as a suffix to the hwgraph name of the xbow.
+ * This is helpful while looking at the error/warning messages.
+ */
+ xbow_num = 0;
+
+ /*
+ * get the name of this xbow vertex and keep the info.
+ * This is needed during errors and interrupts, but as
+ * long as we have it, we can use it elsewhere.
+ */
+ s = dev_to_name(vhdl, devnm, MAXDEVNAME);
+ soft->name = kmalloc(strlen(s) + strlen(XBOW_NUM_SUFFIX_FORMAT) + 1,
+ GFP_KERNEL);
+ if (!soft->name) {
+ kfree(soft);
+ return -ENOMEM;
+ }
+ sprintf(soft->name,"%s"XBOW_NUM_SUFFIX_FORMAT, s,xbow_num);
+
+#ifdef XBRIDGE_REGS_SIM
+ /* my o200/ibrick has id=0x2d002049, but XXBOW_WIDGET_PART_NUM is defined
+ * as 0xd000, so I'm using that for the partnum bitfield.
+ */
+ printk("xbow_attach: XBRIDGE_REGS_SIM FIXME: need xb_wid_id value!!\n");
+ id = 0x2d000049;
+#else
+ id = xbow->xb_wid_id;
+#endif /* XBRIDGE_REGS_SIM */
+ rev = XWIDGET_PART_REV_NUM(id);
+
+#define XBOW_16_BIT_PORT_BW_MAX (800 * 1000 * 1000) /* 800 MB/s */
+
+ /* Set bandwidth hiwatermark and current values */
+ for (i = 0; i < MAX_XBOW_PORTS; i++) {
+ soft->bw_hiwm[i] = XBOW_16_BIT_PORT_BW_MAX; /* for now */
+ soft->bw_cur_used[i] = 0;
+ }
+
+ /*
+ * attach the crossbow error interrupt.
+ */
+ intr_hdl = xtalk_intr_alloc(conn, (device_desc_t)0, vhdl);
+ ASSERT(intr_hdl != NULL);
+
+ {
+ int irq = ((hub_intr_t)intr_hdl)->i_bit;
+ int cpu = ((hub_intr_t)intr_hdl)->i_cpuid;
+
+ intr_unreserve_level(cpu, irq);
+ ((hub_intr_t)intr_hdl)->i_bit = SGI_XBOW_ERROR;
+ }
+
+ xtalk_intr_connect(intr_hdl,
+ (intr_func_t) xbow_errintr_handler,
+ (intr_arg_t) soft,
+ (xtalk_intr_setfunc_t) xbow_setwidint,
+ (void *) xbow);
+
+ request_irq(SGI_XBOW_ERROR, (void *)xbow_errintr_handler, SA_SHIRQ, "XBOW error",
+ (intr_arg_t) soft);
+
+
+ /*
+ * Enable xbow error interrupts
+ */
+ xbow->xb_wid_control = (XB_WID_CTRL_REG_ACC_IE | XB_WID_CTRL_XTALK_IE);
+
+ /*
+ * take a census of the widgets present,
+ * leaving notes at the switch vertex.
+ */
+ info = xswitch_info_new(busv);
+
+ for (port = MAX_PORT_NUM - MAX_XBOW_PORTS;
+ port < MAX_PORT_NUM; ++port) {
+ if (!xbow_link_alive(xbow, port)) {
+#if DEBUG && XBOW_DEBUG
+ printk(KERN_INFO "0x%p link %d is not alive\n",
+ (void *)busv, port);
+#endif
+ continue;
+ }
+ if (!xbow_widget_present(xbow, port)) {
+#if DEBUG && XBOW_DEBUG
+ printk(KERN_INFO "0x%p link %d is alive but no widget is present\n", (void *)busv, port);
+#endif
+ continue;
+ }
+#if DEBUG && XBOW_DEBUG
+ printk(KERN_INFO "0x%p link %d has a widget\n",
+ (void *)busv, port);
+#endif
+
+ xswitch_info_link_is_ok(info, port);
+ /*
+ * Turn some error interrupts on
+ * and turn others off. The PROM has
+ * some things turned on we don't
+ * want to see (bandwidth allocation
+ * errors for instance); so if it
+ * is not listed here, it is not on.
+ */
+ xbow->xb_link(port).link_control =
+ ( (xbow->xb_link(port).link_control
+ /*
+ * Turn off these bits; they are non-fatal,
+ * but we might want to save some statistics
+ * on the frequency of these errors.
+ * XXX FIXME XXX
+ */
+ & ~XB_CTRL_RCV_CNT_OFLOW_IE
+ & ~XB_CTRL_XMT_CNT_OFLOW_IE
+ & ~XB_CTRL_BNDWDTH_ALLOC_IE
+ & ~XB_CTRL_RCV_IE)
+ /*
+ * These are the ones we want to turn on.
+ */
+ | (XB_CTRL_ILLEGAL_DST_IE
+ | XB_CTRL_OALLOC_IBUF_IE
+ | XB_CTRL_XMT_MAX_RTRY_IE
+ | XB_CTRL_MAXREQ_TOUT_IE
+ | XB_CTRL_XMT_RTRY_IE
+ | XB_CTRL_SRC_TOUT_IE) );
+ }
+
+ xswitch_provider_register(busv, &xbow_provider);
+
+ return 0; /* attach successful */
+}
+
+/*
+ * xbow_widget_present: See if a device is present
+ * on the specified port of this crossbow.
+ */
+int
+xbow_widget_present(xbow_t *xbow, int port)
+{
+ if ( IS_RUNNING_ON_SIMULATOR() ) {
+ if ( (port == 14) || (port == 15) ) {
+ return 1;
+ }
+ else {
+ return 0;
+ }
+ }
+ else {
+ /* WAR: port 0xf on PIC is missing present bit */
+ if (XBOW_WAR_ENABLED(PV854827, xbow->xb_wid_id) &&
+ IS_PIC_XBOW(xbow->xb_wid_id) && port==0xf) {
+ return 1;
+ }
+ else if ( IS_PIC_XBOW(xbow->xb_wid_id) && port==0xb ) {
+ /* for opus the present bit doesn't work on port 0xb */
+ return 1;
+ }
+ return xbow->xb_link(port).link_aux_status & XB_AUX_STAT_PRESENT;
+ }
+}
+
+static int
+xbow_link_alive(xbow_t * xbow, int port)
+{
+ xbwX_stat_t xbow_linkstat;
+
+ xbow_linkstat.linkstatus = xbow->xb_link(port).link_status;
+ return (xbow_linkstat.link_alive);
+}
+
+/*
+ * xbow_widget_lookup
+ * Lookup the edges connected to the xbow specified, and
+ * retrieve the handle corresponding to the widgetnum
+ * specified.
+ * If not found, return 0.
+ */
+vertex_hdl_t
+xbow_widget_lookup(vertex_hdl_t vhdl,
+ int widgetnum)
+{
+ xswitch_info_t xswitch_info;
+ vertex_hdl_t conn;
+
+ xswitch_info = xswitch_info_get(vhdl);
+ conn = xswitch_info_vhdl_get(xswitch_info, widgetnum);
+ return conn;
+}
+
+/*
+ * xbow_setwidint: called when xtalk
+ * is establishing or migrating our
+ * interrupt service.
+ */
+static void
+xbow_setwidint(xtalk_intr_t intr)
+{
+ xwidgetnum_t targ = xtalk_intr_target_get(intr);
+ iopaddr_t addr = xtalk_intr_addr_get(intr);
+ xtalk_intr_vector_t vect = xtalk_intr_vector_get(intr);
+ xbow_t *xbow = (xbow_t *) xtalk_intr_sfarg_get(intr);
+
+ xbow_intr_preset((void *) xbow, 0, targ, addr, vect);
+}
+
+/*
+ * xbow_intr_preset: called during mlreset time
+ * if the platform specific code needs to route
+ * an xbow interrupt before the xtalk infrastructure
+ * is available for use.
+ *
+ * Also called from xbow_setwidint, so we don't
+ * replicate the guts of the routine.
+ *
+ * XXX- probably should be renamed xbow_wid_intr_set or
+ * something to reduce confusion.
+ */
+/*ARGSUSED3 */
+void
+xbow_intr_preset(void *which_widget,
+ int which_widget_intr,
+ xwidgetnum_t targ,
+ iopaddr_t addr,
+ xtalk_intr_vector_t vect)
+{
+ xbow_t *xbow = (xbow_t *) which_widget;
+
+ xbow->xb_wid_int_upper = ((0xFF000000 & (vect << 24)) |
+ (0x000F0000 & (targ << 16)) |
+ XTALK_ADDR_TO_UPPER(addr));
+ xbow->xb_wid_int_lower = XTALK_ADDR_TO_LOWER(addr);
+
+}
+
+#define XEM_ADD_STR(s) printk("%s", (s))
+#define XEM_ADD_NVAR(n,v) printk("\t%20s: 0x%llx\n", (n), ((unsigned long long)v))
+#define XEM_ADD_VAR(v) XEM_ADD_NVAR(#v,(v))
+#define XEM_ADD_IOEF(p,n) if (IOERROR_FIELDVALID(ioe,n)) { \
+ IOERROR_GETVALUE(p,ioe,n); \
+ XEM_ADD_NVAR("ioe." #n, p); \
+ }
+
+int
+xbow_xmit_retry_error(struct xbow_soft_s *soft,
+ int port)
+{
+ xswitch_info_t info;
+ vertex_hdl_t vhdl;
+ widget_cfg_t *wid;
+ widgetreg_t id;
+ int part;
+ int mfgr;
+
+ wid = soft->wpio[port - BASE_XBOW_PORT];
+ if (wid == NULL) {
+ /* If we can't track down a PIO
+ * pointer to our widget yet,
+ * leave our caller knowing that
+ * we are interested in this
+ * interrupt if it occurs in
+ * the future.
+ */
+ info = xswitch_info_get(soft->busv);
+ if (!info)
+ return 1;
+ vhdl = xswitch_info_vhdl_get(info, port);
+ if (vhdl == GRAPH_VERTEX_NONE)
+ return 1;
+ wid = (widget_cfg_t *) xtalk_piotrans_addr
+ (vhdl, 0, 0, sizeof *wid, 0);
+ if (!wid)
+ return 1;
+ soft->wpio[port - BASE_XBOW_PORT] = wid;
+ }
+ id = wid->w_id;
+ part = XWIDGET_PART_NUM(id);
+ mfgr = XWIDGET_MFG_NUM(id);
+
+ return 0;
+}
+
+/*
+ * xbow_errintr_handler will be called if the xbow
+ * sends an interrupt request to report an error.
+ */
+static irqreturn_t
+xbow_errintr_handler(int irq, void *arg, struct pt_regs *ep)
+{
+ ioerror_t ioe[1];
+ struct xbow_soft_s *soft = (struct xbow_soft_s *)arg;
+ xbow_t *xbow = soft->base;
+ xbowreg_t wid_control;
+ xbowreg_t wid_stat;
+ xbowreg_t wid_err_cmdword;
+ xbowreg_t wid_err_upper;
+ xbowreg_t wid_err_lower;
+ w_err_cmd_word_u wid_err;
+ unsigned long long wid_err_addr;
+
+ int fatal = 0;
+ int dump_ioe = 0;
+ static int xbow_error_handler(void *, int, ioerror_mode_t, ioerror_t *);
+
+ wid_control = xbow->xb_wid_control;
+ wid_stat = xbow->xb_wid_stat_clr;
+ wid_err_cmdword = xbow->xb_wid_err_cmdword;
+ wid_err_upper = xbow->xb_wid_err_upper;
+ wid_err_lower = xbow->xb_wid_err_lower;
+ xbow->xb_wid_err_cmdword = 0;
+
+ wid_err_addr = wid_err_lower | (((iopaddr_t) wid_err_upper & WIDGET_ERR_UPPER_ADDR_ONLY) << 32);
+
+ if (wid_stat & XB_WID_STAT_LINK_INTR_MASK) {
+ int port;
+
+ wid_err.r = wid_err_cmdword;
+
+ for (port = MAX_PORT_NUM - MAX_XBOW_PORTS;
+ port < MAX_PORT_NUM; port++) {
+ if (wid_stat & XB_WID_STAT_LINK_INTR(port)) {
+ xb_linkregs_t *link = &(xbow->xb_link(port));
+ xbowreg_t link_control = link->link_control;
+ xbowreg_t link_status = link->link_status_clr;
+ xbowreg_t link_aux_status = link->link_aux_status;
+ xbowreg_t link_pend;
+
+ link_pend = link_status & link_control &
+ (XB_STAT_ILLEGAL_DST_ERR
+ | XB_STAT_OALLOC_IBUF_ERR
+ | XB_STAT_RCV_CNT_OFLOW_ERR
+ | XB_STAT_XMT_CNT_OFLOW_ERR
+ | XB_STAT_XMT_MAX_RTRY_ERR
+ | XB_STAT_RCV_ERR
+ | XB_STAT_XMT_RTRY_ERR
+ | XB_STAT_MAXREQ_TOUT_ERR
+ | XB_STAT_SRC_TOUT_ERR
+ );
+
+ if (link_pend & XB_STAT_ILLEGAL_DST_ERR) {
+ if (wid_err.f.sidn == port) {
+ IOERROR_INIT(ioe);
+ IOERROR_SETVALUE(ioe, widgetnum, port);
+ IOERROR_SETVALUE(ioe, xtalkaddr, wid_err_addr);
+ if (IOERROR_HANDLED ==
+ xbow_error_handler(soft,
+ IOECODE_DMA,
+ MODE_DEVERROR,
+ ioe)) {
+ link_pend &= ~XB_STAT_ILLEGAL_DST_ERR;
+ } else {
+ dump_ioe++;
+ }
+ }
+ }
+ /* Xbow/Bridge WAR:
+ * if the bridge signals an LLP Transmitter Retry,
+ * rewrite its control register.
+ * If someone else triggers this interrupt,
+ * ignore (and disable) the interrupt.
+ */
+ if (link_pend & XB_STAT_XMT_RTRY_ERR) {
+ if (!xbow_xmit_retry_error(soft, port)) {
+ link_control &= ~XB_CTRL_XMT_RTRY_IE;
+ link->link_control = link_control;
+ link->link_control; /* stall until written */
+ }
+ link_pend &= ~XB_STAT_XMT_RTRY_ERR;
+ }
+ if (link_pend) {
+ vertex_hdl_t xwidget_vhdl;
+ char *xwidget_name;
+
+ /* Get the widget name corresponding to the current
+ * xbow link.
+ */
+ xwidget_vhdl = xbow_widget_lookup(soft->busv,port);
+ xwidget_name = xwidget_name_get(xwidget_vhdl);
+
+ printk("%s port %X[%s] XIO Bus Error",
+ soft->name, port, xwidget_name);
+ if (link_status & XB_STAT_MULTI_ERR)
+ XEM_ADD_STR("\tMultiple Errors\n");
+ if (link_status & XB_STAT_ILLEGAL_DST_ERR)
+ XEM_ADD_STR("\tInvalid Packet Destination\n");
+ if (link_status & XB_STAT_OALLOC_IBUF_ERR)
+ XEM_ADD_STR("\tInput Overallocation Error\n");
+ if (link_status & XB_STAT_RCV_CNT_OFLOW_ERR)
+ XEM_ADD_STR("\tLLP receive error counter overflow\n");
+ if (link_status & XB_STAT_XMT_CNT_OFLOW_ERR)
+ XEM_ADD_STR("\tLLP transmit retry counter overflow\n");
+ if (link_status & XB_STAT_XMT_MAX_RTRY_ERR)
+ XEM_ADD_STR("\tLLP Max Transmitter Retry\n");
+ if (link_status & XB_STAT_RCV_ERR)
+ XEM_ADD_STR("\tLLP Receiver error\n");
+ if (link_status & XB_STAT_XMT_RTRY_ERR)
+ XEM_ADD_STR("\tLLP Transmitter Retry\n");
+ if (link_status & XB_STAT_MAXREQ_TOUT_ERR)
+ XEM_ADD_STR("\tMaximum Request Timeout\n");
+ if (link_status & XB_STAT_SRC_TOUT_ERR)
+ XEM_ADD_STR("\tSource Timeout Error\n");
+
+ {
+ int other_port;
+
+ for (other_port = 8; other_port < 16; ++other_port) {
+ if (link_aux_status & (1 << other_port)) {
+ /* XXX- need to go to "other_port"
+ * and clean up after the timeout?
+ */
+ XEM_ADD_VAR(other_port);
+ }
+ }
+ }
+
+#if !DEBUG
+ if (kdebug) {
+#endif
+ XEM_ADD_VAR(link_control);
+ XEM_ADD_VAR(link_status);
+ XEM_ADD_VAR(link_aux_status);
+
+#if !DEBUG
+ }
+#endif
+ fatal++;
+ }
+ }
+ }
+ }
+ if (wid_stat & wid_control & XB_WID_STAT_WIDGET0_INTR) {
+ /* we have a "widget zero" problem */
+
+ if (wid_stat & (XB_WID_STAT_MULTI_ERR
+ | XB_WID_STAT_XTALK_ERR
+ | XB_WID_STAT_REG_ACC_ERR)) {
+
+ printk("%s Port 0 XIO Bus Error",
+ soft->name);
+ if (wid_stat & XB_WID_STAT_MULTI_ERR)
+ XEM_ADD_STR("\tMultiple Error\n");
+ if (wid_stat & XB_WID_STAT_XTALK_ERR)
+ XEM_ADD_STR("\tXIO Error\n");
+ if (wid_stat & XB_WID_STAT_REG_ACC_ERR)
+ XEM_ADD_STR("\tRegister Access Error\n");
+
+ fatal++;
+ }
+ }
+ if (fatal) {
+ XEM_ADD_VAR(wid_stat);
+ XEM_ADD_VAR(wid_control);
+ XEM_ADD_VAR(wid_err_cmdword);
+ XEM_ADD_VAR(wid_err_upper);
+ XEM_ADD_VAR(wid_err_lower);
+ XEM_ADD_VAR(wid_err_addr);
+ panic("XIO Bus Error");
+ }
+ return IRQ_HANDLED;
+}
+
+/*
+ * XBOW ERROR Handling routines.
+ * These get invoked as part of walking down the error handling path
+ * from hub/heart towards the I/O device that caused the error.
+ */
+
+/*
+ * xbow_error_handler
+ * XBow error handling dispatch routine.
+ * This is the primary interface the outside world uses to invoke
+ * it in case of an error related to an xbow.
+ * The only functionality in this layer is to identify the widget
+ * handle given the widgetnum; otherwise the xbow does not gather
+ * any error data.
+ */
+static int
+xbow_error_handler(
+ void *einfo,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioerror)
+{
+ int retval = IOERROR_WIDGETLEVEL;
+
+ struct xbow_soft_s *soft = (struct xbow_soft_s *) einfo;
+ int port;
+ vertex_hdl_t conn;
+ vertex_hdl_t busv;
+
+ xbow_t *xbow = soft->base;
+ xbowreg_t wid_stat;
+ xbowreg_t wid_err_cmdword;
+ xbowreg_t wid_err_upper;
+ xbowreg_t wid_err_lower;
+ unsigned long long wid_err_addr;
+
+ xb_linkregs_t *link;
+ xbowreg_t link_control;
+ xbowreg_t link_status;
+ xbowreg_t link_aux_status;
+
+ ASSERT(soft != 0);
+ busv = soft->busv;
+
+#if DEBUG && ERROR_DEBUG
+ printk("%s: xbow_error_handler\n", soft->name);
+#endif
+
+ IOERROR_GETVALUE(port, ioerror, widgetnum);
+
+ if (port == 0) {
+ /* error during access to xbow:
+ * do NOT attempt to access xbow regs.
+ */
+ if (mode == MODE_DEVPROBE)
+ return IOERROR_HANDLED;
+
+ if (error_code & IOECODE_DMA) {
+ printk(KERN_ALERT
+ "DMA error blamed on Crossbow at %s\n"
+ "\tbut Crossbow never initiates DMA!",
+ soft->name);
+ }
+ if (error_code & IOECODE_PIO) {
+ iopaddr_t tmp;
+ IOERROR_GETVALUE(tmp, ioerror, xtalkaddr);
+ printk(KERN_ALERT "PIO Error on XIO Bus %s\n"
+ "\tattempting to access XIO controller\n"
+ "\twith offset 0x%lx",
+ soft->name, tmp);
+ }
+ /* caller will dump contents of ioerror
+ * in DEBUG and kdebug kernels.
+ */
+
+ return retval;
+ }
+ /*
+ * error not on port zero:
+ * safe to read xbow registers.
+ */
+ wid_stat = xbow->xb_wid_stat;
+ wid_err_cmdword = xbow->xb_wid_err_cmdword;
+ wid_err_upper = xbow->xb_wid_err_upper;
+ wid_err_lower = xbow->xb_wid_err_lower;
+
+ wid_err_addr =
+ wid_err_lower
+ | (((iopaddr_t) wid_err_upper
+ & WIDGET_ERR_UPPER_ADDR_ONLY)
+ << 32);
+
+ if ((port < BASE_XBOW_PORT) ||
+ (port >= MAX_PORT_NUM)) {
+
+ if (mode == MODE_DEVPROBE)
+ return IOERROR_HANDLED;
+
+ if (error_code & IOECODE_DMA) {
+ printk(KERN_ALERT
+ "DMA error blamed on XIO port at %s/%d\n"
+ "\tbut Crossbow does not support that port",
+ soft->name, port);
+ }
+ if (error_code & IOECODE_PIO) {
+ iopaddr_t tmp;
+ IOERROR_GETVALUE(tmp, ioerror, xtalkaddr);
+ printk(KERN_ALERT
+ "PIO Error on XIO Bus %s\n"
+ "\tattempting to access XIO port %d\n"
+ "\t(which Crossbow does not support)"
+ "\twith offset 0x%lx",
+ soft->name, port, tmp);
+ }
+#if !DEBUG
+ if (kdebug) {
+#endif
+ XEM_ADD_STR("Raw status values for Crossbow:\n");
+ XEM_ADD_VAR(wid_stat);
+ XEM_ADD_VAR(wid_err_cmdword);
+ XEM_ADD_VAR(wid_err_upper);
+ XEM_ADD_VAR(wid_err_lower);
+ XEM_ADD_VAR(wid_err_addr);
+#if !DEBUG
+ }
+#endif
+
+ /* caller will dump contents of ioerror
+ * in DEBUG and kdebug kernels.
+ */
+
+ return retval;
+ }
+ /* access to valid port:
+ * ok to check port status.
+ */
+
+ link = &(xbow->xb_link(port));
+ link_control = link->link_control;
+ link_status = link->link_status;
+ link_aux_status = link->link_aux_status;
+
+ /* Check that there is something present
+ * in that XIO port.
+ */
+ /* WAR: PIC widget 0xf is missing the presence bit */
+ if (XBOW_WAR_ENABLED(PV854827, xbow->xb_wid_id) &&
+ IS_PIC_XBOW(xbow->xb_wid_id) && (port==0xf))
+ ;
+ else if (IS_PIC_XBOW(xbow->xb_wid_id) && (port==0xb))
+ ; /* WAR: for opus the presence bit is missing on 0xb */
+ else if (!(link_aux_status & XB_AUX_STAT_PRESENT)) {
+ /* nobody connected. */
+ if (mode == MODE_DEVPROBE)
+ return IOERROR_HANDLED;
+
+ if (error_code & IOECODE_DMA) {
+ printk(KERN_ALERT
+ "DMA error blamed on XIO port at %s/%d\n"
+ "\tbut there is no device connected there.",
+ soft->name, port);
+ }
+ if (error_code & IOECODE_PIO) {
+ iopaddr_t tmp;
+ IOERROR_GETVALUE(tmp, ioerror, xtalkaddr);
+ printk(KERN_ALERT
+ "PIO Error on XIO Bus %s\n"
+ "\tattempting to access XIO port %d\n"
+ "\t(which has no device connected)"
+ "\twith offset 0x%lx",
+ soft->name, port, tmp);
+ }
+#if !DEBUG
+ if (kdebug) {
+#endif
+ XEM_ADD_STR("Raw status values for Crossbow:\n");
+ XEM_ADD_VAR(wid_stat);
+ XEM_ADD_VAR(wid_err_cmdword);
+ XEM_ADD_VAR(wid_err_upper);
+ XEM_ADD_VAR(wid_err_lower);
+ XEM_ADD_VAR(wid_err_addr);
+ XEM_ADD_VAR(port);
+ XEM_ADD_VAR(link_control);
+ XEM_ADD_VAR(link_status);
+ XEM_ADD_VAR(link_aux_status);
+#if !DEBUG
+ }
+#endif
+ return retval;
+
+ }
+ /* Check that the link is alive.
+ */
+ if (!(link_status & XB_STAT_LINKALIVE)) {
+ iopaddr_t tmp;
+ /* nobody connected. */
+ if (mode == MODE_DEVPROBE)
+ return IOERROR_HANDLED;
+
+ printk(KERN_ALERT
+ "%s%sError on XIO Bus %s port %d",
+ (error_code & IOECODE_DMA) ? "DMA " : "",
+ (error_code & IOECODE_PIO) ? "PIO " : "",
+ soft->name, port);
+
+ if ((error_code & IOECODE_PIO) &&
+ (IOERROR_FIELDVALID(ioerror, xtalkaddr))) {
+ IOERROR_GETVALUE(tmp, ioerror, xtalkaddr);
+ printk("\tAccess attempted to offset 0x%lx\n", tmp);
+ }
+ if (link_aux_status & XB_AUX_LINKFAIL_RST_BAD)
+ XEM_ADD_STR("\tLink never came out of reset\n");
+ else
+ XEM_ADD_STR("\tLink failed while transferring data\n");
+
+ }
+ /* get the connection point for the widget
+ * involved in this error; if it exists and
+ * is not our connectpoint, cycle back through
+ * xtalk_error_handler to deliver control to
+ * the proper handler (or to report a generic
+ * crosstalk error).
+ *
+ * If the downstream handler won't handle
+ * the problem, we let our upstream caller
+ * deal with it, after (in DEBUG and kdebug
+ * kernels) dumping the xbow state for this
+ * port.
+ */
+ conn = xbow_widget_lookup(busv, port);
+ if ((conn != GRAPH_VERTEX_NONE) &&
+ (conn != soft->conn)) {
+ retval = xtalk_error_handler(conn, error_code, mode, ioerror);
+ if (retval == IOERROR_HANDLED)
+ return IOERROR_HANDLED;
+ }
+ if (mode == MODE_DEVPROBE)
+ return IOERROR_HANDLED;
+
+ if (retval == IOERROR_UNHANDLED) {
+ iopaddr_t tmp;
+ retval = IOERROR_PANIC;
+
+ printk(KERN_ALERT
+ "%s%sError on XIO Bus %s port %d",
+ (error_code & IOECODE_DMA) ? "DMA " : "",
+ (error_code & IOECODE_PIO) ? "PIO " : "",
+ soft->name, port);
+
+ if ((error_code & IOECODE_PIO) &&
+ (IOERROR_FIELDVALID(ioerror, xtalkaddr))) {
+ IOERROR_GETVALUE(tmp, ioerror, xtalkaddr);
+ printk("\tAccess attempted to offset 0x%lx\n", tmp);
+ }
+ }
+
+#if !DEBUG
+ if (kdebug) {
+#endif
+ XEM_ADD_STR("Raw status values for Crossbow:\n");
+ XEM_ADD_VAR(wid_stat);
+ XEM_ADD_VAR(wid_err_cmdword);
+ XEM_ADD_VAR(wid_err_upper);
+ XEM_ADD_VAR(wid_err_lower);
+ XEM_ADD_VAR(wid_err_addr);
+ XEM_ADD_VAR(port);
+ XEM_ADD_VAR(link_control);
+ XEM_ADD_VAR(link_status);
+ XEM_ADD_VAR(link_aux_status);
+#if !DEBUG
+ }
+#endif
+ /* caller will dump raw ioerror data
+ * in DEBUG and kdebug kernels.
+ */
+
+ return retval;
+}
+
+int
+xbow_reset_link(vertex_hdl_t xconn_vhdl)
+{
+ xwidget_info_t widget_info;
+ xwidgetnum_t port;
+ xbow_t *xbow;
+ xbowreg_t ctrl;
+ xbwX_stat_t stat;
+ unsigned long itick;
+ unsigned int dtick;
+ static long ticks_to_wait = HZ / 1000;
+
+ widget_info = xwidget_info_get(xconn_vhdl);
+ port = xwidget_info_id_get(widget_info);
+
+#ifdef XBOW_K1PTR /* defined if we only have one xbow ... */
+ xbow = XBOW_K1PTR;
+#else
+ {
+ vertex_hdl_t xbow_vhdl;
+ struct xbow_soft_s *xbow_soft;
+
+ hwgraph_traverse(xconn_vhdl, ".master/xtalk/0/xbow", &xbow_vhdl);
+ xbow_soft = xbow_soft_get(xbow_vhdl);
+ xbow = xbow_soft->base;
+ }
+#endif
+
+ /*
+ * This requires three PIOs (reset the link, check for the
+ * reset, restore the control register for the link) plus
+ * 10us to wait for the reset. We allow up to 1ms for the
+ * widget to come out of reset before giving up and
+ * returning a failure.
+ */
+ ctrl = xbow->xb_link(port).link_control;
+ xbow->xb_link(port).link_reset = 0;
+ itick = jiffies;
+ while (1) {
+ stat.linkstatus = xbow->xb_link(port).link_status;
+ if (stat.link_alive)
+ break;
+ dtick = jiffies - itick;
+ if (dtick > ticks_to_wait) {
+ return -1; /* never came out of reset */
+ }
+ udelay(2); /* don't beat on link_status */
+ }
+ xbow->xb_link(port).link_control = ctrl;
+ return 0;
+}
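The wait loop above is a bounded-poll idiom: record the starting jiffies, poll the status register, and give up once the tick budget is exceeded. A minimal standalone sketch of the same control flow, using a fake tick counter in place of jiffies (all names here are hypothetical, for illustration only):

```c
#include <assert.h>

/* Fake tick source standing in for jiffies (demo only). */
static unsigned long fake_jiffies;

/* The "device": reports alive after a configurable number of polls. */
static int polls_until_alive;
static int link_alive(void)
{
	return polls_until_alive-- <= 0;
}

/*
 * Bounded poll, same shape as the loop in xbow_reset_link: return 0
 * once the link reports alive, -1 if ticks_to_wait elapse first.
 */
static int wait_for_link(unsigned long ticks_to_wait)
{
	unsigned long itick = fake_jiffies;

	while (1) {
		if (link_alive())
			return 0;
		if (fake_jiffies - itick > ticks_to_wait)
			return -1;	/* never came out of reset */
		fake_jiffies++;		/* stands in for udelay() plus elapsed time */
	}
}
```

The real function additionally saves and restores the port's link_control register around the reset, which this sketch omits.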
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/xtalk/xswitch.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/xtalk/xtalk_private.h>
+
+/*
+ * Implement io channel provider operations. The xtalk* layer provides a
+ * platform-independent interface for io channel devices. This layer
+ * switches among the possible implementations of an io channel adapter.
+ *
+ * On platforms with only one possible xtalk provider, macros can be
+ * set up at the top that cause the table lookups and indirections to
+ * completely disappear.
+ */
+
+char widget_info_fingerprint[] = "widget_info";
+
+/* =====================================================================
+ * Function Table of Contents
+ */
+xtalk_piomap_t xtalk_piomap_alloc(vertex_hdl_t, device_desc_t, iopaddr_t, size_t, size_t, unsigned);
+void xtalk_piomap_free(xtalk_piomap_t);
+caddr_t xtalk_piomap_addr(xtalk_piomap_t, iopaddr_t, size_t);
+void xtalk_piomap_done(xtalk_piomap_t);
+caddr_t xtalk_piotrans_addr(vertex_hdl_t, device_desc_t, iopaddr_t, size_t, unsigned);
+caddr_t xtalk_pio_addr(vertex_hdl_t, device_desc_t, iopaddr_t, size_t, xtalk_piomap_t *, unsigned);
+void xtalk_set_early_piotrans_addr(xtalk_early_piotrans_addr_f *);
+caddr_t xtalk_early_piotrans_addr(xwidget_part_num_t, xwidget_mfg_num_t, int, iopaddr_t, size_t, unsigned);
+static caddr_t null_xtalk_early_piotrans_addr(xwidget_part_num_t, xwidget_mfg_num_t, int, iopaddr_t, size_t, unsigned);
+xtalk_dmamap_t xtalk_dmamap_alloc(vertex_hdl_t, device_desc_t, size_t, unsigned);
+void xtalk_dmamap_free(xtalk_dmamap_t);
+iopaddr_t xtalk_dmamap_addr(xtalk_dmamap_t, paddr_t, size_t);
+void xtalk_dmamap_done(xtalk_dmamap_t);
+iopaddr_t xtalk_dmatrans_addr(vertex_hdl_t, device_desc_t, paddr_t, size_t, unsigned);
+void xtalk_dmamap_drain(xtalk_dmamap_t);
+void xtalk_dmaaddr_drain(vertex_hdl_t, iopaddr_t, size_t);
+xtalk_intr_t xtalk_intr_alloc(vertex_hdl_t, device_desc_t, vertex_hdl_t);
+xtalk_intr_t xtalk_intr_alloc_nothd(vertex_hdl_t, device_desc_t, vertex_hdl_t);
+void xtalk_intr_free(xtalk_intr_t);
+int xtalk_intr_connect(xtalk_intr_t, intr_func_t, intr_arg_t, xtalk_intr_setfunc_t, void *);
+void xtalk_intr_disconnect(xtalk_intr_t);
+vertex_hdl_t xtalk_intr_cpu_get(xtalk_intr_t);
+int xtalk_error_handler(vertex_hdl_t, int, ioerror_mode_t, ioerror_t *);
+void xtalk_provider_startup(vertex_hdl_t);
+void xtalk_provider_shutdown(vertex_hdl_t);
+vertex_hdl_t xtalk_intr_dev_get(xtalk_intr_t);
+xwidgetnum_t xtalk_intr_target_get(xtalk_intr_t);
+xtalk_intr_vector_t xtalk_intr_vector_get(xtalk_intr_t);
+iopaddr_t xtalk_intr_addr_get(struct xtalk_intr_s *);
+void *xtalk_intr_sfarg_get(xtalk_intr_t);
+vertex_hdl_t xtalk_pio_dev_get(xtalk_piomap_t);
+xwidgetnum_t xtalk_pio_target_get(xtalk_piomap_t);
+iopaddr_t xtalk_pio_xtalk_addr_get(xtalk_piomap_t);
+ulong xtalk_pio_mapsz_get(xtalk_piomap_t);
+caddr_t xtalk_pio_kvaddr_get(xtalk_piomap_t);
+vertex_hdl_t xtalk_dma_dev_get(xtalk_dmamap_t);
+xwidgetnum_t xtalk_dma_target_get(xtalk_dmamap_t);
+xwidget_info_t xwidget_info_chk(vertex_hdl_t);
+xwidget_info_t xwidget_info_get(vertex_hdl_t);
+void xwidget_info_set(vertex_hdl_t, xwidget_info_t);
+vertex_hdl_t xwidget_info_dev_get(xwidget_info_t);
+xwidgetnum_t xwidget_info_id_get(xwidget_info_t);
+vertex_hdl_t xwidget_info_master_get(xwidget_info_t);
+xwidgetnum_t xwidget_info_masterid_get(xwidget_info_t);
+xwidget_part_num_t xwidget_info_part_num_get(xwidget_info_t);
+xwidget_mfg_num_t xwidget_info_mfg_num_get(xwidget_info_t);
+char *xwidget_info_name_get(xwidget_info_t);
+void xtalk_provider_register(vertex_hdl_t, xtalk_provider_t *);
+void xtalk_provider_unregister(vertex_hdl_t);
+xtalk_provider_t *xtalk_provider_fns_get(vertex_hdl_t);
+int xwidget_driver_register(xwidget_part_num_t,
+ xwidget_mfg_num_t,
+ char *, unsigned);
+void xwidget_driver_unregister(char *);
+int xwidget_register(xwidget_hwid_t, vertex_hdl_t,
+ xwidgetnum_t, vertex_hdl_t,
+ xwidgetnum_t);
+int xwidget_unregister(vertex_hdl_t);
+void xwidget_reset(vertex_hdl_t);
+char *xwidget_name_get(vertex_hdl_t);
+#if !defined(DEV_FUNC)
+/*
+ * There is more than one possible provider
+ * for this platform. We need to examine the
+ * master vertex of the current vertex for
+ * a provider function structure, and indirect
+ * through the appropriately named member.
+ */
+#define DEV_FUNC(dev,func) xwidget_to_provider_fns(dev)->func
+#define CAST_PIOMAP(x) ((xtalk_piomap_t)(x))
+#define CAST_DMAMAP(x) ((xtalk_dmamap_t)(x))
+#define CAST_INTR(x) ((xtalk_intr_t)(x))
+xtalk_provider_t * xwidget_info_pops_get(xwidget_info_t info);
+
+static xtalk_provider_t *
+xwidget_to_provider_fns(vertex_hdl_t xconn)
+{
+ xwidget_info_t widget_info;
+ xtalk_provider_t *provider_fns;
+
+ widget_info = xwidget_info_get(xconn);
+ ASSERT(widget_info != NULL);
+
+ provider_fns = xwidget_info_pops_get(widget_info);
+ ASSERT(provider_fns != NULL);
+
+ return (provider_fns);
+}
+
+xtalk_provider_t *
+xwidget_info_pops_get(xwidget_info_t info) {
+ vertex_hdl_t master = info->w_master;
+ xtalk_provider_t *provider_fns;
+
+ provider_fns = xtalk_provider_fns_get(master);
+
+ ASSERT(provider_fns != NULL);
+ return provider_fns;
+}
+#endif
+
+/*
+ * Many functions are not passed their vertex
+ * information directly; rather, they must
+ * dive through a resource map. These macros
+ * are available to coordinate this detail.
+ */
+#define PIOMAP_FUNC(map,func) DEV_FUNC(map->xp_dev,func)
+#define DMAMAP_FUNC(map,func) DEV_FUNC(map->xd_dev,func)
+#define INTR_FUNC(intr,func) DEV_FUNC(intr_hdl->xi_dev,func)
+
+/* =====================================================================
+ * PIO MANAGEMENT
+ *
+ * For mapping system virtual address space to
+ * xtalk space on a specified widget
+ */
+
+xtalk_piomap_t
+xtalk_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* map for this xtalk_addr range */
+ size_t byte_count,
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned flags)
+{ /* defined in sys/pio.h */
+ return (xtalk_piomap_t) DEV_FUNC(dev, piomap_alloc)
+ (dev, dev_desc, xtalk_addr, byte_count, byte_count_max, flags);
+}
+
+
+void
+xtalk_piomap_free(xtalk_piomap_t xtalk_piomap)
+{
+ PIOMAP_FUNC(xtalk_piomap, piomap_free)
+ (CAST_PIOMAP(xtalk_piomap));
+}
+
+
+caddr_t
+xtalk_piomap_addr(xtalk_piomap_t xtalk_piomap, /* mapping resources */
+ iopaddr_t xtalk_addr, /* map for this xtalk address */
+ size_t byte_count)
+{ /* map this many bytes */
+ return PIOMAP_FUNC(xtalk_piomap, piomap_addr)
+ (CAST_PIOMAP(xtalk_piomap), xtalk_addr, byte_count);
+}
+
+
+void
+xtalk_piomap_done(xtalk_piomap_t xtalk_piomap)
+{
+ PIOMAP_FUNC(xtalk_piomap, piomap_done)
+ (CAST_PIOMAP(xtalk_piomap));
+}
+
+
+caddr_t
+xtalk_piotrans_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* Crosstalk address */
+ size_t byte_count, /* map this many bytes */
+ unsigned flags)
+{ /* (currently unused) */
+ return DEV_FUNC(dev, piotrans_addr)
+ (dev, dev_desc, xtalk_addr, byte_count, flags);
+}
+
+caddr_t
+xtalk_pio_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t addr, /* starting address (or offset in window) */
+ size_t byte_count, /* map this many bytes */
+ xtalk_piomap_t *mapp, /* where to return the map pointer */
+ unsigned flags)
+{ /* PIO flags */
+ xtalk_piomap_t map = 0;
+ caddr_t res;
+
+ if (mapp)
+ *mapp = 0; /* record "no map used" */
+
+ res = xtalk_piotrans_addr
+ (dev, dev_desc, addr, byte_count, flags);
+ if (res)
+ return res; /* xtalk_piotrans worked */
+
+ map = xtalk_piomap_alloc
+ (dev, dev_desc, addr, byte_count, byte_count, flags);
+ if (!map)
+ return res; /* xtalk_piomap_alloc failed */
+
+ res = xtalk_piomap_addr
+ (map, addr, byte_count);
+ if (!res) {
+ xtalk_piomap_free(map);
+ return res; /* xtalk_piomap_addr failed */
+ }
+ if (mapp)
+ *mapp = map; /* pass back map used */
+
+ return res; /* xtalk_piomap_addr succeeded */
+}
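xtalk_pio_addr above layers two strategies: try the fixed piotrans translation first, and only if that fails allocate mapping resources, translate through them, and hand the map back to the caller for later teardown. A standalone sketch of that fallback shape, with hypothetical stub translators (the cutoff address and stub behavior are assumptions for the demo):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the real provider hooks. */
typedef struct map_s { int in_use; } map_t;
static map_t the_map;

/* Fast path: succeeds only for "low" addresses (demo assumption). */
static char *fast_trans(unsigned long addr)
{
	return (addr < 0x1000) ? (char *)addr : NULL;
}

/* Slow path: allocate mapping resources, then translate through them. */
static map_t *map_alloc(void) { the_map.in_use = 1; return &the_map; }
static char *map_addr(map_t *m, unsigned long addr)
{
	(void)m;
	return (char *)addr;
}
static void map_free(map_t *m) { m->in_use = 0; }

/* Same shape as xtalk_pio_addr: try fast translation, else build a map. */
static char *pio_addr(unsigned long addr, map_t **mapp)
{
	map_t *map;
	char *res;

	if (mapp)
		*mapp = NULL;		/* record "no map used" */

	res = fast_trans(addr);
	if (res)
		return res;		/* fast path worked */

	map = map_alloc();
	if (!map)
		return NULL;		/* allocation failed */

	res = map_addr(map, addr);
	if (!res) {
		map_free(map);		/* don't leak the map on failure */
		return NULL;
	}
	if (mapp)
		*mapp = map;		/* pass back map used */
	return res;
}
```

As in the original, the caller learns via the returned map pointer whether it owns mapping resources that must eventually be freed.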
+
+/* =====================================================================
+ * EARLY PIOTRANS SUPPORT
+ *
+ * There are places where drivers (mgras, for instance)
+ * need to get PIO translations before the infrastructure
+ * is extended to them (setting up textports, for
+ * instance). These drivers should call
+ * xtalk_early_piotrans_addr with their xtalk ID
+ * information, a sequence number (so we can use the second
+ * mgras for instance), and the usual piotrans parameters.
+ *
+ * Machine specific code should provide an implementation
+ * of early_piotrans_addr, and present a pointer to this
+ * function to xtalk_set_early_piotrans_addr so it can be
+ * used by clients without the clients having to know what
+ * platform or what xtalk provider is in use.
+ */
+
+static xtalk_early_piotrans_addr_f null_xtalk_early_piotrans_addr;
+
+xtalk_early_piotrans_addr_f *impl_early_piotrans_addr = null_xtalk_early_piotrans_addr;
+
+/* xtalk_set_early_piotrans_addr:
+ * specify the early_piotrans_addr implementation function.
+ */
+void
+xtalk_set_early_piotrans_addr(xtalk_early_piotrans_addr_f *impl)
+{
+ impl_early_piotrans_addr = impl;
+}
+
+/* xtalk_early_piotrans_addr:
+ * figure out a PIO address for the "nth" io channel widget that
+ * matches the specified part and mfgr number. Returns NULL if
+ * there is no such widget, or if the requested mapping can not
+ * be constructed.
+ * Limitations on which io channel slots (and busses) are
+ * checked, and definitions of the ordering of the search across
+ * the io channel slots, are defined by the platform.
+ */
+caddr_t
+xtalk_early_piotrans_addr(xwidget_part_num_t part_num,
+ xwidget_mfg_num_t mfg_num,
+ int which,
+ iopaddr_t xtalk_addr,
+ size_t byte_count,
+ unsigned flags)
+{
+ return impl_early_piotrans_addr
+ (part_num, mfg_num, which, xtalk_addr, byte_count, flags);
+}
+
+/* null_xtalk_early_piotrans_addr:
+ * used as the early_piotrans_addr implementation until and
+ * unless a real implementation is provided. In DEBUG kernels,
+ * we want to know who is calling before the implementation is
+ * registered; in non-DEBUG kernels, return NULL representing
+ * lack of mapping support.
+ */
+/*ARGSUSED */
+static caddr_t
+null_xtalk_early_piotrans_addr(xwidget_part_num_t part_num,
+ xwidget_mfg_num_t mfg_num,
+ int which,
+ iopaddr_t xtalk_addr,
+ size_t byte_count,
+ unsigned flags)
+{
+#if DEBUG
+ panic("null_xtalk_early_piotrans_addr");
+#endif
+ return NULL;
+}
+
+/* =====================================================================
+ * DMA MANAGEMENT
+ *
+ * For mapping from io channel space to system
+ * physical space.
+ */
+
+xtalk_dmamap_t
+xtalk_dmamap_alloc(vertex_hdl_t dev, /* set up mappings for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned flags)
+{ /* defined in dma.h */
+ return (xtalk_dmamap_t) DEV_FUNC(dev, dmamap_alloc)
+ (dev, dev_desc, byte_count_max, flags);
+}
+
+
+void
+xtalk_dmamap_free(xtalk_dmamap_t xtalk_dmamap)
+{
+ DMAMAP_FUNC(xtalk_dmamap, dmamap_free)
+ (CAST_DMAMAP(xtalk_dmamap));
+}
+
+
+iopaddr_t
+xtalk_dmamap_addr(xtalk_dmamap_t xtalk_dmamap, /* use these mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count)
+{ /* map this many bytes */
+ return DMAMAP_FUNC(xtalk_dmamap, dmamap_addr)
+ (CAST_DMAMAP(xtalk_dmamap), paddr, byte_count);
+}
+
+
+void
+xtalk_dmamap_done(xtalk_dmamap_t xtalk_dmamap)
+{
+ DMAMAP_FUNC(xtalk_dmamap, dmamap_done)
+ (CAST_DMAMAP(xtalk_dmamap));
+}
+
+
+iopaddr_t
+xtalk_dmatrans_addr(vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned flags)
+{ /* defined in dma.h */
+ return DEV_FUNC(dev, dmatrans_addr)
+ (dev, dev_desc, paddr, byte_count, flags);
+}
+
+
+void
+xtalk_dmamap_drain(xtalk_dmamap_t map)
+{
+ DMAMAP_FUNC(map, dmamap_drain)
+ (CAST_DMAMAP(map));
+}
+
+void
+xtalk_dmaaddr_drain(vertex_hdl_t dev, paddr_t addr, size_t size)
+{
+ DEV_FUNC(dev, dmaaddr_drain)
+ (dev, addr, size);
+}
+
+/* =====================================================================
+ * INTERRUPT MANAGEMENT
+ *
+ * Allow io channel devices to establish interrupts
+ */
+
+/*
+ * Allocate resources required for an interrupt as specified in intr_desc.
+ * Return resource handle in intr_hdl.
+ */
+xtalk_intr_t
+xtalk_intr_alloc(vertex_hdl_t dev, /* which Crosstalk device */
+ device_desc_t dev_desc, /* device descriptor */
+ vertex_hdl_t owner_dev)
+{ /* owner of this interrupt */
+ return (xtalk_intr_t) DEV_FUNC(dev, intr_alloc)
+ (dev, dev_desc, owner_dev);
+}
+
+/*
+ * Allocate resources required for an interrupt as specified in dev_desc.
+ * Unconditionally setup resources to be non-threaded.
+ * Return resource handle in intr_hdl.
+ */
+xtalk_intr_t
+xtalk_intr_alloc_nothd(vertex_hdl_t dev, /* which Crosstalk device */
+ device_desc_t dev_desc, /* device descriptor */
+ vertex_hdl_t owner_dev) /* owner of this interrupt */
+{
+ return (xtalk_intr_t) DEV_FUNC(dev, intr_alloc_nothd)
+ (dev, dev_desc, owner_dev);
+}
+
+/*
+ * Free resources consumed by intr_alloc.
+ */
+void
+xtalk_intr_free(xtalk_intr_t intr_hdl)
+{
+ INTR_FUNC(intr_hdl, intr_free)
+ (CAST_INTR(intr_hdl));
+}
+
+
+/*
+ * Associate resources allocated with a previous xtalk_intr_alloc call with the
+ * described handler, arg, name, etc.
+ *
+ * Returns 0 on success, returns <0 on failure.
+ */
+int
+xtalk_intr_connect(xtalk_intr_t intr_hdl, /* xtalk intr resource handle */
+ intr_func_t intr_func, /* xtalk intr handler */
+ intr_arg_t intr_arg, /* arg to intr handler */
+ xtalk_intr_setfunc_t setfunc, /* func to set intr hw */
+ void *setfunc_arg) /* arg to setfunc */
+{
+ return INTR_FUNC(intr_hdl, intr_connect)
+ (CAST_INTR(intr_hdl), intr_func, intr_arg, setfunc, setfunc_arg);
+}
+
+
+/*
+ * Disassociate handler with the specified interrupt.
+ */
+void
+xtalk_intr_disconnect(xtalk_intr_t intr_hdl)
+{
+ INTR_FUNC(intr_hdl, intr_disconnect)
+ (CAST_INTR(intr_hdl));
+}
+
+
+/*
+ * Return a hwgraph vertex that represents the CPU currently
+ * targeted by an interrupt.
+ */
+vertex_hdl_t
+xtalk_intr_cpu_get(xtalk_intr_t intr_hdl)
+{
+ return (vertex_hdl_t)0;
+}
+
+
+/*
+ * =====================================================================
+ * ERROR MANAGEMENT
+ */
+
+/*
+ * xtalk_error_handler:
+ * pass this error on to the handler registered
+ * at the specified xtalk connection point,
+ * or complain about it here if there is no handler.
+ *
+ * This routine plays two roles during error delivery
+ * to most widgets: first, the external agent (heart,
+ * hub, or whatever) calls in with the error and the
+ * connect point representing the io channel switch,
+ * or whatever io channel device is directly connected
+ * to the agent.
+ *
+ * If there is a switch, it will generally look at the
+ * widget number stashed in the ioerror structure; and,
+ * if the error came from some widget other than the
+ * switch, it will call back into xtalk_error_handler
+ * with the connection point of the offending port.
+ */
+int
+xtalk_error_handler(
+ vertex_hdl_t xconn,
+ int error_code,
+ ioerror_mode_t mode,
+ ioerror_t *ioerror)
+{
+ xwidget_info_t xwidget_info;
+ char name[MAXDEVNAME];
+
+
+ xwidget_info = xwidget_info_get(xconn);
+ /* Make sure that xwidget_info is a valid pointer before dereferencing it.
+ * We could come in here during very early initialization.
+ */
+ if (xwidget_info && xwidget_info->w_efunc)
+ return xwidget_info->w_efunc
+ (xwidget_info->w_einfo,
+ error_code, mode, ioerror);
+ /*
+ * no error handler registered for
+ * the offending port. it's not clear
+ * what needs to be done, but reporting
+ * it would be a good thing, unless it
+ * is a mode that requires nothing.
+ */
+ if ((mode == MODE_DEVPROBE) || (mode == MODE_DEVUSERERROR) ||
+ (mode == MODE_DEVREENABLE))
+ return IOERROR_HANDLED;
+
+ printk(KERN_WARNING "Xbow at %s encountered Fatal error\n", vertex_to_name(xconn, name, MAXDEVNAME));
+
+ return IOERROR_UNHANDLED;
+}
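The dispatch above reduces to: if the widget registered an error function, delegate to it with its private einfo; otherwise decide locally. A minimal sketch of that delegate-or-fallback pattern (all names hypothetical):

```c
#include <assert.h>
#include <stddef.h>

#define HANDLED		0
#define UNHANDLED	(-1)

/* Per-widget error state: optional handler plus its private argument. */
typedef int error_fn(void *einfo, int code);
struct widget {
	error_fn *efunc;
	void *einfo;
};

/* Demo handler (hypothetical): always claims the error. */
static int demo_handler(void *einfo, int code)
{
	(void)einfo;
	(void)code;
	return HANDLED;
}

/*
 * Same dispatch shape as xtalk_error_handler: delegate to the
 * registered handler if there is one, otherwise report UNHANDLED.
 * The NULL check matters because errors can arrive during very
 * early initialization, before any handler is registered.
 */
static int dispatch_error(struct widget *w, int code)
{
	if (w && w->efunc)
		return w->efunc(w->einfo, code);
	return UNHANDLED;
}
```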
+
+
+/* =====================================================================
+ * CONFIGURATION MANAGEMENT
+ */
+
+/*
+ * Startup an io channel provider
+ */
+void
+xtalk_provider_startup(vertex_hdl_t xtalk_provider)
+{
+ ((xtalk_provider_t *) hwgraph_fastinfo_get(xtalk_provider))->provider_startup(xtalk_provider);
+}
+
+
+/*
+ * Shutdown an io channel provider
+ */
+void
+xtalk_provider_shutdown(vertex_hdl_t xtalk_provider)
+{
+ ((xtalk_provider_t *) hwgraph_fastinfo_get(xtalk_provider))->provider_shutdown(xtalk_provider);
+}
+
+/*
+ * Enable a device on an xtalk widget
+ */
+void
+xtalk_widgetdev_enable(vertex_hdl_t xconn_vhdl, int devnum)
+{
+ return;
+}
+
+/*
+ * Shutdown a device on an xtalk widget
+ */
+void
+xtalk_widgetdev_shutdown(vertex_hdl_t xconn_vhdl, int devnum)
+{
+ return;
+}
+
+/*
+ * Generic io channel functions, for use with all io channel providers
+ * and all io channel devices.
+ */
+
+/* Generic io channel interrupt interfaces */
+vertex_hdl_t
+xtalk_intr_dev_get(xtalk_intr_t xtalk_intr)
+{
+ return (xtalk_intr->xi_dev);
+}
+
+xwidgetnum_t
+xtalk_intr_target_get(xtalk_intr_t xtalk_intr)
+{
+ return (xtalk_intr->xi_target);
+}
+
+xtalk_intr_vector_t
+xtalk_intr_vector_get(xtalk_intr_t xtalk_intr)
+{
+ return (xtalk_intr->xi_vector);
+}
+
+iopaddr_t
+xtalk_intr_addr_get(struct xtalk_intr_s *xtalk_intr)
+{
+ return (xtalk_intr->xi_addr);
+}
+
+void *
+xtalk_intr_sfarg_get(xtalk_intr_t xtalk_intr)
+{
+ return (xtalk_intr->xi_sfarg);
+}
+
+/* Generic io channel pio interfaces */
+vertex_hdl_t
+xtalk_pio_dev_get(xtalk_piomap_t xtalk_piomap)
+{
+ return (xtalk_piomap->xp_dev);
+}
+
+xwidgetnum_t
+xtalk_pio_target_get(xtalk_piomap_t xtalk_piomap)
+{
+ return (xtalk_piomap->xp_target);
+}
+
+iopaddr_t
+xtalk_pio_xtalk_addr_get(xtalk_piomap_t xtalk_piomap)
+{
+ return (xtalk_piomap->xp_xtalk_addr);
+}
+
+ulong
+xtalk_pio_mapsz_get(xtalk_piomap_t xtalk_piomap)
+{
+ return (xtalk_piomap->xp_mapsz);
+}
+
+caddr_t
+xtalk_pio_kvaddr_get(xtalk_piomap_t xtalk_piomap)
+{
+ return (xtalk_piomap->xp_kvaddr);
+}
+
+
+/* Generic io channel dma interfaces */
+vertex_hdl_t
+xtalk_dma_dev_get(xtalk_dmamap_t xtalk_dmamap)
+{
+ return (xtalk_dmamap->xd_dev);
+}
+
+xwidgetnum_t
+xtalk_dma_target_get(xtalk_dmamap_t xtalk_dmamap)
+{
+ return (xtalk_dmamap->xd_target);
+}
+
+
+/* Generic io channel widget information interfaces */
+
+/* xwidget_info_chk:
+ * check to see if this vertex is a widget;
+ * if so, return its widget_info (if any).
+ * if not, return NULL.
+ */
+xwidget_info_t
+xwidget_info_chk(vertex_hdl_t xwidget)
+{
+ arbitrary_info_t ainfo = 0;
+
+ hwgraph_info_get_LBL(xwidget, INFO_LBL_XWIDGET, &ainfo);
+ return (xwidget_info_t) ainfo;
+}
+
+
+xwidget_info_t
+xwidget_info_get(vertex_hdl_t xwidget)
+{
+ xwidget_info_t widget_info;
+
+ widget_info = (xwidget_info_t)
+ hwgraph_fastinfo_get(xwidget);
+
+ return (widget_info);
+}
+
+void
+xwidget_info_set(vertex_hdl_t xwidget, xwidget_info_t widget_info)
+{
+ if (widget_info != NULL)
+ widget_info->w_fingerprint = widget_info_fingerprint;
+
+ hwgraph_fastinfo_set(xwidget, (arbitrary_info_t) widget_info);
+
+ /* Also, mark this vertex as an xwidget,
+ * and use the widget_info, so xwidget_info_chk
+ * can work (and be fairly efficient).
+ */
+ hwgraph_info_add_LBL(xwidget, INFO_LBL_XWIDGET,
+ (arbitrary_info_t) widget_info);
+}
+
+vertex_hdl_t
+xwidget_info_dev_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_dev_get: null xwidget_info");
+ return (xwidget_info->w_vertex);
+}
+
+xwidgetnum_t
+xwidget_info_id_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_id_get: null xwidget_info");
+ return (xwidget_info->w_id);
+}
+
+
+vertex_hdl_t
+xwidget_info_master_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_master_get: null xwidget_info");
+ return (xwidget_info->w_master);
+}
+
+xwidgetnum_t
+xwidget_info_masterid_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_masterid_get: null xwidget_info");
+ return (xwidget_info->w_masterid);
+}
+
+xwidget_part_num_t
+xwidget_info_part_num_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_part_num_get: null xwidget_info");
+ return (xwidget_info->w_hwid.part_num);
+}
+
+xwidget_mfg_num_t
+xwidget_info_mfg_num_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_mfg_num_get: null xwidget_info");
+ return (xwidget_info->w_hwid.mfg_num);
+}
+/* Extract the widget name from the widget information
+ * for the xtalk widget.
+ */
+char *
+xwidget_info_name_get(xwidget_info_t xwidget_info)
+{
+ if (xwidget_info == NULL)
+ panic("xwidget_info_name_get: null xwidget_info");
+ return(xwidget_info->w_name);
+}
+/* Generic io channel initialization interfaces */
+
+/*
+ * Associate a set of xtalk_provider functions with a vertex.
+ */
+void
+xtalk_provider_register(vertex_hdl_t provider, xtalk_provider_t *xtalk_fns)
+{
+ hwgraph_fastinfo_set(provider, (arbitrary_info_t) xtalk_fns);
+}
+
+/*
+ * Disassociate a set of xtalk_provider functions with a vertex.
+ */
+void
+xtalk_provider_unregister(vertex_hdl_t provider)
+{
+ hwgraph_fastinfo_set(provider, (arbitrary_info_t)NULL);
+}
+
+/*
+ * Obtain a pointer to the xtalk_provider functions for a specified Crosstalk
+ * provider.
+ */
+xtalk_provider_t *
+xtalk_provider_fns_get(vertex_hdl_t provider)
+{
+ return ((xtalk_provider_t *) hwgraph_fastinfo_get(provider));
+}
+
+/*
+ * Inform xtalk infrastructure that a driver is no longer available for
+ * handling any widgets.
+ */
+void
+xwidget_driver_unregister(char *driver_prefix)
+{
+ return;
+}
+
+/*
+ * Call some function with each vertex that
+ * might be one of this driver's attach points.
+ */
+void
+xtalk_iterate(char *driver_prefix,
+ xtalk_iter_f *func)
+{
+}
+
+/*
+ * xwidget_register:
+ * Register an xtalk device (xwidget) by doing the following.
+ * -allocate and initialize xwidget_info data
+ * -allocate a hwgraph vertex with name based on widget number (id)
+ * -look up the widget's initialization function and call it,
+ * or remember the vertex for later initialization.
+ *
+ */
+int
+xwidget_register(xwidget_hwid_t hwid, /* widget's hardware ID */
+ vertex_hdl_t widget, /* widget to initialize */
+ xwidgetnum_t id, /* widget's target id (0..f) */
+ vertex_hdl_t master, /* widget's master vertex */
+ xwidgetnum_t targetid) /* master's target id (9/a) */
+{
+ xwidget_info_t widget_info;
+ char *s, devnm[MAXDEVNAME];
+
+ /* Allocate widget_info and associate it with widget vertex */
+ widget_info = kmalloc(sizeof(*widget_info), GFP_KERNEL);
+ if (!widget_info)
+ return -ENOMEM;
+
+ /* Initialize widget_info */
+ widget_info->w_vertex = widget;
+ widget_info->w_id = id;
+ widget_info->w_master = master;
+ widget_info->w_masterid = targetid;
+ widget_info->w_hwid = *hwid; /* structure copy */
+ widget_info->w_efunc = 0;
+ widget_info->w_einfo = 0;
+ /*
+ * get the name of this xwidget vertex and keep the info.
+ * This is needed during errors and interrupts, but as
+ * long as we have it, we can use it elsewhere.
+ */
+ s = dev_to_name(widget, devnm, MAXDEVNAME);
+ widget_info->w_name = kmalloc(strlen(s) + 1, GFP_KERNEL);
+ if (!widget_info->w_name) {
+ kfree(widget_info);
+ return -ENOMEM;
+ }
+ strcpy(widget_info->w_name, s);
+
+ xwidget_info_set(widget, widget_info);
+
+ device_master_set(widget, master);
+
+ /*
+ * Add pointer to async attach info -- tear down will be done when
+ * the particular descendant is done with the info.
+ */
+ return cdl_add_connpt(hwid->part_num, hwid->mfg_num,
+ widget, 0);
+}
+
+/*
+ * xwidget_unregister:
+ * Unregister the xtalk device and detach all its hwgraph namespace.
+ */
+int
+xwidget_unregister(vertex_hdl_t widget)
+{
+ xwidget_info_t widget_info;
+ xwidget_hwid_t hwid;
+
+ /* Make sure that we have valid widget information initialized */
+ if (!(widget_info = xwidget_info_get(widget)))
+ return 1;
+
+ hwid = &(widget_info->w_hwid);
+
+ kfree(widget_info->w_name);
+ kfree(widget_info);
+ return 0;
+}
+
+void
+xwidget_error_register(vertex_hdl_t xwidget,
+ error_handler_f *efunc,
+ error_handler_arg_t einfo)
+{
+ xwidget_info_t xwidget_info;
+
+ xwidget_info = xwidget_info_get(xwidget);
+ ASSERT(xwidget_info != NULL);
+ xwidget_info->w_efunc = efunc;
+ xwidget_info->w_einfo = einfo;
+}
+
+/*
+ * Issue a link reset to a widget.
+ */
+void
+xwidget_reset(vertex_hdl_t xwidget)
+{
+ xswitch_reset_link(xwidget);
+}
+
+
+void
+xwidget_gfx_reset(vertex_hdl_t xwidget)
+{
+ return;
+}
+
+#define ANON_XWIDGET_NAME "No Name" /* Default Widget Name */
+
+/* Get the canonical hwgraph name of xtalk widget */
+char *
+xwidget_name_get(vertex_hdl_t xwidget_vhdl)
+{
+ xwidget_info_t info;
+
+ /* If we have a bogus widget handle then return
+ * a default anonymous widget name.
+ */
+ if (xwidget_vhdl == GRAPH_VERTEX_NONE)
+ return(ANON_XWIDGET_NAME);
+ /* Read the widget name stored in the widget info
+ * for the widget setup during widget initialization.
+ */
+ info = xwidget_info_get(xwidget_vhdl);
+ ASSERT(info != NULL);
+ return(xwidget_info_name_get(info));
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <asm/sn/sgi.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/pci/pci_bus_cvlink.h>
+#include <asm/sn/simulator.h>
+
+extern pciio_provider_t *pciio_to_provider_fns(vertex_hdl_t dev);
+
+int
+snia_badaddr_val(volatile void *addr, int len, volatile void *ptr)
+{
+ int ret = 0;
+ volatile void *new_addr;
+
+ switch (len) {
+ case 4:
+ new_addr = (void *) addr;
+ ret = ia64_sn_probe_io_slot((long) new_addr, len, (void *) ptr);
+ break;
+ default:
+ printk(KERN_WARNING
+ "snia_badaddr_val given len %x but supports len of 4 only\n",
+ len);
+ }
+
+ if (ret < 0)
+ panic("snia_badaddr_val: unexpected status (%d) in probing",
+ ret);
+ return (ret);
+
+}
+
+nasid_t
+snia_get_console_nasid(void)
+{
+ extern nasid_t console_nasid;
+ extern nasid_t master_baseio_nasid;
+
+ if (console_nasid < 0) {
+ console_nasid = ia64_sn_get_console_nasid();
+ if (console_nasid < 0) {
+ /* ZZZ: what do we do if we don't get a console nasid on the hardware? */
+ if (IS_RUNNING_ON_SIMULATOR())
+ console_nasid = master_baseio_nasid;
+ }
+ }
+ return console_nasid;
+}
+
+nasid_t
+snia_get_master_baseio_nasid(void)
+{
+ extern nasid_t master_baseio_nasid;
+ extern char master_baseio_wid;
+
+ if (master_baseio_nasid < 0) {
+ master_baseio_nasid = ia64_sn_get_master_baseio_nasid();
+
+ if (master_baseio_nasid >= 0) {
+ master_baseio_wid =
+ WIDGETID_GET(KL_CONFIG_CH_CONS_INFO
+ (master_baseio_nasid)->memory_base);
+ }
+ }
+ return master_baseio_nasid;
+}
+
+/*
+ * XXX: should probably be called __sn2_pci_rrb_alloc
+ * used by qla1280
+ */
+
+int
+snia_pcibr_rrb_alloc(struct pci_dev *pci_dev,
+ int *count_vchan0, int *count_vchan1)
+{
+ vertex_hdl_t dev = PCIDEV_VERTEX(pci_dev);
+
+ return pcibr_rrb_alloc(dev, count_vchan0, count_vchan1);
+}
+
+/*
+ * XXX: interface should be more like
+ *
+ * int __sn2_pci_enable_bwswap(struct pci_dev *dev);
+ * void __sn2_pci_disable_bswap(struct pci_dev *dev);
+ */
+/* used by ioc4 ide */
+
+pciio_endian_t
+snia_pciio_endian_set(struct pci_dev * pci_dev,
+ pciio_endian_t device_end, pciio_endian_t desired_end)
+{
+ vertex_hdl_t dev = PCIDEV_VERTEX(pci_dev);
+
+ return ((pciio_to_provider_fns(dev))->endian_set)
+ (dev, device_end, desired_end);
+}
+
+EXPORT_SYMBOL(snia_pciio_endian_set);
+EXPORT_SYMBOL(snia_pcibr_rrb_alloc);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <asm/errno.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/labelcl.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/xtalk/xswitch.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/xtalk/xtalk_private.h>
+
+
+/*
+ * This file provides generic support for Crosstalk
+ * Switches, in a way that insulates crosstalk providers
+ * from specifics about the switch chips being used.
+ */
+
+#include <asm/sn/xtalk/xbow.h>
+
+#define XSWITCH_CENSUS_BIT(port) (1<<(port))
+#define XSWITCH_CENSUS_PORT_MAX (0xF)
+#define XSWITCH_CENSUS_PORTS (0x10)
+#define XSWITCH_WIDGET_PRESENT(infop,port) ((infop)->census & XSWITCH_CENSUS_BIT(port))
+
+static char xswitch_info_fingerprint[] = "xswitch_info";
+
+struct xswitch_info_s {
+ char *fingerprint;
+ unsigned census;
+ vertex_hdl_t vhdl[XSWITCH_CENSUS_PORTS];
+ vertex_hdl_t master_vhdl[XSWITCH_CENSUS_PORTS];
+ xswitch_provider_t *xswitch_fns;
+};
+
+xswitch_info_t
+xswitch_info_get(vertex_hdl_t xwidget)
+{
+ xswitch_info_t xswitch_info;
+
+ xswitch_info = (xswitch_info_t)
+ hwgraph_fastinfo_get(xwidget);
+
+ return (xswitch_info);
+}
+
+void
+xswitch_info_vhdl_set(xswitch_info_t xswitch_info,
+ xwidgetnum_t port,
+ vertex_hdl_t xwidget)
+{
+ if (port > XSWITCH_CENSUS_PORT_MAX)
+ return;
+
+ xswitch_info->vhdl[(int)port] = xwidget;
+}
+
+vertex_hdl_t
+xswitch_info_vhdl_get(xswitch_info_t xswitch_info,
+ xwidgetnum_t port)
+{
+ if (port > XSWITCH_CENSUS_PORT_MAX)
+ return GRAPH_VERTEX_NONE;
+
+ return xswitch_info->vhdl[(int)port];
+}
+
+/*
+ * Some systems may allow for multiple switch masters. On such systems,
+ * we assign a master for each port on the switch. These interfaces
+ * establish and retrieve that assignment.
+ */
+void
+xswitch_info_master_assignment_set(xswitch_info_t xswitch_info,
+ xwidgetnum_t port,
+ vertex_hdl_t master_vhdl)
+{
+ if (port > XSWITCH_CENSUS_PORT_MAX)
+ return;
+
+ xswitch_info->master_vhdl[(int)port] = master_vhdl;
+}
+
+vertex_hdl_t
+xswitch_info_master_assignment_get(xswitch_info_t xswitch_info,
+ xwidgetnum_t port)
+{
+ if (port > XSWITCH_CENSUS_PORT_MAX)
+ return GRAPH_VERTEX_NONE;
+
+ return xswitch_info->master_vhdl[(int)port];
+}
+
+void
+xswitch_info_set(vertex_hdl_t xwidget, xswitch_info_t xswitch_info)
+{
+ xswitch_info->fingerprint = xswitch_info_fingerprint;
+ hwgraph_fastinfo_set(xwidget, (arbitrary_info_t) xswitch_info);
+}
+
+xswitch_info_t
+xswitch_info_new(vertex_hdl_t xwidget)
+{
+ xswitch_info_t xswitch_info;
+
+ xswitch_info = xswitch_info_get(xwidget);
+ if (xswitch_info == NULL) {
+ int port;
+
+ xswitch_info = kmalloc(sizeof(*xswitch_info), GFP_KERNEL);
+ if (!xswitch_info) {
+ printk(KERN_WARNING "xswitch_info_new(): Unable to "
+ "allocate memory\n");
+ return NULL;
+ }
+ xswitch_info->census = 0;
+ for (port = 0; port <= XSWITCH_CENSUS_PORT_MAX; port++) {
+ xswitch_info_vhdl_set(xswitch_info, port,
+ GRAPH_VERTEX_NONE);
+
+ xswitch_info_master_assignment_set(xswitch_info,
+ port,
+ GRAPH_VERTEX_NONE);
+ }
+ xswitch_info_set(xwidget, xswitch_info);
+ }
+ return xswitch_info;
+}
+
+void
+xswitch_provider_register(vertex_hdl_t busv,
+ xswitch_provider_t * xswitch_fns)
+{
+ xswitch_info_t xswitch_info = xswitch_info_get(busv);
+
+ ASSERT(xswitch_info);
+ xswitch_info->xswitch_fns = xswitch_fns;
+}
+
+void
+xswitch_info_link_is_ok(xswitch_info_t xswitch_info, xwidgetnum_t port)
+{
+ xswitch_info->census |= XSWITCH_CENSUS_BIT(port);
+}
+
+int
+xswitch_info_link_ok(xswitch_info_t xswitch_info, xwidgetnum_t port)
+{
+ if (port > XSWITCH_CENSUS_PORT_MAX)
+ return 0;
+
+ return (xswitch_info->census & XSWITCH_CENSUS_BIT(port));
+}
+
+int
+xswitch_reset_link(vertex_hdl_t xconn_vhdl)
+{
+ return xbow_reset_link(xconn_vhdl);
+}
#include <asm/sn/clksupport.h>
extern unsigned long sn_rtc_cycles_per_second;
-
static struct time_interpolator sn2_interpolator = {
.drift = -1,
.shift = 10,
#include <linux/sem.h>
#include <linux/msg.h>
#include <linux/shm.h>
#include <linux/compiler.h>
#include <linux/vs_cvirt.h>
--- /dev/null
+/*
+ * This file is derived from zlib.h and zconf.h from the zlib-0.95
+ * distribution by Jean-loup Gailly and Mark Adler, with some additions
+ * by Paul Mackerras to aid in implementing Deflate compression and
+ * decompression for PPP packets.
+ */
+
+/*
+ * ==FILEVERSION 960122==
+ *
+ * This marker is used by the Linux installation script to determine
+ * whether an up-to-date version of this file is already installed.
+ */
+
+/* zlib.h -- interface of the 'zlib' general purpose compression library
+ version 0.95, Aug 16th, 1995.
+
+ Copyright (C) 1995 Jean-loup Gailly and Mark Adler
+
+ This software is provided 'as-is', without any express or implied
+ warranty. In no event will the authors be held liable for any damages
+ arising from the use of this software.
+
+ Permission is granted to anyone to use this software for any purpose,
+ including commercial applications, and to alter it and redistribute it
+ freely, subject to the following restrictions:
+
+ 1. The origin of this software must not be misrepresented; you must not
+ claim that you wrote the original software. If you use this software
+ in a product, an acknowledgment in the product documentation would be
+ appreciated but is not required.
+ 2. Altered source versions must be plainly marked as such, and must not be
+ misrepresented as being the original software.
+ 3. This notice may not be removed or altered from any source distribution.
+
+ Jean-loup Gailly Mark Adler
+ gzip@prep.ai.mit.edu madler@alumni.caltech.edu
+ */
+
+#ifndef _ZLIB_H
+#define _ZLIB_H
+
+/* #include "zconf.h" */ /* included directly here */
+
+/* zconf.h -- configuration of the zlib compression library
+ * Copyright (C) 1995 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* From: zconf.h,v 1.12 1995/05/03 17:27:12 jloup Exp */
+
+/*
+ The library does not install any signal handler. It is recommended to
+ add at least a handler for SIGSEGV when decompressing; the library checks
+ the consistency of the input data whenever possible but may go nuts
+ for some forms of corrupted input.
+ */
+
+/*
+ * Compile with -DMAXSEG_64K if the alloc function cannot allocate more
+ * than 64k bytes at a time (needed on systems with 16-bit int).
+ * Compile with -DUNALIGNED_OK if it is OK to access shorts or ints
+ * at addresses which are not a multiple of their size.
+ * Under DOS, -DFAR=far or -DFAR=__far may be needed.
+ */
+
+#ifndef STDC
+# if defined(MSDOS) || defined(__STDC__) || defined(__cplusplus)
+# define STDC
+# endif
+#endif
+
+#ifdef __MWERKS__ /* Metrowerks CodeWarrior declares fileno() in unix.h */
+# include <unix.h>
+#endif
+
+/* Maximum value for memLevel in deflateInit2 */
+#ifndef MAX_MEM_LEVEL
+# ifdef MAXSEG_64K
+# define MAX_MEM_LEVEL 8
+# else
+# define MAX_MEM_LEVEL 9
+# endif
+#endif
+
+#ifndef FAR
+# define FAR
+#endif
+
+/* Maximum value for windowBits in deflateInit2 and inflateInit2 */
+#ifndef MAX_WBITS
+# define MAX_WBITS 15 /* 32K LZ77 window */
+#endif
+
+/* The memory requirements for deflate are (in bytes):
+ 1 << (windowBits+2) + 1 << (memLevel+9)
+ that is: 128K for windowBits=15 + 128K for memLevel = 8 (default values)
+ plus a few kilobytes for small objects. For example, if you want to reduce
+ the default memory requirements from 256K to 128K, compile with
+ make CFLAGS="-O -DMAX_WBITS=14 -DMAX_MEM_LEVEL=7"
+ Of course this will generally degrade compression (there's no free lunch).
+
+ The memory requirements for inflate are (in bytes) 1 << windowBits
+ that is, 32K for windowBits=15 (default value) plus a few kilobytes
+ for small objects.
+*/
+
+ /* Type declarations */
+
+#ifndef OF /* function prototypes */
+# ifdef STDC
+# define OF(args) args
+# else
+# define OF(args) ()
+# endif
+#endif
+
+typedef unsigned char Byte; /* 8 bits */
+typedef unsigned int uInt; /* 16 bits or more */
+typedef unsigned long uLong; /* 32 bits or more */
+
+typedef Byte FAR Bytef;
+typedef char FAR charf;
+typedef int FAR intf;
+typedef uInt FAR uIntf;
+typedef uLong FAR uLongf;
+
+#ifdef STDC
+ typedef void FAR *voidpf;
+ typedef void *voidp;
+#else
+ typedef Byte FAR *voidpf;
+ typedef Byte *voidp;
+#endif
+
+/* end of original zconf.h */
+
+#define ZLIB_VERSION "0.95P"
+
+/*
+ The 'zlib' compression library provides in-memory compression and
+ decompression functions, including integrity checks of the uncompressed
+ data. This version of the library supports only one compression method
+ (deflation) but other algorithms may be added later and will have the same
+ stream interface.
+
+ For compression the application must provide the output buffer and
+ may optionally provide the input buffer for optimization. For decompression,
+ the application must provide the input buffer and may optionally provide
+ the output buffer for optimization.
+
+ Compression can be done in a single step if the buffers are large
+ enough (for example if an input file is mmap'ed), or can be done by
+ repeated calls of the compression function. In the latter case, the
+ application must provide more input and/or consume the output
+ (providing more output space) before each call.
+*/
+
+typedef voidpf (*alloc_func) OF((voidpf opaque, uInt items, uInt size));
+typedef void (*free_func) OF((voidpf opaque, voidpf address, uInt nbytes));
+
+struct internal_state;
+
+typedef struct z_stream_s {
+ Bytef *next_in; /* next input byte */
+ uInt avail_in; /* number of bytes available at next_in */
+ uLong total_in; /* total nb of input bytes read so far */
+
+ Bytef *next_out; /* next output byte should be put there */
+ uInt avail_out; /* remaining free space at next_out */
+ uLong total_out; /* total nb of bytes output so far */
+
+ char *msg; /* last error message, NULL if no error */
+ struct internal_state FAR *state; /* not visible by applications */
+
+ alloc_func zalloc; /* used to allocate the internal state */
+ free_func zfree; /* used to free the internal state */
+ voidp opaque; /* private data object passed to zalloc and zfree */
+
+ Byte data_type; /* best guess about the data type: ascii or binary */
+
+} z_stream;
+
+/*
+ The application must update next_in and avail_in when avail_in has
+ dropped to zero. It must update next_out and avail_out when avail_out
+ has dropped to zero. The application must initialize zalloc, zfree and
+ opaque before calling the init function. All other fields are set by the
+ compression library and must not be updated by the application.
+
+ The opaque value provided by the application will be passed as the first
+ parameter for calls of zalloc and zfree. This can be useful for custom
+ memory management. The compression library attaches no meaning to the
+ opaque value.
+
+ zalloc must return Z_NULL if there is not enough memory for the object.
+ On 16-bit systems, the functions zalloc and zfree must be able to allocate
+ exactly 65536 bytes, but will not be required to allocate more than this
+ if the symbol MAXSEG_64K is defined (see zconf.h). WARNING: On MSDOS,
+ pointers returned by zalloc for objects of exactly 65536 bytes *must*
+ have their offset normalized to zero. The default allocation function
+ provided by this library ensures this (see zutil.c). To reduce memory
+ requirements and avoid any allocation of 64K objects, at the expense of
+ compression ratio, compile the library with -DMAX_WBITS=14 (see zconf.h).
+
+ The fields total_in and total_out can be used for statistics or
+ progress reports. After compression, total_in holds the total size of
+ the uncompressed data and may be saved for use in the decompressor
+ (particularly if the decompressor wants to decompress everything in
+ a single step).
+*/
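+/*
+  To make the zalloc/zfree contract above concrete, here is a minimal
+  standalone sketch of a malloc-backed allocator pair with the callback
+  shapes this header describes. The typedefs are reproduced locally for
+  illustration only; real client code takes them from zlib.h, and the
+  function names are hypothetical.
+
+  ```c
+  #include <stdlib.h>
+
+  /* Local stand-ins for the zlib.h typedefs (illustration only). */
+  typedef void *voidpf;
+  typedef unsigned int uInt;
+
+  /* Allocate 'items' objects of 'size' bytes; 'opaque' is the private
+   * pointer the application stored in z_stream.opaque and could carry
+   * a memory pool. Returns Z_NULL (0) on failure, as required. */
+  static voidpf my_zalloc(voidpf opaque, uInt items, uInt size)
+  {
+      (void)opaque;
+      return calloc(items, size);   /* zero-filled, like many clients use */
+  }
+
+  /* Free a block previously returned by my_zalloc; nbytes is advisory. */
+  static void my_zfree(voidpf opaque, voidpf address, uInt nbytes)
+  {
+      (void)opaque;
+      (void)nbytes;
+      free(address);
+  }
+  ```
+
+  The application would store these (plus its opaque pointer) in the
+  z_stream before calling the init function.
+*/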
+
+ /* constants */
+
+#define Z_NO_FLUSH 0
+#define Z_PARTIAL_FLUSH 1
+#define Z_FULL_FLUSH 2
+#define Z_SYNC_FLUSH 3 /* experimental: partial_flush + byte align */
+#define Z_FINISH 4
+#define Z_PACKET_FLUSH 5
+/* See deflate() below for the usage of these constants */
+
+#define Z_OK 0
+#define Z_STREAM_END 1
+#define Z_ERRNO (-1)
+#define Z_STREAM_ERROR (-2)
+#define Z_DATA_ERROR (-3)
+#define Z_MEM_ERROR (-4)
+#define Z_BUF_ERROR (-5)
+/* error codes for the compression/decompression functions */
+
+#define Z_BEST_SPEED 1
+#define Z_BEST_COMPRESSION 9
+#define Z_DEFAULT_COMPRESSION (-1)
+/* compression levels */
+
+#define Z_FILTERED 1
+#define Z_HUFFMAN_ONLY 2
+#define Z_DEFAULT_STRATEGY 0
+
+#define Z_BINARY 0
+#define Z_ASCII 1
+#define Z_UNKNOWN 2
+/* Used to set the data_type field */
+
+#define Z_NULL 0 /* for initializing zalloc, zfree, opaque */
+
+extern char *zlib_version;
+/* The application can compare zlib_version and ZLIB_VERSION for consistency.
+ If the first character differs, the library code actually used is
+ not compatible with the zlib.h header file used by the application.
+ */
+
+ /* basic functions */
+
+extern int inflateInit OF((z_stream *strm));
+/*
+ Initializes the internal stream state for decompression. The fields
+ zalloc and zfree must be initialized before by the caller. If zalloc and
+ zfree are set to Z_NULL, inflateInit updates them to use default allocation
+ functions.
+
+ inflateInit returns Z_OK if success, Z_MEM_ERROR if there was not
+ enough memory. msg is set to null if there is no error message.
+ inflateInit does not perform any decompression: this will be done by
+ inflate().
+*/
+
+
+extern int inflate OF((z_stream *strm, int flush));
+/*
+ Performs one or both of the following actions:
+
+ - Decompress more input starting at next_in and update next_in and avail_in
+ accordingly. If not all input can be processed (because there is not
+ enough room in the output buffer), next_in is updated and processing
+ will resume at this point for the next call of inflate().
+
+ - Provide more output starting at next_out and update next_out and avail_out
+ accordingly. inflate() always provides as much output as possible
+ (until there is no more input data or no more space in the output buffer).
+
+ Before the call of inflate(), the application should ensure that at least
+ one of the actions is possible, by providing more input and/or consuming
+ more output, and updating the next_* and avail_* values accordingly.
+ The application can consume the uncompressed output when it wants, for
+ example when the output buffer is full (avail_out == 0), or after each
+ call of inflate().
+
+ If the parameter flush is set to Z_PARTIAL_FLUSH or Z_PACKET_FLUSH,
+ inflate flushes as much output as possible to the output buffer. The
+ flushing behavior of inflate is not specified for values of the flush
+ parameter other than Z_PARTIAL_FLUSH, Z_PACKET_FLUSH or Z_FINISH, but the
+ current implementation actually flushes as much output as possible
+ anyway. For Z_PACKET_FLUSH, inflate checks that once all the input data
+ has been consumed, it is expecting to see the length field of a stored
+ block; if not, it returns Z_DATA_ERROR.
+
+ inflate() should normally be called until it returns Z_STREAM_END or an
+ error. However if all decompression is to be performed in a single step
+ (a single call of inflate), the parameter flush should be set to
+ Z_FINISH. In this case all pending input is processed and all pending
+ output is flushed; avail_out must be large enough to hold all the
+ uncompressed data. (The size of the uncompressed data may have been saved
+ by the compressor for this purpose.) The next operation on this stream must
+ be inflateEnd to deallocate the decompression state. The use of Z_FINISH
+ is never required, but can be used to inform inflate that a faster routine
+ may be used for the single inflate() call.
+
+ inflate() returns Z_OK if some progress has been made (more input
+ processed or more output produced), Z_STREAM_END if the end of the
+ compressed data has been reached and all uncompressed output has been
+ produced, Z_DATA_ERROR if the input data was corrupted, Z_STREAM_ERROR if
+ the stream structure was inconsistent (for example if next_in or next_out
+ was NULL), Z_MEM_ERROR if there was not enough memory, Z_BUF_ERROR if no
+ progress is possible or if there was not enough room in the output buffer
+ when Z_FINISH is used. In the Z_DATA_ERROR case, the application may then
+ call inflateSync to look for a good compression block. */
+
+
+extern int inflateEnd OF((z_stream *strm));
+/*
+ All dynamically allocated data structures for this stream are freed.
+ This function discards any unprocessed input and does not flush any
+ pending output.
+
+ inflateEnd returns Z_OK if success, Z_STREAM_ERROR if the stream state
+ was inconsistent. In the error case, msg may be set but then points to a
+ static string (which must not be deallocated).
+*/
+
+ /* advanced functions */
+
+extern int inflateInit2 OF((z_stream *strm,
+ int windowBits));
+/*
+ This is another version of inflateInit with more compression options. The
+ fields next_out, zalloc and zfree must be initialized before by the caller.
+
+ The windowBits parameter is the base two logarithm of the maximum window
+ size (the size of the history buffer). It should be in the range 8..15 for
+ this version of the library (the value 16 will be allowed soon). The
+ default value is 15 if inflateInit is used instead. If a compressed stream
+ with a larger window size is given as input, inflate() will return with
+ the error code Z_DATA_ERROR instead of trying to allocate a larger window.
+
+ If next_out is not null, the library will use this buffer for the history
+ buffer; the buffer must either be large enough to hold the entire output
+ data, or have at least 1<<windowBits bytes. If next_out is null, the
+ library will allocate its own buffer (and leave next_out null). next_in
+ need not be provided here but must be provided by the application for the
+ next call of inflate().
+
+ If the history buffer is provided by the application, next_out must
+ never be changed by the application since the decompressor maintains
+ history information inside this buffer from call to call; the application
+ can only reset next_out to the beginning of the history buffer when
+ avail_out is zero and all output has been consumed.
+
+ inflateInit2 returns Z_OK if success, Z_MEM_ERROR if there was
+ not enough memory, Z_STREAM_ERROR if a parameter is invalid (such as
+ windowBits < 8). msg is set to null if there is no error message.
+ inflateInit2 does not perform any decompression: this will be done by
+ inflate().
+*/
+
+extern int inflateSync OF((z_stream *strm));
+/*
+ Skips invalid compressed data until the special marker (see deflate()
+ above) can be found, or until all available input is skipped. No output
+ is provided.
+
+ inflateSync returns Z_OK if the special marker has been found, Z_BUF_ERROR
+ if no more input was provided, Z_DATA_ERROR if no marker has been found,
+ or Z_STREAM_ERROR if the stream structure was inconsistent. In the success
+ case, the application may save the current value of total_in which

+ indicates where valid compressed data was found. In the error case, the
+ application may repeatedly call inflateSync, providing more input each time,
+ until success or end of the input data.
+*/
+
+extern int inflateReset OF((z_stream *strm));
+/*
+ This function is equivalent to inflateEnd followed by inflateInit,
+ but does not free and reallocate all the internal decompression state.
+ The stream will keep attributes that may have been set by inflateInit2.
+
+ inflateReset returns Z_OK if success, or Z_STREAM_ERROR if the source
+ stream state was inconsistent (such as zalloc or state being NULL).
+*/
+
+extern int inflateIncomp OF((z_stream *strm));
+/*
+ This function adds the data at next_in (avail_in bytes) to the output
+ history without performing any output. There must be no pending output,
+ and the decompressor must be expecting to see the start of a block.
+ Calling this function is equivalent to decompressing a stored block
+ containing the data at next_in (except that the data is not output).
+*/
+
+ /* checksum functions */
+
+/*
+ This function is not related to compression but is exported
+ anyway because it might be useful in applications using the
+ compression library.
+*/
+
+extern uLong adler32 OF((uLong adler, Bytef *buf, uInt len));
+
+/*
+ Update a running Adler-32 checksum with the bytes buf[0..len-1] and
+ return the updated checksum. If buf is NULL, this function returns
+ the required initial value for the checksum.
+ An Adler-32 checksum is almost as reliable as a CRC32 but can be computed
+ much faster. Usage example:
+
+ uLong adler = adler32(0L, Z_NULL, 0);
+
+ while (read_buffer(buffer, length) != EOF) {
+ adler = adler32(adler, buffer, length);
+ }
+ if (adler != original_adler) error();
+*/
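+/*
+  For reference, the checksum described above can be sketched from
+  scratch as follows. This mirrors the documented semantics (initial
+  value 1, running sums modulo 65521) but is an illustrative
+  reimplementation, not the library's adler32(); the name adler32_ref
+  is hypothetical.
+
+  ```c
+  #include <stddef.h>
+
+  #define ADLER_BASE 65521UL   /* largest prime smaller than 65536 */
+
+  /* Update a running Adler-32 over buf[0..len-1]; a NULL buf returns
+   * the required initial value, matching the documented behavior. */
+  unsigned long adler32_ref(unsigned long adler,
+                            const unsigned char *buf, size_t len)
+  {
+      unsigned long a = adler & 0xffff;          /* low word: byte sum */
+      unsigned long b = (adler >> 16) & 0xffff;  /* high word: sum of sums */
+      size_t i;
+
+      if (buf == NULL)
+          return 1UL;                            /* initial checksum value */
+      for (i = 0; i < len; i++) {
+          a = (a + buf[i]) % ADLER_BASE;
+          b = (b + a) % ADLER_BASE;
+      }
+      return (b << 16) | a;
+  }
+  ```
+
+  For example, checksumming the 9 bytes "Wikipedia" from the initial
+  value yields 0x11E60398.
+*/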
+
+#ifndef _Z_UTIL_H
+ struct internal_state {int dummy;}; /* hack for buggy compilers */
+#endif
+
+#endif /* _ZLIB_H */
--- /dev/null
+/*
+ * This file is derived from various .h and .c files from the zlib-0.95
+ * distribution by Jean-loup Gailly and Mark Adler, with some additions
+ * by Paul Mackerras to aid in implementing Deflate compression and
+ * decompression for PPP packets. See zlib.h for conditions of
+ * distribution and use.
+ *
+ * Changes that have been made include:
+ * - changed functions not used outside this file to "local"
+ * - added minCompression parameter to deflateInit2
+ * - added Z_PACKET_FLUSH (see zlib.h for details)
+ * - added inflateIncomp
+ *
+ */
+
+/*+++++*/
+/* zutil.h -- internal interface and configuration of the compression library
+ * Copyright (C) 1995 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+/* From: zutil.h,v 1.9 1995/05/03 17:27:12 jloup Exp */
+
+#define _Z_UTIL_H
+
+#include "zlib.h"
+
+#ifndef local
+# define local static
+#endif
+/* compile with -Dlocal if your debugger can't find static symbols */
+
+#define FAR
+
+typedef unsigned char uch;
+typedef uch FAR uchf;
+typedef unsigned short ush;
+typedef ush FAR ushf;
+typedef unsigned long ulg;
+
+extern char *z_errmsg[]; /* indexed by 1-zlib_error */
+
+#define ERR_RETURN(strm,err) return (strm->msg=z_errmsg[1-err], err)
+/* To be used only when the state is known to be valid */
+
+#ifndef NULL
+#define NULL ((void *) 0)
+#endif
+
+ /* common constants */
+
+#define DEFLATED 8
+
+#ifndef DEF_WBITS
+# define DEF_WBITS MAX_WBITS
+#endif
+/* default windowBits for decompression. MAX_WBITS is for compression only */
+
+#if MAX_MEM_LEVEL >= 8
+# define DEF_MEM_LEVEL 8
+#else
+# define DEF_MEM_LEVEL MAX_MEM_LEVEL
+#endif
+/* default memLevel */
+
+#define STORED_BLOCK 0
+#define STATIC_TREES 1
+#define DYN_TREES 2
+/* The three kinds of block type */
+
+#define MIN_MATCH 3
+#define MAX_MATCH 258
+/* The minimum and maximum match lengths */
+
+ /* functions */
+
+#include <linux/string.h>
+#define zmemcpy memcpy
+#define zmemzero(dest, len) memset(dest, 0, len)
+
+/* Diagnostic functions */
+#ifdef DEBUG_ZLIB
+# include <stdio.h>
+# ifndef verbose
+# define verbose 0
+# endif
+# define Assert(cond,msg) {if(!(cond)) z_error(msg);}
+# define Trace(x) fprintf x
+# define Tracev(x) {if (verbose) fprintf x ;}
+# define Tracevv(x) {if (verbose>1) fprintf x ;}
+# define Tracec(c,x) {if (verbose && (c)) fprintf x ;}
+# define Tracecv(c,x) {if (verbose>1 && (c)) fprintf x ;}
+#else
+# define Assert(cond,msg)
+# define Trace(x)
+# define Tracev(x)
+# define Tracevv(x)
+# define Tracec(c,x)
+# define Tracecv(c,x)
+#endif
+
+
+typedef uLong (*check_func) OF((uLong check, Bytef *buf, uInt len));
+
+/* voidpf zcalloc OF((voidpf opaque, unsigned items, unsigned size)); */
+/* void zcfree OF((voidpf opaque, voidpf ptr)); */
+
+#define ZALLOC(strm, items, size) \
+ (*((strm)->zalloc))((strm)->opaque, (items), (size))
+#define ZFREE(strm, addr, size) \
+ (*((strm)->zfree))((strm)->opaque, (voidpf)(addr), (size))
+#define TRY_FREE(s, p, n) {if (p) ZFREE(s, p, n);}
+
+/* deflate.h -- internal compression state
+ * Copyright (C) 1995 Jean-loup Gailly
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+/*+++++*/
+/* infblock.h -- header to use infblock.c
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+struct inflate_blocks_state;
+typedef struct inflate_blocks_state FAR inflate_blocks_statef;
+
+local inflate_blocks_statef * inflate_blocks_new OF((
+ z_stream *z,
+ check_func c, /* check function */
+ uInt w)); /* window size */
+
+local int inflate_blocks OF((
+ inflate_blocks_statef *,
+ z_stream *,
+ int)); /* initial return code */
+
+local void inflate_blocks_reset OF((
+ inflate_blocks_statef *,
+ z_stream *,
+ uLongf *)); /* check value on output */
+
+local int inflate_blocks_free OF((
+ inflate_blocks_statef *,
+ z_stream *,
+ uLongf *)); /* check value on output */
+
+local int inflate_addhistory OF((
+ inflate_blocks_statef *,
+ z_stream *));
+
+local int inflate_packet_flush OF((
+ inflate_blocks_statef *));
+
+/*+++++*/
+/* inftrees.h -- header to use inftrees.c
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+/* Huffman code lookup table entry--this entry is four bytes for machines
+ that have 16-bit pointers (e.g. PC's in the small or medium model). */
+
+typedef struct inflate_huft_s FAR inflate_huft;
+
+struct inflate_huft_s {
+ union {
+ struct {
+ Byte Exop; /* number of extra bits or operation */
+ Byte Bits; /* number of bits in this code or subcode */
+ } what;
+ uInt Nalloc; /* number of these allocated here */
+ Bytef *pad; /* pad structure to a power of 2 (4 bytes for */
+ } word; /* 16-bit, 8 bytes for 32-bit machines) */
+ union {
+ uInt Base; /* literal, length base, or distance base */
+ inflate_huft *Next; /* pointer to next level of table */
+ } more;
+};
+
+#ifdef DEBUG_ZLIB
+ local uInt inflate_hufts;
+#endif
+
+local int inflate_trees_bits OF((
+ uIntf *, /* 19 code lengths */
+ uIntf *, /* bits tree desired/actual depth */
+ inflate_huft * FAR *, /* bits tree result */
+ z_stream *)); /* for zalloc, zfree functions */
+
+local int inflate_trees_dynamic OF((
+ uInt, /* number of literal/length codes */
+ uInt, /* number of distance codes */
+ uIntf *, /* that many (total) code lengths */
+ uIntf *, /* literal desired/actual bit depth */
+ uIntf *, /* distance desired/actual bit depth */
+ inflate_huft * FAR *, /* literal/length tree result */
+ inflate_huft * FAR *, /* distance tree result */
+ z_stream *)); /* for zalloc, zfree functions */
+
+local int inflate_trees_fixed OF((
+ uIntf *, /* literal desired/actual bit depth */
+ uIntf *, /* distance desired/actual bit depth */
+ inflate_huft * FAR *, /* literal/length tree result */
+ inflate_huft * FAR *)); /* distance tree result */
+
+local int inflate_trees_free OF((
+ inflate_huft *, /* tables to free */
+ z_stream *)); /* for zfree function */
+
+
+/*+++++*/
+/* infcodes.h -- header to use infcodes.c
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+struct inflate_codes_state;
+typedef struct inflate_codes_state FAR inflate_codes_statef;
+
+local inflate_codes_statef *inflate_codes_new OF((
+ uInt, uInt,
+ inflate_huft *, inflate_huft *,
+ z_stream *));
+
+local int inflate_codes OF((
+ inflate_blocks_statef *,
+ z_stream *,
+ int));
+
+local void inflate_codes_free OF((
+ inflate_codes_statef *,
+ z_stream *));
+
+
+/*+++++*/
+/* inflate.c -- zlib interface to inflate modules
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* inflate private state */
+struct internal_state {
+
+ /* mode */
+ enum {
+ METHOD, /* waiting for method byte */
+ FLAG, /* waiting for flag byte */
+ BLOCKS, /* decompressing blocks */
+ CHECK4, /* four check bytes to go */
+ CHECK3, /* three check bytes to go */
+ CHECK2, /* two check bytes to go */
+ CHECK1, /* one check byte to go */
+ DONE, /* finished check, done */
+ BAD} /* got an error--stay here */
+ mode; /* current inflate mode */
+
+ /* mode dependent information */
+ union {
+ uInt method; /* if FLAGS, method byte */
+ struct {
+ uLong was; /* computed check value */
+ uLong need; /* stream check value */
+ } check; /* if CHECK, check values to compare */
+ uInt marker; /* if BAD, inflateSync's marker bytes count */
+ } sub; /* submode */
+
+ /* mode independent information */
+ int nowrap; /* flag for no wrapper */
+ uInt wbits; /* log2(window size) (8..15, defaults to 15) */
+ inflate_blocks_statef
+ *blocks; /* current inflate_blocks state */
+
+};
+
+
+int inflateReset(
+ z_stream *z
+)
+{
+ uLong c;
+
+ if (z == Z_NULL || z->state == Z_NULL)
+ return Z_STREAM_ERROR;
+ z->total_in = z->total_out = 0;
+ z->msg = Z_NULL;
+ z->state->mode = z->state->nowrap ? BLOCKS : METHOD;
+ inflate_blocks_reset(z->state->blocks, z, &c);
+ Trace((stderr, "inflate: reset\n"));
+ return Z_OK;
+}
+
+
+int inflateEnd(
+ z_stream *z
+)
+{
+ uLong c;
+
+ if (z == Z_NULL || z->state == Z_NULL || z->zfree == Z_NULL)
+ return Z_STREAM_ERROR;
+ if (z->state->blocks != Z_NULL)
+ inflate_blocks_free(z->state->blocks, z, &c);
+ ZFREE(z, z->state, sizeof(struct internal_state));
+ z->state = Z_NULL;
+ Trace((stderr, "inflate: end\n"));
+ return Z_OK;
+}
+
+
+int inflateInit2(
+ z_stream *z,
+ int w
+)
+{
+ /* initialize state */
+ if (z == Z_NULL)
+ return Z_STREAM_ERROR;
+/* if (z->zalloc == Z_NULL) z->zalloc = zcalloc; */
+/* if (z->zfree == Z_NULL) z->zfree = zcfree; */
+ if ((z->state = (struct internal_state FAR *)
+ ZALLOC(z,1,sizeof(struct internal_state))) == Z_NULL)
+ return Z_MEM_ERROR;
+ z->state->blocks = Z_NULL;
+
+ /* handle undocumented nowrap option (no zlib header or check) */
+ z->state->nowrap = 0;
+ if (w < 0)
+ {
+ w = - w;
+ z->state->nowrap = 1;
+ }
+
+ /* set window size */
+ if (w < 8 || w > 15)
+ {
+ inflateEnd(z);
+ return Z_STREAM_ERROR;
+ }
+ z->state->wbits = (uInt)w;
+
+ /* create inflate_blocks state */
+ if ((z->state->blocks =
+ inflate_blocks_new(z, z->state->nowrap ? Z_NULL : adler32, 1 << w))
+ == Z_NULL)
+ {
+ inflateEnd(z);
+ return Z_MEM_ERROR;
+ }
+ Trace((stderr, "inflate: allocated\n"));
+
+ /* reset state */
+ inflateReset(z);
+ return Z_OK;
+}
+
+
+int inflateInit(
+ z_stream *z
+)
+{
+ return inflateInit2(z, DEF_WBITS);
+}
+
+
+#define NEEDBYTE {if(z->avail_in==0)goto empty;r=Z_OK;}
+#define NEXTBYTE (z->avail_in--,z->total_in++,*z->next_in++)
+
+int inflate(
+ z_stream *z,
+ int f
+)
+{
+ int r;
+ uInt b;
+
+ if (z == Z_NULL || z->next_in == Z_NULL)
+ return Z_STREAM_ERROR;
+ r = Z_BUF_ERROR;
+ while (1) switch (z->state->mode)
+ {
+ case METHOD:
+ NEEDBYTE
+ if (((z->state->sub.method = NEXTBYTE) & 0xf) != DEFLATED)
+ {
+ z->state->mode = BAD;
+ z->msg = "unknown compression method";
+ z->state->sub.marker = 5; /* can't try inflateSync */
+ break;
+ }
+ if ((z->state->sub.method >> 4) + 8 > z->state->wbits)
+ {
+ z->state->mode = BAD;
+ z->msg = "invalid window size";
+ z->state->sub.marker = 5; /* can't try inflateSync */
+ break;
+ }
+ z->state->mode = FLAG;
+ case FLAG:
+ NEEDBYTE
+ if ((b = NEXTBYTE) & 0x20)
+ {
+ z->state->mode = BAD;
+ z->msg = "invalid reserved bit";
+ z->state->sub.marker = 5; /* can't try inflateSync */
+ break;
+ }
+ if (((z->state->sub.method << 8) + b) % 31)
+ {
+ z->state->mode = BAD;
+ z->msg = "incorrect header check";
+ z->state->sub.marker = 5; /* can't try inflateSync */
+ break;
+ }
+ Trace((stderr, "inflate: zlib header ok\n"));
+ z->state->mode = BLOCKS;
+ case BLOCKS:
+ r = inflate_blocks(z->state->blocks, z, r);
+ if (f == Z_PACKET_FLUSH && z->avail_in == 0 && z->avail_out != 0)
+ r = inflate_packet_flush(z->state->blocks);
+ if (r == Z_DATA_ERROR)
+ {
+ z->state->mode = BAD;
+ z->state->sub.marker = 0; /* can try inflateSync */
+ break;
+ }
+ if (r != Z_STREAM_END)
+ return r;
+ r = Z_OK;
+ inflate_blocks_reset(z->state->blocks, z, &z->state->sub.check.was);
+ if (z->state->nowrap)
+ {
+ z->state->mode = DONE;
+ break;
+ }
+ z->state->mode = CHECK4;
+ case CHECK4:
+ NEEDBYTE
+ z->state->sub.check.need = (uLong)NEXTBYTE << 24;
+ z->state->mode = CHECK3;
+ case CHECK3:
+ NEEDBYTE
+ z->state->sub.check.need += (uLong)NEXTBYTE << 16;
+ z->state->mode = CHECK2;
+ case CHECK2:
+ NEEDBYTE
+ z->state->sub.check.need += (uLong)NEXTBYTE << 8;
+ z->state->mode = CHECK1;
+ case CHECK1:
+ NEEDBYTE
+ z->state->sub.check.need += (uLong)NEXTBYTE;
+
+ if (z->state->sub.check.was != z->state->sub.check.need)
+ {
+ z->state->mode = BAD;
+ z->msg = "incorrect data check";
+ z->state->sub.marker = 5; /* can't try inflateSync */
+ break;
+ }
+ Trace((stderr, "inflate: zlib check ok\n"));
+ z->state->mode = DONE;
+ case DONE:
+ return Z_STREAM_END;
+ case BAD:
+ return Z_DATA_ERROR;
+ default:
+ return Z_STREAM_ERROR;
+ }
+
+ empty:
+ if (f != Z_PACKET_FLUSH)
+ return r;
+ z->state->mode = BAD;
+ z->state->sub.marker = 0; /* can try inflateSync */
+ return Z_DATA_ERROR;
+}
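+/*
+ * The METHOD and FLAG states above validate the two-byte zlib stream
+ * header. The same arithmetic can be checked in isolation; this is an
+ * illustrative standalone sketch of those checks (the helper name
+ * zlib_header_ok is hypothetical, not part of this library).
+ *
+ *   ```c
+ *   #define DEFLATED 8
+ *
+ *   /- Return 1 if the CMF/FLG header bytes are acceptable for a
+ *    - decompressor configured with log2 window size 'wbits'. -/
+ *   int zlib_header_ok(unsigned char cmf, unsigned char flg,
+ *                      unsigned int wbits)
+ *   {
+ *       if ((cmf & 0x0f) != DEFLATED)
+ *           return 0;     /- unknown compression method -/
+ *       if ((unsigned int)(cmf >> 4) + 8 > wbits)
+ *           return 0;     /- window larger than configured -/
+ *       if (flg & 0x20)
+ *           return 0;     /- reserved bit set (as this version treats it) -/
+ *       if ((((unsigned int)cmf << 8) + flg) % 31 != 0)
+ *           return 0;     /- header check failed -/
+ *       return 1;
+ *   }
+ *   ```
+ *
+ * The common header 0x78 0x9C passes for wbits = 15: 0x789C is a
+ * multiple of 31, the method nibble is 8, and (0x78 >> 4) + 8 = 15.
+ */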
+
+/*
+ * This subroutine adds the data at next_in/avail_in to the output history
+ * without performing any output. The output buffer must be "caught up";
+ * i.e. no pending output (hence s->read equals s->write), and the state must
+ * be BLOCKS (i.e. we should be willing to see the start of a series of
+ * BLOCKS). On exit, the output will also be caught up, and the checksum
+ * will have been updated if need be.
+ */
+
+int inflateIncomp(
+ z_stream *z
+)
+{
+ if (z->state->mode != BLOCKS)
+ return Z_DATA_ERROR;
+ return inflate_addhistory(z->state->blocks, z);
+}
+
+
+int inflateSync(
+ z_stream *z
+)
+{
+ uInt n; /* number of bytes to look at */
+ Bytef *p; /* pointer to bytes */
+ uInt m; /* number of marker bytes found in a row */
+ uLong r, w; /* temporaries to save total_in and total_out */
+
+ /* set up */
+ if (z == Z_NULL || z->state == Z_NULL)
+ return Z_STREAM_ERROR;
+ if (z->state->mode != BAD)
+ {
+ z->state->mode = BAD;
+ z->state->sub.marker = 0;
+ }
+ if ((n = z->avail_in) == 0)
+ return Z_BUF_ERROR;
+ p = z->next_in;
+ m = z->state->sub.marker;
+
+ /* search */
+ while (n && m < 4)
+ {
+ if (*p == (Byte)(m < 2 ? 0 : 0xff))
+ m++;
+ else if (*p)
+ m = 0;
+ else
+ m = 4 - m;
+ p++, n--;
+ }
+
+ /* restore */
+ z->total_in += p - z->next_in;
+ z->next_in = p;
+ z->avail_in = n;
+ z->state->sub.marker = m;
+
+ /* return no joy or set up to restart on a new block */
+ if (m != 4)
+ return Z_DATA_ERROR;
+ r = z->total_in; w = z->total_out;
+ inflateReset(z);
+ z->total_in = r; z->total_out = w;
+ z->state->mode = BLOCKS;
+ return Z_OK;
+}
+
+#undef NEEDBYTE
+#undef NEXTBYTE
+
+/*+++++*/
+/* infutil.h -- types and macros common to blocks and codes
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+/* inflate blocks semi-private state */
+struct inflate_blocks_state {
+
+ /* mode */
+ enum {
+ TYPE, /* get type bits (3, including end bit) */
+ LENS, /* get lengths for stored */
+ STORED, /* processing stored block */
+ TABLE, /* get table lengths */
+ BTREE, /* get bit lengths tree for a dynamic block */
+ DTREE, /* get length, distance trees for a dynamic block */
+ CODES, /* processing fixed or dynamic block */
+ DRY, /* output remaining window bytes */
+ DONEB, /* finished last block, done */
+ BADB} /* got a data error--stuck here */
+ mode; /* current inflate_block mode */
+
+ /* mode dependent information */
+ union {
+ uInt left; /* if STORED, bytes left to copy */
+ struct {
+ uInt table; /* table lengths (14 bits) */
+ uInt index; /* index into blens (or border) */
+ uIntf *blens; /* bit lengths of codes */
+ uInt bb; /* bit length tree depth */
+ inflate_huft *tb; /* bit length decoding tree */
+ int nblens; /* # elements allocated at blens */
+ } trees; /* if DTREE, decoding info for trees */
+ struct {
+ inflate_huft *tl, *td; /* trees to free */
+ inflate_codes_statef
+ *codes;
+ } decode; /* if CODES, current state */
+ } sub; /* submode */
+ uInt last; /* true if this block is the last block */
+
+ /* mode independent information */
+ uInt bitk; /* bits in bit buffer */
+ uLong bitb; /* bit buffer */
+ Bytef *window; /* sliding window */
+ Bytef *end; /* one byte after sliding window */
+ Bytef *read; /* window read pointer */
+ Bytef *write; /* window write pointer */
+ check_func checkfn; /* check function */
+ uLong check; /* check on output */
+
+};
+
+
+/* defines for inflate input/output */
+/* update pointers and return */
+#define UPDBITS {s->bitb=b;s->bitk=k;}
+#define UPDIN {z->avail_in=n;z->total_in+=p-z->next_in;z->next_in=p;}
+#define UPDOUT {s->write=q;}
+#define UPDATE {UPDBITS UPDIN UPDOUT}
+#define LEAVE {UPDATE return inflate_flush(s,z,r);}
+/* get bytes and bits */
+#define LOADIN {p=z->next_in;n=z->avail_in;b=s->bitb;k=s->bitk;}
+#define NEEDBYTE {if(n)r=Z_OK;else LEAVE}
+#define NEXTBYTE (n--,*p++)
+#define NEEDBITS(j) {while(k<(j)){NEEDBYTE;b|=((uLong)NEXTBYTE)<<k;k+=8;}}
+#define DUMPBITS(j) {b>>=(j);k-=(j);}
+/* output bytes */
+#define WAVAIL (q<s->read?s->read-q-1:s->end-q)
+#define LOADOUT {q=s->write;m=WAVAIL;}
+#define WRAP {if(q==s->end&&s->read!=s->window){q=s->window;m=WAVAIL;}}
+#define FLUSH {UPDOUT r=inflate_flush(s,z,r); LOADOUT}
+#define NEEDOUT {if(m==0){WRAP if(m==0){FLUSH WRAP if(m==0) LEAVE}}r=Z_OK;}
+#define OUTBYTE(a) {*q++=(Byte)(a);m--;}
+/* load local pointers */
+#define LOAD {LOADIN LOADOUT}
+
+/* And'ing with mask[n] masks the lower n bits */
+local uInt inflate_mask[] = {
+ 0x0000,
+ 0x0001, 0x0003, 0x0007, 0x000f, 0x001f, 0x003f, 0x007f, 0x00ff,
+ 0x01ff, 0x03ff, 0x07ff, 0x0fff, 0x1fff, 0x3fff, 0x7fff, 0xffff
+};
+
+/* copy as much as possible from the sliding window to the output area */
+local int inflate_flush OF((
+ inflate_blocks_statef *,
+ z_stream *,
+ int));
+
+/*+++++*/
+/* inffast.h -- header to use inffast.c
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+local int inflate_fast OF((
+ uInt,
+ uInt,
+ inflate_huft *,
+ inflate_huft *,
+ inflate_blocks_statef *,
+ z_stream *));
+
+
+/*+++++*/
+/* infblock.c -- interpret and process block types to last block
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* Table for deflate from PKZIP's appnote.txt. */
+local uInt border[] = { /* Order of the bit length code lengths */
+ 16, 17, 18, 0, 8, 7, 9, 6, 10, 5, 11, 4, 12, 3, 13, 2, 14, 1, 15};
+
+/*
+ Notes beyond the 1.93a appnote.txt:
+
+ 1. Distance pointers never point before the beginning of the output
+ stream.
+ 2. Distance pointers can point back across blocks, up to 32k away.
+ 3. There is an implied maximum of 7 bits for the bit length table and
+ 15 bits for the actual data.
+ 4. If only one code exists, then it is encoded using one bit. (Zero
+ would be more efficient, but perhaps a little confusing.) If two
+ codes exist, they are coded using one bit each (0 and 1).
+ 5. There is no way of sending zero distance codes--a dummy must be
+ sent if there are none. (History: a pre 2.0 version of PKZIP would
+ store blocks with no distance codes, but this was discovered to be
+ too harsh a criterion.) Valid only for 1.93a. 2.04c does allow
+ zero distance codes, which are sent as one code of zero bits in
+ length.
+ 6. There are up to 286 literal/length codes. Code 256 represents the
+ end-of-block. Note however that the static length tree defines
+ 288 codes just to fill out the Huffman codes. Codes 286 and 287
+ cannot be used though, since there is no length base or extra bits
+ defined for them. Similarly, there are up to 30 distance codes.
+ However, static trees define 32 codes (all 5 bits) to fill out the
+ Huffman codes, but the last two had better not show up in the data.
+ 7. Unzip can check dynamic Huffman blocks for complete code sets.
+ The exception is that a single code would not be complete (see #4).
+ 8. The five bits following the block type is really the number of
+ literal codes sent minus 257.
+ 9. Length codes 8,16,16 are interpreted as 13 length codes of 8 bits
+ (1+6+6). Therefore, to output three times the length, you output
+ three codes (1+1+1), whereas to output four times the same length,
+ you only need two codes (1+3). Hmm.
+ 10. In the tree reconstruction algorithm, Code = Code + Increment
+ only if BitLength(i) is not zero. (Pretty obvious.)
+ 11. Correction: 4 Bits: # of Bit Length codes - 4 (4 - 19)
+ 12. Note: length code 284 can represent 227-258, but length code 285
+ really is 258. The last length deserves its own, short code
+ since it gets used a lot in very redundant files. The length
+ 258 is special since 258 - 3 (the min match length) is 255.
+ 13. The literal/length and distance code bit lengths are read as a
+ single stream of lengths. It is possible (and advantageous) for
+ a repeat code (16, 17, or 18) to go across the boundary between
+ the two sets of lengths.
+ */
+
+
+local void inflate_blocks_reset(
+ inflate_blocks_statef *s,
+ z_stream *z,
+ uLongf *c
+)
+{
+ if (s->checkfn != Z_NULL)
+ *c = s->check;
+ if (s->mode == BTREE || s->mode == DTREE)
+ ZFREE(z, s->sub.trees.blens, s->sub.trees.nblens * sizeof(uInt));
+ if (s->mode == CODES)
+ {
+ inflate_codes_free(s->sub.decode.codes, z);
+ inflate_trees_free(s->sub.decode.td, z);
+ inflate_trees_free(s->sub.decode.tl, z);
+ }
+ s->mode = TYPE;
+ s->bitk = 0;
+ s->bitb = 0;
+ s->read = s->write = s->window;
+ if (s->checkfn != Z_NULL)
+ s->check = (*s->checkfn)(0L, Z_NULL, 0);
+ Trace((stderr, "inflate: blocks reset\n"));
+}
+
+
+local inflate_blocks_statef *inflate_blocks_new(
+ z_stream *z,
+ check_func c,
+ uInt w
+)
+{
+ inflate_blocks_statef *s;
+
+ if ((s = (inflate_blocks_statef *)ZALLOC
+ (z,1,sizeof(struct inflate_blocks_state))) == Z_NULL)
+ return s;
+ if ((s->window = (Bytef *)ZALLOC(z, 1, w)) == Z_NULL)
+ {
+ ZFREE(z, s, sizeof(struct inflate_blocks_state));
+ return Z_NULL;
+ }
+ s->end = s->window + w;
+ s->checkfn = c;
+ s->mode = TYPE;
+ Trace((stderr, "inflate: blocks allocated\n"));
+ inflate_blocks_reset(s, z, &s->check);
+ return s;
+}
+
+
+local int inflate_blocks(
+ inflate_blocks_statef *s,
+ z_stream *z,
+ int r
+)
+{
+ uInt t; /* temporary storage */
+ uLong b; /* bit buffer */
+ uInt k; /* bits in bit buffer */
+ Bytef *p; /* input data pointer */
+ uInt n; /* bytes available there */
+ Bytef *q; /* output window write pointer */
+ uInt m; /* bytes to end of window or read pointer */
+
+ /* copy input/output information to locals (UPDATE macro restores) */
+ LOAD
+
+ /* process input based on current state */
+ while (1) switch (s->mode)
+ {
+ case TYPE:
+ NEEDBITS(3)
+ t = (uInt)b & 7;
+ s->last = t & 1;
+ switch (t >> 1)
+ {
+ case 0: /* stored */
+ Trace((stderr, "inflate: stored block%s\n",
+ s->last ? " (last)" : ""));
+ DUMPBITS(3)
+ t = k & 7; /* go to byte boundary */
+ DUMPBITS(t)
+ s->mode = LENS; /* get length of stored block */
+ break;
+ case 1: /* fixed */
+ Trace((stderr, "inflate: fixed codes block%s\n",
+ s->last ? " (last)" : ""));
+ {
+ uInt bl, bd;
+ inflate_huft *tl, *td;
+
+ inflate_trees_fixed(&bl, &bd, &tl, &td);
+ s->sub.decode.codes = inflate_codes_new(bl, bd, tl, td, z);
+ if (s->sub.decode.codes == Z_NULL)
+ {
+ r = Z_MEM_ERROR;
+ LEAVE
+ }
+ s->sub.decode.tl = Z_NULL; /* don't try to free these */
+ s->sub.decode.td = Z_NULL;
+ }
+ DUMPBITS(3)
+ s->mode = CODES;
+ break;
+ case 2: /* dynamic */
+ Trace((stderr, "inflate: dynamic codes block%s\n",
+ s->last ? " (last)" : ""));
+ DUMPBITS(3)
+ s->mode = TABLE;
+ break;
+ case 3: /* illegal */
+ DUMPBITS(3)
+ s->mode = BADB;
+ z->msg = "invalid block type";
+ r = Z_DATA_ERROR;
+ LEAVE
+ }
+ break;
+ case LENS:
+ NEEDBITS(32)
+ if (((~b) >> 16) != (b & 0xffff))
+ {
+ s->mode = BADB;
+ z->msg = "invalid stored block lengths";
+ r = Z_DATA_ERROR;
+ LEAVE
+ }
+ s->sub.left = (uInt)b & 0xffff;
+ b = k = 0; /* dump bits */
+ Tracev((stderr, "inflate: stored length %u\n", s->sub.left));
+ s->mode = s->sub.left ? STORED : TYPE;
+ break;
+ case STORED:
+ if (n == 0)
+ LEAVE
+ NEEDOUT
+ t = s->sub.left;
+ if (t > n) t = n;
+ if (t > m) t = m;
+ zmemcpy(q, p, t);
+ p += t; n -= t;
+ q += t; m -= t;
+ if ((s->sub.left -= t) != 0)
+ break;
+ Tracev((stderr, "inflate: stored end, %lu total out\n",
+ z->total_out + (q >= s->read ? q - s->read :
+ (s->end - s->read) + (q - s->window))));
+ s->mode = s->last ? DRY : TYPE;
+ break;
+ case TABLE:
+ NEEDBITS(14)
+ s->sub.trees.table = t = (uInt)b & 0x3fff;
+#ifndef PKZIP_BUG_WORKAROUND
+ if ((t & 0x1f) > 29 || ((t >> 5) & 0x1f) > 29)
+ {
+ s->mode = BADB;
+ z->msg = "too many length or distance symbols";
+ r = Z_DATA_ERROR;
+ LEAVE
+ }
+#endif
+ t = 258 + (t & 0x1f) + ((t >> 5) & 0x1f);
+ if (t < 19)
+ t = 19;
+ if ((s->sub.trees.blens = (uIntf*)ZALLOC(z, t, sizeof(uInt))) == Z_NULL)
+ {
+ r = Z_MEM_ERROR;
+ LEAVE
+ }
+ s->sub.trees.nblens = t;
+ DUMPBITS(14)
+ s->sub.trees.index = 0;
+ Tracev((stderr, "inflate: table sizes ok\n"));
+ s->mode = BTREE;
+ case BTREE:
+ while (s->sub.trees.index < 4 + (s->sub.trees.table >> 10))
+ {
+ NEEDBITS(3)
+ s->sub.trees.blens[border[s->sub.trees.index++]] = (uInt)b & 7;
+ DUMPBITS(3)
+ }
+ while (s->sub.trees.index < 19)
+ s->sub.trees.blens[border[s->sub.trees.index++]] = 0;
+ s->sub.trees.bb = 7;
+ t = inflate_trees_bits(s->sub.trees.blens, &s->sub.trees.bb,
+ &s->sub.trees.tb, z);
+ if (t != Z_OK)
+ {
+ r = t;
+ if (r == Z_DATA_ERROR)
+ s->mode = BADB;
+ LEAVE
+ }
+ s->sub.trees.index = 0;
+ Tracev((stderr, "inflate: bits tree ok\n"));
+ s->mode = DTREE;
+ case DTREE:
+ while (t = s->sub.trees.table,
+ s->sub.trees.index < 258 + (t & 0x1f) + ((t >> 5) & 0x1f))
+ {
+ inflate_huft *h;
+ uInt i, j, c;
+
+ t = s->sub.trees.bb;
+ NEEDBITS(t)
+ h = s->sub.trees.tb + ((uInt)b & inflate_mask[t]);
+ t = h->word.what.Bits;
+ c = h->more.Base;
+ if (c < 16)
+ {
+ DUMPBITS(t)
+ s->sub.trees.blens[s->sub.trees.index++] = c;
+ }
+ else /* c == 16..18 */
+ {
+ i = c == 18 ? 7 : c - 14;
+ j = c == 18 ? 11 : 3;
+ NEEDBITS(t + i)
+ DUMPBITS(t)
+ j += (uInt)b & inflate_mask[i];
+ DUMPBITS(i)
+ i = s->sub.trees.index;
+ t = s->sub.trees.table;
+ if (i + j > 258 + (t & 0x1f) + ((t >> 5) & 0x1f) ||
+ (c == 16 && i < 1))
+ {
+ s->mode = BADB;
+ z->msg = "invalid bit length repeat";
+ r = Z_DATA_ERROR;
+ LEAVE
+ }
+ c = c == 16 ? s->sub.trees.blens[i - 1] : 0;
+ do {
+ s->sub.trees.blens[i++] = c;
+ } while (--j);
+ s->sub.trees.index = i;
+ }
+ }
+ inflate_trees_free(s->sub.trees.tb, z);
+ s->sub.trees.tb = Z_NULL;
+ {
+ uInt bl, bd;
+ inflate_huft *tl, *td;
+ inflate_codes_statef *c;
+
+ bl = 9; /* must be <= 9 for lookahead assumptions */
+ bd = 6; /* must be <= 9 for lookahead assumptions */
+ t = s->sub.trees.table;
+ t = inflate_trees_dynamic(257 + (t & 0x1f), 1 + ((t >> 5) & 0x1f),
+ s->sub.trees.blens, &bl, &bd, &tl, &td, z);
+ if (t != Z_OK)
+ {
+ if (t == (uInt)Z_DATA_ERROR)
+ s->mode = BADB;
+ r = t;
+ LEAVE
+ }
+ Tracev((stderr, "inflate: trees ok\n"));
+ if ((c = inflate_codes_new(bl, bd, tl, td, z)) == Z_NULL)
+ {
+ inflate_trees_free(td, z);
+ inflate_trees_free(tl, z);
+ r = Z_MEM_ERROR;
+ LEAVE
+ }
+ ZFREE(z, s->sub.trees.blens, s->sub.trees.nblens * sizeof(uInt));
+ s->sub.decode.codes = c;
+ s->sub.decode.tl = tl;
+ s->sub.decode.td = td;
+ }
+ s->mode = CODES;
+ case CODES:
+ UPDATE
+ if ((r = inflate_codes(s, z, r)) != Z_STREAM_END)
+ return inflate_flush(s, z, r);
+ r = Z_OK;
+ inflate_codes_free(s->sub.decode.codes, z);
+ inflate_trees_free(s->sub.decode.td, z);
+ inflate_trees_free(s->sub.decode.tl, z);
+ LOAD
+ Tracev((stderr, "inflate: codes end, %lu total out\n",
+ z->total_out + (q >= s->read ? q - s->read :
+ (s->end - s->read) + (q - s->window))));
+ if (!s->last)
+ {
+ s->mode = TYPE;
+ break;
+ }
+ if (k > 7) /* return unused byte, if any */
+ {
+ Assert(k < 16, "inflate_codes grabbed too many bytes")
+ k -= 8;
+ n++;
+ p--; /* can always return one */
+ }
+ s->mode = DRY;
+ case DRY:
+ FLUSH
+ if (s->read != s->write)
+ LEAVE
+ s->mode = DONEB;
+ case DONEB:
+ r = Z_STREAM_END;
+ LEAVE
+ case BADB:
+ r = Z_DATA_ERROR;
+ LEAVE
+ default:
+ r = Z_STREAM_ERROR;
+ LEAVE
+ }
+}
+
+
+local int inflate_blocks_free(
+ inflate_blocks_statef *s,
+ z_stream *z,
+ uLongf *c
+)
+{
+ inflate_blocks_reset(s, z, c);
+ ZFREE(z, s->window, s->end - s->window);
+ ZFREE(z, s, sizeof(struct inflate_blocks_state));
+ Trace((stderr, "inflate: blocks freed\n"));
+ return Z_OK;
+}
+
+/*
+ * This subroutine adds the data at next_in/avail_in to the output history
+ * without performing any output. The output buffer must be "caught up";
+ * i.e. no pending output (hence s->read equals s->write), and the state must
+ * be BLOCKS (i.e. we should be willing to see the start of a series of
+ * BLOCKS). On exit, the output will also be caught up, and the checksum
+ * will have been updated if need be.
+ */
+local int inflate_addhistory(
+ inflate_blocks_statef *s,
+ z_stream *z
+)
+{
+ uLong b; /* bit buffer */ /* NOT USED HERE */
+ uInt k; /* bits in bit buffer */ /* NOT USED HERE */
+ uInt t; /* temporary storage */
+ Bytef *p; /* input data pointer */
+ uInt n; /* bytes available there */
+ Bytef *q; /* output window write pointer */
+ uInt m; /* bytes to end of window or read pointer */
+
+ if (s->read != s->write)
+ return Z_STREAM_ERROR;
+ if (s->mode != TYPE)
+ return Z_DATA_ERROR;
+
+ /* we're ready to rock */
+ LOAD
+ /* while there is input ready, copy to output buffer, moving
+ * pointers as needed.
+ */
+ while (n) {
+ t = n; /* how many to do */
+ /* is there room until end of buffer? */
+ if (t > m) t = m;
+ /* update check information */
+ if (s->checkfn != Z_NULL)
+ s->check = (*s->checkfn)(s->check, q, t);
+ zmemcpy(q, p, t);
+ q += t;
+ p += t;
+ n -= t;
+ z->total_out += t;
+ s->read = q; /* drag read pointer forward */
+/* WRAP */ /* expand WRAP macro by hand to handle s->read */
+ if (q == s->end) {
+ s->read = q = s->window;
+ m = WAVAIL;
+ }
+ }
+ UPDATE
+ return Z_OK;
+}
+
+
+/*
+ * At the end of a Deflate-compressed PPP packet, we expect to have seen
+ * a `stored' block type value but not the (zero) length bytes.
+ */
+local int inflate_packet_flush(
+ inflate_blocks_statef *s
+)
+{
+ if (s->mode != LENS)
+ return Z_DATA_ERROR;
+ s->mode = TYPE;
+ return Z_OK;
+}
+
+
+/*+++++*/
+/* inftrees.c -- generate Huffman trees for efficient decoding
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* simplify the use of the inflate_huft type with some defines */
+#define base more.Base
+#define next more.Next
+#define exop word.what.Exop
+#define bits word.what.Bits
+
+
+local int huft_build OF((
+ uIntf *, /* code lengths in bits */
+ uInt, /* number of codes */
+ uInt, /* number of "simple" codes */
+ uIntf *, /* list of base values for non-simple codes */
+ uIntf *, /* list of extra bits for non-simple codes */
+ inflate_huft * FAR*,/* result: starting table */
+ uIntf *, /* maximum lookup bits (returns actual) */
+ z_stream *)); /* for zalloc function */
+
+local voidpf falloc OF((
+ voidpf, /* opaque pointer (not used) */
+ uInt, /* number of items */
+ uInt)); /* size of item */
+
+local void ffree OF((
+ voidpf q, /* opaque pointer (not used) */
+ voidpf p, /* what to free (not used) */
+ uInt n)); /* number of bytes (not used) */
+
+/* Tables for deflate from PKZIP's appnote.txt. */
+local uInt cplens[] = { /* Copy lengths for literal codes 257..285 */
+ 3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17, 19, 23, 27, 31,
+ 35, 43, 51, 59, 67, 83, 99, 115, 131, 163, 195, 227, 258, 0, 0};
+ /* actually lengths - 2; also see note #13 above about 258 */
+local uInt cplext[] = { /* Extra bits for literal codes 257..285 */
+ 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2,
+ 3, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 0, 192, 192}; /* 192==invalid */
+local uInt cpdist[] = { /* Copy offsets for distance codes 0..29 */
+ 1, 2, 3, 4, 5, 7, 9, 13, 17, 25, 33, 49, 65, 97, 129, 193,
+ 257, 385, 513, 769, 1025, 1537, 2049, 3073, 4097, 6145,
+ 8193, 12289, 16385, 24577};
+local uInt cpdext[] = { /* Extra bits for distance codes */
+ 0, 0, 0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6,
+ 7, 7, 8, 8, 9, 9, 10, 10, 11, 11,
+ 12, 12, 13, 13};
+
+/*
+ Huffman code decoding is performed using a multi-level table lookup.
+ The fastest way to decode is to simply build a lookup table whose
+ size is determined by the longest code. However, the time it takes
+ to build this table can also be a factor if the data being decoded
+ is not very long. The most common codes are necessarily the
+ shortest codes, so those codes dominate the decoding time, and hence
+ the speed. The idea is you can have a shorter table that decodes the
+ shorter, more probable codes, and then point to subsidiary tables for
+ the longer codes. The time it costs to decode the longer codes is
+ then traded against the time it takes to make longer tables.
+
+ The results of this trade are in the variables lbits and dbits
+ below. lbits is the number of bits the first level table for literal/
+ length codes can decode in one step, and dbits is the same thing for
+ the distance codes. Subsequent tables are also less than or equal to
+ those sizes. These values may be adjusted either when all of the
+ codes are shorter than that, in which case the longest code length in
+ bits is used, or when the shortest code is *longer* than the requested
+ table size, in which case the length of the shortest code in bits is
+ used.
+
+ There are two different values for the two tables, since they code a
+ different number of possibilities each. The literal/length table
+ codes 286 possible values, or in a flat code, a little over eight
+ bits. The distance table codes 30 possible values, or a little less
+ than five bits, flat. The optimum values for speed end up being
+ about one bit more than those, so lbits is 8+1 and dbits is 5+1.
+ The optimum values may differ though from machine to machine, and
+ possibly even between compilers. Your mileage may vary.
+ */
+
+
+/* If BMAX needs to be larger than 16, then h and x[] should be uLong. */
+#define BMAX 15 /* maximum bit length of any code */
+#define N_MAX 288 /* maximum number of codes in any set */
+
+#ifdef DEBUG_ZLIB
+ uInt inflate_hufts;
+#endif
+
+local int huft_build(
+ uIntf *b, /* code lengths in bits (all assumed <= BMAX) */
+ uInt n, /* number of codes (assumed <= N_MAX) */
+ uInt s, /* number of simple-valued codes (0..s-1) */
+ uIntf *d, /* list of base values for non-simple codes */
+ uIntf *e, /* list of extra bits for non-simple codes */
+ inflate_huft * FAR *t, /* result: starting table */
+ uIntf *m, /* maximum lookup bits, returns actual */
+ z_stream *zs /* for zalloc function */
+)
+/* Given a list of code lengths and a maximum table size, make a set of
+ tables to decode that set of codes. Return Z_OK on success, Z_BUF_ERROR
+ if the given code set is incomplete (the tables are still built in this
+ case), Z_DATA_ERROR if the input is invalid (all zero length codes or an
+ over-subscribed set of lengths), or Z_MEM_ERROR if not enough memory. */
+{
+
+ uInt a; /* counter for codes of length k */
+ uInt c[BMAX+1]; /* bit length count table */
+ uInt f; /* i repeats in table every f entries */
+ int g; /* maximum code length */
+ int h; /* table level */
+ register uInt i; /* counter, current code */
+ register uInt j; /* counter */
+ register int k; /* number of bits in current code */
+ int l; /* bits per table (returned in m) */
+ register uIntf *p; /* pointer into c[], b[], or v[] */
+ inflate_huft *q; /* points to current table */
+ struct inflate_huft_s r; /* table entry for structure assignment */
+ inflate_huft *u[BMAX]; /* table stack */
+ uInt v[N_MAX]; /* values in order of bit length */
+ register int w; /* bits before this table == (l * h) */
+ uInt x[BMAX+1]; /* bit offsets, then code stack */
+ uIntf *xp; /* pointer into x */
+ int y; /* number of dummy codes added */
+ uInt z; /* number of entries in current table */
+
+
+ /* Generate counts for each bit length */
+ p = c;
+#define C0 *p++ = 0;
+#define C2 C0 C0 C0 C0
+#define C4 C2 C2 C2 C2
+ C4 /* clear c[]--assume BMAX+1 is 16 */
+ p = b; i = n;
+ do {
+ c[*p++]++; /* assume all entries <= BMAX */
+ } while (--i);
+ if (c[0] == n) /* null input--all zero length codes */
+ {
+ *t = (inflate_huft *)Z_NULL;
+ *m = 0;
+ return Z_OK;
+ }
+
+
+ /* Find minimum and maximum length, bound *m by those */
+ l = *m;
+ for (j = 1; j <= BMAX; j++)
+ if (c[j])
+ break;
+ k = j; /* minimum code length */
+ if ((uInt)l < j)
+ l = j;
+ for (i = BMAX; i; i--)
+ if (c[i])
+ break;
+ g = i; /* maximum code length */
+ if ((uInt)l > i)
+ l = i;
+ *m = l;
+
+
+ /* Adjust last length count to fill out codes, if needed */
+ for (y = 1 << j; j < i; j++, y <<= 1)
+ if ((y -= c[j]) < 0)
+ return Z_DATA_ERROR;
+ if ((y -= c[i]) < 0)
+ return Z_DATA_ERROR;
+ c[i] += y;
+
+
+ /* Generate starting offsets into the value table for each length */
+ x[1] = j = 0;
+ p = c + 1; xp = x + 2;
+ while (--i) { /* note that i == g from above */
+ *xp++ = (j += *p++);
+ }
+
+
+ /* Make a table of values in order of bit lengths */
+ p = b; i = 0;
+ do {
+ if ((j = *p++) != 0)
+ v[x[j]++] = i;
+ } while (++i < n);
+
+
+ /* Generate the Huffman codes and for each, make the table entries */
+ x[0] = i = 0; /* first Huffman code is zero */
+ p = v; /* grab values in bit order */
+ h = -1; /* no tables yet--level -1 */
+ w = -l; /* bits decoded == (l * h) */
+ u[0] = (inflate_huft *)Z_NULL; /* just to keep compilers happy */
+ q = (inflate_huft *)Z_NULL; /* ditto */
+ z = 0; /* ditto */
+
+ /* go through the bit lengths (k already is bits in shortest code) */
+ for (; k <= g; k++)
+ {
+ a = c[k];
+ while (a--)
+ {
+ /* here i is the Huffman code of length k bits for value *p */
+ /* make tables up to required level */
+ while (k > w + l)
+ {
+ h++;
+ w += l; /* previous table always l bits */
+
+ /* compute minimum size table less than or equal to l bits */
+ z = (z = g - w) > (uInt)l ? l : z; /* table size upper limit */
+ if ((f = 1 << (j = k - w)) > a + 1) /* try a k-w bit table */
+ { /* too few codes for k-w bit table */
+ f -= a + 1; /* deduct codes from patterns left */
+ xp = c + k;
+ if (j < z)
+ while (++j < z) /* try smaller tables up to z bits */
+ {
+ if ((f <<= 1) <= *++xp)
+ break; /* enough codes to use up j bits */
+ f -= *xp; /* else deduct codes from patterns */
+ }
+ }
+ z = 1 << j; /* table entries for j-bit table */
+
+ /* allocate and link in new table */
+ if ((q = (inflate_huft *)ZALLOC
+ (zs,z + 1,sizeof(inflate_huft))) == Z_NULL)
+ {
+ if (h)
+ inflate_trees_free(u[0], zs);
+ return Z_MEM_ERROR; /* not enough memory */
+ }
+ q->word.Nalloc = z + 1;
+#ifdef DEBUG_ZLIB
+ inflate_hufts += z + 1;
+#endif
+ *t = q + 1; /* link to list for huft_free() */
+ *(t = &(q->next)) = Z_NULL;
+ u[h] = ++q; /* table starts after link */
+
+ /* connect to last table, if there is one */
+ if (h)
+ {
+ x[h] = i; /* save pattern for backing up */
+ r.bits = (Byte)l; /* bits to dump before this table */
+ r.exop = (Byte)j; /* bits in this table */
+ r.next = q; /* pointer to this table */
+ j = i >> (w - l); /* (get around Turbo C bug) */
+ u[h-1][j] = r; /* connect to last table */
+ }
+ }
+
+ /* set up table entry in r */
+ r.bits = (Byte)(k - w);
+ if (p >= v + n)
+ r.exop = 128 + 64; /* out of values--invalid code */
+ else if (*p < s)
+ {
+ r.exop = (Byte)(*p < 256 ? 0 : 32 + 64); /* 256 is end-of-block */
+ r.base = *p++; /* simple code is just the value */
+ }
+ else
+ {
+ r.exop = (Byte)e[*p - s] + 16 + 64; /* non-simple--look up in lists */
+ r.base = d[*p++ - s];
+ }
+
+ /* fill code-like entries with r */
+ f = 1 << (k - w);
+ for (j = i >> w; j < z; j += f)
+ q[j] = r;
+
+ /* backwards increment the k-bit code i */
+ for (j = 1 << (k - 1); i & j; j >>= 1)
+ i ^= j;
+ i ^= j;
+
+ /* backup over finished tables */
+ while ((i & ((1 << w) - 1)) != x[h])
+ {
+ h--; /* don't need to update q */
+ w -= l;
+ }
+ }
+ }
+
+
+ /* Return Z_BUF_ERROR if we were given an incomplete table */
+ return y != 0 && g != 1 ? Z_BUF_ERROR : Z_OK;
+}
+
+
+local int inflate_trees_bits(
+ uIntf *c, /* 19 code lengths */
+ uIntf *bb, /* bits tree desired/actual depth */
+ inflate_huft * FAR *tb, /* bits tree result */
+ z_stream *z /* for zfree function */
+)
+{
+ int r;
+
+ r = huft_build(c, 19, 19, (uIntf*)Z_NULL, (uIntf*)Z_NULL, tb, bb, z);
+ if (r == Z_DATA_ERROR)
+ z->msg = "oversubscribed dynamic bit lengths tree";
+ else if (r == Z_BUF_ERROR)
+ {
+ inflate_trees_free(*tb, z);
+ z->msg = "incomplete dynamic bit lengths tree";
+ r = Z_DATA_ERROR;
+ }
+ return r;
+}
+
+
+local int inflate_trees_dynamic(
+ uInt nl, /* number of literal/length codes */
+ uInt nd, /* number of distance codes */
+ uIntf *c, /* that many (total) code lengths */
+ uIntf *bl, /* literal desired/actual bit depth */
+ uIntf *bd, /* distance desired/actual bit depth */
+ inflate_huft * FAR *tl, /* literal/length tree result */
+ inflate_huft * FAR *td, /* distance tree result */
+ z_stream *z /* for zfree function */
+)
+{
+ int r;
+
+ /* build literal/length tree */
+ if ((r = huft_build(c, nl, 257, cplens, cplext, tl, bl, z)) != Z_OK)
+ {
+ if (r == Z_DATA_ERROR)
+ z->msg = "oversubscribed literal/length tree";
+ else if (r == Z_BUF_ERROR)
+ {
+ inflate_trees_free(*tl, z);
+ z->msg = "incomplete literal/length tree";
+ r = Z_DATA_ERROR;
+ }
+ return r;
+ }
+
+ /* build distance tree */
+ if ((r = huft_build(c + nl, nd, 0, cpdist, cpdext, td, bd, z)) != Z_OK)
+ {
+ if (r == Z_DATA_ERROR)
+ z->msg = "oversubscribed distance tree";
+ else if (r == Z_BUF_ERROR) {
+#ifdef PKZIP_BUG_WORKAROUND
+ r = Z_OK;
+ }
+#else
+ inflate_trees_free(*td, z);
+ z->msg = "incomplete distance tree";
+ r = Z_DATA_ERROR;
+ }
+ inflate_trees_free(*tl, z);
+ return r;
+#endif
+ }
+
+ /* done */
+ return Z_OK;
+}
+
+
+/* build fixed tables only once--keep them here */
+local int fixed_lock = 0;
+local int fixed_built = 0;
+#define FIXEDH 530 /* number of hufts used by fixed tables */
+local uInt fixed_left = FIXEDH;
+local inflate_huft fixed_mem[FIXEDH];
+local uInt fixed_bl;
+local uInt fixed_bd;
+local inflate_huft *fixed_tl;
+local inflate_huft *fixed_td;
+
+
+local voidpf falloc(
+ voidpf q, /* opaque pointer (not used) */
+ uInt n, /* number of items */
+ uInt s /* size of item */
+)
+{
+ Assert(s == sizeof(inflate_huft) && n <= fixed_left,
+ "inflate_trees falloc overflow");
+ if (q) s++; /* to make some compilers happy */
+ fixed_left -= n;
+ return (voidpf)(fixed_mem + fixed_left);
+}
+
+
+local void ffree(
+ voidpf q, /* opaque pointer (not used) */
+ voidpf p, /* what to free (not used) */
+ uInt n /* number of bytes (not used) */
+)
+{
+ Assert(0, "inflate_trees ffree called!");
+ if (q) q = p; /* to make some compilers happy */
+}
+
+
+local int inflate_trees_fixed(
+ uIntf *bl, /* literal desired/actual bit depth */
+ uIntf *bd, /* distance desired/actual bit depth */
+ inflate_huft * FAR *tl, /* literal/length tree result */
+ inflate_huft * FAR *td /* distance tree result */
+)
+{
+ /* build fixed tables if not built already--lock out other instances */
+ while (++fixed_lock > 1)
+ fixed_lock--;
+ if (!fixed_built)
+ {
+ int k; /* temporary variable */
+ unsigned c[288]; /* length list for huft_build */
+ z_stream z; /* for falloc function */
+
+ /* set up fake z_stream for memory routines */
+ z.zalloc = falloc;
+ z.zfree = ffree;
+ z.opaque = Z_NULL;
+
+ /* literal table */
+ for (k = 0; k < 144; k++)
+ c[k] = 8;
+ for (; k < 256; k++)
+ c[k] = 9;
+ for (; k < 280; k++)
+ c[k] = 7;
+ for (; k < 288; k++)
+ c[k] = 8;
+ fixed_bl = 7;
+ huft_build(c, 288, 257, cplens, cplext, &fixed_tl, &fixed_bl, &z);
+
+ /* distance table */
+ for (k = 0; k < 30; k++)
+ c[k] = 5;
+ fixed_bd = 5;
+ huft_build(c, 30, 0, cpdist, cpdext, &fixed_td, &fixed_bd, &z);
+
+ /* done */
+ fixed_built = 1;
+ }
+ fixed_lock--;
+ *bl = fixed_bl;
+ *bd = fixed_bd;
+ *tl = fixed_tl;
+ *td = fixed_td;
+ return Z_OK;
+}
+
+
+local int inflate_trees_free(
+ inflate_huft *t, /* table to free */
+ z_stream *z /* for zfree function */
+)
+/* Free the malloc'ed tables built by huft_build(), which makes a linked
+ list of the tables it made, with the links in a dummy first entry of
+ each table. */
+{
+ register inflate_huft *p, *q;
+
+ /* Go through linked list, freeing from the malloced (t[-1]) address. */
+ p = t;
+ while (p != Z_NULL)
+ {
+ q = (--p)->next;
+ ZFREE(z, p, p->word.Nalloc * sizeof(inflate_huft));
+ p = q;
+ }
+ return Z_OK;
+}
+
+/*+++++*/
+/* infcodes.c -- process literals and length/distance pairs
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* simplify the use of the inflate_huft type with some defines */
+#define base more.Base
+#define next more.Next
+#define exop word.what.Exop
+#define bits word.what.Bits
+
+/* inflate codes private state */
+struct inflate_codes_state {
+
+ /* mode */
+ enum { /* waiting for "i:"=input, "o:"=output, "x:"=nothing */
+ START, /* x: set up for LEN */
+ LEN, /* i: get length/literal/eob next */
+ LENEXT, /* i: getting length extra (have base) */
+ DIST, /* i: get distance next */
+ DISTEXT, /* i: getting distance extra */
+ COPY, /* o: copying bytes in window, waiting for space */
+ LIT, /* o: got literal, waiting for output space */
+ WASH, /* o: got eob, possibly still output waiting */
+ END, /* x: got eob and all data flushed */
+ BADCODE} /* x: got error */
+ mode; /* current inflate_codes mode */
+
+ /* mode dependent information */
+ uInt len;
+ union {
+ struct {
+ inflate_huft *tree; /* pointer into tree */
+ uInt need; /* bits needed */
+ } code; /* if LEN or DIST, where in tree */
+ uInt lit; /* if LIT, literal */
+ struct {
+ uInt get; /* bits to get for extra */
+ uInt dist; /* distance back to copy from */
+ } copy; /* if EXT or COPY, where and how much */
+ } sub; /* submode */
+
+ /* mode independent information */
+ Byte lbits; /* ltree bits decoded per branch */
+ Byte dbits; /* dtree bits decoded per branch */
+ inflate_huft *ltree; /* literal/length/eob tree */
+ inflate_huft *dtree; /* distance tree */
+
+};
+
+
+local inflate_codes_statef *inflate_codes_new(
+ uInt bl,
+ uInt bd,
+ inflate_huft *tl,
+ inflate_huft *td,
+ z_stream *z
+)
+{
+ inflate_codes_statef *c;
+
+ if ((c = (inflate_codes_statef *)
+ ZALLOC(z,1,sizeof(struct inflate_codes_state))) != Z_NULL)
+ {
+ c->mode = START;
+ c->lbits = (Byte)bl;
+ c->dbits = (Byte)bd;
+ c->ltree = tl;
+ c->dtree = td;
+ Tracev((stderr, "inflate: codes new\n"));
+ }
+ return c;
+}
+
+
+local int inflate_codes(
+ inflate_blocks_statef *s,
+ z_stream *z,
+ int r
+)
+{
+ uInt j; /* temporary storage */
+ inflate_huft *t; /* temporary pointer */
+ uInt e; /* extra bits or operation */
+ uLong b; /* bit buffer */
+ uInt k; /* bits in bit buffer */
+ Bytef *p; /* input data pointer */
+ uInt n; /* bytes available there */
+ Bytef *q; /* output window write pointer */
+ uInt m; /* bytes to end of window or read pointer */
+ Bytef *f; /* pointer to copy strings from */
+ inflate_codes_statef *c = s->sub.decode.codes; /* codes state */
+
+ /* copy input/output information to locals (UPDATE macro restores) */
+ LOAD
+
+ /* process input and output based on current state */
+ while (1) switch (c->mode)
+ { /* waiting for "i:"=input, "o:"=output, "x:"=nothing */
+ case START: /* x: set up for LEN */
+#ifndef SLOW
+ if (m >= 258 && n >= 10)
+ {
+ UPDATE
+ r = inflate_fast(c->lbits, c->dbits, c->ltree, c->dtree, s, z);
+ LOAD
+ if (r != Z_OK)
+ {
+ c->mode = r == Z_STREAM_END ? WASH : BADCODE;
+ break;
+ }
+ }
+#endif /* !SLOW */
+ c->sub.code.need = c->lbits;
+ c->sub.code.tree = c->ltree;
+ c->mode = LEN;
+ case LEN: /* i: get length/literal/eob next */
+ j = c->sub.code.need;
+ NEEDBITS(j)
+ t = c->sub.code.tree + ((uInt)b & inflate_mask[j]);
+ DUMPBITS(t->bits)
+ e = (uInt)(t->exop);
+ if (e == 0) /* literal */
+ {
+ c->sub.lit = t->base;
+ Tracevv((stderr, t->base >= 0x20 && t->base < 0x7f ?
+ "inflate: literal '%c'\n" :
+ "inflate: literal 0x%02x\n", t->base));
+ c->mode = LIT;
+ break;
+ }
+ if (e & 16) /* length */
+ {
+ c->sub.copy.get = e & 15;
+ c->len = t->base;
+ c->mode = LENEXT;
+ break;
+ }
+ if ((e & 64) == 0) /* next table */
+ {
+ c->sub.code.need = e;
+ c->sub.code.tree = t->next;
+ break;
+ }
+ if (e & 32) /* end of block */
+ {
+ Tracevv((stderr, "inflate: end of block\n"));
+ c->mode = WASH;
+ break;
+ }
+ c->mode = BADCODE; /* invalid code */
+ z->msg = "invalid literal/length code";
+ r = Z_DATA_ERROR;
+ LEAVE
+ case LENEXT: /* i: getting length extra (have base) */
+ j = c->sub.copy.get;
+ NEEDBITS(j)
+ c->len += (uInt)b & inflate_mask[j];
+ DUMPBITS(j)
+ c->sub.code.need = c->dbits;
+ c->sub.code.tree = c->dtree;
+ Tracevv((stderr, "inflate: length %u\n", c->len));
+ c->mode = DIST;
+ case DIST: /* i: get distance next */
+ j = c->sub.code.need;
+ NEEDBITS(j)
+ t = c->sub.code.tree + ((uInt)b & inflate_mask[j]);
+ DUMPBITS(t->bits)
+ e = (uInt)(t->exop);
+ if (e & 16) /* distance */
+ {
+ c->sub.copy.get = e & 15;
+ c->sub.copy.dist = t->base;
+ c->mode = DISTEXT;
+ break;
+ }
+ if ((e & 64) == 0) /* next table */
+ {
+ c->sub.code.need = e;
+ c->sub.code.tree = t->next;
+ break;
+ }
+ c->mode = BADCODE; /* invalid code */
+ z->msg = "invalid distance code";
+ r = Z_DATA_ERROR;
+ LEAVE
+ case DISTEXT: /* i: getting distance extra */
+ j = c->sub.copy.get;
+ NEEDBITS(j)
+ c->sub.copy.dist += (uInt)b & inflate_mask[j];
+ DUMPBITS(j)
+ Tracevv((stderr, "inflate: distance %u\n", c->sub.copy.dist));
+ c->mode = COPY;
+ case COPY: /* o: copying bytes in window, waiting for space */
+#ifndef __TURBOC__ /* Turbo C bug for following expression */
+ f = (uInt)(q - s->window) < c->sub.copy.dist ?
+ s->end - (c->sub.copy.dist - (q - s->window)) :
+ q - c->sub.copy.dist;
+#else
+ f = q - c->sub.copy.dist;
+ if ((uInt)(q - s->window) < c->sub.copy.dist)
+ f = s->end - (c->sub.copy.dist - (q - s->window));
+#endif
+ while (c->len)
+ {
+ NEEDOUT
+ OUTBYTE(*f++)
+ if (f == s->end)
+ f = s->window;
+ c->len--;
+ }
+ c->mode = START;
+ break;
+ case LIT: /* o: got literal, waiting for output space */
+ NEEDOUT
+ OUTBYTE(c->sub.lit)
+ c->mode = START;
+ break;
+ case WASH: /* o: got eob, possibly more output */
+ FLUSH
+ if (s->read != s->write)
+ LEAVE
+ c->mode = END;
+ case END:
+ r = Z_STREAM_END;
+ LEAVE
+ case BADCODE: /* x: got error */
+ r = Z_DATA_ERROR;
+ LEAVE
+ default:
+ r = Z_STREAM_ERROR;
+ LEAVE
+ }
+}
+
+
+local void inflate_codes_free(
+ inflate_codes_statef *c,
+ z_stream *z
+)
+{
+ ZFREE(z, c, sizeof(struct inflate_codes_state));
+ Tracev((stderr, "inflate: codes free\n"));
+}
+
+/*+++++*/
+/* inflate_util.c -- data and routines common to blocks and codes
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* copy as much as possible from the sliding window to the output area */
+local int inflate_flush(
+ inflate_blocks_statef *s,
+ z_stream *z,
+ int r
+)
+{
+ uInt n;
+ Bytef *p, *q;
+
+ /* local copies of source and destination pointers */
+ p = z->next_out;
+ q = s->read;
+
+ /* compute number of bytes to copy as far as end of window */
+ n = (uInt)((q <= s->write ? s->write : s->end) - q);
+ if (n > z->avail_out) n = z->avail_out;
+ if (n && r == Z_BUF_ERROR) r = Z_OK;
+
+ /* update counters */
+ z->avail_out -= n;
+ z->total_out += n;
+
+ /* update check information */
+ if (s->checkfn != Z_NULL)
+ s->check = (*s->checkfn)(s->check, q, n);
+
+ /* copy as far as end of window */
+ zmemcpy(p, q, n);
+ p += n;
+ q += n;
+
+ /* see if more to copy at beginning of window */
+ if (q == s->end)
+ {
+ /* wrap pointers */
+ q = s->window;
+ if (s->write == s->end)
+ s->write = s->window;
+
+ /* compute bytes to copy */
+ n = (uInt)(s->write - q);
+ if (n > z->avail_out) n = z->avail_out;
+ if (n && r == Z_BUF_ERROR) r = Z_OK;
+
+ /* update counters */
+ z->avail_out -= n;
+ z->total_out += n;
+
+ /* update check information */
+ if (s->checkfn != Z_NULL)
+ s->check = (*s->checkfn)(s->check, q, n);
+
+ /* copy */
+ zmemcpy(p, q, n);
+ p += n;
+ q += n;
+ }
+
+ /* update pointers */
+ z->next_out = p;
+ s->read = q;
+
+ /* done */
+ return r;
+}
+
+
+/*+++++*/
+/* inffast.c -- process literals and length/distance pairs fast
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* simplify the use of the inflate_huft type with some defines */
+#define base more.Base
+#define next more.Next
+#define exop word.what.Exop
+#define bits word.what.Bits
+
+/* macros for bit input with no checking and for returning unused bytes */
+#define GRABBITS(j) {while(k<(j)){b|=((uLong)NEXTBYTE)<<k;k+=8;}}
+#define UNGRAB {n+=(c=k>>3);p-=c;k&=7;}
+
+/* Called with number of bytes left to write in window at least 258
+ (the maximum string length) and number of input bytes available
+ at least ten. The ten bytes are six bytes for the longest length/
+ distance pair plus four bytes for overloading the bit buffer. */
+
+local int inflate_fast(
+ uInt bl,
+ uInt bd,
+ inflate_huft *tl,
+ inflate_huft *td,
+ inflate_blocks_statef *s,
+ z_stream *z
+)
+{
+ inflate_huft *t; /* temporary pointer */
+ uInt e; /* extra bits or operation */
+ uLong b; /* bit buffer */
+ uInt k; /* bits in bit buffer */
+ Bytef *p; /* input data pointer */
+ uInt n; /* bytes available there */
+ Bytef *q; /* output window write pointer */
+ uInt m; /* bytes to end of window or read pointer */
+ uInt ml; /* mask for literal/length tree */
+ uInt md; /* mask for distance tree */
+ uInt c; /* bytes to copy */
+ uInt d; /* distance back to copy from */
+ Bytef *r; /* copy source pointer */
+
+ /* load input, output, bit values */
+ LOAD
+
+ /* initialize masks */
+ ml = inflate_mask[bl];
+ md = inflate_mask[bd];
+
+ /* do until not enough input or output space for fast loop */
+ do { /* assume called with m >= 258 && n >= 10 */
+ /* get literal/length code */
+ GRABBITS(20) /* max bits for literal/length code */
+ if ((e = (t = tl + ((uInt)b & ml))->exop) == 0)
+ {
+ DUMPBITS(t->bits)
+ Tracevv((stderr, t->base >= 0x20 && t->base < 0x7f ?
+ "inflate: * literal '%c'\n" :
+ "inflate: * literal 0x%02x\n", t->base));
+ *q++ = (Byte)t->base;
+ m--;
+ continue;
+ }
+ do {
+ DUMPBITS(t->bits)
+ if (e & 16)
+ {
+ /* get extra bits for length */
+ e &= 15;
+ c = t->base + ((uInt)b & inflate_mask[e]);
+ DUMPBITS(e)
+ Tracevv((stderr, "inflate: * length %u\n", c));
+
+ /* decode distance base of block to copy */
+ GRABBITS(15); /* max bits for distance code */
+ e = (t = td + ((uInt)b & md))->exop;
+ do {
+ DUMPBITS(t->bits)
+ if (e & 16)
+ {
+ /* get extra bits to add to distance base */
+ e &= 15;
+ GRABBITS(e) /* get extra bits (up to 13) */
+ d = t->base + ((uInt)b & inflate_mask[e]);
+ DUMPBITS(e)
+ Tracevv((stderr, "inflate: * distance %u\n", d));
+
+ /* do the copy */
+ m -= c;
+ if ((uInt)(q - s->window) >= d) /* offset before dest */
+ { /* just copy */
+ r = q - d;
+ *q++ = *r++; c--; /* minimum count is three, */
+ *q++ = *r++; c--; /* so unroll loop a little */
+ }
+ else /* else offset after destination */
+ {
+ e = d - (q - s->window); /* bytes from offset to end */
+ r = s->end - e; /* pointer to offset */
+ if (c > e) /* if source crosses, */
+ {
+ c -= e; /* copy to end of window */
+ do {
+ *q++ = *r++;
+ } while (--e);
+ r = s->window; /* copy rest from start of window */
+ }
+ }
+ do { /* copy all or what's left */
+ *q++ = *r++;
+ } while (--c);
+ break;
+ }
+ else if ((e & 64) == 0)
+ e = (t = t->next + ((uInt)b & inflate_mask[e]))->exop;
+ else
+ {
+ z->msg = "invalid distance code";
+ UNGRAB
+ UPDATE
+ return Z_DATA_ERROR;
+ }
+ } while (1);
+ break;
+ }
+ if ((e & 64) == 0)
+ {
+ if ((e = (t = t->next + ((uInt)b & inflate_mask[e]))->exop) == 0)
+ {
+ DUMPBITS(t->bits)
+ Tracevv((stderr, t->base >= 0x20 && t->base < 0x7f ?
+ "inflate: * literal '%c'\n" :
+ "inflate: * literal 0x%02x\n", t->base));
+ *q++ = (Byte)t->base;
+ m--;
+ break;
+ }
+ }
+ else if (e & 32)
+ {
+ Tracevv((stderr, "inflate: * end of block\n"));
+ UNGRAB
+ UPDATE
+ return Z_STREAM_END;
+ }
+ else
+ {
+ z->msg = "invalid literal/length code";
+ UNGRAB
+ UPDATE
+ return Z_DATA_ERROR;
+ }
+ } while (1);
+ } while (m >= 258 && n >= 10);
+
+ /* not enough input or output--restore pointers and return */
+ UNGRAB
+ UPDATE
+ return Z_OK;
+}
+
+
+/*+++++*/
+/* zutil.c -- target dependent utility functions for the compression library
+ * Copyright (C) 1995 Jean-loup Gailly.
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* From: zutil.c,v 1.8 1995/05/03 17:27:12 jloup Exp */
+
+char *zlib_version = ZLIB_VERSION;
+
+char *z_errmsg[] = {
+"stream end", /* Z_STREAM_END 1 */
+"", /* Z_OK 0 */
+"file error", /* Z_ERRNO (-1) */
+"stream error", /* Z_STREAM_ERROR (-2) */
+"data error", /* Z_DATA_ERROR (-3) */
+"insufficient memory", /* Z_MEM_ERROR (-4) */
+"buffer error", /* Z_BUF_ERROR (-5) */
+""};
+
+
+/*+++++*/
+/* adler32.c -- compute the Adler-32 checksum of a data stream
+ * Copyright (C) 1995 Mark Adler
+ * For conditions of distribution and use, see copyright notice in zlib.h
+ */
+
+/* From: adler32.c,v 1.6 1995/05/03 17:27:08 jloup Exp */
+
+#define BASE 65521L /* largest prime smaller than 65536 */
+#define NMAX 5552
+/* NMAX is the largest n such that 255n(n+1)/2 + (n+1)(BASE-1) <= 2^32-1 */
+
+#define DO1(buf) {s1 += *buf++; s2 += s1;}
+#define DO2(buf) DO1(buf); DO1(buf);
+#define DO4(buf) DO2(buf); DO2(buf);
+#define DO8(buf) DO4(buf); DO4(buf);
+#define DO16(buf) DO8(buf); DO8(buf);
+
+/* ========================================================================= */
+uLong adler32(adler, buf, len)
+ uLong adler;
+ Bytef *buf;
+ uInt len;
+{
+ unsigned long s1 = adler & 0xffff;
+ unsigned long s2 = (adler >> 16) & 0xffff;
+ int k;
+
+ if (buf == Z_NULL) return 1L;
+
+ while (len > 0) {
+ k = len < NMAX ? len : NMAX;
+ len -= k;
+ while (k >= 16) {
+ DO16(buf);
+ k -= 16;
+ }
+ if (k != 0) do {
+ DO1(buf);
+ } while (--k);
+ s1 %= BASE;
+ s2 %= BASE;
+ }
+ return (s2 << 16) | s1;
+}
--- /dev/null
+/*
+ * arch/ppc/boot/simple/chrpmap.S
+ *
+ * Author: Tom Rini <trini@mvista.com>
+ *
+ * This will go and setup ISA_io to 0xFE000000 and return.
+ */
+
+#include <asm/ppc_asm.h>
+
+ .text
+
+ .globl serial_fixups
+serial_fixups:
+ lis r3,ISA_io@h /* Load ISA_io */
+ ori r3,r3,ISA_io@l
+ lis r4,0xFE00 /* Load the value, 0xFE000000 */
+ stw r4,0(r3) /* store */
+ blr
--- /dev/null
+/*
+ * arch/ppc/boot/simple/legacy.S
+ *
+ * Author: Tom Rini <trini@mvista.com>
+ *
+ * This will go and setup ISA_io to 0x80000000 and return.
+ */
+
+#include <asm/ppc_asm.h>
+
+ .text
+
+ .globl serial_fixups
+serial_fixups:
+ lis r3,ISA_io@h /* Load ISA_io */
+ ori r3,r3,ISA_io@l
+ lis r4,0x8000 /* Load the value, 0x80000000 */
+ stw r4,0(r3) /* store */
+ blr
--- /dev/null
+/*
+ * arch/ppc/boot/simple/misc-chestnut.S
+ *
+ * Setup for the IBM Chestnut (ibm-750fxgx_eval)
+ *
+ * Author: <source@mvista.com>
+ *
+ * <2004> (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+
+#include <asm/ppc_asm.h>
+#include <asm/mv64x60_defs.h>
+#include <platforms/chestnut.h>
+
+ .globl mv64x60_board_init
+mv64x60_board_init:
+ /*
+ * move UART to 0xffc00000
+ */
+
+ li r23,16
+
+ addis r25,0,CONFIG_MV64X60_BASE@h
+ ori r25,r25,MV64x60_CPU2DEV_2_BASE
+ addis r26,0,CHESTNUT_UART_BASE@h
+ srw r26,r26,r23
+ stwbrx r26,0,(r25)
+ sync
+
+ addis r25,0,CONFIG_MV64X60_BASE@h
+ ori r25,r25,MV64x60_CPU2DEV_2_SIZE
+ addis r26,0,0x00100000@h
+ srw r26,r26,r23
+ stwbrx r26,0,(r25)
+ sync
+
+ blr
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_devices.c
+ *
+ * MPC85xx Device descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/serial_8250.h>
+#include <linux/fsl_devices.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/ppc_sys.h>
+
+/* We use offsets for IORESOURCE_MEM since we do not know at compile time
+ * what CCSRBAR is; the offsets will be fixed up by mach_mpc85xx_fixup
+ */
+
+static struct gianfar_platform_data mpc85xx_tsec1_pdata = {
+ .device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT |
+ FSL_GIANFAR_DEV_HAS_COALESCE | FSL_GIANFAR_DEV_HAS_RMON |
+ FSL_GIANFAR_DEV_HAS_MULTI_INTR,
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct gianfar_platform_data mpc85xx_tsec2_pdata = {
+ .device_flags = FSL_GIANFAR_DEV_HAS_GIGABIT |
+ FSL_GIANFAR_DEV_HAS_COALESCE | FSL_GIANFAR_DEV_HAS_RMON |
+ FSL_GIANFAR_DEV_HAS_MULTI_INTR,
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct gianfar_platform_data mpc85xx_fec_pdata = {
+ .phy_reg_addr = MPC85xx_ENET1_OFFSET,
+};
+
+static struct fsl_i2c_platform_data mpc85xx_fsl_i2c_pdata = {
+ .device_flags = FSL_I2C_DEV_SEPARATE_DFSRR,
+};
+
+static struct plat_serial8250_port serial_platform_data[] = {
+ [0] = {
+ .mapbase = 0x4500,
+ .irq = MPC85xx_IRQ_DUART,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_SHARE_IRQ,
+ },
+ [1] = {
+ .mapbase = 0x4600,
+ .irq = MPC85xx_IRQ_DUART,
+ .iotype = UPIO_MEM,
+ .flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_SHARE_IRQ,
+ },
+};
+
+struct platform_device ppc_sys_platform_devices[] = {
+ [MPC85xx_TSEC1] = {
+ .name = "fsl-gianfar",
+ .id = 1,
+ .dev.platform_data = &mpc85xx_tsec1_pdata,
+ .num_resources = 4,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET1_OFFSET,
+ .end = MPC85xx_ENET1_OFFSET +
+ MPC85xx_ENET1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "tx",
+ .start = MPC85xx_IRQ_TSEC1_TX,
+ .end = MPC85xx_IRQ_TSEC1_TX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "rx",
+ .start = MPC85xx_IRQ_TSEC1_RX,
+ .end = MPC85xx_IRQ_TSEC1_RX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "error",
+ .start = MPC85xx_IRQ_TSEC1_ERROR,
+ .end = MPC85xx_IRQ_TSEC1_ERROR,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_TSEC2] = {
+ .name = "fsl-gianfar",
+ .id = 2,
+ .dev.platform_data = &mpc85xx_tsec2_pdata,
+ .num_resources = 4,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET2_OFFSET,
+ .end = MPC85xx_ENET2_OFFSET +
+ MPC85xx_ENET2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .name = "tx",
+ .start = MPC85xx_IRQ_TSEC2_TX,
+ .end = MPC85xx_IRQ_TSEC2_TX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "rx",
+ .start = MPC85xx_IRQ_TSEC2_RX,
+ .end = MPC85xx_IRQ_TSEC2_RX,
+ .flags = IORESOURCE_IRQ,
+ },
+ {
+ .name = "error",
+ .start = MPC85xx_IRQ_TSEC2_ERROR,
+ .end = MPC85xx_IRQ_TSEC2_ERROR,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_FEC] = {
+ .name = "fsl-gianfar",
+ .id = 3,
+ .dev.platform_data = &mpc85xx_fec_pdata,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_ENET3_OFFSET,
+ .end = MPC85xx_ENET3_OFFSET +
+ MPC85xx_ENET3_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_FEC,
+ .end = MPC85xx_IRQ_FEC,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_IIC1] = {
+ .name = "fsl-i2c",
+ .id = 1,
+ .dev.platform_data = &mpc85xx_fsl_i2c_pdata,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_IIC1_OFFSET,
+ .end = MPC85xx_IIC1_OFFSET +
+ MPC85xx_IIC1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_IIC1,
+ .end = MPC85xx_IRQ_IIC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA0] = {
+ .name = "fsl-dma",
+ .id = 0,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA0_OFFSET,
+ .end = MPC85xx_DMA0_OFFSET +
+ MPC85xx_DMA0_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA0,
+ .end = MPC85xx_IRQ_DMA0,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA1] = {
+ .name = "fsl-dma",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA1_OFFSET,
+ .end = MPC85xx_DMA1_OFFSET +
+ MPC85xx_DMA1_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA1,
+ .end = MPC85xx_IRQ_DMA1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA2] = {
+ .name = "fsl-dma",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA2_OFFSET,
+ .end = MPC85xx_DMA2_OFFSET +
+ MPC85xx_DMA2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA2,
+ .end = MPC85xx_IRQ_DMA2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DMA3] = {
+ .name = "fsl-dma",
+ .id = 3,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_DMA3_OFFSET,
+ .end = MPC85xx_DMA3_OFFSET +
+ MPC85xx_DMA3_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_DMA3,
+ .end = MPC85xx_IRQ_DMA3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_DUART] = {
+ .name = "serial8250",
+ .id = 0,
+ .dev.platform_data = serial_platform_data,
+ },
+ [MPC85xx_PERFMON] = {
+ .name = "fsl-perfmon",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_PERFMON_OFFSET,
+ .end = MPC85xx_PERFMON_OFFSET +
+ MPC85xx_PERFMON_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_PERFMON,
+ .end = MPC85xx_IRQ_PERFMON,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_SEC2] = {
+ .name = "fsl-sec2",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = MPC85xx_SEC2_OFFSET,
+ .end = MPC85xx_SEC2_OFFSET +
+ MPC85xx_SEC2_SIZE - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = MPC85xx_IRQ_SEC2,
+ .end = MPC85xx_IRQ_SEC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+#ifdef CONFIG_CPM2
+ [MPC85xx_CPM_FCC1] = {
+ .name = "fsl-cpm-fcc",
+ .id = 1,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91300,
+ .end = 0x9131F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x91380,
+ .end = 0x9139F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC1,
+ .end = SIU_INT_FCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_FCC2] = {
+ .name = "fsl-cpm-fcc",
+ .id = 2,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91320,
+ .end = 0x9133F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x913A0,
+ .end = 0x913CF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC2,
+ .end = SIU_INT_FCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_FCC3] = {
+ .name = "fsl-cpm-fcc",
+ .id = 3,
+ .num_resources = 3,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91340,
+ .end = 0x9135F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = 0x913D0,
+ .end = 0x913FF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_FCC3,
+ .end = SIU_INT_FCC3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_I2C] = {
+ .name = "fsl-cpm-i2c",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91860,
+ .end = 0x918BF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_I2C,
+ .end = SIU_INT_I2C,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC1] = {
+ .name = "fsl-cpm-scc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A00,
+ .end = 0x91A1F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC1,
+ .end = SIU_INT_SCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC2] = {
+ .name = "fsl-cpm-scc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A20,
+ .end = 0x91A3F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC2,
+ .end = SIU_INT_SCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC3] = {
+ .name = "fsl-cpm-scc",
+ .id = 3,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A40,
+ .end = 0x91A5F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC3,
+ .end = SIU_INT_SCC3,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SCC4] = {
+ .name = "fsl-cpm-scc",
+ .id = 4,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A60,
+ .end = 0x91A7F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SCC4,
+ .end = SIU_INT_SCC4,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SPI] = {
+ .name = "fsl-cpm-spi",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91AA0,
+ .end = 0x91AFF,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SPI,
+ .end = SIU_INT_SPI,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_MCC1] = {
+ .name = "fsl-cpm-mcc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B30,
+ .end = 0x91B3F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_MCC1,
+ .end = SIU_INT_MCC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_MCC2] = {
+ .name = "fsl-cpm-mcc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B50,
+ .end = 0x91B5F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_MCC2,
+ .end = SIU_INT_MCC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SMC1] = {
+ .name = "fsl-cpm-smc",
+ .id = 1,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A80,
+ .end = 0x91A8F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SMC1,
+ .end = SIU_INT_SMC1,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_SMC2] = {
+ .name = "fsl-cpm-smc",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91A90,
+ .end = 0x91A9F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_SMC2,
+ .end = SIU_INT_SMC2,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+ [MPC85xx_CPM_USB] = {
+ .name = "fsl-cpm-usb",
+ .id = 2,
+ .num_resources = 2,
+ .resource = (struct resource[]) {
+ {
+ .start = 0x91B60,
+ .end = 0x91B7F,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = SIU_INT_USB,
+ .end = SIU_INT_USB,
+ .flags = IORESOURCE_IRQ,
+ },
+ },
+ },
+#endif /* CONFIG_CPM2 */
+};
+
+static int __init mach_mpc85xx_fixup(struct platform_device *pdev)
+{
+ ppc_sys_fixup_mem_resource(pdev, CCSRBAR);
+ return 0;
+}
+
+static int __init mach_mpc85xx_init(void)
+{
+ ppc_sys_device_fixup = mach_mpc85xx_fixup;
+ return 0;
+}
+
+postcore_initcall(mach_mpc85xx_init);
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_sys.c
+ *
+ * MPC85xx System descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2005 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <asm/ppc_sys.h>
+
+struct ppc_sys_spec *cur_ppc_sys_spec;
+struct ppc_sys_spec ppc_sys_specs[] = {
+ {
+ .ppc_sys_name = "MPC8540",
+ .mask = 0xFFFF0000,
+ .value = 0x80300000,
+ .num_devices = 10,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_FEC, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8560",
+ .mask = 0xFFFF0000,
+ .value = 0x80700000,
+ .num_devices = 19,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3, MPC85xx_CPM_SCC4,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_MCC1, MPC85xx_CPM_MCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8541",
+ .mask = 0xFFFF0000,
+ .value = 0x80720000,
+ .num_devices = 13,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8541E",
+ .mask = 0xFFFF0000,
+ .value = 0x807A0000,
+ .num_devices = 14,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART, MPC85xx_SEC2,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8555",
+ .mask = 0xFFFF0000,
+ .value = 0x80710000,
+ .num_devices = 20,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_SMC1, MPC85xx_CPM_SMC2,
+ MPC85xx_CPM_USB,
+ },
+ },
+ {
+ .ppc_sys_name = "MPC8555E",
+ .mask = 0xFFFF0000,
+ .value = 0x80790000,
+ .num_devices = 21,
+ .device_list = (enum ppc_sys_devices[])
+ {
+ MPC85xx_TSEC1, MPC85xx_TSEC2, MPC85xx_IIC1,
+ MPC85xx_DMA0, MPC85xx_DMA1, MPC85xx_DMA2, MPC85xx_DMA3,
+ MPC85xx_PERFMON, MPC85xx_DUART, MPC85xx_SEC2,
+ MPC85xx_CPM_SPI, MPC85xx_CPM_I2C, MPC85xx_CPM_SCC1,
+ MPC85xx_CPM_SCC2, MPC85xx_CPM_SCC3,
+ MPC85xx_CPM_FCC1, MPC85xx_CPM_FCC2, MPC85xx_CPM_FCC3,
+ MPC85xx_CPM_SMC1, MPC85xx_CPM_SMC2,
+ MPC85xx_CPM_USB,
+ },
+ },
+ { /* default match */
+ .ppc_sys_name = "",
+ .mask = 0x00000000,
+ .value = 0x00000000,
+ },
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/est8260_setup.c
+ *
+ * EST8260 platform support
+ *
+ * Author: Allen Curtis <acurtis@onz.com>
+ * Derived from: m8260_setup.c by Dan Malek, MVista
+ *
+ * Copyright 2002 Ones and Zeros, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+static int
+est8260_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: EST Corporation\n"
+ "machine\t\t: SBC8260 PowerPC\n"
+ "\n"
+ "mem size\t\t: 0x%08x\n"
+ "console baud\t\t: %d\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+est8260_setup_arch(void)
+{
+ printk("EST SBC8260 Port\n");
+ callback_setup_arch();
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = est8260_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = est8260_setup_arch;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/lopec_pci.c
+ *
+ * PCI setup routines for the Motorola LoPEC.
+ *
+ * Author: Dan Cox
+ * danc@mvista.com (or, alternately, source@mvista.com)
+ *
+ * 2001-2002 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/machdep.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc10x.h>
+
+static inline int __init
+lopec_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ int irq;
+ static char pci_irq_table[][4] = {
+ {16, 0, 0, 0}, /* ID 11 - Winbond */
+ {22, 0, 0, 0}, /* ID 12 - SCSI */
+ {0, 0, 0, 0}, /* ID 13 - nothing */
+ {17, 0, 0, 0}, /* ID 14 - 82559 Ethernet */
+ {27, 0, 0, 0}, /* ID 15 - USB */
+ {23, 0, 0, 0}, /* ID 16 - PMC slot 1 */
+ {24, 0, 0, 0}, /* ID 17 - PMC slot 2 */
+ {25, 0, 0, 0}, /* ID 18 - PCI slot */
+ {0, 0, 0, 0}, /* ID 19 - nothing */
+ {0, 0, 0, 0}, /* ID 20 - nothing */
+ {0, 0, 0, 0}, /* ID 21 - nothing */
+ {0, 0, 0, 0}, /* ID 22 - nothing */
+ {0, 0, 0, 0}, /* ID 23 - nothing */
+ {0, 0, 0, 0}, /* ID 24 - PMC slot 1b */
+ {0, 0, 0, 0}, /* ID 25 - nothing */
+ {0, 0, 0, 0} /* ID 26 - PMC Slot 2b */
+ };
+ const long min_idsel = 11, max_idsel = 26, irqs_per_slot = 4;
+
+ irq = PCI_IRQ_TABLE_LOOKUP;
+ if (!irq)
+ return 0;
+
+ return irq;
+}
+
+void __init
+lopec_setup_winbond_83553(struct pci_controller *hose)
+{
+ int devfn;
+
+ devfn = PCI_DEVFN(11,0);
+
+ /* IDE interrupt routing (primary 14, secondary 15) */
+ early_write_config_byte(hose, 0, devfn, 0x43, 0xef);
+ /* PCI interrupt routing */
+ early_write_config_word(hose, 0, devfn, 0x44, 0x0000);
+
+ /* ISA-PCI address decoder */
+ early_write_config_byte(hose, 0, devfn, 0x48, 0xf0);
+
+ /* RTC, kb, not used in PPC */
+ early_write_config_byte(hose, 0, devfn, 0x4d, 0x00);
+ early_write_config_byte(hose, 0, devfn, 0x4e, 0x04);
+ devfn = PCI_DEVFN(11, 1);
+ early_write_config_byte(hose, 0, devfn, 0x09, 0x8f);
+ early_write_config_dword(hose, 0, devfn, 0x40, 0x00ff0011);
+}
+
+void __init
+lopec_find_bridges(void)
+{
+ struct pci_controller *hose;
+
+ hose = pcibios_alloc_controller();
+ if (!hose)
+ return;
+
+ hose->first_busno = 0;
+ hose->last_busno = 0xff;
+
+ if (mpc10x_bridge_init(hose,
+ MPC10X_MEM_MAP_B,
+ MPC10X_MEM_MAP_B,
+ MPC10X_MAPB_EUMB_BASE) == 0) {
+
+ hose->mem_resources[0].end = 0xffffffff;
+ lopec_setup_winbond_83553(hose);
+ hose->last_busno = pciauto_bus_scan(hose, hose->first_busno);
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = lopec_map_irq;
+ }
+}
--- /dev/null
+/*
+ * include/asm-ppc/lopec_serial.h
+ *
+ * Definitions for Motorola LoPEC board.
+ *
+ * Author: Dan Cox
+ * danc@mvista.com (or, alternately, source@mvista.com)
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __H_LOPEC_SERIAL
+#define __H_LOPEC_SERIAL
+
+#define RS_TABLE_SIZE 3
+
+#define BASE_BAUD (1843200 / 16)
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+#define SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, 0xffe10000, 29, STD_COM_FLAGS, \
+ iomem_base: (u8 *) 0xffe10000, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, 0xffe11000, 20, STD_COM_FLAGS, \
+ iomem_base: (u8 *) 0xffe11000, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, 0xffe12000, 21, STD_COM_FLAGS, \
+ iomem_base: (u8 *) 0xffe12000, \
+ io_type: SERIAL_IO_MEM }
+
+#endif
--- /dev/null
+/*
+ * arch/ppc/platforms/lopec_setup.c
+ *
+ * Setup routines for the Motorola LoPEC.
+ *
+ * Author: Dan Cox
+ * danc@mvista.com
+ *
+ * 2001-2002 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/pci_ids.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/ide.h>
+#include <linux/seq_file.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/root_dev.h>
+
+#include <asm/io.h>
+#include <asm/open_pic.h>
+#include <asm/i8259.h>
+#include <asm/todc.h>
+#include <asm/bootinfo.h>
+#include <asm/mpc10x.h>
+#include <asm/hw_irq.h>
+#include <asm/prep_nvram.h>
+
+extern void lopec_find_bridges(void);
+
+/*
+ * Define all of the IRQ senses and polarities. Taken from the
+ * LoPEC Programmer's Reference Guide.
+ */
+static u_char lopec_openpic_initsenses[16] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* IRQ 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* IRQ 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* IRQ 4 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* IRQ 5 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 6 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 7 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 8 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 9 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 10 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 11 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* IRQ 12 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* IRQ 13 */
+ (IRQ_SENSE_EDGE | IRQ_POLARITY_NEGATIVE), /* IRQ 14 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE) /* IRQ 15 */
+};
+
+static int
+lopec_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "machine\t\t: Motorola LoPEC\n");
+ return 0;
+}
+
+static u32
+lopec_irq_canonicalize(u32 irq)
+{
+ if (irq == 2)
+ return 9;
+ else
+ return irq;
+}
+
+static void
+lopec_restart(char *cmd)
+{
+#define LOPEC_SYSSTAT1 0xffe00000
+ /* force a hard reset, if possible */
+ unsigned char reg = *((unsigned char *) LOPEC_SYSSTAT1);
+ reg |= 0x80;
+ *((unsigned char *) LOPEC_SYSSTAT1) = reg;
+
+ local_irq_disable();
+ while(1);
+#undef LOPEC_SYSSTAT1
+}
+
+static void
+lopec_halt(void)
+{
+ local_irq_disable();
+ while(1);
+}
+
+static void
+lopec_power_off(void)
+{
+ lopec_halt();
+}
+
+#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE)
+int lopec_ide_ports_known = 0;
+static unsigned long lopec_ide_regbase[MAX_HWIFS];
+static unsigned long lopec_ide_ctl_regbase[MAX_HWIFS];
+static unsigned long lopec_idedma_regbase;
+
+static void
+lopec_ide_probe(void)
+{
+ struct pci_dev *dev = pci_find_device(PCI_VENDOR_ID_WINBOND,
+ PCI_DEVICE_ID_WINBOND_82C105,
+ NULL);
+ lopec_ide_ports_known = 1;
+
+ if (dev) {
+ lopec_ide_regbase[0] = dev->resource[0].start;
+ lopec_ide_regbase[1] = dev->resource[2].start;
+ lopec_ide_ctl_regbase[0] = dev->resource[1].start;
+ lopec_ide_ctl_regbase[1] = dev->resource[3].start;
+ lopec_idedma_regbase = dev->resource[4].start;
+ }
+}
+
+static int
+lopec_ide_default_irq(unsigned long base)
+{
+ if (lopec_ide_ports_known == 0)
+ lopec_ide_probe();
+
+ if (base == lopec_ide_regbase[0])
+ return 14;
+ else if (base == lopec_ide_regbase[1])
+ return 15;
+ else
+ return 0;
+}
+
+static unsigned long
+lopec_ide_default_io_base(int index)
+{
+ if (lopec_ide_ports_known == 0)
+ lopec_ide_probe();
+ return lopec_ide_regbase[index];
+}
+
+static void __init
+lopec_ide_init_hwif_ports(hw_regs_t *hw, unsigned long data,
+ unsigned long ctl, int *irq)
+{
+ unsigned long reg = data;
+ uint alt_status_base;
+ int i;
+
+ for(i = IDE_DATA_OFFSET; i <= IDE_STATUS_OFFSET; i++)
+ hw->io_ports[i] = reg++;
+
+ if (data == lopec_ide_regbase[0]) {
+ alt_status_base = lopec_ide_ctl_regbase[0] + 2;
+ hw->irq = 14;
+ }
+ else if (data == lopec_ide_regbase[1]) {
+ alt_status_base = lopec_ide_ctl_regbase[1] + 2;
+ hw->irq = 15;
+ }
+ else {
+ alt_status_base = 0;
+ hw->irq = 0;
+ }
+
+ if (ctl)
+ hw->io_ports[IDE_CONTROL_OFFSET] = ctl;
+ else
+ hw->io_ports[IDE_CONTROL_OFFSET] = alt_status_base;
+
+ if (irq != NULL)
+ *irq = hw->irq;
+
+}
+#endif /* BLK_DEV_IDE */
+
+static void __init
+lopec_init_IRQ(void)
+{
+ int i;
+
+ /*
+ * Provide the open_pic code with the correct table of interrupts.
+ */
+ OpenPIC_InitSenses = lopec_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof(lopec_openpic_initsenses);
+
+ mpc10x_set_openpic();
+
+ /* We have a cascade on OpenPIC IRQ 0, Linux IRQ 16 */
+ openpic_hookup_cascade(NUM_8259_INTERRUPTS, "82c59 cascade",
+ &i8259_irq);
+
+ /* Map i8259 interrupts */
+ for(i = 0; i < NUM_8259_INTERRUPTS; i++)
+ irq_desc[i].handler = &i8259_pic;
+
+ /*
+ * The EPIC allows for a read in the range of 0xFEF00000 ->
+ * 0xFEFFFFFF to generate a PCI interrupt-acknowledge transaction.
+ */
+ i8259_init(0xfef00000);
+}
+
+static int __init
+lopec_request_io(void)
+{
+ outb(0x00, 0x4d0);
+ outb(0xc0, 0x4d1);
+
+ request_region(0x00, 0x20, "dma1");
+ request_region(0x20, 0x20, "pic1");
+ request_region(0x40, 0x20, "timer");
+ request_region(0x80, 0x10, "dma page reg");
+ request_region(0xa0, 0x20, "pic2");
+ request_region(0xc0, 0x20, "dma2");
+
+ return 0;
+}
+
+device_initcall(lopec_request_io);
+
+static void __init
+lopec_map_io(void)
+{
+ io_block_mapping(0xf0000000, 0xf0000000, 0x10000000, _PAGE_IO);
+ io_block_mapping(0xb0000000, 0xb0000000, 0x10000000, _PAGE_IO);
+}
+
+static void __init
+lopec_set_bat(void)
+{
+ unsigned long batu, batl;
+
+ __asm__ __volatile__(
+ "lis %0,0xf800\n \
+ ori %1,%0,0x002a\n \
+ ori %0,%0,0x0ffe\n \
+ mtspr 0x21e,%0\n \
+ mtspr 0x21f,%1\n \
+ isync\n \
+ sync "
+ : "=r" (batu), "=r" (batl));
+}
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+#include <linux/serial.h>
+#include <linux/serialP.h>
+#include <linux/serial_reg.h>
+#include <asm/serial.h>
+
+static struct serial_state rs_table[RS_TABLE_SIZE] = {
+ SERIAL_PORT_DFNS /* Defined in <asm/serial.h> */
+};
+
+volatile unsigned char *com_port;
+volatile unsigned char *com_port_lsr;
+
+static void
+serial_writechar(char c)
+{
+ while ((*com_port_lsr & UART_LSR_THRE) == 0)
+ ;
+ *com_port = c;
+}
+
+void
+lopec_progress(char *s, unsigned short hex)
+{
+ volatile char c;
+
+ com_port = (volatile unsigned char *) rs_table[0].port;
+ com_port_lsr = com_port + UART_LSR;
+
+ while ((c = *s++) != 0)
+ serial_writechar(c);
+
+ /* Most messages don't have a newline in them */
+ serial_writechar('\n');
+ serial_writechar('\r');
+}
+#endif /* CONFIG_SERIAL_TEXT_DEBUG */
+
+TODC_ALLOC();
+
+static void __init
+lopec_setup_arch(void)
+{
+
+ TODC_INIT(TODC_TYPE_MK48T37, 0, 0,
+ ioremap(0xffe80000, 0x8000), 8);
+
+ loops_per_jiffy = 100000000/HZ;
+
+ lopec_find_bridges();
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#if defined(CONFIG_ROOT_NFS)
+ ROOT_DEV = Root_NFS;
+#elif defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE)
+ ROOT_DEV = Root_HDA1;
+#else
+ ROOT_DEV = Root_SDA1;
+#endif
+
+#ifdef CONFIG_VT
+ conswitchp = &dummy_con;
+#endif
+#ifdef CONFIG_PPCBUG_NVRAM
+ /* Read in NVRAM data */
+ init_prep_nvram();
+
+ /* if no bootargs, look in NVRAM */
+ if ( cmd_line[0] == '\0' ) {
+ char *bootargs;
+ bootargs = prep_nvram_get_var("bootargs");
+ if (bootargs != NULL) {
+ strcpy(cmd_line, bootargs);
+ /* again.. */
+ strcpy(saved_command_line, cmd_line);
+ }
+ }
+#endif
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ parse_bootinfo(find_bootinfo());
+ lopec_set_bat();
+
+ isa_io_base = MPC10X_MAPB_ISA_IO_BASE;
+ isa_mem_base = MPC10X_MAPB_ISA_MEM_BASE;
+ pci_dram_offset = MPC10X_MAPB_DRAM_OFFSET;
+ ISA_DMA_THRESHOLD = 0x00ffffff;
+ DMA_MODE_READ = 0x44;
+ DMA_MODE_WRITE = 0x48;
+
+ ppc_md.setup_arch = lopec_setup_arch;
+ ppc_md.show_cpuinfo = lopec_show_cpuinfo;
+ ppc_md.irq_canonicalize = lopec_irq_canonicalize;
+ ppc_md.init_IRQ = lopec_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = lopec_restart;
+ ppc_md.power_off = lopec_power_off;
+ ppc_md.halt = lopec_halt;
+
+ ppc_md.setup_io_mappings = lopec_map_io;
+
+ ppc_md.time_init = todc_time_init;
+ ppc_md.set_rtc_time = todc_set_rtc_time;
+ ppc_md.get_rtc_time = todc_get_rtc_time;
+ ppc_md.calibrate_decr = todc_calibrate_decr;
+
+ ppc_md.nvram_read_val = todc_direct_read_val;
+ ppc_md.nvram_write_val = todc_direct_write_val;
+
+#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_IDE_MODULE)
+ ppc_ide_md.default_irq = lopec_ide_default_irq;
+ ppc_ide_md.default_io_base = lopec_ide_default_io_base;
+ ppc_ide_md.ide_init_hwif = lopec_ide_init_hwif_ports;
+#endif
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ ppc_md.progress = lopec_progress;
+#endif
+}
--- /dev/null
+/*
+ * include/asm-ppc/mcpn765_serial.h
+ *
+ * Definitions for Motorola MCG MCPN765 cPCI board support
+ *
+ * Author: Mark A. Greer
+ * mgreer@mvista.com
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __ASMPPC_MCPN765_SERIAL_H
+#define __ASMPPC_MCPN765_SERIAL_H
+
+#include <linux/config.h>
+
+/* Define the UART base addresses */
+#define MCPN765_SERIAL_1 0xfef88000
+#define MCPN765_SERIAL_2 0xfef88200
+#define MCPN765_SERIAL_3 0xfef88400
+#define MCPN765_SERIAL_4 0xfef88600
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 4
+#endif
+
+/* Rate for the 1.8432 MHz clock for the onboard serial chip */
+#define BASE_BAUD ( 1843200 / 16 )
+#define UART_CLK 1843200
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+/* All UART IRQ's are wire-OR'd to IRQ 17 */
+#define STD_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, MCPN765_SERIAL_1, 17, STD_COM_FLAGS, /* ttyS0 */\
+ iomem_base: (u8 *)MCPN765_SERIAL_1, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, MCPN765_SERIAL_2, 17, STD_COM_FLAGS, /* ttyS1 */\
+ iomem_base: (u8 *)MCPN765_SERIAL_2, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, MCPN765_SERIAL_3, 17, STD_COM_FLAGS, /* ttyS2 */\
+ iomem_base: (u8 *)MCPN765_SERIAL_3, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, MCPN765_SERIAL_4, 17, STD_COM_FLAGS, /* ttyS3 */\
+ iomem_base: (u8 *)MCPN765_SERIAL_4, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM },
+
+#define SERIAL_PORT_DFNS \
+ STD_SERIAL_PORT_DFNS
+
+#endif /* __ASMPPC_MCPN765_SERIAL_H */
--- /dev/null
+/*
+ * arch/ppc/platforms/mvme5100_pci.c
+ *
+ * PCI setup routines for the Motorola MVME5100.
+ *
+ * Author: Matt Porter <mporter@mvista.com>
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+
+#include <asm/byteorder.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/machdep.h>
+#include <asm/pci-bridge.h>
+#include <platforms/mvme5100.h>
+#include <asm/pplus.h>
+
+static inline int
+mvme5100_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ int irq;
+
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ { 0, 0, 0, 0 }, /* IDSEL 11 - Winbond */
+ { 0, 0, 0, 0 }, /* IDSEL 12 - unused */
+ { 21, 22, 23, 24 }, /* IDSEL 13 - Universe II */
+ { 18, 0, 0, 0 }, /* IDSEL 14 - Enet 1 */
+ { 0, 0, 0, 0 }, /* IDSEL 15 - unused */
+ { 25, 26, 27, 28 }, /* IDSEL 16 - PMC Slot 1 */
+ { 28, 25, 26, 27 }, /* IDSEL 17 - PMC Slot 2 */
+ { 0, 0, 0, 0 }, /* IDSEL 18 - unused */
+ { 29, 0, 0, 0 }, /* IDSEL 19 - Enet 2 */
+ { 0, 0, 0, 0 }, /* IDSEL 20 - PMCSPAN */
+ };
+
+ const long min_idsel = 11, max_idsel = 20, irqs_per_slot = 4;
+ irq = PCI_IRQ_TABLE_LOOKUP;
+ /* If lookup is zero, always return 0 */
+ if (!irq)
+ return 0;
+ else
+#ifdef CONFIG_MVME5100_IPMC761_PRESENT
+ /* If IPMC761 present, return table value */
+ return irq;
+#else
+ /* If IPMC761 not present, we don't have an i8259 so adjust */
+ return (irq - NUM_8259_INTERRUPTS);
+#endif
+}
+
+static void
+mvme5100_pcibios_fixup_resources(struct pci_dev *dev)
+{
+ int i;
+
+ if ((dev->vendor == PCI_VENDOR_ID_MOTOROLA) &&
+ (dev->device == PCI_DEVICE_ID_MOTOROLA_HAWK))
+ for (i=0; i<DEVICE_COUNT_RESOURCE; i++)
+ {
+ dev->resource[i].start = 0;
+ dev->resource[i].end = 0;
+ }
+}
+
+void __init
+mvme5100_setup_bridge(void)
+{
+ struct pci_controller* hose;
+
+ hose = pcibios_alloc_controller();
+
+ if (!hose)
+ return;
+
+ hose->first_busno = 0;
+ hose->last_busno = 0xff;
+ hose->pci_mem_offset = MVME5100_PCI_MEM_OFFSET;
+
+ pci_init_resource(&hose->io_resource,
+ MVME5100_PCI_LOWER_IO,
+ MVME5100_PCI_UPPER_IO,
+ IORESOURCE_IO,
+ "PCI host bridge");
+
+ pci_init_resource(&hose->mem_resources[0],
+ MVME5100_PCI_LOWER_MEM,
+ MVME5100_PCI_UPPER_MEM,
+ IORESOURCE_MEM,
+ "PCI host bridge");
+
+ hose->io_space.start = MVME5100_PCI_LOWER_IO;
+ hose->io_space.end = MVME5100_PCI_UPPER_IO;
+ hose->mem_space.start = MVME5100_PCI_LOWER_MEM;
+ hose->mem_space.end = MVME5100_PCI_UPPER_MEM;
+ hose->io_base_virt = (void *)MVME5100_ISA_IO_BASE;
+
+ /* Use indirect method of Hawk */
+ setup_indirect_pci(hose,
+ MVME5100_PCI_CONFIG_ADDR,
+ MVME5100_PCI_CONFIG_DATA);
+
+ hose->last_busno = pciauto_bus_scan(hose, hose->first_busno);
+
+ ppc_md.pcibios_fixup_resources = mvme5100_pcibios_fixup_resources;
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = mvme5100_map_irq;
+}
--- /dev/null
+/*
+ * include/asm-ppc/mvme5100_serial.h
+ *
+ * Definitions for Motorola MVME5100 support
+ *
+ * Author: Matt Porter <mporter@mvista.com>
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_MVME5100_SERIAL_H__
+#define __ASM_MVME5100_SERIAL_H__
+
+#include <linux/config.h>
+#include <platforms/mvme5100.h>
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 4
+#endif
+
+#define BASE_BAUD ( MVME5100_BASE_BAUD / 16 )
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+/* All UART IRQ's are wire-OR'd to one MPIC IRQ */
+#define STD_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, MVME5100_SERIAL_1, \
+ MVME5100_SERIAL_IRQ, \
+ STD_COM_FLAGS, /* ttyS0 */ \
+ iomem_base: (unsigned char *)MVME5100_SERIAL_1, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM }, \
+ { 0, BASE_BAUD, MVME5100_SERIAL_2, \
+ MVME5100_SERIAL_IRQ, \
+ STD_COM_FLAGS, /* ttyS1 */ \
+ iomem_base: (unsigned char *)MVME5100_SERIAL_2, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM },
+
+#define SERIAL_PORT_DFNS \
+ STD_SERIAL_PORT_DFNS
+
+#endif /* __ASM_MVME5100_SERIAL_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/mvme5100_setup.c
+ *
+ * Board setup routines for the Motorola MVME5100.
+ *
+ * Author: Matt Porter <mporter@mvista.com>
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/initrd.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/ide.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/time.h>
+#include <asm/dma.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/smp.h>
+#include <asm/open_pic.h>
+#include <asm/i8259.h>
+#include <platforms/mvme5100.h>
+#include <asm/todc.h>
+#include <asm/pci-bridge.h>
+#include <asm/bootinfo.h>
+#include <asm/pplus.h>
+
+extern char cmd_line[];
+
+static u_char mvme5100_openpic_initsenses[] __initdata = {
+ 0, /* 16: i8259 cascade (active high) */
+ 1, /* 17: TL16C550 UART 1,2 */
+ 1, /* 18: Enet 1 (front panel or P2) */
+ 1, /* 19: Hawk Watchdog 1,2 */
+ 1, /* 20: DS1621 thermal alarm */
+ 1, /* 21: Universe II LINT0# */
+ 1, /* 22: Universe II LINT1# */
+ 1, /* 23: Universe II LINT2# */
+ 1, /* 24: Universe II LINT3# */
+ 1, /* 25: PMC1 INTA#, PMC2 INTB# */
+ 1, /* 26: PMC1 INTB#, PMC2 INTC# */
+ 1, /* 27: PMC1 INTC#, PMC2 INTD# */
+ 1, /* 28: PMC1 INTD#, PMC2 INTA# */
+ 1, /* 29: Enet 2 (front panel) */
+ 1, /* 30: Abort Switch */
+ 1, /* 31: RTC Alarm */
+};
+
+static void __init
+mvme5100_setup_arch(void)
+{
+ if ( ppc_md.progress )
+ ppc_md.progress("mvme5100_setup_arch: enter", 0);
+
+ loops_per_jiffy = 50000000 / HZ;
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_SDA2;
+#endif
+
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+
+ if ( ppc_md.progress )
+ ppc_md.progress("mvme5100_setup_arch: find_bridges", 0);
+
+ /* Setup PCI host bridge */
+ mvme5100_setup_bridge();
+
+ /* Find and map our OpenPIC */
+ pplus_mpic_init(MVME5100_PCI_MEM_OFFSET);
+ OpenPIC_InitSenses = mvme5100_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof(mvme5100_openpic_initsenses);
+
+ printk("MVME5100 port (C) 2001 MontaVista Software, Inc. (source@mvista.com)\n");
+
+ if ( ppc_md.progress )
+ ppc_md.progress("mvme5100_setup_arch: exit", 0);
+
+ return;
+}
+
+static void __init
+mvme5100_init2(void)
+{
+#ifdef CONFIG_MVME5100_IPMC761_PRESENT
+ request_region(0x00,0x20,"dma1");
+ request_region(0x20,0x20,"pic1");
+ request_region(0x40,0x20,"timer");
+ request_region(0x80,0x10,"dma page reg");
+ request_region(0xa0,0x20,"pic2");
+ request_region(0xc0,0x20,"dma2");
+#endif
+ return;
+}
+
+/*
+ * Interrupt setup and service.
+ * Have MPIC on HAWK and cascaded 8259s on Winbond cascaded to MPIC.
+ */
+static void __init
+mvme5100_init_IRQ(void)
+{
+#ifdef CONFIG_MVME5100_IPMC761_PRESENT
+ int i;
+#endif
+
+ if ( ppc_md.progress )
+ ppc_md.progress("init_irq: enter", 0);
+
+#ifdef CONFIG_MVME5100_IPMC761_PRESENT
+ openpic_init(1, NUM_8259_INTERRUPTS, NULL, -1);
+ openpic_hookup_cascade(NUM_8259_INTERRUPTS,"82c59 cascade",&i8259_irq);
+
+ for(i=0; i < NUM_8259_INTERRUPTS; i++)
+ irq_desc[i].handler = &i8259_pic;
+
+ i8259_init(0);
+#else
+ openpic_init(1, 0, NULL, -1);
+#endif
+
+ if ( ppc_md.progress )
+ ppc_md.progress("init_irq: exit", 0);
+
+ return;
+}
+
+/*
+ * Set BAT 3 to map 0xf0000000 to end of physical memory space.
+ */
+static __inline__ void
+mvme5100_set_bat(void)
+{
+ unsigned long bat3u, bat3l;
+ static int mapping_set = 0;
+
+ if (!mapping_set) {
+
+ __asm__ __volatile__(
+ " lis %0,0xf000\n \
+ ori %1,%0,0x002a\n \
+ ori %0,%0,0x1ffe\n \
+ mtspr 0x21e,%0\n \
+ mtspr 0x21f,%1\n \
+ isync\n \
+ sync "
+ : "=r" (bat3u), "=r" (bat3l));
+
+ mapping_set = 1;
+ }
+
+ return;
+}
+
+static unsigned long __init
+mvme5100_find_end_of_memory(void)
+{
+ mvme5100_set_bat();
+ return pplus_get_mem_size(MVME5100_HAWK_SMC_BASE);
+}
+
+static void __init
+mvme5100_map_io(void)
+{
+ io_block_mapping(0xfe000000, 0xfe000000, 0x02000000, _PAGE_IO);
+ ioremap_base = 0xfe000000;
+}
+
+static void
+mvme5100_reset_board(void)
+{
+ local_irq_disable();
+
+ /* Set exception prefix high - to the firmware */
+ _nmask_and_or_msr(0, MSR_IP);
+
+ out_8((u_char *)MVME5100_BOARD_MODRST_REG, 0x01);
+
+ return;
+}
+
+static void
+mvme5100_restart(char *cmd)
+{
+ volatile ulong i = 10000000;
+
+ mvme5100_reset_board();
+
+ while (i-- > 0);
+ panic("restart failed\n");
+}
+
+static void
+mvme5100_halt(void)
+{
+ local_irq_disable();
+ while (1);
+}
+
+static void
+mvme5100_power_off(void)
+{
+ mvme5100_halt();
+}
+
+static int
+mvme5100_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "vendor\t\t: Motorola\n");
+ seq_printf(m, "machine\t\t: MVME5100\n");
+
+ return 0;
+}
+
+TODC_ALLOC();
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ parse_bootinfo(find_bootinfo());
+
+ isa_io_base = MVME5100_ISA_IO_BASE;
+ isa_mem_base = MVME5100_ISA_MEM_BASE;
+ pci_dram_offset = MVME5100_PCI_DRAM_OFFSET;
+
+ ppc_md.setup_arch = mvme5100_setup_arch;
+ ppc_md.show_cpuinfo = mvme5100_show_cpuinfo;
+ ppc_md.init_IRQ = mvme5100_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+ ppc_md.init = mvme5100_init2;
+
+ ppc_md.restart = mvme5100_restart;
+ ppc_md.power_off = mvme5100_power_off;
+ ppc_md.halt = mvme5100_halt;
+
+ ppc_md.find_end_of_memory = mvme5100_find_end_of_memory;
+ ppc_md.setup_io_mappings = mvme5100_map_io;
+
+ TODC_INIT(TODC_TYPE_MK48T37,
+ MVME5100_NVRAM_AS0,
+ MVME5100_NVRAM_AS1,
+ MVME5100_NVRAM_DATA,
+ 8);
+
+ ppc_md.time_init = todc_time_init;
+ ppc_md.set_rtc_time = todc_set_rtc_time;
+ ppc_md.get_rtc_time = todc_get_rtc_time;
+ ppc_md.calibrate_decr = todc_calibrate_decr;
+
+ ppc_md.nvram_read_val = todc_m48txx_read_val;
+ ppc_md.nvram_write_val = todc_m48txx_write_val;
+
+ ppc_md.progress = NULL;
+}
--- /dev/null
+/*
+ * include/asm-ppc/platforms/powerpmc250_serial.h
+ *
+ * Motorola PrPMC750 serial support
+ *
+ * Author: Troy Benjegerdes <tbenjegerdes@mvista.com>
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASMPPC_POWERPMC250_SERIAL_H
+#define __ASMPPC_POWERPMC250_SERIAL_H
+
+#include <linux/config.h>
+#include <platforms/powerpmc250.h>
+
+#define RS_TABLE_SIZE 1
+
+#define BASE_BAUD (POWERPMC250_BASE_BAUD / 16)
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#define STD_COM4_FLAGS (ASYNC_BOOT_AUTOCONF)
+#endif
+
+#define SERIAL_PORT_DFNS \
+{ 0, BASE_BAUD, POWERPMC250_SERIAL, POWERPMC250_SERIAL_IRQ, STD_COM_FLAGS, /* ttyS0 */\
+ iomem_base: (u8 *)POWERPMC250_SERIAL, \
+ iomem_reg_shift: 0, \
+ io_type: SERIAL_IO_MEM }
+
+#endif
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/pq2ads_setup.c
+ *
+ * PQ2ADS platform support
+ *
+ * Author: Kumar Gala <kumar.gala@freescale.com>
+ * Derived from: est8260_setup.c by Allen Curtis
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+static int
+pq2ads_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: Motorola\n"
+ "machine\t\t: PQ2 ADS PowerPC\n"
+ "\n"
+ "mem size\t\t: 0x%08lx\n"
+ "console baud\t\t: %ld\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+pq2ads_setup_arch(void)
+{
+ printk("PQ2 ADS Port\n");
+ callback_setup_arch();
+ *(volatile uint *)(BCSR_ADDR + 4) &= ~BCSR1_RS232_EN2;
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = pq2ads_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = pq2ads_setup_arch;
+}
--- /dev/null
+/*
+ * include/asm-ppc/platforms/prpmc750_serial.h
+ *
+ * Motorola PrPMC750 serial support
+ *
+ * Author: Matt Porter <mporter@mvista.com>
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_PRPMC750_SERIAL_H__
+#define __ASM_PRPMC750_SERIAL_H__
+
+#include <linux/config.h>
+#include <platforms/prpmc750.h>
+
+#define RS_TABLE_SIZE 4
+
+/* Rate for the 1.8432 MHz clock for the onboard serial chip */
+#define BASE_BAUD (PRPMC750_BASE_BAUD / 16)
+
+#ifndef SERIAL_MAGIC_KEY
+#define kernel_debugger ppc_kernel_debug
+#endif
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+#define SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, PRPMC750_SERIAL_0, 1, STD_COM_FLAGS, \
+ iomem_base: (unsigned char *)PRPMC750_SERIAL_0, \
+ iomem_reg_shift: 4, \
+ io_type: SERIAL_IO_MEM } /* ttyS0 */
+
+#endif /* __ASM_PRPMC750_SERIAL_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/platforms/prpmc800_serial.h
+ *
+ * Definitions for Motorola MCG PRPMC800 cPCI board support
+ *
+ * Author: Dale Farnsworth dale.farnsworth@mvista.com
+ *
+ * 2001 (c) MontaVista, Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#ifndef __ASMPPC_PRPMC800_SERIAL_H
+#define __ASMPPC_PRPMC800_SERIAL_H
+
+#include <linux/config.h>
+#include <platforms/prpmc800.h>
+
+#ifdef CONFIG_SERIAL_MANY_PORTS
+#define RS_TABLE_SIZE 64
+#else
+#define RS_TABLE_SIZE 4
+#endif
+
+/* Rate for the 1.8432 MHz clock for the onboard serial chip */
+#define BASE_BAUD (PRPMC800_BASE_BAUD / 16)
+
+#ifndef SERIAL_MAGIC_KEY
+#define kernel_debugger ppc_kernel_debug
+#endif
+
+#ifdef CONFIG_SERIAL_DETECT_IRQ
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST|ASYNC_AUTO_IRQ)
+#else
+#define STD_COM_FLAGS (ASYNC_BOOT_AUTOCONF|ASYNC_SKIP_TEST)
+#endif
+
+/* UARTS are at IRQ 16 */
+#define STD_SERIAL_PORT_DFNS \
+ { 0, BASE_BAUD, PRPMC800_SERIAL_1, 16, STD_COM_FLAGS, /* ttyS0 */\
+ iomem_base: (unsigned char *)PRPMC800_SERIAL_1, \
+ iomem_reg_shift: 0, \
+ io_type: SERIAL_IO_MEM },
+
+#define SERIAL_PORT_DFNS \
+ STD_SERIAL_PORT_DFNS
+
+#endif /* __ASMPPC_PRPMC800_SERIAL_H */
--- /dev/null
+/*
+ * arch/ppc/platforms/rpx8260.c
+ *
+ * RPC EP8260 platform support
+ *
+ * Author: Dan Malek <dan@embeddededge.com>
+ * Derived from: pq2ads_setup.c by Kumar
+ *
+ * Copyright 2004 Embedded Edge, LLC
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/seq_file.h>
+
+#include <asm/mpc8260.h>
+#include <asm/machdep.h>
+
+static void (*callback_setup_arch)(void);
+
+extern unsigned char __res[sizeof(bd_t)];
+
+extern void m8260_init(unsigned long r3, unsigned long r4,
+ unsigned long r5, unsigned long r6, unsigned long r7);
+
+static int
+ep8260_show_cpuinfo(struct seq_file *m)
+{
+ bd_t *binfo = (bd_t *)__res;
+
+ seq_printf(m, "vendor\t\t: RPC\n"
+ "machine\t\t: EP8260 PPC\n"
+ "\n"
+ "mem size\t\t: 0x%08x\n"
+ "console baud\t\t: %d\n"
+ "\n",
+ binfo->bi_memsize,
+ binfo->bi_baudrate);
+ return 0;
+}
+
+static void __init
+ep8260_setup_arch(void)
+{
+ printk("RPC EP8260 Port\n");
+ callback_setup_arch();
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic 8260 platform initialization */
+ m8260_init(r3, r4, r5, r6, r7);
+
+ /* Anything special for this platform */
+ ppc_md.show_cpuinfo = ep8260_show_cpuinfo;
+
+ callback_setup_arch = ppc_md.setup_arch;
+ ppc_md.setup_arch = ep8260_setup_arch;
+}
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <byteswap.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <string.h>
+
+void xlate( char * inb, char * trb, unsigned len )
+{
+ unsigned i;
+ for ( i=0; i<len; ++i )
+ {
+ unsigned char c = *inb++; /* unsigned: bytes >= 0x80 must not sign-extend */
+ unsigned char c1 = c >> 4;
+ unsigned char c2 = c & 0xf;
+ if ( c1 > 9 )
+ c1 = c1 + 'A' - 10;
+ else
+ c1 = c1 + '0';
+ if ( c2 > 9 )
+ c2 = c2 + 'A' - 10;
+ else
+ c2 = c2 + '0';
+ *trb++ = c1;
+ *trb++ = c2;
+ }
+ *trb = 0;
+}
+
+#define ElfHeaderSize (64 * 1024)
+#define ElfPages (ElfHeaderSize / 4096)
+
+void get4k( /*istream *inf*/FILE *file, char *buf )
+{
+ unsigned j;
+ unsigned num = fread(buf, 1, 4096, file);
+ for ( j=num; j<4096; ++j )
+ buf[j] = 0;
+}
+
+void put4k( /*ostream *outf*/FILE *file, char *buf )
+{
+ fwrite(buf, 1, 4096, file);
+}
+
+int main(int argc, char **argv)
+{
+ char inbuf[4096];
+ FILE *sysmap = NULL;
+ char* ptr_end = NULL;
+ FILE *inputVmlinux = NULL;
+ FILE *outputVmlinux = NULL;
+ long i = 0;
+ unsigned long sysmapFileLen = 0;
+ unsigned long sysmapLen = 0;
+ unsigned long roundR = 0;
+ unsigned long kernelLen = 0;
+ unsigned long actualKernelLen = 0;
+ unsigned long round = 0;
+ unsigned long roundedKernelLen = 0;
+ unsigned long sysmapStartOffs = 0;
+ unsigned long sysmapPages = 0;
+ unsigned long roundedKernelPages = 0;
+ long padPages = 0;
+ if ( argc < 2 )
+ {
+ fprintf(stderr, "Name of System Map file missing.\n");
+ exit(1);
+ }
+
+ if ( argc < 3 )
+ {
+ fprintf(stderr, "Name of vmlinux file missing.\n");
+ exit(1);
+ }
+
+ if ( argc < 4 )
+ {
+ fprintf(stderr, "Name of vmlinux output file missing.\n");
+ exit(1);
+ }
+
+ sysmap = fopen(argv[1], "r");
+ if ( ! sysmap )
+ {
+ fprintf(stderr, "System Map file \"%s\" failed to open.\n", argv[1]);
+ exit(1);
+ }
+ inputVmlinux = fopen(argv[2], "r");
+ if ( ! inputVmlinux )
+ {
+ fprintf(stderr, "vmlinux file \"%s\" failed to open.\n", argv[2]);
+ exit(1);
+ }
+ outputVmlinux = fopen(argv[3], "w");
+ if ( ! outputVmlinux )
+ {
+ fprintf(stderr, "output vmlinux file \"%s\" failed to open.\n", argv[3]);
+ exit(1);
+ }
+
+
+
+ fseek(inputVmlinux, 0, SEEK_END);
+ kernelLen = ftell(inputVmlinux);
+ fseek(inputVmlinux, 0, SEEK_SET);
+ printf("kernel file size = %ld\n", kernelLen);
+ if ( kernelLen == 0 )
+ {
+ fprintf(stderr, "You must have a linux kernel specified as argv[2]\n");
+ exit(1);
+ }
+
+
+ actualKernelLen = kernelLen - ElfHeaderSize;
+
+ printf("actual kernel length (minus ELF header) = %ld/0x%lx \n", actualKernelLen, actualKernelLen);
+
+ round = actualKernelLen % 4096;
+ roundedKernelLen = actualKernelLen;
+ if ( round )
+ roundedKernelLen += (4096 - round);
+
+ printf("Kernel length rounded up to a 4k multiple = %ld/0x%lx \n", roundedKernelLen, roundedKernelLen);
+ roundedKernelPages = roundedKernelLen / 4096;
+ printf("Kernel pages to copy = %ld/0x%lx\n", roundedKernelPages, roundedKernelPages);
+
+
+
+ /* Sysmap file */
+ fseek(sysmap, 0, SEEK_END);
+ sysmapFileLen = ftell(sysmap);
+ fseek(sysmap, 0, SEEK_SET);
+ printf("%s file size = %ld\n", argv[1], sysmapFileLen);
+
+ sysmapLen = sysmapFileLen;
+
+ roundR = sysmapLen % 4096;
+ if (roundR)
+ {
+ printf("Rounding System Map file up to a multiple of 4096, adding %ld\n", 4096 - roundR);
+ sysmapLen += 4096 - roundR;
+ }
+ printf("Rounded System Map size is %ld\n", sysmapLen);
+
+ /* Process the Sysmap file to determine the true end of the kernel */
+ sysmapPages = sysmapLen / 4096;
+ printf("System map pages to copy = %ld\n", sysmapPages);
+ /* read the whole file line by line; after the loop inbuf holds the last line */
+ while ( fgets(inbuf, 4096, sysmap) ) ;
+ /* search for _end in the last line of the system map */
+ ptr_end = strstr(inbuf, " _end");
+ if (!ptr_end)
+ {
+ fprintf(stderr, "Unable to find _end in the sysmap file \n");
+ fprintf(stderr, "inbuf: \n");
+ fprintf(stderr, "%s \n", inbuf);
+ exit(1);
+ }
+ printf("Found _end in the last line of the sysmap - backing up 10 characters it looks like %s", ptr_end-10);
+ sysmapStartOffs = strtoul(ptr_end-10, NULL, 16);
+ /* calc how many pages we need to insert between the vmlinux and the start of the sysmap */
+ padPages = sysmapStartOffs/4096 - roundedKernelPages;
+
+ /* Check and see if the vmlinux is larger than _end in System.map */
+ if (padPages < 0)
+ { /* vmlinux is larger than _end - adjust the offset to start the embedded system map */
+ sysmapStartOffs = roundedKernelLen;
+ printf("vmlinux is larger than _end indicates it needs to be - sysmapStartOffs = %lx \n", sysmapStartOffs);
+ padPages = 0;
+ printf("will insert %ld pages between the vmlinux and the start of the sysmap \n", padPages);
+ }
+ else
+ { /* _end is larger than vmlinux - use the sysmapStartOffs we calculated from the system map */
+ printf("vmlinux is smaller than _end indicates it needs to be - sysmapStartOffs = %lx \n", sysmapStartOffs);
+ printf("will insert %ld pages between the vmlinux and the start of the sysmap \n", padPages);
+ }
+
+
+
+
+ /* Copy 64K ELF header */
+ for (i=0; i<(ElfPages); ++i)
+ {
+ get4k( inputVmlinux, inbuf );
+ put4k( outputVmlinux, inbuf );
+ }
+
+
+ /* Copy the vmlinux (as full pages). */
+ fseek(inputVmlinux, ElfHeaderSize, SEEK_SET);
+ for ( i=0; i<roundedKernelPages; ++i )
+ {
+ get4k( inputVmlinux, inbuf );
+
+ /* Set the offsets (of the start and end) of the embedded sysmap so it is set in the vmlinux.sm */
+ if ( i == 0 )
+ {
+ /* These are 32-bit header fields (note the bswap_32 below);
+ * storing through an unsigned long * would write 8 bytes on
+ * 64-bit hosts and clobber the adjacent field.
+ */
+ unsigned int * p;
+ printf("Storing embedded_sysmap_start at 0x3c\n");
+ p = (unsigned int *)(inbuf + 0x3c);
+
+#if (BYTE_ORDER == __BIG_ENDIAN)
+ *p = sysmapStartOffs;
+#else
+ *p = bswap_32(sysmapStartOffs);
+#endif
+
+ printf("Storing embedded_sysmap_end at 0x44\n");
+ p = (unsigned int *)(inbuf + 0x44);
+
+#if (BYTE_ORDER == __BIG_ENDIAN)
+ *p = sysmapStartOffs + sysmapFileLen;
+#else
+ *p = bswap_32(sysmapStartOffs + sysmapFileLen);
+#endif
+ }
+
+ put4k( outputVmlinux, inbuf );
+ }
+
+
+ /* Insert any pad pages between the end of the vmlinux and where the system map needs to be. */
+ for (i=0; i<padPages; ++i)
+ {
+ memset(inbuf, 0, 4096);
+ put4k(outputVmlinux, inbuf);
+ }
+
+
+ /* Copy the system map (as full pages). */
+ fseek(sysmap, 0, SEEK_SET); /* start reading from the beginning of the system map */
+ for ( i=0; i<sysmapPages; ++i )
+ {
+ get4k( sysmap, inbuf );
+ put4k( outputVmlinux, inbuf );
+ }
+
+
+ fclose(sysmap);
+ fclose(inputVmlinux);
+ fclose(outputVmlinux);
+ /* Set permission to executable */
+ chmod(argv[3], S_IRUSR|S_IWUSR|S_IXUSR|S_IRGRP|S_IXGRP|S_IROTH|S_IXOTH);
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * linux/arch/ppc/kernel/setup.c
+ *
+ * Copyright (C) 1995 Linus Torvalds
+ * Adapted from 'alpha' version by Gary Thomas
+ * Modified by Cort Dougan (cort@cs.nmt.edu)
+ * Modified by PPC64 Team, IBM Corp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+/*
+ * bootup setup stuff..
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/stddef.h>
+#include <linux/unistd.h>
+#include <linux/slab.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/tty.h>
+#include <linux/major.h>
+#include <linux/interrupt.h>
+#include <linux/reboot.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/console.h>
+#include <linux/pci.h>
+#include <linux/version.h>
+#include <linux/adb.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+
+#include <asm/mmu.h>
+#include <asm/processor.h>
+#include <asm/io.h>
+#include <asm/pgtable.h>
+#include <asm/prom.h>
+#include <asm/rtas.h>
+#include <asm/pci-bridge.h>
+#include <asm/iommu.h>
+#include <asm/dma.h>
+#include <asm/machdep.h>
+#include <asm/irq.h>
+#include <asm/naca.h>
+#include <asm/time.h>
+#include <asm/nvram.h>
+
+#include "i8259.h"
+#include "open_pic.h"
+#include <asm/xics.h>
+#include <asm/ppcdebug.h>
+#include <asm/cputable.h>
+
+void chrp_progress(char *, unsigned short);
+
+extern void pSeries_init_openpic(void);
+
+extern void find_and_init_phbs(void);
+extern void pSeries_final_fixup(void);
+
+extern void pSeries_get_boot_time(struct rtc_time *rtc_time);
+extern void pSeries_get_rtc_time(struct rtc_time *rtc_time);
+extern int pSeries_set_rtc_time(struct rtc_time *rtc_time);
+void pSeries_calibrate_decr(void);
+void fwnmi_init(void);
+extern void SystemReset_FWNMI(void), MachineCheck_FWNMI(void); /* from head.S */
+int fwnmi_active; /* TRUE if an FWNMI handler is present */
+
+dev_t boot_dev;
+ unsigned long virtPython0Facilities = 0; /* python0 facility area (memory mapped io) (64-bit format) VIRTUAL address. */
+
+extern unsigned long loops_per_jiffy;
+
+extern unsigned long ppc_proc_freq;
+extern unsigned long ppc_tb_freq;
+
+void chrp_get_cpuinfo(struct seq_file *m)
+{
+ struct device_node *root;
+ const char *model = "";
+
+ root = of_find_node_by_path("/");
+ if (root)
+ model = get_property(root, "model", NULL);
+ seq_printf(m, "machine\t\t: CHRP %s\n", model);
+ of_node_put(root);
+}
+
+#define I8042_DATA_REG 0x60
+
+void __init chrp_request_regions(void)
+{
+ struct device_node *i8042;
+
+ request_region(0x20,0x20,"pic1");
+ request_region(0xa0,0x20,"pic2");
+ request_region(0x00,0x20,"dma1");
+ request_region(0x40,0x20,"timer");
+ request_region(0x80,0x10,"dma page reg");
+ request_region(0xc0,0x20,"dma2");
+
+ /*
+ * Some machines have an unterminated i8042 so check the device
+ * tree and reserve the region if it does not appear. Later on
+ * the i8042 code will try and reserve this region and fail.
+ */
+ if (!(i8042 = of_find_node_by_type(NULL, "8042")))
+ request_region(I8042_DATA_REG, 16, "reserved (no i8042)");
+ of_node_put(i8042);
+}
+
+void __init
+chrp_setup_arch(void)
+{
+ struct device_node *root;
+ unsigned int *opprop;
+
+ /* openpic global configuration register (64-bit format). */
+ /* openpic Interrupt Source Unit pointer (64-bit format). */
+ /* python0 facility area (mmio) (64-bit format) REAL address. */
+
+ /* init to some ~sane value until calibrate_delay() runs */
+ loops_per_jiffy = 50000000;
+
+ if (ROOT_DEV == 0) {
+ printk("No ramdisk, default root is /dev/sda2\n");
+ ROOT_DEV = Root_SDA2;
+ }
+
+ printk("Boot arguments: %s\n", cmd_line);
+
+ fwnmi_init();
+
+#ifndef CONFIG_PPC_ISERIES
+ /* Find and initialize PCI host bridges */
+ /* iSeries needs to be done much later. */
+ eeh_init();
+ find_and_init_phbs();
+#endif
+
+ /* Find the Open PIC if present */
+ root = of_find_node_by_path("/");
+ opprop = (unsigned int *) get_property(root,
+ "platform-open-pic", NULL);
+ if (opprop != 0) {
+ int n = prom_n_addr_cells(root);
+ unsigned long openpic;
+
+ for (openpic = 0; n > 0; --n)
+ openpic = (openpic << 32) + *opprop++;
+ printk(KERN_DEBUG "OpenPIC addr: %lx\n", openpic);
+ OpenPIC_Addr = __ioremap(openpic, 0x40000, _PAGE_NO_CACHE);
+ }
+ of_node_put(root);
+
+#ifdef CONFIG_DUMMY_CONSOLE
+ conswitchp = &dummy_con;
+#endif
+
+#ifdef CONFIG_PPC_PSERIES
+ pSeries_nvram_init();
+#endif
+}
+
+void __init
+chrp_init2(void)
+{
+ /* Manually leave the kernel version on the panel. */
+ ppc_md.progress("Linux ppc64\n", 0);
+ ppc_md.progress(UTS_RELEASE, 0);
+}
+
+/* Initialize firmware assisted non-maskable interrupts if
+ * the firmware supports this feature.
+ *
+ */
+void __init fwnmi_init(void)
+{
+ int ret;
+ int ibm_nmi_register = rtas_token("ibm,nmi-register");
+ if (ibm_nmi_register == RTAS_UNKNOWN_SERVICE)
+ return;
+ ret = rtas_call(ibm_nmi_register, 2, 1, NULL,
+ __pa((unsigned long)SystemReset_FWNMI),
+ __pa((unsigned long)MachineCheck_FWNMI));
+ if (ret == 0)
+ fwnmi_active = 1;
+}
+
+/* Early initialization. Relocation is on but do not reference unbolted pages */
+void __init pSeries_init_early(void)
+{
+ void *comport;
+
+ hpte_init_pSeries();
+
+ if (ppc64_iommu_off)
+ pci_dma_init_direct();
+ else
+ tce_init_pSeries();
+
+#ifdef CONFIG_SMP
+ smp_init_pSeries();
+#endif
+
+ /* Map the uart for udbg. */
+ comport = (void *)__ioremap(naca->serialPortAddr, 16, _PAGE_NO_CACHE);
+ udbg_init_uart(comport);
+
+ ppc_md.udbg_putc = udbg_putc;
+ ppc_md.udbg_getc = udbg_getc;
+ ppc_md.udbg_getc_poll = udbg_getc_poll;
+}
+
+void __init
+chrp_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ struct device_node * dn;
+ char * hypertas;
+ unsigned int len;
+
+ ppc_md.setup_arch = chrp_setup_arch;
+ ppc_md.get_cpuinfo = chrp_get_cpuinfo;
+ if (naca->interrupt_controller == IC_OPEN_PIC) {
+ ppc_md.init_IRQ = pSeries_init_openpic;
+ ppc_md.get_irq = openpic_get_irq;
+ } else {
+ ppc_md.init_IRQ = xics_init_IRQ;
+ ppc_md.get_irq = xics_get_irq;
+ }
+
+ ppc_md.log_error = pSeries_log_error;
+
+ ppc_md.init = chrp_init2;
+
+ ppc_md.pcibios_fixup = pSeries_final_fixup;
+
+ ppc_md.restart = rtas_restart;
+ ppc_md.power_off = rtas_power_off;
+ ppc_md.halt = rtas_halt;
+ ppc_md.panic = rtas_os_term;
+
+ ppc_md.get_boot_time = pSeries_get_boot_time;
+ ppc_md.get_rtc_time = pSeries_get_rtc_time;
+ ppc_md.set_rtc_time = pSeries_set_rtc_time;
+ ppc_md.calibrate_decr = pSeries_calibrate_decr;
+
+ ppc_md.progress = chrp_progress;
+
+ /* Build up the firmware_features bitmask field
+ * using contents of device-tree/ibm,hypertas-functions.
+ * Ultimately this functionality may be moved into prom.c prom_init().
+ */
+ cur_cpu_spec->firmware_features = 0;
+ dn = of_find_node_by_path("/rtas");
+ if (dn == NULL) {
+ printk(KERN_ERR "WARNING ! Cannot find RTAS in device-tree !\n");
+ goto no_rtas;
+ }
+
+ hypertas = get_property(dn, "ibm,hypertas-functions", &len);
+ if (hypertas) {
+ while (len > 0) {
+ int i, hypertas_len;
+ /* check value against table of strings */
+ for (i = 0; i < FIRMWARE_MAX_FEATURES; i++) {
+ if (firmware_features_table[i].name &&
+ strcmp(firmware_features_table[i].name, hypertas) == 0) {
+ /* we have a match */
+ cur_cpu_spec->firmware_features |=
+ (firmware_features_table[i].val);
+ break;
+ }
+ }
+ hypertas_len = strlen(hypertas);
+ len -= hypertas_len + 1;
+ hypertas += hypertas_len + 1;
+ }
+ }
+
+ of_node_put(dn);
+ no_rtas:
+ printk(KERN_INFO "firmware_features = 0x%lx\n",
+ cur_cpu_spec->firmware_features);
+}
+
+void chrp_progress(char *s, unsigned short hex)
+{
+ struct device_node *root;
+ int width;
+ unsigned int *p;
+ char *os;
+ static int display_character, set_indicator;
+ static int max_width;
+ static spinlock_t progress_lock = SPIN_LOCK_UNLOCKED;
+ static int pending_newline = 0; /* did last write end with unprinted newline? */
+
+ if (!rtas.base)
+ return;
+
+ if (max_width == 0) {
+ if ((root = find_path_device("/rtas")) &&
+ (p = (unsigned int *)get_property(root,
+ "ibm,display-line-length",
+ NULL)))
+ max_width = *p;
+ else
+ max_width = 0x10;
+ display_character = rtas_token("display-character");
+ set_indicator = rtas_token("set-indicator");
+ }
+
+ if (display_character == RTAS_UNKNOWN_SERVICE) {
+ /* use hex display if available */
+ if (set_indicator != RTAS_UNKNOWN_SERVICE)
+ rtas_call(set_indicator, 3, 1, NULL, 6, 0, hex);
+ return;
+ }
+
+ spin_lock(&progress_lock);
+
+ /*
+ * Last write ended with newline, but we didn't print it since
+ * it would just clear the bottom line of output. Print it now
+ * instead.
+ *
+ * If no newline is pending, print a CR to start output at the
+ * beginning of the line.
+ */
+ if (pending_newline) {
+ rtas_call(display_character, 1, 1, NULL, '\r');
+ rtas_call(display_character, 1, 1, NULL, '\n');
+ pending_newline = 0;
+ } else {
+ rtas_call(display_character, 1, 1, NULL, '\r');
+ }
+
+ width = max_width;
+ os = s;
+ while (*os) {
+ if (*os == '\n' || *os == '\r') {
+ /* Blank to end of line. */
+ while (width-- > 0)
+ rtas_call(display_character, 1, 1, NULL, ' ');
+
+ /* If newline is the last character, save it
+ * until next call to avoid bumping up the
+ * display output.
+ */
+ if (*os == '\n' && !os[1]) {
+ pending_newline = 1;
+ spin_unlock(&progress_lock);
+ return;
+ }
+
+ /* RTAS wants CR-LF, not just LF */
+
+ if (*os == '\n') {
+ rtas_call(display_character, 1, 1, NULL, '\r');
+ rtas_call(display_character, 1, 1, NULL, '\n');
+ } else {
+ /* CR might be used to re-draw a line, so we'll
+ * leave it alone and not add LF.
+ */
+ rtas_call(display_character, 1, 1, NULL, *os);
+ }
+
+ width = max_width;
+ } else {
+ width--;
+ rtas_call(display_character, 1, 1, NULL, *os);
+ }
+
+ os++;
+
+ /* if we overwrite the screen length */
+ if (width <= 0)
+ while ((*os != 0) && (*os != '\n') && (*os != '\r'))
+ os++;
+ }
+
+ /* Blank to end of line. */
+ while (width-- > 0)
+ rtas_call(display_character, 1, 1, NULL, ' ');
+
+ spin_unlock(&progress_lock);
+}
+
+extern void setup_default_decr(void);
+
+/* Some sane defaults: 125 MHz timebase, 1GHz processor */
+#define DEFAULT_TB_FREQ 125000000UL
+#define DEFAULT_PROC_FREQ (DEFAULT_TB_FREQ * 8)
+
+void __init pSeries_calibrate_decr(void)
+{
+ struct device_node *cpu;
+ struct div_result divres;
+ unsigned int *fp;
+ int node_found;
+
+ /*
+ * The cpu node should have a timebase-frequency property
+ * to tell us the rate at which the decrementer counts.
+ */
+ cpu = of_find_node_by_type(NULL, "cpu");
+
+ ppc_tb_freq = DEFAULT_TB_FREQ; /* hardcoded default */
+ node_found = 0;
+ if (cpu != 0) {
+ fp = (unsigned int *)get_property(cpu, "timebase-frequency",
+ NULL);
+ if (fp != 0) {
+ node_found = 1;
+ ppc_tb_freq = *fp;
+ }
+ }
+ if (!node_found)
+ printk(KERN_ERR "WARNING: Estimating decrementer frequency "
+ "(not found)\n");
+
+ ppc_proc_freq = DEFAULT_PROC_FREQ;
+ node_found = 0;
+ if (cpu != 0) {
+ fp = (unsigned int *)get_property(cpu, "clock-frequency",
+ NULL);
+ if (fp != 0) {
+ node_found = 1;
+ ppc_proc_freq = *fp;
+ }
+ }
+ if (!node_found)
+ printk(KERN_ERR "WARNING: Estimating processor frequency "
+ "(not found)\n");
+
+ of_node_put(cpu);
+
+ printk(KERN_INFO "time_init: decrementer frequency = %lu.%.6lu MHz\n",
+ ppc_tb_freq/1000000, ppc_tb_freq%1000000);
+ printk(KERN_INFO "time_init: processor frequency = %lu.%.6lu MHz\n",
+ ppc_proc_freq/1000000, ppc_proc_freq%1000000);
+
+ tb_ticks_per_jiffy = ppc_tb_freq / HZ;
+ tb_ticks_per_sec = tb_ticks_per_jiffy * HZ;
+ tb_ticks_per_usec = ppc_tb_freq / 1000000;
+ tb_to_us = mulhwu_scale_factor(ppc_tb_freq, 1000000);
+ div128_by_32(1024*1024, 0, tb_ticks_per_sec, &divres);
+ tb_to_xs = divres.result_low;
+
+ setup_default_decr();
+}
--- /dev/null
+/*
+ * pSeries hashtable management.
+ *
+ * SMP scalability work:
+ * Copyright (C) 2001 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <linux/spinlock.h>
+#include <linux/bitops.h>
+#include <linux/threads.h>
+#include <linux/smp.h>
+
+#include <asm/abs_addr.h>
+#include <asm/machdep.h>
+#include <asm/mmu.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/tlbflush.h>
+#include <asm/tlb.h>
+#include <asm/cputable.h>
+
+#define HPTE_LOCK_BIT 3
+
+static inline void pSeries_lock_hpte(HPTE *hptep)
+{
+ unsigned long *word = &hptep->dw0.dword0;
+
+ while (1) {
+ if (!test_and_set_bit(HPTE_LOCK_BIT, word))
+ break;
+ while(test_bit(HPTE_LOCK_BIT, word))
+ cpu_relax();
+ }
+}
+
+static inline void pSeries_unlock_hpte(HPTE *hptep)
+{
+ unsigned long *word = &hptep->dw0.dword0;
+
+ asm volatile("lwsync":::"memory");
+ clear_bit(HPTE_LOCK_BIT, word);
+}
+
+static spinlock_t pSeries_tlbie_lock = SPIN_LOCK_UNLOCKED;
+
+long pSeries_hpte_insert(unsigned long hpte_group, unsigned long va,
+ unsigned long prpn, int secondary,
+ unsigned long hpteflags, int bolted, int large)
+{
+ unsigned long arpn = physRpn_to_absRpn(prpn);
+ HPTE *hptep = htab_data.htab + hpte_group;
+ Hpte_dword0 dw0;
+ HPTE lhpte;
+ int i;
+
+ for (i = 0; i < HPTES_PER_GROUP; i++) {
+ dw0 = hptep->dw0.dw0;
+
+ if (!dw0.v) {
+ /* retry with lock held */
+ pSeries_lock_hpte(hptep);
+ dw0 = hptep->dw0.dw0;
+ if (!dw0.v)
+ break;
+ pSeries_unlock_hpte(hptep);
+ }
+
+ hptep++;
+ }
+
+ if (i == HPTES_PER_GROUP)
+ return -1;
+
+ lhpte.dw1.dword1 = 0;
+ lhpte.dw1.dw1.rpn = arpn;
+ lhpte.dw1.flags.flags = hpteflags;
+
+ lhpte.dw0.dword0 = 0;
+ lhpte.dw0.dw0.avpn = va >> 23;
+ lhpte.dw0.dw0.h = secondary;
+ lhpte.dw0.dw0.bolted = bolted;
+ lhpte.dw0.dw0.v = 1;
+
+ if (large) {
+ lhpte.dw0.dw0.l = 1;
+ lhpte.dw0.dw0.avpn &= ~0x1UL;
+ }
+
+ hptep->dw1.dword1 = lhpte.dw1.dword1;
+
+ /* Guarantee the second dword is visible before the valid bit */
+ __asm__ __volatile__ ("eieio" : : : "memory");
+
+ /*
+ * Now set the first dword including the valid bit
+ * NOTE: this also unlocks the hpte
+ */
+ hptep->dw0.dword0 = lhpte.dw0.dword0;
+
+ __asm__ __volatile__ ("ptesync" : : : "memory");
+
+ return i | (secondary << 3);
+}
+
+static long pSeries_hpte_remove(unsigned long hpte_group)
+{
+ HPTE *hptep;
+ Hpte_dword0 dw0;
+ int i;
+ int slot_offset;
+
+ /* pick a random entry to start at */
+ slot_offset = mftb() & 0x7;
+
+ for (i = 0; i < HPTES_PER_GROUP; i++) {
+ hptep = htab_data.htab + hpte_group + slot_offset;
+ dw0 = hptep->dw0.dw0;
+
+ if (dw0.v && !dw0.bolted) {
+ /* retry with lock held */
+ pSeries_lock_hpte(hptep);
+ dw0 = hptep->dw0.dw0;
+ if (dw0.v && !dw0.bolted)
+ break;
+ pSeries_unlock_hpte(hptep);
+ }
+
+ slot_offset++;
+ slot_offset &= 0x7;
+ }
+
+ if (i == HPTES_PER_GROUP)
+ return -1;
+
+ /* Invalidate the hpte. NOTE: this also unlocks it */
+ hptep->dw0.dword0 = 0;
+
+ return i;
+}
+
+static inline void set_pp_bit(unsigned long pp, HPTE *addr)
+{
+ unsigned long old;
+ unsigned long *p = &addr->dw1.dword1;
+
+ __asm__ __volatile__(
+ "1: ldarx %0,0,%3\n\
+ rldimi %0,%2,0,61\n\
+ stdcx. %0,0,%3\n\
+ bne 1b"
+ : "=&r" (old), "=m" (*p)
+ : "r" (pp), "r" (p), "m" (*p)
+ : "cc");
+}
+
+/*
+ * Only works on small pages. Yes its ugly to have to check each slot in
+ * the group but we only use this during bootup.
+ */
+static long pSeries_hpte_find(unsigned long vpn)
+{
+ HPTE *hptep;
+ unsigned long hash;
+ unsigned long i, j;
+ long slot;
+ Hpte_dword0 dw0;
+
+ hash = hpt_hash(vpn, 0);
+
+ for (j = 0; j < 2; j++) {
+ slot = (hash & htab_data.htab_hash_mask) * HPTES_PER_GROUP;
+ for (i = 0; i < HPTES_PER_GROUP; i++) {
+ hptep = htab_data.htab + slot;
+ dw0 = hptep->dw0.dw0;
+
+ if ((dw0.avpn == (vpn >> 11)) && dw0.v &&
+ (dw0.h == j)) {
+ /* HPTE matches */
+ if (j)
+ slot = -slot;
+ return slot;
+ }
+ ++slot;
+ }
+ hash = ~hash;
+ }
+
+ return -1;
+}
+
+static long pSeries_hpte_updatepp(unsigned long slot, unsigned long newpp,
+ unsigned long va, int large, int local)
+{
+ HPTE *hptep = htab_data.htab + slot;
+ Hpte_dword0 dw0;
+ unsigned long avpn = va >> 23;
+ int ret = 0;
+
+ if (large)
+ avpn &= ~0x1UL;
+
+ pSeries_lock_hpte(hptep);
+
+ dw0 = hptep->dw0.dw0;
+
+ /* Even if we miss, we need to invalidate the TLB */
+ if ((dw0.avpn != avpn) || !dw0.v) {
+ pSeries_unlock_hpte(hptep);
+ ret = -1;
+ } else {
+ set_pp_bit(newpp, hptep);
+ pSeries_unlock_hpte(hptep);
+ }
+
+ /* Ensure it is out of the tlb too */
+ if ((cur_cpu_spec->cpu_features & CPU_FTR_TLBIEL) && !large && local) {
+ tlbiel(va);
+ } else {
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_lock(&pSeries_tlbie_lock);
+ tlbie(va, large);
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_unlock(&pSeries_tlbie_lock);
+ }
+
+ return ret;
+}
+
+/*
+ * Update the page protection bits. Intended to be used to create
+ * guard pages for kernel data structures on pages which are bolted
+ * in the HPT. Assumes pages being operated on will not be stolen.
+ * Does not work on large pages.
+ *
+ * No need to lock here because we should be the only user.
+ */
+static void pSeries_hpte_updateboltedpp(unsigned long newpp, unsigned long ea)
+{
+ unsigned long vsid, va, vpn, flags;
+ long slot;
+ HPTE *hptep;
+
+ vsid = get_kernel_vsid(ea);
+ va = (vsid << 28) | (ea & 0x0fffffff);
+ vpn = va >> PAGE_SHIFT;
+
+ slot = pSeries_hpte_find(vpn);
+ if (slot == -1)
+ panic("could not find page to bolt\n");
+ hptep = htab_data.htab + slot;
+
+ set_pp_bit(newpp, hptep);
+
+ /* Ensure it is out of the tlb too */
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_lock_irqsave(&pSeries_tlbie_lock, flags);
+ tlbie(va, 0);
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_unlock_irqrestore(&pSeries_tlbie_lock, flags);
+}
+
+static void pSeries_hpte_invalidate(unsigned long slot, unsigned long va,
+ int large, int local)
+{
+ HPTE *hptep = htab_data.htab + slot;
+ Hpte_dword0 dw0;
+ unsigned long avpn = va >> 23;
+ unsigned long flags;
+
+ if (large)
+ avpn &= ~0x1UL;
+
+ local_irq_save(flags);
+ pSeries_lock_hpte(hptep);
+
+ dw0 = hptep->dw0.dw0;
+
+ /* Even if we miss, we need to invalidate the TLB */
+ if ((dw0.avpn != avpn) || !dw0.v) {
+ pSeries_unlock_hpte(hptep);
+ } else {
+ /* Invalidate the hpte. NOTE: this also unlocks it */
+ hptep->dw0.dword0 = 0;
+ }
+
+ /* Invalidate the tlb */
+ if ((cur_cpu_spec->cpu_features & CPU_FTR_TLBIEL) && !large && local) {
+ tlbiel(va);
+ } else {
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_lock(&pSeries_tlbie_lock);
+ tlbie(va, large);
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_unlock(&pSeries_tlbie_lock);
+ }
+ local_irq_restore(flags);
+}
+
+static void pSeries_flush_hash_range(unsigned long context,
+ unsigned long number, int local)
+{
+ unsigned long vsid, vpn, va, hash, secondary, slot, flags, avpn;
+ int i, j;
+ HPTE *hptep;
+ Hpte_dword0 dw0;
+ struct ppc64_tlb_batch *batch = &__get_cpu_var(ppc64_tlb_batch);
+
+ /* XXX fix for large ptes */
+ unsigned long large = 0;
+
+ local_irq_save(flags);
+
+ j = 0;
+ for (i = 0; i < number; i++) {
+ if ((batch->addr[i] >= USER_START) &&
+ (batch->addr[i] <= USER_END))
+ vsid = get_vsid(context, batch->addr[i]);
+ else
+ vsid = get_kernel_vsid(batch->addr[i]);
+
+ va = (vsid << 28) | (batch->addr[i] & 0x0fffffff);
+ batch->vaddr[j] = va;
+ if (large)
+ vpn = va >> LARGE_PAGE_SHIFT;
+ else
+ vpn = va >> PAGE_SHIFT;
+ hash = hpt_hash(vpn, large);
+ secondary = (pte_val(batch->pte[i]) & _PAGE_SECONDARY) >> 15;
+ if (secondary)
+ hash = ~hash;
+ slot = (hash & htab_data.htab_hash_mask) * HPTES_PER_GROUP;
+ slot += (pte_val(batch->pte[i]) & _PAGE_GROUP_IX) >> 12;
+
+ hptep = htab_data.htab + slot;
+
+ avpn = va >> 23;
+ if (large)
+ avpn &= ~0x1UL;
+
+ pSeries_lock_hpte(hptep);
+
+ dw0 = hptep->dw0.dw0;
+
+ /* Even if we miss, we need to invalidate the TLB */
+ if ((dw0.avpn != avpn) || !dw0.v) {
+ pSeries_unlock_hpte(hptep);
+ } else {
+ /* Invalidate the hpte. NOTE: this also unlocks it */
+ hptep->dw0.dword0 = 0;
+ }
+
+ j++;
+ }
+
+ if ((cur_cpu_spec->cpu_features & CPU_FTR_TLBIEL) && !large && local) {
+ asm volatile("ptesync":::"memory");
+
+ for (i = 0; i < j; i++)
+ __tlbiel(batch->vaddr[i]);
+
+ asm volatile("ptesync":::"memory");
+ } else {
+ /* XXX double check that it is safe to take this late */
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_lock(&pSeries_tlbie_lock);
+
+ asm volatile("ptesync":::"memory");
+
+ for (i = 0; i < j; i++)
+ __tlbie(batch->vaddr[i], 0);
+
+ asm volatile("eieio; tlbsync; ptesync":::"memory");
+
+ if (!(cur_cpu_spec->cpu_features & CPU_FTR_LOCKLESS_TLBIE))
+ spin_unlock(&pSeries_tlbie_lock);
+ }
+
+ local_irq_restore(flags);
+}
+
+void hpte_init_pSeries(void)
+{
+ struct device_node *root;
+ const char *model;
+
+ ppc_md.hpte_invalidate = pSeries_hpte_invalidate;
+ ppc_md.hpte_updatepp = pSeries_hpte_updatepp;
+ ppc_md.hpte_updateboltedpp = pSeries_hpte_updateboltedpp;
+ ppc_md.hpte_insert = pSeries_hpte_insert;
+ ppc_md.hpte_remove = pSeries_hpte_remove;
+
+ /* Disable TLB batching on nighthawk */
+ root = of_find_node_by_path("/");
+ if (root) {
+ model = get_property(root, "model", NULL);
+ if (model && !strcmp(model, "CHRP IBM,9076-N81")) {
+ of_node_put(root);
+ return;
+ }
+ of_node_put(root);
+ }
+
+ ppc_md.flush_hash_range = pSeries_flush_hash_range;
+}
--- /dev/null
+/*
+ * arch/ppc64/kernel/pmac_iommu.c
+ *
+ * Copyright (C) 2004 Olof Johansson <olof@austin.ibm.com>, IBM Corporation
+ *
+ * Based on pSeries_iommu.c:
+ * Copyright (C) 2001 Mike Corrigan & Dave Engebretsen, IBM Corporation
+ * Copyright (C) 2004 Olof Johansson <olof@austin.ibm.com>, IBM Corporation
+ *
+ * Dynamic DMA mapping support, PowerMac G5 (DART)-specific parts.
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/slab.h>
+#include <linux/mm.h>
+#include <linux/spinlock.h>
+#include <linux/string.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/vmalloc.h>
+#include <asm/io.h>
+#include <asm/prom.h>
+#include <asm/rtas.h>
+#include <asm/ppcdebug.h>
+#include <asm/iommu.h>
+#include <asm/pci-bridge.h>
+#include <asm/machdep.h>
+#include <asm/abs_addr.h>
+#include <asm/cacheflush.h>
+#include "pci.h"
+
+
+/* physical base of DART registers */
+#define DART_BASE 0xf8033000UL
+
+/* Offset from base to control register */
+#define DARTCNTL 0
+/* Offset from base to exception register */
+#define DARTEXCP 0x10
+/* Offset from base to TLB tag registers */
+#define DARTTAG 0x1000
+
+
+/* Control Register fields */
+
+/* base address of table (pfn) */
+#define DARTCNTL_BASE_MASK 0xfffff
+#define DARTCNTL_BASE_SHIFT 12
+
+#define DARTCNTL_FLUSHTLB 0x400
+#define DARTCNTL_ENABLE 0x200
+
+/* size of table in pages */
+#define DARTCNTL_SIZE_MASK 0x1ff
+#define DARTCNTL_SIZE_SHIFT 0
+
+/* DART table fields */
+#define DARTMAP_VALID 0x80000000
+#define DARTMAP_RPNMASK 0x00ffffff
+
+/* Physical base address and size of the DART table */
+unsigned long dart_tablebase;
+unsigned long dart_tablesize;
+
+/* Virtual base address of the DART table */
+static u32 *dart_vbase;
+
+/* Mapped base address for the dart */
+static unsigned int *dart;
+
+/* Dummy val that entries are set to when unused */
+static unsigned int dart_emptyval;
+
+static struct iommu_table iommu_table_pmac;
+static int dart_dirty;
+
+#define DBG(...)
+
+static inline void dart_tlb_invalidate_all(void)
+{
+ unsigned long l = 0;
+ unsigned int reg;
+ unsigned long limit;
+
+ DBG("dart: flush\n");
+
+ /* To invalidate the DART, set the DARTCNTL_FLUSHTLB bit in the
+ * control register and wait for it to clear.
+ *
+ * Gotcha: Sometimes, the DART won't detect that the bit gets
+ * set. If so, clear it and set it again.
+ */
+
+ limit = 0;
+
+retry:
+ reg = in_be32((unsigned int *)dart+DARTCNTL);
+ reg |= DARTCNTL_FLUSHTLB;
+ out_be32((unsigned int *)dart+DARTCNTL, reg);
+
+ l = 0;
+ while ((in_be32((unsigned int *)dart+DARTCNTL) & DARTCNTL_FLUSHTLB) &&
+ l < (1L<<limit)) {
+ l++;
+ }
+ if (l == (1L<<limit)) {
+ if (limit < 4) {
+ limit++;
+ reg = in_be32((unsigned int *)dart+DARTCNTL);
+ reg &= ~DARTCNTL_FLUSHTLB;
+ out_be32((unsigned int *)dart+DARTCNTL, reg);
+ goto retry;
+ } else
+ panic("U3-DART: TLB did not flush after waiting a long "
+ "time. Buggy U3 ?");
+ }
+}
+
+static void dart_flush(struct iommu_table *tbl)
+{
+ if (dart_dirty)
+ dart_tlb_invalidate_all();
+ dart_dirty = 0;
+}
+
+static void dart_build_pmac(struct iommu_table *tbl, long index,
+ long npages, unsigned long uaddr,
+ enum dma_data_direction direction)
+{
+ unsigned int *dp;
+ unsigned int rpn;
+
+ DBG("dart: build at: %lx, %lx, addr: %x\n", index, npages, uaddr);
+
+ dp = ((unsigned int*)tbl->it_base) + index;
+
+ /* On pmac all memory is contiguous, so the virt_to_abs()
+ * translation could be hoisted out of the loop.
+ */
+ while (npages--) {
+ rpn = virt_to_abs(uaddr) >> PAGE_SHIFT;
+
+ *(dp++) = DARTMAP_VALID | (rpn & DARTMAP_RPNMASK);
+
+ rpn++;
+ uaddr += PAGE_SIZE;
+ }
+
+ dart_dirty = 1;
+}
+
+
+static void dart_free_pmac(struct iommu_table *tbl, long index, long npages)
+{
+ unsigned int *dp;
+
+ /* We don't worry about flushing the TLB cache. The only drawback of
+ * not doing it is that we won't catch buggy device drivers doing
+ * bad DMAs, but then no 32-bit architecture ever does either.
+ */
+
+ DBG("dart: free at: %lx, %lx\n", index, npages);
+
+ dp = ((unsigned int *)tbl->it_base) + index;
+
+ while (npages--)
+ *(dp++) = dart_emptyval;
+}
+
+
+static int dart_init(struct device_node *dart_node)
+{
+ unsigned int regword;
+ unsigned int i;
+ unsigned long tmp;
+ struct page *p;
+
+ if (dart_tablebase == 0 || dart_tablesize == 0) {
+ printk(KERN_INFO "U3-DART: table not allocated, using direct DMA\n");
+ return -ENODEV;
+ }
+
+ /* Make sure nothing from the DART range remains in the CPU cache
+ * from a previous mapping that existed before the kernel took
+ * over
+ */
+ flush_dcache_phys_range(dart_tablebase, dart_tablebase + dart_tablesize);
+
+ /* Allocate a spare page to map all invalid DART pages. We need to do
+ * that to work around what looks like a problem with the HT bridge
+ * prefetching into invalid pages and corrupting data
+ */
+ tmp = __get_free_pages(GFP_ATOMIC, 1);
+ if (tmp == 0)
+ panic("U3-DART: Cannot allocate spare page !");
+ dart_emptyval = DARTMAP_VALID |
+ ((virt_to_abs(tmp) >> PAGE_SHIFT) & DARTMAP_RPNMASK);
+
+ /* Map in DART registers. FIXME: Use device node to get base address */
+ dart = ioremap(DART_BASE, 0x7000);
+ if (dart == NULL)
+ panic("U3-DART: Cannot map registers !");
+
+ /* Set initial control register contents: table base,
+ * table size and enable bit
+ */
+ regword = DARTCNTL_ENABLE |
+ ((dart_tablebase >> PAGE_SHIFT) << DARTCNTL_BASE_SHIFT) |
+ (((dart_tablesize >> PAGE_SHIFT) & DARTCNTL_SIZE_MASK)
+ << DARTCNTL_SIZE_SHIFT);
+ p = virt_to_page(dart_tablebase);
+ dart_vbase = ioremap(virt_to_abs(dart_tablebase), dart_tablesize);
+
+ /* Fill initial table */
+ for (i = 0; i < dart_tablesize/4; i++)
+ dart_vbase[i] = dart_emptyval;
+
+ /* Initialize DART with table base and enable it. */
+ out_be32((unsigned int *)dart, regword);
+
+ /* Invalidate DART to get rid of possible stale TLBs */
+ dart_tlb_invalidate_all();
+
+ iommu_table_pmac.it_busno = 0;
+
+ /* Units of tce entries */
+ iommu_table_pmac.it_offset = 0;
+
+ /* Set the tce table size - measured in pages */
+ iommu_table_pmac.it_size = dart_tablesize >> PAGE_SHIFT;
+
+ /* Initialize the common IOMMU code */
+ iommu_table_pmac.it_base = (unsigned long)dart_vbase;
+ iommu_table_pmac.it_index = 0;
+ iommu_table_pmac.it_blocksize = 1;
+ iommu_table_pmac.it_entrysize = sizeof(u32);
+ iommu_init_table(&iommu_table_pmac);
+
+ /* Reserve the last page of the DART to avoid possible prefetch
+ * past the DART mapped area
+ */
+ set_bit(iommu_table_pmac.it_mapsize - 1, iommu_table_pmac.it_map);
+
+ printk(KERN_INFO "U3-DART IOMMU initialized\n");
+
+ return 0;
+}
+
+
+void iommu_setup_pmac(void)
+{
+ struct pci_dev *dev = NULL;
+ struct device_node *dn;
+
+ /* Find the DART in the device-tree */
+ dn = of_find_compatible_node(NULL, "dart", "u3-dart");
+ if (dn == NULL)
+ return;
+
+ /* Setup low level TCE operations for the core IOMMU code */
+ ppc_md.tce_build = dart_build_pmac;
+ ppc_md.tce_free = dart_free_pmac;
+ ppc_md.tce_flush = dart_flush;
+
+ /* Initialize the DART HW */
+ if (dart_init(dn))
+ return;
+
+ /* Setup pci_dma ops */
+ pci_iommu_init();
+
+ /* We only have one iommu table on the mac for now, which makes
+ * things simple. Set up all PCI devices to point to this table
+ */
+ while ((dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL) {
+ /* We must use pci_device_to_OF_node() to make sure that
+ * we get the real "final" pointer to the device in the
+ * pci_dev sysdata and not the temporary PHB one
+ */
+ struct device_node *dn = pci_device_to_OF_node(dev);
+ if (dn)
+ dn->iommu_table = &iommu_table_pmac;
+ }
+}
+
+
+
+
--- /dev/null
+/*
+ * PowerPC64 Segment Translation Support.
+ *
+ * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
+ * Copyright (c) 2001 Dave Engebretsen
+ *
+ * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/mmu_context.h>
+#include <asm/paca.h>
+#include <asm/naca.h>
+#include <asm/cputable.h>
+
+static int make_ste(unsigned long stab, unsigned long esid,
+ unsigned long vsid);
+
+void slb_initialize(void);
+
+/*
+ * Build an entry for the base kernel segment and put it into
+ * the segment table or SLB. All other segment table or SLB
+ * entries are faulted in.
+ */
+void stab_initialize(unsigned long stab)
+{
+ unsigned long vsid = get_kernel_vsid(KERNELBASE);
+
+ if (cur_cpu_spec->cpu_features & CPU_FTR_SLB) {
+ slb_initialize();
+ } else {
+ asm volatile("isync; slbia; isync":::"memory");
+ make_ste(stab, GET_ESID(KERNELBASE), vsid);
+
+ /* Order update */
+ asm volatile("sync":::"memory");
+ }
+}
+
+/* Both the segment table and SLB code use the following cache */
+#define NR_STAB_CACHE_ENTRIES 8
+DEFINE_PER_CPU(long, stab_cache_ptr);
+DEFINE_PER_CPU(long, stab_cache[NR_STAB_CACHE_ENTRIES]);
+
+/*
+ * Segment table stuff
+ */
+
+/*
+ * Create a segment table entry for the given esid/vsid pair.
+ */
+static int make_ste(unsigned long stab, unsigned long esid, unsigned long vsid)
+{
+ unsigned long entry, group, old_esid, castout_entry, i;
+ unsigned int global_entry;
+ STE *ste, *castout_ste;
+ unsigned long kernel_segment = (REGION_ID(esid << SID_SHIFT) !=
+ USER_REGION_ID);
+
+ /* Search the primary group first. */
+ global_entry = (esid & 0x1f) << 3;
+ ste = (STE *)(stab | ((esid & 0x1f) << 7));
+
+ /* Find an empty entry, if one exists. */
+ for (group = 0; group < 2; group++) {
+ for (entry = 0; entry < 8; entry++, ste++) {
+ if (!(ste->dw0.dw0.v)) {
+ ste->dw0.dword0 = 0;
+ ste->dw1.dword1 = 0;
+ ste->dw1.dw1.vsid = vsid;
+ ste->dw0.dw0.esid = esid;
+ ste->dw0.dw0.kp = 1;
+ if (!kernel_segment)
+ ste->dw0.dw0.ks = 1;
+ asm volatile("eieio":::"memory");
+ ste->dw0.dw0.v = 1;
+ return (global_entry | entry);
+ }
+ }
+ /* Now search the secondary group. */
+ global_entry = ((~esid) & 0x1f) << 3;
+ ste = (STE *)(stab | (((~esid) & 0x1f) << 7));
+ }
+
+ /*
+ * Could not find an empty entry; pick one using round-robin selection.
+ * Search all entries in the two groups.
+ */
+ castout_entry = get_paca()->stab_rr;
+ for (i = 0; i < 16; i++) {
+ if (castout_entry < 8) {
+ global_entry = (esid & 0x1f) << 3;
+ ste = (STE *)(stab | ((esid & 0x1f) << 7));
+ castout_ste = ste + castout_entry;
+ } else {
+ global_entry = ((~esid) & 0x1f) << 3;
+ ste = (STE *)(stab | (((~esid) & 0x1f) << 7));
+ castout_ste = ste + (castout_entry - 8);
+ }
+
+ /* Don't cast out the first kernel segment */
+ if (castout_ste->dw0.dw0.esid != GET_ESID(KERNELBASE))
+ break;
+
+ castout_entry = (castout_entry + 1) & 0xf;
+ }
+
+ get_paca()->stab_rr = (castout_entry + 1) & 0xf;
+
+ /* Modify the old entry to the new value. */
+
+ /* Force previous translations to complete. DRENG */
+ asm volatile("isync" : : : "memory");
+
+ castout_ste->dw0.dw0.v = 0;
+ asm volatile("sync" : : : "memory"); /* Order update */
+
+ castout_ste->dw0.dword0 = 0;
+ castout_ste->dw1.dword1 = 0;
+ castout_ste->dw1.dw1.vsid = vsid;
+ old_esid = castout_ste->dw0.dw0.esid;
+ castout_ste->dw0.dw0.esid = esid;
+ castout_ste->dw0.dw0.kp = 1;
+ if (!kernel_segment)
+ castout_ste->dw0.dw0.ks = 1;
+ asm volatile("eieio" : : : "memory"); /* Order update */
+ castout_ste->dw0.dw0.v = 1;
+ asm volatile("slbie %0" : : "r" (old_esid << SID_SHIFT));
+ /* Ensure completion of slbie */
+ asm volatile("sync" : : : "memory");
+
+ return (global_entry | (castout_entry & 0x7));
+}
+
+static inline void __ste_allocate(unsigned long esid, unsigned long vsid)
+{
+ unsigned char stab_entry;
+ unsigned long offset;
+ int region_id = REGION_ID(esid << SID_SHIFT);
+
+ stab_entry = make_ste(get_paca()->stab_addr, esid, vsid);
+
+ if (region_id != USER_REGION_ID)
+ return;
+
+ offset = __get_cpu_var(stab_cache_ptr);
+ if (offset < NR_STAB_CACHE_ENTRIES)
+ __get_cpu_var(stab_cache[offset++]) = stab_entry;
+ else
+ offset = NR_STAB_CACHE_ENTRIES+1;
+ __get_cpu_var(stab_cache_ptr) = offset;
+}
+
+/*
+ * Allocate a segment table entry for the given ea.
+ */
+int ste_allocate(unsigned long ea)
+{
+ unsigned long vsid, esid;
+ mm_context_t context;
+
+ /* Check for invalid effective addresses. */
+ if (!IS_VALID_EA(ea))
+ return 1;
+
+ /* Kernel or user address? */
+ if (REGION_ID(ea) >= KERNEL_REGION_ID) {
+ vsid = get_kernel_vsid(ea);
+ context = KERNEL_CONTEXT(ea);
+ } else {
+ if (!current->mm)
+ return 1;
+
+ context = current->mm->context;
+ vsid = get_vsid(context.id, ea);
+ }
+
+ esid = GET_ESID(ea);
+ __ste_allocate(esid, vsid);
+ /* Order update */
+ asm volatile("sync":::"memory");
+
+ return 0;
+}
+
+/*
+ * Preload some userspace segments into the segment table.
+ */
+static void preload_stab(struct task_struct *tsk, struct mm_struct *mm)
+{
+ unsigned long pc = KSTK_EIP(tsk);
+ unsigned long stack = KSTK_ESP(tsk);
+ unsigned long unmapped_base;
+ unsigned long pc_esid = GET_ESID(pc);
+ unsigned long stack_esid = GET_ESID(stack);
+ unsigned long unmapped_base_esid;
+ unsigned long vsid;
+
+ if (test_tsk_thread_flag(tsk, TIF_32BIT))
+ unmapped_base = TASK_UNMAPPED_BASE_USER32;
+ else
+ unmapped_base = TASK_UNMAPPED_BASE_USER64;
+
+ unmapped_base_esid = GET_ESID(unmapped_base);
+
+ if (!IS_VALID_EA(pc) || (REGION_ID(pc) >= KERNEL_REGION_ID))
+ return;
+ vsid = get_vsid(mm->context.id, pc);
+ __ste_allocate(pc_esid, vsid);
+
+ if (pc_esid == stack_esid)
+ return;
+
+ if (!IS_VALID_EA(stack) || (REGION_ID(stack) >= KERNEL_REGION_ID))
+ return;
+ vsid = get_vsid(mm->context.id, stack);
+ __ste_allocate(stack_esid, vsid);
+
+ if (pc_esid == unmapped_base_esid || stack_esid == unmapped_base_esid)
+ return;
+
+ if (!IS_VALID_EA(unmapped_base) ||
+ (REGION_ID(unmapped_base) >= KERNEL_REGION_ID))
+ return;
+ vsid = get_vsid(mm->context.id, unmapped_base);
+ __ste_allocate(unmapped_base_esid, vsid);
+
+ /* Order update */
+ asm volatile("sync" : : : "memory");
+}
+
+/* Flush all user entries from the segment table of the current processor. */
+void flush_stab(struct task_struct *tsk, struct mm_struct *mm)
+{
+ STE *stab = (STE *) get_paca()->stab_addr;
+ STE *ste;
+ unsigned long offset = __get_cpu_var(stab_cache_ptr);
+
+ /* Force previous translations to complete. DRENG */
+ asm volatile("isync" : : : "memory");
+
+ if (offset <= NR_STAB_CACHE_ENTRIES) {
+ int i;
+
+ for (i = 0; i < offset; i++) {
+ ste = stab + __get_cpu_var(stab_cache[i]);
+ ste->dw0.dw0.v = 0;
+ }
+ } else {
+ unsigned long entry;
+
+ /* Invalidate all entries. */
+ ste = stab;
+
+ /* Never flush the first entry. */
+ ste += 1;
+ for (entry = 1;
+ entry < (PAGE_SIZE / sizeof(STE));
+ entry++, ste++) {
+ unsigned long ea;
+ ea = ste->dw0.dw0.esid << SID_SHIFT;
+ if (ea < KERNELBASE) {
+ ste->dw0.dw0.v = 0;
+ }
+ }
+ }
+
+ asm volatile("sync; slbia; sync":::"memory");
+
+ __get_cpu_var(stab_cache_ptr) = 0;
+
+ preload_stab(tsk, mm);
+}
#include <linux/ptrace.h>
#include <linux/aio_abi.h>
#include <linux/elf.h>
+#include <linux/vs_cvirt.h>
#include <net/scm.h>
#include <net/sock.h>
# $1 - kernel version
# $2 - kernel image file
# $3 - kernel map file
-# $4 - default install path (blank if root directory)
+# $4 - kernel type file
+# $5 - default install path (blank if root directory)
#
# User may have a custom install script
# Default install - same as make zlilo
-if [ -f $4/vmlinuz ]; then
- mv $4/vmlinuz $4/vmlinuz.old
+if [ -f $5/vmlinuz ]; then
+ mv $5/vmlinuz $5/vmlinuz.old
fi
-if [ -f $4/System.map ]; then
- mv $4/System.map $4/System.old
+if [ -f $5/System.map ]; then
+ mv $5/System.map $5/System.old
fi
-cat $2 > $4/vmlinuz
-cp $3 $4/System.map
+if [ -f $5/Kerntypes ]; then
+ mv $5/Kerntypes $5/Kerntypes.old
+fi
+
+cat $2 > $5/vmlinuz
+cp $3 $5/System.map
+
+# copy the kernel type file if it exists
+if [ -f $4 ]; then
+ cp $4 $5/Kerntypes
+fi
--- /dev/null
+/* U3copy_in_user.S: UltraSparc-III optimized memcpy.
+ *
+ * Copyright (C) 1999, 2000, 2004 David S. Miller (davem@redhat.com)
+ */
+
+#include <asm/visasm.h>
+#include <asm/asi.h>
+#include <asm/dcu.h>
+#include <asm/spitfire.h>
+
+#define XCC xcc
+
+#define EXNV(x,y,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: retl; \
+ a, b, %o0; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+#define EXNV1(x,y,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: a, b, %o0; \
+ retl; \
+ add %o0, 1, %o0; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+#define EXNV4(x,y,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: a, b, %o0; \
+ retl; \
+ add %o0, 4, %o0; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+#define EXNV8(x,y,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: a, b, %o0; \
+ retl; \
+ add %o0, 8, %o0; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+
+ .register %g2,#scratch
+ .register %g3,#scratch
+
+ .text
+ .align 32
+
+ /* Don't try to get too fancy here, just nice and
+ * simple. This is predominantly used for well aligned
+ * small copies in the compat layer. It is also used
+ * to copy register windows around during thread cloning.
+ */
+
+ .globl U3copy_in_user
+U3copy_in_user: /* %o0=dst, %o1=src, %o2=len */
+ /* Writing to %asi is _expensive_ so we hardcode it.
+ * Reading %asi to check for KERNEL_DS is comparatively
+ * cheap.
+ */
+ rd %asi, %g1
+ cmp %g1, ASI_AIUS
+ bne,pn %icc, U3memcpy_user_stub
+ nop
+
+ cmp %o2, 0
+ be,pn %XCC, out
+ or %o0, %o1, %o3
+ cmp %o2, 16
+ bleu,a,pn %XCC, small_copy
+ or %o3, %o2, %o3
+
+medium_copy: /* len > 16 */
+ andcc %o3, 0x7, %g0
+ bne,pn %XCC, small_copy_unaligned
+ sub %o0, %o1, %o3
+
+medium_copy_aligned:
+ andn %o2, 0x7, %o4
+ and %o2, 0x7, %o2
+1: subcc %o4, 0x8, %o4
+ EXNV8(ldxa [%o1] %asi, %o5, add %o4, %o2)
+ EXNV8(stxa %o5, [%o1 + %o3] ASI_AIUS, add %o4, %o2)
+ bgu,pt %XCC, 1b
+ add %o1, 0x8, %o1
+ andcc %o2, 0x4, %g0
+ be,pt %XCC, 1f
+ nop
+ sub %o2, 0x4, %o2
+ EXNV4(lduwa [%o1] %asi, %o5, add %o4, %o2)
+ EXNV4(stwa %o5, [%o1 + %o3] ASI_AIUS, add %o4, %o2)
+ add %o1, 0x4, %o1
+1: cmp %o2, 0
+ be,pt %XCC, out
+ nop
+ ba,pt %xcc, small_copy_unaligned
+ nop
+
+small_copy: /* 0 < len <= 16 */
+ andcc %o3, 0x3, %g0
+ bne,pn %XCC, small_copy_unaligned
+ sub %o0, %o1, %o3
+
+small_copy_aligned:
+ subcc %o2, 4, %o2
+ EXNV4(lduwa [%o1] %asi, %g1, add %o2, %g0)
+ EXNV4(stwa %g1, [%o1 + %o3] ASI_AIUS, add %o2, %g0)
+ bgu,pt %XCC, small_copy_aligned
+ add %o1, 4, %o1
+
+out: retl
+ clr %o0
+
+ .align 32
+small_copy_unaligned:
+ subcc %o2, 1, %o2
+ EXNV1(lduba [%o1] %asi, %g1, add %o2, %g0)
+ EXNV1(stba %g1, [%o1 + %o3] ASI_AIUS, add %o2, %g0)
+ bgu,pt %XCC, small_copy_unaligned
+ add %o1, 1, %o1
+ retl
+ clr %o0
--- /dev/null
+/* $Id: VIScopy.S,v 1.27 2002/02/09 19:49:30 davem Exp $
+ * VIScopy.S: High speed copy operations utilizing the UltraSparc
+ * Visual Instruction Set.
+ *
+ * Copyright (C) 1997 David S. Miller (davem@caip.rutgers.edu)
+ * Copyright (C) 1996, 1997, 1998, 1999 Jakub Jelinek (jj@ultra.linux.cz)
+ */
+
+#include "VIS.h"
+
+ /* VIS code can be used for numerous copy/set operation variants.
+ * It can be made to work in the kernel, one single instance,
+ * for all of memcpy, copy_to_user, and copy_from_user by setting
+ * the ASI src/dest globals correctly. Furthermore it can
+ * be used for kernel-->kernel page copies as well, a hook label
+ * is put in here just for this purpose.
+ *
+ * For userland, compiling this without __KERNEL__ defined makes
+ * it work just fine as a generic libc bcopy and memcpy.
+ * If it is compiled for userland with a 32-bit gcc (but you need
+ * -Wa,-Av9a for as), the code will rely only on the lower 32 bits
+ * of the IEU registers; if you compile it with a 64-bit gcc (i.e.
+ * with __sparc_v9__ defined), the code will use the full 64 bits.
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/visasm.h>
+#include <asm/thread_info.h>
+
+#define FPU_CLEAN_RETL \
+ ldub [%g6 + TI_CURRENT_DS], %o1; \
+ VISExit \
+ clr %o0; \
+ retl; \
+ wr %o1, %g0, %asi;
+#define FPU_RETL \
+ ldub [%g6 + TI_CURRENT_DS], %o1; \
+ VISExit \
+ clr %o0; \
+ retl; \
+ wr %o1, %g0, %asi;
+#define NORMAL_RETL \
+ ldub [%g6 + TI_CURRENT_DS], %o1; \
+ clr %o0; \
+ retl; \
+ wr %o1, %g0, %asi;
+#define EX(x,y,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: ba VIScopyfixup_ret; \
+ a, b, %o1; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+#define EX2(x,y,c,d,e,a,b) \
+98: x,y; \
+ .section .fixup; \
+ .align 4; \
+99: c, d, e; \
+ ba VIScopyfixup_ret; \
+ a, b, %o1; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, 99b; \
+ .text; \
+ .align 4;
+#define EXO2(x,y) \
+98: x,y; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, VIScopyfixup_reto2; \
+ .text; \
+ .align 4;
+#define EXVISN(x,y,n) \
+98: x,y; \
+ .section __ex_table; \
+ .align 4; \
+ .word 98b, VIScopyfixup_vis##n; \
+ .text; \
+ .align 4;
+#define EXT(start,end,handler) \
+ .section __ex_table; \
+ .align 4; \
+ .word start, 0, end, handler; \
+ .text; \
+ .align 4;
+#else
+#ifdef REGS_64BIT
+#define FPU_CLEAN_RETL \
+ retl; \
+ mov %g6, %o0;
+#define FPU_RETL \
+ retl; \
+ mov %g6, %o0;
+#else
+#define FPU_CLEAN_RETL \
+ wr %g0, FPRS_FEF, %fprs; \
+ retl; \
+ mov %g6, %o0;
+#define FPU_RETL \
+ wr %g0, FPRS_FEF, %fprs; \
+ retl; \
+ mov %g6, %o0;
+#endif
+#define NORMAL_RETL \
+ retl; \
+ mov %g6, %o0;
+#define EX(x,y,a,b) x,y
+#define EX2(x,y,c,d,e,a,b) x,y
+#define EXO2(x,y) x,y
+#define EXVISN(x,y,n) x,y
+#define EXT(a,b,c)
+#endif
+#define EXVIS(x,y) EXVISN(x,y,0)
+#define EXVIS1(x,y) EXVISN(x,y,1)
+#define EXVIS2(x,y) EXVISN(x,y,2)
+#define EXVIS3(x,y) EXVISN(x,y,3)
+#define EXVIS4(x,y) EXVISN(x,y,4)
+
+#define FREG_FROB(f1, f2, f3, f4, f5, f6, f7, f8, f9) \
+ faligndata %f1, %f2, %f48; \
+ faligndata %f2, %f3, %f50; \
+ faligndata %f3, %f4, %f52; \
+ faligndata %f4, %f5, %f54; \
+ faligndata %f5, %f6, %f56; \
+ faligndata %f6, %f7, %f58; \
+ faligndata %f7, %f8, %f60; \
+ faligndata %f8, %f9, %f62;
+
+#define MAIN_LOOP_CHUNK(src, dest, fdest, fsrc, len, jmptgt) \
+ EXVIS(LDBLK [%src] ASIBLK, %fdest); \
+ ASI_SETDST_BLK \
+ EXVIS(STBLK %fsrc, [%dest] ASIBLK); \
+ add %src, 0x40, %src; \
+ subcc %len, 0x40, %len; \
+ be,pn %xcc, jmptgt; \
+ add %dest, 0x40, %dest; \
+ ASI_SETSRC_BLK
+
+#define LOOP_CHUNK1(src, dest, len, branch_dest) \
+ MAIN_LOOP_CHUNK(src, dest, f0, f48, len, branch_dest)
+#define LOOP_CHUNK2(src, dest, len, branch_dest) \
+ MAIN_LOOP_CHUNK(src, dest, f16, f48, len, branch_dest)
+#define LOOP_CHUNK3(src, dest, len, branch_dest) \
+ MAIN_LOOP_CHUNK(src, dest, f32, f48, len, branch_dest)
+
+#define STORE_SYNC(dest, fsrc) \
+ EXVIS(STBLK %fsrc, [%dest] ASIBLK); \
+ add %dest, 0x40, %dest;
+
+#ifdef __KERNEL__
+#define STORE_JUMP(dest, fsrc, target) \
+ srl asi_dest, 3, %g5; \
+ EXVIS2(STBLK %fsrc, [%dest] ASIBLK); \
+ xor asi_dest, ASI_BLK_XOR1, asi_dest;\
+ add %dest, 0x40, %dest; \
+ xor asi_dest, %g5, asi_dest; \
+ ba,pt %xcc, target;
+#else
+#define STORE_JUMP(dest, fsrc, target) \
+ EXVIS2(STBLK %fsrc, [%dest] ASIBLK); \
+ add %dest, 0x40, %dest; \
+ ba,pt %xcc, target;
+#endif
+
+#ifndef __KERNEL__
+#define VISLOOP_PAD nop; nop; nop; nop; \
+ nop; nop; nop; nop; \
+ nop; nop; nop; nop; \
+ nop; nop; nop;
+#else
+#define VISLOOP_PAD
+#endif
+
+#define FINISH_VISCHUNK(dest, f0, f1, left) \
+ ASI_SETDST_NOBLK \
+ subcc %left, 8, %left; \
+ bl,pn %xcc, vis_out; \
+ faligndata %f0, %f1, %f48; \
+ EXVIS3(STDF %f48, [%dest] ASINORMAL); \
+ add %dest, 8, %dest;
+
+#define UNEVEN_VISCHUNK_LAST(dest, f0, f1, left) \
+ subcc %left, 8, %left; \
+ bl,pn %xcc, vis_out; \
+ fsrc1 %f0, %f1;
+#define UNEVEN_VISCHUNK(dest, f0, f1, left) \
+ UNEVEN_VISCHUNK_LAST(dest, f0, f1, left) \
+ ba,a,pt %xcc, vis_out_slk;
+
+ /* Macros for non-VIS memcpy code. */
+#ifdef REGS_64BIT
+
+#define MOVE_BIGCHUNK(src, dst, offset, t0, t1, t2, t3) \
+ ASI_SETSRC_NOBLK \
+ LDX [%src + offset + 0x00] ASINORMAL, %t0; \
+ LDX [%src + offset + 0x08] ASINORMAL, %t1; \
+ LDX [%src + offset + 0x10] ASINORMAL, %t2; \
+ LDX [%src + offset + 0x18] ASINORMAL, %t3; \
+ ASI_SETDST_NOBLK \
+ STW %t0, [%dst + offset + 0x04] ASINORMAL; \
+ srlx %t0, 32, %t0; \
+ STW %t0, [%dst + offset + 0x00] ASINORMAL; \
+ STW %t1, [%dst + offset + 0x0c] ASINORMAL; \
+ srlx %t1, 32, %t1; \
+ STW %t1, [%dst + offset + 0x08] ASINORMAL; \
+ STW %t2, [%dst + offset + 0x14] ASINORMAL; \
+ srlx %t2, 32, %t2; \
+ STW %t2, [%dst + offset + 0x10] ASINORMAL; \
+ STW %t3, [%dst + offset + 0x1c] ASINORMAL; \
+ srlx %t3, 32, %t3; \
+ STW %t3, [%dst + offset + 0x18] ASINORMAL;
+
+#define MOVE_BIGALIGNCHUNK(src, dst, offset, t0, t1, t2, t3) \
+ ASI_SETSRC_NOBLK \
+ LDX [%src + offset + 0x00] ASINORMAL, %t0; \
+ LDX [%src + offset + 0x08] ASINORMAL, %t1; \
+ LDX [%src + offset + 0x10] ASINORMAL, %t2; \
+ LDX [%src + offset + 0x18] ASINORMAL, %t3; \
+ ASI_SETDST_NOBLK \
+ STX %t0, [%dst + offset + 0x00] ASINORMAL; \
+ STX %t1, [%dst + offset + 0x08] ASINORMAL; \
+ STX %t2, [%dst + offset + 0x10] ASINORMAL; \
+ STX %t3, [%dst + offset + 0x18] ASINORMAL; \
+ ASI_SETSRC_NOBLK \
+ LDX [%src + offset + 0x20] ASINORMAL, %t0; \
+ LDX [%src + offset + 0x28] ASINORMAL, %t1; \
+ LDX [%src + offset + 0x30] ASINORMAL, %t2; \
+ LDX [%src + offset + 0x38] ASINORMAL, %t3; \
+ ASI_SETDST_NOBLK \
+ STX %t0, [%dst + offset + 0x20] ASINORMAL; \
+ STX %t1, [%dst + offset + 0x28] ASINORMAL; \
+ STX %t2, [%dst + offset + 0x30] ASINORMAL; \
+ STX %t3, [%dst + offset + 0x38] ASINORMAL;
+
+#define MOVE_LASTCHUNK(src, dst, offset, t0, t1, t2, t3) \
+ ASI_SETSRC_NOBLK \
+ LDX [%src - offset - 0x10] ASINORMAL, %t0; \
+ LDX [%src - offset - 0x08] ASINORMAL, %t1; \
+ ASI_SETDST_NOBLK \
+ STW %t0, [%dst - offset - 0x0c] ASINORMAL; \
+ srlx %t0, 32, %t2; \
+ STW %t2, [%dst - offset - 0x10] ASINORMAL; \
+ STW %t1, [%dst - offset - 0x04] ASINORMAL; \
+ srlx %t1, 32, %t3; \
+ STW %t3, [%dst - offset - 0x08] ASINORMAL;
+
+#define MOVE_LASTALIGNCHUNK(src, dst, offset, t0, t1) \
+ ASI_SETSRC_NOBLK \
+ LDX [%src - offset - 0x10] ASINORMAL, %t0; \
+ LDX [%src - offset - 0x08] ASINORMAL, %t1; \
+ ASI_SETDST_NOBLK \
+ STX %t0, [%dst - offset - 0x10] ASINORMAL; \
+ STX %t1, [%dst - offset - 0x08] ASINORMAL;
+
+#else /* !REGS_64BIT */
+
+#define MOVE_BIGCHUNK(src, dst, offset, t0, t1, t2, t3) \
+ lduw [%src + offset + 0x00], %t0; \
+ lduw [%src + offset + 0x04], %t1; \
+ lduw [%src + offset + 0x08], %t2; \
+ lduw [%src + offset + 0x0c], %t3; \
+ stw %t0, [%dst + offset + 0x00]; \
+ stw %t1, [%dst + offset + 0x04]; \
+ stw %t2, [%dst + offset + 0x08]; \
+ stw %t3, [%dst + offset + 0x0c]; \
+ lduw [%src + offset + 0x10], %t0; \
+ lduw [%src + offset + 0x14], %t1; \
+ lduw [%src + offset + 0x18], %t2; \
+ lduw [%src + offset + 0x1c], %t3; \
+ stw %t0, [%dst + offset + 0x10]; \
+ stw %t1, [%dst + offset + 0x14]; \
+ stw %t2, [%dst + offset + 0x18]; \
+ stw %t3, [%dst + offset + 0x1c];
+
+#define MOVE_LASTCHUNK(src, dst, offset, t0, t1, t2, t3) \
+ lduw [%src - offset - 0x10], %t0; \
+ lduw [%src - offset - 0x0c], %t1; \
+ lduw [%src - offset - 0x08], %t2; \
+ lduw [%src - offset - 0x04], %t3; \
+ stw %t0, [%dst - offset - 0x10]; \
+ stw %t1, [%dst - offset - 0x0c]; \
+ stw %t2, [%dst - offset - 0x08]; \
+ stw %t3, [%dst - offset - 0x04];
+
+#endif /* !REGS_64BIT */
+
+#ifdef __KERNEL__
+ .section __ex_table,#alloc
+ .section .fixup,#alloc,#execinstr
+#endif
+
+ .text
+ .align 32
+ .globl memcpy
+ .type memcpy,@function
+
+ .globl bcopy
+ .type bcopy,@function
+
+#ifdef __KERNEL__
+memcpy_private:
+memcpy: mov ASI_P, asi_src ! IEU0 Group
+ brnz,pt %o2, __memcpy_entry ! CTI
+ mov ASI_P, asi_dest ! IEU1
+ retl
+ clr %o0
+
+ .align 32
+ .globl __copy_from_user
+ .type __copy_from_user,@function
+__copy_from_user:rd %asi, asi_src ! IEU0 Group
+ brnz,pt %o2, __memcpy_entry ! CTI
+ mov ASI_P, asi_dest ! IEU1
+
+ .globl __copy_to_user
+ .type __copy_to_user,@function
+__copy_to_user: mov ASI_P, asi_src ! IEU0 Group
+ brnz,pt %o2, __memcpy_entry ! CTI
+ rd %asi, asi_dest ! IEU1
+ retl ! CTI Group
+ clr %o0 ! IEU0 Group
+
+ .globl __copy_in_user
+ .type __copy_in_user,@function
+__copy_in_user: rd %asi, asi_src ! IEU0 Group
+ brnz,pt %o2, __memcpy_entry ! CTI
+ mov asi_src, asi_dest ! IEU1
+ retl ! CTI Group
+ clr %o0 ! IEU0 Group
+#endif
+
+bcopy: or %o0, 0, %g3 ! IEU0 Group
+ addcc %o1, 0, %o0 ! IEU1
+ brgez,pt %o2, memcpy_private ! CTI
+ or %g3, 0, %o1 ! IEU0 Group
+ retl ! CTI Group brk forced
+ clr %o0 ! IEU0
+
+
+#ifdef __KERNEL__
+#define BRANCH_ALWAYS 0x10680000
+#define NOP 0x01000000
+#define ULTRA3_DO_PATCH(OLD, NEW) \
+ sethi %hi(NEW), %g1; \
+ or %g1, %lo(NEW), %g1; \
+ sethi %hi(OLD), %g2; \
+ or %g2, %lo(OLD), %g2; \
+ sub %g1, %g2, %g1; \
+ sethi %hi(BRANCH_ALWAYS), %g3; \
+ srl %g1, 2, %g1; \
+ or %g3, %lo(BRANCH_ALWAYS), %g3; \
+ or %g3, %g1, %g3; \
+ stw %g3, [%g2]; \
+ sethi %hi(NOP), %g3; \
+ or %g3, %lo(NOP), %g3; \
+ stw %g3, [%g2 + 0x4]; \
+ flush %g2;
+
+ .globl cheetah_patch_copyops
+cheetah_patch_copyops:
+ ULTRA3_DO_PATCH(memcpy, U3memcpy)
+ ULTRA3_DO_PATCH(__copy_from_user, U3copy_from_user)
+ ULTRA3_DO_PATCH(__copy_to_user, U3copy_to_user)
+ ULTRA3_DO_PATCH(__copy_in_user, U3copy_in_user)
+ retl
+ nop
+#undef BRANCH_ALWAYS
+#undef NOP
+#undef ULTRA3_DO_PATCH
+#endif /* __KERNEL__ */
+
+ .align 32
+#ifdef __KERNEL__
+ andcc %o0, 7, %g2 ! IEU1 Group
+#endif
+VIS_enter:
+ be,pt %xcc, dest_is_8byte_aligned ! CTI
+#ifdef __KERNEL__
+ nop ! IEU0 Group
+#else
+ andcc %o0, 0x38, %g5 ! IEU1 Group
+#endif
+do_dest_8byte_align:
+ mov 8, %g1 ! IEU0
+ sub %g1, %g2, %g2 ! IEU0 Group
+ andcc %o0, 1, %g0 ! IEU1
+ be,pt %icc, 2f ! CTI
+ sub %o2, %g2, %o2 ! IEU0 Group
+1: ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUB [%o1] ASINORMAL, %o5,
+ add %o2, %g2) ! Load Group
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %g2, 1, %g2 ! IEU1 Group
+ be,pn %xcc, 3f ! CTI
+ EX2(STB %o5, [%o0 - 1] ASINORMAL,
+ add %g2, 1, %g2,
+ add %o2, %g2) ! Store
+2: ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUB [%o1] ASINORMAL, %o5,
+ add %o2, %g2) ! Load Group
+ add %o0, 2, %o0 ! IEU0
+ EX2(LDUB [%o1 + 1] ASINORMAL, %g3,
+ sub %o0, 2, %o0,
+ add %o2, %g2) ! Load Group
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %g2, 2, %g2 ! IEU1 Group
+ EX2(STB %o5, [%o0 - 2] ASINORMAL,
+ add %g2, 2, %g2,
+ add %o2, %g2) ! Store
+ add %o1, 2, %o1 ! IEU0
+ bne,pt %xcc, 2b ! CTI Group
+ EX2(STB %g3, [%o0 - 1] ASINORMAL,
+ add %g2, 1, %g2,
+ add %o2, %g2) ! Store
+#ifdef __KERNEL__
+3:
+dest_is_8byte_aligned:
+ VISEntry
+ andcc %o0, 0x38, %g5 ! IEU1 Group
+#else
+3: andcc %o0, 0x38, %g5 ! IEU1 Group
+dest_is_8byte_aligned:
+#endif
+ be,pt %icc, dest_is_64byte_aligned ! CTI
+ mov 64, %g1 ! IEU0
+ fmovd %f0, %f2 ! FPU
+ sub %g1, %g5, %g5 ! IEU0 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ alignaddr %o1, %g0, %g1 ! GRU Group
+ EXO2(LDDF [%g1] ASINORMAL, %f4) ! Load Group
+ sub %o2, %g5, %o2 ! IEU0
+1: EX(LDDF [%g1 + 0x8] ASINORMAL, %f6,
+ add %o2, %g5) ! Load Group
+ add %g1, 0x8, %g1 ! IEU0 Group
+ subcc %g5, 8, %g5 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f4, %f6, %f0 ! GRU Group
+ EX2(STDF %f0, [%o0] ASINORMAL,
+ add %g5, 8, %g5,
+ add %o2, %g5) ! Store
+ add %o1, 8, %o1 ! IEU0 Group
+ be,pn %xcc, dest_is_64byte_aligned ! CTI
+ add %o0, 8, %o0 ! IEU1
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDDF [%g1 + 0x8] ASINORMAL, %f4,
+ add %o2, %g5) ! Load Group
+ add %g1, 8, %g1 ! IEU0
+ subcc %g5, 8, %g5 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f6, %f4, %f0 ! GRU Group
+ EX2(STDF %f0, [%o0] ASINORMAL,
+ add %g5, 8, %g5,
+ add %o2, %g5) ! Store
+ add %o1, 8, %o1 ! IEU0
+ ASI_SETSRC_NOBLK ! LSU Group
+ bne,pt %xcc, 1b ! CTI Group
+ add %o0, 8, %o0 ! IEU0
+dest_is_64byte_aligned:
+ membar #LoadStore | #StoreStore | #StoreLoad ! LSU Group
+#ifndef __KERNEL__
+ wr %g0, ASI_BLK_P, %asi ! LSU Group
+#endif
+ subcc %o2, 0x40, %g7 ! IEU1 Group
+ mov %o1, %g1 ! IEU0
+ andncc %g7, (0x40 - 1), %g7 ! IEU1 Group
+ srl %g1, 3, %g2 ! IEU0
+ sub %o2, %g7, %g3 ! IEU0 Group
+ andn %o1, (0x40 - 1), %o1 ! IEU1
+ and %g2, 7, %g2 ! IEU0 Group
+ andncc %g3, 0x7, %g3 ! IEU1
+ fmovd %f0, %f2 ! FPU
+ sub %g3, 0x10, %g3 ! IEU0 Group
+ sub %o2, %g7, %o2 ! IEU1
+#ifdef __KERNEL__
+ or asi_src, ASI_BLK_OR, asi_src ! IEU0 Group
+ or asi_dest, ASI_BLK_OR, asi_dest ! IEU1
+#endif
+ alignaddr %g1, %g0, %g0 ! GRU Group
+ add %g1, %g7, %g1 ! IEU0 Group
+ subcc %o2, %g3, %o2 ! IEU1
+ ASI_SETSRC_BLK ! LSU Group
+ EXVIS1(LDBLK [%o1 + 0x00] ASIBLK, %f0) ! LSU Group
+ add %g1, %g3, %g1 ! IEU0
+ EXVIS1(LDBLK [%o1 + 0x40] ASIBLK, %f16) ! LSU Group
+ sub %g7, 0x80, %g7 ! IEU0
+ EXVIS(LDBLK [%o1 + 0x80] ASIBLK, %f32) ! LSU Group
+#ifdef __KERNEL__
+vispc: sll %g2, 9, %g2 ! IEU0 Group
+ sethi %hi(vis00), %g5 ! IEU1
+ or %g5, %lo(vis00), %g5 ! IEU0 Group
+ jmpl %g5 + %g2, %g0 ! CTI Group brk forced
+ addcc %o1, 0xc0, %o1 ! IEU1 Group
+#else
+ ! Clk1 Group 8-(
+ ! Clk2 Group 8-(
+ ! Clk3 Group 8-(
+ ! Clk4 Group 8-(
+vispc: rd %pc, %g5 ! PDU Group 8-(
+ addcc %g5, %lo(vis00 - vispc), %g5 ! IEU1 Group
+ sll %g2, 9, %g2 ! IEU0
+ jmpl %g5 + %g2, %g0 ! CTI Group brk forced
+ addcc %o1, 0xc0, %o1 ! IEU1 Group
+#endif
+ .align 512 /* OK, here comes the fun part... */
+vis00:FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16) LOOP_CHUNK1(o1, o0, g7, vis01)
+ FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32) LOOP_CHUNK2(o1, o0, g7, vis02)
+ FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0) LOOP_CHUNK3(o1, o0, g7, vis03)
+ b,pt %xcc, vis00+4; faligndata %f0, %f2, %f48
+vis01:FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0) STORE_JUMP(o0, f48, finish_f0) membar #Sync
+vis02:FREG_FROB(f32,f34,f36,f38,f40,f42,f44,f46,f0) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16) STORE_JUMP(o0, f48, finish_f16) membar #Sync
+vis03:FREG_FROB(f0, f2, f4, f6, f8, f10,f12,f14,f16) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f16,f18,f20,f22,f24,f26,f28,f30,f32) STORE_JUMP(o0, f48, finish_f32) membar #Sync
+ VISLOOP_PAD
+vis10:FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18) LOOP_CHUNK1(o1, o0, g7, vis11)
+ FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34) LOOP_CHUNK2(o1, o0, g7, vis12)
+ FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2) LOOP_CHUNK3(o1, o0, g7, vis13)
+ b,pt %xcc, vis10+4; faligndata %f2, %f4, %f48
+vis11:FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2) STORE_JUMP(o0, f48, finish_f2) membar #Sync
+vis12:FREG_FROB(f34,f36,f38,f40,f42,f44,f46,f0, f2) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18) STORE_JUMP(o0, f48, finish_f18) membar #Sync
+vis13:FREG_FROB(f2, f4, f6, f8, f10,f12,f14,f16,f18) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f18,f20,f22,f24,f26,f28,f30,f32,f34) STORE_JUMP(o0, f48, finish_f34) membar #Sync
+ VISLOOP_PAD
+vis20:FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20) LOOP_CHUNK1(o1, o0, g7, vis21)
+ FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36) LOOP_CHUNK2(o1, o0, g7, vis22)
+ FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4) LOOP_CHUNK3(o1, o0, g7, vis23)
+ b,pt %xcc, vis20+4; faligndata %f4, %f6, %f48
+vis21:FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4) STORE_JUMP(o0, f48, finish_f4) membar #Sync
+vis22:FREG_FROB(f36,f38,f40,f42,f44,f46,f0, f2, f4) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20) STORE_JUMP(o0, f48, finish_f20) membar #Sync
+vis23:FREG_FROB(f4, f6, f8, f10,f12,f14,f16,f18,f20) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f20,f22,f24,f26,f28,f30,f32,f34,f36) STORE_JUMP(o0, f48, finish_f36) membar #Sync
+ VISLOOP_PAD
+vis30:FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22) LOOP_CHUNK1(o1, o0, g7, vis31)
+ FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38) LOOP_CHUNK2(o1, o0, g7, vis32)
+ FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6) LOOP_CHUNK3(o1, o0, g7, vis33)
+ b,pt %xcc, vis30+4; faligndata %f6, %f8, %f48
+vis31:FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6) STORE_JUMP(o0, f48, finish_f6) membar #Sync
+vis32:FREG_FROB(f38,f40,f42,f44,f46,f0, f2, f4, f6) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22) STORE_JUMP(o0, f48, finish_f22) membar #Sync
+vis33:FREG_FROB(f6, f8, f10,f12,f14,f16,f18,f20,f22) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f22,f24,f26,f28,f30,f32,f34,f36,f38) STORE_JUMP(o0, f48, finish_f38) membar #Sync
+ VISLOOP_PAD
+vis40:FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24) LOOP_CHUNK1(o1, o0, g7, vis41)
+ FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40) LOOP_CHUNK2(o1, o0, g7, vis42)
+ FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8) LOOP_CHUNK3(o1, o0, g7, vis43)
+ b,pt %xcc, vis40+4; faligndata %f8, %f10, %f48
+vis41:FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8) STORE_JUMP(o0, f48, finish_f8) membar #Sync
+vis42:FREG_FROB(f40,f42,f44,f46,f0, f2, f4, f6, f8) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24) STORE_JUMP(o0, f48, finish_f24) membar #Sync
+vis43:FREG_FROB(f8, f10,f12,f14,f16,f18,f20,f22,f24) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f24,f26,f28,f30,f32,f34,f36,f38,f40) STORE_JUMP(o0, f48, finish_f40) membar #Sync
+ VISLOOP_PAD
+vis50:FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26) LOOP_CHUNK1(o1, o0, g7, vis51)
+ FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42) LOOP_CHUNK2(o1, o0, g7, vis52)
+ FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10) LOOP_CHUNK3(o1, o0, g7, vis53)
+ b,pt %xcc, vis50+4; faligndata %f10, %f12, %f48
+vis51:FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10) STORE_JUMP(o0, f48, finish_f10) membar #Sync
+vis52:FREG_FROB(f42,f44,f46,f0, f2, f4, f6, f8, f10) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26) STORE_JUMP(o0, f48, finish_f26) membar #Sync
+vis53:FREG_FROB(f10,f12,f14,f16,f18,f20,f22,f24,f26) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f26,f28,f30,f32,f34,f36,f38,f40,f42) STORE_JUMP(o0, f48, finish_f42) membar #Sync
+ VISLOOP_PAD
+vis60:FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28) LOOP_CHUNK1(o1, o0, g7, vis61)
+ FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44) LOOP_CHUNK2(o1, o0, g7, vis62)
+ FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12) LOOP_CHUNK3(o1, o0, g7, vis63)
+ b,pt %xcc, vis60+4; faligndata %f12, %f14, %f48
+vis61:FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12) STORE_JUMP(o0, f48, finish_f12) membar #Sync
+vis62:FREG_FROB(f44,f46,f0, f2, f4, f6, f8, f10,f12) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28) STORE_JUMP(o0, f48, finish_f28) membar #Sync
+vis63:FREG_FROB(f12,f14,f16,f18,f20,f22,f24,f26,f28) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f28,f30,f32,f34,f36,f38,f40,f42,f44) STORE_JUMP(o0, f48, finish_f44) membar #Sync
+ VISLOOP_PAD
+vis70:FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30) LOOP_CHUNK1(o1, o0, g7, vis71)
+ FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46) LOOP_CHUNK2(o1, o0, g7, vis72)
+ FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14) LOOP_CHUNK3(o1, o0, g7, vis73)
+ b,pt %xcc, vis70+4; faligndata %f14, %f16, %f48
+vis71:FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14) STORE_JUMP(o0, f48, finish_f14) membar #Sync
+vis72:FREG_FROB(f46,f0, f2, f4, f6, f8, f10,f12,f14) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30) STORE_JUMP(o0, f48, finish_f30) membar #Sync
+vis73:FREG_FROB(f14,f16,f18,f20,f22,f24,f26,f28,f30) STORE_SYNC(o0, f48) membar #Sync
+ FREG_FROB(f30,f32,f34,f36,f38,f40,f42,f44,f46) STORE_JUMP(o0, f48, finish_f46) membar #Sync
+ VISLOOP_PAD
+finish_f0: FINISH_VISCHUNK(o0, f0, f2, g3)
+finish_f2: FINISH_VISCHUNK(o0, f2, f4, g3)
+finish_f4: FINISH_VISCHUNK(o0, f4, f6, g3)
+finish_f6: FINISH_VISCHUNK(o0, f6, f8, g3)
+finish_f8: FINISH_VISCHUNK(o0, f8, f10, g3)
+finish_f10: FINISH_VISCHUNK(o0, f10, f12, g3)
+finish_f12: FINISH_VISCHUNK(o0, f12, f14, g3)
+finish_f14: UNEVEN_VISCHUNK(o0, f14, f0, g3)
+finish_f16: FINISH_VISCHUNK(o0, f16, f18, g3)
+finish_f18: FINISH_VISCHUNK(o0, f18, f20, g3)
+finish_f20: FINISH_VISCHUNK(o0, f20, f22, g3)
+finish_f22: FINISH_VISCHUNK(o0, f22, f24, g3)
+finish_f24: FINISH_VISCHUNK(o0, f24, f26, g3)
+finish_f26: FINISH_VISCHUNK(o0, f26, f28, g3)
+finish_f28: FINISH_VISCHUNK(o0, f28, f30, g3)
+finish_f30: UNEVEN_VISCHUNK(o0, f30, f0, g3)
+finish_f32: FINISH_VISCHUNK(o0, f32, f34, g3)
+finish_f34: FINISH_VISCHUNK(o0, f34, f36, g3)
+finish_f36: FINISH_VISCHUNK(o0, f36, f38, g3)
+finish_f38: FINISH_VISCHUNK(o0, f38, f40, g3)
+finish_f40: FINISH_VISCHUNK(o0, f40, f42, g3)
+finish_f42: FINISH_VISCHUNK(o0, f42, f44, g3)
+finish_f44: FINISH_VISCHUNK(o0, f44, f46, g3)
+finish_f46: UNEVEN_VISCHUNK_LAST(o0, f46, f0, g3)
+vis_out_slk:
+#ifdef __KERNEL__
+ srl asi_src, 3, %g5 ! IEU0 Group
+ xor asi_src, ASI_BLK_XOR1, asi_src ! IEU1
+ xor asi_src, %g5, asi_src ! IEU0 Group
+#endif
+vis_slk:ASI_SETSRC_NOBLK ! LSU Group
+ EXVIS3(LDDF [%o1] ASINORMAL, %f2) ! Load Group
+ add %o1, 8, %o1 ! IEU0
+ subcc %g3, 8, %g3 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f0, %f2, %f8 ! GRU Group
+ EXVIS4(STDF %f8, [%o0] ASINORMAL) ! Store
+ bl,pn %xcc, vis_out_slp ! CTI
+ add %o0, 8, %o0 ! IEU0 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EXVIS3(LDDF [%o1] ASINORMAL, %f0) ! Load Group
+ add %o1, 8, %o1 ! IEU0
+ subcc %g3, 8, %g3 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f2, %f0, %f8 ! GRU Group
+ EXVIS4(STDF %f8, [%o0] ASINORMAL) ! Store
+ bge,pt %xcc, vis_slk ! CTI
+ add %o0, 8, %o0 ! IEU0 Group
+vis_out_slp:
+#ifdef __KERNEL__
+ brz,pt %o2, vis_ret ! CTI Group
+ mov %g1, %o1 ! IEU0
+ ba,pt %xcc, vis_slp+4 ! CTI Group
+ ASI_SETSRC_NOBLK ! LSU Group
+#endif
+vis_out:brz,pt %o2, vis_ret ! CTI Group
+ mov %g1, %o1 ! IEU0
+#ifdef __KERNEL__
+ srl asi_src, 3, %g5 ! IEU0 Group
+ xor asi_src, ASI_BLK_XOR1, asi_src ! IEU1
+ xor asi_src, %g5, asi_src ! IEU0 Group
+#endif
+vis_slp:ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUB [%o1] ASINORMAL, %g5) ! LOAD
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %o2, 1, %o2 ! IEU1
+ bne,pt %xcc, vis_slp ! CTI
+ EX(STB %g5, [%o0 - 1] ASINORMAL,
+ add %o2, 1) ! Store Group
+vis_ret:membar #StoreLoad | #StoreStore ! LSU Group
+ FPU_CLEAN_RETL
+
+
+__memcpy_short:
+ andcc %o2, 1, %g0 ! IEU1 Group
+ be,pt %icc, 2f ! CTI
+1: ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUB [%o1] ASINORMAL, %g5) ! LOAD Group
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %o2, 1, %o2 ! IEU1 Group
+ be,pn %xcc, short_ret ! CTI
+ EX(STB %g5, [%o0 - 1] ASINORMAL,
+ add %o2, 1) ! Store
+2: ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUB [%o1] ASINORMAL, %g5) ! LOAD Group
+ add %o0, 2, %o0 ! IEU0
+ EX2(LDUB [%o1 + 1] ASINORMAL, %o5,
+ sub %o0, 2, %o0,
+ add %o2, %g0) ! LOAD Group
+ add %o1, 2, %o1 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %o2, 2, %o2 ! IEU1 Group
+ EX(STB %g5, [%o0 - 2] ASINORMAL,
+ add %o2, 2) ! Store
+ bne,pt %xcc, 2b ! CTI
+ EX(STB %o5, [%o0 - 1] ASINORMAL,
+ add %o2, 1) ! Store
+short_ret:
+ NORMAL_RETL
+
+#ifndef __KERNEL__
+memcpy_private:
+memcpy:
+#ifndef REGS_64BIT
+ srl %o2, 0, %o2 ! IEU1 Group
+#endif
+ brz,pn %o2, short_ret ! CTI Group
+ mov %o0, %g6 ! IEU0
+#endif
+__memcpy_entry:
+ cmp %o2, 15 ! IEU1 Group
+ bleu,pn %xcc, __memcpy_short ! CTI
+ cmp %o2, (64 * 6) ! IEU1 Group
+ bgeu,pn %xcc, VIS_enter ! CTI
+ andcc %o0, 7, %g2 ! IEU1 Group
+ sub %o0, %o1, %g5 ! IEU0
+ andcc %g5, 3, %o5 ! IEU1 Group
+ bne,pn %xcc, memcpy_noVIS_misaligned ! CTI
+ andcc %o1, 3, %g0 ! IEU1 Group
+#ifdef REGS_64BIT
+ be,a,pt %xcc, 3f ! CTI
+ andcc %o1, 4, %g0 ! IEU1 Group
+ andcc %o1, 1, %g0 ! IEU1 Group
+#else /* !REGS_64BIT */
+ be,pt %xcc, 5f ! CTI
+ andcc %o1, 1, %g0 ! IEU1 Group
+#endif /* !REGS_64BIT */
+ be,pn %xcc, 4f ! CTI
+ andcc %o1, 2, %g0 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUB [%o1] ASINORMAL, %g2) ! Load Group
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ sub %o2, 1, %o2 ! IEU0 Group
+ ASI_SETDST_NOBLK ! LSU Group
+ bne,pn %xcc, 5f ! CTI Group
+ EX(STB %g2, [%o0 - 1] ASINORMAL,
+ add %o2, 1) ! Store
+4: ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUH [%o1] ASINORMAL, %g2) ! Load Group
+ add %o1, 2, %o1 ! IEU0
+ add %o0, 2, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ sub %o2, 2, %o2 ! IEU0
+ EX(STH %g2, [%o0 - 2] ASINORMAL,
+ add %o2, 2) ! Store Group + bubble
+#ifdef REGS_64BIT
+5: andcc %o1, 4, %g0 ! IEU1
+3: be,a,pn %xcc, 2f ! CTI
+ andcc %o2, -128, %g7 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EXO2(LDUW [%o1] ASINORMAL, %g5) ! Load Group
+ add %o1, 4, %o1 ! IEU0
+ add %o0, 4, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ sub %o2, 4, %o2 ! IEU0 Group
+ EX(STW %g5, [%o0 - 4] ASINORMAL,
+ add %o2, 4) ! Store
+ andcc %o2, -128, %g7 ! IEU1 Group
+2: be,pn %xcc, 3f ! CTI
+ andcc %o0, 4, %g0 ! IEU1 Group
+ be,pn %xcc, 82f + 4 ! CTI Group
+#else /* !REGS_64BIT */
+5: andcc %o2, -128, %g7 ! IEU1
+ be,a,pn %xcc, 41f ! CTI
+ andcc %o2, 0x70, %g7 ! IEU1 Group
+#endif /* !REGS_64BIT */
+5: MOVE_BIGCHUNK(o1, o0, 0x00, g1, g3, g5, o5)
+ MOVE_BIGCHUNK(o1, o0, 0x20, g1, g3, g5, o5)
+ MOVE_BIGCHUNK(o1, o0, 0x40, g1, g3, g5, o5)
+ MOVE_BIGCHUNK(o1, o0, 0x60, g1, g3, g5, o5)
+ EXT(5b,35f,VIScopyfixup1)
+35: subcc %g7, 128, %g7 ! IEU1 Group
+ add %o1, 128, %o1 ! IEU0
+ bne,pt %xcc, 5b ! CTI
+ add %o0, 128, %o0 ! IEU0 Group
+3: andcc %o2, 0x70, %g7 ! IEU1 Group
+41: be,pn %xcc, 80f ! CTI
+ andcc %o2, 8, %g0 ! IEU1 Group
+#ifdef __KERNEL__
+79: sethi %hi(80f), %o5 ! IEU0
+ sll %g7, 1, %g5 ! IEU0 Group
+ add %o1, %g7, %o1 ! IEU1
+ srl %g7, 1, %g2 ! IEU0 Group
+ sub %o5, %g5, %o5 ! IEU1
+ sub %o5, %g2, %o5 ! IEU0 Group
+ jmpl %o5 + %lo(80f), %g0 ! CTI Group brk forced
+ add %o0, %g7, %o0 ! IEU0 Group
+#else
+ ! Clk1 8-(
+ ! Clk2 8-(
+ ! Clk3 8-(
+ ! Clk4 8-(
+79: rd %pc, %o5 ! PDU Group
+ sll %g7, 1, %g5 ! IEU0 Group
+ add %o1, %g7, %o1 ! IEU1
+ sub %o5, %g5, %o5 ! IEU0 Group
+ jmpl %o5 + %lo(80f - 79b), %g0 ! CTI Group brk forced
+ add %o0, %g7, %o0 ! IEU0 Group
+#endif
+36: MOVE_LASTCHUNK(o1, o0, 0x60, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x50, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x40, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x30, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x20, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x10, g2, g3, g5, o5)
+ MOVE_LASTCHUNK(o1, o0, 0x00, g2, g3, g5, o5)
+ EXT(36b,80f,VIScopyfixup2)
+80: be,pt %xcc, 81f ! CTI
+ andcc %o2, 4, %g0 ! IEU1
+#ifdef REGS_64BIT
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDX [%o1] ASINORMAL, %g2,
+ and %o2, 0xf) ! Load Group
+ add %o0, 8, %o0 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ EX(STW %g2, [%o0 - 0x4] ASINORMAL,
+ and %o2, 0xf) ! Store Group
+ add %o1, 8, %o1 ! IEU1
+ srlx %g2, 32, %g2 ! IEU0 Group
+ EX2(STW %g2, [%o0 - 0x8] ASINORMAL,
+ and %o2, 0xf, %o2,
+ sub %o2, 4) ! Store
+#else /* !REGS_64BIT */
+ lduw [%o1], %g2 ! Load Group
+ add %o0, 8, %o0 ! IEU0
+ lduw [%o1 + 0x4], %g3 ! Load Group
+ add %o1, 8, %o1 ! IEU0
+ stw %g2, [%o0 - 0x8] ! Store Group
+ stw %g3, [%o0 - 0x4] ! Store Group
+#endif /* !REGS_64BIT */
+81: be,pt %xcc, 1f ! CTI
+ andcc %o2, 2, %g0 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUW [%o1] ASINORMAL, %g2,
+ and %o2, 0x7) ! Load Group
+ add %o1, 4, %o1 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ EX(STW %g2, [%o0] ASINORMAL,
+ and %o2, 0x7) ! Store Group
+ add %o0, 4, %o0 ! IEU0
+1: be,pt %xcc, 1f ! CTI
+ andcc %o2, 1, %g0 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUH [%o1] ASINORMAL, %g2,
+ and %o2, 0x3) ! Load Group
+ add %o1, 2, %o1 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ EX(STH %g2, [%o0] ASINORMAL,
+ and %o2, 0x3) ! Store Group
+ add %o0, 2, %o0 ! IEU0
+1: be,pt %xcc, normal_retl ! CTI
+ nop ! IEU1
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUB [%o1] ASINORMAL, %g2,
+ add %g0, 1) ! Load Group
+ ASI_SETDST_NOBLK ! LSU Group
+ EX(STB %g2, [%o0] ASINORMAL,
+ add %g0, 1) ! Store Group + bubble
+normal_retl:
+ NORMAL_RETL
+
+#ifdef REGS_64BIT
+82: MOVE_BIGALIGNCHUNK(o1, o0, 0x00, g1, g3, g5, o5)
+ MOVE_BIGALIGNCHUNK(o1, o0, 0x40, g1, g3, g5, o5)
+ EXT(82b,37f,VIScopyfixup3)
+37: subcc %g7, 128, %g7 ! IEU1 Group
+ add %o1, 128, %o1 ! IEU0
+ bne,pt %xcc, 82b ! CTI
+ add %o0, 128, %o0 ! IEU0 Group
+ andcc %o2, 0x70, %g7 ! IEU1
+ be,pn %xcc, 84f ! CTI
+ andcc %o2, 8, %g0 ! IEU1 Group
+#ifdef __KERNEL__
+83: srl %g7, 1, %g5 ! IEU0
+ sethi %hi(84f), %o5 ! IEU0 Group
+ add %g7, %g5, %g5 ! IEU1
+ add %o1, %g7, %o1 ! IEU0 Group
+ sub %o5, %g5, %o5 ! IEU1
+ jmpl %o5 + %lo(84f), %g0 ! CTI Group brk forced
+ add %o0, %g7, %o0 ! IEU0 Group
+#else
+ ! Clk1 8-(
+ ! Clk2 8-(
+ ! Clk3 8-(
+ ! Clk4 8-(
+83: rd %pc, %o5 ! PDU Group
+ add %o1, %g7, %o1 ! IEU0 Group
+ sub %o5, %g7, %o5 ! IEU1
+ jmpl %o5 + %lo(84f - 83b), %g0 ! CTI Group brk forced
+ add %o0, %g7, %o0 ! IEU0 Group
+#endif
+38: MOVE_LASTALIGNCHUNK(o1, o0, 0x60, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x50, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x40, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x30, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x20, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x10, g2, g3)
+ MOVE_LASTALIGNCHUNK(o1, o0, 0x00, g2, g3)
+ EXT(38b,84f,VIScopyfixup4)
+84: be,pt %xcc, 85f ! CTI Group
+ andcc %o2, 4, %g0 ! IEU1
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDX [%o1] ASINORMAL, %g2,
+ and %o2, 0xf) ! Load Group
+ add %o0, 8, %o0 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ add %o1, 8, %o1 ! IEU0 Group
+ EX(STX %g2, [%o0 - 0x8] ASINORMAL,
+ and %o2, 0xf) ! Store
+85: be,pt %xcc, 1f ! CTI
+ andcc %o2, 2, %g0 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUW [%o1] ASINORMAL, %g2,
+ and %o2, 0x7) ! Load Group
+ add %o0, 4, %o0 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ add %o1, 4, %o1 ! IEU0 Group
+ EX(STW %g2, [%o0 - 0x4] ASINORMAL,
+ and %o2, 0x7) ! Store
+1: be,pt %xcc, 1f ! CTI
+ andcc %o2, 1, %g0 ! IEU1 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUH [%o1] ASINORMAL, %g2,
+ and %o2, 0x3) ! Load Group
+ add %o0, 2, %o0 ! IEU0
+ ASI_SETDST_NOBLK ! LSU Group
+ add %o1, 2, %o1 ! IEU0 Group
+ EX(STH %g2, [%o0 - 0x2] ASINORMAL,
+ and %o2, 0x3) ! Store
+1: be,pt %xcc, 1f ! CTI
+ nop ! IEU0 Group
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUB [%o1] ASINORMAL, %g2,
+ add %g0, 1) ! Load Group
+ ASI_SETDST_NOBLK ! LSU Group
+ EX(STB %g2, [%o0] ASINORMAL,
+ add %g0, 1) ! Store Group + bubble
+1: NORMAL_RETL
+#endif /* REGS_64BIT */
+
+memcpy_noVIS_misaligned:
+ brz,pt %g2, 2f ! CTI Group
+ mov 8, %g1 ! IEU0
+ sub %g1, %g2, %g2 ! IEU0 Group
+ sub %o2, %g2, %o2 ! IEU0 Group
+1: ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDUB [%o1] ASINORMAL, %g5,
+ add %o2, %g2) ! Load Group
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %g2, 1, %g2 ! IEU1 Group
+ bne,pt %xcc, 1b ! CTI
+ EX2(STB %g5, [%o0 - 1] ASINORMAL,
+ add %o2, %g2, %o2,
+ add %o2, 1) ! Store
+2:
+#ifdef __KERNEL__
+ VISEntry
+#endif
+ andn %o2, 7, %g5 ! IEU0 Group
+ and %o2, 7, %o2 ! IEU1
+ fmovd %f0, %f2 ! FPU
+ ASI_SETSRC_NOBLK ! LSU Group
+ alignaddr %o1, %g0, %g1 ! GRU Group
+ EXO2(LDDF [%g1] ASINORMAL, %f4) ! Load Group
+1: EX(LDDF [%g1 + 0x8] ASINORMAL, %f6,
+ add %o2, %g5) ! Load Group
+ add %g1, 0x8, %g1 ! IEU0 Group
+ subcc %g5, 8, %g5 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f4, %f6, %f0 ! GRU Group
+ EX2(STDF %f0, [%o0] ASINORMAL,
+ add %o2, %g5, %o2,
+ add %o2, 8) ! Store
+ add %o1, 8, %o1 ! IEU0 Group
+ be,pn %xcc, end_cruft ! CTI
+ add %o0, 8, %o0 ! IEU1
+ ASI_SETSRC_NOBLK ! LSU Group
+ EX(LDDF [%g1 + 0x8] ASINORMAL, %f4,
+ add %o2, %g5) ! Load Group
+ add %g1, 8, %g1 ! IEU0
+ subcc %g5, 8, %g5 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ faligndata %f6, %f4, %f0 ! GRU Group
+ EX2(STDF %f0, [%o0] ASINORMAL,
+ add %o2, %g5, %o2,
+ add %o2, 8) ! Store
+ add %o1, 8, %o1 ! IEU0
+ ASI_SETSRC_NOBLK ! LSU Group
+ bne,pn %xcc, 1b ! CTI Group
+ add %o0, 8, %o0 ! IEU0
+end_cruft:
+ brz,pn %o2, fpu_retl ! CTI Group
+#ifndef __KERNEL__
+ nop ! IEU0
+#else
+ ASI_SETSRC_NOBLK ! LSU Group
+#endif
+ EXO2(LDUB [%o1] ASINORMAL, %g5) ! LOAD
+ add %o1, 1, %o1 ! IEU0
+ add %o0, 1, %o0 ! IEU1
+ ASI_SETDST_NOBLK ! LSU Group
+ subcc %o2, 1, %o2 ! IEU1
+ bne,pt %xcc, vis_slp ! CTI
+ EX(STB %g5, [%o0 - 1] ASINORMAL,
+ add %o2, 1) ! Store Group
+fpu_retl:
+ FPU_RETL
+
+#ifdef __KERNEL__
+ .section .fixup
+ .align 4
+VIScopyfixup_reto2:
+ mov %o2, %o1
+VIScopyfixup_ret:
+ /* If this is copy_from_user(), zero out the rest of the
+ * kernel buffer.
+ */
+ ldub [%g6 + TI_CURRENT_DS], %o4
+ andcc asi_src, 0x1, %g0
+ be,pt %icc, 1f
+ VISExit
+ andcc asi_dest, 0x1, %g0
+ bne,pn %icc, 1f
+ nop
+ save %sp, -160, %sp
+ mov %i0, %o0
+ call __bzero
+ mov %i1, %o1
+ restore
+1: mov %o1, %o0
+ retl
+ wr %o4, %g0, %asi
+VIScopyfixup1: subcc %g2, 18, %g2
+ add %o0, 32, %o0
+ bgeu,a,pt %icc, VIScopyfixup1
+ sub %g7, 32, %g7
+ sub %o0, 32, %o0
+ rd %pc, %g5
+ add %g2, (18 + 16), %g2
+ ldub [%g5 + %g2], %g2
+ ba,a,pt %xcc, 2f
+.byte 0, 0, 0, 0, 0, 0, 0, 4, 4, 8, 12, 12, 16, 20, 20, 24, 28, 28
+ .align 4
+VIScopyfixup2: mov (7 * 16), %g7
+1: subcc %g2, 10, %g2
+ bgeu,a,pt %icc, 1b
+ sub %g7, 16, %g7
+ sub %o0, %g7, %o0
+ rd %pc, %g5
+ add %g2, (10 + 16), %g2
+ ldub [%g5 + %g2], %g2
+ ba,a,pt %xcc, 4f
+.byte 0, 0, 0, 0, 0, 4, 4, 8, 12, 12
+ .align 4
+VIScopyfixup3: subcc %g2, 10, %g2
+ add %o0, 32, %o0
+ bgeu,a,pt %icc, VIScopyfixup3
+ sub %g7, 32, %g7
+ sub %o0, 32, %o0
+ rd %pc, %g5
+ add %g2, (10 + 16), %g2
+ ldub [%g5 + %g2], %g2
+ ba,a,pt %xcc, 2f
+.byte 0, 0, 0, 0, 0, 0, 0, 8, 16, 24
+ .align 4
+2: and %o2, 0x7f, %o2
+ sub %g7, %g2, %g7
+ ba,pt %xcc, VIScopyfixup_ret
+ add %g7, %o2, %o1
+VIScopyfixup4: mov (7 * 16), %g7
+3: subcc %g2, 6, %g2
+ bgeu,a,pt %icc, 3b
+ sub %g7, 16, %g7
+ sub %o0, %g7, %o0
+ rd %pc, %g5
+ add %g2, (6 + 16), %g2
+ ldub [%g5 + %g2], %g2
+ ba,a,pt %xcc, 4f
+.byte 0, 0, 0, 0, 0, 8
+ .align 4
+4: and %o2, 0xf, %o2
+ sub %g7, %g2, %g7
+ ba,pt %xcc, VIScopyfixup_ret
+ add %g7, %o2, %o1
+VIScopyfixup_vis2:
+ sub %o2, 0x40, %o2
+VIScopyfixup_vis0:
+ add %o2, 0x80, %o2
+VIScopyfixup_vis1:
+ add %g7, %g3, %g7
+ ba,pt %xcc, VIScopyfixup_ret
+ add %o2, %g7, %o1
+VIScopyfixup_vis4:
+ add %g3, 8, %g3
+VIScopyfixup_vis3:
+ add %g3, 8, %g3
+ ba,pt %xcc, VIScopyfixup_ret
+ add %o2, %g3, %o1
+#endif
+
+#ifdef __KERNEL__
+ .text
+ .align 32
+
+ .globl __memmove
+ .type __memmove,@function
+
+ .globl memmove
+ .type memmove,@function
+
+memmove:
+__memmove: cmp %o0, %o1
+ blu,pt %xcc, memcpy_private
+ sub %o0, %o1, %g5
+ add %o1, %o2, %g3
+ cmp %g3, %o0
+ bleu,pt %xcc, memcpy_private
+ add %o1, %o2, %g5
+ add %o0, %o2, %o5
+
+ sub %g5, 1, %o1
+ sub %o5, 1, %o0
+1: ldub [%o1], %g5
+ subcc %o2, 1, %o2
+ sub %o1, 1, %o1
+ stb %g5, [%o0]
+ bne,pt %icc, 1b
+ sub %o0, 1, %o0
+
+ retl
+ clr %o0
+#endif
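The __memmove entry above checks whether the destination lands inside the source region; if not, it branches to memcpy_private, and only the destructively overlapping case takes the backwards byte-at-a-time loop. The same decision can be sketched at C level (illustrative only, not part of the patch; `sketch_memmove` is a hypothetical name):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch of the overlap test used by __memmove above: copy backwards
 * only when dst falls inside [src, src + n), otherwise a plain forward
 * copy (memcpy) is safe. */
static void *sketch_memmove(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	if (d <= s || d >= s + n) {
		while (n--)		/* no destructive overlap: forward */
			*d++ = *s++;
	} else {
		d += n;			/* overlap: walk backwards */
		s += n;
		while (n--)
			*--d = *--s;
	}
	return dst;
}
```

The assembly makes the same two comparisons (`cmp %o0, %o1` and `cmp %g3, %o0` with `%g3 = %o1 + %o2`) before choosing a path.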
--- /dev/null
+/* $Id: rwlock.S,v 1.4 2000/09/09 00:00:34 davem Exp $
+ * rwlocks.S: These things are too big to do inline.
+ *
+ * Copyright (C) 1999 David S. Miller (davem@redhat.com)
+ */
+
+ .text
+ .align 64
+
+ /* The non-contention read lock usage is 2 cache lines. */
+
+ .globl __read_lock, __read_unlock
+__read_lock: /* %o0 = lock_ptr */
+ ldsw [%o0], %g5
+ brlz,pn %g5, __read_wait_for_writer
+4: add %g5, 1, %g7
+ cas [%o0], %g5, %g7
+ cmp %g5, %g7
+ bne,pn %icc, __read_lock
+ membar #StoreLoad | #StoreStore
+99: retl
+ nop
+__read_unlock: /* %o0 = lock_ptr */
+ lduw [%o0], %g5
+ sub %g5, 1, %g7
+ cas [%o0], %g5, %g7
+ cmp %g5, %g7
+ be,pt %xcc, 99b
+ membar #StoreLoad | #StoreStore
+ ba,a,pt %xcc, __read_unlock
+
+__read_wait_for_writer:
+ ldsw [%o0], %g5
+ brlz,pt %g5, __read_wait_for_writer
+ membar #LoadLoad
+ ba,a,pt %xcc, 4b
+__write_wait_for_any:
+ lduw [%o0], %g5
+ brnz,pt %g5, __write_wait_for_any
+ membar #LoadLoad
+ ba,a,pt %xcc, 4f
+
+ .align 64
+ .globl __write_unlock
+__write_unlock: /* %o0 = lock_ptr */
+ membar #LoadStore | #StoreStore
+ retl
+ stw %g0, [%o0]
+
+ .globl __write_lock
+__write_lock: /* %o0 = lock_ptr */
+ sethi %hi(0x80000000), %g2
+
+1: lduw [%o0], %g5
+ brnz,pn %g5, __write_wait_for_any
+4: or %g5, %g2, %g7
+ cas [%o0], %g5, %g7
+
+ cmp %g5, %g7
+ be,pt %icc, 99b
+ membar #StoreLoad | #StoreStore
+ ba,a,pt %xcc, 1b
+
+ .globl __write_trylock
+__write_trylock: /* %o0 = lock_ptr */
+ sethi %hi(0x80000000), %g2
+1: lduw [%o0], %g5
+ brnz,pn %g5, __write_trylock_fail
+4: or %g5, %g2, %g7
+
+ cas [%o0], %g5, %g7
+ cmp %g5, %g7
+ be,pt %icc, __write_trylock_succeed
+ membar #StoreLoad | #StoreStore
+
+ ba,pt %xcc, 1b
+ nop
+__write_trylock_succeed:
+ retl
+ mov 1, %o0
+
+__write_trylock_fail:
+ retl
+ mov 0, %o0
+
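__read_lock above spins on a compare-and-swap: a negative lock word means a writer holds the lock (the `sethi %hi(0x80000000)` bit), otherwise the reader count is incremented atomically and retried on CAS failure. The same shape in C, using GCC's __atomic builtins (a single-int-lock-word sketch, not the kernel's rwlock API):

```c
#include <assert.h>

/* Lock word: >= 0 means that many readers, negative means writer held,
 * mirroring the layout assumed by the assembly above. */
static int lock_word;

static void sketch_read_lock(int *lp)
{
	int old, new;

	do {
		/* wait_for_writer loop: spin on plain loads while negative */
		while ((old = __atomic_load_n(lp, __ATOMIC_RELAXED)) < 0)
			;
		new = old + 1;		/* one more reader */
	} while (!__atomic_compare_exchange_n(lp, &old, new, 0,
					      __ATOMIC_ACQUIRE,
					      __ATOMIC_RELAXED));
}

static void sketch_read_unlock(int *lp)
{
	__atomic_fetch_sub(lp, 1, __ATOMIC_RELEASE);
}
```

The `cas`/`cmp`/`bne` triple in the assembly is exactly this compare-exchange retry, with the `membar` instructions standing in for the acquire/release ordering.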
--- /dev/null
+/* splock.S: Spinlock primitives too large to inline.
+ *
+ * Copyright (C) 2004 David S. Miller (davem@redhat.com)
+ */
+
+ .text
+ .align 64
+
+ .globl _raw_spin_lock
+_raw_spin_lock: /* %o0 = lock_ptr */
+1: ldstub [%o0], %g7
+ brnz,pn %g7, 2f
+ membar #StoreLoad | #StoreStore
+ retl
+ nop
+2: ldub [%o0], %g7
+ brnz,pt %g7, 2b
+ membar #LoadLoad
+ ba,a,pt %xcc, 1b
+
+ .globl _raw_spin_lock_flags
+_raw_spin_lock_flags: /* %o0 = lock_ptr, %o1 = irq_flags */
+1: ldstub [%o0], %g7
+ brnz,pn %g7, 2f
+ membar #StoreLoad | #StoreStore
+ retl
+ nop
+
+2: rdpr %pil, %g2 ! Save PIL
+ wrpr %o1, %pil ! Set previous PIL
+3: ldub [%o0], %g7 ! Spin on lock set
+ brnz,pt %g7, 3b
+ membar #LoadLoad
+ ba,pt %xcc, 1b ! Retry lock acquire
+ wrpr %g2, %pil ! Restore PIL
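_raw_spin_lock above is the classic test-and-set loop: `ldstub` atomically stores 0xff and returns the old byte, a nonzero result means the lock was already held, and the waiter then spins on plain loads (`ldub`) until the byte clears before retrying the atomic. An equivalent C sketch with GCC builtins (illustrative, single byte lock as in the assembly):

```c
#include <assert.h>
#include <stdbool.h>

static unsigned char lock_byte;	/* 0 = free, nonzero = held, like ldstub */

static void sketch_spin_lock(unsigned char *lp)
{
	while (__atomic_test_and_set(lp, __ATOMIC_ACQUIRE)) {
		/* read-only spin, like the ldub loop at label 2: above,
		 * so the cache line is not bounced by failed atomics */
		while (__atomic_load_n(lp, __ATOMIC_RELAXED))
			;
	}
}

static void sketch_spin_unlock(unsigned char *lp)
{
	__atomic_clear(lp, __ATOMIC_RELEASE);
}

static bool sketch_spin_trylock(unsigned char *lp)
{
	return !__atomic_test_and_set(lp, __ATOMIC_ACQUIRE);
}
```

_raw_spin_lock_flags adds one refinement on top of this: while spinning it restores the caller's interrupt priority (`wrpr %o1, %pil`) so interrupts can be serviced, then raises it again just before retrying the acquire.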
$(ARCH_DIR)/kernel/skas/util: scripts_basic $(ARCH_DIR)/user-offsets.h FORCE
$(Q)$(MAKE) $(build)=$@
-$(ARCH_DIR)/os-$(OS)/util: scripts_basic $(ARCH_DIR)/user-offsets.h FORCE
+$(ARCH_DIR)/os-$(OS)/util: scripts_basic FORCE
$(Q)$(MAKE) $(build)=$@
export SUBARCH USER_CFLAGS OS
--- /dev/null
+/*
+ * Copyright (C) 2002 Steve Schmidtke
+ * Licensed under the GPL
+ */
+
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/ioctl.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <errno.h>
+#include "hostaudio.h"
+#include "user_util.h"
+#include "kern_util.h"
+#include "user.h"
+#include "os.h"
+
+/* /dev/dsp file operations */
+
+ssize_t hostaudio_read_user(struct hostaudio_state *state, char *buffer,
+ size_t count, loff_t *ppos)
+{
+ ssize_t ret;
+
+#ifdef DEBUG
+ printk("hostaudio: read_user called, count = %zu\n", count);
+#endif
+
+ ret = read(state->fd, buffer, count);
+
+ if(ret < 0) return(-errno);
+ return(ret);
+}
+
+ssize_t hostaudio_write_user(struct hostaudio_state *state, const char *buffer,
+ size_t count, loff_t *ppos)
+{
+ ssize_t ret;
+
+#ifdef DEBUG
+ printk("hostaudio: write_user called, count = %zu\n", count);
+#endif
+
+ ret = write(state->fd, buffer, count);
+
+ if(ret < 0) return(-errno);
+ return(ret);
+}
+
+int hostaudio_ioctl_user(struct hostaudio_state *state, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret;
+#ifdef DEBUG
+ printk("hostaudio: ioctl_user called, cmd = %u\n", cmd);
+#endif
+
+ ret = ioctl(state->fd, cmd, arg);
+
+ if(ret < 0) return(-errno);
+ return(ret);
+}
+
+int hostaudio_open_user(struct hostaudio_state *state, int r, int w, char *dsp)
+{
+#ifdef DEBUG
+ printk("hostaudio: open_user called\n");
+#endif
+
+ state->fd = os_open_file(dsp, of_set_rw(OPENFLAGS(), r, w), 0);
+
+ if(state->fd >= 0) return(0);
+
+ printk("hostaudio_open_user failed to open '%s', errno = %d\n",
+ dsp, errno);
+
+ return(-errno);
+}
+
+int hostaudio_release_user(struct hostaudio_state *state)
+{
+#ifdef DEBUG
+ printk("hostaudio: release called\n");
+#endif
+ if(state->fd >= 0){
+ close(state->fd);
+ state->fd=-1;
+ }
+
+ return(0);
+}
+
+/* /dev/mixer file operations */
+
+int hostmixer_ioctl_mixdev_user(struct hostmixer_state *state,
+ unsigned int cmd, unsigned long arg)
+{
+ int ret;
+#ifdef DEBUG
+ printk("hostmixer: ioctl_user called cmd = %u\n",cmd);
+#endif
+
+ ret = ioctl(state->fd, cmd, arg);
+ if(ret < 0)
+ return(-errno);
+ return(ret);
+}
+
+int hostmixer_open_mixdev_user(struct hostmixer_state *state, int r, int w,
+ char *mixer)
+{
+#ifdef DEBUG
+ printk("hostmixer: open_user called\n");
+#endif
+
+ state->fd = os_open_file(mixer, of_set_rw(OPENFLAGS(), r, w), 0);
+
+ if(state->fd >= 0) return(0);
+
+ printk("hostaudio_open_mixdev_user failed to open '%s', errno = %d\n",
+ mixer, errno);
+
+ return(-errno);
+}
+
+int hostmixer_release_mixdev_user(struct hostmixer_state *state)
+{
+#ifdef DEBUG
+ printk("hostmixer: release_user called\n");
+#endif
+
+ if(state->fd >= 0){
+ close(state->fd);
+ state->fd = -1;
+ }
+
+ return 0;
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
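All of the hostaudio wrappers above follow one convention: perform the host syscall, and on failure return -errno so the kernel-side caller sees a standard negative error code rather than the host's errno variable. A minimal standalone sketch of that pattern (`sketch_host_read` is a hypothetical name, not from the patch):

```c
#include <assert.h>
#include <errno.h>
#include <unistd.h>

/* Same shape as hostaudio_read_user above: host syscall result passed
 * through on success, -errno on failure. */
static long sketch_host_read(int fd, void *buf, unsigned long count)
{
	long ret = read(fd, buf, count);

	if (ret < 0)
		return -errno;
	return ret;
}
```

This matters because errno is a per-process (host) detail; folding it into the return value is what lets the kernel side of the driver hand the result straight back to its own callers.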
--- /dev/null
+OUTPUT_FORMAT(ELF_FORMAT)
+OUTPUT_ARCH(ELF_ARCH)
+ENTRY(_start)
+jiffies = jiffies_64;
+
+SEARCH_DIR("/usr/local/i686-pc-linux-gnu/lib"); SEARCH_DIR("/usr/local/lib"); SEARCH_DIR("/lib"); SEARCH_DIR("/usr/lib");
+/* Do we need any of these for elf?
+ __DYNAMIC = 0; */
+SECTIONS
+{
+ . = START + SIZEOF_HEADERS;
+ .interp : { *(.interp) }
+ . = ALIGN(4096);
+ __binary_start = .;
+ . = ALIGN(4096); /* Init code and data */
+ _stext = .;
+ __init_begin = .;
+ .text.init : { *(.text.init) }
+
+ . = ALIGN(4096);
+
+ /* Read-only sections, merged into text segment: */
+ .hash : { *(.hash) }
+ .dynsym : { *(.dynsym) }
+ .dynstr : { *(.dynstr) }
+ .gnu.version : { *(.gnu.version) }
+ .gnu.version_d : { *(.gnu.version_d) }
+ .gnu.version_r : { *(.gnu.version_r) }
+ .rel.init : { *(.rel.init) }
+ .rela.init : { *(.rela.init) }
+ .rel.text : { *(.rel.text .rel.text.* .rel.gnu.linkonce.t.*) }
+ .rela.text : { *(.rela.text .rela.text.* .rela.gnu.linkonce.t.*) }
+ .rel.fini : { *(.rel.fini) }
+ .rela.fini : { *(.rela.fini) }
+ .rel.rodata : { *(.rel.rodata .rel.rodata.* .rel.gnu.linkonce.r.*) }
+ .rela.rodata : { *(.rela.rodata .rela.rodata.* .rela.gnu.linkonce.r.*) }
+ .rel.data : { *(.rel.data .rel.data.* .rel.gnu.linkonce.d.*) }
+ .rela.data : { *(.rela.data .rela.data.* .rela.gnu.linkonce.d.*) }
+ .rel.tdata : { *(.rel.tdata .rel.tdata.* .rel.gnu.linkonce.td.*) }
+ .rela.tdata : { *(.rela.tdata .rela.tdata.* .rela.gnu.linkonce.td.*) }
+ .rel.tbss : { *(.rel.tbss .rel.tbss.* .rel.gnu.linkonce.tb.*) }
+ .rela.tbss : { *(.rela.tbss .rela.tbss.* .rela.gnu.linkonce.tb.*) }
+ .rel.ctors : { *(.rel.ctors) }
+ .rela.ctors : { *(.rela.ctors) }
+ .rel.dtors : { *(.rel.dtors) }
+ .rela.dtors : { *(.rela.dtors) }
+ .rel.got : { *(.rel.got) }
+ .rela.got : { *(.rela.got) }
+ .rel.bss : { *(.rel.bss .rel.bss.* .rel.gnu.linkonce.b.*) }
+ .rela.bss : { *(.rela.bss .rela.bss.* .rela.gnu.linkonce.b.*) }
+ .rel.plt : { *(.rel.plt) }
+ .rela.plt : { *(.rela.plt) }
+ .init : {
+ KEEP (*(.init))
+ } =0x90909090
+ .plt : { *(.plt) }
+ .text : {
+ *(.text .stub .text.* .gnu.linkonce.t.*)
+ /* .gnu.warning sections are handled specially by elf32.em. */
+ *(.gnu.warning)
+ } =0x90909090
+ .fini : {
+ KEEP (*(.fini))
+ } =0x90909090
+
+ .kstrtab : { *(.kstrtab) }
+
+ #include "asm/common.lds.S"
+
+ .data.init : { *(.data.init) }
+
+ /* Ensure the __preinit_array_start label is properly aligned. We
+ could instead move the label definition inside the section, but
+ the linker would then create the section even if it turns out to
+ be empty, which isn't pretty. */
+ . = ALIGN(32 / 8);
+ .preinit_array : { *(.preinit_array) }
+ .init_array : { *(.init_array) }
+ .fini_array : { *(.fini_array) }
+ .data : {
+ . = ALIGN(KERNEL_STACK_SIZE); /* init_task */
+ *(.data.init_task)
+ *(.data .data.* .gnu.linkonce.d.*)
+ SORT(CONSTRUCTORS)
+ }
+ .data1 : { *(.data1) }
+ .tdata : { *(.tdata .tdata.* .gnu.linkonce.td.*) }
+ .tbss : { *(.tbss .tbss.* .gnu.linkonce.tb.*) *(.tcommon) }
+ .eh_frame : { KEEP (*(.eh_frame)) }
+ .gcc_except_table : { *(.gcc_except_table) }
+ .dynamic : { *(.dynamic) }
+ .ctors : {
+ /* gcc uses crtbegin.o to find the start of
+ the constructors, so we make sure it is
+ first. Because this is a wildcard, it
+ doesn't matter if the user does not
+ actually link against crtbegin.o; the
+ linker won't look for a file to match a
+ wildcard. The wildcard also means that it
+ doesn't matter which directory crtbegin.o
+ is in. */
+ KEEP (*crtbegin.o(.ctors))
+ /* We don't want to include the .ctor section from
+ from the crtend.o file until after the sorted ctors.
+ The .ctor section from the crtend file contains the
+ end of ctors marker and it must be last */
+ KEEP (*(EXCLUDE_FILE (*crtend.o ) .ctors))
+ KEEP (*(SORT(.ctors.*)))
+ KEEP (*(.ctors))
+ }
+ .dtors : {
+ KEEP (*crtbegin.o(.dtors))
+ KEEP (*(EXCLUDE_FILE (*crtend.o ) .dtors))
+ KEEP (*(SORT(.dtors.*)))
+ KEEP (*(.dtors))
+ }
+ .jcr : { KEEP (*(.jcr)) }
+ .got : { *(.got.plt) *(.got) }
+ _edata = .;
+ PROVIDE (edata = .);
+ __bss_start = .;
+ .bss : {
+ *(.dynbss)
+ *(.bss .bss.* .gnu.linkonce.b.*)
+ *(COMMON)
+ /* Align here to ensure that the .bss section occupies space up to
+ _end. Align after .bss to ensure correct alignment even if the
+ .bss section disappears because there are no input sections. */
+ . = ALIGN(32 / 8);
+ . = ALIGN(32 / 8);
+ }
+ _end = .;
+ PROVIDE (end = .);
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ .comment 0 : { *(.comment) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging sections are relative to the beginning
+ of the section so we begin them at 0. */
+ /* DWARF 1 */
+ .debug 0 : { *(.debug) }
+ .line 0 : { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : { *(.debug_srcinfo) }
+ .debug_sfnames 0 : { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : { *(.debug_aranges) }
+ .debug_pubnames 0 : { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : { *(.debug_info .gnu.linkonce.wi.*) }
+ .debug_abbrev 0 : { *(.debug_abbrev) }
+ .debug_line 0 : { *(.debug_line) }
+ .debug_frame 0 : { *(.debug_frame) }
+ .debug_str 0 : { *(.debug_str) }
+ .debug_loc 0 : { *(.debug_loc) }
+ .debug_macinfo 0 : { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : { *(.debug_weaknames) }
+ .debug_funcnames 0 : { *(.debug_funcnames) }
+ .debug_typenames 0 : { *(.debug_typenames) }
+ .debug_varnames 0 : { *(.debug_varnames) }
+}
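The script above defines the usual layout symbols (_stext, __bss_start, _edata, _end) that C code can reference directly to find section boundaries. For illustration, a hosted Linux program can inspect the analogous symbols that the default linker script provides (etext/edata/end are the historically documented names; the UML-specific double-underscore names above are particular to this script):

```c
#include <assert.h>
#include <stdint.h>

/* Classic ELF layout symbols: their *addresses* mark the end of text,
 * of initialized data, and of bss respectively. */
extern char etext, edata, end;

static int layout_is_ordered(void)
{
	return (uintptr_t)&etext <= (uintptr_t)&edata &&
	       (uintptr_t)&edata <= (uintptr_t)&end;
}
```

Kernel code uses the script's symbols the same way, e.g. zeroing `__bss_start` up to `_end` at boot.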
--- /dev/null
+all : sc.h
+
+sc.h : ../util/mk_sc
+ ../util/mk_sc > $@
+
+../util/mk_sc :
+ $(MAKE) -C ../util mk_sc
--- /dev/null
+/*
+ * Copyright (C) 2002 Steve Schmidtke
+ * Licensed under the GPL
+ */
+
+#ifndef HOSTAUDIO_H
+#define HOSTAUDIO_H
+
+#define HOSTAUDIO_DEV_DSP "/dev/sound/dsp"
+#define HOSTAUDIO_DEV_MIXER "/dev/sound/mixer"
+
+struct hostaudio_state {
+ int fd;
+};
+
+struct hostmixer_state {
+ int fd;
+};
+
+/* UML user-side prototypes */
+extern ssize_t hostaudio_read_user(struct hostaudio_state *state, char *buffer,
+ size_t count, loff_t *ppos);
+extern ssize_t hostaudio_write_user(struct hostaudio_state *state,
+ const char *buffer, size_t count,
+ loff_t *ppos);
+extern int hostaudio_ioctl_user(struct hostaudio_state *state,
+ unsigned int cmd, unsigned long arg);
+extern int hostaudio_open_user(struct hostaudio_state *state, int r, int w,
+ char *dsp);
+extern int hostaudio_release_user(struct hostaudio_state *state);
+extern int hostmixer_ioctl_mixdev_user(struct hostmixer_state *state,
+ unsigned int cmd, unsigned long arg);
+extern int hostmixer_open_mixdev_user(struct hostmixer_state *state, int r,
+ int w, char *mixer);
+extern int hostmixer_release_mixdev_user(struct hostmixer_state *state);
+
+#endif /* HOSTAUDIO_H */
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#ifndef __MPROT_H__
+#define __MPROT_H__
+
+extern void no_access(unsigned long addr, unsigned int len);
+
+#endif
EXPORT_SYMBOL(physmem_remove_mapping);
EXPORT_SYMBOL(physmem_subst_mapping);
-int arch_free_page(struct page *page, int order)
+void arch_free_page(struct page *page, int order)
{
void *virt;
int i;
virt = __va(page_to_phys(page + i));
physmem_remove_mapping(virt);
}
-
- return 0;
}
int is_remapped(void *virt)
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include <stdlib.h>
+#include <errno.h>
+#include <signal.h>
+#include <sched.h>
+#include <sys/wait.h>
+#include <sys/ptrace.h>
+#include "user.h"
+#include "kern_util.h"
+#include "user_util.h"
+#include "os.h"
+#include "time_user.h"
+
+static int user_thread_tramp(void *arg)
+{
+ if(ptrace(PTRACE_TRACEME, 0, 0, 0) < 0)
+ panic("user_thread_tramp - PTRACE_TRACEME failed, "
+ "errno = %d\n", errno);
+ enable_timer();
+ os_stop_process(os_getpid());
+ return(0);
+}
+
+int user_thread(unsigned long stack, int flags)
+{
+ int pid, status, err;
+
+ pid = clone(user_thread_tramp, (void *) stack_sp(stack),
+ flags | CLONE_FILES | SIGCHLD, NULL);
+ if(pid < 0){
+ printk("user_thread - clone failed, errno = %d\n", errno);
+ return(pid);
+ }
+
+ CATCH_EINTR(err = waitpid(pid, &status, WUNTRACED));
+ if(err < 0){
+ printk("user_thread - waitpid failed, errno = %d\n", errno);
+ return(-errno);
+ }
+
+ if(!WIFSTOPPED(status) || (WSTOPSIG(status) != SIGSTOP)){
+ printk("user_thread - trampoline didn't stop, status = %d\n",
+ status);
+ return(-EINVAL);
+ }
+
+ return(pid);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __SKAS_MMU_H
+#define __SKAS_MMU_H
+
+#include "linux/list.h"
+#include "linux/spinlock.h"
+
+struct mmu_context_skas {
+ int mm_fd;
+};
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __MODE_SKAS_H__
+#define __MODE_SKAS_H__
+
+extern unsigned long exec_regs[];
+extern unsigned long exec_fp_regs[];
+extern unsigned long exec_fpx_regs[];
+extern int have_fpx_regs;
+
+extern void user_time_init_skas(void);
+extern int copy_sc_from_user_skas(int pid, union uml_pt_regs *regs,
+ void *from_ptr);
+extern int copy_sc_to_user_skas(int pid, void *to_ptr, void *fp,
+ union uml_pt_regs *regs,
+ unsigned long fault_addr, int fault_type);
+extern void sig_handler_common_skas(int sig, void *sc_ptr);
+extern void halt_skas(void);
+extern void reboot_skas(void);
+extern void kill_off_processes_skas(void);
+extern int is_skas_winch(int pid, int fd, void *data);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __SKAS_MODE_KERN_H__
+#define __SKAS_MODE_KERN_H__
+
+#include "linux/sched.h"
+#include "asm/page.h"
+#include "asm/ptrace.h"
+
+extern void flush_thread_skas(void);
+extern void *switch_to_skas(void *prev, void *next);
+extern void start_thread_skas(struct pt_regs *regs, unsigned long eip,
+ unsigned long esp);
+extern int copy_thread_skas(int nr, unsigned long clone_flags,
+ unsigned long sp, unsigned long stack_top,
+ struct task_struct *p, struct pt_regs *regs);
+extern void release_thread_skas(struct task_struct *task);
+extern void exit_thread_skas(void);
+extern void initial_thread_cb_skas(void (*proc)(void *), void *arg);
+extern void init_idle_skas(void);
+extern void flush_tlb_kernel_range_skas(unsigned long start,
+ unsigned long end);
+extern void flush_tlb_kernel_vm_skas(void);
+extern void __flush_tlb_one_skas(unsigned long addr);
+extern void flush_tlb_range_skas(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
+extern void flush_tlb_mm_skas(struct mm_struct *mm);
+extern void force_flush_all_skas(void);
+extern long execute_syscall_skas(void *r);
+extern void before_mem_skas(unsigned long unused);
+extern unsigned long set_task_sizes_skas(int arg, unsigned long *host_size_out,
+ unsigned long *task_size_out);
+extern int start_uml_skas(void);
+extern int external_pid_skas(struct task_struct *task);
+extern int thread_pid_skas(struct task_struct *task);
+
+#define kmem_end_skas (host_task_size - 1024 * 1024)
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __SKAS_UACCESS_H
+#define __SKAS_UACCESS_H
+
+#include "asm/errno.h"
+
+#define access_ok_skas(type, addr, size) \
+ ((segment_eq(get_fs(), KERNEL_DS)) || \
+ (((unsigned long) (addr) < TASK_SIZE) && \
+ ((unsigned long) (addr) + (size) <= TASK_SIZE)))
+
+static inline int verify_area_skas(int type, const void * addr,
+ unsigned long size)
+{
+ return(access_ok_skas(type, addr, size) ? 0 : -EFAULT);
+}
+
+extern int copy_from_user_skas(void *to, const void *from, int n);
+extern int copy_to_user_skas(void *to, const void *from, int n);
+extern int strncpy_from_user_skas(char *dst, const char *src, int count);
+extern int __clear_user_skas(void *mem, int len);
+extern int clear_user_skas(void *mem, int len);
+extern int strnlen_user_skas(const void *str, int len);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __TT_MMU_H
+#define __TT_MMU_H
+
+struct mmu_context_tt {
+};
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __MODE_TT_H__
+#define __MODE_TT_H__
+
+#include "sysdep/ptrace.h"
+
+enum { OP_NONE, OP_EXEC, OP_FORK, OP_TRACE_ON, OP_REBOOT, OP_HALT, OP_CB };
+
+extern int tracing_pid;
+
+extern int tracer(int (*init_proc)(void *), void *sp);
+extern void user_time_init_tt(void);
+extern int copy_sc_from_user_tt(void *to_ptr, void *from_ptr, void *data);
+extern int copy_sc_to_user_tt(void *to_ptr, void *fp, void *from_ptr,
+ void *data);
+extern void sig_handler_common_tt(int sig, void *sc);
+extern void syscall_handler_tt(int sig, union uml_pt_regs *regs);
+extern void reboot_tt(void);
+extern void halt_tt(void);
+extern int is_tracer_winch(int pid, int fd, void *data);
+extern void kill_off_processes_tt(void);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __TT_MODE_KERN_H__
+#define __TT_MODE_KERN_H__
+
+#include "linux/sched.h"
+#include "asm/page.h"
+#include "asm/ptrace.h"
+#include "asm/uaccess.h"
+
+extern void *switch_to_tt(void *prev, void *next);
+extern void flush_thread_tt(void);
+extern void start_thread_tt(struct pt_regs *regs, unsigned long eip,
+ unsigned long esp);
+extern int copy_thread_tt(int nr, unsigned long clone_flags, unsigned long sp,
+ unsigned long stack_top, struct task_struct *p,
+ struct pt_regs *regs);
+extern void release_thread_tt(struct task_struct *task);
+extern void exit_thread_tt(void);
+extern void initial_thread_cb_tt(void (*proc)(void *), void *arg);
+extern void init_idle_tt(void);
+extern void flush_tlb_kernel_range_tt(unsigned long start, unsigned long end);
+extern void flush_tlb_kernel_vm_tt(void);
+extern void __flush_tlb_one_tt(unsigned long addr);
+extern void flush_tlb_range_tt(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end);
+extern void flush_tlb_mm_tt(struct mm_struct *mm);
+extern void force_flush_all_tt(void);
+extern long execute_syscall_tt(void *r);
+extern void before_mem_tt(unsigned long brk_start);
+extern unsigned long set_task_sizes_tt(int arg, unsigned long *host_size_out,
+ unsigned long *task_size_out);
+extern int start_uml_tt(void);
+extern int external_pid_tt(struct task_struct *task);
+extern int thread_pid_tt(struct task_struct *task);
+
+#define kmem_end_tt (host_task_size - ABOVE_KMEM)
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * Copyright (C) 2000 - 2003 Jeff Dike (jdike@addtoit.com)
+ * Licensed under the GPL
+ */
+
+#ifndef __TT_UACCESS_H
+#define __TT_UACCESS_H
+
+#include "linux/string.h"
+#include "linux/sched.h"
+#include "asm/processor.h"
+#include "asm/errno.h"
+#include "asm/current.h"
+#include "asm/a.out.h"
+#include "uml_uaccess.h"
+
+#define ABOVE_KMEM (16 * 1024 * 1024)
+
+extern unsigned long end_vm;
+extern unsigned long uml_physmem;
+
+#define under_task_size(addr, size) \
+ (((unsigned long) (addr) < TASK_SIZE) && \
+ (((unsigned long) (addr) + (size)) < TASK_SIZE))
+
+#define is_stack(addr, size) \
+ (((unsigned long) (addr) < STACK_TOP) && \
+ ((unsigned long) (addr) >= STACK_TOP - ABOVE_KMEM) && \
+ (((unsigned long) (addr) + (size)) <= STACK_TOP))
+
+#define access_ok_tt(type, addr, size) \
+ ((type == VERIFY_READ) || (segment_eq(get_fs(), KERNEL_DS)) || \
+ (((unsigned long) (addr) <= ((unsigned long) (addr) + (size))) && \
+ (under_task_size(addr, size) || is_stack(addr, size))))
+
+static inline int verify_area_tt(int type, const void * addr,
+ unsigned long size)
+{
+ return(access_ok_tt(type, addr, size) ? 0 : -EFAULT);
+}
+
+extern unsigned long get_fault_addr(void);
+
+extern int __do_copy_from_user(void *to, const void *from, int n,
+ void **fault_addr, void **fault_catcher);
+extern int __do_strncpy_from_user(char *dst, const char *src, size_t n,
+ void **fault_addr, void **fault_catcher);
+extern int __do_clear_user(void *mem, size_t len, void **fault_addr,
+ void **fault_catcher);
+extern int __do_strnlen_user(const char *str, unsigned long n,
+ void **fault_addr, void **fault_catcher);
+
+extern int copy_from_user_tt(void *to, const void *from, int n);
+extern int copy_to_user_tt(void *to, const void *from, int n);
+extern int strncpy_from_user_tt(char *dst, const char *src, int count);
+extern int __clear_user_tt(void *mem, int len);
+extern int clear_user_tt(void *mem, int len);
+extern int strnlen_user_tt(const void *str, int len);
+
+#endif
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+#include <stdio.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <dirent.h>
+#include <errno.h>
+#include <utime.h>
+#include <string.h>
+#include <sys/stat.h>
+#include <sys/vfs.h>
+#include <sys/ioctl.h>
+#include "user_util.h"
+#include "mem_user.h"
+#include "uml-config.h"
+
+/* Had to steal this from linux/module.h because that file can't be included
+ * since this includes various user-level headers.
+ */
+
+struct module_symbol
+{
+ unsigned long value;
+ const char *name;
+};
+
+/* Indirect stringification. */
+
+#define __MODULE_STRING_1(x) #x
+#define __MODULE_STRING(x) __MODULE_STRING_1(x)
+
+#if !defined(__AUTOCONF_INCLUDED__)
+
+#define __EXPORT_SYMBOL(sym,str) error config_must_be_included_before_module
+#define EXPORT_SYMBOL(var) error config_must_be_included_before_module
+#define EXPORT_SYMBOL_NOVERS(var) error config_must_be_included_before_module
+
+#elif !defined(UML_CONFIG_MODULES)
+
+#define __EXPORT_SYMBOL(sym,str)
+#define EXPORT_SYMBOL(var)
+#define EXPORT_SYMBOL_NOVERS(var)
+
+#else
+
+#define __EXPORT_SYMBOL(sym, str) \
+const char __kstrtab_##sym[] \
+__attribute__((section(".kstrtab"))) = str; \
+const struct module_symbol __ksymtab_##sym \
+__attribute__((section("__ksymtab"))) = \
+{ (unsigned long)&sym, __kstrtab_##sym }
+
+#if defined(__MODVERSIONS__) || !defined(UML_CONFIG_MODVERSIONS)
+#define EXPORT_SYMBOL(var) __EXPORT_SYMBOL(var, __MODULE_STRING(var))
+#else
+#define EXPORT_SYMBOL(var) __EXPORT_SYMBOL(var, __MODULE_STRING(__VERSIONED_SYMBOL(var)))
+#endif
+
+#define EXPORT_SYMBOL_NOVERS(var) __EXPORT_SYMBOL(var, __MODULE_STRING(var))
+
+#endif
+
+EXPORT_SYMBOL(__errno_location);
+
+EXPORT_SYMBOL(access);
+EXPORT_SYMBOL(open);
+EXPORT_SYMBOL(open64);
+EXPORT_SYMBOL(close);
+EXPORT_SYMBOL(read);
+EXPORT_SYMBOL(write);
+EXPORT_SYMBOL(dup2);
+EXPORT_SYMBOL(__xstat);
+EXPORT_SYMBOL(__lxstat);
+EXPORT_SYMBOL(__lxstat64);
+EXPORT_SYMBOL(lseek);
+EXPORT_SYMBOL(lseek64);
+EXPORT_SYMBOL(chown);
+EXPORT_SYMBOL(truncate);
+EXPORT_SYMBOL(utime);
+EXPORT_SYMBOL(chmod);
+EXPORT_SYMBOL(rename);
+EXPORT_SYMBOL(__xmknod);
+
+EXPORT_SYMBOL(symlink);
+EXPORT_SYMBOL(link);
+EXPORT_SYMBOL(unlink);
+EXPORT_SYMBOL(readlink);
+
+EXPORT_SYMBOL(mkdir);
+EXPORT_SYMBOL(rmdir);
+EXPORT_SYMBOL(opendir);
+EXPORT_SYMBOL(readdir);
+EXPORT_SYMBOL(closedir);
+EXPORT_SYMBOL(seekdir);
+EXPORT_SYMBOL(telldir);
+
+EXPORT_SYMBOL(ioctl);
+
+extern ssize_t pread64 (int __fd, void *__buf, size_t __nbytes,
+ __off64_t __offset);
+extern ssize_t pwrite64 (int __fd, __const void *__buf, size_t __n,
+ __off64_t __offset);
+EXPORT_SYMBOL(pread64);
+EXPORT_SYMBOL(pwrite64);
+
+EXPORT_SYMBOL(statfs);
+EXPORT_SYMBOL(statfs64);
+
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(getuid);
+
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(strstr);
+
+EXPORT_SYMBOL(find_iomem);
--- /dev/null
+/*
+ * Copyright (C) 2000, 2001 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include <unistd.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <signal.h>
+#include <sys/resource.h>
+#include <sys/mman.h>
+#include <sys/user.h>
+#include <asm/page.h>
+#include "user_util.h"
+#include "kern_util.h"
+#include "mem_user.h"
+#include "signal_user.h"
+#include "user.h"
+#include "init.h"
+#include "mode.h"
+#include "choose-mode.h"
+#include "uml-config.h"
+
+/* Set in set_stklim, which is called from main and __wrap_malloc.
+ * __wrap_malloc only calls it if main hasn't started.
+ */
+unsigned long stacksizelim;
+
+/* Set in main */
+char *linux_prog;
+
+#define PGD_BOUND (4 * 1024 * 1024)
+#define STACKSIZE (8 * 1024 * 1024)
+#define THREAD_NAME_LEN (256)
+
+static void set_stklim(void)
+{
+ struct rlimit lim;
+
+ if(getrlimit(RLIMIT_STACK, &lim) < 0){
+ perror("getrlimit");
+ exit(1);
+ }
+ if((lim.rlim_cur == RLIM_INFINITY) || (lim.rlim_cur > STACKSIZE)){
+ lim.rlim_cur = STACKSIZE;
+ if(setrlimit(RLIMIT_STACK, &lim) < 0){
+ perror("setrlimit");
+ exit(1);
+ }
+ }
+ stacksizelim = (lim.rlim_cur + PGD_BOUND - 1) & ~(PGD_BOUND - 1);
+}
+
+static __init void do_uml_initcalls(void)
+{
+ initcall_t *call;
+
+ call = &__uml_initcall_start;
+	while (call < &__uml_initcall_end){
+ (*call)();
+ call++;
+ }
+}
+
+static void last_ditch_exit(int sig)
+{
+ CHOOSE_MODE(kmalloc_ok = 0, (void) 0);
+ signal(SIGINT, SIG_DFL);
+ signal(SIGTERM, SIG_DFL);
+ signal(SIGHUP, SIG_DFL);
+ uml_cleanup();
+ exit(1);
+}
+
+extern int uml_exitcode;
+
+int main(int argc, char **argv, char **envp)
+{
+ char **new_argv;
+ sigset_t mask;
+ int ret, i;
+
+ /* Enable all signals except SIGIO - in some environments, we can
+ * enter with some signals blocked
+ */
+
+ sigemptyset(&mask);
+ sigaddset(&mask, SIGIO);
+ if(sigprocmask(SIG_SETMASK, &mask, NULL) < 0){
+ perror("sigprocmask");
+ exit(1);
+ }
+
+#ifdef UML_CONFIG_MODE_TT
+ /* Allocate memory for thread command lines */
+ if(argc < 2 || strlen(argv[1]) < THREAD_NAME_LEN - 1){
+
+ char padding[THREAD_NAME_LEN] = {
+ [ 0 ... THREAD_NAME_LEN - 2] = ' ', '\0'
+ };
+
+ new_argv = malloc((argc + 2) * sizeof(char*));
+ if(!new_argv) {
+ perror("Allocating extended argv");
+ exit(1);
+ }
+
+ new_argv[0] = argv[0];
+ new_argv[1] = padding;
+
+ for(i = 2; i <= argc; i++)
+ new_argv[i] = argv[i - 1];
+ new_argv[argc + 1] = NULL;
+
+ execvp(new_argv[0], new_argv);
+ perror("execing with extended args");
+ exit(1);
+ }
+#endif
+
+ linux_prog = argv[0];
+
+ set_stklim();
+
+ if((new_argv = malloc((argc + 1) * sizeof(char *))) == NULL){
+ perror("Mallocing argv");
+ exit(1);
+ }
+ for(i=0;i<argc;i++){
+ if((new_argv[i] = strdup(argv[i])) == NULL){
+ perror("Mallocing an arg");
+ exit(1);
+ }
+ }
+ new_argv[argc] = NULL;
+
+ set_handler(SIGINT, last_ditch_exit, SA_ONESHOT | SA_NODEFER, -1);
+ set_handler(SIGTERM, last_ditch_exit, SA_ONESHOT | SA_NODEFER, -1);
+ set_handler(SIGHUP, last_ditch_exit, SA_ONESHOT | SA_NODEFER, -1);
+
+ do_uml_initcalls();
+ ret = linux_main(argc, argv);
+
+ /* Reboot */
+ if(ret){
+ printf("\n");
+ execvp(new_argv[0], new_argv);
+ perror("Failed to exec kernel");
+ ret = 1;
+ }
+ printf("\n");
+ return(uml_exitcode);
+}
+
+#define CAN_KMALLOC() \
+ (kmalloc_ok && CHOOSE_MODE((getpid() != tracing_pid), 1))
+
+extern void *__real_malloc(int);
+
+void *__wrap_malloc(int size)
+{
+ if(CAN_KMALLOC())
+ return(um_kmalloc(size));
+ else
+ return(__real_malloc(size));
+}
+
+void *__wrap_calloc(int n, int size)
+{
+ void *ptr = __wrap_malloc(n * size);
+
+ if(ptr == NULL) return(NULL);
+ memset(ptr, 0, n * size);
+ return(ptr);
+}
+
+extern void __real_free(void *);
+
+void __wrap_free(void *ptr)
+{
+ if(CAN_KMALLOC()) kfree(ptr);
+ else __real_free(ptr);
+}
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
--- /dev/null
+/*
+ * linux/arch/i386/mm/extable.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/spinlock.h>
+#include <asm/uaccess.h>
+
+/* Simple binary search */
+const struct exception_table_entry *
+search_extable(const struct exception_table_entry *first,
+ const struct exception_table_entry *last,
+ unsigned long value)
+{
+ while (first <= last) {
+ const struct exception_table_entry *mid;
+ long diff;
+
+ mid = (last - first) / 2 + first;
+ diff = mid->insn - value;
+ if (diff == 0)
+ return mid;
+ else if (diff < 0)
+ first = mid+1;
+ else
+ last = mid-1;
+ }
+ return NULL;
+}
--- /dev/null
+#include "linux/config.h"
+#include "linux/stddef.h"
+#include "linux/sched.h"
+
+extern void print_head(void);
+extern void print_constant_ptr(char *name, int value);
+extern void print_constant(char *name, char *type, int value);
+extern void print_tail(void);
+
+#define THREAD_OFFSET(field) offsetof(struct task_struct, thread.field)
+
+int main(int argc, char **argv)
+{
+ print_head();
+#ifdef CONFIG_MODE_TT
+ print_constant("TASK_EXTERN_PID", "int", THREAD_OFFSET(mode.tt.extern_pid));
+#endif
+ print_tail();
+ return(0);
+}
+
--- /dev/null
+#include <stdio.h>
+
+void print_head(void)
+{
+ printf("/*\n");
+ printf(" * Generated by mk_thread\n");
+ printf(" */\n");
+ printf("\n");
+ printf("#ifndef __UM_THREAD_H\n");
+ printf("#define __UM_THREAD_H\n");
+ printf("\n");
+}
+
+void print_constant_ptr(char *name, int value)
+{
+ printf("#define %s(task) ((unsigned long *) "
+ "&(((char *) (task))[%d]))\n", name, value);
+}
+
+void print_constant(char *name, char *type, int value)
+{
+ printf("#define %s(task) *((%s *) &(((char *) (task))[%d]))\n", name, type,
+ value);
+}
+
+void print_tail(void)
+{
+ printf("\n");
+ printf("#endif\n");
+}
--- /dev/null
+#include <asm-generic/vmlinux.lds.h>
+
+OUTPUT_FORMAT(ELF_FORMAT)
+OUTPUT_ARCH(ELF_ARCH)
+ENTRY(_start)
+jiffies = jiffies_64;
+
+SECTIONS
+{
+ . = START + SIZEOF_HEADERS;
+
+ . = ALIGN(4096);
+ __binary_start = .;
+#ifdef MODE_TT
+ .thread_private : {
+ __start_thread_private = .;
+ errno = .;
+ . += 4;
+ arch/um/kernel/tt/unmap_fin.o (.data)
+ __end_thread_private = .;
+ }
+ . = ALIGN(4096);
+ .remap : { arch/um/kernel/tt/unmap_fin.o (.text) }
+#endif
+
+ . = ALIGN(4096); /* Init code and data */
+ _stext = .;
+ __init_begin = .;
+ .text.init : { *(.text.init) }
+ . = ALIGN(4096);
+ .text :
+ {
+ *(.text)
+ /* .gnu.warning sections are handled specially by elf32.em. */
+ *(.gnu.warning)
+ *(.gnu.linkonce.t*)
+ }
+
+ #include "asm/common.lds.S"
+
+ .data.init : { *(.data.init) }
+ .data :
+ {
+ . = ALIGN(KERNEL_STACK_SIZE); /* init_task */
+ *(.data.init_task)
+ *(.data)
+ *(.gnu.linkonce.d*)
+ CONSTRUCTORS
+ }
+ .data1 : { *(.data1) }
+ .ctors :
+ {
+ *(.ctors)
+ }
+ .dtors :
+ {
+ *(.dtors)
+ }
+
+ .got : { *(.got.plt) *(.got) }
+ .dynamic : { *(.dynamic) }
+ /* We want the small data sections together, so single-instruction offsets
+ can access them all, and initialized data all before uninitialized, so
+ we can shorten the on-disk segment size. */
+ .sdata : { *(.sdata) }
+ _edata = .;
+ PROVIDE (edata = .);
+ . = ALIGN(0x1000);
+ .sbss :
+ {
+ __bss_start = .;
+ PROVIDE(_bss_start = .);
+ *(.sbss)
+ *(.scommon)
+ }
+ .bss :
+ {
+ *(.dynbss)
+ *(.bss)
+ *(COMMON)
+ }
+ _end = . ;
+ PROVIDE (end = .);
+ /* Stabs debugging sections. */
+ .stab 0 : { *(.stab) }
+ .stabstr 0 : { *(.stabstr) }
+ .stab.excl 0 : { *(.stab.excl) }
+ .stab.exclstr 0 : { *(.stab.exclstr) }
+ .stab.index 0 : { *(.stab.index) }
+ .stab.indexstr 0 : { *(.stab.indexstr) }
+ .comment 0 : { *(.comment) }
+}
#include <linux/ptrace.h>
#include <linux/highuid.h>
#include <linux/vmalloc.h>
+#include <linux/vs_cvirt.h>
#include <asm/mman.h>
#include <asm/types.h>
#include <asm/uaccess.h>
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+
+extra-y := head.o head64.o init_task.o vmlinux.lds.s
+EXTRA_AFLAGS := -traditional
+obj-y := process.o semaphore.o signal.o entry.o traps.o irq.o \
+ ptrace.o i8259.o ioport.o ldt.o setup.o time.o sys_x86_64.o \
+ x8664_ksyms.o i387.o syscall.o vsyscall.o \
+ setup64.o bootflag.o e820.o reboot.o warmreboot.o
+obj-y += mce.o
+
+obj-$(CONFIG_MTRR) += ../../i386/kernel/cpu/mtrr/
+obj-$(CONFIG_ACPI_BOOT) += acpi/
+obj-$(CONFIG_X86_MSR) += msr.o
+obj-$(CONFIG_MICROCODE) += microcode.o
+obj-$(CONFIG_X86_CPUID) += cpuid.o
+obj-$(CONFIG_SMP) += smp.o smpboot.o trampoline.o
+obj-$(CONFIG_X86_LOCAL_APIC) += apic.o nmi.o
+obj-$(CONFIG_X86_IO_APIC) += io_apic.o mpparse.o
+obj-$(CONFIG_PM) += suspend.o
+obj-$(CONFIG_SOFTWARE_SUSPEND) += suspend_asm.o
+obj-$(CONFIG_CPU_FREQ) += cpufreq/
+obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
+obj-$(CONFIG_GART_IOMMU) += pci-gart.o aperture.o
+obj-$(CONFIG_DUMMY_IOMMU) += pci-nommu.o pci-dma.o
+obj-$(CONFIG_SWIOTLB) += swiotlb.o
+obj-$(CONFIG_SCHED_SMT) += domain.o
+
+obj-$(CONFIG_MODULES) += module.o
+
+obj-y += topology.o
+
+bootflag-y += ../../i386/kernel/bootflag.o
+cpuid-$(subst m,y,$(CONFIG_X86_CPUID)) += ../../i386/kernel/cpuid.o
+topology-y += ../../i386/mach-default/topology.o
+swiotlb-$(CONFIG_SWIOTLB) += ../../ia64/lib/swiotlb.o
+microcode-$(subst m,y,$(CONFIG_MICROCODE)) += ../../i386/kernel/microcode.o
--- /dev/null
+#include <linux/init.h>
+#include <linux/sched.h>
+
+/* Don't do any NUMA setup on Opteron right now. They seem to be
+ better off with flat scheduling. This is just for SMT. */
+
+#ifdef CONFIG_SCHED_SMT
+
+static struct sched_group sched_group_cpus[NR_CPUS];
+static struct sched_group sched_group_phys[NR_CPUS];
+static DEFINE_PER_CPU(struct sched_domain, cpu_domains);
+static DEFINE_PER_CPU(struct sched_domain, phys_domains);
+__init void arch_init_sched_domains(void)
+{
+ int i;
+ struct sched_group *first = NULL, *last = NULL;
+
+ /* Set up domains */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ struct sched_domain *phys_domain = &per_cpu(phys_domains, i);
+
+ *cpu_domain = SD_SIBLING_INIT;
+ /* Disable SMT NICE for CMP */
+ /* RED-PEN use a generic flag */
+ if (cpu_data[i].x86_vendor == X86_VENDOR_AMD)
+ cpu_domain->flags &= ~SD_SHARE_CPUPOWER;
+ cpu_domain->span = cpu_sibling_map[i];
+ cpu_domain->parent = phys_domain;
+ cpu_domain->groups = &sched_group_cpus[i];
+
+ *phys_domain = SD_CPU_INIT;
+ phys_domain->span = cpu_possible_map;
+ phys_domain->groups = &sched_group_phys[first_cpu(cpu_domain->span)];
+ }
+
+ /* Set up CPU (sibling) groups */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ int j;
+ first = last = NULL;
+
+ if (i != first_cpu(cpu_domain->span))
+ continue;
+
+ for_each_cpu_mask(j, cpu_domain->span) {
+ struct sched_group *cpu = &sched_group_cpus[j];
+
+ cpus_clear(cpu->cpumask);
+ cpu_set(j, cpu->cpumask);
+ cpu->cpu_power = SCHED_LOAD_SCALE;
+
+ if (!first)
+ first = cpu;
+ if (last)
+ last->next = cpu;
+ last = cpu;
+ }
+ last->next = first;
+ }
+
+ first = last = NULL;
+ /* Set up physical groups */
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ struct sched_group *cpu = &sched_group_phys[i];
+
+ if (i != first_cpu(cpu_domain->span))
+ continue;
+
+ cpu->cpumask = cpu_domain->span;
+ /*
+ * Make each extra sibling increase power by 10% of
+ * the basic CPU. This is very arbitrary.
+ */
+ cpu->cpu_power = SCHED_LOAD_SCALE + SCHED_LOAD_SCALE*(cpus_weight(cpu->cpumask)-1) / 10;
+
+ if (!first)
+ first = cpu;
+ if (last)
+ last->next = cpu;
+ last = cpu;
+ }
+ last->next = first;
+
+ mb();
+ for_each_cpu(i) {
+ struct sched_domain *cpu_domain = &per_cpu(cpu_domains, i);
+ cpu_attach_domain(cpu_domain, i);
+ }
+}
+
+#endif
source "arch/i386/Kconfig.debug"
-source "kernel/vserver/Kconfig"
-
source "security/Kconfig"
source "crypto/Kconfig"
@if [ -e arch/xen/arch ]; then $(MAKE) $(clean)=arch/xen/arch; fi;
@rm -f arch/xen/arch include/.asm-ignore include/asm-xen/asm
@rm -f vmlinux-stripped vmlinuz
- @rm -f boot/bzImage
define archhelp
echo '* vmlinuz - Compressed kernel image'
generate incorrect output with certain kernel constructs when
-mregparm=3 is used.
-config KERN_PHYS_OFFSET
- int "Physical address where the kernel is loaded (1-112)MB"
- range 1 112
- default "1"
- help
- This gives the physical address where the kernel is loaded.
- Primarily used in the case of kexec on panic where the
- recovery kernel needs to run at a different address than
- the panic-ed kernel.
config X86_LOCAL_APIC
bool
$(call if_changed,syscall)
c-link := init_task.o
-s-link := vsyscall-int80.o vsyscall-sysenter.o vsyscall-sigreturn.o # \
- # vsyscall-note.o MEF: looks like this should not be here.
+s-link := vsyscall-int80.o vsyscall-sysenter.o vsyscall-sigreturn.o \
+ vsyscall-note.o
$(patsubst %.o,$(obj)/%.c,$(c-obj-y) $(c-link)) $(patsubst %.o,$(obj)/%.S,$(s-obj-y) $(s-link)):
@ln -fsn $(srctree)/arch/i386/kernel/$(notdir $@) $@
.long sys_tgkill /* 270 */
.long sys_utimes
.long sys_fadvise64_64
- .long sys_vserver
+ .long sys_ni_syscall /* sys_vserver */
.long sys_mbind
.long sys_get_mempolicy
.long sys_set_mempolicy
+++ /dev/null
-/* ld script to make i386 Linux kernel
- * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>;
- */
-
-
-#include <asm-generic/vmlinux.lds.h>
-#include <asm/thread_info.h>
-#include <asm/page.h>
-
-
-OUTPUT_FORMAT("elf32-i386", "elf32-i386", "elf32-i386")
-OUTPUT_ARCH(i386)
-#ifdef ALK_KEXEC
-#define LOAD_OFFSET __PAGE_OFFSET
-#define KERN_PHYS_OFFSET (CONFIG_KERN_PHYS_OFFSET * 0x100000)
-ENTRY(phys_startup_32)
-jiffies = jiffies_64;
-SECTIONS
-{
- . = LOAD_OFFSET + KERN_PHYS_OFFSET;
- phys_startup_32 = startup_32 - LOAD_OFFSET;
- /* read-only */
- _text = .; /* Text and read-only data */
- .text : AT(ADDR(.text) - LOAD_OFFSET) {
- *(.text)
- SCHED_TEXT
- LOCK_TEXT
- *(.fixup)
- *(.gnu.warning)
- } = 0x9090
-
- _etext = .; /* End of text section */
-
- . = ALIGN(16); /* Exception table */
- __start___ex_table = .;
- __ex_table : AT(ADDR(__ex_table) - LOAD_OFFSET) { *(__ex_table) }
- __stop___ex_table = .;
-
- RODATA
-
- /* writeable */
- .data : AT(ADDR(.data) - LOAD_OFFSET) { /* Data */
- *(.data)
- CONSTRUCTORS
- }
-
- . = ALIGN(4096);
- __nosave_begin = .;
- .data_nosave : AT(ADDR(.data_nosave) - LOAD_OFFSET) { *(.data.nosave) }
- . = ALIGN(4096);
- __nosave_end = .;
-
- . = ALIGN(4096);
- .data.page_aligned : AT(ADDR(.data.page_aligned) - LOAD_OFFSET) { *(.data.idt) }
-
- . = ALIGN(32);
- .data.cacheline_aligned : AT(ADDR(.data.cacheline_aligned) - LOAD_OFFSET) {
- *(.data.cacheline_aligned)
- }
-
- _edata = .; /* End of data section */
-
- . = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : AT(ADDR(.data.init_task) - LOAD_OFFSET) { *(.data.init_task) }
-
- /* will be freed after init */
- . = ALIGN(4096); /* Init code and data */
- __init_begin = .;
- .init.text : AT(ADDR(.init.text) - LOAD_OFFSET) {
- _sinittext = .;
- *(.init.text)
- _einittext = .;
- }
- .init.data : AT(ADDR(.init.data) - LOAD_OFFSET) { *(.init.data) }
- . = ALIGN(16);
- __setup_start = .;
- .init.setup : AT(ADDR(.init.setup) - LOAD_OFFSET) { *(.init.setup) }
- __setup_end = .;
- __initcall_start = .;
- .initcall.init : AT(ADDR(.initcall.init) - LOAD_OFFSET) {
- *(.initcall1.init)
- *(.initcall2.init)
- *(.initcall3.init)
- *(.initcall4.init)
- *(.initcall5.init)
- *(.initcall6.init)
- *(.initcall7.init)
- }
- __initcall_end = .;
- __con_initcall_start = .;
- .con_initcall.init : AT(ADDR(.con_initcall.init) - LOAD_OFFSET) {
- *(.con_initcall.init)
- }
- __con_initcall_end = .;
- SECURITY_INIT
- . = ALIGN(4);
- __alt_instructions = .;
- .altinstructions : AT(ADDR(.altinstructions) - LOAD_OFFSET) {
- *(.altinstructions)
- }
- __alt_instructions_end = .;
- .altinstr_replacement : AT(ADDR(.altinstr_replacement) - LOAD_OFFSET) {
- *(.altinstr_replacement)
- }
- /* .exit.text is discard at runtime, not link time, to deal with references
- from .altinstructions and .eh_frame */
- .exit.text : AT(ADDR(.exit.text) - LOAD_OFFSET) { *(.exit.text) }
- .exit.data : AT(ADDR(.exit.data) - LOAD_OFFSET) { *(.exit.data) }
- . = ALIGN(4096);
- __initramfs_start = .;
- .init.ramfs : AT(ADDR(.init.ramfs) - LOAD_OFFSET) { *(.init.ramfs) }
- __initramfs_end = .;
- . = ALIGN(32);
- __per_cpu_start = .;
- .data.percpu : AT(ADDR(.data.percpu) - LOAD_OFFSET) { *(.data.percpu) }
- __per_cpu_end = .;
- . = ALIGN(4096);
- __init_end = .;
- /* freed after init ends here */
-
- __bss_start = .; /* BSS */
- .bss.page_aligned : AT(ADDR(.bss.page_aligned) - LOAD_OFFSET) {
- *(.bss.page_aligned) }
- .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
- *(.bss)
- }
- . = ALIGN(4);
- __bss_stop = .;
-
- _end = . ;
-
- /* This is where the kernel creates the early boot page tables */
- . = ALIGN(4096);
- pg0 = .;
-
- /* Sections to be discarded */
- /DISCARD/ : {
- *(.exitcall.exit)
- }
-
- /* Stabs debugging sections. */
- .stab 0 : { *(.stab) }
- .stabstr 0 : { *(.stabstr) }
- .stab.excl 0 : { *(.stab.excl) }
- .stab.exclstr 0 : { *(.stab.exclstr) }
- .stab.index 0 : { *(.stab.index) }
- .stab.indexstr 0 : { *(.stab.indexstr) }
- .comment 0 : { *(.comment) }
-}
-#else
-ENTRY(startup_32)
-jiffies = jiffies_64;
-SECTIONS
-{
- . = __PAGE_OFFSET + 0x100000;
- /* read-only */
- _text = .; /* Text and read-only data */
- .text : {
- *(.text)
- SCHED_TEXT
- LOCK_TEXT
- *(.fixup)
- *(.gnu.warning)
- } = 0x9090
-
- _etext = .; /* End of text section */
-
- . = ALIGN(16); /* Exception table */
- __start___ex_table = .;
- __ex_table : { *(__ex_table) }
- __stop___ex_table = .;
-
- RODATA
-
- /* writeable */
- .data : { /* Data */
- *(.data)
- CONSTRUCTORS
- }
-
- . = ALIGN(4096);
- __nosave_begin = .;
- .data_nosave : { *(.data.nosave) }
- . = ALIGN(4096);
- __nosave_end = .;
-
- . = ALIGN(4096);
- .data.page_aligned : { *(.data.idt) }
-
- . = ALIGN(32);
- .data.cacheline_aligned : { *(.data.cacheline_aligned) }
-
- _edata = .; /* End of data section */
-
- . = ALIGN(THREAD_SIZE); /* init_task */
- .data.init_task : { *(.data.init_task) }
-
- /* will be freed after init */
- . = ALIGN(4096); /* Init code and data */
- __init_begin = .;
- .init.text : {
- _sinittext = .;
- *(.init.text)
- _einittext = .;
- }
- .init.data : { *(.init.data) }
- . = ALIGN(16);
- __setup_start = .;
- .init.setup : { *(.init.setup) }
- __setup_end = .;
- __initcall_start = .;
- .initcall.init : {
- *(.initcall1.init)
- *(.initcall2.init)
- *(.initcall3.init)
- *(.initcall4.init)
- *(.initcall5.init)
- *(.initcall6.init)
- *(.initcall7.init)
- }
- __initcall_end = .;
- __con_initcall_start = .;
- .con_initcall.init : { *(.con_initcall.init) }
- __con_initcall_end = .;
- SECURITY_INIT
- . = ALIGN(4);
- __alt_instructions = .;
- .altinstructions : { *(.altinstructions) }
- __alt_instructions_end = .;
- .altinstr_replacement : { *(.altinstr_replacement) }
- /* .exit.text is discarded at runtime, not link time, to deal with references
- from .altinstructions and .eh_frame */
- .exit.text : { *(.exit.text) }
- .exit.data : { *(.exit.data) }
- . = ALIGN(4096);
- __initramfs_start = .;
- .init.ramfs : { *(.init.ramfs) }
- __initramfs_end = .;
- . = ALIGN(32);
- __per_cpu_start = .;
- .data.percpu : { *(.data.percpu) }
- __per_cpu_end = .;
- . = ALIGN(4096);
- __init_end = .;
- /* freed after init ends here */
-
- __bss_start = .; /* BSS */
- .bss : {
- *(.bss.page_aligned)
- *(.bss)
- }
- . = ALIGN(4);
- __bss_stop = .;
-
- _end = . ;
-
- /* This is where the kernel creates the early boot page tables */
- . = ALIGN(4096);
- pg0 = .;
-
- /* Sections to be discarded */
- /DISCARD/ : {
- *(.exitcall.exit)
- }
-
- /* Stabs debugging sections. */
- .stab 0 : { *(.stab) }
- .stabstr 0 : { *(.stabstr) }
- .stab.excl 0 : { *(.stab.excl) }
- .stab.exclstr 0 : { *(.stab.exclstr) }
- .stab.index 0 : { *(.stab.index) }
- .stab.indexstr 0 : { *(.stab.indexstr) }
- .comment 0 : { *(.comment) }
-}
-#endif
#
# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.12-1.1_1390_FC4.1.planetlab.2005.08.04
-# Thu Aug 11 15:24:29 2005
+# Linux kernel version: 2.6.12-1.1398_FC4.0.planetlab.2005.08.08
+# Tue Aug 9 13:24:51 2005
#
CONFIG_X86=y
CONFIG_MMU=y
CONFIG_BLK_DEV_UMEM=m
# CONFIG_BLK_DEV_COW_COMMON is not set
CONFIG_BLK_DEV_LOOP=m
-# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+CONFIG_BLK_DEV_CRYPTOLOOP=m
# CONFIG_BLK_DEV_VROOT is not set
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_SX8=m
CONFIG_SCSI_ATA_PIIX=m
CONFIG_SCSI_SATA_NV=m
CONFIG_SCSI_SATA_PROMISE=m
-CONFIG_SCSI_SATA_QSTOR=m
+# CONFIG_SCSI_SATA_QSTOR is not set
CONFIG_SCSI_SATA_SX4=m
CONFIG_SCSI_SATA_SIL=m
CONFIG_SCSI_SATA_SIS=m
CONFIG_SCSI_QLA2300=m
CONFIG_SCSI_QLA2322=m
CONFIG_SCSI_QLA6312=m
-CONFIG_SCSI_LPFC=m
+# CONFIG_SCSI_LPFC is not set
# CONFIG_SCSI_SYM53C416 is not set
# CONFIG_SCSI_DC395x is not set
CONFIG_SCSI_DC390T=m
# CONFIG_NET_KEY is not set
CONFIG_INET=y
# CONFIG_IP_MULTICAST is not set
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-# CONFIG_IP_ROUTE_FWMARK is not set
-# CONFIG_IP_ROUTE_MULTIPATH is not set
-CONFIG_IP_ROUTE_VERBOSE=y
+# CONFIG_IP_ADVANCED_ROUTER is not set
# CONFIG_IP_PNP is not set
# CONFIG_NET_IPIP is not set
# CONFIG_NET_IPGRE is not set
CONFIG_IP_NF_ARPTABLES=m
CONFIG_IP_NF_ARPFILTER=m
CONFIG_IP_NF_ARP_MANGLE=m
+# CONFIG_IP_NF_COMPAT_IPCHAINS is not set
+# CONFIG_IP_NF_COMPAT_IPFWADM is not set
CONFIG_IP_NF_CT_PROTO_GRE=m
CONFIG_IP_NF_PPTP=m
CONFIG_IP_NF_NAT_PPTP=m
#
# SCTP Configuration (EXPERIMENTAL)
#
-CONFIG_IP_SCTP=m
-# CONFIG_SCTP_DBG_MSG is not set
-# CONFIG_SCTP_DBG_OBJCNT is not set
-# CONFIG_SCTP_HMAC_NONE is not set
-# CONFIG_SCTP_HMAC_SHA1 is not set
-CONFIG_SCTP_HMAC_MD5=y
+# CONFIG_IP_SCTP is not set
# CONFIG_ATM is not set
# CONFIG_BRIDGE is not set
# CONFIG_VLAN_8021Q is not set
# CONFIG_BT is not set
# CONFIG_TUX is not set
CONFIG_NETDEVICES=y
-# CONFIG_DUMMY is not set
+CONFIG_DUMMY=m
# CONFIG_BONDING is not set
# CONFIG_EQUALIZER is not set
-# CONFIG_TUN is not set
+CONFIG_TUN=m
#
# ARCnet devices
# CONFIG_AT1700 is not set
CONFIG_DEPCA=m
CONFIG_HP100=m
-# CONFIG_NET_ISA is not set
+CONFIG_NET_ISA=y
+# CONFIG_E2100 is not set
+# CONFIG_EWRK3 is not set
+# CONFIG_EEXPRESS is not set
+# CONFIG_EEXPRESS_PRO is not set
+# CONFIG_HPLAN_PLUS is not set
+# CONFIG_HPLAN is not set
+# CONFIG_LP486E is not set
+# CONFIG_ETH16I is not set
+CONFIG_NE2000=m
+# CONFIG_ZNET is not set
+# CONFIG_SEEQ8005 is not set
CONFIG_NET_PCI=y
CONFIG_PCNET32=m
CONFIG_AMD8111_ETH=m
# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
-# CONFIG_INPUT_JOYDEV is not set
+CONFIG_INPUT_JOYDEV=m
# CONFIG_INPUT_TSDEV is not set
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set
# Ftape, the floppy tape device driver
#
# CONFIG_FTAPE is not set
-# CONFIG_AGP is not set
+CONFIG_AGP=m
+CONFIG_AGP_ALI=m
+CONFIG_AGP_ATI=m
+CONFIG_AGP_AMD=m
+CONFIG_AGP_AMD64=m
+CONFIG_AGP_INTEL=m
+CONFIG_AGP_NVIDIA=m
+CONFIG_AGP_SIS=m
+CONFIG_AGP_SWORKS=m
+CONFIG_AGP_VIA=m
+CONFIG_AGP_EFFICEON=m
# CONFIG_DRM is not set
# CONFIG_MWAVE is not set
# CONFIG_RAW_DRIVER is not set
#
# I2C support
#
-# CONFIG_I2C is not set
+CONFIG_I2C=m
+# CONFIG_I2C_CHARDEV is not set
+
+#
+# I2C Algorithms
+#
+CONFIG_I2C_ALGOBIT=m
+# CONFIG_I2C_ALGOPCF is not set
+# CONFIG_I2C_ALGOPCA is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_ELEKTOR is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+# CONFIG_I2C_PIIX4 is not set
+CONFIG_I2C_ISA=m
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_STUB is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+# CONFIG_I2C_PCA_ISA is not set
+
+#
+# Hardware Sensors Chip support
+#
+# CONFIG_I2C_SENSOR is not set
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ADM1025 is not set
+# CONFIG_SENSORS_ADM1026 is not set
+# CONFIG_SENSORS_ADM1031 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_FSCPOS is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_GL520SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM63 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM77 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM87 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_LM92 is not set
+# CONFIG_SENSORS_MAX1619 is not set
+# CONFIG_SENSORS_PC87360 is not set
+# CONFIG_SENSORS_SMSC47B397 is not set
+# CONFIG_SENSORS_SIS5595 is not set
+# CONFIG_SENSORS_SMSC47M1 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+# CONFIG_SENSORS_DS1337 is not set
+# CONFIG_SENSORS_EEPROM is not set
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_SENSORS_RTC8564 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
#
# Dallas's 1-wire bus
#
# Multimedia devices
#
-# CONFIG_VIDEO_DEV is not set
+CONFIG_VIDEO_DEV=m
+
+#
+# Video For Linux
+#
+
+#
+# Video Adapters
+#
+# CONFIG_VIDEO_BT848 is not set
+# CONFIG_VIDEO_PMS is not set
+# CONFIG_VIDEO_CPIA is not set
+# CONFIG_VIDEO_SAA5246A is not set
+# CONFIG_VIDEO_SAA5249 is not set
+# CONFIG_TUNER_3036 is not set
+# CONFIG_VIDEO_STRADIS is not set
+# CONFIG_VIDEO_ZORAN is not set
+# CONFIG_VIDEO_SAA7134 is not set
+# CONFIG_VIDEO_MXB is not set
+# CONFIG_VIDEO_DPC is not set
+# CONFIG_VIDEO_HEXIUM_ORION is not set
+# CONFIG_VIDEO_HEXIUM_GEMINI is not set
+# CONFIG_VIDEO_CX88 is not set
+# CONFIG_VIDEO_OVCAMCHIP is not set
+
+#
+# Radio Adapters
+#
+# CONFIG_RADIO_CADET is not set
+# CONFIG_RADIO_RTRACK is not set
+# CONFIG_RADIO_RTRACK2 is not set
+# CONFIG_RADIO_AZTECH is not set
+# CONFIG_RADIO_GEMTEK is not set
+# CONFIG_RADIO_GEMTEK_PCI is not set
+# CONFIG_RADIO_MAXIRADIO is not set
+# CONFIG_RADIO_MAESTRO is not set
+# CONFIG_RADIO_SF16FMI is not set
+# CONFIG_RADIO_SF16FMR2 is not set
+# CONFIG_RADIO_TERRATEC is not set
+# CONFIG_RADIO_TRUST is not set
+# CONFIG_RADIO_TYPHOON is not set
+# CONFIG_RADIO_ZOLTRIX is not set
#
# Digital Video Broadcasting Devices
# Console display driver support
#
CONFIG_VGA_CONSOLE=y
-# CONFIG_MDA_CONSOLE is not set
+CONFIG_MDA_CONSOLE=m
CONFIG_DUMMY_CONSOLE=y
#
CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_ISD200=y
CONFIG_USB_STORAGE_DPCM=y
-CONFIG_USB_STORAGE_USBAT=y
+# CONFIG_USB_STORAGE_USBAT is not set
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
# USB Multimedia devices
#
# CONFIG_USB_DABUSB is not set
-
-#
-# Video4Linux support is needed for USB Multimedia device support
-#
+# CONFIG_USB_VICAM is not set
+# CONFIG_USB_DSBR is not set
+# CONFIG_USB_IBMCAM is not set
+# CONFIG_USB_KONICAWC is not set
+# CONFIG_USB_OV511 is not set
+# CONFIG_USB_SE401 is not set
+# CONFIG_USB_SN9C102 is not set
+# CONFIG_USB_STV680 is not set
+# CONFIG_USB_PWC is not set
#
# USB Network Adapters
# CONFIG_NLS_ISO8859_15 is not set
# CONFIG_NLS_KOI8_R is not set
# CONFIG_NLS_KOI8_U is not set
-CONFIG_NLS_UTF8=m
+# CONFIG_NLS_UTF8 is not set
#
# Profiling support
#
-# CONFIG_PROFILING is not set
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=m
#
# Kernel hacking
# CONFIG_VSERVER_NGNET is not set
# CONFIG_VSERVER_PROC_SECURE is not set
CONFIG_VSERVER_HARDCPU=y
-CONFIG_VSERVER_HARDCPU_IDLE=y
-CONFIG_VSERVER_ACB_SCHED=y
+# CONFIG_VSERVER_HARDCPU_IDLE is not set
# CONFIG_INOXID_NONE is not set
# CONFIG_INOXID_UID16 is not set
# CONFIG_INOXID_GID16 is not set
# Cryptographic options
#
CONFIG_CRYPTO=y
-CONFIG_CRYPTO_HMAC=y
+# CONFIG_CRYPTO_HMAC is not set
# CONFIG_CRYPTO_NULL is not set
# CONFIG_CRYPTO_MD4 is not set
CONFIG_CRYPTO_MD5=m
+++ /dev/null
-#
-# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.12-1.1_1390_FC4.1.planetlab
-# Fri Aug 26 16:28:13 2005
-#
-CONFIG_X86=y
-CONFIG_MMU=y
-CONFIG_UID16=y
-CONFIG_GENERIC_ISA_DMA=y
-CONFIG_GENERIC_IOMAP=y
-
-#
-# Code maturity level options
-#
-CONFIG_EXPERIMENTAL=y
-CONFIG_CLEAN_COMPILE=y
-CONFIG_LOCK_KERNEL=y
-CONFIG_INIT_ENV_ARG_LIMIT=32
-
-#
-# General setup
-#
-CONFIG_LOCALVERSION=""
-CONFIG_SWAP=y
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_BSD_PROCESS_ACCT=y
-# CONFIG_BSD_PROCESS_ACCT_V3 is not set
-CONFIG_SYSCTL=y
-# CONFIG_AUDIT is not set
-CONFIG_HOTPLUG=y
-CONFIG_KOBJECT_UEVENT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_OOM_PANIC=y
-CONFIG_CPUSETS=y
-# CONFIG_EMBEDDED is not set
-CONFIG_KALLSYMS=y
-# CONFIG_KALLSYMS_ALL is not set
-CONFIG_KALLSYMS_EXTRA_PASS=y
-CONFIG_PRINTK=y
-CONFIG_BUG=y
-CONFIG_BASE_FULL=y
-CONFIG_FUTEX=y
-CONFIG_EPOLL=y
-CONFIG_CC_OPTIMIZE_FOR_SIZE=y
-CONFIG_SHMEM=y
-CONFIG_CC_ALIGN_FUNCTIONS=0
-CONFIG_CC_ALIGN_LABELS=0
-CONFIG_CC_ALIGN_LOOPS=0
-CONFIG_CC_ALIGN_JUMPS=0
-# CONFIG_TINY_SHMEM is not set
-CONFIG_BASE_SMALL=0
-
-#
-# Loadable module support
-#
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-# CONFIG_MODULE_FORCE_UNLOAD is not set
-CONFIG_OBSOLETE_MODPARM=y
-# CONFIG_MODVERSIONS is not set
-# CONFIG_MODULE_SRCVERSION_ALL is not set
-# CONFIG_MODULE_SIG is not set
-CONFIG_KMOD=y
-CONFIG_STOP_MACHINE=y
-
-#
-# Processor type and features
-#
-# CONFIG_X86_PC is not set
-# CONFIG_X86_ELAN is not set
-# CONFIG_X86_VOYAGER is not set
-# CONFIG_X86_NUMAQ is not set
-# CONFIG_X86_SUMMIT is not set
-# CONFIG_X86_BIGSMP is not set
-# CONFIG_X86_VISWS is not set
-CONFIG_X86_GENERICARCH=y
-# CONFIG_X86_ES7000 is not set
-CONFIG_X86_CYCLONE_TIMER=y
-# CONFIG_M386 is not set
-# CONFIG_M486 is not set
-# CONFIG_M586 is not set
-# CONFIG_M586TSC is not set
-# CONFIG_M586MMX is not set
-CONFIG_M686=y
-# CONFIG_MPENTIUMII is not set
-# CONFIG_MPENTIUMIII is not set
-# CONFIG_MPENTIUMM is not set
-# CONFIG_MPENTIUM4 is not set
-# CONFIG_MK6 is not set
-# CONFIG_MK7 is not set
-# CONFIG_MK8 is not set
-# CONFIG_MCRUSOE is not set
-# CONFIG_MEFFICEON is not set
-# CONFIG_MWINCHIPC6 is not set
-# CONFIG_MWINCHIP2 is not set
-# CONFIG_MWINCHIP3D is not set
-# CONFIG_MGEODEGX1 is not set
-# CONFIG_MCYRIXIII is not set
-# CONFIG_MVIAC3_2 is not set
-CONFIG_X86_GENERIC=y
-CONFIG_X86_CMPXCHG=y
-CONFIG_X86_XADD=y
-CONFIG_X86_L1_CACHE_SHIFT=7
-CONFIG_RWSEM_XCHGADD_ALGORITHM=y
-CONFIG_GENERIC_CALIBRATE_DELAY=y
-CONFIG_X86_PPRO_FENCE=y
-CONFIG_X86_WP_WORKS_OK=y
-CONFIG_X86_INVLPG=y
-CONFIG_X86_BSWAP=y
-CONFIG_X86_POPAD_OK=y
-CONFIG_X86_GOOD_APIC=y
-CONFIG_X86_INTEL_USERCOPY=y
-CONFIG_X86_USE_PPRO_CHECKSUM=y
-CONFIG_HPET_TIMER=y
-CONFIG_SMP=y
-CONFIG_NR_CPUS=32
-CONFIG_SCHED_SMT=y
-# CONFIG_PREEMPT is not set
-CONFIG_X86_LOCAL_APIC=y
-CONFIG_X86_IO_APIC=y
-CONFIG_KERNEL_HZ=1000
-CONFIG_X86_TSC=y
-CONFIG_X86_MCE=y
-# CONFIG_X86_MCE_NONFATAL is not set
-CONFIG_X86_MCE_P4THERMAL=y
-CONFIG_TOSHIBA=m
-CONFIG_I8K=m
-# CONFIG_X86_REBOOTFIXUPS is not set
-CONFIG_MICROCODE=m
-CONFIG_X86_MSR=m
-CONFIG_X86_CPUID=m
-
-#
-# Firmware Drivers
-#
-CONFIG_EDD=m
-# CONFIG_NOHIGHMEM is not set
-# CONFIG_HIGHMEM4G is not set
-CONFIG_HIGHMEM64G=y
-CONFIG_SPLIT_3GB=y
-# CONFIG_SPLIT_25GB is not set
-# CONFIG_SPLIT_2GB is not set
-# CONFIG_SPLIT_15GB is not set
-# CONFIG_SPLIT_1GB is not set
-CONFIG_HIGHMEM=y
-CONFIG_X86_PAE=y
-# CONFIG_NUMA is not set
-CONFIG_HIGHPTE=y
-# CONFIG_MATH_EMULATION is not set
-CONFIG_MTRR=y
-# CONFIG_EFI is not set
-# CONFIG_IRQBALANCE is not set
-CONFIG_HAVE_DEC_LOCK=y
-CONFIG_REGPARM=y
-CONFIG_KERN_PHYS_OFFSET=1
-CONFIG_KEXEC=y
-# CONFIG_CRASH_DUMP is not set
-# CONFIG_SECCOMP is not set
-
-#
-# Power management options (ACPI, APM)
-#
-CONFIG_PM=y
-# CONFIG_PM_DEBUG is not set
-# CONFIG_SOFTWARE_SUSPEND is not set
-
-#
-# ACPI (Advanced Configuration and Power Interface) Support
-#
-CONFIG_ACPI=y
-CONFIG_ACPI_BOOT=y
-CONFIG_ACPI_INTERPRETER=y
-CONFIG_ACPI_SLEEP=y
-CONFIG_ACPI_SLEEP_PROC_FS=y
-# CONFIG_ACPI_AC is not set
-# CONFIG_ACPI_BATTERY is not set
-# CONFIG_ACPI_BUTTON is not set
-# CONFIG_ACPI_VIDEO is not set
-# CONFIG_ACPI_FAN is not set
-# CONFIG_ACPI_PROCESSOR is not set
-# CONFIG_ACPI_ASUS is not set
-# CONFIG_ACPI_IBM is not set
-# CONFIG_ACPI_TOSHIBA is not set
-CONFIG_ACPI_BLACKLIST_YEAR=2001
-# CONFIG_ACPI_DEBUG is not set
-CONFIG_ACPI_BUS=y
-CONFIG_ACPI_EC=y
-CONFIG_ACPI_POWER=y
-CONFIG_ACPI_PCI=y
-CONFIG_ACPI_SYSTEM=y
-# CONFIG_X86_PM_TIMER is not set
-# CONFIG_ACPI_CONTAINER is not set
-
-#
-# APM (Advanced Power Management) BIOS Support
-#
-# CONFIG_APM is not set
-
-#
-# CPU Frequency scaling
-#
-# CONFIG_CPU_FREQ is not set
-
-#
-# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
-#
-CONFIG_PCI=y
-# CONFIG_PCI_GOBIOS is not set
-# CONFIG_PCI_GOMMCONFIG is not set
-# CONFIG_PCI_GODIRECT is not set
-CONFIG_PCI_GOANY=y
-CONFIG_PCI_BIOS=y
-CONFIG_PCI_DIRECT=y
-CONFIG_PCI_MMCONFIG=y
-# CONFIG_PCIEPORTBUS is not set
-CONFIG_PCI_MSI=y
-CONFIG_PCI_LEGACY_PROC=y
-# CONFIG_PCI_NAMES is not set
-# CONFIG_PCI_DEBUG is not set
-CONFIG_ISA_DMA_API=y
-CONFIG_ISA=y
-# CONFIG_EISA is not set
-# CONFIG_MCA is not set
-# CONFIG_SCx200 is not set
-
-#
-# PCCARD (PCMCIA/CardBus) support
-#
-# CONFIG_PCCARD is not set
-
-#
-# PCI Hotplug Support
-#
-# CONFIG_HOTPLUG_PCI is not set
-
-#
-# Executable file formats
-#
-CONFIG_BINFMT_ELF=y
-# CONFIG_BINFMT_AOUT is not set
-CONFIG_BINFMT_MISC=y
-
-#
-# Device Drivers
-#
-
-#
-# Generic Driver Options
-#
-CONFIG_STANDALONE=y
-CONFIG_PREVENT_FIRMWARE_BUILD=y
-CONFIG_FW_LOADER=y
-# CONFIG_DEBUG_DRIVER is not set
-
-#
-# Memory Technology Devices (MTD)
-#
-# CONFIG_MTD is not set
-
-#
-# Parallel port support
-#
-# CONFIG_PARPORT is not set
-
-#
-# Plug and Play support
-#
-# CONFIG_PNP is not set
-
-#
-# Block devices
-#
-CONFIG_BLK_DEV_FD=m
-# CONFIG_BLK_DEV_XD is not set
-CONFIG_BLK_CPQ_DA=m
-CONFIG_BLK_CPQ_CISS_DA=m
-CONFIG_CISS_SCSI_TAPE=y
-CONFIG_BLK_DEV_DAC960=m
-CONFIG_BLK_DEV_UMEM=m
-# CONFIG_BLK_DEV_COW_COMMON is not set
-CONFIG_BLK_DEV_LOOP=m
-# CONFIG_BLK_DEV_CRYPTOLOOP is not set
-# CONFIG_BLK_DEV_VROOT is not set
-CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_SX8=m
-# CONFIG_BLK_DEV_UB is not set
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_COUNT=16
-CONFIG_BLK_DEV_RAM_SIZE=16384
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE=""
-CONFIG_LBD=y
-# CONFIG_CDROM_PKTCDVD is not set
-CONFIG_DISKDUMP=m
-
-#
-# IO Schedulers
-#
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
-# CONFIG_ATA_OVER_ETH is not set
-
-#
-# ATA/ATAPI/MFM/RLL support
-#
-CONFIG_IDE=y
-CONFIG_BLK_DEV_IDE=y
-
-#
-# Please see Documentation/ide.txt for help/info on IDE drives
-#
-# CONFIG_BLK_DEV_IDE_SATA is not set
-# CONFIG_BLK_DEV_HD_IDE is not set
-CONFIG_BLK_DEV_IDEDISK=y
-CONFIG_IDEDISK_MULTI_MODE=y
-CONFIG_BLK_DEV_IDECD=y
-# CONFIG_BLK_DEV_IDETAPE is not set
-CONFIG_BLK_DEV_IDEFLOPPY=y
-CONFIG_BLK_DEV_IDESCSI=m
-# CONFIG_IDE_TASK_IOCTL is not set
-
-#
-# IDE chipset support/bugfixes
-#
-CONFIG_IDE_GENERIC=y
-CONFIG_BLK_DEV_CMD640=y
-CONFIG_BLK_DEV_CMD640_ENHANCED=y
-CONFIG_BLK_DEV_IDEPCI=y
-CONFIG_IDEPCI_SHARE_IRQ=y
-# CONFIG_BLK_DEV_OFFBOARD is not set
-CONFIG_BLK_DEV_GENERIC=y
-# CONFIG_BLK_DEV_OPTI621 is not set
-CONFIG_BLK_DEV_RZ1000=y
-CONFIG_BLK_DEV_IDEDMA_PCI=y
-# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
-CONFIG_IDEDMA_PCI_AUTO=y
-# CONFIG_IDEDMA_ONLYDISK is not set
-CONFIG_BLK_DEV_AEC62XX=y
-CONFIG_BLK_DEV_ALI15X3=y
-# CONFIG_WDC_ALI15X3 is not set
-CONFIG_BLK_DEV_AMD74XX=y
-CONFIG_BLK_DEV_ATIIXP=y
-CONFIG_BLK_DEV_CMD64X=y
-CONFIG_BLK_DEV_TRIFLEX=y
-CONFIG_BLK_DEV_CY82C693=y
-CONFIG_BLK_DEV_CS5520=y
-CONFIG_BLK_DEV_CS5530=y
-CONFIG_BLK_DEV_HPT34X=y
-# CONFIG_HPT34X_AUTODMA is not set
-CONFIG_BLK_DEV_HPT366=y
-# CONFIG_BLK_DEV_SC1200 is not set
-CONFIG_BLK_DEV_PIIX=y
-CONFIG_BLK_DEV_IT821X=m
-# CONFIG_BLK_DEV_NS87415 is not set
-CONFIG_BLK_DEV_PDC202XX_OLD=y
-# CONFIG_PDC202XX_BURST is not set
-CONFIG_BLK_DEV_PDC202XX_NEW=y
-CONFIG_PDC202XX_FORCE=y
-CONFIG_BLK_DEV_SVWKS=y
-CONFIG_BLK_DEV_SIIMAGE=y
-CONFIG_BLK_DEV_SIS5513=y
-CONFIG_BLK_DEV_SLC90E66=y
-# CONFIG_BLK_DEV_TRM290 is not set
-CONFIG_BLK_DEV_VIA82CXXX=y
-# CONFIG_IDE_ARM is not set
-# CONFIG_IDE_CHIPSETS is not set
-CONFIG_BLK_DEV_IDEDMA=y
-# CONFIG_IDEDMA_IVB is not set
-CONFIG_IDEDMA_AUTO=y
-# CONFIG_BLK_DEV_HD is not set
-
-#
-# SCSI device support
-#
-CONFIG_SCSI=m
-CONFIG_SCSI_PROC_FS=y
-
-#
-# SCSI support type (disk, tape, CD-ROM)
-#
-CONFIG_BLK_DEV_SD=m
-CONFIG_CHR_DEV_ST=m
-CONFIG_CHR_DEV_OSST=m
-CONFIG_BLK_DEV_SR=m
-CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_CHR_DEV_SG=m
-
-#
-# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
-#
-CONFIG_SCSI_MULTI_LUN=y
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_LOGGING=y
-
-#
-# SCSI Transport Attributes
-#
-CONFIG_SCSI_SPI_ATTRS=m
-CONFIG_SCSI_FC_ATTRS=m
-# CONFIG_SCSI_ISCSI_ATTRS is not set
-
-#
-# SCSI low-level drivers
-#
-CONFIG_BLK_DEV_3W_XXXX_RAID=m
-CONFIG_SCSI_3W_9XXX=m
-# CONFIG_SCSI_7000FASST is not set
-CONFIG_SCSI_ACARD=m
-CONFIG_SCSI_AHA152X=m
-CONFIG_SCSI_AHA1542=m
-CONFIG_SCSI_AACRAID=m
-CONFIG_SCSI_AIC7XXX=m
-CONFIG_AIC7XXX_CMDS_PER_DEVICE=4
-CONFIG_AIC7XXX_RESET_DELAY_MS=15000
-# CONFIG_AIC7XXX_DEBUG_ENABLE is not set
-CONFIG_AIC7XXX_DEBUG_MASK=0
-# CONFIG_AIC7XXX_REG_PRETTY_PRINT is not set
-CONFIG_SCSI_AIC7XXX_OLD=m
-CONFIG_SCSI_AIC79XX=m
-CONFIG_AIC79XX_CMDS_PER_DEVICE=4
-CONFIG_AIC79XX_RESET_DELAY_MS=15000
-# CONFIG_AIC79XX_ENABLE_RD_STRM is not set
-# CONFIG_AIC79XX_DEBUG_ENABLE is not set
-CONFIG_AIC79XX_DEBUG_MASK=0
-# CONFIG_AIC79XX_REG_PRETTY_PRINT is not set
-# CONFIG_SCSI_DPT_I2O is not set
-CONFIG_SCSI_ADVANSYS=m
-CONFIG_SCSI_IN2000=m
-CONFIG_MEGARAID_NEWGEN=y
-CONFIG_MEGARAID_MM=m
-CONFIG_MEGARAID_MAILBOX=m
-CONFIG_MEGARAID_LEGACY=m
-CONFIG_SCSI_SATA=y
-CONFIG_SCSI_SATA_AHCI=m
-CONFIG_SCSI_SATA_SVW=m
-CONFIG_SCSI_ATA_PIIX=m
-CONFIG_SCSI_SATA_NV=m
-CONFIG_SCSI_SATA_PROMISE=m
-CONFIG_SCSI_SATA_QSTOR=m
-CONFIG_SCSI_SATA_SX4=m
-CONFIG_SCSI_SATA_SIL=m
-CONFIG_SCSI_SATA_SIS=m
-CONFIG_SCSI_SATA_ULI=m
-CONFIG_SCSI_SATA_VIA=m
-CONFIG_SCSI_SATA_VITESSE=m
-CONFIG_SCSI_BUSLOGIC=m
-# CONFIG_SCSI_OMIT_FLASHPOINT is not set
-# CONFIG_SCSI_DMX3191D is not set
-# CONFIG_SCSI_DTC3280 is not set
-# CONFIG_SCSI_EATA is not set
-CONFIG_SCSI_FUTURE_DOMAIN=m
-CONFIG_SCSI_GDTH=m
-# CONFIG_SCSI_GENERIC_NCR5380 is not set
-# CONFIG_SCSI_GENERIC_NCR5380_MMIO is not set
-CONFIG_SCSI_IPS=m
-CONFIG_SCSI_INITIO=m
-CONFIG_SCSI_INIA100=m
-# CONFIG_SCSI_NCR53C406A is not set
-CONFIG_SCSI_SYM53C8XX_2=m
-CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=1
-CONFIG_SCSI_SYM53C8XX_DEFAULT_TAGS=16
-CONFIG_SCSI_SYM53C8XX_MAX_TAGS=64
-# CONFIG_SCSI_SYM53C8XX_IOMAPPED is not set
-# CONFIG_SCSI_IPR is not set
-# CONFIG_SCSI_PAS16 is not set
-# CONFIG_SCSI_PSI240I is not set
-CONFIG_SCSI_QLOGIC_FAS=m
-# CONFIG_SCSI_QLOGIC_FC is not set
-CONFIG_SCSI_QLOGIC_1280=m
-CONFIG_SCSI_QLOGIC_1280_1040=y
-CONFIG_SCSI_QLA2XXX=m
-CONFIG_SCSI_QLA21XX=m
-CONFIG_SCSI_QLA22XX=m
-CONFIG_SCSI_QLA2300=m
-CONFIG_SCSI_QLA2322=m
-CONFIG_SCSI_QLA6312=m
-CONFIG_SCSI_LPFC=m
-# CONFIG_SCSI_SYM53C416 is not set
-# CONFIG_SCSI_DC395x is not set
-CONFIG_SCSI_DC390T=m
-# CONFIG_SCSI_T128 is not set
-# CONFIG_SCSI_U14_34F is not set
-# CONFIG_SCSI_ULTRASTOR is not set
-# CONFIG_SCSI_NSP32 is not set
-# CONFIG_SCSI_DEBUG is not set
-
-#
-# Old CD-ROM drivers (not SCSI, not IDE)
-#
-# CONFIG_CD_NO_IDESCSI is not set
-
-#
-# Multi-device support (RAID and LVM)
-#
-CONFIG_MD=y
-CONFIG_BLK_DEV_MD=y
-CONFIG_MD_LINEAR=y
-# CONFIG_MD_RAID0 is not set
-# CONFIG_MD_RAID1 is not set
-# CONFIG_MD_RAID10 is not set
-# CONFIG_MD_RAID5 is not set
-# CONFIG_MD_RAID6 is not set
-# CONFIG_MD_MULTIPATH is not set
-# CONFIG_MD_FAULTY is not set
-CONFIG_BLK_DEV_DM=y
-# CONFIG_DM_CRYPT is not set
-# CONFIG_DM_SNAPSHOT is not set
-# CONFIG_DM_MIRROR is not set
-# CONFIG_DM_ZERO is not set
-# CONFIG_DM_MULTIPATH is not set
-
-#
-# Fusion MPT device support
-#
-CONFIG_FUSION=m
-CONFIG_FUSION_MAX_SGE=40
-CONFIG_FUSION_CTL=m
-
-#
-# IEEE 1394 (FireWire) support
-#
-# CONFIG_IEEE1394 is not set
-
-#
-# I2O device support
-#
-# CONFIG_I2O is not set
-
-#
-# Networking support
-#
-CONFIG_NET=y
-
-#
-# Networking options
-#
-CONFIG_PACKET=y
-CONFIG_PACKET_MMAP=y
-CONFIG_UNIX=y
-# CONFIG_NET_KEY is not set
-CONFIG_INET=y
-# CONFIG_IP_MULTICAST is not set
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-# CONFIG_IP_ROUTE_FWMARK is not set
-# CONFIG_IP_ROUTE_MULTIPATH is not set
-CONFIG_IP_ROUTE_VERBOSE=y
-# CONFIG_IP_PNP is not set
-# CONFIG_NET_IPIP is not set
-# CONFIG_NET_IPGRE is not set
-# CONFIG_ARPD is not set
-# CONFIG_SYN_COOKIES is not set
-# CONFIG_INET_AH is not set
-# CONFIG_INET_ESP is not set
-# CONFIG_INET_IPCOMP is not set
-# CONFIG_INET_TUNNEL is not set
-# CONFIG_IP_TCPDIAG is not set
-# CONFIG_IP_TCPDIAG_IPV6 is not set
-
-#
-# IP: Virtual Server Configuration
-#
-# CONFIG_IP_VS is not set
-CONFIG_ICMP_IPOD=y
-# CONFIG_IPV6 is not set
-CONFIG_NETFILTER=y
-# CONFIG_NETFILTER_DEBUG is not set
-
-#
-# IP: Netfilter Configuration
-#
-CONFIG_IP_NF_CONNTRACK=m
-CONFIG_IP_NF_CT_ACCT=y
-CONFIG_IP_NF_CONNTRACK_MARK=y
-CONFIG_IP_NF_CT_PROTO_SCTP=m
-CONFIG_IP_NF_FTP=m
-CONFIG_IP_NF_IRC=m
-CONFIG_IP_NF_TFTP=m
-CONFIG_IP_NF_AMANDA=m
-CONFIG_IP_NF_QUEUE=m
-CONFIG_IP_NF_IPTABLES=m
-CONFIG_IP_NF_MATCH_LIMIT=m
-CONFIG_IP_NF_MATCH_IPRANGE=m
-CONFIG_IP_NF_MATCH_MAC=m
-CONFIG_IP_NF_MATCH_PKTTYPE=m
-CONFIG_IP_NF_MATCH_MARK=m
-CONFIG_IP_NF_MATCH_MULTIPORT=m
-CONFIG_IP_NF_MATCH_TOS=m
-CONFIG_IP_NF_MATCH_RECENT=m
-CONFIG_IP_NF_MATCH_ECN=m
-CONFIG_IP_NF_MATCH_DSCP=m
-CONFIG_IP_NF_MATCH_AH_ESP=m
-CONFIG_IP_NF_MATCH_LENGTH=m
-CONFIG_IP_NF_MATCH_TTL=m
-CONFIG_IP_NF_MATCH_TCPMSS=m
-CONFIG_IP_NF_MATCH_HELPER=m
-CONFIG_IP_NF_MATCH_STATE=m
-CONFIG_IP_NF_MATCH_CONNTRACK=m
-CONFIG_IP_NF_MATCH_OWNER=m
-CONFIG_IP_NF_MATCH_ADDRTYPE=m
-CONFIG_IP_NF_MATCH_REALM=m
-CONFIG_IP_NF_MATCH_SCTP=m
-CONFIG_IP_NF_MATCH_COMMENT=m
-CONFIG_IP_NF_MATCH_CONNMARK=m
-CONFIG_IP_NF_MATCH_HASHLIMIT=m
-CONFIG_IP_NF_FILTER=m
-CONFIG_IP_NF_TARGET_REJECT=m
-CONFIG_IP_NF_TARGET_LOG=m
-CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
-CONFIG_IP_NF_NAT=m
-CONFIG_IP_NF_NAT_NEEDED=y
-CONFIG_IP_NF_TARGET_MASQUERADE=m
-CONFIG_IP_NF_TARGET_REDIRECT=m
-CONFIG_IP_NF_TARGET_NETMAP=m
-CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_SNMP_BASIC=m
-CONFIG_IP_NF_NAT_IRC=m
-CONFIG_IP_NF_NAT_FTP=m
-CONFIG_IP_NF_NAT_TFTP=m
-CONFIG_IP_NF_NAT_AMANDA=m
-CONFIG_IP_NF_MANGLE=m
-CONFIG_IP_NF_TARGET_TOS=m
-CONFIG_IP_NF_TARGET_ECN=m
-CONFIG_IP_NF_TARGET_DSCP=m
-CONFIG_IP_NF_TARGET_MARK=m
-CONFIG_IP_NF_TARGET_CLASSIFY=m
-CONFIG_IP_NF_TARGET_CONNMARK=m
-CONFIG_IP_NF_TARGET_CLUSTERIP=m
-CONFIG_IP_NF_RAW=m
-CONFIG_IP_NF_TARGET_NOTRACK=m
-CONFIG_IP_NF_ARPTABLES=m
-CONFIG_IP_NF_ARPFILTER=m
-CONFIG_IP_NF_ARP_MANGLE=m
-CONFIG_IP_NF_CT_PROTO_GRE=m
-CONFIG_IP_NF_PPTP=m
-CONFIG_IP_NF_NAT_PPTP=m
-CONFIG_IP_NF_NAT_PROTO_GRE=m
-CONFIG_VNET=m
-
-#
-# SCTP Configuration (EXPERIMENTAL)
-#
-CONFIG_IP_SCTP=m
-# CONFIG_SCTP_DBG_MSG is not set
-# CONFIG_SCTP_DBG_OBJCNT is not set
-# CONFIG_SCTP_HMAC_NONE is not set
-# CONFIG_SCTP_HMAC_SHA1 is not set
-CONFIG_SCTP_HMAC_MD5=y
-# CONFIG_ATM is not set
-# CONFIG_BRIDGE is not set
-# CONFIG_VLAN_8021Q is not set
-# CONFIG_DECNET is not set
-# CONFIG_LLC2 is not set
-# CONFIG_IPX is not set
-# CONFIG_ATALK is not set
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-# CONFIG_NET_DIVERT is not set
-# CONFIG_ECONET is not set
-# CONFIG_WAN_ROUTER is not set
-
-#
-# QoS and/or fair queueing
-#
-CONFIG_NET_SCHED=y
-CONFIG_NET_SCH_CLK_JIFFIES=y
-# CONFIG_NET_SCH_CLK_GETTIMEOFDAY is not set
-# CONFIG_NET_SCH_CLK_CPU is not set
-# CONFIG_NET_SCH_CBQ is not set
-CONFIG_NET_SCH_HTB=m
-# CONFIG_NET_SCH_HFSC is not set
-# CONFIG_NET_SCH_PRIO is not set
-# CONFIG_NET_SCH_RED is not set
-# CONFIG_NET_SCH_SFQ is not set
-# CONFIG_NET_SCH_TEQL is not set
-# CONFIG_NET_SCH_TBF is not set
-# CONFIG_NET_SCH_GRED is not set
-# CONFIG_NET_SCH_DSMARK is not set
-# CONFIG_NET_SCH_NETEM is not set
-# CONFIG_NET_SCH_INGRESS is not set
-# CONFIG_NET_QOS is not set
-CONFIG_NET_CLS=y
-# CONFIG_NET_CLS_BASIC is not set
-# CONFIG_NET_CLS_TCINDEX is not set
-# CONFIG_NET_CLS_ROUTE4 is not set
-CONFIG_NET_CLS_ROUTE=y
-CONFIG_NET_CLS_FW=m
-# CONFIG_NET_CLS_U32 is not set
-# CONFIG_NET_CLS_IND is not set
-# CONFIG_NET_EMATCH is not set
-
-#
-# Network testing
-#
-# CONFIG_NET_PKTGEN is not set
-# CONFIG_NETPOLL is not set
-# CONFIG_NET_POLL_CONTROLLER is not set
-# CONFIG_HAMRADIO is not set
-# CONFIG_IRDA is not set
-# CONFIG_BT is not set
-# CONFIG_TUX is not set
-CONFIG_NETDEVICES=y
-# CONFIG_DUMMY is not set
-# CONFIG_BONDING is not set
-# CONFIG_EQUALIZER is not set
-# CONFIG_TUN is not set
-
-#
-# ARCnet devices
-#
-# CONFIG_ARCNET is not set
-
-#
-# Ethernet (10 or 100Mbit)
-#
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=m
-CONFIG_HAPPYMEAL=m
-CONFIG_SUNGEM=m
-CONFIG_NET_VENDOR_3COM=y
-CONFIG_EL1=m
-CONFIG_EL2=m
-CONFIG_ELPLUS=m
-CONFIG_EL16=m
-CONFIG_EL3=m
-CONFIG_3C515=m
-CONFIG_VORTEX=m
-CONFIG_TYPHOON=m
-CONFIG_LANCE=m
-CONFIG_NET_VENDOR_SMC=y
-CONFIG_WD80x3=m
-CONFIG_ULTRA=m
-CONFIG_SMC9194=m
-CONFIG_NET_VENDOR_RACAL=y
-CONFIG_NI52=m
-CONFIG_NI65=m
-
-#
-# Tulip family network device support
-#
-CONFIG_NET_TULIP=y
-CONFIG_DE2104X=m
-CONFIG_TULIP=m
-# CONFIG_TULIP_MWI is not set
-CONFIG_TULIP_MMIO=y
-# CONFIG_TULIP_NAPI is not set
-CONFIG_DE4X5=m
-CONFIG_WINBOND_840=m
-CONFIG_DM9102=m
-# CONFIG_AT1700 is not set
-CONFIG_DEPCA=m
-CONFIG_HP100=m
-# CONFIG_NET_ISA is not set
-CONFIG_NET_PCI=y
-CONFIG_PCNET32=m
-CONFIG_AMD8111_ETH=m
-CONFIG_AMD8111E_NAPI=y
-CONFIG_ADAPTEC_STARFIRE=m
-CONFIG_ADAPTEC_STARFIRE_NAPI=y
-CONFIG_AC3200=m
-CONFIG_APRICOT=m
-CONFIG_B44=m
-CONFIG_FORCEDETH=m
-CONFIG_CS89x0=m
-CONFIG_DGRS=m
-CONFIG_EEPRO100=m
-CONFIG_E100=m
-CONFIG_FEALNX=m
-CONFIG_NATSEMI=m
-CONFIG_NE2K_PCI=m
-CONFIG_8139CP=m
-CONFIG_8139TOO=m
-CONFIG_8139TOO_PIO=y
-# CONFIG_8139TOO_TUNE_TWISTER is not set
-CONFIG_8139TOO_8129=y
-# CONFIG_8139_OLD_RX_RESET is not set
-CONFIG_SIS900=m
-CONFIG_EPIC100=m
-CONFIG_SUNDANCE=m
-# CONFIG_SUNDANCE_MMIO is not set
-CONFIG_TLAN=m
-CONFIG_VIA_RHINE=m
-CONFIG_VIA_RHINE_MMIO=y
-CONFIG_NET_POCKET=y
-CONFIG_ATP=m
-CONFIG_DE600=m
-CONFIG_DE620=m
-
-#
-# Ethernet (1000 Mbit)
-#
-CONFIG_ACENIC=m
-# CONFIG_ACENIC_OMIT_TIGON_I is not set
-CONFIG_DL2K=m
-CONFIG_E1000=m
-CONFIG_E1000_NAPI=y
-CONFIG_NS83820=m
-CONFIG_HAMACHI=m
-CONFIG_YELLOWFIN=m
-CONFIG_R8169=m
-CONFIG_R8169_NAPI=y
-CONFIG_SK98LIN=m
-CONFIG_VIA_VELOCITY=m
-CONFIG_TIGON3=m
-CONFIG_BNX2=m
-
-#
-# Ethernet (10000 Mbit)
-#
-CONFIG_IXGB=m
-CONFIG_IXGB_NAPI=y
-CONFIG_S2IO=m
-CONFIG_S2IO_NAPI=y
-# CONFIG_2BUFF_MODE is not set
-
-#
-# Token Ring devices
-#
-# CONFIG_TR is not set
-
-#
-# Wireless LAN (non-hamradio)
-#
-# CONFIG_NET_RADIO is not set
-
-#
-# Wan interfaces
-#
-# CONFIG_WAN is not set
-# CONFIG_FDDI is not set
-# CONFIG_HIPPI is not set
-# CONFIG_PPP is not set
-# CONFIG_SLIP is not set
-# CONFIG_NET_FC is not set
-# CONFIG_SHAPER is not set
-# CONFIG_NETCONSOLE is not set
-
-#
-# ISDN subsystem
-#
-# CONFIG_ISDN is not set
-
-#
-# Telephony Support
-#
-# CONFIG_PHONE is not set
-
-#
-# Input device support
-#
-CONFIG_INPUT=y
-
-#
-# Userland interfaces
-#
-CONFIG_INPUT_MOUSEDEV=y
-# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
-CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
-CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
-# CONFIG_INPUT_JOYDEV is not set
-# CONFIG_INPUT_TSDEV is not set
-# CONFIG_INPUT_EVDEV is not set
-# CONFIG_INPUT_EVBUG is not set
-
-#
-# Input Device Drivers
-#
-CONFIG_INPUT_KEYBOARD=y
-CONFIG_KEYBOARD_ATKBD=y
-# CONFIG_KEYBOARD_SUNKBD is not set
-# CONFIG_KEYBOARD_LKKBD is not set
-# CONFIG_KEYBOARD_XTKBD is not set
-# CONFIG_KEYBOARD_NEWTON is not set
-CONFIG_INPUT_MOUSE=y
-CONFIG_MOUSE_PS2=y
-# CONFIG_MOUSE_SERIAL is not set
-# CONFIG_MOUSE_INPORT is not set
-# CONFIG_MOUSE_LOGIBM is not set
-# CONFIG_MOUSE_PC110PAD is not set
-# CONFIG_MOUSE_VSXXXAA is not set
-# CONFIG_INPUT_JOYSTICK is not set
-# CONFIG_INPUT_TOUCHSCREEN is not set
-# CONFIG_INPUT_MISC is not set
-
-#
-# Hardware I/O ports
-#
-CONFIG_SERIO=y
-CONFIG_SERIO_I8042=y
-# CONFIG_SERIO_SERPORT is not set
-# CONFIG_SERIO_CT82C710 is not set
-# CONFIG_SERIO_PCIPS2 is not set
-CONFIG_SERIO_LIBPS2=y
-# CONFIG_SERIO_RAW is not set
-# CONFIG_GAMEPORT is not set
-
-#
-# Character devices
-#
-CONFIG_VT=y
-CONFIG_VT_CONSOLE=y
-CONFIG_HW_CONSOLE=y
-# CONFIG_SERIAL_NONSTANDARD is not set
-
-#
-# Serial drivers
-#
-CONFIG_SERIAL_8250=y
-CONFIG_SERIAL_8250_CONSOLE=y
-# CONFIG_SERIAL_8250_ACPI is not set
-CONFIG_SERIAL_8250_NR_UARTS=32
-CONFIG_SERIAL_8250_EXTENDED=y
-CONFIG_SERIAL_8250_MANY_PORTS=y
-CONFIG_SERIAL_8250_SHARE_IRQ=y
-CONFIG_SERIAL_8250_DETECT_IRQ=y
-CONFIG_SERIAL_8250_MULTIPORT=y
-CONFIG_SERIAL_8250_RSA=y
-
-#
-# Non-8250 serial port support
-#
-CONFIG_SERIAL_CORE=y
-CONFIG_SERIAL_CORE_CONSOLE=y
-# CONFIG_SERIAL_JSM is not set
-CONFIG_UNIX98_PTYS=y
-# CONFIG_LEGACY_PTYS is not set
-# CONFIG_CRASH is not set
-
-#
-# IPMI
-#
-# CONFIG_IPMI_HANDLER is not set
-
-#
-# Watchdog Cards
-#
-# CONFIG_WATCHDOG is not set
-# CONFIG_HW_RANDOM is not set
-# CONFIG_NVRAM is not set
-# CONFIG_RTC is not set
-# CONFIG_GEN_RTC is not set
-# CONFIG_DTLK is not set
-# CONFIG_R3964 is not set
-# CONFIG_APPLICOM is not set
-# CONFIG_SONYPI is not set
-
-#
-# Ftape, the floppy tape device driver
-#
-# CONFIG_AGP is not set
-# CONFIG_DRM is not set
-# CONFIG_MWAVE is not set
-# CONFIG_RAW_DRIVER is not set
-# CONFIG_HPET is not set
-CONFIG_HANGCHECK_TIMER=y
-
-#
-# TPM devices
-#
-# CONFIG_TCG_TPM is not set
-
-#
-# I2C support
-#
-# CONFIG_I2C is not set
-
-#
-# Dallas's 1-wire bus
-#
-# CONFIG_W1 is not set
-
-#
-# Misc devices
-#
-# CONFIG_IBM_ASM is not set
-
-#
-# Multimedia devices
-#
-# CONFIG_VIDEO_DEV is not set
-
-#
-# Digital Video Broadcasting Devices
-#
-# CONFIG_DVB is not set
-
-#
-# Graphics support
-#
-# CONFIG_FB is not set
-CONFIG_VIDEO_SELECT=y
-
-#
-# Console display driver support
-#
-CONFIG_VGA_CONSOLE=y
-# CONFIG_MDA_CONSOLE is not set
-CONFIG_DUMMY_CONSOLE=y
-
-#
-# Sound
-#
-# CONFIG_SOUND is not set
-
-#
-# USB support
-#
-CONFIG_USB_ARCH_HAS_HCD=y
-CONFIG_USB_ARCH_HAS_OHCI=y
-CONFIG_USB=y
-# CONFIG_USB_DEBUG is not set
-
-#
-# Miscellaneous USB options
-#
-CONFIG_USB_DEVICEFS=y
-# CONFIG_USB_BANDWIDTH is not set
-# CONFIG_USB_DYNAMIC_MINORS is not set
-# CONFIG_USB_SUSPEND is not set
-# CONFIG_USB_OTG is not set
-
-#
-# USB Host Controller Drivers
-#
-CONFIG_USB_EHCI_HCD=m
-CONFIG_USB_EHCI_SPLIT_ISO=y
-CONFIG_USB_EHCI_ROOT_HUB_TT=y
-CONFIG_USB_OHCI_HCD=m
-# CONFIG_USB_OHCI_BIG_ENDIAN is not set
-CONFIG_USB_OHCI_LITTLE_ENDIAN=y
-CONFIG_USB_UHCI_HCD=m
-CONFIG_USB_SL811_HCD=m
-
-#
-# USB Device Class drivers
-#
-# CONFIG_USB_BLUETOOTH_TTY is not set
-# CONFIG_USB_ACM is not set
-# CONFIG_USB_PRINTER is not set
-
-#
-# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
-#
-CONFIG_USB_STORAGE=m
-# CONFIG_USB_STORAGE_DEBUG is not set
-CONFIG_USB_STORAGE_DATAFAB=y
-CONFIG_USB_STORAGE_FREECOM=y
-CONFIG_USB_STORAGE_ISD200=y
-CONFIG_USB_STORAGE_DPCM=y
-CONFIG_USB_STORAGE_USBAT=y
-CONFIG_USB_STORAGE_SDDR09=y
-CONFIG_USB_STORAGE_SDDR55=y
-CONFIG_USB_STORAGE_JUMPSHOT=y
-
-#
-# USB Input Devices
-#
-CONFIG_USB_HID=y
-CONFIG_USB_HIDINPUT=y
-# CONFIG_HID_FF is not set
-# CONFIG_USB_HIDDEV is not set
-# CONFIG_USB_AIPTEK is not set
-# CONFIG_USB_WACOM is not set
-# CONFIG_USB_KBTAB is not set
-# CONFIG_USB_POWERMATE is not set
-# CONFIG_USB_MTOUCH is not set
-# CONFIG_USB_EGALAX is not set
-# CONFIG_USB_XPAD is not set
-# CONFIG_USB_ATI_REMOTE is not set
-# CONFIG_USB_APPLETOUCH is not set
-
-#
-# USB Imaging devices
-#
-# CONFIG_USB_MDC800 is not set
-# CONFIG_USB_MICROTEK is not set
-
-#
-# USB Multimedia devices
-#
-# CONFIG_USB_DABUSB is not set
-
-#
-# Video4Linux support is needed for USB Multimedia device support
-#
-
-#
-# USB Network Adapters
-#
-CONFIG_USB_CATC=m
-CONFIG_USB_KAWETH=m
-CONFIG_USB_PEGASUS=m
-CONFIG_USB_RTL8150=m
-CONFIG_USB_USBNET=m
-
-#
-# USB Host-to-Host Cables
-#
-CONFIG_USB_ALI_M5632=y
-CONFIG_USB_AN2720=y
-CONFIG_USB_BELKIN=y
-CONFIG_USB_GENESYS=y
-CONFIG_USB_NET1080=y
-CONFIG_USB_PL2301=y
-CONFIG_USB_KC2190=y
-
-#
-# Intelligent USB Devices/Gadgets
-#
-CONFIG_USB_ARMLINUX=y
-CONFIG_USB_EPSON2888=y
-CONFIG_USB_ZAURUS=y
-CONFIG_USB_CDCETHER=y
-
-#
-# USB Network Adapters
-#
-CONFIG_USB_AX8817X=y
-# CONFIG_USB_MON is not set
-
-#
-# USB port drivers
-#
-
-#
-# USB Serial Converter support
-#
-CONFIG_USB_SERIAL=m
-CONFIG_USB_SERIAL_GENERIC=y
-# CONFIG_USB_SERIAL_AIRPRIME is not set
-CONFIG_USB_SERIAL_BELKIN=m
-CONFIG_USB_SERIAL_DIGI_ACCELEPORT=m
-# CONFIG_USB_SERIAL_CP2101 is not set
-CONFIG_USB_SERIAL_CYPRESS_M8=m
-CONFIG_USB_SERIAL_EMPEG=m
-CONFIG_USB_SERIAL_FTDI_SIO=m
-CONFIG_USB_SERIAL_VISOR=m
-CONFIG_USB_SERIAL_IPAQ=m
-CONFIG_USB_SERIAL_IR=m
-CONFIG_USB_SERIAL_EDGEPORT=m
-CONFIG_USB_SERIAL_EDGEPORT_TI=m
-# CONFIG_USB_SERIAL_GARMIN is not set
-CONFIG_USB_SERIAL_IPW=m
-CONFIG_USB_SERIAL_KEYSPAN_PDA=m
-CONFIG_USB_SERIAL_KEYSPAN=m
-CONFIG_USB_SERIAL_KEYSPAN_MPR=y
-CONFIG_USB_SERIAL_KEYSPAN_USA28=y
-CONFIG_USB_SERIAL_KEYSPAN_USA28X=y
-CONFIG_USB_SERIAL_KEYSPAN_USA28XA=y
-CONFIG_USB_SERIAL_KEYSPAN_USA28XB=y
-CONFIG_USB_SERIAL_KEYSPAN_USA19=y
-CONFIG_USB_SERIAL_KEYSPAN_USA18X=y
-CONFIG_USB_SERIAL_KEYSPAN_USA19W=y
-CONFIG_USB_SERIAL_KEYSPAN_USA19QW=y
-CONFIG_USB_SERIAL_KEYSPAN_USA19QI=y
-CONFIG_USB_SERIAL_KEYSPAN_USA49W=y
-CONFIG_USB_SERIAL_KEYSPAN_USA49WLC=y
-CONFIG_USB_SERIAL_KLSI=m
-CONFIG_USB_SERIAL_KOBIL_SCT=m
-CONFIG_USB_SERIAL_MCT_U232=m
-CONFIG_USB_SERIAL_PL2303=m
-# CONFIG_USB_SERIAL_HP4X is not set
-CONFIG_USB_SERIAL_SAFE=m
-CONFIG_USB_SERIAL_SAFE_PADDED=y
-# CONFIG_USB_SERIAL_TI is not set
-CONFIG_USB_SERIAL_CYBERJACK=m
-CONFIG_USB_SERIAL_XIRCOM=m
-CONFIG_USB_SERIAL_OMNINET=m
-CONFIG_USB_EZUSB=y
-
-#
-# USB Miscellaneous drivers
-#
-# CONFIG_USB_EMI62 is not set
-# CONFIG_USB_EMI26 is not set
-# CONFIG_USB_AUERSWALD is not set
-# CONFIG_USB_RIO500 is not set
-# CONFIG_USB_LEGOTOWER is not set
-# CONFIG_USB_LCD is not set
-# CONFIG_USB_LED is not set
-# CONFIG_USB_CYTHERM is not set
-# CONFIG_USB_PHIDGETKIT is not set
-# CONFIG_USB_PHIDGETSERVO is not set
-# CONFIG_USB_IDMOUSE is not set
-# CONFIG_USB_SISUSBVGA is not set
-# CONFIG_USB_TEST is not set
-
-#
-# USB ATM/DSL drivers
-#
-
-#
-# USB Gadget Support
-#
-# CONFIG_USB_GADGET is not set
-
-#
-# MMC/SD Card support
-#
-CONFIG_MMC=m
-# CONFIG_MMC_DEBUG is not set
-CONFIG_MMC_BLOCK=m
-CONFIG_MMC_WBSD=m
-
-#
-# InfiniBand support
-#
-# CONFIG_INFINIBAND is not set
-
-#
-# File systems
-#
-CONFIG_EXT2_FS=y
-CONFIG_EXT2_FS_XATTR=y
-CONFIG_EXT2_FS_POSIX_ACL=y
-CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_XATTR=y
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
-CONFIG_JBD=y
-# CONFIG_JBD_DEBUG is not set
-CONFIG_FS_MBCACHE=y
-# CONFIG_REISERFS_FS is not set
-# CONFIG_JFS_FS is not set
-CONFIG_FS_POSIX_ACL=y
-
-#
-# XFS support
-#
-# CONFIG_XFS_FS is not set
-# CONFIG_MINIX_FS is not set
-# CONFIG_ROMFS_FS is not set
-CONFIG_QUOTA=y
-# CONFIG_QFMT_V1 is not set
-CONFIG_QFMT_V2=y
-CONFIG_QUOTACTL=y
-CONFIG_DNOTIFY=y
-CONFIG_AUTOFS_FS=m
-CONFIG_AUTOFS4_FS=m
-
-#
-# CD-ROM/DVD Filesystems
-#
-CONFIG_ISO9660_FS=y
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_ZISOFS_FS=y
-CONFIG_UDF_FS=m
-CONFIG_UDF_NLS=y
-
-#
-# DOS/FAT/NT Filesystems
-#
-CONFIG_FAT_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=m
-CONFIG_FAT_DEFAULT_CODEPAGE=437
-CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
-# CONFIG_NTFS_FS is not set
-
-#
-# Pseudo filesystems
-#
-CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_SYSFS=y
-# CONFIG_DEVFS_FS is not set
-CONFIG_DEVPTS_FS_XATTR=y
-CONFIG_DEVPTS_FS_SECURITY=y
-CONFIG_TMPFS=y
-CONFIG_TMPFS_XATTR=y
-CONFIG_TMPFS_SECURITY=y
-CONFIG_HUGETLBFS=y
-CONFIG_HUGETLB_PAGE=y
-CONFIG_RAMFS=y
-
-#
-# Miscellaneous filesystems
-#
-# CONFIG_ADFS_FS is not set
-# CONFIG_AFFS_FS is not set
-# CONFIG_HFS_FS is not set
-# CONFIG_HFSPLUS_FS is not set
-# CONFIG_BEFS_FS is not set
-# CONFIG_BFS_FS is not set
-# CONFIG_EFS_FS is not set
-# CONFIG_CRAMFS is not set
-# CONFIG_VXFS_FS is not set
-# CONFIG_HPFS_FS is not set
-# CONFIG_QNX4FS_FS is not set
-# CONFIG_SYSV_FS is not set
-# CONFIG_UFS_FS is not set
-
-#
-# Network File Systems
-#
-CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
-CONFIG_NFS_DIRECTIO=y
-CONFIG_NFSD=m
-CONFIG_NFSD_V3=y
-CONFIG_NFSD_V4=y
-CONFIG_NFSD_TCP=y
-CONFIG_LOCKD=m
-CONFIG_LOCKD_V4=y
-CONFIG_EXPORTFS=m
-CONFIG_SUNRPC=m
-CONFIG_SUNRPC_GSS=m
-CONFIG_RPCSEC_GSS_KRB5=m
-# CONFIG_RPCSEC_GSS_SPKM3 is not set
-# CONFIG_SMB_FS is not set
-# CONFIG_CIFS is not set
-# CONFIG_NCP_FS is not set
-# CONFIG_CODA_FS is not set
-# CONFIG_AFS_FS is not set
-
-#
-# Partition Types
-#
-# CONFIG_PARTITION_ADVANCED is not set
-CONFIG_MSDOS_PARTITION=y
-
-#
-# Native Language Support
-#
-CONFIG_NLS=y
-CONFIG_NLS_DEFAULT="utf8"
-CONFIG_NLS_CODEPAGE_437=y
-# CONFIG_NLS_CODEPAGE_737 is not set
-# CONFIG_NLS_CODEPAGE_775 is not set
-# CONFIG_NLS_CODEPAGE_850 is not set
-# CONFIG_NLS_CODEPAGE_852 is not set
-# CONFIG_NLS_CODEPAGE_855 is not set
-# CONFIG_NLS_CODEPAGE_857 is not set
-# CONFIG_NLS_CODEPAGE_860 is not set
-# CONFIG_NLS_CODEPAGE_861 is not set
-# CONFIG_NLS_CODEPAGE_862 is not set
-# CONFIG_NLS_CODEPAGE_863 is not set
-# CONFIG_NLS_CODEPAGE_864 is not set
-# CONFIG_NLS_CODEPAGE_865 is not set
-# CONFIG_NLS_CODEPAGE_866 is not set
-# CONFIG_NLS_CODEPAGE_869 is not set
-# CONFIG_NLS_CODEPAGE_936 is not set
-# CONFIG_NLS_CODEPAGE_950 is not set
-# CONFIG_NLS_CODEPAGE_932 is not set
-# CONFIG_NLS_CODEPAGE_949 is not set
-# CONFIG_NLS_CODEPAGE_874 is not set
-# CONFIG_NLS_ISO8859_8 is not set
-# CONFIG_NLS_CODEPAGE_1250 is not set
-# CONFIG_NLS_CODEPAGE_1251 is not set
-CONFIG_NLS_ASCII=y
-CONFIG_NLS_ISO8859_1=m
-# CONFIG_NLS_ISO8859_2 is not set
-# CONFIG_NLS_ISO8859_3 is not set
-# CONFIG_NLS_ISO8859_4 is not set
-# CONFIG_NLS_ISO8859_5 is not set
-# CONFIG_NLS_ISO8859_6 is not set
-# CONFIG_NLS_ISO8859_7 is not set
-# CONFIG_NLS_ISO8859_9 is not set
-# CONFIG_NLS_ISO8859_13 is not set
-# CONFIG_NLS_ISO8859_14 is not set
-# CONFIG_NLS_ISO8859_15 is not set
-# CONFIG_NLS_KOI8_R is not set
-# CONFIG_NLS_KOI8_U is not set
-CONFIG_NLS_UTF8=m
-
-#
-# Profiling support
-#
-# CONFIG_PROFILING is not set
-
-#
-# Kernel hacking
-#
-# CONFIG_PRINTK_TIME is not set
-CONFIG_DEBUG_KERNEL=y
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_LOG_BUF_SHIFT=17
-# CONFIG_SCHEDSTATS is not set
-# CONFIG_DEBUG_SLAB is not set
-CONFIG_DEBUG_SPINLOCK=y
-CONFIG_DEBUG_SPINLOCK_SLEEP=y
-# CONFIG_DEBUG_KOBJECT is not set
-CONFIG_DEBUG_HIGHMEM=y
-CONFIG_DEBUG_BUGVERBOSE=y
-CONFIG_DEBUG_INFO=y
-# CONFIG_DEBUG_FS is not set
-# CONFIG_FRAME_POINTER is not set
-CONFIG_EARLY_PRINTK=y
-CONFIG_KPROBES=y
-# CONFIG_DEBUG_PAGEALLOC is not set
-CONFIG_DEBUG_STACKOVERFLOW=y
-CONFIG_DEBUG_STACK_USAGE=y
-# CONFIG_IRQSTACKS is not set
-CONFIG_STACK_SIZE_SHIFT=13
-CONFIG_STACK_WARN=4096
-# CONFIG_X86_STACK_CHECK is not set
-# CONFIG_4KSTACKS is not set
-CONFIG_X86_FIND_SMP_CONFIG=y
-CONFIG_X86_MPPARSE=y
-CONFIG_VSERVER=y
-CONFIG_VSERVER_LEGACYNET=y
-
-#
-# Linux VServer
-#
-CONFIG_VSERVER_FILESHARING=y
-CONFIG_VSERVER_LEGACY=y
-# CONFIG_VSERVER_LEGACY_VERSION is not set
-# CONFIG_VSERVER_NGNET is not set
-# CONFIG_VSERVER_PROC_SECURE is not set
-CONFIG_VSERVER_HARDCPU=y
-CONFIG_VSERVER_HARDCPU_IDLE=y
-CONFIG_VSERVER_ACB_SCHED=y
-# CONFIG_INOXID_NONE is not set
-# CONFIG_INOXID_UID16 is not set
-# CONFIG_INOXID_GID16 is not set
-CONFIG_INOXID_UGID24=y
-# CONFIG_INOXID_INTERN is not set
-# CONFIG_INOXID_RUNTIME is not set
-# CONFIG_XID_TAG_NFSD is not set
-# CONFIG_VSERVER_DEBUG is not set
-
-#
-# Security options
-#
-# CONFIG_KEYS is not set
-# CONFIG_SECURITY is not set
-
-#
-# Cryptographic options
-#
-CONFIG_CRYPTO=y
-CONFIG_CRYPTO_HMAC=y
-# CONFIG_CRYPTO_NULL is not set
-# CONFIG_CRYPTO_MD4 is not set
-CONFIG_CRYPTO_MD5=m
-# CONFIG_CRYPTO_SHA1 is not set
-# CONFIG_CRYPTO_SHA256 is not set
-# CONFIG_CRYPTO_SHA512 is not set
-# CONFIG_CRYPTO_WP512 is not set
-# CONFIG_CRYPTO_TGR192 is not set
-CONFIG_CRYPTO_DES=m
-# CONFIG_CRYPTO_BLOWFISH is not set
-# CONFIG_CRYPTO_TWOFISH is not set
-# CONFIG_CRYPTO_SERPENT is not set
-# CONFIG_CRYPTO_AES_586 is not set
-# CONFIG_CRYPTO_CAST5 is not set
-# CONFIG_CRYPTO_CAST6 is not set
-# CONFIG_CRYPTO_TEA is not set
-# CONFIG_CRYPTO_ARC4 is not set
-# CONFIG_CRYPTO_KHAZAD is not set
-# CONFIG_CRYPTO_ANUBIS is not set
-# CONFIG_CRYPTO_DEFLATE is not set
-# CONFIG_CRYPTO_MICHAEL_MIC is not set
-# CONFIG_CRYPTO_CRC32C is not set
-# CONFIG_CRYPTO_TEST is not set
-# CONFIG_CRYPTO_SIGNATURE is not set
-# CONFIG_CRYPTO_MPILIB is not set
-
-#
-# Hardware crypto devices
-#
-# CONFIG_CRYPTO_DEV_PADLOCK is not set
-
-#
-# Library routines
-#
-CONFIG_CRC_CCITT=m
-CONFIG_CRC32=y
-# CONFIG_LIBCRC32C is not set
-CONFIG_ZLIB_INFLATE=y
-CONFIG_GENERIC_HARDIRQS=y
-CONFIG_GENERIC_IRQ_PROBE=y
-CONFIG_X86_SMP=y
-CONFIG_X86_HT=y
-CONFIG_X86_BIOS_REBOOT=y
-CONFIG_X86_TRAMPOLINE=y
-CONFIG_PC=y
+++ /dev/null
-#
-# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.12-1.1_1390_FC4.1.planetlab
-# Sun Aug 21 15:42:31 2005
-#
-CONFIG_GENERIC_HARDIRQS=y
-CONFIG_UML=y
-CONFIG_MMU=y
-CONFIG_UID16=y
-CONFIG_RWSEM_GENERIC_SPINLOCK=y
-CONFIG_GENERIC_CALIBRATE_DELAY=y
-
-#
-# UML-specific options
-#
-CONFIG_MODE_TT=y
-CONFIG_MODE_SKAS=y
-CONFIG_UML_X86=y
-# CONFIG_64BIT is not set
-CONFIG_TOP_ADDR=0xc0000000
-# CONFIG_3_LEVEL_PGTABLES is not set
-CONFIG_ARCH_HAS_SC_SIGNALS=y
-CONFIG_ARCH_REUSE_HOST_VSYSCALL_AREA=y
-CONFIG_LD_SCRIPT_STATIC=y
-CONFIG_NET=y
-CONFIG_BINFMT_ELF=y
-CONFIG_BINFMT_MISC=m
-CONFIG_HOSTFS=y
-CONFIG_MCONSOLE=y
-# CONFIG_MAGIC_SYSRQ is not set
-# CONFIG_HOST_2G_2G is not set
-# CONFIG_SMP is not set
-CONFIG_NEST_LEVEL=0
-CONFIG_KERNEL_HALF_GIGS=1
-# CONFIG_HIGHMEM is not set
-CONFIG_KERNEL_STACK_ORDER=2
-CONFIG_UML_REAL_TIME_CLOCK=y
-
-#
-# Code maturity level options
-#
-CONFIG_EXPERIMENTAL=y
-CONFIG_CLEAN_COMPILE=y
-CONFIG_BROKEN_ON_SMP=y
-CONFIG_INIT_ENV_ARG_LIMIT=32
-
-#
-# General setup
-#
-CONFIG_LOCALVERSION=""
-CONFIG_SWAP=y
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_BSD_PROCESS_ACCT=y
-# CONFIG_BSD_PROCESS_ACCT_V3 is not set
-CONFIG_SYSCTL=y
-# CONFIG_AUDIT is not set
-# CONFIG_HOTPLUG is not set
-CONFIG_KOBJECT_UEVENT=y
-CONFIG_IKCONFIG=y
-CONFIG_IKCONFIG_PROC=y
-CONFIG_OOM_PANIC=y
-# CONFIG_EMBEDDED is not set
-CONFIG_KALLSYMS=y
-# CONFIG_KALLSYMS_ALL is not set
-CONFIG_KALLSYMS_EXTRA_PASS=y
-CONFIG_PRINTK=y
-CONFIG_BUG=y
-CONFIG_BASE_FULL=y
-CONFIG_FUTEX=y
-CONFIG_EPOLL=y
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_SHMEM=y
-CONFIG_CC_ALIGN_FUNCTIONS=0
-CONFIG_CC_ALIGN_LABELS=0
-CONFIG_CC_ALIGN_LOOPS=0
-CONFIG_CC_ALIGN_JUMPS=0
-# CONFIG_TINY_SHMEM is not set
-CONFIG_BASE_SMALL=0
-
-#
-# Loadable module support
-#
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-# CONFIG_MODULE_FORCE_UNLOAD is not set
-CONFIG_OBSOLETE_MODPARM=y
-# CONFIG_MODVERSIONS is not set
-# CONFIG_MODULE_SRCVERSION_ALL is not set
-# CONFIG_MODULE_SIG is not set
-CONFIG_KMOD=y
-
-#
-# Generic Driver Options
-#
-CONFIG_STANDALONE=y
-CONFIG_PREVENT_FIRMWARE_BUILD=y
-# CONFIG_FW_LOADER is not set
-# CONFIG_DEBUG_DRIVER is not set
-
-#
-# Character Devices
-#
-CONFIG_STDERR_CONSOLE=y
-CONFIG_STDIO_CONSOLE=y
-CONFIG_SSL=y
-CONFIG_NULL_CHAN=y
-CONFIG_PORT_CHAN=y
-CONFIG_PTY_CHAN=y
-CONFIG_TTY_CHAN=y
-CONFIG_XTERM_CHAN=y
-# CONFIG_NOCONFIG_CHAN is not set
-CONFIG_CON_ZERO_CHAN="fd:0,fd:1"
-CONFIG_CON_CHAN="xterm"
-CONFIG_SSL_CHAN="pty"
-CONFIG_UNIX98_PTYS=y
-CONFIG_LEGACY_PTYS=y
-CONFIG_LEGACY_PTY_COUNT=256
-# CONFIG_WATCHDOG is not set
-# CONFIG_UML_SOUND is not set
-# CONFIG_SOUND is not set
-# CONFIG_HOSTAUDIO is not set
-# CONFIG_UML_RANDOM is not set
-# CONFIG_MMAPPER is not set
-
-#
-# Block devices
-#
-CONFIG_BLK_DEV_UBD=y
-CONFIG_BLK_DEV_UBD_SYNC=y
-CONFIG_BLK_DEV_COW_COMMON=y
-CONFIG_BLK_DEV_LOOP=m
-# CONFIG_BLK_DEV_CRYPTOLOOP is not set
-# CONFIG_BLK_DEV_VROOT is not set
-CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_COUNT=16
-CONFIG_BLK_DEV_RAM_SIZE=4096
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE=""
-# CONFIG_LBD is not set
-# CONFIG_DISKDUMP is not set
-
-#
-# IO Schedulers
-#
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
-# CONFIG_ATA_OVER_ETH is not set
-CONFIG_NETDEVICES=y
-
-#
-# UML Network Devices
-#
-CONFIG_UML_NET=y
-CONFIG_UML_NET_ETHERTAP=y
-CONFIG_UML_NET_TUNTAP=y
-CONFIG_UML_NET_SLIP=y
-CONFIG_UML_NET_DAEMON=y
-CONFIG_UML_NET_MCAST=y
-CONFIG_UML_NET_SLIRP=y
-
-#
-# Networking support
-#
-
-#
-# Networking options
-#
-CONFIG_PACKET=y
-CONFIG_PACKET_MMAP=y
-CONFIG_UNIX=y
-# CONFIG_NET_KEY is not set
-CONFIG_INET=y
-# CONFIG_IP_MULTICAST is not set
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-# CONFIG_IP_ROUTE_FWMARK is not set
-# CONFIG_IP_ROUTE_MULTIPATH is not set
-CONFIG_IP_ROUTE_VERBOSE=y
-# CONFIG_IP_PNP is not set
-# CONFIG_NET_IPIP is not set
-# CONFIG_NET_IPGRE is not set
-# CONFIG_ARPD is not set
-# CONFIG_SYN_COOKIES is not set
-# CONFIG_INET_AH is not set
-# CONFIG_INET_ESP is not set
-# CONFIG_INET_IPCOMP is not set
-# CONFIG_INET_TUNNEL is not set
-CONFIG_IP_TCPDIAG=y
-# CONFIG_IP_TCPDIAG_IPV6 is not set
-
-#
-# IP: Virtual Server Configuration
-#
-# CONFIG_IP_VS is not set
-CONFIG_ICMP_IPOD=y
-# CONFIG_IPV6 is not set
-CONFIG_NETFILTER=y
-# CONFIG_NETFILTER_DEBUG is not set
-
-#
-# IP: Netfilter Configuration
-#
-CONFIG_IP_NF_CONNTRACK=m
-# CONFIG_IP_NF_CT_ACCT is not set
-# CONFIG_IP_NF_CONNTRACK_MARK is not set
-CONFIG_IP_NF_CT_PROTO_SCTP=m
-CONFIG_IP_NF_FTP=m
-CONFIG_IP_NF_IRC=m
-CONFIG_IP_NF_TFTP=m
-CONFIG_IP_NF_AMANDA=m
-CONFIG_IP_NF_QUEUE=m
-CONFIG_IP_NF_IPTABLES=m
-CONFIG_IP_NF_MATCH_LIMIT=m
-CONFIG_IP_NF_MATCH_IPRANGE=m
-CONFIG_IP_NF_MATCH_MAC=m
-CONFIG_IP_NF_MATCH_PKTTYPE=m
-CONFIG_IP_NF_MATCH_MARK=m
-CONFIG_IP_NF_MATCH_MULTIPORT=m
-CONFIG_IP_NF_MATCH_TOS=m
-CONFIG_IP_NF_MATCH_RECENT=m
-CONFIG_IP_NF_MATCH_ECN=m
-CONFIG_IP_NF_MATCH_DSCP=m
-CONFIG_IP_NF_MATCH_AH_ESP=m
-CONFIG_IP_NF_MATCH_LENGTH=m
-CONFIG_IP_NF_MATCH_TTL=m
-CONFIG_IP_NF_MATCH_TCPMSS=m
-CONFIG_IP_NF_MATCH_HELPER=m
-CONFIG_IP_NF_MATCH_STATE=m
-CONFIG_IP_NF_MATCH_CONNTRACK=m
-CONFIG_IP_NF_MATCH_OWNER=m
-CONFIG_IP_NF_MATCH_ADDRTYPE=m
-CONFIG_IP_NF_MATCH_REALM=m
-CONFIG_IP_NF_MATCH_SCTP=m
-CONFIG_IP_NF_MATCH_COMMENT=m
-CONFIG_IP_NF_MATCH_HASHLIMIT=m
-CONFIG_IP_NF_FILTER=m
-CONFIG_IP_NF_TARGET_REJECT=m
-CONFIG_IP_NF_TARGET_LOG=m
-CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
-CONFIG_IP_NF_NAT=m
-CONFIG_IP_NF_NAT_NEEDED=y
-CONFIG_IP_NF_TARGET_MASQUERADE=m
-CONFIG_IP_NF_TARGET_REDIRECT=m
-CONFIG_IP_NF_TARGET_NETMAP=m
-CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_SNMP_BASIC=m
-CONFIG_IP_NF_NAT_IRC=m
-CONFIG_IP_NF_NAT_FTP=m
-CONFIG_IP_NF_NAT_TFTP=m
-CONFIG_IP_NF_NAT_AMANDA=m
-CONFIG_IP_NF_MANGLE=m
-CONFIG_IP_NF_TARGET_TOS=m
-CONFIG_IP_NF_TARGET_ECN=m
-CONFIG_IP_NF_TARGET_DSCP=m
-CONFIG_IP_NF_TARGET_MARK=m
-CONFIG_IP_NF_TARGET_CLASSIFY=m
-CONFIG_IP_NF_RAW=m
-CONFIG_IP_NF_TARGET_NOTRACK=m
-CONFIG_IP_NF_ARPTABLES=m
-CONFIG_IP_NF_ARPFILTER=m
-CONFIG_IP_NF_ARP_MANGLE=m
-CONFIG_IP_NF_CT_PROTO_GRE=m
-CONFIG_IP_NF_PPTP=m
-CONFIG_IP_NF_NAT_PPTP=m
-CONFIG_IP_NF_NAT_PROTO_GRE=m
-CONFIG_VNET=m
-
-#
-# SCTP Configuration (EXPERIMENTAL)
-#
-# CONFIG_IP_SCTP is not set
-# CONFIG_ATM is not set
-# CONFIG_BRIDGE is not set
-# CONFIG_VLAN_8021Q is not set
-# CONFIG_DECNET is not set
-# CONFIG_LLC2 is not set
-# CONFIG_IPX is not set
-# CONFIG_ATALK is not set
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-# CONFIG_NET_DIVERT is not set
-# CONFIG_ECONET is not set
-# CONFIG_WAN_ROUTER is not set
-
-#
-# QoS and/or fair queueing
-#
-CONFIG_NET_SCHED=y
-CONFIG_NET_SCH_CLK_JIFFIES=y
-# CONFIG_NET_SCH_CLK_GETTIMEOFDAY is not set
-# CONFIG_NET_SCH_CLK_CPU is not set
-# CONFIG_NET_SCH_CBQ is not set
-CONFIG_NET_SCH_HTB=m
-# CONFIG_NET_SCH_HFSC is not set
-# CONFIG_NET_SCH_PRIO is not set
-# CONFIG_NET_SCH_RED is not set
-# CONFIG_NET_SCH_SFQ is not set
-# CONFIG_NET_SCH_TEQL is not set
-# CONFIG_NET_SCH_TBF is not set
-# CONFIG_NET_SCH_GRED is not set
-# CONFIG_NET_SCH_DSMARK is not set
-# CONFIG_NET_SCH_NETEM is not set
-# CONFIG_NET_SCH_INGRESS is not set
-# CONFIG_NET_QOS is not set
-CONFIG_NET_CLS=y
-# CONFIG_NET_CLS_BASIC is not set
-# CONFIG_NET_CLS_TCINDEX is not set
-# CONFIG_NET_CLS_ROUTE4 is not set
-CONFIG_NET_CLS_ROUTE=y
-CONFIG_NET_CLS_FW=m
-# CONFIG_NET_CLS_U32 is not set
-# CONFIG_NET_CLS_IND is not set
-# CONFIG_NET_EMATCH is not set
-
-#
-# Network testing
-#
-# CONFIG_NET_PKTGEN is not set
-# CONFIG_NETPOLL is not set
-# CONFIG_NET_POLL_CONTROLLER is not set
-# CONFIG_HAMRADIO is not set
-# CONFIG_IRDA is not set
-# CONFIG_BT is not set
-# CONFIG_TUX is not set
-CONFIG_DUMMY=m
-# CONFIG_BONDING is not set
-# CONFIG_EQUALIZER is not set
-CONFIG_TUN=m
-
-#
-# Wan interfaces
-#
-# CONFIG_WAN is not set
-# CONFIG_PPP is not set
-# CONFIG_SLIP is not set
-# CONFIG_SHAPER is not set
-# CONFIG_NETCONSOLE is not set
-
-#
-# File systems
-#
-CONFIG_EXT2_FS=y
-# CONFIG_EXT2_FS_XATTR is not set
-CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
-CONFIG_JBD=y
-# CONFIG_JBD_DEBUG is not set
-CONFIG_REISERFS_FS=y
-# CONFIG_REISERFS_CHECK is not set
-# CONFIG_REISERFS_PROC_INFO is not set
-# CONFIG_REISERFS_FS_XATTR is not set
-# CONFIG_JFS_FS is not set
-
-#
-# XFS support
-#
-# CONFIG_XFS_FS is not set
-# CONFIG_MINIX_FS is not set
-# CONFIG_ROMFS_FS is not set
-CONFIG_QUOTA=y
-# CONFIG_QFMT_V1 is not set
-# CONFIG_QFMT_V2 is not set
-CONFIG_QUOTACTL=y
-CONFIG_DNOTIFY=y
-CONFIG_AUTOFS_FS=m
-CONFIG_AUTOFS4_FS=m
-
-#
-# CD-ROM/DVD Filesystems
-#
-CONFIG_ISO9660_FS=m
-CONFIG_JOLIET=y
-# CONFIG_ZISOFS is not set
-# CONFIG_UDF_FS is not set
-
-#
-# DOS/FAT/NT Filesystems
-#
-# CONFIG_MSDOS_FS is not set
-# CONFIG_VFAT_FS is not set
-# CONFIG_NTFS_FS is not set
-
-#
-# Pseudo filesystems
-#
-CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_SYSFS=y
-CONFIG_DEVFS_FS=y
-CONFIG_DEVFS_MOUNT=y
-# CONFIG_DEVFS_DEBUG is not set
-# CONFIG_DEVPTS_FS_XATTR is not set
-CONFIG_TMPFS=y
-# CONFIG_TMPFS_XATTR is not set
-# CONFIG_HUGETLB_PAGE is not set
-CONFIG_RAMFS=y
-
-#
-# Miscellaneous filesystems
-#
-# CONFIG_ADFS_FS is not set
-# CONFIG_AFFS_FS is not set
-# CONFIG_HFS_FS is not set
-# CONFIG_HFSPLUS_FS is not set
-# CONFIG_BEFS_FS is not set
-# CONFIG_BFS_FS is not set
-# CONFIG_EFS_FS is not set
-# CONFIG_CRAMFS is not set
-# CONFIG_VXFS_FS is not set
-# CONFIG_HPFS_FS is not set
-# CONFIG_QNX4FS_FS is not set
-# CONFIG_SYSV_FS is not set
-# CONFIG_UFS_FS is not set
-
-#
-# Network File Systems
-#
-# CONFIG_NFS_FS is not set
-# CONFIG_NFSD is not set
-# CONFIG_SMB_FS is not set
-# CONFIG_CIFS is not set
-# CONFIG_NCP_FS is not set
-# CONFIG_CODA_FS is not set
-# CONFIG_AFS_FS is not set
-
-#
-# Partition Types
-#
-# CONFIG_PARTITION_ADVANCED is not set
-CONFIG_MSDOS_PARTITION=y
-
-#
-# Native Language Support
-#
-CONFIG_NLS=y
-CONFIG_NLS_DEFAULT="utf-8"
-CONFIG_NLS_CODEPAGE_437=m
-# CONFIG_NLS_CODEPAGE_737 is not set
-# CONFIG_NLS_CODEPAGE_775 is not set
-# CONFIG_NLS_CODEPAGE_850 is not set
-# CONFIG_NLS_CODEPAGE_852 is not set
-# CONFIG_NLS_CODEPAGE_855 is not set
-# CONFIG_NLS_CODEPAGE_857 is not set
-# CONFIG_NLS_CODEPAGE_860 is not set
-# CONFIG_NLS_CODEPAGE_861 is not set
-# CONFIG_NLS_CODEPAGE_862 is not set
-# CONFIG_NLS_CODEPAGE_863 is not set
-# CONFIG_NLS_CODEPAGE_864 is not set
-# CONFIG_NLS_CODEPAGE_865 is not set
-# CONFIG_NLS_CODEPAGE_866 is not set
-# CONFIG_NLS_CODEPAGE_869 is not set
-# CONFIG_NLS_CODEPAGE_936 is not set
-# CONFIG_NLS_CODEPAGE_950 is not set
-# CONFIG_NLS_CODEPAGE_932 is not set
-# CONFIG_NLS_CODEPAGE_949 is not set
-# CONFIG_NLS_CODEPAGE_874 is not set
-# CONFIG_NLS_ISO8859_8 is not set
-# CONFIG_NLS_CODEPAGE_1250 is not set
-# CONFIG_NLS_CODEPAGE_1251 is not set
-# CONFIG_NLS_ASCII is not set
-CONFIG_NLS_ISO8859_1=m
-# CONFIG_NLS_ISO8859_2 is not set
-# CONFIG_NLS_ISO8859_3 is not set
-# CONFIG_NLS_ISO8859_4 is not set
-# CONFIG_NLS_ISO8859_5 is not set
-# CONFIG_NLS_ISO8859_6 is not set
-# CONFIG_NLS_ISO8859_7 is not set
-# CONFIG_NLS_ISO8859_9 is not set
-# CONFIG_NLS_ISO8859_13 is not set
-# CONFIG_NLS_ISO8859_14 is not set
-# CONFIG_NLS_ISO8859_15 is not set
-# CONFIG_NLS_KOI8_R is not set
-# CONFIG_NLS_KOI8_U is not set
-CONFIG_NLS_UTF8=m
-CONFIG_VSERVER=y
-CONFIG_VSERVER_LEGACYNET=y
-
-#
-# Linux VServer
-#
-CONFIG_VSERVER_FILESHARING=y
-CONFIG_VSERVER_LEGACY=y
-# CONFIG_VSERVER_LEGACY_VERSION is not set
-# CONFIG_VSERVER_NGNET is not set
-# CONFIG_VSERVER_PROC_SECURE is not set
-CONFIG_VSERVER_HARDCPU=y
-CONFIG_VSERVER_HARDCPU_IDLE=y
-CONFIG_VSERVER_ACB_SCHED=y
-# CONFIG_INOXID_NONE is not set
-# CONFIG_INOXID_UID16 is not set
-# CONFIG_INOXID_GID16 is not set
-CONFIG_INOXID_UGID24=y
-# CONFIG_INOXID_INTERN is not set
-# CONFIG_INOXID_RUNTIME is not set
-# CONFIG_XID_TAG_NFSD is not set
-# CONFIG_VSERVER_DEBUG is not set
-
-#
-# Security options
-#
-# CONFIG_KEYS is not set
-# CONFIG_SECURITY is not set
-
-#
-# Cryptographic options
-#
-# CONFIG_CRYPTO is not set
-
-#
-# Hardware crypto devices
-#
-
-#
-# Library routines
-#
-# CONFIG_CRC_CCITT is not set
-CONFIG_CRC32=m
-# CONFIG_LIBCRC32C is not set
-
-#
-# Multi-device support (RAID and LVM)
-#
-# CONFIG_MD is not set
-# CONFIG_INPUT is not set
-
-#
-# Kernel hacking
-#
-# CONFIG_PRINTK_TIME is not set
-CONFIG_DEBUG_KERNEL=y
-CONFIG_LOG_BUF_SHIFT=14
-# CONFIG_SCHEDSTATS is not set
-# CONFIG_DEBUG_SLAB is not set
-# CONFIG_DEBUG_SPINLOCK is not set
-# CONFIG_DEBUG_SPINLOCK_SLEEP is not set
-# CONFIG_DEBUG_KOBJECT is not set
-CONFIG_DEBUG_INFO=y
-# CONFIG_DEBUG_FS is not set
-CONFIG_FRAME_POINTER=y
-CONFIG_PT_PROXY=y
-# CONFIG_GCOV is not set
-# CONFIG_SYSCALL_DEBUG is not set
+++ /dev/null
-#
-# Automatically generated make config: don't edit
-# Linux kernel version: 2.6.12-1.1_1390_FC4.1.planetlab
-# Wed Sep 7 22:43:27 2005
-#
-CONFIG_XEN=y
-CONFIG_ARCH_XEN=y
-CONFIG_NO_IDLE_HZ=y
-
-#
-# XEN
-#
-# CONFIG_XEN_PRIVILEGED_GUEST is not set
-# CONFIG_XEN_PHYSDEV_ACCESS is not set
-CONFIG_XEN_BLKDEV_GRANT=y
-CONFIG_XEN_BLKDEV_FRONTEND=y
-CONFIG_XEN_NETDEV_FRONTEND=y
-# CONFIG_XEN_NETDEV_FRONTEND_PIPELINED_TRANSMITTER is not set
-# CONFIG_XEN_BLKDEV_TAP is not set
-# CONFIG_XEN_SHADOW_MODE is not set
-CONFIG_XEN_SCRUB_PAGES=y
-CONFIG_XEN_X86=y
-# CONFIG_XEN_X86_64 is not set
-CONFIG_HAVE_ARCH_DEV_ALLOC_SKB=y
-
-#
-# Code maturity level options
-#
-CONFIG_EXPERIMENTAL=y
-CONFIG_CLEAN_COMPILE=y
-CONFIG_LOCK_KERNEL=y
-CONFIG_INIT_ENV_ARG_LIMIT=32
-
-#
-# General setup
-#
-CONFIG_LOCALVERSION=""
-CONFIG_SWAP=y
-CONFIG_SYSVIPC=y
-CONFIG_POSIX_MQUEUE=y
-CONFIG_BSD_PROCESS_ACCT=y
-# CONFIG_BSD_PROCESS_ACCT_V3 is not set
-CONFIG_SYSCTL=y
-CONFIG_AUDIT=y
-CONFIG_AUDITSYSCALL=y
-# CONFIG_HOTPLUG is not set
-CONFIG_KOBJECT_UEVENT=y
-# CONFIG_IKCONFIG is not set
-CONFIG_OOM_PANIC=y
-CONFIG_CPUSETS=y
-# CONFIG_EMBEDDED is not set
-CONFIG_KALLSYMS=y
-# CONFIG_KALLSYMS_ALL is not set
-CONFIG_KALLSYMS_EXTRA_PASS=y
-CONFIG_PRINTK=y
-CONFIG_BUG=y
-CONFIG_BASE_FULL=y
-CONFIG_FUTEX=y
-CONFIG_EPOLL=y
-# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
-CONFIG_SHMEM=y
-CONFIG_CC_ALIGN_FUNCTIONS=0
-CONFIG_CC_ALIGN_LABELS=0
-CONFIG_CC_ALIGN_LOOPS=0
-CONFIG_CC_ALIGN_JUMPS=0
-# CONFIG_TINY_SHMEM is not set
-CONFIG_BASE_SMALL=0
-
-#
-# Loadable module support
-#
-CONFIG_MODULES=y
-CONFIG_MODULE_UNLOAD=y
-# CONFIG_MODULE_FORCE_UNLOAD is not set
-CONFIG_OBSOLETE_MODPARM=y
-CONFIG_MODVERSIONS=y
-CONFIG_MODULE_SRCVERSION_ALL=y
-CONFIG_MODULE_SIG=y
-# CONFIG_MODULE_SIG_FORCE is not set
-CONFIG_KMOD=y
-CONFIG_STOP_MACHINE=y
-
-#
-# X86 Processor Configuration
-#
-CONFIG_XENARCH="i386"
-CONFIG_X86=y
-CONFIG_MMU=y
-CONFIG_UID16=y
-CONFIG_GENERIC_ISA_DMA=y
-CONFIG_GENERIC_IOMAP=y
-# CONFIG_M386 is not set
-# CONFIG_M486 is not set
-# CONFIG_M586 is not set
-# CONFIG_M586TSC is not set
-# CONFIG_M586MMX is not set
-CONFIG_M686=y
-# CONFIG_MPENTIUMII is not set
-# CONFIG_MPENTIUMIII is not set
-# CONFIG_MPENTIUMM is not set
-# CONFIG_MPENTIUM4 is not set
-# CONFIG_MK6 is not set
-# CONFIG_MK7 is not set
-# CONFIG_MK8 is not set
-# CONFIG_MCRUSOE is not set
-# CONFIG_MEFFICEON is not set
-# CONFIG_MWINCHIPC6 is not set
-# CONFIG_MWINCHIP2 is not set
-# CONFIG_MWINCHIP3D is not set
-# CONFIG_MCYRIXIII is not set
-# CONFIG_MVIAC3_2 is not set
-CONFIG_X86_GENERIC=y
-CONFIG_X86_CMPXCHG=y
-CONFIG_X86_XADD=y
-CONFIG_X86_L1_CACHE_SHIFT=7
-CONFIG_RWSEM_XCHGADD_ALGORITHM=y
-CONFIG_GENERIC_CALIBRATE_DELAY=y
-CONFIG_X86_PPRO_FENCE=y
-CONFIG_X86_WP_WORKS_OK=y
-CONFIG_X86_INVLPG=y
-CONFIG_X86_BSWAP=y
-CONFIG_X86_POPAD_OK=y
-CONFIG_X86_GOOD_APIC=y
-CONFIG_X86_INTEL_USERCOPY=y
-CONFIG_X86_USE_PPRO_CHECKSUM=y
-# CONFIG_HPET_TIMER is not set
-# CONFIG_HPET_EMULATE_RTC is not set
-CONFIG_SMP=y
-CONFIG_NR_CPUS=32
-CONFIG_SCHED_SMT=y
-# CONFIG_PREEMPT is not set
-CONFIG_X86_CPUID=m
-
-#
-# Firmware Drivers
-#
-# CONFIG_EDD is not set
-CONFIG_NOHIGHMEM=y
-# CONFIG_HIGHMEM4G is not set
-CONFIG_HAVE_DEC_LOCK=y
-# CONFIG_REGPARM is not set
-CONFIG_KERN_PHYS_OFFSET=1
-# CONFIG_X86_LOCAL_APIC is not set
-
-#
-# Kernel hacking
-#
-CONFIG_DEBUG_KERNEL=y
-CONFIG_EARLY_PRINTK=y
-# CONFIG_DEBUG_STACKOVERFLOW is not set
-# CONFIG_DEBUG_STACK_USAGE is not set
-# CONFIG_DEBUG_SLAB is not set
-CONFIG_MAGIC_SYSRQ=y
-CONFIG_DEBUG_SPINLOCK=y
-# CONFIG_DEBUG_PAGEALLOC is not set
-CONFIG_DEBUG_INFO=y
-CONFIG_DEBUG_SPINLOCK_SLEEP=y
-# CONFIG_FRAME_POINTER is not set
-# CONFIG_4KSTACKS is not set
-CONFIG_GENERIC_HARDIRQS=y
-CONFIG_GENERIC_IRQ_PROBE=y
-CONFIG_X86_SMP=y
-CONFIG_X86_BIOS_REBOOT=y
-CONFIG_X86_TRAMPOLINE=y
-CONFIG_PC=y
-
-#
-# Executable file formats
-#
-CONFIG_BINFMT_ELF=y
-# CONFIG_BINFMT_AOUT is not set
-CONFIG_BINFMT_MISC=y
-
-#
-# Device Drivers
-#
-
-#
-# Generic Driver Options
-#
-CONFIG_STANDALONE=y
-CONFIG_PREVENT_FIRMWARE_BUILD=y
-# CONFIG_FW_LOADER is not set
-# CONFIG_DEBUG_DRIVER is not set
-
-#
-# Block devices
-#
-# CONFIG_BLK_DEV_FD is not set
-# CONFIG_BLK_DEV_COW_COMMON is not set
-CONFIG_BLK_DEV_LOOP=m
-CONFIG_BLK_DEV_CRYPTOLOOP=m
-# CONFIG_BLK_DEV_VROOT is not set
-CONFIG_BLK_DEV_NBD=m
-CONFIG_BLK_DEV_RAM=y
-CONFIG_BLK_DEV_RAM_COUNT=16
-CONFIG_BLK_DEV_RAM_SIZE=16384
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_INITRAMFS_SOURCE=""
-CONFIG_LBD=y
-CONFIG_CDROM_PKTCDVD=m
-CONFIG_CDROM_PKTCDVD_BUFFERS=8
-# CONFIG_CDROM_PKTCDVD_WCACHE is not set
-# CONFIG_DISKDUMP is not set
-
-#
-# IO Schedulers
-#
-CONFIG_IOSCHED_NOOP=y
-CONFIG_IOSCHED_AS=y
-CONFIG_IOSCHED_DEADLINE=y
-CONFIG_IOSCHED_CFQ=y
-CONFIG_ATA_OVER_ETH=m
-
-#
-# SCSI device support
-#
-CONFIG_SCSI=m
-CONFIG_SCSI_PROC_FS=y
-
-#
-# SCSI support type (disk, tape, CD-ROM)
-#
-CONFIG_BLK_DEV_SD=m
-CONFIG_CHR_DEV_ST=m
-CONFIG_CHR_DEV_OSST=m
-CONFIG_BLK_DEV_SR=m
-CONFIG_BLK_DEV_SR_VENDOR=y
-CONFIG_CHR_DEV_SG=m
-
-#
-# Some SCSI devices (e.g. CD jukebox) support multiple LUNs
-#
-CONFIG_SCSI_MULTI_LUN=y
-CONFIG_SCSI_CONSTANTS=y
-CONFIG_SCSI_LOGGING=y
-
-#
-# SCSI Transport Attributes
-#
-CONFIG_SCSI_SPI_ATTRS=m
-CONFIG_SCSI_FC_ATTRS=m
-CONFIG_SCSI_ISCSI_ATTRS=m
-
-#
-# SCSI low-level drivers
-#
-CONFIG_SCSI_SATA=y
-# CONFIG_SCSI_DEBUG is not set
-
-#
-# Multi-device support (RAID and LVM)
-#
-CONFIG_MD=y
-CONFIG_BLK_DEV_MD=y
-CONFIG_MD_LINEAR=m
-CONFIG_MD_RAID0=m
-CONFIG_MD_RAID1=m
-CONFIG_MD_RAID10=m
-CONFIG_MD_RAID5=m
-CONFIG_MD_RAID6=m
-CONFIG_MD_MULTIPATH=m
-CONFIG_MD_FAULTY=m
-CONFIG_BLK_DEV_DM=m
-CONFIG_DM_CRYPT=m
-CONFIG_DM_SNAPSHOT=m
-CONFIG_DM_MIRROR=m
-CONFIG_DM_ZERO=m
-CONFIG_DM_MULTIPATH=m
-CONFIG_DM_MULTIPATH_EMC=m
-
-#
-# Networking support
-#
-CONFIG_NET=y
-
-#
-# Networking options
-#
-CONFIG_PACKET=y
-CONFIG_PACKET_MMAP=y
-CONFIG_UNIX=y
-CONFIG_NET_KEY=m
-CONFIG_INET=y
-CONFIG_IP_MULTICAST=y
-CONFIG_IP_ADVANCED_ROUTER=y
-CONFIG_IP_MULTIPLE_TABLES=y
-CONFIG_IP_ROUTE_FWMARK=y
-CONFIG_IP_ROUTE_MULTIPATH=y
-# CONFIG_IP_ROUTE_MULTIPATH_CACHED is not set
-CONFIG_IP_ROUTE_VERBOSE=y
-# CONFIG_IP_PNP is not set
-CONFIG_NET_IPIP=m
-CONFIG_NET_IPGRE=m
-CONFIG_NET_IPGRE_BROADCAST=y
-CONFIG_IP_MROUTE=y
-CONFIG_IP_PIMSM_V1=y
-CONFIG_IP_PIMSM_V2=y
-# CONFIG_ARPD is not set
-CONFIG_SYN_COOKIES=y
-CONFIG_INET_AH=m
-CONFIG_INET_ESP=m
-CONFIG_INET_IPCOMP=m
-CONFIG_INET_TUNNEL=m
-CONFIG_IP_TCPDIAG=m
-CONFIG_IP_TCPDIAG_IPV6=y
-
-#
-# IP: Virtual Server Configuration
-#
-CONFIG_IP_VS=m
-# CONFIG_IP_VS_DEBUG is not set
-CONFIG_IP_VS_TAB_BITS=12
-
-#
-# IPVS transport protocol load balancing support
-#
-CONFIG_IP_VS_PROTO_TCP=y
-CONFIG_IP_VS_PROTO_UDP=y
-CONFIG_IP_VS_PROTO_ESP=y
-CONFIG_IP_VS_PROTO_AH=y
-
-#
-# IPVS scheduler
-#
-CONFIG_IP_VS_RR=m
-CONFIG_IP_VS_WRR=m
-CONFIG_IP_VS_LC=m
-CONFIG_IP_VS_WLC=m
-CONFIG_IP_VS_LBLC=m
-CONFIG_IP_VS_LBLCR=m
-CONFIG_IP_VS_DH=m
-CONFIG_IP_VS_SH=m
-CONFIG_IP_VS_SED=m
-CONFIG_IP_VS_NQ=m
-
-#
-# IPVS application helper
-#
-CONFIG_IP_VS_FTP=m
-# CONFIG_ICMP_IPOD is not set
-CONFIG_IPV6=m
-CONFIG_IPV6_PRIVACY=y
-CONFIG_INET6_AH=m
-CONFIG_INET6_ESP=m
-CONFIG_INET6_IPCOMP=m
-CONFIG_INET6_TUNNEL=m
-CONFIG_IPV6_TUNNEL=m
-CONFIG_NETFILTER=y
-# CONFIG_NETFILTER_DEBUG is not set
-CONFIG_BRIDGE_NETFILTER=y
-
-#
-# IP: Netfilter Configuration
-#
-CONFIG_IP_NF_CONNTRACK=m
-CONFIG_IP_NF_CT_ACCT=y
-CONFIG_IP_NF_CONNTRACK_MARK=y
-CONFIG_IP_NF_CT_PROTO_SCTP=m
-CONFIG_IP_NF_FTP=m
-CONFIG_IP_NF_IRC=m
-CONFIG_IP_NF_TFTP=m
-CONFIG_IP_NF_AMANDA=m
-CONFIG_IP_NF_QUEUE=m
-CONFIG_IP_NF_IPTABLES=m
-CONFIG_IP_NF_MATCH_LIMIT=m
-CONFIG_IP_NF_MATCH_IPRANGE=m
-CONFIG_IP_NF_MATCH_MAC=m
-CONFIG_IP_NF_MATCH_PKTTYPE=m
-CONFIG_IP_NF_MATCH_MARK=m
-CONFIG_IP_NF_MATCH_MULTIPORT=m
-CONFIG_IP_NF_MATCH_TOS=m
-CONFIG_IP_NF_MATCH_RECENT=m
-CONFIG_IP_NF_MATCH_ECN=m
-CONFIG_IP_NF_MATCH_DSCP=m
-CONFIG_IP_NF_MATCH_AH_ESP=m
-CONFIG_IP_NF_MATCH_LENGTH=m
-CONFIG_IP_NF_MATCH_TTL=m
-CONFIG_IP_NF_MATCH_TCPMSS=m
-CONFIG_IP_NF_MATCH_HELPER=m
-CONFIG_IP_NF_MATCH_STATE=m
-CONFIG_IP_NF_MATCH_CONNTRACK=m
-CONFIG_IP_NF_MATCH_OWNER=m
-CONFIG_IP_NF_MATCH_PHYSDEV=m
-CONFIG_IP_NF_MATCH_ADDRTYPE=m
-CONFIG_IP_NF_MATCH_REALM=m
-CONFIG_IP_NF_MATCH_SCTP=m
-CONFIG_IP_NF_MATCH_COMMENT=m
-CONFIG_IP_NF_MATCH_CONNMARK=m
-CONFIG_IP_NF_MATCH_HASHLIMIT=m
-CONFIG_IP_NF_FILTER=m
-CONFIG_IP_NF_TARGET_REJECT=m
-CONFIG_IP_NF_TARGET_LOG=m
-CONFIG_IP_NF_TARGET_ULOG=m
-CONFIG_IP_NF_TARGET_TCPMSS=m
-CONFIG_IP_NF_NAT=m
-CONFIG_IP_NF_NAT_NEEDED=y
-CONFIG_IP_NF_TARGET_MASQUERADE=m
-CONFIG_IP_NF_TARGET_REDIRECT=m
-CONFIG_IP_NF_TARGET_NETMAP=m
-CONFIG_IP_NF_TARGET_SAME=m
-CONFIG_IP_NF_NAT_SNMP_BASIC=m
-CONFIG_IP_NF_NAT_IRC=m
-CONFIG_IP_NF_NAT_FTP=m
-CONFIG_IP_NF_NAT_TFTP=m
-CONFIG_IP_NF_NAT_AMANDA=m
-CONFIG_IP_NF_MANGLE=m
-CONFIG_IP_NF_TARGET_TOS=m
-CONFIG_IP_NF_TARGET_ECN=m
-CONFIG_IP_NF_TARGET_DSCP=m
-CONFIG_IP_NF_TARGET_MARK=m
-CONFIG_IP_NF_TARGET_CLASSIFY=m
-CONFIG_IP_NF_TARGET_CONNMARK=m
-CONFIG_IP_NF_TARGET_CLUSTERIP=m
-CONFIG_IP_NF_RAW=m
-CONFIG_IP_NF_TARGET_NOTRACK=m
-CONFIG_IP_NF_ARPTABLES=m
-CONFIG_IP_NF_ARPFILTER=m
-CONFIG_IP_NF_ARP_MANGLE=m
-# CONFIG_IP_NF_CT_PROTO_GRE is not set
-
-#
-# IPv6: Netfilter Configuration (EXPERIMENTAL)
-#
-CONFIG_IP6_NF_QUEUE=m
-CONFIG_IP6_NF_IPTABLES=m
-CONFIG_IP6_NF_MATCH_LIMIT=m
-CONFIG_IP6_NF_MATCH_MAC=m
-CONFIG_IP6_NF_MATCH_RT=m
-CONFIG_IP6_NF_MATCH_OPTS=m
-CONFIG_IP6_NF_MATCH_FRAG=m
-CONFIG_IP6_NF_MATCH_HL=m
-CONFIG_IP6_NF_MATCH_MULTIPORT=m
-CONFIG_IP6_NF_MATCH_OWNER=m
-CONFIG_IP6_NF_MATCH_MARK=m
-CONFIG_IP6_NF_MATCH_IPV6HEADER=m
-CONFIG_IP6_NF_MATCH_AHESP=m
-CONFIG_IP6_NF_MATCH_LENGTH=m
-CONFIG_IP6_NF_MATCH_EUI64=m
-CONFIG_IP6_NF_MATCH_PHYSDEV=m
-CONFIG_IP6_NF_FILTER=m
-CONFIG_IP6_NF_TARGET_LOG=m
-CONFIG_IP6_NF_MANGLE=m
-CONFIG_IP6_NF_TARGET_MARK=m
-CONFIG_IP6_NF_RAW=m
-
-#
-# Bridge: Netfilter Configuration
-#
-CONFIG_BRIDGE_NF_EBTABLES=m
-CONFIG_BRIDGE_EBT_BROUTE=m
-CONFIG_BRIDGE_EBT_T_FILTER=m
-CONFIG_BRIDGE_EBT_T_NAT=m
-CONFIG_BRIDGE_EBT_802_3=m
-CONFIG_BRIDGE_EBT_AMONG=m
-CONFIG_BRIDGE_EBT_ARP=m
-CONFIG_BRIDGE_EBT_IP=m
-CONFIG_BRIDGE_EBT_LIMIT=m
-CONFIG_BRIDGE_EBT_MARK=m
-CONFIG_BRIDGE_EBT_PKTTYPE=m
-CONFIG_BRIDGE_EBT_STP=m
-CONFIG_BRIDGE_EBT_VLAN=m
-CONFIG_BRIDGE_EBT_ARPREPLY=m
-CONFIG_BRIDGE_EBT_DNAT=m
-CONFIG_BRIDGE_EBT_MARK_T=m
-CONFIG_BRIDGE_EBT_REDIRECT=m
-CONFIG_BRIDGE_EBT_SNAT=m
-CONFIG_BRIDGE_EBT_LOG=m
-CONFIG_BRIDGE_EBT_ULOG=m
-CONFIG_VNET=m
-CONFIG_XFRM=y
-CONFIG_XFRM_USER=y
-
-#
-# SCTP Configuration (EXPERIMENTAL)
-#
-CONFIG_IP_SCTP=m
-# CONFIG_SCTP_DBG_MSG is not set
-# CONFIG_SCTP_DBG_OBJCNT is not set
-# CONFIG_SCTP_HMAC_NONE is not set
-# CONFIG_SCTP_HMAC_SHA1 is not set
-CONFIG_SCTP_HMAC_MD5=y
-CONFIG_ATM=m
-CONFIG_ATM_CLIP=m
-# CONFIG_ATM_CLIP_NO_ICMP is not set
-CONFIG_ATM_LANE=m
-# CONFIG_ATM_MPOA is not set
-CONFIG_ATM_BR2684=m
-# CONFIG_ATM_BR2684_IPFILTER is not set
-CONFIG_BRIDGE=m
-CONFIG_VLAN_8021Q=m
-# CONFIG_DECNET is not set
-CONFIG_LLC=m
-# CONFIG_LLC2 is not set
-CONFIG_IPX=m
-# CONFIG_IPX_INTERN is not set
-CONFIG_ATALK=m
-CONFIG_DEV_APPLETALK=y
-CONFIG_IPDDP=m
-CONFIG_IPDDP_ENCAP=y
-CONFIG_IPDDP_DECAP=y
-# CONFIG_X25 is not set
-# CONFIG_LAPB is not set
-CONFIG_NET_DIVERT=y
-# CONFIG_ECONET is not set
-CONFIG_WAN_ROUTER=m
-
-#
-# QoS and/or fair queueing
-#
-CONFIG_NET_SCHED=y
-# CONFIG_NET_SCH_CLK_JIFFIES is not set
-CONFIG_NET_SCH_CLK_GETTIMEOFDAY=y
-# CONFIG_NET_SCH_CLK_CPU is not set
-CONFIG_NET_SCH_CBQ=m
-CONFIG_NET_SCH_HTB=m
-CONFIG_NET_SCH_HFSC=m
-CONFIG_NET_SCH_ATM=m
-CONFIG_NET_SCH_PRIO=m
-CONFIG_NET_SCH_RED=m
-CONFIG_NET_SCH_SFQ=m
-CONFIG_NET_SCH_TEQL=m
-CONFIG_NET_SCH_TBF=m
-CONFIG_NET_SCH_GRED=m
-CONFIG_NET_SCH_DSMARK=m
-CONFIG_NET_SCH_NETEM=m
-CONFIG_NET_SCH_INGRESS=m
-CONFIG_NET_QOS=y
-CONFIG_NET_ESTIMATOR=y
-CONFIG_NET_CLS=y
-CONFIG_NET_CLS_BASIC=m
-CONFIG_NET_CLS_TCINDEX=m
-CONFIG_NET_CLS_ROUTE4=m
-CONFIG_NET_CLS_ROUTE=y
-CONFIG_NET_CLS_FW=m
-CONFIG_NET_CLS_U32=m
-CONFIG_CLS_U32_PERF=y
-CONFIG_NET_CLS_IND=y
-CONFIG_CLS_U32_MARK=y
-CONFIG_NET_CLS_RSVP=m
-CONFIG_NET_CLS_RSVP6=m
-CONFIG_NET_EMATCH=y
-CONFIG_NET_EMATCH_STACK=32
-CONFIG_NET_EMATCH_CMP=m
-CONFIG_NET_EMATCH_NBYTE=m
-CONFIG_NET_EMATCH_U32=m
-CONFIG_NET_EMATCH_META=m
-# CONFIG_NET_CLS_ACT is not set
-CONFIG_NET_CLS_POLICE=y
-
-#
-# Network testing
-#
-# CONFIG_NET_PKTGEN is not set
-CONFIG_NETPOLL=y
-# CONFIG_NETPOLL_RX is not set
-CONFIG_NETPOLL_TRAP=y
-CONFIG_NET_POLL_CONTROLLER=y
-# CONFIG_HAMRADIO is not set
-CONFIG_IRDA=m
-
-#
-# IrDA protocols
-#
-CONFIG_IRLAN=m
-CONFIG_IRNET=m
-CONFIG_IRCOMM=m
-# CONFIG_IRDA_ULTRA is not set
-
-#
-# IrDA options
-#
-CONFIG_IRDA_CACHE_LAST_LSAP=y
-CONFIG_IRDA_FAST_RR=y
-# CONFIG_IRDA_DEBUG is not set
-
-#
-# Infrared-port device drivers
-#
-
-#
-# SIR device drivers
-#
-CONFIG_IRTTY_SIR=m
-
-#
-# Dongle support
-#
-# CONFIG_DONGLE is not set
-
-#
-# Old SIR device drivers
-#
-
-#
-# Old Serial dongle support
-#
-
-#
-# FIR device drivers
-#
-# CONFIG_BT is not set
-CONFIG_TUX=m
-
-#
-# TUX options
-#
-CONFIG_TUX_EXTCGI=y
-CONFIG_TUX_EXTENDED_LOG=y
-# CONFIG_TUX_DEBUG is not set
-CONFIG_NETDEVICES=y
-CONFIG_DUMMY=m
-CONFIG_BONDING=m
-CONFIG_EQUALIZER=m
-CONFIG_TUN=m
-
-#
-# Ethernet (10 or 100Mbit)
-#
-CONFIG_NET_ETHERNET=y
-CONFIG_MII=m
-
-#
-# Ethernet (1000 Mbit)
-#
-
-#
-# Ethernet (10000 Mbit)
-#
-
-#
-# Token Ring devices
-#
-
-#
-# Wireless LAN (non-hamradio)
-#
-CONFIG_NET_RADIO=y
-
-#
-# Obsolete Wireless cards support (pre-802.11)
-#
-# CONFIG_STRIP is not set
-# CONFIG_IEEE80211 is not set
-# CONFIG_ATMEL is not set
-
-#
-# Wan interfaces
-#
-# CONFIG_WAN is not set
-
-#
-# ATM drivers
-#
-CONFIG_ATM_TCP=m
-CONFIG_PPP=m
-CONFIG_PPP_MULTILINK=y
-CONFIG_PPP_FILTER=y
-CONFIG_PPP_ASYNC=m
-CONFIG_PPP_SYNC_TTY=m
-CONFIG_PPP_DEFLATE=m
-# CONFIG_PPP_BSDCOMP is not set
-CONFIG_PPPOE=m
-CONFIG_PPPOATM=m
-CONFIG_SLIP=m
-CONFIG_SLIP_COMPRESSED=y
-CONFIG_SLIP_SMART=y
-# CONFIG_SLIP_MODE_SLIP6 is not set
-# CONFIG_SHAPER is not set
-CONFIG_NETCONSOLE=m
-CONFIG_UNIX98_PTYS=y
-
-#
-# File systems
-#
-CONFIG_EXT2_FS=y
-CONFIG_EXT2_FS_XATTR=y
-CONFIG_EXT2_FS_POSIX_ACL=y
-CONFIG_EXT2_FS_SECURITY=y
-CONFIG_EXT3_FS=y
-CONFIG_EXT3_FS_XATTR=y
-CONFIG_EXT3_FS_POSIX_ACL=y
-CONFIG_EXT3_FS_SECURITY=y
-CONFIG_JBD=y
-# CONFIG_JBD_DEBUG is not set
-CONFIG_FS_MBCACHE=y
-CONFIG_REISERFS_FS=m
-# CONFIG_REISERFS_CHECK is not set
-CONFIG_REISERFS_PROC_INFO=y
-CONFIG_REISERFS_FS_XATTR=y
-CONFIG_REISERFS_FS_POSIX_ACL=y
-CONFIG_REISERFS_FS_SECURITY=y
-CONFIG_JFS_FS=m
-CONFIG_JFS_POSIX_ACL=y
-CONFIG_JFS_SECURITY=y
-# CONFIG_JFS_DEBUG is not set
-# CONFIG_JFS_STATISTICS is not set
-CONFIG_FS_POSIX_ACL=y
-
-#
-# XFS support
-#
-CONFIG_XFS_FS=m
-CONFIG_XFS_EXPORT=y
-# CONFIG_XFS_RT is not set
-CONFIG_XFS_QUOTA=y
-CONFIG_XFS_SECURITY=y
-CONFIG_XFS_POSIX_ACL=y
-CONFIG_MINIX_FS=m
-CONFIG_ROMFS_FS=m
-CONFIG_QUOTA=y
-# CONFIG_QFMT_V1 is not set
-CONFIG_QFMT_V2=y
-CONFIG_QUOTACTL=y
-CONFIG_DNOTIFY=y
-CONFIG_AUTOFS_FS=m
-CONFIG_AUTOFS4_FS=m
-
-#
-# CD-ROM/DVD Filesystems
-#
-CONFIG_ISO9660_FS=y
-CONFIG_JOLIET=y
-CONFIG_ZISOFS=y
-CONFIG_ZISOFS_FS=y
-CONFIG_UDF_FS=m
-CONFIG_UDF_NLS=y
-
-#
-# DOS/FAT/NT Filesystems
-#
-CONFIG_FAT_FS=m
-CONFIG_MSDOS_FS=m
-CONFIG_VFAT_FS=m
-CONFIG_FAT_DEFAULT_CODEPAGE=437
-CONFIG_FAT_DEFAULT_IOCHARSET="ascii"
-# CONFIG_NTFS_FS is not set
-
-#
-# Pseudo filesystems
-#
-CONFIG_PROC_FS=y
-CONFIG_PROC_KCORE=y
-CONFIG_SYSFS=y
-# CONFIG_DEVFS_FS is not set
-CONFIG_DEVPTS_FS_XATTR=y
-CONFIG_DEVPTS_FS_SECURITY=y
-CONFIG_TMPFS=y
-CONFIG_TMPFS_XATTR=y
-CONFIG_TMPFS_SECURITY=y
-CONFIG_HUGETLBFS=y
-CONFIG_HUGETLB_PAGE=y
-CONFIG_RAMFS=y
-
-#
-# Miscellaneous filesystems
-#
-# CONFIG_ADFS_FS is not set
-CONFIG_AFFS_FS=m
-CONFIG_HFS_FS=m
-CONFIG_HFSPLUS_FS=m
-CONFIG_BEFS_FS=m
-# CONFIG_BEFS_DEBUG is not set
-CONFIG_BFS_FS=m
-CONFIG_EFS_FS=m
-CONFIG_CRAMFS=m
-CONFIG_VXFS_FS=m
-# CONFIG_HPFS_FS is not set
-CONFIG_QNX4FS_FS=m
-CONFIG_SYSV_FS=m
-CONFIG_UFS_FS=m
-# CONFIG_UFS_FS_WRITE is not set
-
-#
-# Network File Systems
-#
-CONFIG_NFS_FS=m
-CONFIG_NFS_V3=y
-CONFIG_NFS_V4=y
-CONFIG_NFS_DIRECTIO=y
-CONFIG_NFSD=m
-CONFIG_NFSD_V3=y
-CONFIG_NFSD_V4=y
-CONFIG_NFSD_TCP=y
-CONFIG_LOCKD=m
-CONFIG_LOCKD_V4=y
-CONFIG_EXPORTFS=m
-CONFIG_SUNRPC=m
-CONFIG_SUNRPC_GSS=m
-CONFIG_RPCSEC_GSS_KRB5=m
-CONFIG_RPCSEC_GSS_SPKM3=m
-CONFIG_SMB_FS=m
-# CONFIG_SMB_NLS_DEFAULT is not set
-CONFIG_CIFS=m
-# CONFIG_CIFS_STATS is not set
-CONFIG_CIFS_XATTR=y
-CONFIG_CIFS_POSIX=y
-# CONFIG_CIFS_EXPERIMENTAL is not set
-CONFIG_NCP_FS=m
-CONFIG_NCPFS_PACKET_SIGNING=y
-CONFIG_NCPFS_IOCTL_LOCKING=y
-CONFIG_NCPFS_STRONG=y
-CONFIG_NCPFS_NFS_NS=y
-CONFIG_NCPFS_OS2_NS=y
-CONFIG_NCPFS_SMALLDOS=y
-CONFIG_NCPFS_NLS=y
-CONFIG_NCPFS_EXTRAS=y
-# CONFIG_CODA_FS is not set
-# CONFIG_AFS_FS is not set
-
-#
-# Partition Types
-#
-CONFIG_PARTITION_ADVANCED=y
-# CONFIG_ACORN_PARTITION is not set
-CONFIG_OSF_PARTITION=y
-# CONFIG_AMIGA_PARTITION is not set
-# CONFIG_ATARI_PARTITION is not set
-CONFIG_MAC_PARTITION=y
-CONFIG_MSDOS_PARTITION=y
-CONFIG_BSD_DISKLABEL=y
-CONFIG_MINIX_SUBPARTITION=y
-CONFIG_SOLARIS_X86_PARTITION=y
-CONFIG_UNIXWARE_DISKLABEL=y
-# CONFIG_LDM_PARTITION is not set
-CONFIG_SGI_PARTITION=y
-# CONFIG_ULTRIX_PARTITION is not set
-CONFIG_SUN_PARTITION=y
-CONFIG_EFI_PARTITION=y
-
-#
-# Native Language Support
-#
-CONFIG_NLS=y
-CONFIG_NLS_DEFAULT="utf8"
-CONFIG_NLS_CODEPAGE_437=y
-CONFIG_NLS_CODEPAGE_737=m
-CONFIG_NLS_CODEPAGE_775=m
-CONFIG_NLS_CODEPAGE_850=m
-CONFIG_NLS_CODEPAGE_852=m
-CONFIG_NLS_CODEPAGE_855=m
-CONFIG_NLS_CODEPAGE_857=m
-CONFIG_NLS_CODEPAGE_860=m
-CONFIG_NLS_CODEPAGE_861=m
-CONFIG_NLS_CODEPAGE_862=m
-CONFIG_NLS_CODEPAGE_863=m
-CONFIG_NLS_CODEPAGE_864=m
-CONFIG_NLS_CODEPAGE_865=m
-CONFIG_NLS_CODEPAGE_866=m
-CONFIG_NLS_CODEPAGE_869=m
-CONFIG_NLS_CODEPAGE_936=m
-CONFIG_NLS_CODEPAGE_950=m
-CONFIG_NLS_CODEPAGE_932=m
-CONFIG_NLS_CODEPAGE_949=m
-CONFIG_NLS_CODEPAGE_874=m
-CONFIG_NLS_ISO8859_8=m
-CONFIG_NLS_CODEPAGE_1250=m
-CONFIG_NLS_CODEPAGE_1251=m
-CONFIG_NLS_ASCII=y
-CONFIG_NLS_ISO8859_1=m
-CONFIG_NLS_ISO8859_2=m
-CONFIG_NLS_ISO8859_3=m
-CONFIG_NLS_ISO8859_4=m
-CONFIG_NLS_ISO8859_5=m
-CONFIG_NLS_ISO8859_6=m
-CONFIG_NLS_ISO8859_7=m
-CONFIG_NLS_ISO8859_9=m
-CONFIG_NLS_ISO8859_13=m
-CONFIG_NLS_ISO8859_14=m
-CONFIG_NLS_ISO8859_15=m
-CONFIG_NLS_KOI8_R=m
-CONFIG_NLS_KOI8_U=m
-CONFIG_NLS_UTF8=m
-
-#
-# Kernel hacking
-#
-# CONFIG_PRINTK_TIME is not set
-CONFIG_LOG_BUF_SHIFT=17
-# CONFIG_SCHEDSTATS is not set
-# CONFIG_DEBUG_KOBJECT is not set
-CONFIG_DEBUG_BUGVERBOSE=y
-CONFIG_DEBUG_FS=y
-# CONFIG_KPROBES is not set
-# CONFIG_IRQSTACKS is not set
-CONFIG_STACK_SIZE_SHIFT=13
-CONFIG_STACK_WARN=4096
-# CONFIG_X86_STACK_CHECK is not set
-CONFIG_VSERVER=y
-CONFIG_VSERVER_SECURITY=y
-CONFIG_VSERVER_LEGACYNET=y
-
-#
-# Linux VServer
-#
-CONFIG_VSERVER_FILESHARING=y
-CONFIG_VSERVER_LEGACY=y
-# CONFIG_VSERVER_LEGACY_VERSION is not set
-# CONFIG_VSERVER_NGNET is not set
-CONFIG_VSERVER_PROC_SECURE=y
-CONFIG_VSERVER_HARDCPU=y
-CONFIG_VSERVER_HARDCPU_IDLE=y
-CONFIG_VSERVER_ACB_SCHED=y
-# CONFIG_INOXID_NONE is not set
-# CONFIG_INOXID_UID16 is not set
-# CONFIG_INOXID_GID16 is not set
-CONFIG_INOXID_UGID24=y
-# CONFIG_INOXID_INTERN is not set
-# CONFIG_INOXID_RUNTIME is not set
-# CONFIG_XID_TAG_NFSD is not set
-# CONFIG_VSERVER_DEBUG is not set
-
-#
-# Security options
-#
-CONFIG_KEYS=y
-CONFIG_KEYS_DEBUG_PROC_KEYS=y
-CONFIG_SECURITY=y
-CONFIG_SECURITY_NETWORK=y
-CONFIG_SECURITY_CAPABILITIES=y
-# CONFIG_SECURITY_SECLVL is not set
-CONFIG_SECURITY_SELINUX=y
-CONFIG_SECURITY_SELINUX_BOOTPARAM=y
-CONFIG_SECURITY_SELINUX_BOOTPARAM_VALUE=1
-CONFIG_SECURITY_SELINUX_DISABLE=y
-CONFIG_SECURITY_SELINUX_DEVELOP=y
-CONFIG_SECURITY_SELINUX_AVC_STATS=y
-CONFIG_SECURITY_SELINUX_CHECKREQPROT_VALUE=1
-
-#
-# Cryptographic options
-#
-CONFIG_CRYPTO=y
-CONFIG_CRYPTO_HMAC=y
-CONFIG_CRYPTO_NULL=m
-CONFIG_CRYPTO_MD4=m
-CONFIG_CRYPTO_MD5=m
-CONFIG_CRYPTO_SHA1=y
-CONFIG_CRYPTO_SHA256=m
-CONFIG_CRYPTO_SHA512=m
-CONFIG_CRYPTO_WP512=m
-CONFIG_CRYPTO_TGR192=m
-CONFIG_CRYPTO_DES=m
-CONFIG_CRYPTO_BLOWFISH=m
-CONFIG_CRYPTO_TWOFISH=m
-CONFIG_CRYPTO_SERPENT=m
-CONFIG_CRYPTO_AES_586=m
-CONFIG_CRYPTO_CAST5=m
-CONFIG_CRYPTO_CAST6=m
-CONFIG_CRYPTO_TEA=m
-CONFIG_CRYPTO_ARC4=m
-CONFIG_CRYPTO_KHAZAD=m
-CONFIG_CRYPTO_ANUBIS=m
-CONFIG_CRYPTO_DEFLATE=m
-CONFIG_CRYPTO_MICHAEL_MIC=m
-CONFIG_CRYPTO_CRC32C=m
-# CONFIG_CRYPTO_TEST is not set
-CONFIG_CRYPTO_SIGNATURE=y
-CONFIG_CRYPTO_SIGNATURE_DSA=y
-CONFIG_CRYPTO_MPILIB=y
-
-#
-# Hardware crypto devices
-#
-CONFIG_CRYPTO_DEV_PADLOCK=m
-CONFIG_CRYPTO_DEV_PADLOCK_AES=y
-
-#
-# Library routines
-#
-CONFIG_CRC_CCITT=m
-CONFIG_CRC32=y
-CONFIG_LIBCRC32C=m
-CONFIG_ZLIB_INFLATE=y
-CONFIG_ZLIB_DEFLATE=m
obj-$(CONFIG_MCA) += mca/
obj-$(CONFIG_EISA) += eisa/
obj-$(CONFIG_CPU_FREQ) += cpufreq/
+obj-$(CONFIG_CRASH_DUMP) += dump/
obj-$(CONFIG_MMC) += mmc/
obj-$(CONFIG_INFINIBAND) += infiniband/
obj-$(CONFIG_BLK_DEV_SGIIOC4) += sn/
--- /dev/null
+/*
+ * linux/drivers/char/busmouse.c
+ *
+ * Copyright (C) 1995 - 1998 Russell King <linux@arm.linux.org.uk>
+ * Protocol taken from original busmouse.c
+ * read() waiting taken from psaux.c
+ *
+ * Medium-level interface for quadrature or bus mice.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/slab.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/poll.h>
+#include <linux/miscdevice.h>
+#include <linux/random.h>
+#include <linux/init.h>
+#include <linux/smp_lock.h>
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+
+#include "busmouse.h"
+
+/* Uncomment this if your mouse drivers expect the kernel to
+ * return with EAGAIN if the mouse does not have any events
+ * available, even if the mouse is opened in blocking mode.
+ * Please report use of this "feature" to the author using the
+ * above address.
+ */
+/*#define BROKEN_MOUSE*/
+
+struct busmouse_data {
+ struct miscdevice miscdev;
+ struct busmouse *ops;
+ spinlock_t lock;
+
+ wait_queue_head_t wait;
+ struct fasync_struct *fasyncptr;
+ char active;
+ char buttons;
+ char ready;
+ int dxpos;
+ int dypos;
+};
+
+#define NR_MICE 15
+#define FIRST_MOUSE 0
+#define DEV_TO_MOUSE(inode) MINOR_TO_MOUSE(iminor(inode))
+#define MINOR_TO_MOUSE(minor) ((minor) - FIRST_MOUSE)
+
+/*
+ * List of mice and guarding semaphore. You must take the semaphore
+ * before you take the misc device semaphore if you need both
+ */
+
+static struct busmouse_data *busmouse_data[NR_MICE];
+static DECLARE_MUTEX(mouse_sem);
+
+/**
+ * busmouse_add_movementbuttons - notification of a change of mouse position and button state
+ * @mousedev: mouse number
+ * @dx: delta X movement
+ * @dy: delta Y movement
+ * @buttons: new button state
+ *
+ * Updates the mouse position and button information. The mousedev
+ * parameter is the value returned from register_busmouse. The
+ * movement information is updated, and the new button state is
+ * saved. A waiting user thread is woken.
+ */
+
+void busmouse_add_movementbuttons(int mousedev, int dx, int dy, int buttons)
+{
+ struct busmouse_data *mse = busmouse_data[mousedev];
+ int changed;
+
+ spin_lock(&mse->lock);
+ changed = (dx != 0 || dy != 0 || mse->buttons != buttons);
+
+ if (changed) {
+ add_mouse_randomness((buttons << 16) + (dy << 8) + dx);
+
+ mse->buttons = buttons;
+ mse->dxpos += dx;
+ mse->dypos += dy;
+ mse->ready = 1;
+
+ /*
+ * keep dx/dy reasonable, but still able to track when X (or
+ * whatever) must page or is busy (i.e. long waits between
+ * reads)
+ */
+ if (mse->dxpos < -2048)
+ mse->dxpos = -2048;
+ if (mse->dxpos > 2048)
+ mse->dxpos = 2048;
+ if (mse->dypos < -2048)
+ mse->dypos = -2048;
+ if (mse->dypos > 2048)
+ mse->dypos = 2048;
+ }
+
+ spin_unlock(&mse->lock);
+
+ if (changed) {
+ wake_up(&mse->wait);
+
+ kill_fasync(&mse->fasyncptr, SIGIO, POLL_IN);
+ }
+}
+
+/**
+ * busmouse_add_movement - notification of a change of mouse position
+ * @mousedev: mouse number
+ * @dx: delta X movement
+ * @dy: delta Y movement
+ *
+ * Updates the mouse position. The mousedev parameter is the value
+ * returned from register_busmouse. The movement information is
+ * updated, and a waiting user thread is woken.
+ */
+
+void busmouse_add_movement(int mousedev, int dx, int dy)
+{
+ struct busmouse_data *mse = busmouse_data[mousedev];
+
+ busmouse_add_movementbuttons(mousedev, dx, dy, mse->buttons);
+}
+
+/**
+ * busmouse_add_buttons - notification of a change of button state
+ * @mousedev: mouse number
+ * @clear: mask of buttons to clear
+ * @eor: mask of buttons to change
+ *
+ * Updates the button state. The mousedev parameter is the value
+ * returned from register_busmouse. The buttons are updated by:
+ * new_state = (old_state & ~clear) ^ eor
+ * A waiting user thread is woken up.
+ */
+
+void busmouse_add_buttons(int mousedev, int clear, int eor)
+{
+ struct busmouse_data *mse = busmouse_data[mousedev];
+
+ busmouse_add_movementbuttons(mousedev, 0, 0, (mse->buttons & ~clear) ^ eor);
+}
+
+static int busmouse_fasync(int fd, struct file *filp, int on)
+{
+ struct busmouse_data *mse = (struct busmouse_data *)filp->private_data;
+ int retval;
+
+ retval = fasync_helper(fd, filp, on, &mse->fasyncptr);
+ if (retval < 0)
+ return retval;
+ return 0;
+}
+
+static int busmouse_release(struct inode *inode, struct file *file)
+{
+ struct busmouse_data *mse = (struct busmouse_data *)file->private_data;
+ int ret = 0;
+
+ lock_kernel();
+ busmouse_fasync(-1, file, 0);
+
+ down(&mouse_sem); /* to protect mse->active */
+ if (--mse->active == 0) {
+ if (mse->ops->release)
+ ret = mse->ops->release(inode, file);
+ module_put(mse->ops->owner);
+ mse->ready = 0;
+ }
+ unlock_kernel();
+ up(&mouse_sem);
+
+ return ret;
+}
+
+static int busmouse_open(struct inode *inode, struct file *file)
+{
+ struct busmouse_data *mse;
+ unsigned int mousedev;
+ int ret;
+
+ mousedev = DEV_TO_MOUSE(inode);
+ if (mousedev >= NR_MICE)
+ return -EINVAL;
+
+ down(&mouse_sem);
+ mse = busmouse_data[mousedev];
+ ret = -ENODEV;
+ if (!mse || !mse->ops) /* shouldn't happen, but... */
+ goto end;
+
+ if (!try_module_get(mse->ops->owner))
+ goto end;
+
+ ret = 0;
+ if (mse->ops->open) {
+ ret = mse->ops->open(inode, file);
+ if (ret)
+ module_put(mse->ops->owner);
+ }
+
+ if (ret)
+ goto end;
+
+ file->private_data = mse;
+
+ if (mse->active++)
+ goto end;
+
+ spin_lock_irq(&mse->lock);
+
+ mse->ready = 0;
+ mse->dxpos = 0;
+ mse->dypos = 0;
+ mse->buttons = mse->ops->init_button_state;
+
+ spin_unlock_irq(&mse->lock);
+end:
+ up(&mouse_sem);
+ return ret;
+}
+
+static ssize_t busmouse_write(struct file *file, const char *buffer, size_t count, loff_t *ppos)
+{
+ return -EINVAL;
+}
+
+static ssize_t busmouse_read(struct file *file, char *buffer, size_t count, loff_t *ppos)
+{
+ struct busmouse_data *mse = (struct busmouse_data *)file->private_data;
+ DECLARE_WAITQUEUE(wait, current);
+ int dxpos, dypos, buttons;
+
+ if (count < 3)
+ return -EINVAL;
+
+ spin_lock_irq(&mse->lock);
+
+ if (!mse->ready) {
+#ifdef BROKEN_MOUSE
+ spin_unlock_irq(&mse->lock);
+ return -EAGAIN;
+#else
+ if (file->f_flags & O_NONBLOCK) {
+ spin_unlock_irq(&mse->lock);
+ return -EAGAIN;
+ }
+
+ add_wait_queue(&mse->wait, &wait);
+repeat:
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (!mse->ready && !signal_pending(current)) {
+ spin_unlock_irq(&mse->lock);
+ schedule();
+ spin_lock_irq(&mse->lock);
+ goto repeat;
+ }
+
+ current->state = TASK_RUNNING;
+ remove_wait_queue(&mse->wait, &wait);
+
+ if (signal_pending(current)) {
+ spin_unlock_irq(&mse->lock);
+ return -ERESTARTSYS;
+ }
+#endif
+ }
+
+ dxpos = mse->dxpos;
+ dypos = mse->dypos;
+ buttons = mse->buttons;
+
+ if (dxpos < -127)
+ dxpos = -127;
+ if (dxpos > 127)
+ dxpos = 127;
+ if (dypos < -127)
+ dypos = -127;
+ if (dypos > 127)
+ dypos = 127;
+
+ mse->dxpos -= dxpos;
+ mse->dypos -= dypos;
+
+ /* This is something that many drivers have apparently
+ * forgotten... If the X and Y positions still contain
+ * information, we still have some info ready for the
+ * user program...
+ */
+ mse->ready = mse->dxpos || mse->dypos;
+
+ spin_unlock_irq(&mse->lock);
+
+ /* Write out data to the user. Format is:
+ * byte 0 - identifier (0x80) and (inverted) mouse buttons
+ * byte 1 - X delta position +/- 127
+ * byte 2 - Y delta position +/- 127
+ */
+ if (put_user((char)buttons | 128, buffer) ||
+ put_user((char)dxpos, buffer + 1) ||
+ put_user((char)dypos, buffer + 2))
+ return -EFAULT;
+
+ if (count > 3 && clear_user(buffer + 3, count - 3))
+ return -EFAULT;
+
+ file->f_dentry->d_inode->i_atime = CURRENT_TIME;
+
+ return count;
+}
+
+/* No kernel lock held - fine */
+static unsigned int busmouse_poll(struct file *file, poll_table *wait)
+{
+ struct busmouse_data *mse = (struct busmouse_data *)file->private_data;
+
+ poll_wait(file, &mse->wait, wait);
+
+ if (mse->ready)
+ return POLLIN | POLLRDNORM;
+
+ return 0;
+}
+
+struct file_operations busmouse_fops=
+{
+ .owner = THIS_MODULE,
+ .read = busmouse_read,
+ .write = busmouse_write,
+ .poll = busmouse_poll,
+ .open = busmouse_open,
+ .release = busmouse_release,
+ .fasync = busmouse_fasync,
+};
+
+/**
+ * register_busmouse - register a bus mouse interface
+ * @ops: busmouse structure for the mouse
+ *
+ * Registers a mouse with the driver. Returns the mouse number on
+ * success and a negative errno code on error. The passed ops
+ * structure must not be freed until the mouse is unregistered.
+ */
+
+int register_busmouse(struct busmouse *ops)
+{
+ unsigned int msedev = MINOR_TO_MOUSE(ops->minor);
+ struct busmouse_data *mse;
+ int ret = -EINVAL;
+
+ if (msedev >= NR_MICE) {
+ printk(KERN_ERR "busmouse: trying to allocate mouse on minor %d\n",
+ ops->minor);
+ goto out;
+ }
+
+ ret = -ENOMEM;
+ mse = kmalloc(sizeof(*mse), GFP_KERNEL);
+ if (!mse)
+ goto out;
+
+ down(&mouse_sem);
+ ret = -EBUSY;
+ if (busmouse_data[msedev])
+ goto freemem;
+
+ memset(mse, 0, sizeof(*mse));
+
+ mse->miscdev.minor = ops->minor;
+ mse->miscdev.name = ops->name;
+ mse->miscdev.fops = &busmouse_fops;
+ mse->ops = ops;
+ spin_lock_init(&mse->lock);
+ init_waitqueue_head(&mse->wait);
+
+ ret = misc_register(&mse->miscdev);
+
+ if (ret < 0)
+ goto freemem;
+
+ busmouse_data[msedev] = mse;
+ ret = msedev;
+out:
+ up(&mouse_sem);
+ return ret;
+
+
+freemem:
+ kfree(mse);
+ goto out;
+}
+
+/**
+ * unregister_busmouse - unregister a bus mouse interface
+ * @mousedev: Mouse number to release
+ *
+ * Unregister a previously installed mouse handler. The mousedev
+ * passed is the return code from a previous call to register_busmouse
+ */
+
+
+int unregister_busmouse(int mousedev)
+{
+ int err = -EINVAL;
+
+ if (mousedev < 0)
+ return 0;
+ if (mousedev >= NR_MICE) {
+ printk(KERN_ERR "busmouse: trying to free mouse on"
+ " mousedev %d\n", mousedev);
+ return -EINVAL;
+ }
+
+ down(&mouse_sem);
+
+ if (!busmouse_data[mousedev]) {
+ printk(KERN_WARNING "busmouse: trying to free free mouse"
+ " on mousedev %d\n", mousedev);
+ goto fail;
+ }
+
+ if (busmouse_data[mousedev]->active) {
+ printk(KERN_ERR "busmouse: trying to free active mouse"
+ " on mousedev %d\n", mousedev);
+ goto fail;
+ }
+
+ err = misc_deregister(&busmouse_data[mousedev]->miscdev);
+
+ kfree(busmouse_data[mousedev]);
+ busmouse_data[mousedev] = NULL;
+fail:
+ up(&mouse_sem);
+ return err;
+}
+
+EXPORT_SYMBOL(busmouse_add_movementbuttons);
+EXPORT_SYMBOL(busmouse_add_movement);
+EXPORT_SYMBOL(busmouse_add_buttons);
+EXPORT_SYMBOL(register_busmouse);
+EXPORT_SYMBOL(unregister_busmouse);
+
+MODULE_ALIAS_MISCDEV(BUSMOUSE_MINOR);
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/char/busmouse.h
+ *
+ * Copyright (C) 1995 - 1998 Russell King
+ *
+ * Prototypes for generic busmouse interface
+ */
+#ifndef BUSMOUSE_H
+#define BUSMOUSE_H
+
+struct busmouse {
+ int minor;
+ const char *name;
+ struct module *owner;
+ int (*open)(struct inode * inode, struct file * file);
+ int (*release)(struct inode * inode, struct file * file);
+ int init_button_state;
+};
+
+extern void busmouse_add_movementbuttons(int mousedev, int dx, int dy, int buttons);
+extern void busmouse_add_movement(int mousedev, int dx, int dy);
+extern void busmouse_add_buttons(int mousedev, int clear, int eor);
+
+extern int register_busmouse(struct busmouse *ops);
+extern int unregister_busmouse(int mousedev);
+
+#endif
--- /dev/null
+#
+# Makefile for the dump device drivers.
+#
+
+dump-y := dump_setup.o dump_fmt.o dump_filters.o dump_scheme.o dump_execute.o
+dump-$(CONFIG_X86) += dump_i386.o
+dump-$(CONFIG_ARM) += dump_arm.o
+dump-$(CONFIG_PPC64) += dump_ppc64.o
+dump-$(CONFIG_CRASH_DUMP_MEMDEV) += dump_memdev.o dump_overlay.o
+dump-objs += $(dump-y)
+
+obj-$(CONFIG_CRASH_DUMP) += dump.o
+obj-$(CONFIG_CRASH_DUMP_BLOCKDEV) += dump_blockdev.o
+obj-$(CONFIG_CRASH_DUMP_NETDEV) += dump_netdev.o
+obj-$(CONFIG_CRASH_DUMP_COMPRESS_RLE) += dump_rle.o
+obj-$(CONFIG_CRASH_DUMP_COMPRESS_GZIP) += dump_gzip.o
--- /dev/null
+/*
+ * Architecture specific (ARM/XScale) functions for Linux crash dumps.
+ *
+ * Created by: Fleming Feng (fleming.feng@intel.com)
+ *
+ * Copyright(C) 2003 Intel Corp. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * The hooks for dumping the kernel virtual memory to disk are in this
+ * file. Any time a modification is made to the virtual memory mechanism,
+ * these routines must be changed to use the new mechanisms.
+ */
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/fs.h>
+#include <linux/vmalloc.h>
+#include <linux/dump.h>
+#include <linux/mm.h>
+#include <asm/processor.h>
+#include <asm/hardirq.h>
+#include <asm/kdebug.h>
+
+static __s32 saved_irq_count; /* saved preempt_count() flags */
+
+static int alloc_dha_stack(void)
+{
+ int i;
+ void *ptr;
+
+ if (dump_header_asm.dha_stack[0])
+ return 0;
+
+ ptr = vmalloc(THREAD_SIZE * num_online_cpus());
+ if (!ptr) {
+ printk(KERN_ERR "vmalloc for dha_stacks failed\n");
+ return -ENOMEM;
+ }
+
+ for( i = 0; i < num_online_cpus(); i++){
+ dump_header_asm.dha_stack[i] = (u32)((unsigned long)ptr +
+ (i * THREAD_SIZE));
+ }
+
+ return 0;
+}
+
+static int free_dha_stack(void)
+{
+ if (dump_header_asm.dha_stack[0]){
+ vfree((void*)dump_header_asm.dha_stack[0]);
+ dump_header_asm.dha_stack[0] = 0;
+ }
+ return 0;
+}
+
+void __dump_save_regs(struct pt_regs* dest_regs, const struct pt_regs* regs)
+{
+
+ /* Because the ARM version stores _dump_regs_t, rather than
+ * pt_regs, in dump_header_asm, while this function is declared
+ * in the architecture-independent header include/linux/dump.h,
+ * the size of the block copied is not sizeof(struct pt_regs).
+ */
+
+ memcpy(dest_regs, regs, sizeof(_dump_regs_t));
+
+}
+
+#ifdef CONFIG_SMP
+/* FIXME: Reserved for possible future use on SMP systems based
+ * on ARM/XScale. No such systems exist at present, so this is
+ * a stub.
+ */
+/* save registers on other processor */
+void
+__dump_save_other_cpus(void)
+{
+
+ /* Dummy now! */
+
+ return;
+
+}
+#else /* !CONFIG_SMP */
+#define save_other_cpu_state() do { } while (0)
+#endif /* !CONFIG_SMP */
+
+/*
+ * Kludge - dump from interrupt context is unreliable (Fixme)
+ *
+ * We do this so that softirqs initiated for dump i/o
+ * get processed and we don't hang while waiting for i/o
+ * to complete or in any irq synchronization attempt.
+ *
+ * This is not quite legal of course, as it has the side
+ * effect of making all interrupts & softirqs triggered
+ * while dump is in progress complete before currently
+ * pending softirqs and the currently executing interrupt
+ * code.
+ */
+static inline void
+irq_bh_save(void)
+{
+ saved_irq_count = irq_count();
+ preempt_count() &= ~(HARDIRQ_MASK|SOFTIRQ_MASK);
+}
+
+static inline void
+irq_bh_restore(void)
+{
+ preempt_count() |= saved_irq_count;
+}
+
+/*
+ * Name: __dump_irq_enable
+ * Func: Reset system so interrupts are enabled.
+ * This is used for dump methods that require interrupts.
+ * Eventually, all methods will have interrupts disabled
+ * and this code can be removed.
+ *
+ * Re-enable interrupts
+ */
+int
+__dump_irq_enable(void)
+{
+ irq_bh_save();
+ local_irq_enable();
+ return 0;
+}
+
+/* Name: __dump_irq_restore
+ * Func: Resume the system state in an architecture-specific way.
+ */
+void
+__dump_irq_restore(void)
+{
+ local_irq_disable();
+ irq_bh_restore();
+}
+
+
+/*
+ * Name: __dump_configure_header()
+ * Func: Meant to fill in arch specific header fields except per-cpu state
+ * already captured in dump_lcrash_configure_header.
+ */
+int
+__dump_configure_header(const struct pt_regs *regs)
+{
+ return 0;
+}
+
+/*
+ * Name: dump_die_event
+ * Func: Called from notify_die
+ */
+static int dump_die_event(struct notifier_block* this,
+ unsigned long event,
+ void* arg)
+{
+ const struct die_args* args = (const struct die_args*)arg;
+
+ switch(event){
+ case DIE_PANIC:
+ case DIE_OOPS:
+ case DIE_WATCHDOG:
+ dump_execute(args->str, args->regs);
+ break;
+ }
+ return NOTIFY_DONE;
+
+}
+
+static struct notifier_block dump_die_block = {
+ .notifier_call = dump_die_event,
+};
+
+/* Name: __dump_init()
+ * Func: Initialize the dumping routine process.
+ */
+void
+__dump_init(uint64_t local_memory_start)
+{
+ /* hook into PANIC and OOPS */
+ register_die_notifier(&dump_die_block);
+}
+
+/*
+ * Name: __dump_open()
+ * Func: Open the dump device (architecture specific). This is in
+ * case it's necessary in the future.
+ */
+void
+__dump_open(void)
+{
+
+ alloc_dha_stack();
+
+ return;
+}
+
+/*
+ * Name: __dump_cleanup()
+ * Func: Free any architecture specific data structures. This is called
+ * when the dump module is being removed.
+ */
+void
+__dump_cleanup(void)
+{
+ free_dha_stack();
+ unregister_die_notifier(&dump_die_block);
+}
+
+/*
+ * Name: __dump_page_valid()
+ * Func: Check if page is valid to dump.
+ */
+int
+__dump_page_valid(unsigned long index)
+{
+ if(!pfn_valid(index))
+ return 0;
+ else
+ return 1;
+}
+
+/*
+ * Name: manual_handle_crashdump
+ * Func: Interface for the lkcd dump command. Calls dump_execute()
+ */
+int
+manual_handle_crashdump(void)
+{
+ _dump_regs_t regs;
+
+ get_current_general_regs(&regs);
+ get_current_cp14_regs(&regs);
+ get_current_cp15_regs(&regs);
+ dump_execute("manual", &regs);
+ return 0;
+}
--- /dev/null
+/*
+ * Implements the dump driver interface for saving a dump to
+ * a block device through the kernel's generic low level block i/o
+ * routines.
+ *
+ * Started: June 2002 - Mohamed Abbas <mohamed.abbas@intel.com>
+ * Moved original lkcd kiobuf dump i/o code from dump_base.c
+ * to use generic dump device interfaces
+ *
+ * Sept 2002 - Bharata B. Rao <bharata@in.ibm.com>
+ * Convert dump i/o to directly use bio instead of kiobuf for 2.5
+ *
+ * Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ * Rework to new dumpdev.h structures, implement open/close/
+ * silence, misc fixes (blocknr removal, bio_add_page usage)
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#include <linux/types.h>
+#include <linux/proc_fs.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/bio.h>
+#include <asm/hardirq.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+
+extern void *dump_page_buf;
+
+/* The end_io callback for dump i/o completion */
+static int
+dump_bio_end_io(struct bio *bio, unsigned int bytes_done, int error)
+{
+ struct dump_blockdev *dump_bdev;
+
+ if (bio->bi_size) {
+ /* some bytes still left to transfer */
+ return 1; /* not complete */
+ }
+
+ dump_bdev = (struct dump_blockdev *)bio->bi_private;
+ if (error) {
+ printk("IO error while writing the dump, aborting\n");
+ }
+
+ dump_bdev->err = error;
+
+ /* no wakeup needed, since caller polls for completion */
+ return 0;
+}
+
+/* Check if the dump bio is already mapped to the specified buffer */
+static int
+dump_block_map_valid(struct dump_blockdev *dev, struct page *page,
+ int len)
+{
+ struct bio *bio = dev->bio;
+ unsigned long bsize = 0;
+
+ if (!bio->bi_vcnt)
+ return 0; /* first time, not mapped */
+
+
+ if ((bio_page(bio) != page) || (len > bio->bi_vcnt << PAGE_SHIFT))
+ return 0; /* buffer not mapped */
+
+ bsize = bdev_hardsect_size(bio->bi_bdev);
+ if ((len & (PAGE_SIZE - 1)) || (len & (bsize - 1)))
+ return 0; /* alignment checks needed */
+
+ /* quick check to decide if we need to redo bio_add_page */
+ if (bdev_get_queue(bio->bi_bdev)->merge_bvec_fn)
+ return 0; /* device may have other restrictions */
+
+ return 1; /* already mapped */
+}
+
+/*
+ * Set up the dump bio for i/o from the specified buffer
+ * Return value indicates whether the full buffer could be mapped or not
+ */
+static int
+dump_block_map(struct dump_blockdev *dev, void *buf, int len)
+{
+ struct page *page = virt_to_page(buf);
+ struct bio *bio = dev->bio;
+ unsigned long bsize = 0;
+
+ bio->bi_bdev = dev->bdev;
+ bio->bi_sector = (dev->start_offset + dev->ddev.curr_offset) >> 9;
+ bio->bi_idx = 0; /* reset index to the beginning */
+
+ if (dump_block_map_valid(dev, page, len)) {
+ /* already mapped and usable right away */
+ bio->bi_size = len; /* reset size to the whole bio */
+ } else {
+ /* need to map the bio */
+ bio->bi_size = 0;
+ bio->bi_vcnt = 0;
+ bsize = bdev_hardsect_size(bio->bi_bdev);
+
+ /* first a few sanity checks */
+ if (len < bsize) {
+ printk("map: len less than hardsect size\n");
+ return -EINVAL;
+ }
+
+ if ((unsigned long)buf & (bsize - 1)) {
+ printk("map: buffer not sector aligned\n");
+ return -EINVAL;
+ }
+
+ /* assume a contiguous, page-aligned low-memory buffer (no vmalloc) */
+ if ((page_address(page) != buf) || (len & (PAGE_SIZE - 1))) {
+ printk("map: invalid buffer alignment!\n");
+ return -EINVAL;
+ }
+ /* finally we can go ahead and map it */
+ while (bio->bi_size < len) {
+ if (bio_add_page(bio, page++, PAGE_SIZE, 0) == 0)
+ break;
+ }
+
+ bio->bi_end_io = dump_bio_end_io;
+ bio->bi_private = dev;
+ }
+
+ if (bio->bi_size != len) {
+ printk("map: bio size = %d not enough for len = %d!\n",
+ bio->bi_size, len);
+ return -E2BIG;
+ }
+ return 0;
+}
+
+static void
+dump_free_bio(struct bio *bio)
+{
+ if (bio)
+ kfree(bio->bi_io_vec);
+ kfree(bio);
+}
+
+/*
+ * Prepares the dump device so we can take a dump later.
+ * The caller is expected to have filled up the dev_id field in the
+ * block dump dev structure.
+ *
+ * At dump time when dump_block_write() is invoked it will be too
+ * late to recover, so as far as possible make sure obvious errors
+ * get caught right here and reported back to the caller.
+ */
+static int
+dump_block_open(struct dump_dev *dev, unsigned long arg)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+ struct block_device *bdev;
+ int retval = 0;
+ struct bio_vec *bvec;
+
+ /* make sure this is a valid block device */
+ if (!arg) {
+ retval = -EINVAL;
+ goto err;
+ }
+
+ /* Convert it to the new dev_t format */
+ arg = MKDEV((arg >> OLDMINORBITS), (arg & OLDMINORMASK));
+
+ /* get a corresponding block_dev struct for this */
+ bdev = bdget((dev_t)arg);
+ if (!bdev) {
+ retval = -ENODEV;
+ goto err;
+ }
+
+ /* get the block device opened */
+ if ((retval = blkdev_get(bdev, O_RDWR | O_LARGEFILE, 0))) {
+ goto err1;
+ }
+
+ if ((dump_bdev->bio = kmalloc(sizeof(struct bio), GFP_KERNEL))
+ == NULL) {
+ printk("Cannot allocate bio\n");
+ retval = -ENOMEM;
+ goto err2;
+ }
+
+ bio_init(dump_bdev->bio);
+
+ if ((bvec = kmalloc(sizeof(struct bio_vec) *
+ (DUMP_BUFFER_SIZE >> PAGE_SHIFT), GFP_KERNEL)) == NULL) {
+ retval = -ENOMEM;
+ goto err3;
+ }
+
+ /* assign the new dump dev structure */
+ dump_bdev->dev_id = (dev_t)arg;
+ dump_bdev->bdev = bdev;
+
+ /* make a note of the limit */
+ dump_bdev->limit = bdev->bd_inode->i_size;
+
+ /* now make sure we can map the dump buffer */
+ dump_bdev->bio->bi_io_vec = bvec;
+ dump_bdev->bio->bi_max_vecs = DUMP_BUFFER_SIZE >> PAGE_SHIFT;
+
+ retval = dump_block_map(dump_bdev, dump_config.dumper->dump_buf,
+ DUMP_BUFFER_SIZE);
+
+ if (retval) {
+ printk("open: dump_block_map failed, ret %d\n", retval);
+ goto err3;
+ }
+
+ printk("Block device (%d,%d) successfully configured for dumping\n",
+ MAJOR(dump_bdev->dev_id),
+ MINOR(dump_bdev->dev_id));
+
+
+ /* after opening the block device, return */
+ return retval;
+
+err3: dump_free_bio(dump_bdev->bio);
+ dump_bdev->bio = NULL;
+err2: if (bdev) blkdev_put(bdev);
+ goto err;
+err1: if (bdev) bdput(bdev);
+ dump_bdev->bdev = NULL;
+err: return retval;
+}
+
+/*
+ * Close the dump device and release associated resources
+ * Invoked when unconfiguring the dump device.
+ */
+static int
+dump_block_release(struct dump_dev *dev)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+
+ /* release earlier bdev if present */
+ if (dump_bdev->bdev) {
+ blkdev_put(dump_bdev->bdev);
+ dump_bdev->bdev = NULL;
+ }
+
+ dump_free_bio(dump_bdev->bio);
+ dump_bdev->bio = NULL;
+
+ return 0;
+}
+
+
+/*
+ * Prepare the dump device for use (silence any ongoing activity
+ * and quiesce state) when the system crashes.
+ */
+static int
+dump_block_silence(struct dump_dev *dev)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+ struct request_queue *q = bdev_get_queue(dump_bdev->bdev);
+ int ret;
+
+ /* If we can't get request queue lock, refuse to take the dump */
+ if (!spin_trylock(q->queue_lock))
+ return -EBUSY;
+
+ ret = elv_queue_empty(q);
+ spin_unlock(q->queue_lock);
+
+ /* For now we assume we have the device to ourselves */
+ /* Just a quick sanity check */
+ if (!ret) {
+ /* Warn the user and move on */
+ printk(KERN_ALERT "Warning: Non-empty request queue\n");
+ printk(KERN_ALERT "I/O requests in flight at dump time\n");
+ }
+
+ /*
+ * Move to a softer level of silencing where no spin_lock_irqs
+ * are held on other cpus
+ */
+ dump_silence_level = DUMP_SOFT_SPIN_CPUS;
+
+ ret = __dump_irq_enable();
+ if (ret) {
+ return ret;
+ }
+
+ printk("Dumping to block device (%d,%d) on CPU %d ...\n",
+ MAJOR(dump_bdev->dev_id), MINOR(dump_bdev->dev_id),
+ smp_processor_id());
+
+ return 0;
+}
+
+/*
+ * Invoked when dumping is done. This is the time to put things back
+ * (i.e. undo the effects of dump_block_silence) so the device is
+ * available for normal use.
+ */
+static int
+dump_block_resume(struct dump_dev *dev)
+{
+ __dump_irq_restore();
+ return 0;
+}
+
+
+/*
+ * Seek to the specified offset in the dump device.
+ * Makes sure this is a valid offset, otherwise returns an error.
+ */
+static int
+dump_block_seek(struct dump_dev *dev, loff_t off)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+ loff_t offset = off + dump_bdev->start_offset;
+
+ if (offset & (PAGE_SIZE - 1)) {
+ printk("seek: non-page aligned\n");
+ return -EINVAL;
+ }
+
+ if (offset & (bdev_hardsect_size(dump_bdev->bdev) - 1)) {
+ printk("seek: not sector aligned\n");
+ return -EINVAL;
+ }
+
+ if (offset > dump_bdev->limit) {
+ printk("seek: not enough space left on device!\n");
+ return -ENOSPC;
+ }
+ dev->curr_offset = off;
+ return 0;
+}
+
+/*
+ * Write out a buffer after checking the device limitations,
+ * sector sizes, etc. Assumes the buffer is in directly mapped
+ * kernel address space (not vmalloc'ed).
+ *
+ * Returns: number of bytes written or -ERRNO.
+ */
+static int
+dump_block_write(struct dump_dev *dev, void *buf,
+ unsigned long len)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+ loff_t offset = dev->curr_offset + dump_bdev->start_offset;
+ int retval = -ENOSPC;
+
+ if (offset >= dump_bdev->limit) {
+ printk("write: not enough space left on device!\n");
+ goto out;
+ }
+
+ /* don't write more blocks than our max limit */
+ if (offset + len > dump_bdev->limit)
+ len = dump_bdev->limit - offset;
+
+
+ retval = dump_block_map(dump_bdev, buf, len);
+ if (retval){
+ printk("write: dump_block_map failed! err %d\n", retval);
+ goto out;
+ }
+
+ /*
+ * Write out the data to disk.
+ * Assumes the entire buffer mapped to a single bio, which we can
+ * submit and wait for io completion. In the future, may consider
+ * increasing the dump buffer size and submitting multiple bios
+ * for better throughput.
+ */
+ dump_bdev->err = -EAGAIN;
+ submit_bio(WRITE, dump_bdev->bio);
+
+ dump_bdev->ddev.curr_offset += len;
+ retval = len;
+ out:
+ return retval;
+}
+
+/*
+ * Name: dump_block_ready()
+ * Func: check if the last dump i/o is over and ready for next request
+ */
+static int
+dump_block_ready(struct dump_dev *dev, void *buf)
+{
+ struct dump_blockdev *dump_bdev = DUMP_BDEV(dev);
+ request_queue_t *q = bdev_get_queue(dump_bdev->bio->bi_bdev);
+
+ /* check for io completion */
+ if (dump_bdev->err == -EAGAIN) {
+ q->unplug_fn(q);
+ return -EAGAIN;
+ }
+
+ if (dump_bdev->err) {
+ printk("dump i/o err\n");
+ return dump_bdev->err;
+ }
+
+ return 0;
+}
+
+
+struct dump_dev_ops dump_blockdev_ops = {
+ .open = dump_block_open,
+ .release = dump_block_release,
+ .silence = dump_block_silence,
+ .resume = dump_block_resume,
+ .seek = dump_block_seek,
+ .write = dump_block_write,
+ /* .read not implemented */
+ .ready = dump_block_ready
+};
+
+static struct dump_blockdev default_dump_blockdev = {
+ .ddev = {.type_name = "blockdev", .ops = &dump_blockdev_ops,
+ .curr_offset = 0},
+ /*
+ * leave enough room for the longest swap header possibly
+ * written by mkswap (likely the largest page size supported
+ * by the arch)
+ */
+ .start_offset = DUMP_HEADER_OFFSET,
+ .err = 0
+ /* assume the rest of the fields are zeroed by default */
+};
+
+struct dump_blockdev *dump_blockdev = &default_dump_blockdev;
+
+static int __init
+dump_blockdev_init(void)
+{
+ if (dump_register_device(&dump_blockdev->ddev) < 0) {
+ printk("block device driver registration failed\n");
+ return -1;
+ }
+
+ printk("block device driver for LKCD registered\n");
+ return 0;
+}
+
+static void __exit
+dump_blockdev_cleanup(void)
+{
+ dump_unregister_device(&dump_blockdev->ddev);
+ printk("block device driver for LKCD unregistered\n");
+}
+
+MODULE_AUTHOR("LKCD Development Team <lkcd-devel@lists.sourceforge.net>");
+MODULE_DESCRIPTION("Block Dump Driver for Linux Kernel Crash Dump (LKCD)");
+MODULE_LICENSE("GPL");
+
+module_init(dump_blockdev_init);
+module_exit(dump_blockdev_cleanup);
--- /dev/null
+/*
+ * The file has the common/generic dump execution code
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ * Split and rewrote high level dump execute code to make use
+ * of dump method interfaces.
+ *
+ * Derived from original code in dump_base.c created by
+ * Matt Robinson <yakker@sourceforge.net>
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * Assumes dumper and dump config settings are in place
+ * (invokes corresponding dumper specific routines as applicable)
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+#include <linux/kernel.h>
+#include <linux/notifier.h>
+#include <linux/dump.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include "dump_methods.h"
+
+struct notifier_block *dump_notifier_list; /* dump started/ended callback */
+
+extern int panic_timeout;
+
+/* Dump progress indicator */
+void
+dump_speedo(int i)
+{
+ static const char twiddle[4] = { '|', '\\', '-', '/' };
+ printk("%c\b", twiddle[i&3]);
+}
+
+/* Make the device ready and write out the header */
+int dump_begin(void)
+{
+ int err = 0;
+
+ /* dump_dev = dump_config.dumper->dev; */
+ dumper_reset();
+ if ((err = dump_dev_silence())) {
+ /* quiesce failed, can't risk continuing */
+ /* Todo/Future: switch to alternate dump scheme if possible */
+ printk("dump silence dev failed ! error %d\n", err);
+ return err;
+ }
+
+ pr_debug("Writing dump header\n");
+ if ((err = dump_update_header())) {
+ printk("dump update header failed ! error %d\n", err);
+ dump_dev_resume();
+ return err;
+ }
+
+ dump_config.dumper->curr_offset = DUMP_BUFFER_SIZE;
+
+ return 0;
+}
+
+/*
+ * Write the dump terminator, a final header update and let go of
+ * exclusive use of the device for dump.
+ */
+int dump_complete(void)
+{
+ int ret = 0;
+
+ if (dump_config.level != DUMP_LEVEL_HEADER) {
+ if ((ret = dump_update_end_marker())) {
+ printk("dump update end marker error %d\n", ret);
+ }
+ if ((ret = dump_update_header())) {
+ printk("dump update header error %d\n", ret);
+ }
+ }
+ ret = dump_dev_resume();
+
+ if ((panic_timeout > 0) && !(dump_config.flags &
+ (DUMP_FLAGS_SOFTBOOT | DUMP_FLAGS_NONDISRUPT))) {
+ printk(KERN_EMERG "Rebooting in %d seconds..", panic_timeout);
+#ifdef CONFIG_SMP
+ smp_send_stop();
+#endif
+ mdelay(panic_timeout * 1000);
+ machine_restart(NULL);
+ }
+
+ return ret;
+}
+
+/* Saves all dump data */
+int dump_execute_savedump(void)
+{
+ int ret = 0, err = 0;
+
+ if ((ret = dump_begin())) {
+ return ret;
+ }
+
+ if (dump_config.level != DUMP_LEVEL_HEADER) {
+ ret = dump_sequencer();
+ }
+ if ((err = dump_complete())) {
+ printk("Dump complete failed. Error %d\n", err);
+ }
+
+ return ret;
+}
+
+extern void dump_calc_bootmap_pages(void);
+
+/* Does all the real work: Capture and save state */
+int dump_generic_execute(const char *panic_str, const struct pt_regs *regs)
+{
+ int ret = 0;
+
+ if ((ret = dump_configure_header(panic_str, regs))) {
+ printk("dump config header failed ! error %d\n", ret);
+ return ret;
+ }
+
+ dump_calc_bootmap_pages();
+ /* tell interested parties that a dump is about to start */
+ notifier_call_chain(&dump_notifier_list, DUMP_BEGIN,
+ &dump_config.dump_device);
+
+ if (dump_config.level != DUMP_LEVEL_NONE)
+ ret = dump_execute_savedump();
+
+ pr_debug("dumped %ld blocks of %d bytes each\n",
+ dump_config.dumper->count, DUMP_BUFFER_SIZE);
+
+ /* tell interested parties that a dump has completed */
+ notifier_call_chain(&dump_notifier_list, DUMP_END,
+ &dump_config.dump_device);
+
+ return ret;
+}
--- /dev/null
+/*
+ * Default filters to select data to dump for various passes.
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ * Split and rewrote default dump selection logic to generic dump
+ * method interfaces
+ * Derived from a portion of dump_base.c created by
+ * Matt Robinson <yakker@sourceforge.net>
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * Used during single-stage dumping and during stage 1 of the 2-stage scheme
+ * (Stage 2 of the 2-stage scheme uses the fully transparent filters
+ * i.e. passthru filters in dump_overlay.c)
+ *
+ * Future: Custom selective dump may involve a different set of filters.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#include <linux/kernel.h>
+#include <linux/bootmem.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+
+#define DUMP_PFN_SAFETY_MARGIN 1024 /* 4 MB */
+static unsigned long bootmap_pages;
+
+/* Copied from mm/bootmem.c - FIXME */
+/* return the number of _pages_ that will be allocated for the boot bitmap */
+void dump_calc_bootmap_pages (void)
+{
+ unsigned long mapsize;
+ unsigned long pages = num_physpages;
+
+ mapsize = (pages+7)/8;
+ mapsize = (mapsize + ~PAGE_MASK) & PAGE_MASK;
+ mapsize >>= PAGE_SHIFT;
+ bootmap_pages = mapsize + DUMP_PFN_SAFETY_MARGIN + 1;
+}
+
+
+/* temporary */
+extern unsigned long min_low_pfn;
+
+
+int dump_low_page(struct page *p)
+{
+ return ((page_to_pfn(p) >= min_low_pfn) &&
+ (page_to_pfn(p) < (min_low_pfn + bootmap_pages)));
+}
+
+static inline int kernel_page(struct page *p)
+{
+ /* FIXME: Need to exclude hugetlb pages. Clue: reserved but inuse */
+ return (PageReserved(p) && !PageInuse(p)) ||
+ (!PageLRU(p) && PageInuse(p));
+}
+
+static inline int user_page(struct page *p)
+{
+ return PageInuse(p) && (!PageReserved(p) && PageLRU(p));
+}
+
+static inline int unreferenced_page(struct page *p)
+{
+ return !PageInuse(p) && !PageReserved(p);
+}
+
+
+/* loc marks the beginning of a range of pages */
+int dump_filter_kernpages(int pass, unsigned long loc, unsigned long sz)
+{
+ struct page *page = (struct page *)loc;
+ /* if any of the pages is a kernel page, select this set */
+ while (sz) {
+ if (dump_low_page(page) || kernel_page(page))
+ return 1;
+ sz -= PAGE_SIZE;
+ page++;
+ }
+ return 0;
+}
+
+
+/* loc marks the beginning of a range of pages */
+int dump_filter_userpages(int pass, unsigned long loc, unsigned long sz)
+{
+ struct page *page = (struct page *)loc;
+ int ret = 0;
+ /* select if the set has any user page, and no kernel pages */
+ while (sz) {
+ if (user_page(page) && !dump_low_page(page)) {
+ ret = 1;
+ } else if (kernel_page(page) || dump_low_page(page)) {
+ return 0;
+ }
+ page++;
+ sz -= PAGE_SIZE;
+ }
+ return ret;
+}
+
+
+
+/* loc marks the beginning of a range of pages */
+int dump_filter_unusedpages(int pass, unsigned long loc, unsigned long sz)
+{
+ struct page *page = (struct page *)loc;
+
+ /* select if the set does not have any used pages */
+ while (sz) {
+ if (!unreferenced_page(page) || dump_low_page(page)) {
+ return 0;
+ }
+ page++;
+ sz -= PAGE_SIZE;
+ }
+ return 1;
+}
+
+/* dummy: last (non-existent) pass */
+int dump_filter_none(int pass, unsigned long loc, unsigned long sz)
+{
+ return 0;
+}
+
+/* TBD: resolve level bitmask ? */
+struct dump_data_filter dump_filter_table[] = {
+ { .name = "kern", .selector = dump_filter_kernpages,
+ .level_mask = DUMP_MASK_KERN},
+ { .name = "user", .selector = dump_filter_userpages,
+ .level_mask = DUMP_MASK_USED},
+ { .name = "unused", .selector = dump_filter_unusedpages,
+ .level_mask = DUMP_MASK_UNUSED},
+ { .name = "none", .selector = dump_filter_none,
+ .level_mask = DUMP_MASK_REST},
+ { .name = "", .selector = NULL, .level_mask = 0}
+};
+
--- /dev/null
+/*
+ * Implements the routines which handle the format specific
+ * aspects of dump for the default dump format.
+ *
+ * Used in single stage dumping and stage 1 of soft-boot based dumping
+ * Saves data in LKCD (lcrash) format
+ *
+ * Previously a part of dump_base.c
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ * Split off and reshuffled LKCD dump format code around generic
+ * dump method interfaces.
+ *
+ * Derived from original code created by
+ * Matt Robinson <yakker@sourceforge.net>
+ *
+ * Contributions from SGI, IBM, HP, MCL, and others.
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2000 - 2002 TurboLinux, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/time.h>
+#include <linux/sched.h>
+#include <linux/ptrace.h>
+#include <linux/utsname.h>
+#include <asm/dump.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+
+/*
+ * SYSTEM DUMP LAYOUT
+ *
+ * System dumps are currently the combination of a dump header and a set
+ * of data pages which contain the system memory. The layout of the dump
+ * (for full dumps) is as follows:
+ *
+ * +-----------------------------+
+ * | generic dump header |
+ * +-----------------------------+
+ * | architecture dump header |
+ * +-----------------------------+
+ * | page header |
+ * +-----------------------------+
+ * | page data |
+ * +-----------------------------+
+ * | page header |
+ * +-----------------------------+
+ * | page data |
+ * +-----------------------------+
+ * | | |
+ * | | |
+ * | | |
+ * | | |
+ * | V |
+ * +-----------------------------+
+ * | PAGE_END header |
+ * +-----------------------------+
+ *
+ * There are two dump headers, the first which is architecture
+ * independent, and the other which is architecture dependent. This
+ * allows different architectures to dump different data structures
+ * which are specific to their chipset, CPU, etc.
+ *
+ * After the dump headers come a succession of dump page headers along
+ * with dump pages. The page header contains information about the page
+ * size, any flags associated with the page (whether it's compressed or
+ * not), and the address of the page. After the page header is the page
+ * data, which is either compressed (or not). Each page of data is
+ * dumped in succession, until the final dump header (PAGE_END) is
+ * placed at the end of the dump, assuming the dump device isn't out
+ * of space.
+ *
+ * This mechanism allows for multiple compression types, different
+ * types of data structures, different page ordering, etc., etc., etc.
+ * It's a very straightforward mechanism for dumping system memory.
+ */
+
+struct __dump_header dump_header; /* the primary dump header */
+struct __dump_header_asm dump_header_asm; /* the arch-specific dump header */
+
+/*
+ * Set up common header fields (mainly the arch indep section)
+ * Per-cpu state is handled by lcrash_save_context
+ * Returns the size of the header in bytes.
+ */
+static int lcrash_init_dump_header(const char *panic_str)
+{
+ struct timeval dh_time;
+ unsigned long temp_dha_stack[DUMP_MAX_NUM_CPUS];
+ u64 temp_memsz = dump_header.dh_memory_size;
+
+ /* make sure the dump header isn't TOO big */
+ if ((sizeof(struct __dump_header) +
+ sizeof(struct __dump_header_asm)) > DUMP_BUFFER_SIZE) {
+ printk("lcrash_init_header(): combined "
+ "headers larger than DUMP_BUFFER_SIZE!\n");
+ return -E2BIG;
+ }
+
+ /* initialize the dump headers to zero */
+ /* preserve dha_stack, since it may hold pointers to stack snapshot buffers */
+ memcpy(&(temp_dha_stack[0]), &(dump_header_asm.dha_stack[0]),
+ DUMP_MAX_NUM_CPUS * sizeof(unsigned long));
+ memset(&dump_header, 0, sizeof(dump_header));
+ memset(&dump_header_asm, 0, sizeof(dump_header_asm));
+ dump_header.dh_memory_size = temp_memsz;
+ memcpy(&(dump_header_asm.dha_stack[0]), &(temp_dha_stack[0]),
+ DUMP_MAX_NUM_CPUS * sizeof(unsigned long));
+
+ /* configure dump header values */
+ dump_header.dh_magic_number = DUMP_MAGIC_NUMBER;
+ dump_header.dh_version = DUMP_VERSION_NUMBER;
+ dump_header.dh_memory_start = PAGE_OFFSET;
+ dump_header.dh_memory_end = DUMP_MAGIC_NUMBER;
+ dump_header.dh_header_size = sizeof(struct __dump_header);
+ dump_header.dh_page_size = PAGE_SIZE;
+ dump_header.dh_dump_level = dump_config.level;
+ dump_header.dh_current_task = (unsigned long) current;
+ dump_header.dh_dump_compress = dump_config.dumper->compress->
+ compress_type;
+ dump_header.dh_dump_flags = dump_config.flags;
+ dump_header.dh_dump_device = dump_config.dumper->dev->device_id;
+
+#if DUMP_DEBUG >= 6
+ dump_header.dh_num_bytes = 0;
+#endif
+ dump_header.dh_num_dump_pages = 0;
+ do_gettimeofday(&dh_time);
+ dump_header.dh_time.tv_sec = dh_time.tv_sec;
+ dump_header.dh_time.tv_usec = dh_time.tv_usec;
+
+ memcpy((void *)&(dump_header.dh_utsname_sysname),
+ (const void *)&(system_utsname.sysname), __NEW_UTS_LEN + 1);
+ memcpy((void *)&(dump_header.dh_utsname_nodename),
+ (const void *)&(system_utsname.nodename), __NEW_UTS_LEN + 1);
+ memcpy((void *)&(dump_header.dh_utsname_release),
+ (const void *)&(system_utsname.release), __NEW_UTS_LEN + 1);
+ memcpy((void *)&(dump_header.dh_utsname_version),
+ (const void *)&(system_utsname.version), __NEW_UTS_LEN + 1);
+ memcpy((void *)&(dump_header.dh_utsname_machine),
+ (const void *)&(system_utsname.machine), __NEW_UTS_LEN + 1);
+ memcpy((void *)&(dump_header.dh_utsname_domainname),
+ (const void *)&(system_utsname.domainname), __NEW_UTS_LEN + 1);
+
+ if (panic_str) {
+ memcpy((void *)&(dump_header.dh_panic_string),
+ (const void *)panic_str, DUMP_PANIC_LEN);
+ }
+
+ dump_header_asm.dha_magic_number = DUMP_ASM_MAGIC_NUMBER;
+ dump_header_asm.dha_version = DUMP_ASM_VERSION_NUMBER;
+ dump_header_asm.dha_header_size = sizeof(dump_header_asm);
+#ifdef CONFIG_ARM
+ dump_header_asm.dha_physaddr_start = PHYS_OFFSET;
+#endif
+
+ dump_header_asm.dha_smp_num_cpus = num_online_cpus();
+ pr_debug("smp_num_cpus in header %d\n",
+ dump_header_asm.dha_smp_num_cpus);
+
+ dump_header_asm.dha_dumping_cpu = smp_processor_id();
+
+ return sizeof(dump_header) + sizeof(dump_header_asm);
+}
+
+
+int dump_lcrash_configure_header(const char *panic_str,
+ const struct pt_regs *regs)
+{
+ int retval = 0;
+
+ dump_config.dumper->header_len = lcrash_init_dump_header(panic_str);
+
+ /* capture register states for all processors */
+ dump_save_this_cpu(regs);
+ __dump_save_other_cpus(); /* side effect:silence cpus */
+
+ /* configure architecture-specific dump header values */
+ if ((retval = __dump_configure_header(regs)))
+ return retval;
+
+ dump_config.dumper->header_dirty++;
+ return 0;
+}
+
+/* save register and task context */
+void dump_lcrash_save_context(int cpu, const struct pt_regs *regs,
+ struct task_struct *tsk)
+{
+ dump_header_asm.dha_smp_current_task[cpu] = (unsigned long)tsk;
+
+ __dump_save_regs(&dump_header_asm.dha_smp_regs[cpu], regs);
+
+ /* take a snapshot of the stack */
+ /* doing this enables us to tolerate slight drifts on this cpu */
+ if (dump_header_asm.dha_stack[cpu]) {
+ memcpy((void *)dump_header_asm.dha_stack[cpu],
+ tsk->thread_info, THREAD_SIZE);
+ }
+ dump_header_asm.dha_stack_ptr[cpu] = (unsigned long)(tsk->thread_info);
+}
+
+/* write out the header */
+int dump_write_header(void)
+{
+ int retval = 0, size;
+ void *buf = dump_config.dumper->dump_buf;
+
+ /* accounts for DUMP_HEADER_OFFSET if applicable */
+ if ((retval = dump_dev_seek(0))) {
+ printk("Unable to seek to dump header offset: %d\n",
+ retval);
+ return retval;
+ }
+
+ memcpy(buf, (void *)&dump_header, sizeof(dump_header));
+ size = sizeof(dump_header);
+ memcpy(buf + size, (void *)&dump_header_asm, sizeof(dump_header_asm));
+ size += sizeof(dump_header_asm);
+ size = PAGE_ALIGN(size);
+ retval = dump_ll_write(buf , size);
+
+ if (retval < size)
+ return (retval >= 0) ? -ENOSPC : retval;
+ return 0;
+}
+
+int dump_generic_update_header(void)
+{
+ int err = 0;
+
+ if (dump_config.dumper->header_dirty) {
+ if ((err = dump_write_header())) {
+ printk("dump write header failed !err %d\n", err);
+ } else {
+ dump_config.dumper->header_dirty = 0;
+ }
+ }
+
+ return err;
+}
+
+static inline int is_curr_stack_page(struct page *page, unsigned long size)
+{
+ unsigned long thread_addr = (unsigned long)current_thread_info();
+ unsigned long addr = (unsigned long)page_address(page);
+
+ return !PageHighMem(page) && (addr < thread_addr + THREAD_SIZE)
+ && (addr + size > thread_addr);
+}
+
+static inline int is_dump_page(struct page *page, unsigned long size)
+{
+ unsigned long addr = (unsigned long)page_address(page);
+ unsigned long dump_buf = (unsigned long)dump_config.dumper->dump_buf;
+
+ return !PageHighMem(page) && (addr < dump_buf + DUMP_BUFFER_SIZE)
+ && (addr + size > dump_buf);
+}
+
+int dump_allow_compress(struct page *page, unsigned long size)
+{
+ /*
+ * Don't compress the page if any part of it overlaps
+ * with the current stack or dump buffer (since the contents
+ * in these could be changing while compression is going on)
+ */
+ return !is_curr_stack_page(page, size) && !is_dump_page(page, size);
+}
+
+void lcrash_init_pageheader(struct __dump_page *dp, struct page *page,
+ unsigned long sz)
+{
+ memset(dp, 0, sizeof(struct __dump_page));
+ dp->dp_flags = 0;
+ dp->dp_size = 0;
+ if (sz > 0)
+ dp->dp_address = (loff_t)page_to_pfn(page) << PAGE_SHIFT;
+
+#if DUMP_DEBUG > 6
+ dp->dp_page_index = dump_header.dh_num_dump_pages;
+ dp->dp_byte_offset = dump_header.dh_num_bytes + DUMP_BUFFER_SIZE
+ + DUMP_HEADER_OFFSET; /* ?? */
+#endif /* DUMP_DEBUG */
+}
+
+int dump_lcrash_add_data(unsigned long loc, unsigned long len)
+{
+ struct page *page = (struct page *)loc;
+ void *addr, *buf = dump_config.dumper->curr_buf;
+ struct __dump_page *dp = (struct __dump_page *)buf;
+ int bytes, size;
+
+ if (buf > dump_config.dumper->dump_buf + DUMP_BUFFER_SIZE)
+ return -ENOMEM;
+
+ lcrash_init_pageheader(dp, page, len);
+ buf += sizeof(struct __dump_page);
+
+ while (len) {
+ addr = kmap_atomic(page, KM_CRASHDUMP);
+ size = bytes = (len > PAGE_SIZE) ? PAGE_SIZE : len;
+ /* check for compression */
+ if (dump_allow_compress(page, bytes)) {
+ size = dump_compress_data((char *)addr, bytes, (char *)buf);
+ }
+ /* set the compressed flag if the page did compress */
+ if (size && (size < bytes)) {
+ dp->dp_flags |= DUMP_DH_COMPRESSED;
+ } else {
+ /* compression failed -- default to raw mode */
+ dp->dp_flags |= DUMP_DH_RAW;
+ memcpy(buf, addr, bytes);
+ size = bytes;
+ }
+ /* memset(buf, 'A', size); temporary: testing only !! */
+ kunmap_atomic(addr, KM_CRASHDUMP);
+ dp->dp_size += size;
+ buf += size;
+ len -= bytes;
+ page++;
+ }
+
+ /* now update the header */
+#if DUMP_DEBUG > 6
+ dump_header.dh_num_bytes += dp->dp_size + sizeof(*dp);
+#endif
+ dump_header.dh_num_dump_pages++;
+ dump_config.dumper->header_dirty++;
+
+ dump_config.dumper->curr_buf = buf;
+
+ return len;
+}
+
+int dump_lcrash_update_end_marker(void)
+{
+ struct __dump_page *dp =
+ (struct __dump_page *)dump_config.dumper->curr_buf;
+ unsigned long left;
+ int ret = 0;
+
+ lcrash_init_pageheader(dp, NULL, 0);
+ dp->dp_flags |= DUMP_DH_END; /* tbd: truncation test ? */
+
+ /* now update the header */
+#if DUMP_DEBUG > 6
+ dump_header.dh_num_bytes += sizeof(*dp);
+#endif
+ dump_config.dumper->curr_buf += sizeof(*dp);
+ left = dump_config.dumper->curr_buf - dump_config.dumper->dump_buf;
+
+ printk("\n");
+
+ while (left) {
+ if ((ret = dump_dev_seek(dump_config.dumper->curr_offset))) {
+ printk("Seek failed at offset 0x%llx\n",
+ dump_config.dumper->curr_offset);
+ return ret;
+ }
+
+ if (DUMP_BUFFER_SIZE > left)
+ memset(dump_config.dumper->curr_buf, 'm',
+ DUMP_BUFFER_SIZE - left);
+
+ if ((ret = dump_ll_write(dump_config.dumper->dump_buf,
+ DUMP_BUFFER_SIZE)) < DUMP_BUFFER_SIZE) {
+ return (ret < 0) ? ret : -ENOSPC;
+ }
+
+ dump_config.dumper->curr_offset += DUMP_BUFFER_SIZE;
+
+ if (left > DUMP_BUFFER_SIZE) {
+ left -= DUMP_BUFFER_SIZE;
+ memcpy(dump_config.dumper->dump_buf,
+ dump_config.dumper->dump_buf + DUMP_BUFFER_SIZE, left);
+ dump_config.dumper->curr_buf -= DUMP_BUFFER_SIZE;
+ } else {
+ left = 0;
+ }
+ }
+ return 0;
+}
+
+
+/* Default Formatter (lcrash) */
+struct dump_fmt_ops dump_fmt_lcrash_ops = {
+ .configure_header = dump_lcrash_configure_header,
+ .update_header = dump_generic_update_header,
+ .save_context = dump_lcrash_save_context,
+ .add_data = dump_lcrash_add_data,
+ .update_end_marker = dump_lcrash_update_end_marker
+};
+
+struct dump_fmt dump_fmt_lcrash = {
+ .name = "lcrash",
+ .ops = &dump_fmt_lcrash_ops
+};
+
--- /dev/null
+/*
+ * GZIP Compression functions for kernel crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sourceforge.net)
+ * Copyright 2001 Matt D. Robinson. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/* header files */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/dump.h>
+#include <linux/zlib.h>
+#include <linux/vmalloc.h>
+
+static void *deflate_workspace;
+
+/*
+ * Name: dump_compress_gzip()
+ * Func: Compress a DUMP_PAGE_SIZE page using gzip-style algorithms (the
+ *       deflate functions similar to what's used in PPP).
+ */
+static u16
+dump_compress_gzip(const u8 *old, u16 oldsize, u8 *new, u16 newsize)
+{
+ /* error code and dump stream */
+ int err;
+ z_stream dump_stream;
+
+ dump_stream.workspace = deflate_workspace;
+
+ if ((err = zlib_deflateInit(&dump_stream, Z_BEST_COMPRESSION)) != Z_OK) {
+ /* fall back to raw (uncompressed) mode */
+ printk("dump_compress_gzip(): zlib_deflateInit() "
+ "failed (%d)!\n", err);
+ return 0;
+ }
+
+ /* use old (page of memory) and size (DUMP_PAGE_SIZE) as in-streams */
+ dump_stream.next_in = (u8 *) old;
+ dump_stream.avail_in = oldsize;
+
+ /* out streams are new (dpcpage) and new size (DUMP_DPC_PAGE_SIZE) */
+ dump_stream.next_out = new;
+ dump_stream.avail_out = newsize;
+
+ /* deflate the page -- check for error */
+ err = zlib_deflate(&dump_stream, Z_FINISH);
+ if (err != Z_STREAM_END) {
+ /* a return code of zero tells the caller to fall back to raw mode */
+ (void)zlib_deflateEnd(&dump_stream);
+ printk("dump_compress_gzip(): zlib_deflate() failed (%d)!\n",
+ err);
+ return 0;
+ }
+
+ /* let's end the deflated compression stream */
+ if ((err = zlib_deflateEnd(&dump_stream)) != Z_OK) {
+ printk("dump_compress_gzip(): zlib_deflateEnd() "
+ "failed (%d)!\n", err);
+ }
+
+ /* return the compressed byte total (if it's smaller) */
+ if (dump_stream.total_out >= oldsize) {
+ return oldsize;
+ }
+ return dump_stream.total_out;
+}
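For reference, the compress-or-raw contract that the lcrash formatter relies on can be sketched in plain userspace C. The toy run-length compressor and the helper names below are illustrative only, not part of LKCD; what mirrors the code above is the return-value convention: a compress_func returns the compressed size when it is smaller than the input, the input size when compression did not help, and 0 on failure, after which the caller sets the raw flag and memcpy's the page.

```c
#include <assert.h>
#include <string.h>

typedef unsigned char u8;
typedef unsigned short u16;

/* Hypothetical stand-in for a compress_func (illustrative RLE). */
static u16 toy_compress(const u8 *old, u16 oldsize, u8 *dst, u16 dstsize)
{
	u16 out = 0, i = 0;

	while (i < oldsize) {
		u16 run = 1;

		while (i + run < oldsize && old[i + run] == old[i] && run < 255)
			run++;
		if (out + 2 > dstsize)
			return 0;		/* no room: report failure */
		dst[out++] = (u8)run;		/* run length */
		dst[out++] = old[i];		/* repeated byte */
		i += run;
	}
	/* only claim a win if the result is actually smaller */
	return out >= oldsize ? oldsize : out;
}

/* Caller-side fallback logic, mirroring the raw-mode default above. */
static u16 add_page(const u8 *page, u16 len, u8 *buf, u16 buflen, int *raw)
{
	u16 size = toy_compress(page, len, buf, buflen);

	if (size == 0 || size == len) {
		*raw = 1;			/* failed or didn't help */
		memcpy(buf, page, len);
		return len;
	}
	*raw = 0;
	return size;
}
```

A highly compressible page comes back small with the raw flag clear; an incompressible one is copied verbatim with the raw flag set.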
+
+/* setup the gzip compression functionality */
+static struct __dump_compress dump_gzip_compression = {
+ .compress_type = DUMP_COMPRESS_GZIP,
+ .compress_func = dump_compress_gzip,
+ .compress_name = "GZIP",
+};
+
+/*
+ * Name: dump_compress_gzip_init()
+ * Func: Initialize gzip as a compression mechanism.
+ */
+static int __init
+dump_compress_gzip_init(void)
+{
+ deflate_workspace = vmalloc(zlib_deflate_workspacesize());
+ if (!deflate_workspace) {
+ printk("dump_compress_gzip_init(): Failed to "
+ "alloc %d bytes for deflate workspace\n",
+ zlib_deflate_workspacesize());
+ return -ENOMEM;
+ }
+ dump_register_compression(&dump_gzip_compression);
+ return 0;
+}
+
+/*
+ * Name: dump_compress_gzip_cleanup()
+ * Func: Remove gzip as a compression mechanism.
+ */
+static void __exit
+dump_compress_gzip_cleanup(void)
+{
+ vfree(deflate_workspace);
+ dump_unregister_compression(DUMP_COMPRESS_GZIP);
+}
+
+/* module initialization */
+module_init(dump_compress_gzip_init);
+module_exit(dump_compress_gzip_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("LKCD Development Team <lkcd-devel@lists.sourceforge.net>");
+MODULE_DESCRIPTION("Gzip compression module for crash dump driver");
--- /dev/null
+/*
+ * Architecture specific (i386) functions for Linux crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sgi.com)
+ *
+ * Copyright 1999 Silicon Graphics, Inc. All rights reserved.
+ *
+ * 2.3 kernel modifications by: Matt D. Robinson (yakker@turbolinux.com)
+ * Copyright 2000 TurboLinux, Inc. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * The hooks for dumping the kernel virtual memory to disk are in this
+ * file. Any time a modification is made to the virtual memory mechanism,
+ * these routines must be changed to use the new mechanisms.
+ */
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/fs.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+#include <linux/irq.h>
+
+#include <asm/processor.h>
+#include <asm/e820.h>
+#include <asm/hardirq.h>
+#include <asm/nmi.h>
+
+static __s32 saved_irq_count; /* saved preempt_count() flags */
+
+static int
+alloc_dha_stack(void)
+{
+ int i;
+ void *ptr;
+
+ if (dump_header_asm.dha_stack[0])
+ return 0;
+
+ ptr = vmalloc(THREAD_SIZE * num_online_cpus());
+ if (!ptr) {
+ printk("vmalloc for dha_stacks failed\n");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < num_online_cpus(); i++) {
+ dump_header_asm.dha_stack[i] = (u32)((unsigned long)ptr +
+ (i * THREAD_SIZE));
+ }
+ return 0;
+}
+
+static int
+free_dha_stack(void)
+{
+ if (dump_header_asm.dha_stack[0]) {
+ vfree((void *)dump_header_asm.dha_stack[0]);
+ dump_header_asm.dha_stack[0] = 0;
+ }
+ return 0;
+}
+
+
+void
+__dump_save_regs(struct pt_regs *dest_regs, const struct pt_regs *regs)
+{
+ *dest_regs = *regs;
+
+ /* In the case of panic dumps we collect regs on entry to panic,
+ * so we shouldn't 'fix' ss/esp here again. But it is hard to
+ * tell just by looking at regs whether ss/esp need fixing. We
+ * make this decision by looking at xss in regs; if we had a
+ * better way to determine that ss/esp are valid (some flag
+ * telling us we are here due to a panic dump), we could use
+ * that instead of this kludge.
+ */
+ if (!user_mode(regs)) {
+ if ((0xffff & regs->xss) == __KERNEL_DS)
+ /* already fixed up */
+ return;
+ dest_regs->esp = (unsigned long)&(regs->esp);
+ __asm__ __volatile__ ("movw %%ss, %%ax;"
+ :"=a"(dest_regs->xss));
+ }
+}
+
+
+#ifdef CONFIG_SMP
+extern cpumask_t irq_affinity[];
+extern irq_desc_t irq_desc[];
+extern void dump_send_ipi(void);
+
+static int dump_expect_ipi[NR_CPUS];
+static atomic_t waiting_for_dump_ipi;
+static cpumask_t saved_affinity[NR_IRQS];
+
+extern void stop_this_cpu(void *); /* exported by i386 kernel */
+
+static int
+dump_nmi_callback(struct pt_regs *regs, int cpu)
+{
+ if (!dump_expect_ipi[cpu])
+ return 0;
+
+ dump_expect_ipi[cpu] = 0;
+
+ dump_save_this_cpu(regs);
+ atomic_dec(&waiting_for_dump_ipi);
+
+ level_changed:
+ switch (dump_silence_level) {
+ case DUMP_HARD_SPIN_CPUS: /* Spin until dump is complete */
+ while (dump_oncpu) {
+ barrier(); /* paranoia */
+ if (dump_silence_level != DUMP_HARD_SPIN_CPUS)
+ goto level_changed;
+
+ cpu_relax(); /* kill time nicely */
+ }
+ break;
+
+ case DUMP_HALT_CPUS: /* Execute halt */
+ stop_this_cpu(NULL);
+ break;
+
+ case DUMP_SOFT_SPIN_CPUS:
+ /* Mark the task so it spins in schedule */
+ set_tsk_thread_flag(current, TIF_NEED_RESCHED);
+ break;
+ }
+
+ return 1;
+}
+
+/* save registers on other processors */
+void
+__dump_save_other_cpus(void)
+{
+ int i, cpu = smp_processor_id();
+ int other_cpus = num_online_cpus()-1;
+
+ if (other_cpus > 0) {
+ atomic_set(&waiting_for_dump_ipi, other_cpus);
+
+ for (i = 0; i < NR_CPUS; i++) {
+ dump_expect_ipi[i] = (i != cpu && cpu_online(i));
+ }
+
+ /* short circuit normal NMI handling temporarily */
+ set_nmi_callback(dump_nmi_callback);
+ wmb();
+
+ dump_send_ipi();
+ /* Maybe we don't need to wait for the NMI to be processed:
+ just write out the header at the end of dumping; if
+ this IPI has not been processed by then, there probably
+ is a problem and we just fail to capture the state of
+ the other cpus. */
+ while(atomic_read(&waiting_for_dump_ipi) > 0) {
+ cpu_relax();
+ }
+
+ unset_nmi_callback();
+ }
+}
+
+/*
+ * Routine to save the old irq affinities and change affinities of all irqs to
+ * the dumping cpu.
+ */
+static void
+set_irq_affinity(void)
+{
+ int i;
+ cpumask_t cpu = CPU_MASK_NONE;
+
+ cpu_set(smp_processor_id(), cpu);
+ memcpy(saved_affinity, irq_affinity, NR_IRQS * sizeof(cpumask_t));
+ for (i = 0; i < NR_IRQS; i++) {
+ if (irq_desc[i].handler == NULL)
+ continue;
+ irq_affinity[i] = cpu;
+ if (irq_desc[i].handler->set_affinity != NULL)
+ irq_desc[i].handler->set_affinity(i, irq_affinity[i]);
+ }
+}
+
+/*
+ * Restore old irq affinities.
+ */
+static void
+reset_irq_affinity(void)
+{
+ int i;
+
+ memcpy(irq_affinity, saved_affinity, NR_IRQS * sizeof(cpumask_t));
+ for (i = 0; i < NR_IRQS; i++) {
+ if (irq_desc[i].handler == NULL)
+ continue;
+ if (irq_desc[i].handler->set_affinity != NULL)
+ irq_desc[i].handler->set_affinity(i, saved_affinity[i]);
+ }
+}
+
+#else /* !CONFIG_SMP */
+#define set_irq_affinity() do { } while (0)
+#define reset_irq_affinity() do { } while (0)
+#define save_other_cpu_states() do { } while (0)
+#endif /* !CONFIG_SMP */
+
+/*
+ * Kludge - dump from interrupt context is unreliable (Fixme)
+ *
+ * We do this so that softirqs initiated for dump i/o
+ * get processed and we don't hang while waiting for i/o
+ * to complete or in any irq synchronization attempt.
+ *
+ * This is not quite legal of course, as it has the side
+ * effect of making all interrupts & softirqs triggered
+ * while dump is in progress complete before currently
+ * pending softirqs and the currently executing interrupt
+ * code.
+ */
+static inline void
+irq_bh_save(void)
+{
+ saved_irq_count = irq_count();
+ preempt_count() &= ~(HARDIRQ_MASK|SOFTIRQ_MASK);
+}
+
+static inline void
+irq_bh_restore(void)
+{
+ preempt_count() |= saved_irq_count;
+}
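The irq_bh_save()/irq_bh_restore() pair above boils down to masking the hardirq and softirq nesting bits out of the preemption counter so dump i/o softirqs can run, then restoring them. A standalone sketch of that bit manipulation, with illustrative mask values standing in for the real HARDIRQ_MASK/SOFTIRQ_MASK from asm/hardirq.h and a plain variable standing in for preempt_count():

```c
#include <assert.h>

/* Illustrative mask layout (not the real asm/hardirq.h values):
 * low byte preemption depth, next byte softirq count, then hardirq. */
#define SOFTIRQ_MASK_SIM 0x0000ff00u
#define HARDIRQ_MASK_SIM 0x00ff0000u

static unsigned int preempt_count_sim;
static unsigned int saved_irq_count_sim;

/* irq_bh_save(): remember the irq/softirq nesting, then clear it so
 * softirqs raised for dump i/o can be processed as if from process
 * context. */
static void irq_bh_save_sim(void)
{
	saved_irq_count_sim =
		preempt_count_sim & (HARDIRQ_MASK_SIM | SOFTIRQ_MASK_SIM);
	preempt_count_sim &= ~(HARDIRQ_MASK_SIM | SOFTIRQ_MASK_SIM);
}

/* irq_bh_restore(): put the saved nesting counts back. */
static void irq_bh_restore_sim(void)
{
	preempt_count_sim |= saved_irq_count_sim;
}
```

Note the preemption-depth byte survives the round trip untouched; only the interrupt nesting bits are cleared and later OR'd back in.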
+
+/*
+ * Name: __dump_irq_enable
+ * Func: Reset system so interrupts are enabled.
+ * This is used for dump methods that require interrupts
+ * Eventually, all methods will have interrupts disabled
+ * and this code can be removed.
+ *
+ * Change irq affinities
+ * Re-enable interrupts
+ */
+int
+__dump_irq_enable(void)
+{
+ set_irq_affinity();
+ irq_bh_save();
+ local_irq_enable();
+ return 0;
+}
+
+/*
+ * Name: __dump_irq_restore
+ * Func: Resume the system state in an architecture-specific way.
+ */
+void
+__dump_irq_restore(void)
+{
+ local_irq_disable();
+ reset_irq_affinity();
+ irq_bh_restore();
+}
+
+/*
+ * Name: __dump_configure_header()
+ * Func: Meant to fill in arch specific header fields except per-cpu state
+ * already captured via __dump_save_context for all CPUs.
+ */
+int
+__dump_configure_header(const struct pt_regs *regs)
+{
+ return (0);
+}
+
+/*
+ * Name: __dump_init()
+ * Func: Initialize the dumping routine process.
+ */
+void
+__dump_init(uint64_t local_memory_start)
+{
+ return;
+}
+
+/*
+ * Name: __dump_open()
+ * Func: Open the dump device (architecture specific).
+ */
+void
+__dump_open(void)
+{
+ alloc_dha_stack();
+}
+
+/*
+ * Name: __dump_cleanup()
+ * Func: Free any architecture specific data structures. This is called
+ * when the dump module is being removed.
+ */
+void
+__dump_cleanup(void)
+{
+ free_dha_stack();
+}
+
+extern int page_is_ram(unsigned long);
+
+/*
+ * Name: __dump_page_valid()
+ * Func: Check if page is valid to dump.
+ */
+int
+__dump_page_valid(unsigned long index)
+{
+ if (!pfn_valid(index))
+ return 0;
+
+ return page_is_ram(index);
+}
+
+/*
+ * Name: manual_handle_crashdump()
+ * Func: Interface for the lkcd dump command. Calls dump_execute()
+ */
+int
+manual_handle_crashdump(void)
+{
+ struct pt_regs regs;
+
+ get_current_regs(&regs);
+ dump_execute("manual", &regs);
+ return 0;
+}
--- /dev/null
+/*
+ * Implements the dump driver interface for saving a dump in available
+ * memory areas. The saved pages may be written out to persistent storage
+ * after a soft reboot.
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ *
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ *
+ * The approach of tracking pages containing saved dump using map pages
+ * allocated as needed has been derived from the Mission Critical Linux
+ * mcore dump implementation.
+ *
+ * Credits and a big thanks for letting the lkcd project make use of
+ * the excellent piece of work and also helping with clarifications
+ * and tips along the way are due to:
+ * Dave Winchell <winchell@mclx.com> (primary author of mcore)
+ * Jeff Moyer <moyer@mclx.com>
+ * Josh Huber <huber@mclx.com>
+ *
+ * For those familiar with the mcore code, the main differences worth
+ * noting here (besides the dump device abstraction) result from enabling
+ * "high" memory pages (pages not permanently mapped in the kernel
+ * address space) to be used for saving dump data (because of which a
+ * simple virtual address based linked list cannot be used anymore for
+ * managing free pages), an added level of indirection for faster
+ * lookups during the post-boot stage, and the idea of pages being
+ * made available as they get freed up while dump to memory progresses
+ * rather than one time before starting the dump. The last point enables
+ * a full memory snapshot to be saved starting with an initial set of
+ * bootstrap pages given a good compression ratio. (See dump_overlay.c)
+ *
+ */
+
+/*
+ * -----------------MEMORY LAYOUT ------------------
+ * The memory space consists of a set of discontiguous pages, and
+ * discontiguous map pages as well, rooted in a chain of indirect
+ * map pages (also discontiguous). Except for the indirect maps
+ * (which must be preallocated in advance), the rest of the pages
+ * could be in high memory.
+ *
+ * root
+ * | --------- -------- --------
+ * --> | . . +|--->| . +|------->| . . | indirect
+ * --|--|--- ---|---- --|-|--- maps
+ * | | | | |
+ * ------ ------ ------- ------ -------
+ * | . | | . | | . . | | . | | . . | maps
+ * --|--- --|--- --|--|-- --|--- ---|-|--
+ * page page page page page page page data
+ * pages
+ *
+ * Writes to the dump device happen sequentially in append mode.
+ * The main reason for the existence of the indirect map is
+ * to enable a quick way to lookup a specific logical offset in
+ * the saved data post-soft-boot, e.g. to writeout pages
+ * with more critical data first, even though such pages
+ * would have been compressed and copied last, being the lowest
+ * ranked candidates for reuse due to their criticality.
+ * (See dump_overlay.c)
+ */
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/bootmem.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+
+#define DUMP_MAP_SZ (PAGE_SIZE / sizeof(unsigned long)) /* direct map size */
+#define DUMP_IND_MAP_SZ (DUMP_MAP_SZ - 1) /* indirect map size */
+#define DUMP_NR_BOOTSTRAP 64 /* no of bootstrap pages */
+
+extern int dump_low_page(struct page *);
+
+/* check if the next entry crosses a page boundary */
+static inline int is_last_map_entry(unsigned long *map)
+{
+ unsigned long addr = (unsigned long)(map + 1);
+
+ return (!(addr & (PAGE_SIZE - 1)));
+}
+
+/* Todo: should have some validation checks */
+/* The last entry in the indirect map points to the next indirect map */
+/* Indirect maps are referred to directly by virtual address */
+static inline unsigned long *next_indirect_map(unsigned long *map)
+{
+ return (unsigned long *)map[DUMP_IND_MAP_SZ];
+}
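The two-level geometry these helpers walk can be expressed as plain index arithmetic: a logical page offset splits into an indirect-map number, a slot within that indirect map, and an entry within the direct map page. This is an idealized sketch (the in-kernel dump_mem_lookup walks the chain with a loop and handles chain-boundary edges slightly differently); the MAP_SZ value assumes 4K pages with 4-byte entries, as on i386.

```c
#include <assert.h>

#define MAP_SZ_SIM	1024ul			/* PAGE_SIZE / sizeof(entry) */
#define IND_MAP_SZ_SIM	(MAP_SZ_SIM - 1)	/* last slot links to next map */

struct map_pos {
	unsigned long indirect;	/* which indirect map page in the chain */
	unsigned long direct;	/* slot within that indirect map */
	unsigned long entry;	/* slot within the direct map page */
};

/* Split a logical page offset 'loc' the way the lookup walks it:
 * first find the indirect map, then the direct map, then the entry. */
static struct map_pos locate(unsigned long loc)
{
	struct map_pos p;
	unsigned long index = loc / MAP_SZ_SIM;	/* direct map number */

	p.indirect = index / IND_MAP_SZ_SIM;
	p.direct = index % IND_MAP_SZ_SIM;
	p.entry = loc % MAP_SZ_SIM;
	return p;
}
```

With this geometry each indirect map addresses IND_MAP_SZ direct maps, i.e. 1023 * 1024 data pages (~4GB with 4K pages), before chaining to the next one.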
+
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+/* Called during early bootup - fixme: make this __init */
+void dump_early_reserve_map(struct dump_memdev *dev)
+{
+ unsigned long *map1, *map2;
+ loff_t off = 0, last = dev->last_used_offset >> PAGE_SHIFT;
+ int i, j;
+
+ printk("Reserve bootmap space holding previous dump of %lld pages\n",
+ last);
+ map1 = (unsigned long *)dev->indirect_map_root;
+
+ while (map1 && (off < last)) {
+ reserve_bootmem(virt_to_phys((void *)map1), PAGE_SIZE);
+ for (i=0; (i < DUMP_MAP_SZ - 1) && map1[i] && (off < last);
+ i++, off += DUMP_MAP_SZ) {
+ pr_debug("indirect map[%d] = 0x%lx\n", i, map1[i]);
+ if (map1[i] >= max_low_pfn)
+ continue;
+ reserve_bootmem(map1[i] << PAGE_SHIFT, PAGE_SIZE);
+ map2 = pfn_to_kaddr(map1[i]);
+ for (j = 0 ; (j < DUMP_MAP_SZ) && map2[j] &&
+ (off + j < last); j++) {
+ pr_debug("\t map[%d][%d] = 0x%lx\n", i, j,
+ map2[j]);
+ if (map2[j] < max_low_pfn) {
+ reserve_bootmem(map2[j] << PAGE_SHIFT,
+ PAGE_SIZE);
+ }
+ }
+ }
+ map1 = next_indirect_map(map1);
+ }
+ dev->nr_free = 0; /* these pages don't belong to this boot */
+}
+#endif
+
+/* mark dump pages so that they aren't used by this kernel */
+void dump_mark_map(struct dump_memdev *dev)
+{
+ unsigned long *map1, *map2;
+ loff_t off = 0, last = dev->last_used_offset >> PAGE_SHIFT;
+ struct page *page;
+ int i, j;
+
+ printk("Dump: marking pages in use by previous dump\n");
+ map1 = (unsigned long *)dev->indirect_map_root;
+
+ while (map1 && (off < last)) {
+ page = virt_to_page(map1);
+ set_page_count(page, 1);
+ for (i=0; (i < DUMP_MAP_SZ - 1) && map1[i] && (off < last);
+ i++, off += DUMP_MAP_SZ) {
+ pr_debug("indirect map[%d] = 0x%lx\n", i, map1[i]);
+ page = pfn_to_page(map1[i]);
+ set_page_count(page, 1);
+ map2 = kmap_atomic(page, KM_CRASHDUMP);
+ for (j = 0 ; (j < DUMP_MAP_SZ) && map2[j] &&
+ (off + j < last); j++) {
+ pr_debug("\t map[%d][%d] = 0x%lx\n", i, j,
+ map2[j]);
+ page = pfn_to_page(map2[j]);
+ set_page_count(page, 1);
+ }
+ }
+ map1 = next_indirect_map(map1);
+ }
+}
+
+
+/*
+ * Given a logical offset into the mem device lookup the
+ * corresponding page
+ * loc is specified in units of pages
+ * Note: affects curr_map (even in the case where lookup fails)
+ */
+struct page *dump_mem_lookup(struct dump_memdev *dump_mdev, unsigned long loc)
+{
+ unsigned long *map;
+ unsigned long i, index = loc / DUMP_MAP_SZ;
+ struct page *page = NULL;
+ unsigned long curr_pfn, curr_map, *curr_map_ptr = NULL;
+
+ map = (unsigned long *)dump_mdev->indirect_map_root;
+ if (!map)
+ return NULL;
+
+ if (loc > dump_mdev->last_offset >> PAGE_SHIFT)
+ return NULL;
+
+ /*
+ * first locate the right indirect map
+ * in the chain of indirect maps
+ */
+ for (i = 0; i + DUMP_IND_MAP_SZ < index ; i += DUMP_IND_MAP_SZ) {
+ if (!(map = next_indirect_map(map)))
+ return NULL;
+ }
+ /* then the right direct map */
+ /* map entries are referred to by page index */
+ if ((curr_map = map[index - i])) {
+ page = pfn_to_page(curr_map);
+ /* update the current traversal index */
+ /* dump_mdev->curr_map = &map[index - i];*/
+ curr_map_ptr = &map[index - i];
+ }
+
+ if (page)
+ map = kmap_atomic(page, KM_CRASHDUMP);
+ else
+ return NULL;
+
+ /* and finally the right entry therein */
+ /* data pages are referred to by page index */
+ i = index * DUMP_MAP_SZ;
+ if ((curr_pfn = map[loc - i])) {
+ page = pfn_to_page(curr_pfn);
+ dump_mdev->curr_map = curr_map_ptr;
+ dump_mdev->curr_map_offset = loc - i;
+ dump_mdev->ddev.curr_offset = loc << PAGE_SHIFT;
+ } else {
+ page = NULL;
+ }
+ kunmap_atomic(map, KM_CRASHDUMP);
+
+ return page;
+}
+
+/*
+ * Retrieves a pointer to the next page in the dump device
+ * Used during the lookup pass post-soft-reboot
+ */
+struct page *dump_mem_next_page(struct dump_memdev *dev)
+{
+ unsigned long i;
+ unsigned long *map;
+ struct page *page = NULL;
+
+ if (dev->ddev.curr_offset + PAGE_SIZE >= dev->last_offset) {
+ return NULL;
+ }
+
+ if ((i = (unsigned long)(++dev->curr_map_offset)) >= DUMP_MAP_SZ) {
+ /* move to next map */
+ if (is_last_map_entry(++dev->curr_map)) {
+ /* move to the next indirect map page */
+ printk("dump_mem_next_page: go to next indirect map\n");
+ dev->curr_map = (unsigned long *)*dev->curr_map;
+ if (!dev->curr_map)
+ return NULL;
+ }
+ i = dev->curr_map_offset = 0;
+ pr_debug("dump_mem_next_page: next map %p, entry 0x%lx\n",
+ dev->curr_map, *dev->curr_map);
+ }
+
+ if (*dev->curr_map) {
+ map = kmap_atomic(pfn_to_page(*dev->curr_map), KM_CRASHDUMP);
+ if (map[i])
+ page = pfn_to_page(map[i]);
+ kunmap_atomic(map, KM_CRASHDUMP);
+ dev->ddev.curr_offset += PAGE_SIZE;
+ }
+
+ return page;
+}
+
+/* Copied from dump_filters.c */
+static inline int kernel_page(struct page *p)
+{
+ /* FIXME: Need to exclude hugetlb pages. Clue: reserved but inuse */
+ return (PageReserved(p) && !PageInuse(p)) || (!PageLRU(p) && PageInuse(p));
+}
+
+static inline int user_page(struct page *p)
+{
+ return PageInuse(p) && (!PageReserved(p) && PageLRU(p));
+}
+
+int dump_reused_by_boot(struct page *page)
+{
+ /* Todo
+ * Checks:
+ * if PageReserved
+ * if < __end + bootmem_bootmap_pages for this boot + allowance
+ * if overwritten by initrd (how to check ?)
+ * Also, add more checks in early boot code
+ * e.g. bootmem bootmap alloc verify not overwriting dump, and if
+ * so then realloc or move the dump pages out accordingly.
+ */
+
+ /* Temporary proof of concept hack, avoid overwriting kern pages */
+
+ return (kernel_page(page) || dump_low_page(page) || user_page(page));
+}
+
+
+/* Uses the free page passed in to expand available space */
+int dump_mem_add_space(struct dump_memdev *dev, struct page *page)
+{
+ struct page *map_page;
+ unsigned long *map;
+ unsigned long i;
+
+ if (!dev->curr_map)
+ return -ENOMEM; /* must've exhausted indirect map */
+
+ if (!*dev->curr_map || dev->curr_map_offset >= DUMP_MAP_SZ) {
+ /* add map space */
+ *dev->curr_map = page_to_pfn(page);
+ dev->curr_map_offset = 0;
+ return 0;
+ }
+
+ /* add data space */
+ i = dev->curr_map_offset;
+ map_page = pfn_to_page(*dev->curr_map);
+ map = (unsigned long *)kmap_atomic(map_page, KM_CRASHDUMP);
+ map[i] = page_to_pfn(page);
+ kunmap_atomic(map, KM_CRASHDUMP);
+ dev->curr_map_offset = ++i;
+ dev->last_offset += PAGE_SIZE;
+ if (i >= DUMP_MAP_SZ) {
+ /* move to next map */
+ if (is_last_map_entry(++dev->curr_map)) {
+ /* move to the next indirect map page */
+ pr_debug("dump_mem_add_space: using next "
+ "indirect map\n");
+ dev->curr_map = (unsigned long *)*dev->curr_map;
+ }
+ }
+ return 0;
+}
+
+
+/* Caution: making a dest page invalidates existing contents of the page */
+int dump_check_and_free_page(struct dump_memdev *dev, struct page *page)
+{
+ int err = 0;
+
+ /*
+ * the page can be used as a destination only if we are sure
+ * it won't get overwritten by the soft-boot, and is not
+ * critical for us right now.
+ */
+ if (dump_reused_by_boot(page))
+ return 0;
+
+ if ((err = dump_mem_add_space(dev, page))) {
+ printk("Warning: Unable to extend memdev space. Err %d\n",
+ err);
+ return 0;
+ }
+
+ dev->nr_free++;
+ return 1;
+}
+
+
+/* Set up the initial maps and bootstrap space */
+/* Must be called only after any previous dump is written out */
+int dump_mem_open(struct dump_dev *dev, unsigned long devid)
+{
+ struct dump_memdev *dump_mdev = DUMP_MDEV(dev);
+ unsigned long nr_maps, *map, *prev_map = &dump_mdev->indirect_map_root;
+ void *addr;
+ struct page *page;
+ unsigned long i = 0;
+ int err = 0;
+
+ /* Todo: sanity check for unwritten previous dump */
+
+ /* allocate pages for indirect map (non highmem area) */
+ nr_maps = num_physpages / DUMP_MAP_SZ; /* maps to cover entire mem */
+ for (i = 0; i < nr_maps; i += DUMP_IND_MAP_SZ) {
+ if (!(map = (unsigned long *)dump_alloc_mem(PAGE_SIZE))) {
+ printk("Unable to alloc indirect map %ld\n",
+ i / DUMP_IND_MAP_SZ);
+ return -ENOMEM;
+ }
+ clear_page(map);
+ *prev_map = (unsigned long)map;
+ prev_map = &map[DUMP_IND_MAP_SZ];
+ }
+
+ dump_mdev->curr_map = (unsigned long *)dump_mdev->indirect_map_root;
+ dump_mdev->curr_map_offset = 0;
+
+ /*
+ * allocate a few bootstrap pages: at least 1 map and 1 data page
+ * plus enough to save the dump header
+ */
+ i = 0;
+ do {
+ if (!(addr = dump_alloc_mem(PAGE_SIZE))) {
+ printk("Unable to alloc bootstrap page %ld\n", i);
+ return -ENOMEM;
+ }
+
+ page = virt_to_page(addr);
+ if (dump_low_page(page)) {
+ dump_free_mem(addr);
+ continue;
+ }
+
+ if ((err = dump_mem_add_space(dump_mdev, page))) {
+ printk("Warning: Unable to extend memdev "
+ "space. Err %d\n", err);
+ dump_free_mem(addr);
+ continue;
+ }
+ i++;
+ } while (i < DUMP_NR_BOOTSTRAP);
+
+ printk("dump memdev init: %ld maps, %ld bootstrap pgs, %ld free pgs\n",
+ nr_maps, i, dump_mdev->last_offset >> PAGE_SHIFT);
+
+ dump_mdev->last_bs_offset = dump_mdev->last_offset;
+
+ return 0;
+}
+
+/* Releases all pre-alloc'd pages */
+int dump_mem_release(struct dump_dev *dev)
+{
+ struct dump_memdev *dump_mdev = DUMP_MDEV(dev);
+ struct page *page, *map_page;
+ unsigned long *map, *prev_map;
+ void *addr;
+ int i;
+
+ if (!dump_mdev->nr_free)
+ return 0;
+
+ pr_debug("dump_mem_release\n");
+ page = dump_mem_lookup(dump_mdev, 0);
+ for (i = 0; page && (i < DUMP_NR_BOOTSTRAP - 1); i++) {
+ if (PageHighMem(page))
+ break;
+ addr = page_address(page);
+ if (!addr) {
+ printk("page_address(%p) = NULL\n", page);
+ break;
+ }
+ pr_debug("Freeing page at %p\n", addr);
+ dump_free_mem(addr);
+ if (dump_mdev->curr_map_offset >= DUMP_MAP_SZ - 1) {
+ map_page = pfn_to_page(*dump_mdev->curr_map);
+ if (PageHighMem(map_page))
+ break;
+ page = dump_mem_next_page(dump_mdev);
+ addr = page_address(map_page);
+ if (!addr) {
+ printk("page_address(%p) = NULL\n",
+ map_page);
+ break;
+ }
+ pr_debug("Freeing map page at %p\n", addr);
+ dump_free_mem(addr);
+ i++;
+ } else {
+ page = dump_mem_next_page(dump_mdev);
+ }
+ }
+
+ /* now for the last used bootstrap page used as a map page */
+ if ((i < DUMP_NR_BOOTSTRAP) && (*dump_mdev->curr_map)) {
+ map_page = pfn_to_page(*dump_mdev->curr_map);
+ if ((map_page) && !PageHighMem(map_page)) {
+ addr = page_address(map_page);
+ if (!addr) {
+ printk("page_address(%p) = NULL\n", map_page);
+ } else {
+ pr_debug("Freeing map page at %p\n", addr);
+ dump_free_mem(addr);
+ i++;
+ }
+ }
+ }
+
+ printk("Freed %d bootstrap pages\n", i);
+
+ /* free the indirect maps */
+ map = (unsigned long *)dump_mdev->indirect_map_root;
+
+ i = 0;
+ while (map) {
+ prev_map = map;
+ map = next_indirect_map(map);
+ dump_free_mem(prev_map);
+ i++;
+ }
+
+ printk("Freed %d indirect map(s)\n", i);
+
+ /* Reset the indirect map */
+ dump_mdev->indirect_map_root = 0;
+ dump_mdev->curr_map = 0;
+
+ /* Reset the free list */
+ dump_mdev->nr_free = 0;
+
+ dump_mdev->last_offset = dump_mdev->ddev.curr_offset = 0;
+ dump_mdev->last_used_offset = 0;
+ dump_mdev->curr_map = NULL;
+ dump_mdev->curr_map_offset = 0;
+ return 0;
+}
+
+/*
+ * Long term:
+ * It is critical for this to be very strict. Cannot afford
+ * to have anything running and accessing memory while we overwrite
+ * memory (potential risk of data corruption).
+ * If in doubt (e.g if a cpu is hung and not responding) just give
+ * up and refuse to proceed with this scheme.
+ *
+ * Note: I/O will only happen after soft-boot/switchover, so we can
+ * safely disable interrupts and force stop other CPUs if this is
+ * going to be a disruptive dump, no matter what they
+ * are in the middle of.
+ */
+/*
+ * At the moment most of this is already taken care of in the nmi
+ * handler. We may halt the cpus right away if we know this is going
+ * to be disruptive. For now, since we've limited ourselves to
+ * overwriting free pages, we aren't doing much here. Eventually we'd
+ * have to wait to make sure other cpus aren't using memory we could
+ * be overwriting.
+ */
+int dump_mem_silence(struct dump_dev *dev)
+{
+ struct dump_memdev *dump_mdev = DUMP_MDEV(dev);
+
+ if (dump_mdev->last_offset > dump_mdev->last_bs_offset) {
+ /* prefer to run lkcd config & start with a clean slate */
+ return -EEXIST;
+ }
+ return 0;
+}
+
+extern int dump_overlay_resume(void);
+
+/* Trigger the next stage of dumping */
+int dump_mem_resume(struct dump_dev *dev)
+{
+ dump_overlay_resume();
+ return 0;
+}
+
+/*
+ * Allocate mem dev pages as required and copy buffer contents into it.
+ * Fails if no free pages are available
+ * Keeping it simple and limited for starters (can modify this over time)
+ * Does not handle holes or a sparse layout
+ * Data must be in multiples of PAGE_SIZE
+ */
+int dump_mem_write(struct dump_dev *dev, void *buf, unsigned long len)
+{
+ struct dump_memdev *dump_mdev = DUMP_MDEV(dev);
+ struct page *page;
+ unsigned long n = 0;
+ void *addr;
+ unsigned long *saved_curr_map, saved_map_offset;
+ int ret = 0;
+
+ pr_debug("dump_mem_write: offset 0x%llx, size %ld\n",
+ dev->curr_offset, len);
+
+ if (dev->curr_offset + len > dump_mdev->last_offset) {
+ printk("Out of space to write\n");
+ return -ENOSPC;
+ }
+
+ if ((len & (PAGE_SIZE - 1)) || (dev->curr_offset & (PAGE_SIZE - 1)))
+ return -EINVAL; /* not aligned in units of page size */
+
+ saved_curr_map = dump_mdev->curr_map;
+ saved_map_offset = dump_mdev->curr_map_offset;
+ page = dump_mem_lookup(dump_mdev, dev->curr_offset >> PAGE_SHIFT);
+
+ for (n = len; (n > 0) && page; n -= PAGE_SIZE, buf += PAGE_SIZE ) {
+ addr = kmap_atomic(page, KM_CRASHDUMP);
+ /* memset(addr, 'x', PAGE_SIZE); */
+ memcpy(addr, buf, PAGE_SIZE);
+ kunmap_atomic(addr, KM_CRASHDUMP);
+ /* dev->curr_offset += PAGE_SIZE; */
+ page = dump_mem_next_page(dump_mdev);
+ }
+
+ dump_mdev->curr_map = saved_curr_map;
+ dump_mdev->curr_map_offset = saved_map_offset;
+
+ if (dump_mdev->last_used_offset < dev->curr_offset)
+ dump_mdev->last_used_offset = dev->curr_offset;
+
+ return (len - n) ? (len - n) : ret;
+}
+
+/* dummy - always ready */
+int dump_mem_ready(struct dump_dev *dev, void *buf)
+{
+ return 0;
+}
+
+/*
+ * Should check for availability of space to write up to the offset;
+ * affects only curr_offset; last_offset is untouched.
+ * Keep it simple: only allow multiples of PAGE_SIZE for now.
+ */
+int dump_mem_seek(struct dump_dev *dev, loff_t offset)
+{
+ struct dump_memdev *dump_mdev = DUMP_MDEV(dev);
+
+ if (offset & (PAGE_SIZE - 1))
+ return -EINVAL; /* allow page size units only for now */
+
+ /* Are we exceeding available space ? */
+ if (offset > dump_mdev->last_offset) {
+ printk("dump_mem_seek failed for offset 0x%llx\n",
+ offset);
+ return -ENOSPC;
+ }
+
+ dump_mdev->ddev.curr_offset = offset;
+ return 0;
+}
+
+struct dump_dev_ops dump_memdev_ops = {
+ .open = dump_mem_open,
+ .release = dump_mem_release,
+ .silence = dump_mem_silence,
+ .resume = dump_mem_resume,
+ .seek = dump_mem_seek,
+ .write = dump_mem_write,
+ .read = NULL, /* not implemented at the moment */
+ .ready = dump_mem_ready
+};
+
+static struct dump_memdev default_dump_memdev = {
+ .ddev = {.type_name = "memdev", .ops = &dump_memdev_ops,
+ .device_id = 0x14}
+ /* assume the rest of the fields are zeroed by default */
+};
+
+/* may be overwritten if a previous dump exists */
+struct dump_memdev *dump_memdev = &default_dump_memdev;
+
--- /dev/null
+/*
+ * Generic interfaces for flexible system dump
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya (suparna@in.ibm.com)
+ *
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#ifndef _LINUX_DUMP_METHODS_H
+#define _LINUX_DUMP_METHODS_H
+
+/*
+ * Inspired by Matt Robinson's suggestion of introducing dump
+ * methods as a way to enable different crash dump facilities to
+ * coexist where each employs its own scheme or dumping policy.
+ *
+ * The code here creates a framework for flexible dump by defining
+ * a set of methods and providing associated helpers that differentiate
+ * between the underlying mechanism (how to dump), overall scheme
+ * (sequencing of stages and data dumped and associated quiescing),
+ * output format (what the dump output looks like), target type
+ * (where to save the dump; see dumpdev.h), and selection policy
+ * (state/data to dump).
+ *
+ * These sets of interfaces can be mixed and matched to build a
+ * dumper suitable for a given situation, allowing for
+ * flexibility as well as an appropriate degree of code reuse.
+ * For example all features and options of lkcd (including
+ * granular selective dumping in the near future) should be
+ * available even when say, the 2 stage soft-boot based mechanism
+ * is used for taking disruptive dumps.
+ *
+ * Todo: Additionally modules or drivers may supply their own
+ * custom dumpers which extend dump with module specific
+ * information or hardware state, and can even tweak the
+ * mechanism when it comes to saving state relevant to
+ * them.
+ */
+
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/highmem.h>
+#include <linux/dumpdev.h>
+
+#define MAX_PASSES 6
+#define MAX_DEVS 4
+
+
+/* To customise selection of pages to be dumped in a given pass/group */
+struct dump_data_filter {
+ char name[32];
+ int (*selector)(int, unsigned long, unsigned long);
+ ulong level_mask; /* dump level(s) for which this filter applies */
+ loff_t start[MAX_NUMNODES], end[MAX_NUMNODES]; /* location range applicable */
+ ulong num_mbanks; /* Number of memory banks. Greater than one for discontig memory (NUMA) */
+};
+
+
+/*
+ * Determined by the kind of dump mechanism and appropriate
+ * overall scheme
+ */
+struct dump_scheme_ops {
+ /* sets aside memory, inits data structures etc */
+ int (*configure)(unsigned long devid);
+ /* releases resources */
+ int (*unconfigure)(void);
+
+ /* ordering of passes, invoking iterator */
+ int (*sequencer)(void);
+ /* iterates over system data, selects and acts on data to dump */
+ int (*iterator)(int, int (*)(unsigned long, unsigned long),
+ struct dump_data_filter *);
+ /* action when data is selected for dump */
+ int (*save_data)(unsigned long, unsigned long);
+ /* action when data is to be excluded from dump */
+ int (*skip_data)(unsigned long, unsigned long);
+ /* policies for space, multiple dump devices etc */
+ int (*write_buffer)(void *, unsigned long);
+};
+
+struct dump_scheme {
+ /* the name serves as an anchor to locate the scheme after reboot */
+ char name[32];
+ struct dump_scheme_ops *ops;
+ struct list_head list;
+};
+
+/* Quiescing/Silence levels (controls IPI callback behaviour) */
+extern enum dump_silence_levels {
+ DUMP_SOFT_SPIN_CPUS = 1,
+ DUMP_HARD_SPIN_CPUS = 2,
+ DUMP_HALT_CPUS = 3,
+} dump_silence_level;
+
+/* determined by the dump (file) format */
+struct dump_fmt_ops {
+ /* build header */
+ int (*configure_header)(const char *, const struct pt_regs *);
+ int (*update_header)(void); /* update header and write it out */
+ /* save curr context */
+ void (*save_context)(int, const struct pt_regs *,
+ struct task_struct *);
+ /* typically called by the save_data action */
+ /* add formatted data to the dump buffer */
+ int (*add_data)(unsigned long, unsigned long);
+ int (*update_end_marker)(void);
+};
+
+struct dump_fmt {
+ unsigned long magic;
+ char name[32]; /* lcrash, crash, elf-core etc */
+ struct dump_fmt_ops *ops;
+ struct list_head list;
+};
+
+/*
+ * Modules will be able to add their own data capture schemes by
+ * registering their own dumpers. Typically they would use the
+ * primary dumper as a template and tune it with their own routines.
+ * Still todo.
+ */
+
+/* The combined dumper profile (mechanism, scheme, dev, fmt) */
+struct dumper {
+ char name[32]; /* singlestage, overlay (stg1), passthru(stg2), pull */
+ struct dump_scheme *scheme;
+ struct dump_fmt *fmt;
+ struct __dump_compress *compress;
+ struct dump_data_filter *filter;
+ struct dump_dev *dev;
+ /* state valid only for active dumper(s) - per instance */
+ /* run time state/context */
+ int curr_pass;
+ unsigned long count;
+ loff_t curr_offset; /* current logical offset into dump device */
+ loff_t curr_loc; /* current memory location */
+ void *curr_buf; /* current position in the dump buffer */
+ void *dump_buf; /* starting addr of dump buffer */
+ int header_dirty; /* whether the header needs to be written out */
+ int header_len;
+ struct list_head dumper_list; /* links to other dumpers */
+};
+
+/* Starting point to get to the current configured state */
+struct dump_config {
+ ulong level;
+ ulong flags;
+ struct dumper *dumper;
+ unsigned long dump_device;
+ unsigned long dump_addr; /* relevant only for in-memory dumps */
+ struct list_head dump_dev_list;
+};
+
+extern struct dump_config dump_config;
+
+/* Used to save the dump config across a reboot for 2-stage dumps:
+ *
+ * Note: The scheme, format, compression and device type should be
+ * registered at bootup, for this config to be sharable across soft-boot.
+ * The function addresses could have changed and become invalid, and
+ * need to be set up again.
+ */
+struct dump_config_block {
+ u64 magic; /* for a quick sanity check after reboot */
+ struct dump_memdev memdev; /* handle to dump stored in memory */
+ struct dump_config config;
+ struct dumper dumper;
+ struct dump_scheme scheme;
+ struct dump_fmt fmt;
+ struct __dump_compress compress;
+ struct dump_data_filter filter_table[MAX_PASSES];
+ struct dump_anydev dev[MAX_DEVS]; /* target dump device */
+};
+
+
+/* Wrappers that invoke the methods for the current (active) dumper */
+
+/* Scheme operations */
+
+static inline int dump_sequencer(void)
+{
+ return dump_config.dumper->scheme->ops->sequencer();
+}
+
+static inline int dump_iterator(int pass, int (*action)(unsigned long,
+ unsigned long), struct dump_data_filter *filter)
+{
+ return dump_config.dumper->scheme->ops->iterator(pass, action, filter);
+}
+
+#define dump_save_data dump_config.dumper->scheme->ops->save_data
+#define dump_skip_data dump_config.dumper->scheme->ops->skip_data
+
+static inline int dump_write_buffer(void *buf, unsigned long len)
+{
+ return dump_config.dumper->scheme->ops->write_buffer(buf, len);
+}
+
+static inline int dump_configure(unsigned long devid)
+{
+ return dump_config.dumper->scheme->ops->configure(devid);
+}
+
+static inline int dump_unconfigure(void)
+{
+ return dump_config.dumper->scheme->ops->unconfigure();
+}
+
+/* Format operations */
+
+static inline int dump_configure_header(const char *panic_str,
+ const struct pt_regs *regs)
+{
+ return dump_config.dumper->fmt->ops->configure_header(panic_str, regs);
+}
+
+static inline void dump_save_context(int cpu, const struct pt_regs *regs,
+ struct task_struct *tsk)
+{
+ dump_config.dumper->fmt->ops->save_context(cpu, regs, tsk);
+}
+
+static inline int dump_save_this_cpu(const struct pt_regs *regs)
+{
+ int cpu = smp_processor_id();
+
+ dump_save_context(cpu, regs, current);
+ return 1;
+}
+
+static inline int dump_update_header(void)
+{
+ return dump_config.dumper->fmt->ops->update_header();
+}
+
+static inline int dump_update_end_marker(void)
+{
+ return dump_config.dumper->fmt->ops->update_end_marker();
+}
+
+static inline int dump_add_data(unsigned long loc, unsigned long sz)
+{
+ return dump_config.dumper->fmt->ops->add_data(loc, sz);
+}
+
+/* Compression operation */
+static inline int dump_compress_data(char *src, int slen, char *dst)
+{
+ return dump_config.dumper->compress->compress_func(src, slen,
+ dst, DUMP_DPC_PAGE_SIZE);
+}
+
+
+/* Prototypes of some default implementations of dump methods */
+
+extern struct __dump_compress dump_none_compression;
+
+/* Default scheme methods (dump_scheme.c) */
+
+extern int dump_generic_sequencer(void);
+extern int dump_page_iterator(int pass, int (*action)(unsigned long, unsigned
+ long), struct dump_data_filter *filter);
+extern int dump_generic_save_data(unsigned long loc, unsigned long sz);
+extern int dump_generic_skip_data(unsigned long loc, unsigned long sz);
+extern int dump_generic_write_buffer(void *buf, unsigned long len);
+extern int dump_generic_configure(unsigned long);
+extern int dump_generic_unconfigure(void);
+
+/* Default scheme template */
+extern struct dump_scheme dump_scheme_singlestage;
+
+/* Default dump format methods */
+
+extern int dump_lcrash_configure_header(const char *panic_str,
+ const struct pt_regs *regs);
+extern void dump_lcrash_save_context(int cpu, const struct pt_regs *regs,
+ struct task_struct *tsk);
+extern int dump_generic_update_header(void);
+extern int dump_lcrash_add_data(unsigned long loc, unsigned long sz);
+extern int dump_lcrash_update_end_marker(void);
+
+/* Default format (lcrash) template */
+extern struct dump_fmt dump_fmt_lcrash;
+
+/* Default dump selection filter table */
+
+/*
+ * Entries are listed in order of importance and correspond to passes.
+ * The last entry (with a level_mask of zero) typically reflects data
+ * that won't be dumped -- this may, for example, be used to identify
+ * data that is certain to be skipped, so that the corresponding memory
+ * areas can be utilized as scratch space.
+ */
+extern struct dump_data_filter dump_filter_table[];
+
+/* Some pre-defined dumpers */
+extern struct dumper dumper_singlestage;
+extern struct dumper dumper_stage1;
+extern struct dumper dumper_stage2;
+
+/* These are temporary */
+#define DUMP_MASK_HEADER DUMP_LEVEL_HEADER
+#define DUMP_MASK_KERN DUMP_LEVEL_KERN
+#define DUMP_MASK_USED DUMP_LEVEL_USED
+#define DUMP_MASK_UNUSED DUMP_LEVEL_ALL_RAM
+#define DUMP_MASK_REST 0 /* dummy for now */
+
+/* Helpers - move these to dump.h later ? */
+
+int dump_generic_execute(const char *panic_str, const struct pt_regs *regs);
+extern int dump_ll_write(void *buf, unsigned long len);
+int dump_check_and_free_page(struct dump_memdev *dev, struct page *page);
+
+static inline void dumper_reset(void)
+{
+ dump_config.dumper->curr_buf = dump_config.dumper->dump_buf;
+ dump_config.dumper->curr_loc = 0;
+ dump_config.dumper->curr_offset = 0;
+ dump_config.dumper->count = 0;
+ dump_config.dumper->curr_pass = 0;
+}
+
+/*
+ * May later be moulded to perform boot-time allocations so we can dump
+ * earlier during bootup
+ */
+static inline void *dump_alloc_mem(unsigned long size)
+{
+ return kmalloc(size, GFP_KERNEL);
+}
+
+static inline void dump_free_mem(void *buf)
+{
+ struct page *page;
+
+ /* ignore reserved pages (e.g. post soft boot stage) */
+ if (buf && (page = virt_to_page(buf))) {
+ if (PageReserved(page))
+ return;
+ }
+
+ kfree(buf);
+}
+
+
+#endif /* _LINUX_DUMP_METHODS_H */
--- /dev/null
+/*
+ * Implements the dump driver interface for saving a dump via network
+ * interface.
+ *
+ * Some of this code has been taken/adapted from Ingo Molnar's netconsole
+ * code. LKCD team expresses its thanks to Ingo.
+ *
+ * Started: June 2002 - Mohamed Abbas <mohamed.abbas@intel.com>
+ * Adapted netconsole code to implement LKCD dump over the network.
+ *
+ * Nov 2002 - Bharata B. Rao <bharata@in.ibm.com>
+ * Innumerable code cleanups, simplification and some fixes.
+ * Netdump configuration done by ioctl instead of using module parameters.
+ *
+ * Copyright (C) 2001 Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#include <net/tcp.h>
+#include <net/udp.h>
+#include <linux/delay.h>
+#include <linux/random.h>
+#include <linux/reboot.h>
+#include <linux/module.h>
+#include <linux/dump.h>
+#include <linux/dump_netdev.h>
+#include <linux/percpu.h>
+
+#include <asm/unaligned.h>
+
+static int startup_handshake;
+static int page_counter;
+static struct net_device *dump_ndev;
+static struct in_device *dump_in_dev;
+static u16 source_port, target_port;
+static u32 source_ip, target_ip;
+static unsigned char daddr[6] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
+static spinlock_t dump_skb_lock = SPIN_LOCK_UNLOCKED;
+static int dump_nr_skbs;
+static struct sk_buff *dump_skb;
+static unsigned long flags_global;
+static int netdump_in_progress;
+static char device_name[IFNAMSIZ];
+
+/*
+ * security depends on the trusted path between the netconsole
+ * server and netconsole client, since none of the packets are
+ * encrypted. The random magic number protects the protocol
+ * against spoofing.
+ */
+static u64 dump_magic;
+
+#define MAX_UDP_CHUNK 1460
+#define MAX_PRINT_CHUNK (MAX_UDP_CHUNK-HEADER_LEN)
+
+/*
+ * We maintain a small pool of fully-sized skbs,
+ * to make sure the message gets out even in
+ * extreme OOM situations.
+ */
+#define DUMP_MAX_SKBS 32
+
+#define MAX_SKB_SIZE \
+ (MAX_UDP_CHUNK + sizeof(struct udphdr) + \
+ sizeof(struct iphdr) + sizeof(struct ethhdr))
+
+static void
+dump_refill_skbs(void)
+{
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dump_skb_lock, flags);
+ while (dump_nr_skbs < DUMP_MAX_SKBS) {
+ skb = alloc_skb(MAX_SKB_SIZE, GFP_ATOMIC);
+ if (!skb)
+ break;
+ if (dump_skb)
+ skb->next = dump_skb;
+ else
+ skb->next = NULL;
+ dump_skb = skb;
+ dump_nr_skbs++;
+ }
+ spin_unlock_irqrestore(&dump_skb_lock, flags);
+}
+
+static struct sk_buff *
+dump_get_skb(void)
+{
+ struct sk_buff *skb;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dump_skb_lock, flags);
+ skb = dump_skb;
+ if (skb) {
+ dump_skb = skb->next;
+ skb->next = NULL;
+ dump_nr_skbs--;
+ }
+ spin_unlock_irqrestore(&dump_skb_lock, flags);
+
+ return skb;
+}
+
+/*
+ * Zap completed output skbs.
+ */
+static void
+zap_completion_queue(void)
+{
+ int count;
+ unsigned long flags;
+ struct softnet_data *sd;
+
+ count=0;
+ sd = &__get_cpu_var(softnet_data);
+ if (sd->completion_queue) {
+ struct sk_buff *clist;
+
+ local_irq_save(flags);
+ clist = sd->completion_queue;
+ sd->completion_queue = NULL;
+ local_irq_restore(flags);
+
+ while (clist != NULL) {
+ struct sk_buff *skb = clist;
+ clist = clist->next;
+ __kfree_skb(skb);
+ count++;
+ if (count > 10000)
+ printk("Error in sk list\n");
+ }
+ }
+}
+
+static void
+dump_send_skb(struct net_device *dev, const char *msg, unsigned int msg_len,
+ reply_t *reply)
+{
+ int once = 1;
+ int total_len, eth_len, ip_len, udp_len, count = 0;
+ struct sk_buff *skb;
+ struct udphdr *udph;
+ struct iphdr *iph;
+ struct ethhdr *eth;
+
+ udp_len = msg_len + HEADER_LEN + sizeof(*udph);
+ ip_len = eth_len = udp_len + sizeof(*iph);
+ total_len = eth_len + ETH_HLEN;
+
+repeat_loop:
+ zap_completion_queue();
+ if (dump_nr_skbs < DUMP_MAX_SKBS)
+ dump_refill_skbs();
+
+ skb = alloc_skb(total_len, GFP_ATOMIC);
+ if (!skb) {
+ skb = dump_get_skb();
+ if (!skb) {
+ count++;
+ if (once && (count == 1000000)) {
+ printk("possibly FATAL: out of netconsole "
+ "skbs!!! will keep retrying.\n");
+ once = 0;
+ }
+ dev->poll_controller(dev);
+ goto repeat_loop;
+ }
+ }
+
+ atomic_set(&skb->users, 1);
+ skb_reserve(skb, total_len - msg_len - HEADER_LEN);
+ skb->data[0] = NETCONSOLE_VERSION;
+
+ put_unaligned(htonl(reply->nr), (u32 *) (skb->data + 1));
+ put_unaligned(htonl(reply->code), (u32 *) (skb->data + 5));
+ put_unaligned(htonl(reply->info), (u32 *) (skb->data + 9));
+
+ memcpy(skb->data + HEADER_LEN, msg, msg_len);
+ skb->len += msg_len + HEADER_LEN;
+
+ udph = (struct udphdr *) skb_push(skb, sizeof(*udph));
+ udph->source = source_port;
+ udph->dest = target_port;
+ udph->len = htons(udp_len);
+ udph->check = 0;
+
+ iph = (struct iphdr *)skb_push(skb, sizeof(*iph));
+
+ iph->version = 4;
+ iph->ihl = 5;
+ iph->tos = 0;
+ iph->tot_len = htons(ip_len);
+ iph->id = 0;
+ iph->frag_off = 0;
+ iph->ttl = 64;
+ iph->protocol = IPPROTO_UDP;
+ iph->check = 0;
+ iph->saddr = source_ip;
+ iph->daddr = target_ip;
+ iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
+
+ eth = (struct ethhdr *) skb_push(skb, ETH_HLEN);
+
+ eth->h_proto = htons(ETH_P_IP);
+ memcpy(eth->h_source, dev->dev_addr, dev->addr_len);
+ memcpy(eth->h_dest, daddr, dev->addr_len);
+
+ count=0;
+repeat_poll:
+ spin_lock(&dev->xmit_lock);
+ dev->xmit_lock_owner = smp_processor_id();
+
+ count++;
+
+
+ if (netif_queue_stopped(dev)) {
+ dev->xmit_lock_owner = -1;
+ spin_unlock(&dev->xmit_lock);
+
+ dev->poll_controller(dev);
+ zap_completion_queue();
+
+
+ goto repeat_poll;
+ }
+
+ dev->hard_start_xmit(skb, dev);
+
+ dev->xmit_lock_owner = -1;
+ spin_unlock(&dev->xmit_lock);
+}
+
+static unsigned short
+udp_check(struct udphdr *uh, int len, unsigned long saddr, unsigned long daddr,
+ unsigned long base)
+{
+ return csum_tcpudp_magic(saddr, daddr, len, IPPROTO_UDP, base);
+}
+
+static int
+udp_checksum_init(struct sk_buff *skb, struct udphdr *uh,
+ unsigned short ulen, u32 saddr, u32 daddr)
+{
+ if (uh->check == 0) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ } else if (skb->ip_summed == CHECKSUM_HW) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ if (!udp_check(uh, ulen, saddr, daddr, skb->csum))
+ return 0;
+ skb->ip_summed = CHECKSUM_NONE;
+ }
+ if (skb->ip_summed != CHECKSUM_UNNECESSARY)
+ skb->csum = csum_tcpudp_nofold(saddr, daddr, ulen,
+ IPPROTO_UDP, 0);
+ /* Probably, we should checksum udp header (it should be in cache
+ * in any case) and data in tiny packets (< rx copybreak).
+ */
+ return 0;
+}
+
+static __inline__ int
+__udp_checksum_complete(struct sk_buff *skb)
+{
+ return (unsigned short)csum_fold(skb_checksum(skb, 0, skb->len,
+ skb->csum));
+}
+
+static __inline__
+int udp_checksum_complete(struct sk_buff *skb)
+{
+ return skb->ip_summed != CHECKSUM_UNNECESSARY &&
+ __udp_checksum_complete(skb);
+}
+
+int new_req = 0;
+static req_t req;
+
+static int
+dump_rx_hook(struct sk_buff *skb)
+{
+ int proto;
+ struct iphdr *iph;
+ struct udphdr *uh;
+ __u32 len, saddr, daddr, ulen;
+ req_t *__req;
+
+ /*
+ * First check if we are dumping or doing the startup handshake;
+ * if not, return quickly.
+ */
+ if (!netdump_in_progress)
+ return NET_RX_SUCCESS;
+
+ if (skb->dev->type != ARPHRD_ETHER)
+ goto out;
+
+ proto = ntohs(skb->mac.ethernet->h_proto);
+ if (proto != ETH_P_IP)
+ goto out;
+
+ if (skb->pkt_type == PACKET_OTHERHOST)
+ goto out;
+
+ if (skb_shared(skb))
+ goto out;
+
+ /* IP header correctness testing: */
+ iph = (struct iphdr *)skb->data;
+ if (!pskb_may_pull(skb, sizeof(struct iphdr)))
+ goto out;
+
+ if (iph->ihl < 5 || iph->version != 4)
+ goto out;
+
+ if (!pskb_may_pull(skb, iph->ihl*4))
+ goto out;
+
+ if (ip_fast_csum((u8 *)iph, iph->ihl) != 0)
+ goto out;
+
+ len = ntohs(iph->tot_len);
+ if (skb->len < len || len < iph->ihl*4)
+ goto out;
+
+ saddr = iph->saddr;
+ daddr = iph->daddr;
+ if (iph->protocol != IPPROTO_UDP)
+ goto out;
+
+ if (source_ip != daddr)
+ goto out;
+
+ if (target_ip != saddr)
+ goto out;
+
+ len -= iph->ihl*4;
+ uh = (struct udphdr *)(((char *)iph) + iph->ihl*4);
+ ulen = ntohs(uh->len);
+
+ if (ulen != len || ulen < (sizeof(*uh) + sizeof(*__req)))
+ goto out;
+
+ if (udp_checksum_init(skb, uh, ulen, saddr, daddr) < 0)
+ goto out;
+
+ if (udp_checksum_complete(skb))
+ goto out;
+
+ if (source_port != uh->dest)
+ goto out;
+
+ if (target_port != uh->source)
+ goto out;
+
+ __req = (req_t *)(uh + 1);
+ if ((ntohl(__req->command) != COMM_GET_MAGIC) &&
+ (ntohl(__req->command) != COMM_HELLO) &&
+ (ntohl(__req->command) != COMM_START_WRITE_NETDUMP_ACK) &&
+ (ntohl(__req->command) != COMM_START_NETDUMP_ACK) &&
+ (memcmp(&__req->magic, &dump_magic, sizeof(dump_magic)) != 0))
+ goto out;
+
+ req.magic = ntohl(__req->magic);
+ req.command = ntohl(__req->command);
+ req.from = ntohl(__req->from);
+ req.to = ntohl(__req->to);
+ req.nr = ntohl(__req->nr);
+ new_req = 1;
+out:
+ return NET_RX_DROP;
+}
+
+static void
+dump_send_mem(struct net_device *dev, req_t *req, const char* buff, size_t len)
+{
+ int i;
+ int nr_chunks = len / 1024;
+ reply_t reply;
+
+ reply.nr = req->nr;
+ reply.info = 0;
+
+ if (nr_chunks <= 0)
+ nr_chunks = 1;
+ for (i = 0; i < nr_chunks; i++) {
+ unsigned int offset = i*1024;
+ reply.code = REPLY_MEM;
+ reply.info = offset;
+ dump_send_skb(dev, buff + offset, 1024, &reply);
+ }
+}
+
+/*
+ * This function waits for the client to acknowledge the receipt
+ * of the netdump startup reply, with the possibility of packets
+ * getting lost. We resend the startup packet if no ACK is received,
+ * after a 1 second delay.
+ *
+ * (The client can test the success of the handshake via the HELLO
+ * command, and send ACKs until we enter netdump mode.)
+ */
+static int
+dump_handshake(struct dump_dev *net_dev)
+{
+ char tmp[200];
+ reply_t reply;
+ int i, j;
+
+ if (startup_handshake) {
+ sprintf(tmp, "NETDUMP start, waiting for start-ACK.\n");
+ reply.code = REPLY_START_NETDUMP;
+ reply.nr = 0;
+ reply.info = 0;
+ } else {
+ sprintf(tmp, "NETDUMP start, waiting for start-ACK.\n");
+ reply.code = REPLY_START_WRITE_NETDUMP;
+ reply.nr = net_dev->curr_offset;
+ reply.info = net_dev->curr_offset;
+ }
+
+ /* send 300 handshake packets before declaring failure */
+ for (i = 0; i < 300; i++) {
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+
+ /* wait 1 sec */
+ for (j = 0; j < 10000; j++) {
+ udelay(100);
+ dump_ndev->poll_controller(dump_ndev);
+ zap_completion_queue();
+ if (new_req)
+ break;
+ }
+
+ /*
+ * if there is no new request, try sending the handshaking
+ * packet again
+ */
+ if (!new_req)
+ continue;
+
+ /*
+ * check if the new request is of the expected type,
+ * if so, return, else try sending the handshaking
+ * packet again
+ */
+ if (startup_handshake) {
+ if (req.command == COMM_HELLO || req.command ==
+ COMM_START_NETDUMP_ACK) {
+ return 0;
+ } else {
+ new_req = 0;
+ continue;
+ }
+ } else {
+ if (req.command == COMM_SEND_MEM) {
+ return 0;
+ } else {
+ new_req = 0;
+ continue;
+ }
+ }
+ }
+ return -1;
+}
+
+static ssize_t
+do_netdump(struct dump_dev *net_dev, const char* buff, size_t len)
+{
+ reply_t reply;
+ char tmp[200];
+ ssize_t ret = 0;
+ int repeatCounter, counter, total_loop;
+
+ netdump_in_progress = 1;
+
+ if (dump_handshake(net_dev) < 0) {
+ printk("network dump failed due to handshake failure\n");
+ goto out;
+ }
+
+ /*
+ * Ideally the startup handshake should be done during dump
+ * configuration, i.e., in dump_net_open(). This will be done when I
+ * figure out the dependency between the startup handshake, subsequent
+ * writes and the various commands with respect to the net-server.
+ */
+ if (startup_handshake)
+ startup_handshake = 0;
+
+ counter = 0;
+ repeatCounter = 0;
+ total_loop = 0;
+ while (1) {
+ if (!new_req) {
+ dump_ndev->poll_controller(dump_ndev);
+ zap_completion_queue();
+ }
+ if (!new_req) {
+ repeatCounter++;
+
+ if (repeatCounter > 5) {
+ counter++;
+ if (counter > 10000) {
+ if (total_loop >= 100000) {
+ printk("Timed out, giving up\n");
+ goto out;
+ } else {
+ total_loop++;
+ printk("Try number %d out of "
+ "100000 before timing out\n",
+ total_loop);
+ }
+ }
+ mdelay(1);
+ repeatCounter = 0;
+ }
+ continue;
+ }
+ repeatCounter = 0;
+ counter = 0;
+ total_loop = 0;
+ new_req = 0;
+ switch (req.command) {
+ case COMM_NONE:
+ break;
+
+ case COMM_SEND_MEM:
+ dump_send_mem(dump_ndev, &req, buff, len);
+ break;
+
+ case COMM_EXIT:
+ case COMM_START_WRITE_NETDUMP_ACK:
+ ret = len;
+ goto out;
+
+ case COMM_HELLO:
+ sprintf(tmp, "Hello, this is netdump version "
+ "0.%02d\n", NETCONSOLE_VERSION);
+ reply.code = REPLY_HELLO;
+ reply.nr = req.nr;
+ reply.info = net_dev->curr_offset;
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_GET_PAGE_SIZE:
+ sprintf(tmp, "PAGE_SIZE: %ld\n", PAGE_SIZE);
+ reply.code = REPLY_PAGE_SIZE;
+ reply.nr = req.nr;
+ reply.info = PAGE_SIZE;
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_GET_NR_PAGES:
+ reply.code = REPLY_NR_PAGES;
+ reply.nr = req.nr;
+ reply.info = page_counter;
+ sprintf(tmp, "Number of pages: %ld\n", num_physpages);
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_GET_MAGIC:
+ reply.code = REPLY_MAGIC;
+ reply.nr = req.nr;
+ reply.info = NETCONSOLE_VERSION;
+ dump_send_skb(dump_ndev, (char *)&dump_magic,
+ sizeof(dump_magic), &reply);
+ break;
+
+ default:
+ reply.code = REPLY_ERROR;
+ reply.nr = req.nr;
+ reply.info = req.command;
+ sprintf(tmp, "Got unknown command code %d!\n",
+ req.command);
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+ break;
+ }
+ }
+out:
+ netdump_in_progress = 0;
+ return ret;
+}
+
+static int
+dump_validate_config(void)
+{
+ source_ip = dump_in_dev->ifa_list->ifa_local;
+ if (!source_ip) {
+ printk("network device %s has no local address, "
+ "aborting.\n", device_name);
+ return -1;
+ }
+
+#define IP(x) ((unsigned char *)&source_ip)[x]
+ printk("Source %d.%d.%d.%d", IP(0), IP(1), IP(2), IP(3));
+#undef IP
+
+ if (!source_port) {
+ printk("source_port parameter not specified, aborting.\n");
+ return -1;
+ }
+ printk(":%i\n", source_port);
+ source_port = htons(source_port);
+
+ if (!target_ip) {
+ printk("target_ip parameter not specified, aborting.\n");
+ return -1;
+ }
+
+#define IP(x) ((unsigned char *)&target_ip)[x]
+ printk("Target %d.%d.%d.%d", IP(0), IP(1), IP(2), IP(3));
+#undef IP
+
+ if (!target_port) {
+ printk("target_port parameter not specified, aborting.\n");
+ return -1;
+ }
+ printk(":%i\n", target_port);
+ target_port = htons(target_port);
+
+ printk("Target Ethernet Address %02x:%02x:%02x:%02x:%02x:%02x",
+ daddr[0], daddr[1], daddr[2], daddr[3], daddr[4], daddr[5]);
+
+ if ((daddr[0] & daddr[1] & daddr[2] & daddr[3] & daddr[4] &
+ daddr[5]) == 255)
+ printk("(Broadcast)");
+ printk("\n");
+ return 0;
+}
+
+/*
+ * Prepares the dump device so we can take a dump later.
+ * Validates the netdump configuration parameters.
+ *
+ * TODO: Network connectivity check should be done here.
+ */
+static int
+dump_net_open(struct dump_dev *net_dev, unsigned long arg)
+{
+ int retval = 0;
+
+ /* get the interface name */
+ if (copy_from_user(device_name, (void *)arg, IFNAMSIZ))
+ return -EFAULT;
+
+ if (!(dump_ndev = dev_get_by_name(device_name))) {
+ printk("network device %s does not exist, aborting.\n",
+ device_name);
+ return -ENODEV;
+ }
+
+ if (!dump_ndev->poll_controller) {
+ printk("network device %s does not implement polling yet, "
+ "aborting.\n", device_name);
+ retval = -1; /* return proper error */
+ goto err1;
+ }
+
+ if (!(dump_in_dev = in_dev_get(dump_ndev))) {
+ printk("network device %s is not an IP protocol device, "
+ "aborting.\n", device_name);
+ retval = -EINVAL;
+ goto err1;
+ }
+
+ if ((retval = dump_validate_config()) < 0)
+ goto err2;
+
+ net_dev->curr_offset = 0;
+ printk("Network device %s successfully configured for dumping\n",
+ device_name);
+ return retval;
+err2:
+ in_dev_put(dump_in_dev);
+err1:
+ dev_put(dump_ndev);
+ return retval;
+}
+
+/*
+ * Close the dump device and release associated resources
+ * Invoked when unconfiguring the dump device.
+ */
+static int
+dump_net_release(struct dump_dev *net_dev)
+{
+ if (dump_in_dev)
+ in_dev_put(dump_in_dev);
+ if (dump_ndev)
+ dev_put(dump_ndev);
+ return 0;
+}
+
+/*
+ * Prepare the dump device for use (silence any ongoing activity
+ * and quiesce state) when the system crashes.
+ */
+static int
+dump_net_silence(struct dump_dev *net_dev)
+{
+ netpoll_set_trap(1);
+ local_irq_save(flags_global);
+ dump_ndev->rx_hook = dump_rx_hook;
+ startup_handshake = 1;
+ net_dev->curr_offset = 0;
+ printk("Dumping to network device %s on CPU %d ...\n", device_name,
+ smp_processor_id());
+ return 0;
+}
+
+/*
+ * Invoked when dumping is done. This is the time to put things back
+ * (i.e. undo the effects of dump_net_silence) so the device is
+ * available for normal use.
+ */
+static int
+dump_net_resume(struct dump_dev *net_dev)
+{
+ int indx;
+ reply_t reply;
+ char tmp[200];
+
+ if (!dump_ndev)
+ return (0);
+
+ sprintf(tmp, "NETDUMP end.\n");
+ for (indx = 0; indx < 6; indx++) {
+ reply.code = REPLY_END_NETDUMP;
+ reply.nr = 0;
+ reply.info = 0;
+ dump_send_skb(dump_ndev, tmp, strlen(tmp), &reply);
+ }
+ printk("NETDUMP END!\n");
+ local_irq_restore(flags_global);
+ netpoll_set_trap(0);
+ dump_ndev->rx_hook = NULL;
+ startup_handshake = 0;
+ return 0;
+}
+
+/*
+ * Seek to the specified offset in the dump device.
+ * Makes sure this is a valid offset, otherwise returns an error.
+ */
+static int
+dump_net_seek(struct dump_dev *net_dev, loff_t off)
+{
+ /*
+ * For now, using DUMP_HEADER_OFFSET as a hard-coded value.
+ * See dump_block_seek in dump_blockdev.c for how to do
+ * this properly.
+ */
+ net_dev->curr_offset = off;
+ return 0;
+}
+
+/*
+ * Write out a buffer to the dump device, one page at a time.
+ */
+static int
+dump_net_write(struct dump_dev *net_dev, void *buf, unsigned long len)
+{
+ int cnt, i, off;
+ ssize_t ret;
+
+ cnt = len / PAGE_SIZE;
+
+ for (i = 0; i < cnt; i++) {
+ off = i * PAGE_SIZE;
+ ret = do_netdump(net_dev, buf+off, PAGE_SIZE);
+ if (ret <= 0)
+ return -1;
+ net_dev->curr_offset = net_dev->curr_offset + PAGE_SIZE;
+ }
+ return len;
+}
+
+/*
+ * check if the last dump i/o is over and ready for next request
+ */
+static int
+dump_net_ready(struct dump_dev *net_dev, void *buf)
+{
+ return 0;
+}
+
+/*
+ * ioctl function used for configuring network dump
+ */
+static int
+dump_net_ioctl(struct dump_dev *net_dev, unsigned int cmd, unsigned long arg)
+{
+ switch (cmd) {
+ case DIOSTARGETIP:
+ target_ip = arg;
+ break;
+ case DIOSTARGETPORT:
+ target_port = (u16)arg;
+ break;
+ case DIOSSOURCEPORT:
+ source_port = (u16)arg;
+ break;
+ case DIOSETHADDR:
+ if (copy_from_user(daddr, (void *)arg, 6))
+ return -EFAULT;
+ break;
+ case DIOGTARGETIP:
+ case DIOGTARGETPORT:
+ case DIOGSOURCEPORT:
+ case DIOGETHADDR:
+ break;
+ default:
+ return -EINVAL;
+ }
+ return 0;
+}
+
+struct dump_dev_ops dump_netdev_ops = {
+ .open = dump_net_open,
+ .release = dump_net_release,
+ .silence = dump_net_silence,
+ .resume = dump_net_resume,
+ .seek = dump_net_seek,
+ .write = dump_net_write,
+ /* .read not implemented */
+ .ready = dump_net_ready,
+ .ioctl = dump_net_ioctl
+};
+
+static struct dump_dev default_dump_netdev = {
+ .type_name = "networkdev",
+ .ops = &dump_netdev_ops,
+ .curr_offset = 0
+};
+
+static int __init
+dump_netdev_init(void)
+{
+ default_dump_netdev.curr_offset = 0;
+
+ if (dump_register_device(&default_dump_netdev) < 0) {
+ printk("network dump device driver registration failed\n");
+ return -1;
+ }
+ printk("network device driver for LKCD registered\n");
+
+ get_random_bytes(&dump_magic, sizeof(dump_magic));
+ return 0;
+}
+
+static void __exit
+dump_netdev_cleanup(void)
+{
+ dump_unregister_device(&default_dump_netdev);
+}
+
+MODULE_AUTHOR("LKCD Development Team <lkcd-devel@lists.sourceforge.net>");
+MODULE_DESCRIPTION("Network Dump Driver for Linux Kernel Crash Dump (LKCD)");
+MODULE_LICENSE("GPL");
+
+module_init(dump_netdev_init);
+module_exit(dump_netdev_cleanup);
--- /dev/null
+/*
+ * Two-stage soft-boot based dump scheme methods (memory overlay
+ * with post soft-boot writeout)
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ *
+ * This approach of saving the dump in memory and writing it
+ * out after a softboot without clearing memory is derived from the
+ * Mission Critical Linux dump implementation. Credits and a big
+ * thanks for letting the lkcd project make use of the excellent
+ * piece of work and also for helping with clarifications and
+ * tips along the way are due to:
+ * Dave Winchell <winchell@mclx.com> (primary author of mcore)
+ * and also to
+ * Jeff Moyer <moyer@mclx.com>
+ * Josh Huber <huber@mclx.com>
+ *
+ * For those familiar with the mcore implementation, the key
+ * differences/extensions here are in allowing entire memory to be
+ * saved (in compressed form) through a careful ordering scheme
+ * on both the way down as well as on the way up after boot, the latter
+ * for supporting the LKCD notion of passes in which most critical
+ * data is the first to be saved to the dump device. Also the post
+ * boot writeout happens from within the kernel rather than driven
+ * from userspace.
+ *
+ * The sequence is orchestrated through the abstraction of "dumpers",
+ * one for the first stage which then sets up the dumper for the next
+ * stage, providing for a smooth and flexible reuse of the singlestage
+ * dump scheme methods and a handle to pass dump device configuration
+ * information across the soft boot.
+ *
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * Disruptive dumping using the second kernel soft-boot option
+ * for issuing dump i/o operates in 2 stages:
+ *
+ * (1) - Saves the (compressed & formatted) dump in memory using a
+ * carefully ordered overlay scheme designed to capture the
+ * entire physical memory or selective portions depending on
+ * dump config settings,
+ * - Registers the stage 2 dumper and
+ * - Issues a soft reboot w/o clearing memory.
+ *
+ * The overlay scheme starts with a small bootstrap free area
+ * and follows a reverse ordering of passes wherein it
+ * compresses and saves data starting with the least critical
+ * areas first, thus freeing up the corresponding pages to
+ * serve as destination for subsequent data to be saved, and
+ * so on. With a good compression ratio, this makes it feasible
+ * to capture an entire physical memory dump without significantly
+ * reducing memory available during regular operation.
+ *
+ * (2) Post soft-reboot, runs through the saved memory dump and
+ * writes it out to disk, this time around, taking care to
+ * save the more critical data first (i.e. pages which figure
+ * in early passes for a regular dump). Finally issues a
+ * clean reboot.
+ *
+ * Since the data was saved in memory after selection/filtering
+ * and formatted as per the chosen output dump format, at this
+ * stage the filter and format actions are just dummy (or
+ * passthrough) actions, except for influence on ordering of
+ * passes.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/bootmem.h>
+#include <linux/dump.h>
+#ifdef CONFIG_KEXEC
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/kexec.h>
+#endif
+#include "dump_methods.h"
+
+extern struct list_head dumper_list_head;
+extern struct dump_memdev *dump_memdev;
+extern struct dumper dumper_stage2;
+struct dump_config_block *dump_saved_config = NULL;
+extern struct dump_blockdev *dump_blockdev;
+static struct dump_memdev *saved_dump_memdev = NULL;
+static struct dumper *saved_dumper = NULL;
+
+#ifdef CONFIG_KEXEC
+extern int panic_timeout;
+#endif
+
+/* For testing
+extern void dump_display_map(struct dump_memdev *);
+*/
+
+struct dumper *dumper_by_name(char *name)
+{
+#ifdef LATER
+ struct dumper *dumper;
+ list_for_each_entry(dumper, &dumper_list_head, dumper_list)
+ if (!strncmp(dumper->name, name, 32))
+ return dumper;
+
+ /* not found */
+ return NULL;
+#endif
+ /* Temporary proof of concept */
+ if (!strncmp(dumper_stage2.name, name, 32))
+ return &dumper_stage2;
+ else
+ return NULL;
+}
+
+#ifdef CONFIG_CRASH_DUMP_SOFTBOOT
+extern void dump_early_reserve_map(struct dump_memdev *);
+
+void crashdump_reserve(void)
+{
+ extern unsigned long crashdump_addr;
+
+ if (crashdump_addr == 0xdeadbeef)
+ return;
+
+ /* reserve dump config and saved dump pages */
+ dump_saved_config = (struct dump_config_block *)crashdump_addr;
+ /* magic verification */
+ if (dump_saved_config->magic != DUMP_MAGIC_LIVE) {
+ printk("Invalid dump magic. Ignoring dump\n");
+ dump_saved_config = NULL;
+ return;
+ }
+
+ printk("Dump may be available from previous boot\n");
+
+ reserve_bootmem(virt_to_phys((void *)crashdump_addr),
+ PAGE_ALIGN(sizeof(struct dump_config_block)));
+ dump_early_reserve_map(&dump_saved_config->memdev);
+
+}
+#endif
+
+/*
+ * Loads the dump configuration from a memory block saved across the
+ * soft-boot. The ops vectors need fixing up, as the corresponding
+ * routines may have been relocated in the new soft-booted kernel.
+ */
+int dump_load_config(struct dump_config_block *config)
+{
+ struct dumper *dumper;
+ struct dump_data_filter *filter_table, *filter;
+ struct dump_dev *dev;
+ int i;
+
+ if (config->magic != DUMP_MAGIC_LIVE)
+ return -ENOENT; /* not a valid config */
+
+ /* initialize generic config data */
+ memcpy(&dump_config, &config->config, sizeof(dump_config));
+
+ /* initialize dumper state */
+ if (!(dumper = dumper_by_name(config->dumper.name))) {
+ printk("dumper name mismatch\n");
+ return -ENOENT; /* dumper mismatch */
+ }
+
+ /* verify and fixup schema */
+ if (strncmp(dumper->scheme->name, config->scheme.name, 32)) {
+ printk("dumper scheme mismatch\n");
+ return -ENOENT; /* mismatch */
+ }
+ config->scheme.ops = dumper->scheme->ops;
+ config->dumper.scheme = &config->scheme;
+
+ /* verify and fixup filter operations */
+ filter_table = dumper->filter;
+ for (i = 0, filter = config->filter_table;
+ ((i < MAX_PASSES) && filter_table[i].selector);
+ i++, filter++) {
+ if (strncmp(filter_table[i].name, filter->name, 32)) {
+ printk("dump filter mismatch\n");
+ return -ENOENT; /* filter name mismatch */
+ }
+ filter->selector = filter_table[i].selector;
+ }
+ config->dumper.filter = config->filter_table;
+
+ /* fixup format */
+ if (strncmp(dumper->fmt->name, config->fmt.name, 32)) {
+ printk("dump format mismatch\n");
+ return -ENOENT; /* mismatch */
+ }
+ config->fmt.ops = dumper->fmt->ops;
+ config->dumper.fmt = &config->fmt;
+
+ /* fixup target device */
+ dev = (struct dump_dev *)(&config->dev[0]);
+ if (dumper->dev == NULL) {
+ pr_debug("Vanilla dumper - assume default\n");
+ if (dump_dev == NULL)
+ return -ENODEV;
+ dumper->dev = dump_dev;
+ }
+
+ if (strncmp(dumper->dev->type_name, dev->type_name, 32)) {
+ printk("dump dev type mismatch %s instead of %s\n",
+ dev->type_name, dumper->dev->type_name);
+ return -ENOENT; /* mismatch */
+ }
+ dev->ops = dumper->dev->ops;
+ config->dumper.dev = dev;
+
+ /* fixup memory device containing saved dump pages */
+ /* assume statically init'ed dump_memdev */
+ config->memdev.ddev.ops = dump_memdev->ddev.ops;
+ /* switch to memdev from prev boot */
+ saved_dump_memdev = dump_memdev; /* remember current */
+ dump_memdev = &config->memdev;
+
+ /* Make this the current primary dumper */
+ dump_config.dumper = &config->dumper;
+
+ return 0;
+}
+
+/* Saves the dump configuration in a memory block for use across a soft-boot */
+int dump_save_config(struct dump_config_block *config)
+{
+ printk("saving dump config settings\n");
+
+ /* dump config settings */
+ memcpy(&config->config, &dump_config, sizeof(dump_config));
+
+ /* dumper state */
+ memcpy(&config->dumper, dump_config.dumper, sizeof(struct dumper));
+ memcpy(&config->scheme, dump_config.dumper->scheme,
+ sizeof(struct dump_scheme));
+ memcpy(&config->fmt, dump_config.dumper->fmt, sizeof(struct dump_fmt));
+ memcpy(&config->dev[0], dump_config.dumper->dev,
+ sizeof(struct dump_anydev));
+ memcpy(&config->filter_table, dump_config.dumper->filter,
+ sizeof(struct dump_data_filter)*MAX_PASSES);
+
+ /* handle to saved mem pages */
+ memcpy(&config->memdev, dump_memdev, sizeof(struct dump_memdev));
+
+ config->magic = DUMP_MAGIC_LIVE;
+
+ return 0;
+}
+
+int dump_init_stage2(struct dump_config_block *saved_config)
+{
+ int err = 0;
+
+ pr_debug("dump_init_stage2\n");
+ /* Check if dump from previous boot exists */
+ if (saved_config) {
+ printk("loading dumper from previous boot\n");
+ /* load and configure dumper from previous boot */
+ if ((err = dump_load_config(saved_config)))
+ return err;
+
+ if (!dump_oncpu) {
+ if ((err = dump_configure(dump_config.dump_device))) {
+ printk("Stage 2 dump configure failed\n");
+ return err;
+ }
+ }
+
+ dumper_reset();
+ dump_dev = dump_config.dumper->dev;
+ /* write out the dump */
+ err = dump_generic_execute(NULL, NULL);
+
+ dump_saved_config = NULL;
+
+ if (!dump_oncpu) {
+ dump_unconfigure();
+ }
+
+ return err;
+
+ } else {
+ /* no dump to write out */
+ printk("no dumper from previous boot\n");
+ return 0;
+ }
+}
+
+extern void dump_mem_markpages(struct dump_memdev *);
+
+int dump_switchover_stage(void)
+{
+ int ret = 0;
+
+ /* trigger stage 2 right away - in real life this runs after soft-boot */
+ /* dump_saved_config would be a boot param */
+ saved_dump_memdev = dump_memdev;
+ saved_dumper = dump_config.dumper;
+ ret = dump_init_stage2(dump_saved_config);
+ dump_memdev = saved_dump_memdev;
+ dump_config.dumper = saved_dumper;
+ return ret;
+}
+
+int dump_activate_softboot(void)
+{
+ int err = 0;
+#ifdef CONFIG_KEXEC
+ int num_cpus_online = 0;
+ struct kimage *image;
+#endif
+
+ /* temporary - switchover to writeout previously saved dump */
+#ifndef CONFIG_KEXEC
+ err = dump_switchover_stage(); /* non-disruptive case */
+ if (dump_oncpu)
+ dump_config.dumper = &dumper_stage1; /* set things back */
+
+ return err;
+#else
+
+ dump_silence_level = DUMP_HALT_CPUS;
+
+ /* spin until we are the only online cpu */
+ while ((num_cpus_online = num_online_cpus()) > 1)
+ cpu_relax();
+
+ /* now call into kexec */
+
+ image = xchg(&kexec_image, NULL);
+ if (image) {
+ mdelay(panic_timeout * 1000);
+ machine_kexec(image);
+ }
+
+ /*
+ * TBD/Fixme:
+ * - should we call reboot notifiers? inappropriate for panic?
+ * - what about device_shutdown()?
+ * - is explicit bus master disabling needed, or can we do that
+ * through driverfs?
+ */
+ return 0;
+#endif
+}
+
+/* --- DUMP SCHEME ROUTINES --- */
+
+static inline int dump_buf_pending(struct dumper *dumper)
+{
+ return (dumper->curr_buf - dumper->dump_buf);
+}
+
+/* Invoked during stage 1 of soft-reboot based dumping */
+int dump_overlay_sequencer(void)
+{
+ struct dump_data_filter *filter = dump_config.dumper->filter;
+ struct dump_data_filter *filter2 = dumper_stage2.filter;
+ int pass = 0, err = 0, save = 0;
+ int (*action)(unsigned long, unsigned long);
+
+ /* Make sure gzip compression is being used */
+ if (dump_config.dumper->compress->compress_type != DUMP_COMPRESS_GZIP) {
+ printk("Please set GZIP compression\n");
+ return -EINVAL;
+ }
+
+ /* start filling in dump data right after the header */
+ dump_config.dumper->curr_offset =
+ PAGE_ALIGN(dump_config.dumper->header_len);
+
+ /* Locate the last pass */
+ for (;filter->selector; filter++, pass++);
+
+ /*
+ * Start from the end backwards: overlay involves a reverse
+ * ordering of passes, since less critical pages are more
+ * likely to be reusable as scratch space once we are through
+ * with them.
+ */
+ for (--pass, --filter; pass >= 0; pass--, filter--)
+ {
+ /* Assumes passes are exclusive (even across dumpers) */
+ /* Requires care when coding the selection functions */
+ if ((save = filter->level_mask & dump_config.level))
+ action = dump_save_data;
+ else
+ action = dump_skip_data;
+
+ /* Remember the offset where this pass started */
+ /* The second stage dumper would use this */
+ if (dump_buf_pending(dump_config.dumper) & (PAGE_SIZE - 1)) {
+ pr_debug("Starting pass %d with pending data\n", pass);
+ pr_debug("filling dummy data to page-align it\n");
+ dump_config.dumper->curr_buf = (void *)PAGE_ALIGN(
+ (unsigned long)dump_config.dumper->curr_buf);
+ }
+
+ filter2[pass].start[0] = dump_config.dumper->curr_offset
+ + dump_buf_pending(dump_config.dumper);
+
+ err = dump_iterator(pass, action, filter);
+
+ filter2[pass].end[0] = dump_config.dumper->curr_offset
+ + dump_buf_pending(dump_config.dumper);
+ filter2[pass].num_mbanks = 1;
+
+ if (err < 0) {
+ printk("dump_overlay_seq: failure %d in pass %d\n",
+ err, pass);
+ break;
+ }
+ printk("%d overlay pages %s of %d bytes each in pass %d\n",
+ err, save ? "saved" : "skipped", DUMP_PAGE_SIZE, pass);
+ }
+
+ return err;
+}
+
+/* from dump_memdev.c */
+extern struct page *dump_mem_lookup(struct dump_memdev *dev, unsigned long loc);
+extern struct page *dump_mem_next_page(struct dump_memdev *dev);
+
+static inline struct page *dump_get_saved_page(loff_t loc)
+{
+ return (dump_mem_lookup(dump_memdev, loc >> PAGE_SHIFT));
+}
+
+static inline struct page *dump_next_saved_page(void)
+{
+ return (dump_mem_next_page(dump_memdev));
+}
+
+/*
+ * Iterates over list of saved dump pages. Invoked during second stage of
+ * soft boot dumping
+ *
+ * Observation: If additional selection is desired at this stage then
+ * a different iterator could be written which would advance
+ * to the next page header every time instead of blindly picking up
+ * the data. In such a case loc would be interpreted differently.
+ * At this moment however a blind pass seems sufficient, cleaner and
+ * faster.
+ */
+int dump_saved_data_iterator(int pass, int (*action)(unsigned long,
+ unsigned long), struct dump_data_filter *filter)
+{
+ loff_t loc, end;
+ struct page *page;
+ unsigned long count = 0;
+ int i, err = 0;
+ unsigned long sz;
+
+ for (i = 0; i < filter->num_mbanks; i++) {
+ loc = filter->start[i];
+ end = filter->end[i];
+ printk("pass %d, start offset 0x%llx, end offset 0x%llx\n",
+ pass, loc, end);
+
+ /* loc will get treated as logical offset into stage 1 */
+ page = dump_get_saved_page(loc);
+
+ for (; loc < end; loc += PAGE_SIZE) {
+ dump_config.dumper->curr_loc = loc;
+ if (!page) {
+ printk("no more saved data for pass %d\n",
+ pass);
+ break;
+ }
+ sz = (loc + PAGE_SIZE > end) ? end - loc : PAGE_SIZE;
+
+ if (filter->selector(pass, (unsigned long)page,
+ PAGE_SIZE)) {
+ pr_debug("mem offset 0x%llx\n", loc);
+ if ((err = action((unsigned long)page, sz)))
+ break;
+ else
+ count++;
+ /* clear the contents of page */
+ /* fixme: consider using KM_CRASHDUMP instead */
+ clear_highpage(page);
+
+ }
+ page = dump_next_saved_page();
+ }
+ }
+
+ return err ? err : count;
+}
+
+static inline int dump_overlay_pages_done(struct page *page, int nr)
+{
+ int ret = 0;
+
+ for (; nr ; page++, nr--) {
+ if (dump_check_and_free_page(dump_memdev, page))
+ ret++;
+ }
+ return ret;
+}
+
+int dump_overlay_save_data(unsigned long loc, unsigned long len)
+{
+ int err = 0;
+ struct page *page = (struct page *)loc;
+ static unsigned long cnt = 0;
+
+ if ((err = dump_generic_save_data(loc, len)))
+ return err;
+
+ if (dump_overlay_pages_done(page, len >> PAGE_SHIFT)) {
+ cnt++;
+ if (!(cnt & 0x7f))
+ pr_debug("released page 0x%lx\n", page_to_pfn(page));
+ }
+
+ return err;
+}
+
+
+int dump_overlay_skip_data(unsigned long loc, unsigned long len)
+{
+ struct page *page = (struct page *)loc;
+
+ dump_overlay_pages_done(page, len >> PAGE_SHIFT);
+ return 0;
+}
+
+int dump_overlay_resume(void)
+{
+ int err = 0;
+
+ /*
+ * switch to stage 2 dumper, save dump_config_block
+ * and then trigger a soft-boot
+ */
+ dumper_stage2.header_len = dump_config.dumper->header_len;
+ dump_config.dumper = &dumper_stage2;
+ if ((err = dump_save_config(dump_saved_config)))
+ return err;
+
+ dump_dev = dump_config.dumper->dev;
+
+#ifdef CONFIG_KEXEC
+ /* If we are doing a disruptive dump, activate softboot now */
+ if ((panic_timeout > 0) && !(dump_config.flags & DUMP_FLAGS_NONDISRUPT))
+ err = dump_activate_softboot();
+#endif
+
+ return err;
+}
+
+int dump_overlay_configure(unsigned long devid)
+{
+ struct dump_dev *dev;
+ struct dump_config_block *saved_config = dump_saved_config;
+ int err = 0;
+
+ /* If there is a previously saved dump, write it out first */
+ if (saved_config) {
+ printk("Processing old dump pending writeout\n");
+ err = dump_switchover_stage();
+ if (err) {
+ printk("failed to writeout saved dump\n");
+ return err;
+ }
+ dump_free_mem(saved_config); /* testing only: not after boot */
+ }
+
+ dev = dumper_stage2.dev = dump_config.dumper->dev;
+ /* From here on the intermediate dump target is memory-only */
+ dump_dev = dump_config.dumper->dev = &dump_memdev->ddev;
+ if ((err = dump_generic_configure(0))) {
+ printk("dump generic configure failed: err %d\n", err);
+ return err;
+ }
+ /* temporary */
+ dumper_stage2.dump_buf = dump_config.dumper->dump_buf;
+
+ /* Sanity check on the actual target dump device */
+ if (!dev || (err = dev->ops->open(dev, devid))) {
+ return err;
+ }
+ /* TBD: should we release the target if this is soft-boot only? */
+
+ /* alloc a dump config block area to save across reboot */
+ if (!(dump_saved_config = dump_alloc_mem(sizeof(struct
+ dump_config_block)))) {
+ printk("dump config block alloc failed\n");
+ /* undo configure */
+ dump_generic_unconfigure();
+ return -ENOMEM;
+ }
+ dump_config.dump_addr = (unsigned long)dump_saved_config;
+ printk("Dump config block of size %zu set up at 0x%lx\n",
+ sizeof(*dump_saved_config), (unsigned long)dump_saved_config);
+ return 0;
+}
+
+int dump_overlay_unconfigure(void)
+{
+ struct dump_dev *dev = dumper_stage2.dev;
+ int err = 0;
+
+ pr_debug("dump_overlay_unconfigure\n");
+ /* Close the secondary device */
+ dev->ops->release(dev);
+ pr_debug("released secondary device\n");
+
+ err = dump_generic_unconfigure();
+ pr_debug("Unconfigured generic portions\n");
+ dump_free_mem(dump_saved_config);
+ dump_saved_config = NULL;
+ pr_debug("Freed saved config block\n");
+ dump_dev = dump_config.dumper->dev = dumper_stage2.dev;
+
+ printk("Unconfigured overlay dumper\n");
+ return err;
+}
+
+int dump_staged_unconfigure(void)
+{
+ int err = 0;
+ struct dump_config_block *saved_config = dump_saved_config;
+ struct dump_dev *dev;
+
+ pr_debug("dump_staged_unconfigure\n");
+ err = dump_generic_unconfigure();
+
+ /* now check if there is a saved dump waiting to be written out */
+ if (saved_config) {
+ printk("Processing saved dump pending writeout\n");
+ if ((err = dump_switchover_stage())) {
+ printk("Error in committing saved dump at 0x%lx\n",
+ (unsigned long)saved_config);
+ printk("Old dump may hog memory\n");
+ } else {
+ dump_free_mem(saved_config);
+ pr_debug("Freed saved config block\n");
+ }
+ dump_saved_config = NULL;
+ } else {
+ dev = &dump_memdev->ddev;
+ dev->ops->release(dev);
+ }
+ printk("Unconfigured second stage dumper\n");
+
+ return err;
+}
+
+/* ----- PASSTHRU FILTER ROUTINE --------- */
+
+/* transparent - passes everything through */
+int dump_passthru_filter(int pass, unsigned long loc, unsigned long sz)
+{
+ return 1;
+}
+
+/* ----- PASSTHRU FORMAT ROUTINES ----- */
+
+
+int dump_passthru_configure_header(const char *panic_str, const struct pt_regs *regs)
+{
+ dump_config.dumper->header_dirty++;
+ return 0;
+}
+
+/* Copies bytes of data from page(s) to the specified buffer */
+int dump_copy_pages(void *buf, struct page *page, unsigned long sz)
+{
+ unsigned long len = 0, bytes;
+ void *addr;
+
+ while (len < sz) {
+ addr = kmap_atomic(page, KM_CRASHDUMP);
+ bytes = (sz > len + PAGE_SIZE) ? PAGE_SIZE : sz - len;
+ memcpy(buf, addr, bytes);
+ kunmap_atomic(addr, KM_CRASHDUMP);
+ buf += bytes;
+ len += bytes;
+ page++;
+ }
+ /* memset(dump_config.dumper->curr_buf, 0x57, len); temporary */
+
+ return sz - len;
+}
+
+int dump_passthru_update_header(void)
+{
+ long len = dump_config.dumper->header_len;
+ struct page *page;
+ void *buf = dump_config.dumper->dump_buf;
+ int err = 0;
+
+ if (!dump_config.dumper->header_dirty)
+ return 0;
+
+ pr_debug("Copying header of size %ld bytes from memory\n", len);
+ if (len > DUMP_BUFFER_SIZE)
+ return -E2BIG;
+
+ page = dump_mem_lookup(dump_memdev, 0);
+ for (; (len > 0) && page; buf += PAGE_SIZE, len -= PAGE_SIZE) {
+ if ((err = dump_copy_pages(buf, page, PAGE_SIZE)))
+ return err;
+ page = dump_mem_next_page(dump_memdev);
+ }
+ if (len > 0) {
+ printk("Incomplete header saved in mem\n");
+ return -ENOENT;
+ }
+
+ if ((err = dump_dev_seek(0))) {
+ printk("Unable to seek to dump header offset\n");
+ return err;
+ }
+ err = dump_ll_write(dump_config.dumper->dump_buf,
+ buf - dump_config.dumper->dump_buf);
+ if (err < dump_config.dumper->header_len)
+ return (err < 0) ? err : -ENOSPC;
+
+ dump_config.dumper->header_dirty = 0;
+ return 0;
+}
+
+static loff_t next_dph_offset = 0;
+
+static int dph_valid(struct __dump_page *dph)
+{
+ if ((dph->dp_address & (PAGE_SIZE - 1)) || (dph->dp_flags
+ > DUMP_DH_COMPRESSED) || (!dph->dp_flags) ||
+ (dph->dp_size > PAGE_SIZE)) {
+ printk("dp_address = 0x%llx, dp_size = 0x%x, dp_flags = 0x%x\n",
+ dph->dp_address, dph->dp_size, dph->dp_flags);
+ return 0;
+ }
+ return 1;
+}
+
+int dump_verify_lcrash_data(void *buf, unsigned long sz)
+{
+ struct __dump_page *dph;
+
+ /* sanity check for page headers */
+ while (next_dph_offset + sizeof(*dph) < sz) {
+ dph = (struct __dump_page *)(buf + next_dph_offset);
+ if (!dph_valid(dph)) {
+ printk("Invalid page hdr at offset 0x%llx\n",
+ next_dph_offset);
+ return -EINVAL;
+ }
+ next_dph_offset += dph->dp_size + sizeof(*dph);
+ }
+
+ next_dph_offset -= sz;
+ return 0;
+}
+
+/*
+ * TBD/Later: Consider avoiding the copy by using a scatter/gather
+ * vector representation for the dump buffer
+ */
+int dump_passthru_add_data(unsigned long loc, unsigned long sz)
+{
+ struct page *page = (struct page *)loc;
+ void *buf = dump_config.dumper->curr_buf;
+ int err = 0;
+
+ if ((err = dump_copy_pages(buf, page, sz))) {
+ printk("dump_copy_pages failed\n");
+ return err;
+ }
+
+ if ((err = dump_verify_lcrash_data(buf, sz))) {
+ printk("dump_verify_lcrash_data failed\n");
+ printk("Invalid data for pfn 0x%lx\n", page_to_pfn(page));
+ printk("Page flags 0x%lx\n", page->flags);
+ printk("Page count 0x%x\n", atomic_read(&page->count));
+ return err;
+ }
+
+ dump_config.dumper->curr_buf = buf + sz;
+
+ return 0;
+}
+
+
+/* Stage 1 dumper: Saves compressed dump in memory and soft-boots system */
+
+/* Scheme to overlay saved data in memory for writeout after a soft-boot */
+struct dump_scheme_ops dump_scheme_overlay_ops = {
+ .configure = dump_overlay_configure,
+ .unconfigure = dump_overlay_unconfigure,
+ .sequencer = dump_overlay_sequencer,
+ .iterator = dump_page_iterator,
+ .save_data = dump_overlay_save_data,
+ .skip_data = dump_overlay_skip_data,
+ .write_buffer = dump_generic_write_buffer
+};
+
+struct dump_scheme dump_scheme_overlay = {
+ .name = "overlay",
+ .ops = &dump_scheme_overlay_ops
+};
+
+
+/* Stage 1 must use a good compression scheme - default to gzip */
+extern struct __dump_compress dump_gzip_compression;
+
+struct dumper dumper_stage1 = {
+ .name = "stage1",
+ .scheme = &dump_scheme_overlay,
+ .fmt = &dump_fmt_lcrash,
+ .compress = &dump_none_compression, /* needs to be gzip */
+ .filter = dump_filter_table,
+ .dev = NULL,
+};
+
+/* Stage 2 dumper: Activated after softboot to write out saved dump to device */
+
+/* Formatter that transfers data as is (transparent) w/o further conversion */
+struct dump_fmt_ops dump_fmt_passthru_ops = {
+ .configure_header = dump_passthru_configure_header,
+ .update_header = dump_passthru_update_header,
+ .save_context = NULL, /* unused */
+ .add_data = dump_passthru_add_data,
+ .update_end_marker = dump_lcrash_update_end_marker
+};
+
+struct dump_fmt dump_fmt_passthru = {
+ .name = "passthru",
+ .ops = &dump_fmt_passthru_ops
+};
+
+/* Filter that simply passes along any data within the range (transparent) */
+/* Note: The start and end ranges in the table are filled in at run-time */
+
+extern int dump_filter_none(int pass, unsigned long loc, unsigned long sz);
+
+struct dump_data_filter dump_passthru_filtertable[MAX_PASSES] = {
+{.name = "passkern", .selector = dump_passthru_filter,
+ .level_mask = DUMP_MASK_KERN },
+{.name = "passuser", .selector = dump_passthru_filter,
+ .level_mask = DUMP_MASK_USED },
+{.name = "passunused", .selector = dump_passthru_filter,
+ .level_mask = DUMP_MASK_UNUSED },
+{.name = "none", .selector = dump_filter_none,
+ .level_mask = DUMP_MASK_REST }
+};
+
+
+/* Scheme to handle data staged / preserved across a soft-boot */
+struct dump_scheme_ops dump_scheme_staged_ops = {
+ .configure = dump_generic_configure,
+ .unconfigure = dump_staged_unconfigure,
+ .sequencer = dump_generic_sequencer,
+ .iterator = dump_saved_data_iterator,
+ .save_data = dump_generic_save_data,
+ .skip_data = dump_generic_skip_data,
+ .write_buffer = dump_generic_write_buffer
+};
+
+struct dump_scheme dump_scheme_staged = {
+ .name = "staged",
+ .ops = &dump_scheme_staged_ops
+};
+
+/* The stage 2 dumper comprising all these */
+struct dumper dumper_stage2 = {
+ .name = "stage2",
+ .scheme = &dump_scheme_staged,
+ .fmt = &dump_fmt_passthru,
+ .compress = &dump_none_compression,
+ .filter = dump_passthru_filtertable,
+ .dev = NULL,
+};
+
--- /dev/null
+/*
+ * Architecture specific (ppc64) functions for Linux crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sgi.com)
+ *
+ * Copyright 1999 Silicon Graphics, Inc. All rights reserved.
+ *
+ * 2.3 kernel modifications by: Matt D. Robinson (yakker@turbolinux.com)
+ * Copyright 2000 TurboLinux, Inc. All rights reserved.
+ * Copyright 2003, 2004 IBM Corporation
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * The hooks for dumping the kernel virtual memory to disk are in this
+ * file. Any time a modification is made to the virtual memory mechanism,
+ * these routines must be changed to use the new mechanisms.
+ */
+#include <linux/types.h>
+#include <linux/fs.h>
+#include <linux/dump.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+#include <linux/syscalls.h>
+#include <linux/ioctl32.h>
+#include <asm/hardirq.h>
+#include "dump_methods.h"
+#include <linux/irq.h>
+#include <asm/machdep.h>
+#include <asm/uaccess.h>
+#include <asm/irq.h>
+#include <asm/page.h>
+#if defined(CONFIG_KDB) && !defined(CONFIG_DUMP_MODULE)
+#include <linux/kdb.h>
+#endif
+
+extern cpumask_t irq_affinity[];
+
+static cpumask_t saved_affinity[NR_IRQS];
+
+static __s32 saved_irq_count; /* saved preempt_count() flags */
+
+static int alloc_dha_stack(void)
+{
+ int i;
+ void *ptr;
+
+ if (dump_header_asm.dha_stack[0])
+ return 0;
+
+ ptr = vmalloc(THREAD_SIZE * num_online_cpus());
+ if (!ptr) {
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < num_online_cpus(); i++) {
+ dump_header_asm.dha_stack[i] =
+ (uint64_t)((unsigned long)ptr + (i * THREAD_SIZE));
+ }
+ return 0;
+}
+
+static int free_dha_stack(void)
+{
+ if (dump_header_asm.dha_stack[0]) {
+ vfree((void*)dump_header_asm.dha_stack[0]);
+ dump_header_asm.dha_stack[0] = 0;
+ }
+ return 0;
+}
+#ifdef CONFIG_SMP
+static int dump_expect_ipi[NR_CPUS];
+static atomic_t waiting_for_dump_ipi;
+
+extern void stop_this_cpu(void *);
+static int
+dump_ipi_handler(struct pt_regs *regs)
+{
+ int cpu = smp_processor_id();
+
+ if (!dump_expect_ipi[cpu])
+ return 0;
+ dump_save_this_cpu(regs);
+ atomic_dec(&waiting_for_dump_ipi);
+
+ level_changed:
+ switch (dump_silence_level) {
+ case DUMP_HARD_SPIN_CPUS: /* Spin until dump is complete */
+ while (dump_oncpu) {
+ barrier(); /* paranoia */
+ if (dump_silence_level != DUMP_HARD_SPIN_CPUS)
+ goto level_changed;
+ cpu_relax(); /* kill time nicely */
+ }
+ break;
+
+ case DUMP_HALT_CPUS: /* Execute halt */
+ stop_this_cpu(NULL);
+ break;
+
+ case DUMP_SOFT_SPIN_CPUS:
+ /* Mark the task so it spins in schedule */
+ set_tsk_thread_flag(current, TIF_NEED_RESCHED);
+ break;
+ }
+
+ return 1;
+}
+
+/*
+ * Save registers on the other processors.
+ * If the other cpus don't respond, we simply do not get their states.
+ */
+void
+__dump_save_other_cpus(void)
+{
+ int i, cpu = smp_processor_id();
+ int other_cpus = num_online_cpus()-1;
+
+ if (other_cpus > 0) {
+ atomic_set(&waiting_for_dump_ipi, other_cpus);
+ for (i = 0; i < NR_CPUS; i++)
+ dump_expect_ipi[i] = (i != cpu && cpu_online(i));
+
+ dump_send_ipi(dump_ipi_handler);
+ /*
+ * Maybe we don't need to wait for the IPI to be processed;
+ * just write out the header at the end of dumping. If the
+ * IPI is not processed by then, there probably is a problem
+ * and we just fail to capture the state of the other cpus.
+ */
+ while (atomic_read(&waiting_for_dump_ipi) > 0) {
+ cpu_relax();
+ }
+ dump_send_ipi(NULL); /* clear handler */
+ }
+}
+
+/*
+ * Restore old irq affinities.
+ */
+static void
+__dump_reset_irq_affinity(void)
+{
+ int i;
+ irq_desc_t *irq_d;
+
+ memcpy(irq_affinity, saved_affinity, NR_IRQS * sizeof(cpumask_t));
+
+ for_each_irq(i) {
+ irq_d = get_irq_desc(i);
+ if (irq_d->handler == NULL) {
+ continue;
+ }
+ if (irq_d->handler->set_affinity != NULL) {
+ irq_d->handler->set_affinity(i, saved_affinity[i]);
+ }
+ }
+}
+
+/*
+ * Routine to save the old irq affinities and change affinities of all irqs to
+ * the dumping cpu.
+ *
+ * NB: Needs to be expanded to handle multiple nodes.
+ */
+static void
+__dump_set_irq_affinity(void)
+{
+ int i;
+ cpumask_t cpu = CPU_MASK_NONE;
+ irq_desc_t *irq_d;
+
+ cpu_set(smp_processor_id(), cpu);
+
+ memcpy(saved_affinity, irq_affinity, NR_IRQS * sizeof(cpumask_t));
+
+ for_each_irq(i) {
+ irq_d = get_irq_desc(i);
+ if (irq_d->handler == NULL) {
+ continue;
+ }
+ irq_affinity[i] = cpu;
+ if (irq_d->handler->set_affinity != NULL) {
+ irq_d->handler->set_affinity(i, irq_affinity[i]);
+ }
+ }
+}
+#else /* !CONFIG_SMP */
+#define __dump_save_other_cpus() do { } while (0)
+#define __dump_set_irq_affinity() do { } while (0)
+#define __dump_reset_irq_affinity() do { } while (0)
+#endif /* !CONFIG_SMP */
+
+void
+__dump_save_regs(struct pt_regs *dest_regs, const struct pt_regs *regs)
+{
+ if (regs) {
+ memcpy(dest_regs, regs, sizeof(struct pt_regs));
+ }
+}
+
+/*
+ * Name: __dump_configure_header()
+ * Func: Configure the dump header with all proper values.
+ */
+int
+__dump_configure_header(const struct pt_regs *regs)
+{
+ return 0;
+}
+
+#if defined(CONFIG_KDB) && !defined(CONFIG_DUMP_MODULE)
+int
+kdb_sysdump(int argc, const char **argv, const char **envp, struct pt_regs *regs)
+{
+ kdb_printf("Dumping to disk...\n");
+ dump("dump from kdb", regs);
+ kdb_printf("Dump Complete\n");
+ return 0;
+}
+#endif
+
+static int dw_long(unsigned int fd, unsigned int cmd, unsigned long arg,
+ struct file *f)
+{
+ mm_segment_t old_fs = get_fs();
+ int err;
+ unsigned long val;
+
+ set_fs(KERNEL_DS);
+ err = sys_ioctl(fd, cmd, (unsigned long)&val);
+ set_fs(old_fs);
+ if (!err && put_user((unsigned int) val, (u32 *)arg))
+ return -EFAULT;
+ return err;
+}
+
+/*
+ * Name: __dump_init()
+ * Func: Initialize the dump process; registers the 32-bit ioctl
+ * translations used by the dump device.
+ */
+void
+__dump_init(uint64_t local_memory_start)
+{
+ int ret;
+
+ ret = register_ioctl32_conversion(DIOSDUMPDEV, NULL);
+ ret |= register_ioctl32_conversion(DIOGDUMPDEV, NULL);
+ ret |= register_ioctl32_conversion(DIOSDUMPLEVEL, NULL);
+ ret |= register_ioctl32_conversion(DIOGDUMPLEVEL, dw_long);
+ ret |= register_ioctl32_conversion(DIOSDUMPFLAGS, NULL);
+ ret |= register_ioctl32_conversion(DIOGDUMPFLAGS, dw_long);
+ ret |= register_ioctl32_conversion(DIOSDUMPCOMPRESS, NULL);
+ ret |= register_ioctl32_conversion(DIOGDUMPCOMPRESS, dw_long);
+ ret |= register_ioctl32_conversion(DIOSTARGETIP, NULL);
+ ret |= register_ioctl32_conversion(DIOGTARGETIP, NULL);
+ ret |= register_ioctl32_conversion(DIOSTARGETPORT, NULL);
+ ret |= register_ioctl32_conversion(DIOGTARGETPORT, NULL);
+ ret |= register_ioctl32_conversion(DIOSSOURCEPORT, NULL);
+ ret |= register_ioctl32_conversion(DIOGSOURCEPORT, NULL);
+ ret |= register_ioctl32_conversion(DIOSETHADDR, NULL);
+ ret |= register_ioctl32_conversion(DIOGETHADDR, NULL);
+ ret |= register_ioctl32_conversion(DIOGDUMPOKAY, dw_long);
+ ret |= register_ioctl32_conversion(DIOSDUMPTAKE, NULL);
+ if (ret) {
+ printk(KERN_ERR "LKCD: registering ioctl32 translations failed\n");
+ }
+
+#if defined(FIXME) && defined(CONFIG_KDB) && !defined(CONFIG_DUMP_MODULE)
+ /* This won't currently work because interrupts are off in kdb
+ * and the dump process doesn't understand how to recover.
+ */
+ /* ToDo: add a command to query/set dump configuration */
+ kdb_register_repeat("sysdump", kdb_sysdump, "", "use lkcd to dump the system to disk (if configured)", 0, KDB_REPEAT_NONE);
+#endif
+
+}
+
+/*
+ * Name: __dump_open()
+ * Func: Open the dump device (architecture specific); allocates
+ * the per-cpu stack save area.
+ */
+void
+__dump_open(void)
+{
+ alloc_dha_stack();
+}
+
+
+/*
+ * Name: __dump_cleanup()
+ * Func: Free any architecture specific data structures. This is called
+ * when the dump module is being removed.
+ */
+void
+__dump_cleanup(void)
+{
+ int ret;
+
+ ret = unregister_ioctl32_conversion(DIOSDUMPDEV);
+ ret |= unregister_ioctl32_conversion(DIOGDUMPDEV);
+ ret |= unregister_ioctl32_conversion(DIOSDUMPLEVEL);
+ ret |= unregister_ioctl32_conversion(DIOGDUMPLEVEL);
+ ret |= unregister_ioctl32_conversion(DIOSDUMPFLAGS);
+ ret |= unregister_ioctl32_conversion(DIOGDUMPFLAGS);
+ ret |= unregister_ioctl32_conversion(DIOSDUMPCOMPRESS);
+ ret |= unregister_ioctl32_conversion(DIOGDUMPCOMPRESS);
+ ret |= unregister_ioctl32_conversion(DIOSTARGETIP);
+ ret |= unregister_ioctl32_conversion(DIOGTARGETIP);
+ ret |= unregister_ioctl32_conversion(DIOSTARGETPORT);
+ ret |= unregister_ioctl32_conversion(DIOGTARGETPORT);
+ ret |= unregister_ioctl32_conversion(DIOSSOURCEPORT);
+ ret |= unregister_ioctl32_conversion(DIOGSOURCEPORT);
+ ret |= unregister_ioctl32_conversion(DIOSETHADDR);
+ ret |= unregister_ioctl32_conversion(DIOGETHADDR);
+ ret |= unregister_ioctl32_conversion(DIOGDUMPOKAY);
+ ret |= unregister_ioctl32_conversion(DIOSDUMPTAKE);
+ if (ret) {
+ printk(KERN_ERR "LKCD: Unregistering ioctl32 translations failed\n");
+ }
+ free_dha_stack();
+}
+
+/*
+ * Kludge - dump from interrupt context is unreliable (Fixme)
+ *
+ * We do this so that softirqs initiated for dump i/o
+ * get processed and we don't hang while waiting for i/o
+ * to complete or in any irq synchronization attempt.
+ *
+ * This is not quite legal of course, as it has the side
+ * effect of making all interrupts & softirqs triggered
+ * while dump is in progress complete before currently
+ * pending softirqs and the currently executing interrupt
+ * code.
+ */
+static inline void
+irq_bh_save(void)
+{
+ saved_irq_count = irq_count();
+ preempt_count() &= ~(HARDIRQ_MASK|SOFTIRQ_MASK);
+}
+
+static inline void
+irq_bh_restore(void)
+{
+ preempt_count() |= saved_irq_count;
+}
+
+/*
+ * Name: __dump_irq_enable
+ * Func: Reset system so interrupts are enabled.
+ * This is used for dump methods that require interrupts.
+ * Eventually, all methods will run with interrupts disabled
+ * and this code can be removed.
+ *
+ * Change irq affinities
+ * Re-enable interrupts
+ */
+int
+__dump_irq_enable(void)
+{
+ __dump_set_irq_affinity();
+ irq_bh_save();
+ local_irq_enable();
+ return 0;
+}
+
+/*
+ * Name: __dump_irq_restore
+ * Func: Resume the system state in an architecture-specific way.
+ */
+void
+__dump_irq_restore(void)
+{
+ local_irq_disable();
+ __dump_reset_irq_affinity();
+ irq_bh_restore();
+}
+
+#if 0
+/* Cheap progress hack: it estimates the number of pages to write and
+ * assumes all of them will be dumped, so it may get way off.
+ * Since progress is not displayed on other architectures, this is
+ * unused at the moment.
+ */
+void
+__dump_progress_add_page(void)
+{
+ unsigned long total_pages = nr_free_pages() + nr_inactive_pages + nr_active_pages;
+ unsigned int percent = (dump_header.dh_num_dump_pages * 100) / total_pages;
+ char buf[30];
+
+ if (percent > last_percent && percent <= 100) {
+ sprintf(buf, "Dump %3d%% ", percent);
+ ppc64_dump_msg(0x2, buf);
+ last_percent = percent;
+ }
+
+}
+#endif
+
+extern int dump_page_is_ram(unsigned long);
+/*
+ * Name: __dump_page_valid()
+ * Func: Check if page is valid to dump.
+ */
+int
+__dump_page_valid(unsigned long index)
+{
+ if (!pfn_valid(index))
+ return 0;
+
+ return dump_page_is_ram(index);
+}
+
+/*
+ * Name: manual_handle_crashdump()
+ * Func: Interface for the lkcd dump command. Calls dump_execute()
+ */
+int
+manual_handle_crashdump(void)
+{
+ struct pt_regs regs;
+
+ get_current_regs(&regs);
+ dump_execute("manual", &regs);
+ return 0;
+}
--- /dev/null
+/*
+ * RLE Compression functions for kernel crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sourceforge.net)
+ * Copyright 2001 Matt D. Robinson. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/* header files */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/fs.h>
+#include <linux/file.h>
+#include <linux/init.h>
+#include <linux/dump.h>
+
+/*
+ * Name: dump_compress_rle()
+ * Func: Compress a DUMP_PAGE_SIZE (hardware) page down to something more
+ * reasonable, if possible. This is the same routine we use in IRIX.
+ */
+static u16
+dump_compress_rle(const u8 *old, u16 oldsize, u8 *new, u16 newsize)
+{
+ u16 ri, wi, count = 0;
+ u_char value = 0, cur_byte;
+
+ /*
+ * If the data should happen to "compress" to larger than the
+ * original size, give up and return oldsize so the page is
+ * stored uncompressed.
+ */
+
+ wi = ri = 0;
+
+ while (ri < oldsize) {
+ if (!ri) {
+ cur_byte = value = old[ri];
+ count = 0;
+ } else {
+ if (count == 255) {
+ if (wi + 3 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = 0;
+ new[wi++] = count;
+ new[wi++] = value;
+ value = cur_byte = old[ri];
+ count = 0;
+ } else {
+ if ((cur_byte = old[ri]) == value) {
+ count++;
+ } else {
+ if (count > 1) {
+ if (wi + 3 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = 0;
+ new[wi++] = count;
+ new[wi++] = value;
+ } else if (count == 1) {
+ if (value == 0) {
+ if (wi + 3 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = 0;
+ new[wi++] = 1;
+ new[wi++] = 0;
+ } else {
+ if (wi + 2 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = value;
+ new[wi++] = value;
+ }
+ } else { /* count == 0 */
+ if (value == 0) {
+ if (wi + 2 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = value;
+ new[wi++] = value;
+ } else {
+ if (wi + 1 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = value;
+ }
+ } /* if count > 1 */
+
+ value = cur_byte;
+ count = 0;
+
+ } /* if byte == value */
+
+ } /* if count == 255 */
+
+ } /* if ri == 0 */
+ ri++;
+
+ }
+ if (count > 1) {
+ if (wi + 3 > oldsize) {
+ return oldsize;
+ }
+ new[wi++] = 0;
+ new[wi++] = count;
+ new[wi++] = value;
+ } else if (count == 1) {
+ if (value == 0) {
+ if (wi + 3 > oldsize)
+ return oldsize;
+ new[wi++] = 0;
+ new[wi++] = 1;
+ new[wi++] = 0;
+ } else {
+ if (wi + 2 > oldsize)
+ return oldsize;
+ new[wi++] = value;
+ new[wi++] = value;
+ }
+ } else { /* count == 0 */
+ if (value == 0) {
+ if (wi + 2 > oldsize)
+ return oldsize;
+ new[wi++] = value;
+ new[wi++] = value;
+ } else {
+ if (wi + 1 > oldsize)
+ return oldsize;
+ new[wi++] = value;
+ }
+ } /* if count > 1 */
+
+ return wi;
+}
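For reference, the encoder above emits three token shapes: a bare non-zero byte for a single literal, a doubled byte (`v,v`) for a pair, and an escaped run `0,count,value` for longer runs, with a single zero encoded as `0,0` and a pair of zeros as `0,1,0`. The matching decoder is not part of this patch (lcrash does it in user space); the following is a minimal user-space sketch of one, with `rle_decompress` a hypothetical name:

```c
#include <stddef.h>

/* Decode one page's worth of LKCD-style RLE data.
 * Token shapes produced by dump_compress_rle():
 *   v            - single literal byte, v != 0
 *   0x00, 0x00   - a single zero byte
 *   0x00, c, v   - (c + 1) copies of v, c >= 1
 * Returns the number of bytes written to 'out', or 0 if 'out'
 * is too small or the input is truncated.
 */
static size_t rle_decompress(const unsigned char *in, size_t insize,
                             unsigned char *out, size_t outsize)
{
    size_t ri = 0, wi = 0;

    while (ri < insize) {
        unsigned char b = in[ri++];

        if (b != 0) {                   /* bare non-zero literal */
            if (wi >= outsize)
                return 0;
            out[wi++] = b;
            continue;
        }
        if (ri >= insize)
            return 0;                   /* truncated escape */
        unsigned char count = in[ri++];
        if (count == 0) {               /* "0,0" encodes one zero byte */
            if (wi >= outsize)
                return 0;
            out[wi++] = 0;
            continue;
        }
        if (ri >= insize)
            return 0;                   /* truncated run descriptor */
        unsigned char value = in[ri++];
        if (wi + count + 1 > outsize)
            return 0;
        for (unsigned int k = 0; k <= count; k++)
            out[wi++] = value;          /* count + 1 copies of value */
    }
    return wi;
}
```

Note that a doubled pair `v,v` simply decodes as two literals, so it needs no special case here.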
+
+/* setup the rle compression functionality */
+static struct __dump_compress dump_rle_compression = {
+ .compress_type = DUMP_COMPRESS_RLE,
+ .compress_func = dump_compress_rle,
+ .compress_name = "RLE",
+};
+
+/*
+ * Name: dump_compress_rle_init()
+ * Func: Initialize rle compression for dumping.
+ */
+static int __init
+dump_compress_rle_init(void)
+{
+ dump_register_compression(&dump_rle_compression);
+ return 0;
+}
+
+/*
+ * Name: dump_compress_rle_cleanup()
+ * Func: Remove rle compression for dumping.
+ */
+static void __exit
+dump_compress_rle_cleanup(void)
+{
+ dump_unregister_compression(DUMP_COMPRESS_RLE);
+}
+
+/* module initialization */
+module_init(dump_compress_rle_init);
+module_exit(dump_compress_rle_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("LKCD Development Team <lkcd-devel@lists.sourceforge.net>");
+MODULE_DESCRIPTION("RLE compression module for crash dump driver");
--- /dev/null
+/*
+ * Default single stage dump scheme methods
+ *
+ * Previously a part of dump_base.c
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya <suparna@in.ibm.com>
+ * Split and rewrote LKCD dump scheme to generic dump method
+ * interfaces
+ * Derived from original code created by
+ * Matt Robinson <yakker@sourceforge.net>)
+ *
+ * Contributions from SGI, IBM, HP, MCL, and others.
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * Implements the default dump scheme, i.e. single-stage gathering and
+ * saving of dump data directly to the target device, which operates in
+ * a push mode, where the dumping system decides what data it saves
+ * taking into account pre-specified dump config options.
+ *
+ * Aside: The 2-stage dump scheme, where there is a soft-reset between
+ * the gathering and saving phases, also reuses some of these
+ * default routines (see dump_overlay.c)
+ */
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/nmi.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+
+extern int panic_timeout; /* time before reboot */
+
+extern void dump_speedo(int);
+
+/* Default sequencer used during single stage dumping */
+/* Also invoked during stage 2 of soft-boot based dumping */
+int dump_generic_sequencer(void)
+{
+ struct dump_data_filter *filter = dump_config.dumper->filter;
+ int pass = 0, err = 0, save = 0;
+ int (*action)(unsigned long, unsigned long);
+
+ /*
+ * We want to save the more critical data areas first in
+ * case we run out of space, encounter i/o failures, or get
+ * interrupted otherwise and have to give up midway.
+ * So, run through the passes in increasing order.
+ */
+ for (;filter->selector; filter++, pass++)
+ {
+ /* Assumes passes are exclusive (even across dumpers) */
+ /* Requires care when coding the selection functions */
+ if ((save = filter->level_mask & dump_config.level))
+ action = dump_save_data;
+ else
+ action = dump_skip_data;
+
+ if ((err = dump_iterator(pass, action, filter)) < 0)
+ break;
+
+ printk("\n %d dump pages %s of %d each in pass %d\n",
+ err, save ? "saved" : "skipped", DUMP_PAGE_SIZE, pass);
+
+ }
+
+ return (err < 0) ? err : 0;
+}
+
+static inline struct page *dump_get_page(loff_t loc)
+{
+
+ unsigned long page_index = loc >> PAGE_SHIFT;
+
+ /* todo: complete this to account for ia64/discontig mem */
+ /* todo: and to check for validity, ram page, no i/o mem etc */
+ /* need to use pfn/physaddr equiv of kern_addr_valid */
+
+ /* Important:
+ * On ARM/XScale systems, physical addresses start at
+ * PHYS_OFFSET, which may be non-zero. For example, on Intel's
+ * PXA250, PHYS_OFFSET = 0xa0000000, and page indices start at
+ * PHYS_PFN_OFFSET. Since dump_generic_configure() assigns
+ * filter->start from 0, adjust the index here by adding
+ * PHYS_PFN_OFFSET.
+ */
+#ifdef CONFIG_ARM
+ page_index += PHYS_PFN_OFFSET;
+#endif
+ if (__dump_page_valid(page_index))
+ return pfn_to_page(page_index);
+ else
+ return NULL;
+
+}
+
+/* Default iterator: for singlestage and stage 1 of soft-boot dumping */
+/* Iterates over range of physical memory pages in DUMP_PAGE_SIZE increments */
+int dump_page_iterator(int pass, int (*action)(unsigned long, unsigned long),
+ struct dump_data_filter *filter)
+{
+ /* Todo : fix unit, type */
+ loff_t loc, start, end;
+ int i, count = 0, err = 0;
+ struct page *page;
+
+ /* Todo: Add membanks code */
+ /* TBD: Check if we need to address DUMP_PAGE_SIZE < PAGE_SIZE */
+
+ for (i = 0; i < filter->num_mbanks; i++) {
+ start = filter->start[i];
+ end = filter->end[i];
+ for (loc = start; loc < end; loc += DUMP_PAGE_SIZE) {
+ dump_config.dumper->curr_loc = loc;
+ page = dump_get_page(loc);
+ if (page && filter->selector(pass,
+ (unsigned long) page, DUMP_PAGE_SIZE)) {
+ if ((err = action((unsigned long)page,
+ DUMP_PAGE_SIZE))) {
+ printk("dump_page_iterator: err %d for "
+ "loc 0x%llx, in pass %d\n",
+ err, loc, pass);
+ return err ? err : count;
+ } else
+ count++;
+ }
+ }
+ }
+
+ return err ? err : count;
+}
+
+/*
+ * Base function that saves the selected block of data in the dump
+ * Action taken when iterator decides that data needs to be saved
+ */
+int dump_generic_save_data(unsigned long loc, unsigned long sz)
+{
+ void *buf;
+ void *dump_buf = dump_config.dumper->dump_buf;
+ int left, bytes, ret;
+
+ if ((ret = dump_add_data(loc, sz))) {
+ return ret;
+ }
+ buf = dump_config.dumper->curr_buf;
+
+ /* If we've filled up the buffer write it out */
+ if ((left = buf - dump_buf) >= DUMP_BUFFER_SIZE) {
+ bytes = dump_write_buffer(dump_buf, DUMP_BUFFER_SIZE);
+ if (bytes < DUMP_BUFFER_SIZE) {
+ printk("dump_write_buffer failed %d\n", bytes);
+ return bytes ? -ENOSPC : bytes;
+ }
+
+ left -= bytes;
+
+ /* -- A few chores to do from time to time -- */
+ dump_config.dumper->count++;
+
+ if (!(dump_config.dumper->count & 0x3f)) {
+ /* Update the header every once in a while */
+ memset((void *)dump_buf, 'b', DUMP_BUFFER_SIZE);
+ if ((ret = dump_update_header()) < 0) {
+ /* issue warning */
+ return ret;
+ }
+ printk(".");
+
+ touch_nmi_watchdog();
+ } else if (!(dump_config.dumper->count & 0x7)) {
+ /* Show progress so the user knows we aren't hung */
+ dump_speedo(dump_config.dumper->count >> 3);
+ }
+ /* Todo: Touch/Refresh watchdog */
+
+ /* --- Done with periodic chores -- */
+
+ /*
+ * extra bit of copying to simplify verification
+ * in the second kernel boot based scheme
+ */
+ memcpy(dump_buf - DUMP_PAGE_SIZE, dump_buf +
+ DUMP_BUFFER_SIZE - DUMP_PAGE_SIZE, DUMP_PAGE_SIZE);
+
+ /* now adjust the leftover bits back to the top of the page */
+ /* this case would not arise during stage 2 (passthru) */
+ memset(dump_buf, 'z', DUMP_BUFFER_SIZE);
+ if (left) {
+ memcpy(dump_buf, dump_buf + DUMP_BUFFER_SIZE, left);
+ }
+ buf -= DUMP_BUFFER_SIZE;
+ dump_config.dumper->curr_buf = buf;
+ }
+
+ return 0;
+}
+
+int dump_generic_skip_data(unsigned long loc, unsigned long sz)
+{
+ /* dummy by default */
+ return 0;
+}
+
+/*
+ * Common low-level routine to write a buffer to the current dump device.
+ * Expects space checks etc. to have been taken care of by the caller.
+ * Operates serially at the moment, for simplicity.
+ * TBD/Todo: Consider batching for improved throughput
+ */
+int dump_ll_write(void *buf, unsigned long len)
+{
+ long transferred = 0, last_transfer = 0;
+ int ret = 0;
+
+ /* make sure device is ready */
+ while ((ret = dump_dev_ready(NULL)) == -EAGAIN);
+ if (ret < 0) {
+ printk("dump_dev_ready failed! err %d\n", ret);
+ return ret;
+ }
+
+ while (len) {
+ if ((last_transfer = dump_dev_write(buf, len)) <= 0) {
+ ret = last_transfer;
+ printk("dump_dev_write failed! err %d\n", ret);
+ break;
+ }
+ /* wait till complete */
+ while ((ret = dump_dev_ready(buf)) == -EAGAIN)
+ cpu_relax();
+
+ if (ret < 0) {
+ printk("i/o failed! err %d\n", ret);
+ break;
+ }
+
+ len -= last_transfer;
+ buf += last_transfer;
+ transferred += last_transfer;
+ }
+ return (ret < 0) ? ret : transferred;
+}
+
+/* default writeout routine for single dump device */
+/* writes out the dump data ensuring enough space is left for the end marker */
+int dump_generic_write_buffer(void *buf, unsigned long len)
+{
+ long written = 0;
+ int err = 0;
+
+ /* check for space */
+ if ((err = dump_dev_seek(dump_config.dumper->curr_offset + len +
+ 2*DUMP_BUFFER_SIZE)) < 0) {
+ printk("dump_write_buffer: insuff space after offset 0x%llx\n",
+ dump_config.dumper->curr_offset);
+ return err;
+ }
+ /* alignment check would happen as a side effect of this */
+ if ((err = dump_dev_seek(dump_config.dumper->curr_offset)) < 0)
+ return err;
+
+ written = dump_ll_write(buf, len);
+
+ /* all or none */
+
+ if (written < len)
+ written = written ? -ENOSPC : written;
+ else
+ dump_config.dumper->curr_offset += len;
+
+ return written;
+}
+
+int dump_generic_configure(unsigned long devid)
+{
+ struct dump_dev *dev = dump_config.dumper->dev;
+ struct dump_data_filter *filter;
+ void *buf;
+ int ret = 0;
+
+ /* Allocate the dump buffer and initialize dumper state */
+ /* Assume that we get aligned addresses */
+ if (!(buf = dump_alloc_mem(DUMP_BUFFER_SIZE + 3 * DUMP_PAGE_SIZE)))
+ return -ENOMEM;
+
+ if ((unsigned long)buf & (PAGE_SIZE - 1)) {
+ /* sanity check for page aligned address */
+ dump_free_mem(buf);
+ return -ENOMEM; /* fixme: better error code */
+ }
+
+ /* Initialize the rest of the fields */
+ dump_config.dumper->dump_buf = buf + DUMP_PAGE_SIZE;
+ dumper_reset();
+
+ /* Open the dump device */
+ if (!dev)
+ return -ENODEV;
+
+ if ((ret = dev->ops->open(dev, devid))) {
+ return ret;
+ }
+
+ /* Initialise the memory ranges in the dump filter */
+ for (filter = dump_config.dumper->filter ;filter->selector; filter++) {
+ if (!filter->start[0] && !filter->end[0]) {
+ pg_data_t *pgdat;
+ int i = 0;
+ for_each_pgdat(pgdat) {
+ filter->start[i] =
+ (loff_t)pgdat->node_start_pfn << PAGE_SHIFT;
+ filter->end[i] =
+ (loff_t)(pgdat->node_start_pfn + pgdat->node_spanned_pages) << PAGE_SHIFT;
+ i++;
+ }
+ filter->num_mbanks = i;
+ }
+ }
+
+ return 0;
+}
+
+int dump_generic_unconfigure(void)
+{
+ struct dump_dev *dev = dump_config.dumper->dev;
+ void *buf = dump_config.dumper->dump_buf;
+ int ret = 0;
+
+ pr_debug("Generic unconfigure\n");
+ /* Close the dump device */
+ if (dev && (ret = dev->ops->release(dev)))
+ return ret;
+
+ printk("Closed dump device\n");
+
+ if (buf)
+ dump_free_mem((buf - DUMP_PAGE_SIZE));
+
+ dump_config.dumper->curr_buf = dump_config.dumper->dump_buf = NULL;
+ pr_debug("Released dump buffer\n");
+
+ return 0;
+}
+
+
+/* Set up the default dump scheme */
+
+struct dump_scheme_ops dump_scheme_singlestage_ops = {
+ .configure = dump_generic_configure,
+ .unconfigure = dump_generic_unconfigure,
+ .sequencer = dump_generic_sequencer,
+ .iterator = dump_page_iterator,
+ .save_data = dump_generic_save_data,
+ .skip_data = dump_generic_skip_data,
+ .write_buffer = dump_generic_write_buffer,
+};
+
+struct dump_scheme dump_scheme_singlestage = {
+ .name = "single-stage",
+ .ops = &dump_scheme_singlestage_ops
+};
+
+/* The single stage dumper comprising all these */
+struct dumper dumper_singlestage = {
+ .name = "single-stage",
+ .scheme = &dump_scheme_singlestage,
+ .fmt = &dump_fmt_lcrash,
+ .compress = &dump_none_compression,
+ .filter = dump_filter_table,
+ .dev = NULL,
+};
+
--- /dev/null
+/*
+ * Standard kernel function entry points for Linux crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sourceforge.net)
+ * Contributions from SGI, IBM, HP, MCL, and others.
+ *
+ * Copyright (C) 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2000 - 2002 TurboLinux, Inc. All rights reserved.
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 Free Software Foundation, Inc. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/*
+ * -----------------------------------------------------------------------
+ *
+ * DUMP HISTORY
+ *
+ * This dump code goes back to SGI's first attempts at dumping system
+ * memory on SGI systems running IRIX. A few developers at SGI needed
+ * a way to take this system dump and analyze it, and created 'icrash',
+ * or IRIX Crash. The mechanism (the dumps and 'icrash') were used
+ * by support people to generate crash reports when a system failure
+ * occurred. This was vital for large system configurations that
+ * couldn't apply patch after patch after fix just to hope that the
+ * problems would go away. So the system memory, along with the crash
+ * dump analyzer, allowed support people to quickly figure out what the
+ * problem was on the system with the crash dump.
+ *
+ * In comes Linux. SGI started moving towards the open source community,
+ * and upon doing so, SGI wanted to take its support utilities into Linux
+ * with the hopes that they would end up in the kernel and user space to
+ * be used by SGI's customers buying SGI Linux systems. One of the first
+ * few products to be open sourced by SGI was LKCD, or Linux Kernel Crash
+ * Dumps. LKCD consists of a patch to the kernel to enable system
+ * dumping, along with 'lcrash', or Linux Crash, to analyze the system
+ * memory dump. A few additional system scripts and kernel modifications
+ * are also included to make the dump mechanism and dump data easier to
+ * process and use.
+ *
+ * As soon as LKCD was released into the open source community, a number
+ * of larger companies started to take advantage of it. Today, there are
+ * many community members that contribute to LKCD, and it continues to
+ * flourish and grow as an open source project.
+ */
+
+/*
+ * DUMP TUNABLES
+ *
+ * This is the list of system tunables (via /proc) that are available
+ * for Linux systems. All the read, write, etc., functions are listed
+ * here. Currently, there are a few different tunables for dumps:
+ *
+ * dump_device (used to be dumpdev):
+ * The device for dumping the memory pages out to. This
+ * may be set to the primary swap partition for disruptive dumps,
+ * and must be an unused partition for non-disruptive dumps.
+ * Todo: In the case of network dumps, this may be interpreted
+ * as the IP address of the netdump server to connect to.
+ *
+ * dump_compress (used to be dump_compress_pages):
+ * This is the flag which indicates which compression mechanism
+ * to use. This is a BITMASK, not an index (0,1,2,4,8,16,etc.).
+ * This is the current set of values:
+ *
+ * 0: DUMP_COMPRESS_NONE -- Don't compress any pages.
+ * 1: DUMP_COMPRESS_RLE -- This uses RLE compression.
+ * 2: DUMP_COMPRESS_GZIP -- This uses GZIP compression.
+ *
+ * dump_level:
+ * The amount of effort the dump module should make to save
+ * information for post crash analysis. This value is now
+ * a BITMASK value, not an index:
+ *
+ * 0: Do nothing, no dumping. (DUMP_LEVEL_NONE)
+ *
+ * 1: Print out the dump information to the dump header, and
+ * write it out to the dump_device. (DUMP_LEVEL_HEADER)
+ *
+ * 2: Write out the dump header and all kernel memory pages.
+ * (DUMP_LEVEL_KERN)
+ *
+ * 4: Write out the dump header and all kernel and user
+ * memory pages. (DUMP_LEVEL_USED)
+ *
+ * 8: Write out the dump header and all conventional/cached
+ * memory (RAM) pages in the system (kernel, user, free).
+ * (DUMP_LEVEL_ALL_RAM)
+ *
+ * 16: Write out everything, including non-conventional memory
+ * like firmware, proms, I/O registers, uncached memory.
+ * (DUMP_LEVEL_ALL)
+ *
+ * The dump_level will default to 1.
+ *
+ * dump_flags:
+ * These are the flags to use when talking about dumps. There
+ * are lots of possibilities. This is a BITMASK value, not an index.
+ *
+ * -----------------------------------------------------------------------
+ */
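The dump_level values above are cumulative: each level implies everything the lower levels save, and the DIOSDUMPLEVEL ioctl handler turns the index-style value into a bitmask with a fallthrough switch. A user-space sketch of the same mapping follows; the DUMP_LEVEL_* indices are taken from the list above, while the DUMP_MASK_* bit values are assumed here purely for illustration:

```c
/* Index-style levels, as documented above */
enum {
    DUMP_LEVEL_NONE    = 0,
    DUMP_LEVEL_HEADER  = 1,
    DUMP_LEVEL_KERN    = 2,
    DUMP_LEVEL_USED    = 4,
    DUMP_LEVEL_ALL_RAM = 8,
    DUMP_LEVEL_ALL     = 16,
};

/* Assumed mask bits -- one per class of pages saved */
enum {
    DUMP_MASK_HEADER = 1,
    DUMP_MASK_KERN   = 2,
    DUMP_MASK_USED   = 4,
    DUMP_MASK_UNUSED = 8,
};

/* Mirror of the fallthrough switch in dump_ioctl(): each level
 * accumulates the masks of all lower levels. Returns -1 for a
 * value that is not one of the defined levels. */
static int dump_level_to_mask(int level)
{
    int mask = 0;

    switch (level) {
    case DUMP_LEVEL_ALL:
    case DUMP_LEVEL_ALL_RAM:
        mask |= DUMP_MASK_UNUSED;
        /* fall through */
    case DUMP_LEVEL_USED:
        mask |= DUMP_MASK_USED;
        /* fall through */
    case DUMP_LEVEL_KERN:
        mask |= DUMP_MASK_KERN;
        /* fall through */
    case DUMP_LEVEL_HEADER:
        mask |= DUMP_MASK_HEADER;
        /* fall through */
    case DUMP_LEVEL_NONE:
        break;
    default:
        return -1;
    }
    return mask;
}
```

So, for instance, DUMP_LEVEL_ALL_RAM ends up selecting header, kernel, user, and free pages together.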
+
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/fs.h>
+#include <linux/dump.h>
+#include "dump_methods.h"
+#include <linux/proc_fs.h>
+#include <linux/module.h>
+#include <linux/utsname.h>
+#include <linux/highmem.h>
+#include <linux/miscdevice.h>
+#include <linux/sysrq.h>
+#include <linux/sysctl.h>
+#include <linux/nmi.h>
+#include <linux/init.h>
+
+#include <asm/hardirq.h>
+#include <asm/uaccess.h>
+
+/*
+ * -----------------------------------------------------------------------
+ * V A R I A B L E S
+ * -----------------------------------------------------------------------
+ */
+
+/* Dump tunables */
+struct dump_config dump_config = {
+ .level = 0,
+ .flags = 0,
+ .dump_device = 0,
+ .dump_addr = 0,
+ .dumper = NULL
+};
+#ifdef CONFIG_ARM
+static _dump_regs_t all_regs;
+#endif
+
+/* Global variables used in dump.h */
+/* degree of system freeze when dumping */
+enum dump_silence_levels dump_silence_level = DUMP_HARD_SPIN_CPUS;
+
+/* Other global fields */
+extern struct __dump_header dump_header;
+struct dump_dev *dump_dev = NULL; /* Active dump device */
+static int dump_compress = 0;
+
+static u16 dump_compress_none(const u8 *old, u16 oldsize, u8 *new, u16 newsize);
+struct __dump_compress dump_none_compression = {
+ .compress_type = DUMP_COMPRESS_NONE,
+ .compress_func = dump_compress_none,
+ .compress_name = "none",
+};
+
+/* our device operations and functions */
+static int dump_ioctl(struct inode *i, struct file *f,
+ unsigned int cmd, unsigned long arg);
+
+static struct file_operations dump_fops = {
+ .owner = THIS_MODULE,
+ .ioctl = dump_ioctl,
+};
+
+static struct miscdevice dump_miscdev = {
+ .minor = CRASH_DUMP_MINOR,
+ .name = "dump",
+ .fops = &dump_fops,
+};
+MODULE_ALIAS_MISCDEV(CRASH_DUMP_MINOR);
+
+/* static variables */
+static int dump_okay = 0; /* can we dump out to disk? */
+static spinlock_t dump_lock = SPIN_LOCK_UNLOCKED;
+
+/* used for dump compressors */
+static struct list_head dump_compress_list = LIST_HEAD_INIT(dump_compress_list);
+
+/* list of registered dump targets */
+static struct list_head dump_target_list = LIST_HEAD_INIT(dump_target_list);
+
+/* lkcd info structure -- this is used by lcrash for basic system data */
+struct __lkcdinfo lkcdinfo = {
+ .ptrsz = (sizeof(void *) * 8),
+#if defined(__LITTLE_ENDIAN)
+ .byte_order = __LITTLE_ENDIAN,
+#else
+ .byte_order = __BIG_ENDIAN,
+#endif
+ .page_shift = PAGE_SHIFT,
+ .page_size = PAGE_SIZE,
+ .page_mask = PAGE_MASK,
+ .page_offset = PAGE_OFFSET,
+};
+
+/*
+ * -----------------------------------------------------------------------
+ * / P R O C T U N A B L E F U N C T I O N S
+ * -----------------------------------------------------------------------
+ */
+
+static int proc_dump_device(ctl_table *ctl, int write, struct file *f,
+ void *buffer, size_t *lenp);
+
+static int proc_doulonghex(ctl_table *ctl, int write, struct file *f,
+ void *buffer, size_t *lenp);
+/*
+ * sysctl-tuning infrastructure.
+ */
+static ctl_table dump_table[] = {
+ { .ctl_name = CTL_DUMP_LEVEL,
+ .procname = DUMP_LEVEL_NAME,
+ .data = &dump_config.level,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_doulonghex, },
+
+ { .ctl_name = CTL_DUMP_FLAGS,
+ .procname = DUMP_FLAGS_NAME,
+ .data = &dump_config.flags,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_doulonghex, },
+
+ { .ctl_name = CTL_DUMP_COMPRESS,
+ .procname = DUMP_COMPRESS_NAME,
+ .data = &dump_compress, /* FIXME */
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = proc_dointvec, },
+
+ { .ctl_name = CTL_DUMP_DEVICE,
+ .procname = DUMP_DEVICE_NAME,
+ .mode = 0644,
+ .data = &dump_config.dump_device, /* FIXME */
+ .maxlen = sizeof(int),
+ .proc_handler = proc_dump_device },
+
+#ifdef CONFIG_CRASH_DUMP_MEMDEV
+ { .ctl_name = CTL_DUMP_ADDR,
+ .procname = DUMP_ADDR_NAME,
+ .mode = 0444,
+ .data = &dump_config.dump_addr,
+ .maxlen = sizeof(unsigned long),
+ .proc_handler = proc_doulonghex },
+#endif
+
+ { 0, }
+};
+
+static ctl_table dump_root[] = {
+ { .ctl_name = KERN_DUMP,
+ .procname = "dump",
+ .mode = 0555,
+ .child = dump_table },
+ { 0, }
+};
+
+static ctl_table kernel_root[] = {
+ { .ctl_name = CTL_KERN,
+ .procname = "kernel",
+ .mode = 0555,
+ .child = dump_root, },
+ { 0, }
+};
+
+static struct ctl_table_header *sysctl_header;
+
+/*
+ * -----------------------------------------------------------------------
+ * C O M P R E S S I O N F U N C T I O N S
+ * -----------------------------------------------------------------------
+ */
+
+/*
+ * Name: dump_compress_none()
+ * Func: Don't do any compression, period.
+ */
+static u16
+dump_compress_none(const u8 *old, u16 oldsize, u8 *new, u16 newsize)
+{
+ /* just return the old size */
+ return oldsize;
+}
+
+
+/*
+ * Name: dump_execute()
+ * Func: Execute the dumping process. This makes sure all the appropriate
+ * fields are updated correctly, and calls dump_execute_memdump(),
+ * which does the real work.
+ */
+void
+dump_execute(const char *panic_str, const struct pt_regs *regs)
+{
+ int state = -1;
+ unsigned long flags;
+
+ /* make sure we can dump */
+ if (!dump_okay) {
+ pr_info("LKCD not yet configured, can't take dump now\n");
+ return;
+ }
+
+ /* Exclude multiple dumps at the same time and disable
+ * interrupts; some drivers may re-enable interrupts
+ * from within silence().
+ *
+ * Try to acquire the spinlock. If successful, leave preemption
+ * and interrupts disabled. See spin_lock_irqsave in spinlock.h.
+ */
+ local_irq_save(flags);
+ if (!spin_trylock(&dump_lock)) {
+ local_irq_restore(flags);
+ pr_info("LKCD dump already in progress\n");
+ return;
+ }
+
+ /* Bring the system into the strictest level of quiescing for
+ * minimal drift; dump drivers can soften this as required in
+ * dev->ops->silence().
+ */
+ dump_oncpu = smp_processor_id() + 1;
+ dump_silence_level = DUMP_HARD_SPIN_CPUS;
+
+ state = dump_generic_execute(panic_str, regs);
+
+ dump_oncpu = 0;
+ spin_unlock_irqrestore(&dump_lock, flags);
+
+ if (state < 0) {
+ printk("Dump Incomplete or failed!\n");
+ } else {
+ printk("Dump Complete; %d dump pages saved.\n",
+ dump_header.dh_num_dump_pages);
+ }
+}
+
+/*
+ * Name: dump_register_compression()
+ * Func: Register a dump compression mechanism.
+ */
+void
+dump_register_compression(struct __dump_compress *item)
+{
+ if (item)
+ list_add(&(item->list), &dump_compress_list);
+}
+
+/*
+ * Name: dump_unregister_compression()
+ * Func: Remove a dump compression mechanism, and re-assign the dump
+ * compression pointer if necessary.
+ */
+void
+dump_unregister_compression(int compression_type)
+{
+ struct list_head *tmp;
+ struct __dump_compress *dc;
+
+ /* let's make sure our list is valid */
+ if (compression_type != DUMP_COMPRESS_NONE) {
+ list_for_each(tmp, &dump_compress_list) {
+ dc = list_entry(tmp, struct __dump_compress, list);
+ if (dc->compress_type == compression_type) {
+ list_del(&(dc->list));
+ break;
+ }
+ }
+ }
+}
+
+/*
+ * Name: dump_compress_init()
+ * Func: Initialize (or re-initialize) compression scheme.
+ */
+static int
+dump_compress_init(int compression_type)
+{
+ struct list_head *tmp;
+ struct __dump_compress *dc;
+
+ /* look up the requested compression type in the list */
+ list_for_each(tmp, &dump_compress_list) {
+ dc = list_entry(tmp, struct __dump_compress, list);
+ if (dc->compress_type == compression_type) {
+ dump_config.dumper->compress = dc;
+ dump_compress = compression_type;
+ pr_debug("Dump Compress %s\n", dc->compress_name);
+ return 0;
+ }
+ }
+
+ /*
+ * nothing on the list -- return ENODATA to indicate an error
+ *
+ * NB:
+ * EAGAIN: reports "Resource temporarily unavailable" which
+ * isn't very enlightening.
+ */
+ printk("compression_type:%d not found\n", compression_type);
+
+ return -ENODATA;
+}
+
+static int
+dumper_setup(unsigned long flags, unsigned long devid)
+{
+ int ret = 0;
+
+ /* unconfigure old dumper if it exists */
+ dump_okay = 0;
+ if (dump_config.dumper) {
+ pr_debug("Unconfiguring current dumper\n");
+ dump_unconfigure();
+ }
+ /* set up new dumper */
+ if (dump_config.flags & DUMP_FLAGS_SOFTBOOT) {
+ printk("Configuring softboot-based dump\n");
+#ifdef CONFIG_CRASH_DUMP_MEMDEV
+ dump_config.dumper = &dumper_stage1;
+#else
+ printk("Requires CONFIG_CRASH_DUMP_MEMDEV. Can't proceed.\n");
+ return -1;
+#endif
+ } else {
+ dump_config.dumper = &dumper_singlestage;
+ }
+ dump_config.dumper->dev = dump_dev;
+
+ ret = dump_configure(devid);
+ if (!ret) {
+ dump_okay = 1;
+ pr_debug("%s dumper set up for dev 0x%lx\n",
+ dump_config.dumper->name, devid);
+ dump_config.dump_device = devid;
+ } else {
+ printk("%s dumper set up failed for dev 0x%lx\n",
+ dump_config.dumper->name, devid);
+ dump_config.dumper = NULL;
+ }
+ return ret;
+}
+
+static int
+dump_target_init(int target)
+{
+ char type[20];
+ struct list_head *tmp;
+ struct dump_dev *dev;
+
+ switch (target) {
+ case DUMP_FLAGS_DISKDUMP:
+ strcpy(type, "blockdev"); break;
+ case DUMP_FLAGS_NETDUMP:
+ strcpy(type, "networkdev"); break;
+ default:
+ return -1;
+ }
+
+ /*
+ * This is a bit clumsy: we generate a string from the flag
+ * and compare with strcmp because 'struct dump_dev' carries
+ * a string 'type_name' rather than an integer 'type'.
+ */
+ list_for_each(tmp, &dump_target_list) {
+ dev = list_entry(tmp, struct dump_dev, list);
+ if (strcmp(type, dev->type_name) == 0) {
+ dump_dev = dev;
+ return 0;
+ }
+ }
+ return -1;
+}
+
+/*
+ * Name: dump_ioctl()
+ * Func: Allow all dump tunables through a standard ioctl() mechanism.
+ * This is far better than before, where we'd go through /proc,
+ * because now this will work for multiple OS and architectures.
+ */
+static int
+dump_ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg)
+{
+ /* check capabilities */
+ if (!capable(CAP_SYS_ADMIN))
+ return -EPERM;
+
+ if (!dump_config.dumper && cmd == DIOSDUMPCOMPRESS)
+ /* dump device must be configured first */
+ return -ENODEV;
+
+ /*
+ * This is the main mechanism for controlling get/set data
+ * for various dump device parameters. The real trick here
+ * is setting the dump device (DIOSDUMPDEV). That's what
+ * triggers everything else.
+ */
+ switch (cmd) {
+ case DIOSDUMPDEV: /* set dump_device */
+ pr_debug("Configuring dump device\n");
+ if (!(f->f_flags & O_RDWR))
+ return -EPERM;
+
+ __dump_open();
+ return dumper_setup(dump_config.flags, arg);
+
+
+ case DIOGDUMPDEV: /* get dump_device */
+ return put_user((long)dump_config.dump_device, (long *)arg);
+
+ case DIOSDUMPLEVEL: /* set dump_level */
+ if (!(f->f_flags & O_RDWR))
+ return -EPERM;
+
+ /* make sure we have a non-negative value; arg is unsigned,
+ * so cast before the comparison */
+ if ((long)arg < 0)
+ return -EINVAL;
+
+ /* Fixme: clean this up */
+ dump_config.level = 0;
+ switch ((int)arg) {
+ case DUMP_LEVEL_ALL:
+ case DUMP_LEVEL_ALL_RAM:
+ dump_config.level |= DUMP_MASK_UNUSED;
+ /* fall through */
+ case DUMP_LEVEL_USED:
+ dump_config.level |= DUMP_MASK_USED;
+ /* fall through */
+ case DUMP_LEVEL_KERN:
+ dump_config.level |= DUMP_MASK_KERN;
+ /* fall through */
+ case DUMP_LEVEL_HEADER:
+ dump_config.level |= DUMP_MASK_HEADER;
+ /* fall through */
+ case DUMP_LEVEL_NONE:
+ break;
+ default:
+ return (-EINVAL);
+ }
+ pr_debug("Dump Level 0x%lx\n", dump_config.level);
+ break;
+
+ case DIOGDUMPLEVEL: /* get dump_level */
+ /* fixme: handle conversion */
+ return put_user((long)dump_config.level, (long *)arg);
+
+
+ case DIOSDUMPFLAGS: /* set dump_flags */
+ /* check flags */
+ if (!(f->f_flags & O_RDWR))
+ return -EPERM;
+
+ /* reject negative values (arg is unsigned, so check the sign bit) */
+ if ((long)arg < 0)
+ return -EINVAL;
+
+ if (dump_target_init(arg & DUMP_FLAGS_TARGETMASK) < 0)
+ return -EINVAL; /* return proper error */
+
+ dump_config.flags = arg;
+
+ pr_debug("Dump Flags 0x%lx\n", dump_config.flags);
+ break;
+
+ case DIOGDUMPFLAGS: /* get dump_flags */
+ return put_user((long)dump_config.flags, (long *)arg);
+
+ case DIOSDUMPCOMPRESS: /* set the dump_compress status */
+ if (!(f->f_flags & O_RDWR))
+ return -EPERM;
+
+ return dump_compress_init((int)arg);
+
+ case DIOGDUMPCOMPRESS: /* get the dump_compress status */
+ return put_user((long)(dump_config.dumper ?
+ dump_config.dumper->compress->compress_type : 0),
+ (long *)arg);
+ case DIOGDUMPOKAY: /* check if dump is configured */
+ return put_user((long)dump_okay, (long *)arg);
+
+ case DIOSDUMPTAKE: /* Trigger a manual dump */
+ /* Do not proceed if lkcd not yet configured */
+ if (!dump_okay) {
+ printk(KERN_ERR "LKCD not yet configured. Cannot take manual dump\n");
+ return -ENODEV;
+ }
+
+ /* Take the dump */
+ return manual_handle_crashdump();
+
+ default:
+ /*
+ * these are network dump specific ioctls, let the
+ * module handle them.
+ */
+ return dump_dev_ioctl(cmd, arg);
+ }
+ return 0;
+}
+
+/*
+ * Handle special cases for dump_device:
+ * changing the dump device requires re-opening the device.
+ */
+static int
+proc_dump_device(ctl_table *ctl, int write, struct file *f,
+ void *buffer, size_t *lenp)
+{
+ int *valp = ctl->data;
+ int oval = *valp;
+ int ret = -EPERM;
+
+ /* same permission checks as ioctl */
+ if (capable(CAP_SYS_ADMIN)) {
+ ret = proc_doulonghex(ctl, write, f, buffer, lenp);
+ if (ret == 0 && write && *valp != oval) {
+ /* need to restore old value to close properly */
+ dump_config.dump_device = (dev_t) oval;
+ __dump_open();
+ ret = dumper_setup(dump_config.flags, (dev_t) *valp);
+ }
+ }
+
+ return ret;
+}
+
+/* All for the want of a proc_do_xxx routine which prints values in hex */
+static int
+proc_doulonghex(ctl_table *ctl, int write, struct file *f,
+ void *buffer, size_t *lenp)
+{
+#define TMPBUFLEN 20
+ unsigned long *i;
+ size_t len, left;
+ char buf[TMPBUFLEN];
+
+ if (!ctl->data || !ctl->maxlen || !*lenp || (f->f_pos)) {
+ *lenp = 0;
+ return 0;
+ }
+
+ i = (unsigned long *) ctl->data;
+ left = *lenp;
+
+ sprintf(buf, "0x%lx\n", (*i));
+ len = strlen(buf);
+ if (len > left)
+ len = left;
+ if (copy_to_user(buffer, buf, len))
+ return -EFAULT;
+
+ left -= len;
+ *lenp -= left;
+ f->f_pos += *lenp;
+ return 0;
+}
+
+/*
+ * -----------------------------------------------------------------------
+ * I N I T F U N C T I O N S
+ * -----------------------------------------------------------------------
+ */
+
+/*
+ * These register and unregister routines are exported for modules
+ * to register their dump drivers (like block, net etc)
+ */
+int
+dump_register_device(struct dump_dev *ddev)
+{
+ struct list_head *tmp;
+ struct dump_dev *dev;
+
+ list_for_each(tmp, &dump_target_list) {
+ dev = list_entry(tmp, struct dump_dev, list);
+ if (strcmp(ddev->type_name, dev->type_name) == 0) {
+ printk(KERN_WARNING "Target type %s already registered\n",
+ dev->type_name);
+ return -EEXIST;
+ }
+ }
+ list_add(&(ddev->list), &dump_target_list);
+
+ return 0;
+}
+
+void
+dump_unregister_device(struct dump_dev *ddev)
+{
+ list_del(&(ddev->list));
+ if (ddev != dump_dev)
+ return;
+
+ dump_okay = 0;
+
+ if (dump_config.dumper)
+ dump_unconfigure();
+
+ dump_config.flags &= ~DUMP_FLAGS_TARGETMASK;
+ dump_dev = NULL;
+ dump_config.dumper = NULL;
+}
+
+static int panic_event(struct notifier_block *this, unsigned long event,
+ void *ptr)
+{
+#ifdef CONFIG_ARM
+ get_current_general_regs(&all_regs);
+ get_current_cp14_regs(&all_regs);
+ get_current_cp15_regs(&all_regs);
+ dump_execute((const char *)ptr, &all_regs);
+#else
+ struct pt_regs regs;
+
+ get_current_regs(&regs);
+ dump_execute((const char *)ptr, &regs);
+#endif
+ return 0;
+}
+
+extern struct notifier_block *panic_notifier_list;
+static struct notifier_block panic_block = {
+ .notifier_call = panic_event,
+};
+
+#ifdef CONFIG_MAGIC_SYSRQ
+/* Sysrq handler */
+static void sysrq_handle_crashdump(int key, struct pt_regs *pt_regs,
+ struct tty_struct *tty) {
+ dump_execute("sysrq", pt_regs);
+}
+
+static struct sysrq_key_op sysrq_crashdump_op = {
+ .handler = sysrq_handle_crashdump,
+ .help_msg = "Dump",
+ .action_msg = "Starting crash dump",
+};
+#endif
+
+static inline void
+dump_sysrq_register(void)
+{
+#ifdef CONFIG_MAGIC_SYSRQ
+ register_sysrq_key(DUMP_SYSRQ_KEY, &sysrq_crashdump_op);
+#endif
+}
+
+static inline void
+dump_sysrq_unregister(void)
+{
+#ifdef CONFIG_MAGIC_SYSRQ
+ unregister_sysrq_key(DUMP_SYSRQ_KEY, &sysrq_crashdump_op);
+#endif
+}
+
+/*
+ * Name: dump_init()
+ * Func: Initialize the dump process. This will set up any architecture
+ * dependent code. The big key is we need the memory offsets before
+ * the page table is initialized, because the base memory offset
+ * is changed after paging_init() is called.
+ */
+static int __init
+dump_init(void)
+{
+ struct sysinfo info;
+ int err;
+
+ /* try to create our dump device */
+ err = misc_register(&dump_miscdev);
+ if (err) {
+ printk(KERN_ERR "cannot register dump character device!\n");
+ return err;
+ }
+
+ __dump_init((u64)PAGE_OFFSET);
+
+ /* set the dump_compression_list structure up */
+ dump_register_compression(&dump_none_compression);
+
+ /* grab the total memory size now (not if/when we crash) */
+ si_meminfo(&info);
+
+ /* set the memory size */
+ dump_header.dh_memory_size = (u64)info.totalram;
+
+ sysctl_header = register_sysctl_table(kernel_root, 0);
+ dump_sysrq_register();
+
+ notifier_chain_register(&panic_notifier_list, &panic_block);
+ dump_function_ptr = dump_execute;
+
+ pr_info("Crash dump driver initialized.\n");
+ return 0;
+}
+
+static void __exit
+dump_cleanup(void)
+{
+ dump_okay = 0;
+
+ if (dump_config.dumper)
+ dump_unconfigure();
+
+ /* arch-specific cleanup routine */
+ __dump_cleanup();
+
+ /* ignore errors while unregistering -- nothing we can do about them */
+ unregister_sysctl_table(sysctl_header);
+ misc_deregister(&dump_miscdev);
+ dump_sysrq_unregister();
+ notifier_chain_unregister(&panic_notifier_list, &panic_block);
+ dump_function_ptr = NULL;
+}
+
+EXPORT_SYMBOL(dump_register_compression);
+EXPORT_SYMBOL(dump_unregister_compression);
+EXPORT_SYMBOL(dump_register_device);
+EXPORT_SYMBOL(dump_unregister_device);
+EXPORT_SYMBOL(dump_config);
+EXPORT_SYMBOL(dump_silence_level);
+
+EXPORT_SYMBOL(__dump_irq_enable);
+EXPORT_SYMBOL(__dump_irq_restore);
+
+MODULE_AUTHOR("Matt D. Robinson <yakker@sourceforge.net>");
+MODULE_DESCRIPTION("Linux Kernel Crash Dump (LKCD) driver");
+MODULE_LICENSE("GPL");
+
+module_init(dump_init);
+module_exit(dump_cleanup);
--- /dev/null
+/*
+ i2c-sensor.c - Part of lm_sensors, Linux kernel modules for hardware
+ monitoring
+ Copyright (c) 1998 - 2001 Frodo Looijaard <frodol@dds.nl> and
+ Mark D. Studebaker <mdsxyz123@yahoo.com>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/ctype.h>
+#include <linux/sysctl.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+#include <asm/uaccess.h>
+
+
+/* Very inefficient for ISA detects, and won't work for 10-bit addresses! */
+int i2c_detect(struct i2c_adapter *adapter,
+ struct i2c_address_data *address_data,
+ int (*found_proc) (struct i2c_adapter *, int, int))
+{
+ int addr, i, found, j, err;
+ struct i2c_force_data *this_force;
+ int is_isa = i2c_is_isa_adapter(adapter);
+ int adapter_id =
+ is_isa ? ANY_I2C_ISA_BUS : i2c_adapter_id(adapter);
+
+ /* Forget it if we can't probe using SMBUS_QUICK */
+ if ((!is_isa) &&
+ !i2c_check_functionality(adapter, I2C_FUNC_SMBUS_QUICK))
+ return -1;
+
+ for (addr = 0x00; addr <= (is_isa ? 0xffff : 0x7f); addr++) {
+ if (!is_isa && i2c_check_addr(adapter, addr))
+ continue;
+
+ /* If it is in one of the force entries, we don't do any
+ detection at all */
+ found = 0;
+ for (i = 0; !found && (this_force = address_data->forces + i, this_force->force); i++) {
+ for (j = 0; !found && (this_force->force[j] != I2C_CLIENT_END); j += 2) {
+ if ( ((adapter_id == this_force->force[j]) ||
+ ((this_force->force[j] == ANY_I2C_BUS) && !is_isa)) &&
+ (addr == this_force->force[j + 1]) ) {
+ dev_dbg(&adapter->dev, "found force parameter for adapter %d, addr %04x\n", adapter_id, addr);
+ if ((err = found_proc(adapter, addr, this_force->kind)))
+ return err;
+ found = 1;
+ }
+ }
+ }
+ if (found)
+ continue;
+
+ /* If this address is in one of the ignores, we can forget about it
+ right now */
+ for (i = 0; !found && (address_data->ignore[i] != I2C_CLIENT_END); i += 2) {
+ if ( ((adapter_id == address_data->ignore[i]) ||
+ ((address_data->ignore[i] == ANY_I2C_BUS) &&
+ !is_isa)) &&
+ (addr == address_data->ignore[i + 1])) {
+ dev_dbg(&adapter->dev, "found ignore parameter for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ for (i = 0; !found && (address_data->ignore_range[i] != I2C_CLIENT_END); i += 3) {
+ if ( ((adapter_id == address_data->ignore_range[i]) ||
+ ((address_data->ignore_range[i] == ANY_I2C_BUS) &&
+ !is_isa)) &&
+ (addr >= address_data->ignore_range[i + 1]) &&
+ (addr <= address_data->ignore_range[i + 2])) {
+ dev_dbg(&adapter->dev, "found ignore_range parameter for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ if (found)
+ continue;
+
+ /* Now, we will do a detection, but only if it is in the normal or
+ probe entries */
+ if (is_isa) {
+ for (i = 0; !found && (address_data->normal_isa[i] != I2C_CLIENT_ISA_END); i += 1) {
+ if (addr == address_data->normal_isa[i]) {
+ dev_dbg(&adapter->dev, "found normal isa entry for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ for (i = 0; !found && (address_data->normal_isa_range[i] != I2C_CLIENT_ISA_END); i += 3) {
+ if ((addr >= address_data->normal_isa_range[i]) &&
+ (addr <= address_data->normal_isa_range[i + 1]) &&
+ ((addr - address_data->normal_isa_range[i]) % address_data->normal_isa_range[i + 2] == 0)) {
+ dev_dbg(&adapter->dev, "found normal isa_range entry for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ } else {
+ for (i = 0; !found && (address_data->normal_i2c[i] != I2C_CLIENT_END); i += 1) {
+ if (addr == address_data->normal_i2c[i]) {
+ found = 1;
+ dev_dbg(&adapter->dev, "found normal i2c entry for adapter %d, addr %02x\n", adapter_id, addr);
+ }
+ }
+ for (i = 0; !found && (address_data->normal_i2c_range[i] != I2C_CLIENT_END); i += 2) {
+ if ((addr >= address_data->normal_i2c_range[i]) &&
+ (addr <= address_data->normal_i2c_range[i + 1])) {
+ dev_dbg(&adapter->dev, "found normal i2c_range entry for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ }
+
+ for (i = 0;
+ !found && (address_data->probe[i] != I2C_CLIENT_END);
+ i += 2) {
+ if (((adapter_id == address_data->probe[i]) ||
+ ((address_data->probe[i] == ANY_I2C_BUS) && !is_isa)) &&
+ (addr == address_data->probe[i + 1])) {
+ dev_dbg(&adapter->dev, "found probe parameter for adapter %d, addr %04x\n", adapter_id, addr);
+ found = 1;
+ }
+ }
+ for (i = 0; !found && (address_data->probe_range[i] != I2C_CLIENT_END); i += 3) {
+ if ( ((adapter_id == address_data->probe_range[i]) ||
+ ((address_data->probe_range[i] == ANY_I2C_BUS) && !is_isa)) &&
+ (addr >= address_data->probe_range[i + 1]) &&
+ (addr <= address_data->probe_range[i + 2])) {
+ found = 1;
+ dev_dbg(&adapter->dev, "found probe_range parameter for adapter %d, addr %04x\n", adapter_id, addr);
+ }
+ }
+ if (!found)
+ continue;
+
+ /* OK, so we really should examine this address. First check
+ whether there is some client here at all! */
+ if (is_isa ||
+ (i2c_smbus_xfer (adapter, addr, 0, 0, 0, I2C_SMBUS_QUICK, NULL) >= 0))
+ if ((err = found_proc(adapter, addr, -1)))
+ return err;
+ }
+ return 0;
+}
+
+EXPORT_SYMBOL(i2c_detect);
+
+MODULE_AUTHOR("Frodo Looijaard <frodol@dds.nl>");
+MODULE_DESCRIPTION("i2c-sensor driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * dvb-dibusb-pid.c is part of the driver for mobile USB Budget DVB-T devices
+ * based on reference design made by DiBcom (http://www.dibcom.fr/)
+ *
+ * Copyright (C) 2004-5 Patrick Boettcher (patrick.boettcher@desy.de)
+ *
+ * see dvb-dibusb-core.c for more copyright details.
+ *
+ * This file contains functions for initializing and handling the internal
+ * pid-list. This pid-list mirrors the information currently stored in the
+ * device's pid-list.
+ */
+#include "dvb-dibusb.h"
+
+int dibusb_pid_list_init(struct usb_dibusb *dib)
+{
+ int i;
+ dib->pid_list = kmalloc(sizeof(struct dibusb_pid) * dib->dibdev->dev_cl->demod->pid_filter_count,GFP_KERNEL);
+ if (dib->pid_list == NULL)
+ return -ENOMEM;
+
+ deb_xfer("initializing %d pids for the pid_list.\n",dib->dibdev->dev_cl->demod->pid_filter_count);
+
+ spin_lock_init(&dib->pid_list_lock);
+ memset(dib->pid_list,0,dib->dibdev->dev_cl->demod->pid_filter_count*(sizeof(struct dibusb_pid)));
+ for (i=0; i < dib->dibdev->dev_cl->demod->pid_filter_count; i++) {
+ dib->pid_list[i].index = i;
+ dib->pid_list[i].pid = 0;
+ dib->pid_list[i].active = 0;
+ }
+
+ dib->init_state |= DIBUSB_STATE_PIDLIST;
+ return 0;
+}
+
+void dibusb_pid_list_exit(struct usb_dibusb *dib)
+{
+ if (dib->init_state & DIBUSB_STATE_PIDLIST)
+ kfree(dib->pid_list);
+ dib->init_state &= ~DIBUSB_STATE_PIDLIST;
+}
+
+/* fetch a pid from pid_list and set it on or off */
+int dibusb_ctrl_pid(struct usb_dibusb *dib, struct dvb_demux_feed *dvbdmxfeed , int onoff)
+{
+ int i,ret = -1;
+ unsigned long flags;
+ u16 pid = dvbdmxfeed->pid;
+
+ if (onoff) {
+ spin_lock_irqsave(&dib->pid_list_lock,flags);
+ for (i=0; i < dib->dibdev->dev_cl->demod->pid_filter_count; i++)
+ if (!dib->pid_list[i].active) {
+ dib->pid_list[i].pid = pid;
+ dib->pid_list[i].active = 1;
+ ret = i;
+ break;
+ }
+ if (ret < 0) {
+ /* no free filter slot left - bail out before indexing pid_list */
+ spin_unlock_irqrestore(&dib->pid_list_lock,flags);
+ return -ENOMEM;
+ }
+ dvbdmxfeed->priv = &dib->pid_list[ret];
+ spin_unlock_irqrestore(&dib->pid_list_lock,flags);
+
+ if (dib->xfer_ops.pid_ctrl != NULL)
+ dib->xfer_ops.pid_ctrl(dib->fe,dib->pid_list[ret].index,dib->pid_list[ret].pid,1);
+ } else {
+ struct dibusb_pid *dpid = dvbdmxfeed->priv;
+
+ if (dib->xfer_ops.pid_ctrl != NULL)
+ dib->xfer_ops.pid_ctrl(dib->fe,dpid->index,0,0);
+
+ dpid->pid = 0;
+ dpid->active = 0;
+ ret = dpid->index;
+ }
+
+ /* report which list slot this pid was switched on/off at */
+ deb_info("setting pid: %5d %04x at index %d '%s'\n",pid,pid,ret,onoff ? "on" : "off");
+
+ return ret;
+}
+
--- /dev/null
+#ifndef SCSI_ASCQ_TBL_C_INCLUDED
+#define SCSI_ASCQ_TBL_C_INCLUDED
+
+/* AuToMaGiCaLlY generated from: "t10.org/asc-num.txt"
+ *******************************************************************************
+ * File: ASC-NUM.TXT
+ *
+ * SCSI ASC/ASCQ Assignments
+ * Numeric Sorted Listing
+ * as of 5/18/00
+ *
+ * D - DIRECT ACCESS DEVICE (SBC-2) device column key
+ * .T - SEQUENTIAL ACCESS DEVICE (SSC) -------------------
+ * . L - PRINTER DEVICE (SSC) blank = reserved
+ * . P - PROCESSOR DEVICE (SPC) not blank = allowed
+ * . .W - WRITE ONCE READ MULTIPLE DEVICE (SBC-2)
+ * . . R - CD DEVICE (MMC)
+ * . . S - SCANNER DEVICE (SCSI-2)
+ * . . .O - OPTICAL MEMORY DEVICE (SBC-2)
+ * . . . M - MEDIA CHANGER DEVICE (SMC)
+ * . . . C - COMMUNICATION DEVICE (SCSI-2)
+ * . . . .A - STORAGE ARRAY DEVICE (SCC)
+ * . . . . E - ENCLOSURE SERVICES DEVICE (SES)
+ * . . . . B - SIMPLIFIED DIRECT-ACCESS DEVICE (RBC)
+ * . . . . .K - OPTICAL CARD READER/WRITER DEVICE (OCRW)
+ * ASC/ASCQ DTLPWRSOMCAEBK Description
+ * ------- -------------- ----------------------------------------------------
+ */
+
+static char SenseDevTypes001[] = "DTLPWRSOMCAEBK";
+static char SenseDevTypes002[] = ".T............";
+static char SenseDevTypes003[] = ".T....S.......";
+static char SenseDevTypes004[] = ".TL...S.......";
+static char SenseDevTypes005[] = ".....R........";
+static char SenseDevTypes006[] = "DTL.WRSOM.AEBK";
+static char SenseDevTypes007[] = "D...W..O....BK";
+static char SenseDevTypes008[] = "D...WR.OM...BK";
+static char SenseDevTypes009[] = "DTL.W.SO....BK";
+static char SenseDevTypes010[] = "DTL..R.O....B.";
+static char SenseDevTypes011[] = "DT..W..OMCA.BK";
+static char SenseDevTypes012[] = "..............";
+static char SenseDevTypes013[] = "DTL.WRSOMCAEBK";
+static char SenseDevTypes014[] = "DTL.WRSOM...BK";
+static char SenseDevTypes015[] = "DT...R.OM...BK";
+static char SenseDevTypes016[] = "DTLPWRSO.C...K";
+static char SenseDevTypes017[] = "DT..WR.O....B.";
+static char SenseDevTypes018[] = "....WR.O.....K";
+static char SenseDevTypes019[] = "....WR.O......";
+static char SenseDevTypes020[] = ".T...RS.......";
+static char SenseDevTypes021[] = ".............K";
+static char SenseDevTypes022[] = "DT..W..O....B.";
+static char SenseDevTypes023[] = "DT..WRSO....BK";
+static char SenseDevTypes024[] = "DT..W.SO....BK";
+static char SenseDevTypes025[] = "....WR.O....B.";
+static char SenseDevTypes026[] = "....W..O....B.";
+static char SenseDevTypes027[] = "DT.....O....BK";
+static char SenseDevTypes028[] = "DTL.WRSO....BK";
+static char SenseDevTypes029[] = "DT..WR.O....BK";
+static char SenseDevTypes030[] = "DT..W..O....BK";
+static char SenseDevTypes031[] = "D...WR.O....BK";
+static char SenseDevTypes032[] = "D......O.....K";
+static char SenseDevTypes033[] = "D......O....BK";
+static char SenseDevTypes034[] = "DT..WR.OM...BK";
+static char SenseDevTypes035[] = "D.............";
+static char SenseDevTypes036[] = "DTLPWRSOMCAE.K";
+static char SenseDevTypes037[] = "DTLPWRSOMCA.BK";
+static char SenseDevTypes038[] = ".T...R........";
+static char SenseDevTypes039[] = "DT..WR.OM...B.";
+static char SenseDevTypes040[] = "DTL.WRSOMCAE.K";
+static char SenseDevTypes041[] = "DTLPWRSOMCAE..";
+static char SenseDevTypes042[] = "......S.......";
+static char SenseDevTypes043[] = "............B.";
+static char SenseDevTypes044[] = "DTLPWRSO.CA..K";
+static char SenseDevTypes045[] = "DT...R.......K";
+static char SenseDevTypes046[] = "D.L..R.O....B.";
+static char SenseDevTypes047[] = "..L...........";
+static char SenseDevTypes048[] = ".TL...........";
+static char SenseDevTypes049[] = "DTLPWRSOMC..BK";
+static char SenseDevTypes050[] = "DT..WR.OMCAEBK";
+static char SenseDevTypes051[] = "DT..WR.OMCAEB.";
+static char SenseDevTypes052[] = ".T...R.O......";
+static char SenseDevTypes053[] = "...P..........";
+static char SenseDevTypes054[] = "DTLPWRSOM.AE.K";
+static char SenseDevTypes055[] = "DTLPWRSOM.AE..";
+static char SenseDevTypes056[] = ".......O......";
+static char SenseDevTypes057[] = "DTLPWRSOM...BK";
+static char SenseDevTypes058[] = "DT..WR.O..A.BK";
+static char SenseDevTypes059[] = "DTLPWRSOM....K";
+static char SenseDevTypes060[] = "D......O......";
+static char SenseDevTypes061[] = ".....R......B.";
+static char SenseDevTypes062[] = "D...........B.";
+static char SenseDevTypes063[] = "............BK";
+static char SenseDevTypes064[] = "..........A...";
+
+static ASCQ_Table_t ASCQ_Table[] = {
+ {
+ 0x00, 0x00,
+ SenseDevTypes001,
+ "NO ADDITIONAL SENSE INFORMATION"
+ },
+ {
+ 0x00, 0x01,
+ SenseDevTypes002,
+ "FILEMARK DETECTED"
+ },
+ {
+ 0x00, 0x02,
+ SenseDevTypes003,
+ "END-OF-PARTITION/MEDIUM DETECTED"
+ },
+ {
+ 0x00, 0x03,
+ SenseDevTypes002,
+ "SETMARK DETECTED"
+ },
+ {
+ 0x00, 0x04,
+ SenseDevTypes003,
+ "BEGINNING-OF-PARTITION/MEDIUM DETECTED"
+ },
+ {
+ 0x00, 0x05,
+ SenseDevTypes004,
+ "END-OF-DATA DETECTED"
+ },
+ {
+ 0x00, 0x06,
+ SenseDevTypes001,
+ "I/O PROCESS TERMINATED"
+ },
+ {
+ 0x00, 0x11,
+ SenseDevTypes005,
+ "AUDIO PLAY OPERATION IN PROGRESS"
+ },
+ {
+ 0x00, 0x12,
+ SenseDevTypes005,
+ "AUDIO PLAY OPERATION PAUSED"
+ },
+ {
+ 0x00, 0x13,
+ SenseDevTypes005,
+ "AUDIO PLAY OPERATION SUCCESSFULLY COMPLETED"
+ },
+ {
+ 0x00, 0x14,
+ SenseDevTypes005,
+ "AUDIO PLAY OPERATION STOPPED DUE TO ERROR"
+ },
+ {
+ 0x00, 0x15,
+ SenseDevTypes005,
+ "NO CURRENT AUDIO STATUS TO RETURN"
+ },
+ {
+ 0x00, 0x16,
+ SenseDevTypes001,
+ "OPERATION IN PROGRESS"
+ },
+ {
+ 0x00, 0x17,
+ SenseDevTypes006,
+ "CLEANING REQUESTED"
+ },
+ {
+ 0x01, 0x00,
+ SenseDevTypes007,
+ "NO INDEX/SECTOR SIGNAL"
+ },
+ {
+ 0x02, 0x00,
+ SenseDevTypes008,
+ "NO SEEK COMPLETE"
+ },
+ {
+ 0x03, 0x00,
+ SenseDevTypes009,
+ "PERIPHERAL DEVICE WRITE FAULT"
+ },
+ {
+ 0x03, 0x01,
+ SenseDevTypes002,
+ "NO WRITE CURRENT"
+ },
+ {
+ 0x03, 0x02,
+ SenseDevTypes002,
+ "EXCESSIVE WRITE ERRORS"
+ },
+ {
+ 0x04, 0x00,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT READY, CAUSE NOT REPORTABLE"
+ },
+ {
+ 0x04, 0x01,
+ SenseDevTypes001,
+ "LOGICAL UNIT IS IN PROCESS OF BECOMING READY"
+ },
+ {
+ 0x04, 0x02,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT READY, INITIALIZING CMD. REQUIRED"
+ },
+ {
+ 0x04, 0x03,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT READY, MANUAL INTERVENTION REQUIRED"
+ },
+ {
+ 0x04, 0x04,
+ SenseDevTypes010,
+ "LOGICAL UNIT NOT READY, FORMAT IN PROGRESS"
+ },
+ {
+ 0x04, 0x05,
+ SenseDevTypes011,
+ "LOGICAL UNIT NOT READY, REBUILD IN PROGRESS"
+ },
+ {
+ 0x04, 0x06,
+ SenseDevTypes011,
+ "LOGICAL UNIT NOT READY, RECALCULATION IN PROGRESS"
+ },
+ {
+ 0x04, 0x07,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT READY, OPERATION IN PROGRESS"
+ },
+ {
+ 0x04, 0x08,
+ SenseDevTypes005,
+ "LOGICAL UNIT NOT READY, LONG WRITE IN PROGRESS"
+ },
+ {
+ 0x04, 0x09,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT READY, SELF-TEST IN PROGRESS"
+ },
+ {
+ 0x04, 0x10,
+ SenseDevTypes012,
+ "auxiliary memory code 2 (99-148) [proposed]"
+ },
+ {
+ 0x05, 0x00,
+ SenseDevTypes013,
+ "LOGICAL UNIT DOES NOT RESPOND TO SELECTION"
+ },
+ {
+ 0x06, 0x00,
+ SenseDevTypes008,
+ "NO REFERENCE POSITION FOUND"
+ },
+ {
+ 0x07, 0x00,
+ SenseDevTypes014,
+ "MULTIPLE PERIPHERAL DEVICES SELECTED"
+ },
+ {
+ 0x08, 0x00,
+ SenseDevTypes013,
+ "LOGICAL UNIT COMMUNICATION FAILURE"
+ },
+ {
+ 0x08, 0x01,
+ SenseDevTypes013,
+ "LOGICAL UNIT COMMUNICATION TIME-OUT"
+ },
+ {
+ 0x08, 0x02,
+ SenseDevTypes013,
+ "LOGICAL UNIT COMMUNICATION PARITY ERROR"
+ },
+ {
+ 0x08, 0x03,
+ SenseDevTypes015,
+ "LOGICAL UNIT COMMUNICATION CRC ERROR (ULTRA-DMA/32)"
+ },
+ {
+ 0x08, 0x04,
+ SenseDevTypes016,
+ "UNREACHABLE COPY TARGET"
+ },
+ {
+ 0x09, 0x00,
+ SenseDevTypes017,
+ "TRACK FOLLOWING ERROR"
+ },
+ {
+ 0x09, 0x01,
+ SenseDevTypes018,
+ "TRACKING SERVO FAILURE"
+ },
+ {
+ 0x09, 0x02,
+ SenseDevTypes018,
+ "FOCUS SERVO FAILURE"
+ },
+ {
+ 0x09, 0x03,
+ SenseDevTypes019,
+ "SPINDLE SERVO FAILURE"
+ },
+ {
+ 0x09, 0x04,
+ SenseDevTypes017,
+ "HEAD SELECT FAULT"
+ },
+ {
+ 0x0A, 0x00,
+ SenseDevTypes001,
+ "ERROR LOG OVERFLOW"
+ },
+ {
+ 0x0B, 0x00,
+ SenseDevTypes001,
+ "WARNING"
+ },
+ {
+ 0x0B, 0x01,
+ SenseDevTypes001,
+ "WARNING - SPECIFIED TEMPERATURE EXCEEDED"
+ },
+ {
+ 0x0B, 0x02,
+ SenseDevTypes001,
+ "WARNING - ENCLOSURE DEGRADED"
+ },
+ {
+ 0x0C, 0x00,
+ SenseDevTypes020,
+ "WRITE ERROR"
+ },
+ {
+ 0x0C, 0x01,
+ SenseDevTypes021,
+ "WRITE ERROR - RECOVERED WITH AUTO REALLOCATION"
+ },
+ {
+ 0x0C, 0x02,
+ SenseDevTypes007,
+ "WRITE ERROR - AUTO REALLOCATION FAILED"
+ },
+ {
+ 0x0C, 0x03,
+ SenseDevTypes007,
+ "WRITE ERROR - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x0C, 0x04,
+ SenseDevTypes022,
+ "COMPRESSION CHECK MISCOMPARE ERROR"
+ },
+ {
+ 0x0C, 0x05,
+ SenseDevTypes022,
+ "DATA EXPANSION OCCURRED DURING COMPRESSION"
+ },
+ {
+ 0x0C, 0x06,
+ SenseDevTypes022,
+ "BLOCK NOT COMPRESSIBLE"
+ },
+ {
+ 0x0C, 0x07,
+ SenseDevTypes005,
+ "WRITE ERROR - RECOVERY NEEDED"
+ },
+ {
+ 0x0C, 0x08,
+ SenseDevTypes005,
+ "WRITE ERROR - RECOVERY FAILED"
+ },
+ {
+ 0x0C, 0x09,
+ SenseDevTypes005,
+ "WRITE ERROR - LOSS OF STREAMING"
+ },
+ {
+ 0x0C, 0x0A,
+ SenseDevTypes005,
+ "WRITE ERROR - PADDING BLOCKS ADDED"
+ },
+ {
+ 0x0C, 0x0B,
+ SenseDevTypes012,
+ "auxiliary memory code 4 (99-148) [proposed]"
+ },
+ {
+ 0x10, 0x00,
+ SenseDevTypes007,
+ "ID CRC OR ECC ERROR"
+ },
+ {
+ 0x11, 0x00,
+ SenseDevTypes023,
+ "UNRECOVERED READ ERROR"
+ },
+ {
+ 0x11, 0x01,
+ SenseDevTypes023,
+ "READ RETRIES EXHAUSTED"
+ },
+ {
+ 0x11, 0x02,
+ SenseDevTypes023,
+ "ERROR TOO LONG TO CORRECT"
+ },
+ {
+ 0x11, 0x03,
+ SenseDevTypes024,
+ "MULTIPLE READ ERRORS"
+ },
+ {
+ 0x11, 0x04,
+ SenseDevTypes007,
+ "UNRECOVERED READ ERROR - AUTO REALLOCATE FAILED"
+ },
+ {
+ 0x11, 0x05,
+ SenseDevTypes025,
+ "L-EC UNCORRECTABLE ERROR"
+ },
+ {
+ 0x11, 0x06,
+ SenseDevTypes025,
+ "CIRC UNRECOVERED ERROR"
+ },
+ {
+ 0x11, 0x07,
+ SenseDevTypes026,
+ "DATA RE-SYNCHRONIZATION ERROR"
+ },
+ {
+ 0x11, 0x08,
+ SenseDevTypes002,
+ "INCOMPLETE BLOCK READ"
+ },
+ {
+ 0x11, 0x09,
+ SenseDevTypes002,
+ "NO GAP FOUND"
+ },
+ {
+ 0x11, 0x0A,
+ SenseDevTypes027,
+ "MISCORRECTED ERROR"
+ },
+ {
+ 0x11, 0x0B,
+ SenseDevTypes007,
+ "UNRECOVERED READ ERROR - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x11, 0x0C,
+ SenseDevTypes007,
+ "UNRECOVERED READ ERROR - RECOMMEND REWRITE THE DATA"
+ },
+ {
+ 0x11, 0x0D,
+ SenseDevTypes017,
+ "DE-COMPRESSION CRC ERROR"
+ },
+ {
+ 0x11, 0x0E,
+ SenseDevTypes017,
+ "CANNOT DECOMPRESS USING DECLARED ALGORITHM"
+ },
+ {
+ 0x11, 0x0F,
+ SenseDevTypes005,
+ "ERROR READING UPC/EAN NUMBER"
+ },
+ {
+ 0x11, 0x10,
+ SenseDevTypes005,
+ "ERROR READING ISRC NUMBER"
+ },
+ {
+ 0x11, 0x11,
+ SenseDevTypes005,
+ "READ ERROR - LOSS OF STREAMING"
+ },
+ {
+ 0x11, 0x12,
+ SenseDevTypes012,
+ "auxiliary memory code 3 (99-148) [proposed]"
+ },
+ {
+ 0x12, 0x00,
+ SenseDevTypes007,
+ "ADDRESS MARK NOT FOUND FOR ID FIELD"
+ },
+ {
+ 0x13, 0x00,
+ SenseDevTypes007,
+ "ADDRESS MARK NOT FOUND FOR DATA FIELD"
+ },
+ {
+ 0x14, 0x00,
+ SenseDevTypes028,
+ "RECORDED ENTITY NOT FOUND"
+ },
+ {
+ 0x14, 0x01,
+ SenseDevTypes029,
+ "RECORD NOT FOUND"
+ },
+ {
+ 0x14, 0x02,
+ SenseDevTypes002,
+ "FILEMARK OR SETMARK NOT FOUND"
+ },
+ {
+ 0x14, 0x03,
+ SenseDevTypes002,
+ "END-OF-DATA NOT FOUND"
+ },
+ {
+ 0x14, 0x04,
+ SenseDevTypes002,
+ "BLOCK SEQUENCE ERROR"
+ },
+ {
+ 0x14, 0x05,
+ SenseDevTypes030,
+ "RECORD NOT FOUND - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x14, 0x06,
+ SenseDevTypes030,
+ "RECORD NOT FOUND - DATA AUTO-REALLOCATED"
+ },
+ {
+ 0x15, 0x00,
+ SenseDevTypes014,
+ "RANDOM POSITIONING ERROR"
+ },
+ {
+ 0x15, 0x01,
+ SenseDevTypes014,
+ "MECHANICAL POSITIONING ERROR"
+ },
+ {
+ 0x15, 0x02,
+ SenseDevTypes029,
+ "POSITIONING ERROR DETECTED BY READ OF MEDIUM"
+ },
+ {
+ 0x16, 0x00,
+ SenseDevTypes007,
+ "DATA SYNCHRONIZATION MARK ERROR"
+ },
+ {
+ 0x16, 0x01,
+ SenseDevTypes007,
+ "DATA SYNC ERROR - DATA REWRITTEN"
+ },
+ {
+ 0x16, 0x02,
+ SenseDevTypes007,
+ "DATA SYNC ERROR - RECOMMEND REWRITE"
+ },
+ {
+ 0x16, 0x03,
+ SenseDevTypes007,
+ "DATA SYNC ERROR - DATA AUTO-REALLOCATED"
+ },
+ {
+ 0x16, 0x04,
+ SenseDevTypes007,
+ "DATA SYNC ERROR - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x17, 0x00,
+ SenseDevTypes023,
+ "RECOVERED DATA WITH NO ERROR CORRECTION APPLIED"
+ },
+ {
+ 0x17, 0x01,
+ SenseDevTypes023,
+ "RECOVERED DATA WITH RETRIES"
+ },
+ {
+ 0x17, 0x02,
+ SenseDevTypes029,
+ "RECOVERED DATA WITH POSITIVE HEAD OFFSET"
+ },
+ {
+ 0x17, 0x03,
+ SenseDevTypes029,
+ "RECOVERED DATA WITH NEGATIVE HEAD OFFSET"
+ },
+ {
+ 0x17, 0x04,
+ SenseDevTypes025,
+ "RECOVERED DATA WITH RETRIES AND/OR CIRC APPLIED"
+ },
+ {
+ 0x17, 0x05,
+ SenseDevTypes031,
+ "RECOVERED DATA USING PREVIOUS SECTOR ID"
+ },
+ {
+ 0x17, 0x06,
+ SenseDevTypes007,
+ "RECOVERED DATA WITHOUT ECC - DATA AUTO-REALLOCATED"
+ },
+ {
+ 0x17, 0x07,
+ SenseDevTypes031,
+ "RECOVERED DATA WITHOUT ECC - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x17, 0x08,
+ SenseDevTypes031,
+ "RECOVERED DATA WITHOUT ECC - RECOMMEND REWRITE"
+ },
+ {
+ 0x17, 0x09,
+ SenseDevTypes031,
+ "RECOVERED DATA WITHOUT ECC - DATA REWRITTEN"
+ },
+ {
+ 0x18, 0x00,
+ SenseDevTypes029,
+ "RECOVERED DATA WITH ERROR CORRECTION APPLIED"
+ },
+ {
+ 0x18, 0x01,
+ SenseDevTypes031,
+ "RECOVERED DATA WITH ERROR CORR. & RETRIES APPLIED"
+ },
+ {
+ 0x18, 0x02,
+ SenseDevTypes031,
+ "RECOVERED DATA - DATA AUTO-REALLOCATED"
+ },
+ {
+ 0x18, 0x03,
+ SenseDevTypes005,
+ "RECOVERED DATA WITH CIRC"
+ },
+ {
+ 0x18, 0x04,
+ SenseDevTypes005,
+ "RECOVERED DATA WITH L-EC"
+ },
+ {
+ 0x18, 0x05,
+ SenseDevTypes031,
+ "RECOVERED DATA - RECOMMEND REASSIGNMENT"
+ },
+ {
+ 0x18, 0x06,
+ SenseDevTypes031,
+ "RECOVERED DATA - RECOMMEND REWRITE"
+ },
+ {
+ 0x18, 0x07,
+ SenseDevTypes007,
+ "RECOVERED DATA WITH ECC - DATA REWRITTEN"
+ },
+ {
+ 0x19, 0x00,
+ SenseDevTypes032,
+ "DEFECT LIST ERROR"
+ },
+ {
+ 0x19, 0x01,
+ SenseDevTypes032,
+ "DEFECT LIST NOT AVAILABLE"
+ },
+ {
+ 0x19, 0x02,
+ SenseDevTypes032,
+ "DEFECT LIST ERROR IN PRIMARY LIST"
+ },
+ {
+ 0x19, 0x03,
+ SenseDevTypes032,
+ "DEFECT LIST ERROR IN GROWN LIST"
+ },
+ {
+ 0x1A, 0x00,
+ SenseDevTypes001,
+ "PARAMETER LIST LENGTH ERROR"
+ },
+ {
+ 0x1B, 0x00,
+ SenseDevTypes001,
+ "SYNCHRONOUS DATA TRANSFER ERROR"
+ },
+ {
+ 0x1C, 0x00,
+ SenseDevTypes033,
+ "DEFECT LIST NOT FOUND"
+ },
+ {
+ 0x1C, 0x01,
+ SenseDevTypes033,
+ "PRIMARY DEFECT LIST NOT FOUND"
+ },
+ {
+ 0x1C, 0x02,
+ SenseDevTypes033,
+ "GROWN DEFECT LIST NOT FOUND"
+ },
+ {
+ 0x1D, 0x00,
+ SenseDevTypes029,
+ "MISCOMPARE DURING VERIFY OPERATION"
+ },
+ {
+ 0x1E, 0x00,
+ SenseDevTypes007,
+ "RECOVERED ID WITH ECC CORRECTION"
+ },
+ {
+ 0x1F, 0x00,
+ SenseDevTypes032,
+ "PARTIAL DEFECT LIST TRANSFER"
+ },
+ {
+ 0x20, 0x00,
+ SenseDevTypes001,
+ "INVALID COMMAND OPERATION CODE"
+ },
+ {
+ 0x20, 0x01,
+ SenseDevTypes012,
+ "access controls code 1 (99-314) [proposed]"
+ },
+ {
+ 0x20, 0x02,
+ SenseDevTypes012,
+ "access controls code 2 (99-314) [proposed]"
+ },
+ {
+ 0x20, 0x03,
+ SenseDevTypes012,
+ "access controls code 3 (99-314) [proposed]"
+ },
+ {
+ 0x21, 0x00,
+ SenseDevTypes034,
+ "LOGICAL BLOCK ADDRESS OUT OF RANGE"
+ },
+ {
+ 0x21, 0x01,
+ SenseDevTypes034,
+ "INVALID ELEMENT ADDRESS"
+ },
+ {
+ 0x22, 0x00,
+ SenseDevTypes035,
+ "ILLEGAL FUNCTION (USE 20 00, 24 00, OR 26 00)"
+ },
+ {
+ 0x24, 0x00,
+ SenseDevTypes001,
+ "INVALID FIELD IN CDB"
+ },
+ {
+ 0x24, 0x01,
+ SenseDevTypes001,
+ "CDB DECRYPTION ERROR"
+ },
+ {
+ 0x25, 0x00,
+ SenseDevTypes001,
+ "LOGICAL UNIT NOT SUPPORTED"
+ },
+ {
+ 0x26, 0x00,
+ SenseDevTypes001,
+ "INVALID FIELD IN PARAMETER LIST"
+ },
+ {
+ 0x26, 0x01,
+ SenseDevTypes001,
+ "PARAMETER NOT SUPPORTED"
+ },
+ {
+ 0x26, 0x02,
+ SenseDevTypes001,
+ "PARAMETER VALUE INVALID"
+ },
+ {
+ 0x26, 0x03,
+ SenseDevTypes036,
+ "THRESHOLD PARAMETERS NOT SUPPORTED"
+ },
+ {
+ 0x26, 0x04,
+ SenseDevTypes001,
+ "INVALID RELEASE OF PERSISTENT RESERVATION"
+ },
+ {
+ 0x26, 0x05,
+ SenseDevTypes037,
+ "DATA DECRYPTION ERROR"
+ },
+ {
+ 0x26, 0x06,
+ SenseDevTypes016,
+ "TOO MANY TARGET DESCRIPTORS"
+ },
+ {
+ 0x26, 0x07,
+ SenseDevTypes016,
+ "UNSUPPORTED TARGET DESCRIPTOR TYPE CODE"
+ },
+ {
+ 0x26, 0x08,
+ SenseDevTypes016,
+ "TOO MANY SEGMENT DESCRIPTORS"
+ },
+ {
+ 0x26, 0x09,
+ SenseDevTypes016,
+ "UNSUPPORTED SEGMENT DESCRIPTOR TYPE CODE"
+ },
+ {
+ 0x26, 0x0A,
+ SenseDevTypes016,
+ "UNEXPECTED INEXACT SEGMENT"
+ },
+ {
+ 0x26, 0x0B,
+ SenseDevTypes016,
+ "INLINE DATA LENGTH EXCEEDED"
+ },
+ {
+ 0x26, 0x0C,
+ SenseDevTypes016,
+ "INVALID OPERATION FOR COPY SOURCE OR DESTINATION"
+ },
+ {
+ 0x26, 0x0D,
+ SenseDevTypes016,
+ "COPY SEGMENT GRANULARITY VIOLATION"
+ },
+ {
+ 0x27, 0x00,
+ SenseDevTypes029,
+ "WRITE PROTECTED"
+ },
+ {
+ 0x27, 0x01,
+ SenseDevTypes029,
+ "HARDWARE WRITE PROTECTED"
+ },
+ {
+ 0x27, 0x02,
+ SenseDevTypes029,
+ "LOGICAL UNIT SOFTWARE WRITE PROTECTED"
+ },
+ {
+ 0x27, 0x03,
+ SenseDevTypes038,
+ "ASSOCIATED WRITE PROTECT"
+ },
+ {
+ 0x27, 0x04,
+ SenseDevTypes038,
+ "PERSISTENT WRITE PROTECT"
+ },
+ {
+ 0x27, 0x05,
+ SenseDevTypes038,
+ "PERMANENT WRITE PROTECT"
+ },
+ {
+ 0x28, 0x00,
+ SenseDevTypes001,
+ "NOT READY TO READY CHANGE, MEDIUM MAY HAVE CHANGED"
+ },
+ {
+ 0x28, 0x01,
+ SenseDevTypes039,
+ "IMPORT OR EXPORT ELEMENT ACCESSED"
+ },
+ {
+ 0x29, 0x00,
+ SenseDevTypes001,
+ "POWER ON, RESET, OR BUS DEVICE RESET OCCURRED"
+ },
+ {
+ 0x29, 0x01,
+ SenseDevTypes001,
+ "POWER ON OCCURRED"
+ },
+ {
+ 0x29, 0x02,
+ SenseDevTypes001,
+ "SCSI BUS RESET OCCURRED"
+ },
+ {
+ 0x29, 0x03,
+ SenseDevTypes001,
+ "BUS DEVICE RESET FUNCTION OCCURRED"
+ },
+ {
+ 0x29, 0x04,
+ SenseDevTypes001,
+ "DEVICE INTERNAL RESET"
+ },
+ {
+ 0x29, 0x05,
+ SenseDevTypes001,
+ "TRANSCEIVER MODE CHANGED TO SINGLE-ENDED"
+ },
+ {
+ 0x29, 0x06,
+ SenseDevTypes001,
+ "TRANSCEIVER MODE CHANGED TO LVD"
+ },
+ {
+ 0x2A, 0x00,
+ SenseDevTypes013,
+ "PARAMETERS CHANGED"
+ },
+ {
+ 0x2A, 0x01,
+ SenseDevTypes013,
+ "MODE PARAMETERS CHANGED"
+ },
+ {
+ 0x2A, 0x02,
+ SenseDevTypes040,
+ "LOG PARAMETERS CHANGED"
+ },
+ {
+ 0x2A, 0x03,
+ SenseDevTypes036,
+ "RESERVATIONS PREEMPTED"
+ },
+ {
+ 0x2A, 0x04,
+ SenseDevTypes041,
+ "RESERVATIONS RELEASED"
+ },
+ {
+ 0x2A, 0x05,
+ SenseDevTypes041,
+ "REGISTRATIONS PREEMPTED"
+ },
+ {
+ 0x2B, 0x00,
+ SenseDevTypes016,
+ "COPY CANNOT EXECUTE SINCE HOST CANNOT DISCONNECT"
+ },
+ {
+ 0x2C, 0x00,
+ SenseDevTypes001,
+ "COMMAND SEQUENCE ERROR"
+ },
+ {
+ 0x2C, 0x01,
+ SenseDevTypes042,
+ "TOO MANY WINDOWS SPECIFIED"
+ },
+ {
+ 0x2C, 0x02,
+ SenseDevTypes042,
+ "INVALID COMBINATION OF WINDOWS SPECIFIED"
+ },
+ {
+ 0x2C, 0x03,
+ SenseDevTypes005,
+ "CURRENT PROGRAM AREA IS NOT EMPTY"
+ },
+ {
+ 0x2C, 0x04,
+ SenseDevTypes005,
+ "CURRENT PROGRAM AREA IS EMPTY"
+ },
+ {
+ 0x2C, 0x05,
+ SenseDevTypes043,
+ "ILLEGAL POWER CONDITION REQUEST"
+ },
+ {
+ 0x2D, 0x00,
+ SenseDevTypes002,
+ "OVERWRITE ERROR ON UPDATE IN PLACE"
+ },
+ {
+ 0x2E, 0x00,
+ SenseDevTypes044,
+ "ERROR DETECTED BY THIRD PARTY TEMPORARY INITIATOR"
+ },
+ {
+ 0x2E, 0x01,
+ SenseDevTypes044,
+ "THIRD PARTY DEVICE FAILURE"
+ },
+ {
+ 0x2E, 0x02,
+ SenseDevTypes044,
+ "COPY TARGET DEVICE NOT REACHABLE"
+ },
+ {
+ 0x2E, 0x03,
+ SenseDevTypes044,
+ "INCORRECT COPY TARGET DEVICE TYPE"
+ },
+ {
+ 0x2E, 0x04,
+ SenseDevTypes044,
+ "COPY TARGET DEVICE DATA UNDERRUN"
+ },
+ {
+ 0x2E, 0x05,
+ SenseDevTypes044,
+ "COPY TARGET DEVICE DATA OVERRUN"
+ },
+ {
+ 0x2F, 0x00,
+ SenseDevTypes001,
+ "COMMANDS CLEARED BY ANOTHER INITIATOR"
+ },
+ {
+ 0x30, 0x00,
+ SenseDevTypes034,
+ "INCOMPATIBLE MEDIUM INSTALLED"
+ },
+ {
+ 0x30, 0x01,
+ SenseDevTypes029,
+ "CANNOT READ MEDIUM - UNKNOWN FORMAT"
+ },
+ {
+ 0x30, 0x02,
+ SenseDevTypes029,
+ "CANNOT READ MEDIUM - INCOMPATIBLE FORMAT"
+ },
+ {
+ 0x30, 0x03,
+ SenseDevTypes045,
+ "CLEANING CARTRIDGE INSTALLED"
+ },
+ {
+ 0x30, 0x04,
+ SenseDevTypes029,
+ "CANNOT WRITE MEDIUM - UNKNOWN FORMAT"
+ },
+ {
+ 0x30, 0x05,
+ SenseDevTypes029,
+ "CANNOT WRITE MEDIUM - INCOMPATIBLE FORMAT"
+ },
+ {
+ 0x30, 0x06,
+ SenseDevTypes017,
+ "CANNOT FORMAT MEDIUM - INCOMPATIBLE MEDIUM"
+ },
+ {
+ 0x30, 0x07,
+ SenseDevTypes006,
+ "CLEANING FAILURE"
+ },
+ {
+ 0x30, 0x08,
+ SenseDevTypes005,
+ "CANNOT WRITE - APPLICATION CODE MISMATCH"
+ },
+ {
+ 0x30, 0x09,
+ SenseDevTypes005,
+ "CURRENT SESSION NOT FIXATED FOR APPEND"
+ },
+ {
+ 0x31, 0x00,
+ SenseDevTypes029,
+ "MEDIUM FORMAT CORRUPTED"
+ },
+ {
+ 0x31, 0x01,
+ SenseDevTypes046,
+ "FORMAT COMMAND FAILED"
+ },
+ {
+ 0x32, 0x00,
+ SenseDevTypes007,
+ "NO DEFECT SPARE LOCATION AVAILABLE"
+ },
+ {
+ 0x32, 0x01,
+ SenseDevTypes007,
+ "DEFECT LIST UPDATE FAILURE"
+ },
+ {
+ 0x33, 0x00,
+ SenseDevTypes002,
+ "TAPE LENGTH ERROR"
+ },
+ {
+ 0x34, 0x00,
+ SenseDevTypes001,
+ "ENCLOSURE FAILURE"
+ },
+ {
+ 0x35, 0x00,
+ SenseDevTypes001,
+ "ENCLOSURE SERVICES FAILURE"
+ },
+ {
+ 0x35, 0x01,
+ SenseDevTypes001,
+ "UNSUPPORTED ENCLOSURE FUNCTION"
+ },
+ {
+ 0x35, 0x02,
+ SenseDevTypes001,
+ "ENCLOSURE SERVICES UNAVAILABLE"
+ },
+ {
+ 0x35, 0x03,
+ SenseDevTypes001,
+ "ENCLOSURE SERVICES TRANSFER FAILURE"
+ },
+ {
+ 0x35, 0x04,
+ SenseDevTypes001,
+ "ENCLOSURE SERVICES TRANSFER REFUSED"
+ },
+ {
+ 0x36, 0x00,
+ SenseDevTypes047,
+ "RIBBON, INK, OR TONER FAILURE"
+ },
+ {
+ 0x37, 0x00,
+ SenseDevTypes013,
+ "ROUNDED PARAMETER"
+ },
+ {
+ 0x38, 0x00,
+ SenseDevTypes043,
+ "EVENT STATUS NOTIFICATION"
+ },
+ {
+ 0x38, 0x02,
+ SenseDevTypes043,
+ "ESN - POWER MANAGEMENT CLASS EVENT"
+ },
+ {
+ 0x38, 0x04,
+ SenseDevTypes043,
+ "ESN - MEDIA CLASS EVENT"
+ },
+ {
+ 0x38, 0x06,
+ SenseDevTypes043,
+ "ESN - DEVICE BUSY CLASS EVENT"
+ },
+ {
+ 0x39, 0x00,
+ SenseDevTypes040,
+ "SAVING PARAMETERS NOT SUPPORTED"
+ },
+ {
+ 0x3A, 0x00,
+ SenseDevTypes014,
+ "MEDIUM NOT PRESENT"
+ },
+ {
+ 0x3A, 0x01,
+ SenseDevTypes034,
+ "MEDIUM NOT PRESENT - TRAY CLOSED"
+ },
+ {
+ 0x3A, 0x02,
+ SenseDevTypes034,
+ "MEDIUM NOT PRESENT - TRAY OPEN"
+ },
+ {
+ 0x3A, 0x03,
+ SenseDevTypes039,
+ "MEDIUM NOT PRESENT - LOADABLE"
+ },
+ {
+ 0x3A, 0x04,
+ SenseDevTypes039,
+ "MEDIUM NOT PRESENT - MEDIUM AUXILIARY MEMORY ACCESSIBLE"
+ },
+ {
+ 0x3B, 0x00,
+ SenseDevTypes048,
+ "SEQUENTIAL POSITIONING ERROR"
+ },
+ {
+ 0x3B, 0x01,
+ SenseDevTypes002,
+ "TAPE POSITION ERROR AT BEGINNING-OF-MEDIUM"
+ },
+ {
+ 0x3B, 0x02,
+ SenseDevTypes002,
+ "TAPE POSITION ERROR AT END-OF-MEDIUM"
+ },
+ {
+ 0x3B, 0x03,
+ SenseDevTypes047,
+ "TAPE OR ELECTRONIC VERTICAL FORMS UNIT NOT READY"
+ },
+ {
+ 0x3B, 0x04,
+ SenseDevTypes047,
+ "SLEW FAILURE"
+ },
+ {
+ 0x3B, 0x05,
+ SenseDevTypes047,
+ "PAPER JAM"
+ },
+ {
+ 0x3B, 0x06,
+ SenseDevTypes047,
+ "FAILED TO SENSE TOP-OF-FORM"
+ },
+ {
+ 0x3B, 0x07,
+ SenseDevTypes047,
+ "FAILED TO SENSE BOTTOM-OF-FORM"
+ },
+ {
+ 0x3B, 0x08,
+ SenseDevTypes002,
+ "REPOSITION ERROR"
+ },
+ {
+ 0x3B, 0x09,
+ SenseDevTypes042,
+ "READ PAST END OF MEDIUM"
+ },
+ {
+ 0x3B, 0x0A,
+ SenseDevTypes042,
+ "READ PAST BEGINNING OF MEDIUM"
+ },
+ {
+ 0x3B, 0x0B,
+ SenseDevTypes042,
+ "POSITION PAST END OF MEDIUM"
+ },
+ {
+ 0x3B, 0x0C,
+ SenseDevTypes003,
+ "POSITION PAST BEGINNING OF MEDIUM"
+ },
+ {
+ 0x3B, 0x0D,
+ SenseDevTypes034,
+ "MEDIUM DESTINATION ELEMENT FULL"
+ },
+ {
+ 0x3B, 0x0E,
+ SenseDevTypes034,
+ "MEDIUM SOURCE ELEMENT EMPTY"
+ },
+ {
+ 0x3B, 0x0F,
+ SenseDevTypes005,
+ "END OF MEDIUM REACHED"
+ },
+ {
+ 0x3B, 0x11,
+ SenseDevTypes034,
+ "MEDIUM MAGAZINE NOT ACCESSIBLE"
+ },
+ {
+ 0x3B, 0x12,
+ SenseDevTypes034,
+ "MEDIUM MAGAZINE REMOVED"
+ },
+ {
+ 0x3B, 0x13,
+ SenseDevTypes034,
+ "MEDIUM MAGAZINE INSERTED"
+ },
+ {
+ 0x3B, 0x14,
+ SenseDevTypes034,
+ "MEDIUM MAGAZINE LOCKED"
+ },
+ {
+ 0x3B, 0x15,
+ SenseDevTypes034,
+ "MEDIUM MAGAZINE UNLOCKED"
+ },
+ {
+ 0x3B, 0x16,
+ SenseDevTypes005,
+ "MECHANICAL POSITIONING OR CHANGER ERROR"
+ },
+ {
+ 0x3D, 0x00,
+ SenseDevTypes036,
+ "INVALID BITS IN IDENTIFY MESSAGE"
+ },
+ {
+ 0x3E, 0x00,
+ SenseDevTypes001,
+ "LOGICAL UNIT HAS NOT SELF-CONFIGURED YET"
+ },
+ {
+ 0x3E, 0x01,
+ SenseDevTypes001,
+ "LOGICAL UNIT FAILURE"
+ },
+ {
+ 0x3E, 0x02,
+ SenseDevTypes001,
+ "TIMEOUT ON LOGICAL UNIT"
+ },
+ {
+ 0x3E, 0x03,
+ SenseDevTypes001,
+ "LOGICAL UNIT FAILED SELF-TEST"
+ },
+ {
+ 0x3E, 0x04,
+ SenseDevTypes001,
+ "LOGICAL UNIT UNABLE TO UPDATE SELF-TEST LOG"
+ },
+ {
+ 0x3F, 0x00,
+ SenseDevTypes001,
+ "TARGET OPERATING CONDITIONS HAVE CHANGED"
+ },
+ {
+ 0x3F, 0x01,
+ SenseDevTypes001,
+ "MICROCODE HAS BEEN CHANGED"
+ },
+ {
+ 0x3F, 0x02,
+ SenseDevTypes049,
+ "CHANGED OPERATING DEFINITION"
+ },
+ {
+ 0x3F, 0x03,
+ SenseDevTypes001,
+ "INQUIRY DATA HAS CHANGED"
+ },
+ {
+ 0x3F, 0x04,
+ SenseDevTypes050,
+ "COMPONENT DEVICE ATTACHED"
+ },
+ {
+ 0x3F, 0x05,
+ SenseDevTypes050,
+ "DEVICE IDENTIFIER CHANGED"
+ },
+ {
+ 0x3F, 0x06,
+ SenseDevTypes051,
+ "REDUNDANCY GROUP CREATED OR MODIFIED"
+ },
+ {
+ 0x3F, 0x07,
+ SenseDevTypes051,
+ "REDUNDANCY GROUP DELETED"
+ },
+ {
+ 0x3F, 0x08,
+ SenseDevTypes051,
+ "SPARE CREATED OR MODIFIED"
+ },
+ {
+ 0x3F, 0x09,
+ SenseDevTypes051,
+ "SPARE DELETED"
+ },
+ {
+ 0x3F, 0x0A,
+ SenseDevTypes050,
+ "VOLUME SET CREATED OR MODIFIED"
+ },
+ {
+ 0x3F, 0x0B,
+ SenseDevTypes050,
+ "VOLUME SET DELETED"
+ },
+ {
+ 0x3F, 0x0C,
+ SenseDevTypes050,
+ "VOLUME SET DEASSIGNED"
+ },
+ {
+ 0x3F, 0x0D,
+ SenseDevTypes050,
+ "VOLUME SET REASSIGNED"
+ },
+ {
+ 0x3F, 0x0E,
+ SenseDevTypes041,
+ "REPORTED LUNS DATA HAS CHANGED"
+ },
+ {
+ 0x3F, 0x0F,
+ SenseDevTypes001,
+ "ECHO BUFFER OVERWRITTEN"
+ },
+ {
+ 0x3F, 0x10,
+ SenseDevTypes039,
+ "MEDIUM LOADABLE"
+ },
+ {
+ 0x3F, 0x11,
+ SenseDevTypes039,
+ "MEDIUM AUXILIARY MEMORY ACCESSIBLE"
+ },
+ {
+ 0x40, 0x00,
+ SenseDevTypes035,
+ "RAM FAILURE (SHOULD USE 40 NN)"
+ },
+ {
+ 0x40, 0xFF,
+ SenseDevTypes001,
+ "DIAGNOSTIC FAILURE ON COMPONENT NN (80H-FFH)"
+ },
+ {
+ 0x41, 0x00,
+ SenseDevTypes035,
+ "DATA PATH FAILURE (SHOULD USE 40 NN)"
+ },
+ {
+ 0x42, 0x00,
+ SenseDevTypes035,
+ "POWER-ON OR SELF-TEST FAILURE (SHOULD USE 40 NN)"
+ },
+ {
+ 0x43, 0x00,
+ SenseDevTypes001,
+ "MESSAGE ERROR"
+ },
+ {
+ 0x44, 0x00,
+ SenseDevTypes001,
+ "INTERNAL TARGET FAILURE"
+ },
+ {
+ 0x45, 0x00,
+ SenseDevTypes001,
+ "SELECT OR RESELECT FAILURE"
+ },
+ {
+ 0x46, 0x00,
+ SenseDevTypes049,
+ "UNSUCCESSFUL SOFT RESET"
+ },
+ {
+ 0x47, 0x00,
+ SenseDevTypes001,
+ "SCSI PARITY ERROR"
+ },
+ {
+ 0x47, 0x01,
+ SenseDevTypes001,
+ "DATA PHASE CRC ERROR DETECTED"
+ },
+ {
+ 0x47, 0x02,
+ SenseDevTypes001,
+ "SCSI PARITY ERROR DETECTED DURING ST DATA PHASE"
+ },
+ {
+ 0x47, 0x03,
+ SenseDevTypes001,
+ "INFORMATION UNIT CRC ERROR DETECTED"
+ },
+ {
+ 0x47, 0x04,
+ SenseDevTypes001,
+ "ASYNCHRONOUS INFORMATION PROTECTION ERROR DETECTED"
+ },
+ {
+ 0x48, 0x00,
+ SenseDevTypes001,
+ "INITIATOR DETECTED ERROR MESSAGE RECEIVED"
+ },
+ {
+ 0x49, 0x00,
+ SenseDevTypes001,
+ "INVALID MESSAGE ERROR"
+ },
+ {
+ 0x4A, 0x00,
+ SenseDevTypes001,
+ "COMMAND PHASE ERROR"
+ },
+ {
+ 0x4B, 0x00,
+ SenseDevTypes001,
+ "DATA PHASE ERROR"
+ },
+ {
+ 0x4C, 0x00,
+ SenseDevTypes001,
+ "LOGICAL UNIT FAILED SELF-CONFIGURATION"
+ },
+ {
+ 0x4D, 0xFF,
+ SenseDevTypes001,
+ "TAGGED OVERLAPPED COMMANDS (NN = QUEUE TAG)"
+ },
+ {
+ 0x4E, 0x00,
+ SenseDevTypes001,
+ "OVERLAPPED COMMANDS ATTEMPTED"
+ },
+ {
+ 0x50, 0x00,
+ SenseDevTypes002,
+ "WRITE APPEND ERROR"
+ },
+ {
+ 0x50, 0x01,
+ SenseDevTypes002,
+ "WRITE APPEND POSITION ERROR"
+ },
+ {
+ 0x50, 0x02,
+ SenseDevTypes002,
+ "POSITION ERROR RELATED TO TIMING"
+ },
+ {
+ 0x51, 0x00,
+ SenseDevTypes052,
+ "ERASE FAILURE"
+ },
+ {
+ 0x52, 0x00,
+ SenseDevTypes002,
+ "CARTRIDGE FAULT"
+ },
+ {
+ 0x53, 0x00,
+ SenseDevTypes014,
+ "MEDIA LOAD OR EJECT FAILED"
+ },
+ {
+ 0x53, 0x01,
+ SenseDevTypes002,
+ "UNLOAD TAPE FAILURE"
+ },
+ {
+ 0x53, 0x02,
+ SenseDevTypes034,
+ "MEDIUM REMOVAL PREVENTED"
+ },
+ {
+ 0x54, 0x00,
+ SenseDevTypes053,
+ "SCSI TO HOST SYSTEM INTERFACE FAILURE"
+ },
+ {
+ 0x55, 0x00,
+ SenseDevTypes053,
+ "SYSTEM RESOURCE FAILURE"
+ },
+ {
+ 0x55, 0x01,
+ SenseDevTypes033,
+ "SYSTEM BUFFER FULL"
+ },
+ {
+ 0x55, 0x02,
+ SenseDevTypes054,
+ "INSUFFICIENT RESERVATION RESOURCES"
+ },
+ {
+ 0x55, 0x03,
+ SenseDevTypes041,
+ "INSUFFICIENT RESOURCES"
+ },
+ {
+ 0x55, 0x04,
+ SenseDevTypes055,
+ "INSUFFICIENT REGISTRATION RESOURCES"
+ },
+ {
+ 0x55, 0x05,
+ SenseDevTypes012,
+ "access controls code 4 (99-314) [proposed]"
+ },
+ {
+ 0x55, 0x06,
+ SenseDevTypes012,
+ "auxiliary memory code 1 (99-148) [proposed]"
+ },
+ {
+ 0x57, 0x00,
+ SenseDevTypes005,
+ "UNABLE TO RECOVER TABLE-OF-CONTENTS"
+ },
+ {
+ 0x58, 0x00,
+ SenseDevTypes056,
+ "GENERATION DOES NOT EXIST"
+ },
+ {
+ 0x59, 0x00,
+ SenseDevTypes056,
+ "UPDATED BLOCK READ"
+ },
+ {
+ 0x5A, 0x00,
+ SenseDevTypes057,
+ "OPERATOR REQUEST OR STATE CHANGE INPUT"
+ },
+ {
+ 0x5A, 0x01,
+ SenseDevTypes034,
+ "OPERATOR MEDIUM REMOVAL REQUEST"
+ },
+ {
+ 0x5A, 0x02,
+ SenseDevTypes058,
+ "OPERATOR SELECTED WRITE PROTECT"
+ },
+ {
+ 0x5A, 0x03,
+ SenseDevTypes058,
+ "OPERATOR SELECTED WRITE PERMIT"
+ },
+ {
+ 0x5B, 0x00,
+ SenseDevTypes059,
+ "LOG EXCEPTION"
+ },
+ {
+ 0x5B, 0x01,
+ SenseDevTypes059,
+ "THRESHOLD CONDITION MET"
+ },
+ {
+ 0x5B, 0x02,
+ SenseDevTypes059,
+ "LOG COUNTER AT MAXIMUM"
+ },
+ {
+ 0x5B, 0x03,
+ SenseDevTypes059,
+ "LOG LIST CODES EXHAUSTED"
+ },
+ {
+ 0x5C, 0x00,
+ SenseDevTypes060,
+ "RPL STATUS CHANGE"
+ },
+ {
+ 0x5C, 0x01,
+ SenseDevTypes060,
+ "SPINDLES SYNCHRONIZED"
+ },
+ {
+ 0x5C, 0x02,
+ SenseDevTypes060,
+ "SPINDLES NOT SYNCHRONIZED"
+ },
+ {
+ 0x5D, 0x00,
+ SenseDevTypes001,
+ "FAILURE PREDICTION THRESHOLD EXCEEDED"
+ },
+ {
+ 0x5D, 0x01,
+ SenseDevTypes061,
+ "MEDIA FAILURE PREDICTION THRESHOLD EXCEEDED"
+ },
+ {
+ 0x5D, 0x02,
+ SenseDevTypes005,
+ "LOGICAL UNIT FAILURE PREDICTION THRESHOLD EXCEEDED"
+ },
+ {
+ 0x5D, 0x10,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x11,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x12,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x13,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x14,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x15,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x16,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x17,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x18,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x19,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x1A,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x1B,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x1C,
+ SenseDevTypes062,
+ "HARDWARE IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0x20,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x21,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x22,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x23,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x24,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x25,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x26,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x27,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x28,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x29,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x2A,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x2B,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x2C,
+ SenseDevTypes062,
+ "CONTROLLER IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0x30,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x31,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x32,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x33,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x34,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x35,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x36,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x37,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x38,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x39,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x3A,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x3B,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x3C,
+ SenseDevTypes062,
+ "DATA CHANNEL IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0x40,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x41,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x42,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x43,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x44,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x45,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x46,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x47,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x48,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x49,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x4A,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x4B,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x4C,
+ SenseDevTypes062,
+ "SERVO IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0x50,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x51,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x52,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x53,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x54,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x55,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x56,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x57,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x58,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x59,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x5A,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x5B,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x5C,
+ SenseDevTypes062,
+ "SPINDLE IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0x60,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE GENERAL HARD DRIVE FAILURE"
+ },
+ {
+ 0x5D, 0x61,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE DRIVE ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x62,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE DATA ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x63,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE SEEK ERROR RATE TOO HIGH"
+ },
+ {
+ 0x5D, 0x64,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE TOO MANY BLOCK REASSIGNS"
+ },
+ {
+ 0x5D, 0x65,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE ACCESS TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x66,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE START UNIT TIMES TOO HIGH"
+ },
+ {
+ 0x5D, 0x67,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE CHANNEL PARAMETRICS"
+ },
+ {
+ 0x5D, 0x68,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE CONTROLLER DETECTED"
+ },
+ {
+ 0x5D, 0x69,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE THROUGHPUT PERFORMANCE"
+ },
+ {
+ 0x5D, 0x6A,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE SEEK TIME PERFORMANCE"
+ },
+ {
+ 0x5D, 0x6B,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE SPIN-UP RETRY COUNT"
+ },
+ {
+ 0x5D, 0x6C,
+ SenseDevTypes062,
+ "FIRMWARE IMPENDING FAILURE DRIVE CALIBRATION RETRY COUNT"
+ },
+ {
+ 0x5D, 0xFF,
+ SenseDevTypes001,
+ "FAILURE PREDICTION THRESHOLD EXCEEDED (FALSE)"
+ },
+ {
+ 0x5E, 0x00,
+ SenseDevTypes044,
+ "LOW POWER CONDITION ON"
+ },
+ {
+ 0x5E, 0x01,
+ SenseDevTypes044,
+ "IDLE CONDITION ACTIVATED BY TIMER"
+ },
+ {
+ 0x5E, 0x02,
+ SenseDevTypes044,
+ "STANDBY CONDITION ACTIVATED BY TIMER"
+ },
+ {
+ 0x5E, 0x03,
+ SenseDevTypes044,
+ "IDLE CONDITION ACTIVATED BY COMMAND"
+ },
+ {
+ 0x5E, 0x04,
+ SenseDevTypes044,
+ "STANDBY CONDITION ACTIVATED BY COMMAND"
+ },
+ {
+ 0x5E, 0x41,
+ SenseDevTypes043,
+ "POWER STATE CHANGE TO ACTIVE"
+ },
+ {
+ 0x5E, 0x42,
+ SenseDevTypes043,
+ "POWER STATE CHANGE TO IDLE"
+ },
+ {
+ 0x5E, 0x43,
+ SenseDevTypes043,
+ "POWER STATE CHANGE TO STANDBY"
+ },
+ {
+ 0x5E, 0x45,
+ SenseDevTypes043,
+ "POWER STATE CHANGE TO SLEEP"
+ },
+ {
+ 0x5E, 0x47,
+ SenseDevTypes063,
+ "POWER STATE CHANGE TO DEVICE CONTROL"
+ },
+ {
+ 0x60, 0x00,
+ SenseDevTypes042,
+ "LAMP FAILURE"
+ },
+ {
+ 0x61, 0x00,
+ SenseDevTypes042,
+ "VIDEO ACQUISITION ERROR"
+ },
+ {
+ 0x61, 0x01,
+ SenseDevTypes042,
+ "UNABLE TO ACQUIRE VIDEO"
+ },
+ {
+ 0x61, 0x02,
+ SenseDevTypes042,
+ "OUT OF FOCUS"
+ },
+ {
+ 0x62, 0x00,
+ SenseDevTypes042,
+ "SCAN HEAD POSITIONING ERROR"
+ },
+ {
+ 0x63, 0x00,
+ SenseDevTypes005,
+ "END OF USER AREA ENCOUNTERED ON THIS TRACK"
+ },
+ {
+ 0x63, 0x01,
+ SenseDevTypes005,
+ "PACKET DOES NOT FIT IN AVAILABLE SPACE"
+ },
+ {
+ 0x64, 0x00,
+ SenseDevTypes005,
+ "ILLEGAL MODE FOR THIS TRACK"
+ },
+ {
+ 0x64, 0x01,
+ SenseDevTypes005,
+ "INVALID PACKET SIZE"
+ },
+ {
+ 0x65, 0x00,
+ SenseDevTypes001,
+ "VOLTAGE FAULT"
+ },
+ {
+ 0x66, 0x00,
+ SenseDevTypes042,
+ "AUTOMATIC DOCUMENT FEEDER COVER UP"
+ },
+ {
+ 0x66, 0x01,
+ SenseDevTypes042,
+ "AUTOMATIC DOCUMENT FEEDER LIFT UP"
+ },
+ {
+ 0x66, 0x02,
+ SenseDevTypes042,
+ "DOCUMENT JAM IN AUTOMATIC DOCUMENT FEEDER"
+ },
+ {
+ 0x66, 0x03,
+ SenseDevTypes042,
+ "DOCUMENT MISS FEED AUTOMATIC IN DOCUMENT FEEDER"
+ },
+ {
+ 0x67, 0x00,
+ SenseDevTypes064,
+ "CONFIGURATION FAILURE"
+ },
+ {
+ 0x67, 0x01,
+ SenseDevTypes064,
+ "CONFIGURATION OF INCAPABLE LOGICAL UNITS FAILED"
+ },
+ {
+ 0x67, 0x02,
+ SenseDevTypes064,
+ "ADD LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x03,
+ SenseDevTypes064,
+ "MODIFICATION OF LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x04,
+ SenseDevTypes064,
+ "EXCHANGE OF LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x05,
+ SenseDevTypes064,
+ "REMOVE OF LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x06,
+ SenseDevTypes064,
+ "ATTACHMENT OF LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x07,
+ SenseDevTypes064,
+ "CREATION OF LOGICAL UNIT FAILED"
+ },
+ {
+ 0x67, 0x08,
+ SenseDevTypes064,
+ "ASSIGN FAILURE OCCURRED"
+ },
+ {
+ 0x67, 0x09,
+ SenseDevTypes064,
+ "MULTIPLY ASSIGNED LOGICAL UNIT"
+ },
+ {
+ 0x68, 0x00,
+ SenseDevTypes064,
+ "LOGICAL UNIT NOT CONFIGURED"
+ },
+ {
+ 0x69, 0x00,
+ SenseDevTypes064,
+ "DATA LOSS ON LOGICAL UNIT"
+ },
+ {
+ 0x69, 0x01,
+ SenseDevTypes064,
+ "MULTIPLE LOGICAL UNIT FAILURES"
+ },
+ {
+ 0x69, 0x02,
+ SenseDevTypes064,
+ "PARITY/DATA MISMATCH"
+ },
+ {
+ 0x6A, 0x00,
+ SenseDevTypes064,
+ "INFORMATIONAL, REFER TO LOG"
+ },
+ {
+ 0x6B, 0x00,
+ SenseDevTypes064,
+ "STATE CHANGE HAS OCCURRED"
+ },
+ {
+ 0x6B, 0x01,
+ SenseDevTypes064,
+ "REDUNDANCY LEVEL GOT BETTER"
+ },
+ {
+ 0x6B, 0x02,
+ SenseDevTypes064,
+ "REDUNDANCY LEVEL GOT WORSE"
+ },
+ {
+ 0x6C, 0x00,
+ SenseDevTypes064,
+ "REBUILD FAILURE OCCURRED"
+ },
+ {
+ 0x6D, 0x00,
+ SenseDevTypes064,
+ "RECALCULATE FAILURE OCCURRED"
+ },
+ {
+ 0x6E, 0x00,
+ SenseDevTypes064,
+ "COMMAND TO LOGICAL UNIT FAILED"
+ },
+ {
+ 0x6F, 0x00,
+ SenseDevTypes005,
+ "COPY PROTECTION KEY EXCHANGE FAILURE - AUTHENTICATION FAILURE"
+ },
+ {
+ 0x6F, 0x01,
+ SenseDevTypes005,
+ "COPY PROTECTION KEY EXCHANGE FAILURE - KEY NOT PRESENT"
+ },
+ {
+ 0x6F, 0x02,
+ SenseDevTypes005,
+ "COPY PROTECTION KEY EXCHANGE FAILURE - KEY NOT ESTABLISHED"
+ },
+ {
+ 0x6F, 0x03,
+ SenseDevTypes005,
+ "READ OF SCRAMBLED SECTOR WITHOUT AUTHENTICATION"
+ },
+ {
+ 0x6F, 0x04,
+ SenseDevTypes005,
+ "MEDIA REGION CODE IS MISMATCHED TO LOGICAL UNIT REGION"
+ },
+ {
+ 0x6F, 0x05,
+ SenseDevTypes005,
+ "DRIVE REGION MUST BE PERMANENT/REGION RESET COUNT ERROR"
+ },
+ {
+ 0x70, 0xFF,
+ SenseDevTypes002,
+ "DECOMPRESSION EXCEPTION SHORT ALGORITHM ID OF NN"
+ },
+ {
+ 0x71, 0x00,
+ SenseDevTypes002,
+ "DECOMPRESSION EXCEPTION LONG ALGORITHM ID"
+ },
+ {
+ 0x72, 0x00,
+ SenseDevTypes005,
+ "SESSION FIXATION ERROR"
+ },
+ {
+ 0x72, 0x01,
+ SenseDevTypes005,
+ "SESSION FIXATION ERROR WRITING LEAD-IN"
+ },
+ {
+ 0x72, 0x02,
+ SenseDevTypes005,
+ "SESSION FIXATION ERROR WRITING LEAD-OUT"
+ },
+ {
+ 0x72, 0x03,
+ SenseDevTypes005,
+ "SESSION FIXATION ERROR - INCOMPLETE TRACK IN SESSION"
+ },
+ {
+ 0x72, 0x04,
+ SenseDevTypes005,
+ "EMPTY OR PARTIALLY WRITTEN RESERVED TRACK"
+ },
+ {
+ 0x72, 0x05,
+ SenseDevTypes005,
+ "NO MORE TRACK RESERVATIONS ALLOWED"
+ },
+ {
+ 0x73, 0x00,
+ SenseDevTypes005,
+ "CD CONTROL ERROR"
+ },
+ {
+ 0x73, 0x01,
+ SenseDevTypes005,
+ "POWER CALIBRATION AREA ALMOST FULL"
+ },
+ {
+ 0x73, 0x02,
+ SenseDevTypes005,
+ "POWER CALIBRATION AREA IS FULL"
+ },
+ {
+ 0x73, 0x03,
+ SenseDevTypes005,
+ "POWER CALIBRATION AREA ERROR"
+ },
+ {
+ 0x73, 0x04,
+ SenseDevTypes005,
+ "PROGRAM MEMORY AREA UPDATE FAILURE"
+ },
+ {
+ 0x73, 0x05,
+ SenseDevTypes005,
+ "PROGRAM MEMORY AREA IS FULL"
+ },
+ {
+ 0x73, 0x06,
+ SenseDevTypes005,
+ "RMA/PMA IS FULL"
+ },
+};
+
+static int ASCQ_TableSize = 463;
+
+
+#endif
--- /dev/null
+#!/bin/sh
+#
+# ascq_tbl.sh - Translate SCSI t10.org's "asc-num.txt" file of
+# SCSI Additional Sense Code & Qualifiers (ASC/ASCQ's)
+# into something useful in C, creating "ascq_tbl.c" file.
+#
+#*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*#
+
+PREF_INFILE="t10.org/asc-num.txt" # From SCSI t10.org
+PREF_OUTFILE="ascq_tbl.c"
+
+#*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*#
+
+xlate_ascq() {
+ cat | awk '
+ BEGIN {
+ DQ = "\042";
+ OUTFILE = "'"${PREF_OUTFILE}"'";
+ TRUE = 1;
+ FALSE = 0;
+ #debug = TRUE;
+
+ # read and discard all lines up to and including the one that begins
+ # with the "magic token" of "------- -------------- ---"...
+ headers_gone = FALSE;
+ while (!headers_gone) {
+ if (getline <= 0)
+ exit 1;
+ header_line[++hdrs] = $0;
+ if (debug)
+ printf("header_line[%d] = :%s:\n", hdrs, $0);
+ if ($0 ~ /^------- -------------- ---/) {
+ headers_gone = TRUE;
+ }
+ }
+ outcount = 0;
+ }
+
+ (NF > 1) {
+ ++outcount;
+ if (debug)
+ printf( "DBG: %s\n", $0 );
+ ASC[outcount] = substr($0,1,2);
+ ASCQ[outcount] = substr($0,5,2);
+ devtypes = substr($0,10,14);
+ gsub(/ /, ".", devtypes);
+ DESCRIP[outcount] = substr($0,26);
+
+ if (!(devtypes in DevTypesVoodoo)) {
+ DevTypesVoodoo[devtypes] = ++voodoo;
+ DevTypesIdx[voodoo] = devtypes;
+ }
+ DEVTYPES[outcount] = DevTypesVoodoo[devtypes];
+
+ # Handle 0xNN exception stuff...
+ if (ASCQ[outcount] == "NN" || ASCQ[outcount] == "nn")
+ ASCQ[outcount] = "FF";
+ }
+
+ END {
+ printf("#ifndef SCSI_ASCQ_TBL_C_INCLUDED\n") > OUTFILE;
+ printf("#define SCSI_ASCQ_TBL_C_INCLUDED\n") >> OUTFILE;
+
+ printf("\n/* AuToMaGiCaLlY generated from: %s'"${FIN}"'%s\n", DQ, DQ) >> OUTFILE;
+ printf(" *******************************************************************************\n") >> OUTFILE;
+ for (i=1; i<=hdrs; i++) {
+ printf(" * %s\n", header_line[i]) >> OUTFILE;
+ }
+ printf(" */\n") >> OUTFILE;
+
+ printf("\n") >> OUTFILE;
+ for (i=1; i<=voodoo; i++) {
+ printf("static char SenseDevTypes%03d[] = %s%s%s;\n", i, DQ, DevTypesIdx[i], DQ) >> OUTFILE;
+ }
+
+ printf("\nstatic ASCQ_Table_t ASCQ_Table[] = {\n") >> OUTFILE;
+ for (i=1; i<=outcount; i++) {
+ printf(" {\n") >> OUTFILE;
+ printf(" 0x%s, 0x%s,\n", ASC[i], ASCQ[i]) >> OUTFILE;
+ printf(" SenseDevTypes%03d,\n", DEVTYPES[i]) >> OUTFILE;
+ printf(" %s%s%s\n", DQ, DESCRIP[i], DQ) >> OUTFILE;
+ printf(" },\n") >> OUTFILE;
+ }
+ printf( "};\n\n" ) >> OUTFILE;
+
+ printf( "static int ASCQ_TableSize = %d;\n\n", outcount ) >> OUTFILE;
+ printf( "Total of %d ASC/ASCQ records generated\n", outcount );
+ printf("\n#endif\n") >> OUTFILE;
+ close(OUTFILE);
+ }'
+ return
+}
+
+#*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*#
+
+# main()
+if [ $# -lt 1 ]; then
+ echo "INFO: No input filename supplied - using: $PREF_INFILE" >&2
+ FIN=$PREF_INFILE
+else
+ FIN="$1"
+ if [ "$FIN" != "$PREF_INFILE" ]; then
+ echo "INFO: Ok, I'll try chewing on '$FIN' for SCSI ASC/ASCQ combos..." >&2
+ fi
+ shift
+fi
+
+cat "$FIN" | xlate_ascq
+exit 0
--- /dev/null
+/*
+ * linux/drivers/message/fusion/isense.c
+ * Little linux driver / shim that interfaces with the Fusion MPT
+ * Linux base driver to provide english readable strings in SCSI
+ * Error Report logging output. This module implements SCSI-3
+ * Opcode lookup and a sorted table of SCSI-3 ASC/ASCQ strings.
+ *
+ * Copyright (c) 1991-2004 Steven J. Ralston
+ * Written By: Steven J. Ralston
+ * (yes I wrote some of the orig. code back in 1991!)
+ * (mailto:sjralston1@netscape.net)
+ * (mailto:mpt_linux_developer@lsil.com)
+ *
+ * $Id: isense.c,v 1.33 2002/02/27 18:44:19 sralston Exp $
+ */
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+/*
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; version 2 of the License.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ NO WARRANTY
+ THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ solely responsible for determining the appropriateness of using and
+ distributing the Program and assumes all risks associated with its
+ exercise of rights under this Agreement, including but not limited to
+ the risks and costs of program errors, damage to or loss of data,
+ programs or equipment, and unavailability or interruption of operations.
+
+ DISCLAIMER OF LIABILITY
+ NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+
+#include <linux/version.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <asm/io.h>
+
+#define MODULEAUTHOR "Steven J. Ralston"
+#define COPYRIGHT "Copyright (c) 2001-2004 " MODULEAUTHOR
+#include "mptbase.h"
+
+#include "isense.h"
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+/*
+ * Private data...
+ */
+
+/*
+ * YIKES! I don't usually #include C source files, but...
+ * The following #includes pull in our needed ASCQ_Table[] array,
+ * ASCQ_TableSize integer, and ScsiOpcodeString[] array!
+ */
+#include "ascq_tbl.c"
+#include "scsiops.c"
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+#define my_NAME "SCSI-3 Opcodes & ASC/ASCQ Strings"
+#define my_VERSION MPT_LINUX_VERSION_COMMON
+#define MYNAM "isense"
+
+MODULE_AUTHOR(MODULEAUTHOR);
+MODULE_DESCRIPTION(my_NAME);
+MODULE_LICENSE("GPL");
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+int __init isense_init(void)
+{
+ show_mptmod_ver(my_NAME, my_VERSION);
+
+ /*
+ * Install our handler
+ */
+ if (mpt_register_ascqops_strings(&ASCQ_Table[0], ASCQ_TableSz, ScsiOpcodeString) != 1)
+ {
+ printk(KERN_ERR MYNAM ": ERROR: Can't register with Fusion MPT base driver!\n");
+ return -EBUSY;
+ }
+ printk(KERN_INFO MYNAM ": Registered SCSI-3 Opcodes & ASC/ASCQ Strings\n");
+ return 0;
+}
+
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+static void isense_exit(void)
+{
+#ifdef MODULE
+ mpt_deregister_ascqops_strings();
+#endif
+ printk(KERN_INFO MYNAM ": Deregistered SCSI-3 Opcodes & ASC/ASCQ Strings\n");
+}
+
+module_init(isense_init);
+module_exit(isense_exit);
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+
--- /dev/null
+#ifndef ISENSE_H_INCLUDED
+#define ISENSE_H_INCLUDED
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+
+#ifdef __KERNEL__
+#include <linux/types.h> /* needed for u8, etc. */
+#include <linux/string.h> /* needed for strcat */
+#include <linux/kernel.h> /* needed for sprintf */
+#else
+ #ifndef U_STUFF_DEFINED
+ #define U_STUFF_DEFINED
+ typedef unsigned char u8;
+ typedef unsigned short u16;
+ typedef unsigned int u32;
+ #endif
+#endif
+
+#include "scsi3.h" /* needed for all things SCSI */
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+/*
+ * Defines and typedefs...
+ */
+
+#ifdef __KERNEL__
+#define PrintF(x) printk x
+#else
+#define PrintF(x) printf x
+#endif
+
+#ifndef TRUE
+#define TRUE 1
+#define FALSE 0
+#endif
+
+#define RETRY_STATUS ((int) 1)
+#define PUT_STATUS ((int) 0)
+
+/*
+ * A generic structure to hold info about an IO request that caused
+ * a Request Sense to be performed, and the resulting Sense Data.
+ */
+typedef struct IO_Info
+{
+ char *DevIDStr; /* String of chars which identifies the device. */
+ u8 *cdbPtr; /* Pointer (Virtual/Logical addr) to CDB bytes of
+ IO request that caused ContAllegianceCond. */
+ u8 *sensePtr; /* Pointer (Virtual/Logical addr) to Sense Data
+ returned by Request Sense operation. */
+ u8 *dataPtr; /* Pointer (Virtual/Logical addr) to Data buffer
+ of IO request that caused ContAllegianceCondition. */
+ u8 *inqPtr; /* Pointer (Virtual/Logical addr) to Inquiry Data for
+ IO *Device* that caused ContAllegianceCondition. */
+ u8 SCSIStatus; /* SCSI status byte of IO request that caused
+ Contingent Allegiance Condition. */
+ u8 DoDisplay; /* Shall we display any messages? */
+ u16 rsvd_align1;
+ u32 ComplCode; /* Four-byte OS-dependent completion code. */
+ u32 NotifyL; /* Four-byte OS-dependent notification field. */
+} IO_Info_t;
+
+/*
+ * SCSI Additional Sense Code and Additional Sense Code Qualifier table.
+ */
+typedef struct ASCQ_Table
+{
+ u8 ASC;
+ u8 ASCQ;
+ char *DevTypes;
+ char *Description;
+} ASCQ_Table_t;
+
+#if 0
+/*
+ * SCSI Opcodes table.
+ */
+typedef struct SCSI_OPS_Table
+{
+ u8 OpCode;
+ char *DevTypes;
+ char *ScsiCmndStr;
+} SCSI_OPS_Table_t;
+#endif
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+/*
+ * Public entry point prototypes
+ */
+
+/* in scsiherr.c, needed by mptscsih.c */
+extern int mpt_ScsiHost_ErrorReport(IO_Info_t *ioop);
+
+/*=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=*/
+#endif
+
--- /dev/null
+/*
+ * linux/drivers/message/fusion/scsi3.h
+ * SCSI-3 definitions and macros.
+ * (Ultimately) SCSI-3 definitions; for now, inheriting
+ * SCSI-2 definitions.
+ *
+ * Copyright (c) 1996-2004 Steven J. Ralston
+ * Written By: Steven J. Ralston (19960517)
+ * (mailto:sjralston1@netscape.net)
+ * (mailto:mpt_linux_developer@lsil.com)
+ *
+ * $Id: scsi3.h,v 1.9 2002/02/27 18:45:02 sralston Exp $
+ */
+
+#ifndef SCSI3_H_INCLUDED
+#define SCSI3_H_INCLUDED
+/***************************************************************************/
+
+/****************************************************************************
+ *
+ * Includes
+ */
+#ifdef __KERNEL__
+#include <linux/types.h>
+#else
+ #ifndef U_STUFF_DEFINED
+ #define U_STUFF_DEFINED
+ typedef unsigned char u8;
+ typedef unsigned short u16;
+ typedef unsigned int u32;
+ #endif
+#endif
+
+/****************************************************************************
+ *
+ * Defines
+ */
+
+/*
+ * SCSI Commands
+ */
+#define CMD_TestUnitReady 0x00
+#define CMD_RezeroUnit 0x01 /* direct-access devices */
+#define CMD_Rewind 0x01 /* sequential-access devices */
+#define CMD_RequestSense 0x03
+#define CMD_FormatUnit 0x04
+#define CMD_ReassignBlock 0x07
+#define CMD_Read6 0x08
+#define CMD_Write6 0x0A
+#define CMD_WriteFilemark 0x10
+#define CMD_Space 0x11
+#define CMD_Inquiry 0x12
+#define CMD_ModeSelect6 0x15
+#define CMD_Reserve6 0x16
+#define CMD_Release6 0x17
+#define CMD_Erase 0x19
+#define CMD_ModeSense6 0x1A
+#define CMD_StartStopUnit 0x1b /* direct-access devices */
+#define CMD_LoadUnload 0x1b /* sequential-access devices */
+#define CMD_ReceiveDiagnostic 0x1C
+#define CMD_SendDiagnostic 0x1D
+#define CMD_ReadCapacity 0x25
+#define CMD_Read10 0x28
+#define CMD_Write10 0x2A
+#define CMD_WriteVerify 0x2E
+#define CMD_Verify 0x2F
+#define CMD_SynchronizeCache 0x35
+#define CMD_ReadDefectData 0x37
+#define CMD_WriteBuffer 0x3B
+#define CMD_ReadBuffer 0x3C
+#define CMD_ReadLong 0x3E
+#define CMD_LogSelect 0x4C
+#define CMD_LogSense 0x4D
+#define CMD_ModeSelect10 0x55
+#define CMD_Reserve10 0x56
+#define CMD_Release10 0x57
+#define CMD_ModeSense10 0x5A
+#define CMD_PersistReserveIn 0x5E
+#define CMD_PersistReserveOut 0x5F
+#define CMD_ReportLuns 0xA0
+
+/*
+ * Control byte field
+ */
+#define CONTROL_BYTE_NACA_BIT 0x04
+#define CONTROL_BYTE_Flag_BIT 0x02
+#define CONTROL_BYTE_Link_BIT 0x01
+
+/*
+ * SCSI Messages
+ */
+#define MSG_COMPLETE 0x00
+#define MSG_EXTENDED 0x01
+#define MSG_SAVE_POINTERS 0x02
+#define MSG_RESTORE_POINTERS 0x03
+#define MSG_DISCONNECT 0x04
+#define MSG_IDERROR 0x05
+#define MSG_ABORT 0x06
+#define MSG_REJECT 0x07
+#define MSG_NOP 0x08
+#define MSG_PARITY_ERROR 0x09
+#define MSG_LINKED_CMD_COMPLETE 0x0a
+#define MSG_LCMD_COMPLETE_W_FLG 0x0b
+#define MSG_BUS_DEVICE_RESET 0x0c
+#define MSG_ABORT_TAG 0x0d
+#define MSG_CLEAR_QUEUE 0x0e
+#define MSG_INITIATE_RECOVERY 0x0f
+
+#define MSG_RELEASE_RECOVRY 0x10
+#define MSG_TERMINATE_IO 0x11
+
+#define MSG_SIMPLE_QUEUE 0x20
+#define MSG_HEAD_OF_QUEUE 0x21
+#define MSG_ORDERED_QUEUE 0x22
+#define MSG_IGNORE_WIDE_RESIDUE 0x23
+
+#define MSG_IDENTIFY 0x80
+#define MSG_IDENTIFY_W_DISC 0xc0
+
+/*
+ * SCSI Phases
+ */
+#define PHS_DATA_OUT 0x00
+#define PHS_DATA_IN 0x01
+#define PHS_COMMAND 0x02
+#define PHS_STATUS 0x03
+#define PHS_MSG_OUT 0x06
+#define PHS_MSG_IN 0x07
+
+/*
+ * Statuses
+ */
+#define STS_GOOD 0x00
+#define STS_CHECK_CONDITION 0x02
+#define STS_CONDITION_MET 0x04
+#define STS_BUSY 0x08
+#define STS_INTERMEDIATE 0x10
+#define STS_INTERMEDIATE_CONDITION_MET 0x14
+#define STS_RESERVATION_CONFLICT 0x18
+#define STS_COMMAND_TERMINATED 0x22
+#define STS_TASK_SET_FULL 0x28
+#define STS_QUEUE_FULL 0x28
+#define STS_ACA_ACTIVE 0x30
+
+#define STS_VALID_MASK 0x3e
+
+#define SCSI_STATUS(x) ((x) & STS_VALID_MASK)
+
+/*
+ * SCSI QTag Types
+ */
+#define QTAG_SIMPLE 0x20
+#define QTAG_HEAD_OF_Q 0x21
+#define QTAG_ORDERED 0x22
+
+/*
+ * SCSI Sense Key Definitions
+ */
+#define SK_NO_SENSE 0x00
+#define SK_RECOVERED_ERROR 0x01
+#define SK_NOT_READY 0x02
+#define SK_MEDIUM_ERROR 0x03
+#define SK_HARDWARE_ERROR 0x04
+#define SK_ILLEGAL_REQUEST 0x05
+#define SK_UNIT_ATTENTION 0x06
+#define SK_DATA_PROTECT 0x07
+#define SK_BLANK_CHECK 0x08
+#define SK_VENDOR_SPECIFIC 0x09
+#define SK_COPY_ABORTED 0x0a
+#define SK_ABORTED_COMMAND 0x0b
+#define SK_EQUAL 0x0c
+#define SK_VOLUME_OVERFLOW 0x0d
+#define SK_MISCOMPARE 0x0e
+#define SK_RESERVED 0x0f
+
+
+
+#define SCSI_MAX_INQUIRY_BYTES 96
+#define SCSI_STD_INQUIRY_BYTES 36
+
+#undef USE_SCSI_COMPLETE_INQDATA
+/*
+ * Structure definition for SCSI Inquiry Data
+ *
+ * NOTE: The following structure is 96 bytes in size
+ * iff USE_SCSI_COMPLETE_INQDATA IS defined above (i.e. w/ "#define").
+ * If USE_SCSI_COMPLETE_INQDATA is NOT defined above (i.e. w/ "#undef")
+ * then the following structure is only 36 bytes in size.
+ * THE CHOICE IS YOURS!
+ */
+typedef struct SCSI_Inquiry_Data
+{
+#ifdef USE_SCSI_COMPLETE_INQDATA
+ u8 InqByte[SCSI_MAX_INQUIRY_BYTES];
+#else
+ u8 InqByte[SCSI_STD_INQUIRY_BYTES];
+#endif
+
+/*
+ * the following structure works only for little-endian (Intel,
+ * LSB first (1234) byte order) systems with 4-byte ints.
+ *
+ u32 Periph_Device_Type : 5,
+ Periph_Qualifier : 3,
+ Device_Type_Modifier : 7,
+ Removable_Media : 1,
+ ANSI_Version : 3,
+ ECMA_Version : 3,
+ ISO_Version : 2,
+ Response_Data_Format : 4,
+ reserved_0 : 3,
+ AERC : 1 ;
+ u32 Additional_Length : 8,
+ reserved_1 :16,
+ SftReset : 1,
+ CmdQue : 1,
+ reserved_2 : 1,
+ Linked : 1,
+ Sync : 1,
+ WBus16 : 1,
+ WBus32 : 1,
+ RelAdr : 1 ;
+ u8 Vendor_ID[8];
+ u8 Product_ID[16];
+ u8 Revision_Level [4];
+#ifdef USE_SCSI_COMPLETE_INQDATA
+ u8 Vendor_Specific[20];
+ u8 reserved_3[40];
+#endif
+ *
+ */
+
+} SCSI_Inquiry_Data_t;
+
+#define INQ_PERIPHINFO_BYTE 0
+#define INQ_Periph_Qualifier_MASK 0xe0
+#define INQ_Periph_Device_Type_MASK 0x1f
+
+#define INQ_Peripheral_Qualifier(inqp) \
+ (int)((*((u8*)(inqp)+INQ_PERIPHINFO_BYTE) & INQ_Periph_Qualifier_MASK) >> 5)
+#define INQ_Peripheral_Device_Type(inqp) \
+ (int)(*((u8*)(inqp)+INQ_PERIPHINFO_BYTE) & INQ_Periph_Device_Type_MASK)
+
+
+#define INQ_DEVTYPEMOD_BYTE 1
+#define INQ_RMB_BIT 0x80
+#define INQ_Device_Type_Modifier_MASK 0x7f
+
+#define INQ_Removable_Medium(inqp) \
+ (int)(*((u8*)(inqp)+INQ_DEVTYPEMOD_BYTE) & INQ_RMB_BIT)
+#define INQ_Device_Type_Modifier(inqp) \
+ (int)(*((u8*)(inqp)+INQ_DEVTYPEMOD_BYTE) & INQ_Device_Type_Modifier_MASK)
+
+
+#define INQ_VERSIONINFO_BYTE 2
+#define INQ_ISO_Version_MASK 0xc0
+#define INQ_ECMA_Version_MASK 0x38
+#define INQ_ANSI_Version_MASK 0x07
+
+#define INQ_ISO_Version(inqp) \
+ (int)(*((u8*)(inqp)+INQ_VERSIONINFO_BYTE) & INQ_ISO_Version_MASK)
+#define INQ_ECMA_Version(inqp) \
+ (int)(*((u8*)(inqp)+INQ_VERSIONINFO_BYTE) & INQ_ECMA_Version_MASK)
+#define INQ_ANSI_Version(inqp) \
+ (int)(*((u8*)(inqp)+INQ_VERSIONINFO_BYTE) & INQ_ANSI_Version_MASK)
+
+
+#define INQ_BYTE3 3
+#define INQ_AERC_BIT 0x80
+#define INQ_TrmTsk_BIT 0x40
+#define INQ_NormACA_BIT 0x20
+#define INQ_RDF_MASK 0x0F
+
+#define INQ_AER_Capable(inqp) \
+ (int)(*((u8*)(inqp)+INQ_BYTE3) & INQ_AERC_BIT)
+#define INQ_TrmTsk(inqp) \
+ (int)(*((u8*)(inqp)+INQ_BYTE3) & INQ_TrmTsk_BIT)
+#define INQ_NormACA(inqp) \
+ (int)(*((u8*)(inqp)+INQ_BYTE3) & INQ_NormACA_BIT)
+#define INQ_Response_Data_Format(inqp) \
+ (int)(*((u8*)(inqp)+INQ_BYTE3) & INQ_RDF_MASK)
+
+
+#define INQ_CAPABILITY_BYTE 7
+#define INQ_RelAdr_BIT 0x80
+#define INQ_WBus32_BIT 0x40
+#define INQ_WBus16_BIT 0x20
+#define INQ_Sync_BIT 0x10
+#define INQ_Linked_BIT 0x08
+ /* INQ_Reserved BIT 0x04 */
+#define INQ_CmdQue_BIT 0x02
+#define INQ_SftRe_BIT 0x01
+
+#define IS_RelAdr_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_RelAdr_BIT)
+#define IS_WBus32_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_WBus32_BIT)
+#define IS_WBus16_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_WBus16_BIT)
+#define IS_Sync_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_Sync_BIT)
+#define IS_Linked_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_Linked_BIT)
+#define IS_CmdQue_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_CmdQue_BIT)
+#define IS_SftRe_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_SftRe_BIT)
+
+#define INQ_Width_BITS \
+ (INQ_WBus32_BIT | INQ_WBus16_BIT)
+#define IS_Wide_DEV(inqp) \
+ (int)(*((u8*)(inqp)+INQ_CAPABILITY_BYTE) & INQ_Width_BITS)
+
+
+/*
+ * SCSI peripheral device types
+ */
+#define SCSI_TYPE_DAD 0x00 /* Direct Access Device */
+#define SCSI_TYPE_SAD 0x01 /* Sequential Access Device */
+#define SCSI_TYPE_TAPE SCSI_TYPE_SAD
+#define SCSI_TYPE_PRT 0x02 /* Printer */
+#define SCSI_TYPE_PROC 0x03 /* Processor */
+#define SCSI_TYPE_WORM 0x04
+#define SCSI_TYPE_CDROM 0x05
+#define SCSI_TYPE_SCAN 0x06 /* Scanner */
+#define SCSI_TYPE_OPTICAL 0x07 /* Magneto/Optical */
+#define SCSI_TYPE_CHANGER 0x08
+#define SCSI_TYPE_COMM 0x09 /* Communications device */
+#define SCSI_TYPE_UNKNOWN 0x1f
+#define SCSI_TYPE_UNCONFIGURED_LUN 0x7f
+
+#define SCSI_TYPE_MAX_KNOWN SCSI_TYPE_COMM
+
+/*
+ * Peripheral Qualifiers
+ */
+#define DEVICE_PRESENT 0x00
+#define LUN_NOT_PRESENT 0x01
+#define LUN_NOT_SUPPORTED 0x03
+
+/*
+ * ANSI Versions
+ */
+#ifndef SCSI_1
+#define SCSI_1 0x01
+#endif
+#ifndef SCSI_2
+#define SCSI_2 0x02
+#endif
+#ifndef SCSI_3
+#define SCSI_3 0x03
+#endif
+
+
+#define SCSI_MAX_SENSE_BYTES 255
+#define SCSI_STD_SENSE_BYTES 18
+#define SCSI_PAD_SENSE_BYTES (SCSI_MAX_SENSE_BYTES - SCSI_STD_SENSE_BYTES)
+
+#undef USE_SCSI_COMPLETE_SENSE
+/*
+ * Structure definition for SCSI Sense Data
+ *
+ * NOTE: The following structure is 255 bytes in size
+ * iff USE_SCSI_COMPLETE_SENSE IS defined above (i.e. w/ "#define").
+ * If USE_SCSI_COMPLETE_SENSE is NOT defined above (i.e. w/ "#undef")
+ * then the following structure is only 18 bytes in size.
+ * THE CHOICE IS YOURS!
+ *
+ */
+typedef struct SCSI_Sense_Data
+{
+#ifdef USE_SCSI_COMPLETE_SENSE
+ u8 SenseByte[SCSI_MAX_SENSE_BYTES];
+#else
+ u8 SenseByte[SCSI_STD_SENSE_BYTES];
+#endif
+
+/*
+ * the following structure works only for little-endian (Intel,
+ * LSB first (1234) byte order) systems with 4-byte ints.
+ *
+ u8 Error_Code :4, // 0x00
+ Error_Class :3,
+ Valid :1
+ ;
+ u8 Segment_Number // 0x01
+ ;
+ u8 Sense_Key :4, // 0x02
+ Reserved :1,
+ Incorrect_Length_Indicator:1,
+ End_Of_Media :1,
+ Filemark :1
+ ;
+ u8 Information_MSB; // 0x03
+ u8 Information_Byte2; // 0x04
+ u8 Information_Byte1; // 0x05
+ u8 Information_LSB; // 0x06
+ u8 Additional_Length; // 0x07
+
+ u32 Command_Specific_Information; // 0x08 - 0x0b
+
+ u8 Additional_Sense_Code; // 0x0c
+ u8 Additional_Sense_Code_Qualifier; // 0x0d
+ u8 Field_Replaceable_Unit_Code; // 0x0e
+ u8 Illegal_Req_Bit_Pointer :3, // 0x0f
+ Illegal_Req_Bit_Valid :1,
+ Illegal_Req_Reserved :2,
+ Illegal_Req_Cmd_Data :1,
+ Sense_Key_Specific_Valid :1
+ ;
+ u16 Sense_Key_Specific_Data; // 0x10 - 0x11
+
+#ifdef USE_SCSI_COMPLETE_SENSE
+ u8 Additional_Sense_Data[SCSI_PAD_SENSE_BYTES];
+#else
+ u8 Additional_Sense_Data[1];
+#endif
+ *
+ */
+
+} SCSI_Sense_Data_t;
+
+
+#define SD_ERRCODE_BYTE 0
+#define SD_Valid_BIT 0x80
+#define SD_Error_Code_MASK 0x7f
+#define SD_Valid(sdp) \
+ (int)(*((u8*)(sdp)+SD_ERRCODE_BYTE) & SD_Valid_BIT)
+#define SD_Error_Code(sdp) \
+ (int)(*((u8*)(sdp)+SD_ERRCODE_BYTE) & SD_Error_Code_MASK)
+
+
+#define SD_SEGNUM_BYTE 1
+#define SD_Segment_Number(sdp) (int)(*((u8*)(sdp)+SD_SEGNUM_BYTE))
+
+
+#define SD_SENSEKEY_BYTE 2
+#define SD_Filemark_BIT 0x80
+#define SD_EOM_BIT 0x40
+#define SD_ILI_BIT 0x20
+#define SD_Sense_Key_MASK 0x0f
+#define SD_Filemark(sdp) \
+ (int)(*((u8*)(sdp)+SD_SENSEKEY_BYTE) & SD_Filemark_BIT)
+#define SD_EOM(sdp) \
+ (int)(*((u8*)(sdp)+SD_SENSEKEY_BYTE) & SD_EOM_BIT)
+#define SD_ILI(sdp) \
+ (int)(*((u8*)(sdp)+SD_SENSEKEY_BYTE) & SD_ILI_BIT)
+#define SD_Sense_Key(sdp) \
+ (int)(*((u8*)(sdp)+SD_SENSEKEY_BYTE) & SD_Sense_Key_MASK)
+
+
+#define SD_INFO3_BYTE 3
+#define SD_INFO2_BYTE 4
+#define SD_INFO1_BYTE 5
+#define SD_INFO0_BYTE 6
+#define SD_Information3(sdp) (int)(*((u8*)(sdp)+SD_INFO3_BYTE))
+#define SD_Information2(sdp) (int)(*((u8*)(sdp)+SD_INFO2_BYTE))
+#define SD_Information1(sdp) (int)(*((u8*)(sdp)+SD_INFO1_BYTE))
+#define SD_Information0(sdp) (int)(*((u8*)(sdp)+SD_INFO0_BYTE))
+
+
+#define SD_ADDL_LEN_BYTE 7
+#define SD_Additional_Sense_Length(sdp) \
+ (int)(*((u8*)(sdp)+SD_ADDL_LEN_BYTE))
+#define SD_Addl_Sense_Len SD_Additional_Sense_Length
+
+
+#define SD_CMD_SPECIFIC3_BYTE 8
+#define SD_CMD_SPECIFIC2_BYTE 9
+#define SD_CMD_SPECIFIC1_BYTE 10
+#define SD_CMD_SPECIFIC0_BYTE 11
+#define SD_Cmd_Specific_Info3(sdp) (int)(*((u8*)(sdp)+SD_CMD_SPECIFIC3_BYTE))
+#define SD_Cmd_Specific_Info2(sdp) (int)(*((u8*)(sdp)+SD_CMD_SPECIFIC2_BYTE))
+#define SD_Cmd_Specific_Info1(sdp) (int)(*((u8*)(sdp)+SD_CMD_SPECIFIC1_BYTE))
+#define SD_Cmd_Specific_Info0(sdp) (int)(*((u8*)(sdp)+SD_CMD_SPECIFIC0_BYTE))
+
+
+#define SD_ADDL_SENSE_CODE_BYTE 12
+#define SD_Additional_Sense_Code(sdp) \
+ (int)(*((u8*)(sdp)+SD_ADDL_SENSE_CODE_BYTE))
+#define SD_Addl_Sense_Code SD_Additional_Sense_Code
+#define SD_ASC SD_Additional_Sense_Code
+
+
+#define SD_ADDL_SENSE_CODE_QUAL_BYTE 13
+#define SD_Additional_Sense_Code_Qualifier(sdp) \
+ (int)(*((u8*)(sdp)+SD_ADDL_SENSE_CODE_QUAL_BYTE))
+#define SD_Addl_Sense_Code_Qual SD_Additional_Sense_Code_Qualifier
+#define SD_ASCQ SD_Additional_Sense_Code_Qualifier
+
+
+#define SD_FIELD_REPL_UNIT_CODE_BYTE 14
+#define SD_Field_Replaceable_Unit_Code(sdp) \
+ (int)(*((u8*)(sdp)+SD_FIELD_REPL_UNIT_CODE_BYTE))
+#define SD_Field_Repl_Unit_Code SD_Field_Replaceable_Unit_Code
+#define SD_FRUC SD_Field_Replaceable_Unit_Code
+#define SD_FRU SD_Field_Replaceable_Unit_Code
+
+
+/*
+ * Sense-Key Specific offsets and macros.
+ */
+#define SD_SKS2_BYTE 15
+#define SD_SKS_Valid_BIT 0x80
+#define SD_SKS_Cmd_Data_BIT 0x40
+#define SD_SKS_Bit_Ptr_Valid_BIT 0x08
+#define SD_SKS_Bit_Ptr_MASK 0x07
+#define SD_SKS1_BYTE 16
+#define SD_SKS0_BYTE 17
+#define SD_Sense_Key_Specific_Valid(sdp) \
+ (int)(*((u8*)(sdp)+SD_SKS2_BYTE) & SD_SKS_Valid_BIT)
+#define SD_SKS_Valid SD_Sense_Key_Specific_Valid
+#define SD_SKS_CDB_Error(sdp) \
+ (int)(*((u8*)(sdp)+SD_SKS2_BYTE) & SD_SKS_Cmd_Data_BIT)
+#define SD_Was_Illegal_Request SD_SKS_CDB_Error
+#define SD_SKS_Bit_Pointer_Valid(sdp) \
+ (int)(*((u8*)(sdp)+SD_SKS2_BYTE) & SD_SKS_Bit_Ptr_Valid_BIT)
+#define SD_SKS_Bit_Pointer(sdp) \
+ (int)(*((u8*)(sdp)+SD_SKS2_BYTE) & SD_SKS_Bit_Ptr_MASK)
+#define SD_Field_Pointer(sdp) \
+ (int)( ((u16)(*((u8*)(sdp)+SD_SKS1_BYTE)) << 8) \
+ + *((u8*)(sdp)+SD_SKS0_BYTE) )
+#define SD_Bad_Byte SD_Field_Pointer
+#define SD_Actual_Retry_Count SD_Field_Pointer
+#define SD_Progress_Indication SD_Field_Pointer
+
+/*
+ * Mode Sense Write Protect Mask
+ */
+#define WRITE_PROTECT_MASK 0x80
+
+/*
+ * Medium Type Codes
+ */
+#define OPTICAL_DEFAULT 0x00
+#define OPTICAL_READ_ONLY_MEDIUM 0x01
+#define OPTICAL_WRITE_ONCE_MEDIUM 0x02
+#define OPTICAL_READ_WRITABLE_MEDIUM 0x03
+#define OPTICAL_RO_OR_WO_MEDIUM 0x04
+#define OPTICAL_RO_OR_RW_MEDIUM 0x05
+#define OPTICAL_WO_OR_RW_MEDIUM 0x06
+
+
+
+/*
+ * Structure definition for READ6, WRITE6 (6-byte CDB)
+ */
+typedef struct SCSI_RW6_CDB
+{
+ u32 OpCode :8,
+ LBA_HI :5, /* 5 MSBit's of the LBA */
+ Lun :3,
+ LBA_MID :8, /* NOTE: total of 21 bits in LBA */
+ LBA_LO :8 ; /* Max LBA = 0x001fffff */
+ u8 BlockCount;
+ u8 Control;
+} SCSI_RW6_t;
+
+#define MAX_RW6_LBA ((u32)0x001fffff)
+
+/*
+ * Structure definition for READ10, WRITE10 (10-byte CDB)
+ *
+ * NOTE: ParityCheck bit is applicable only for VERIFY and WRITE VERIFY for
+ * the ADP-92 DAC only. In the SCSI2 spec. this same bit is defined as a
+ * FUA (forced unit access) bit for READs and WRITEs. Since this driver
+ * does not use the FUA, this bit is defined as it is used by the ADP-92.
+ * Also, for READ CAPACITY, only the OpCode field is used.
+ */
+typedef struct SCSI_RW10_CDB
+{
+ u8 OpCode;
+ u8 Reserved1;
+ u32 LBA;
+ u8 Reserved2;
+ u16 BlockCount;
+ u8 Control;
+} SCSI_RW10_t;
+
+#define PARITY_CHECK 0x08 /* parity check bit - byte[1], bit 3 */
+
+ /*
+ * Structure definition for data returned by READ CAPACITY cmd;
+ * READ CAPACITY data
+ */
+ typedef struct READ_CAP_DATA
+ {
+ u32 MaxLBA;
+ u32 BlockBytes;
+ } SCSI_READ_CAP_DATA_t, *pSCSI_READ_CAP_DATA_t;
+
+
+/*
+ * Structure definition for FORMAT UNIT CDB (6-byte CDB)
+ */
+typedef struct _SCSI_FORMAT_UNIT
+{
+ u8 OpCode;
+ u8 Reserved1;
+ u8 VendorSpecific;
+ u16 Interleave;
+ u8 Control;
+} SCSI_FORMAT_UNIT_t;
+
+/*
+ * Structure definition for REQUEST SENSE (6-byte CDB)
+ */
+typedef struct _SCSI_REQUEST_SENSE
+{
+ u8 OpCode;
+ u8 Reserved1;
+ u8 Reserved2;
+ u8 Reserved3;
+ u8 AllocLength;
+ u8 Control;
+} SCSI_REQ_SENSE_t;
+
+/*
+ * Structure definition for REPORT LUNS (12-byte CDB)
+ */
+typedef struct _SCSI_REPORT_LUNS
+{
+ u8 OpCode;
+ u8 Reserved1[5];
+ u32 AllocationLength;
+ u8 Reserved2;
+ u8 Control;
+} SCSI_REPORT_LUNS_t, *pSCSI_REPORT_LUNS_t;
+
+ /*
+ * (per-level) LUN information bytes
+ */
+/*
+ * Following doesn't work on ARMCC compiler
+ * [apparently] because it pads every struct
+ * to be multiple of 4 bytes!
+ * So SCSI_LUN_LEVELS_t winds up being 16
+ * bytes instead of 8!
+ *
+ typedef struct LUN_INFO
+ {
+ u8 AddrMethod_plus_LunOrBusNumber;
+ u8 LunOrTarget;
+ } SCSI_LUN_INFO_t, *pSCSI_LUN_INFO_t;
+
+ typedef struct LUN_LEVELS
+ {
+ SCSI_LUN_INFO_t LUN_0;
+ SCSI_LUN_INFO_t LUN_1;
+ SCSI_LUN_INFO_t LUN_2;
+ SCSI_LUN_INFO_t LUN_3;
+ } SCSI_LUN_LEVELS_t, *pSCSI_LUN_LEVELS_t;
+*/
+ /*
+ * All 4 levels (8 bytes) of LUN information
+ */
+ typedef struct LUN_LEVELS
+ {
+ u8 LVL1_AddrMethod_plus_LunOrBusNumber;
+ u8 LVL1_LunOrTarget;
+ u8 LVL2_AddrMethod_plus_LunOrBusNumber;
+ u8 LVL2_LunOrTarget;
+ u8 LVL3_AddrMethod_plus_LunOrBusNumber;
+ u8 LVL3_LunOrTarget;
+ u8 LVL4_AddrMethod_plus_LunOrBusNumber;
+ u8 LVL4_LunOrTarget;
+ } SCSI_LUN_LEVELS_t, *pSCSI_LUN_LEVELS_t;
+
+ /*
+ * Structure definition for data returned by REPORT LUNS cmd;
+ * LUN reporting parameter list format
+ */
+ typedef struct LUN_REPORT
+ {
+ u32 LunListLength;
+ u32 Reserved;
+ SCSI_LUN_LEVELS_t LunInfo[1];
+ } SCSI_LUN_REPORT_t, *pSCSI_LUN_REPORT_t;
+
+/****************************************************************************
+ *
+ * Externals
+ */
+
+/****************************************************************************
+ *
+ * Public Typedefs & Related Defines
+ */
+
+/****************************************************************************
+ *
+ * Macros (embedded, above)
+ */
+
+/****************************************************************************
+ *
+ * Public Variables
+ */
+
+/****************************************************************************
+ *
+ * Public Prototypes (module entry points)
+ */
+
+
+/***************************************************************************/
+#endif
--- /dev/null
+
+static const char *ScsiOpcodeString[256] = {
+ "TEST UNIT READY\0\01", /* 00h */
+ "REWIND\0\002"
+ "\001REZERO UNIT", /* 01h */
+ "\0\0", /* 02h */
+ "REQUEST SENSE\0\01", /* 03h */
+ "FORMAT UNIT\0\03"
+ "\001FORMAT MEDIUM\0"
+ "\002FORMAT", /* 04h */
+ "READ BLOCK LIMITS\0\1", /* 05h */
+ "\0\0", /* 06h */
+ "REASSIGN BLOCKS\0\02"
+ "\010INITIALIZE ELEMENT STATUS", /* 07h */
+ "READ(06)\0\04"
+ "\001READ\0"
+ "\003RECEIVE\0"
+ "\011GET MESSAGE(06)", /* 08h */
+ "\0\0", /* 09h */
+ "WRITE(06)\0\05"
+ "\001WRITE\0"
+ "\002PRINT\0"
+ "\003SEND(6)\0"
+ "\011SEND MESSAGE(06)", /* 0Ah */
+ "SEEK(06)\0\02"
+ "\003SLEW AND PRINT", /* 0Bh */
+ "\0\0", /* 0Ch */
+ "\0\0", /* 0Dh */
+ "\0\0", /* 0Eh */
+ "READ REVERSE\0\01", /* 0Fh */
+ "WRITE FILEMARKS\0\02"
+ "\003SYNCHRONIZE BUFFER", /* 10h */
+ "SPACE(6)\0\01", /* 11h */
+ "INQUIRY\0\01", /* 12h */
+ "VERIFY\0\01", /* 13h */
+ "RECOVER BUFFERED DATA\0\01", /* 14h */
+ "MODE SELECT(06)\0\01", /* 15h */
+ "RESERVE(06)\0\02"
+ "\010RESERVE ELEMENT(06)", /* 16h */
+ "RELEASE(06)\0\02"
+ "\010RELEASE ELEMENT(06)", /* 17h */
+ "COPY\0\01", /* 18h */
+ "ERASE\0\01", /* 19h */
+ "MODE SENSE(06)\0\01", /* 1Ah */
+ "STOP START UNIT\0\04"
+ "\001LOAD UNLOAD\0"
+ "\002STOP PRINT\0"
+ "\006SCAN\0\002", /* 1Bh */
+ "RECEIVE DIAGNOSTIC RESULTS\0\01", /* 1Ch */
+ "SEND DIAGNOSTIC\0\01", /* 1Dh */
+ "PREVENT ALLOW MEDIUM REMOVAL\0\01", /* 1Eh */
+ "\0\0", /* 1Fh */
+ "\0\0", /* 20h */
+ "\0\0", /* 21h */
+ "\0\0", /* 22h */
+ "READ FORMAT CAPACITIES\0\01", /* 23h */
+ "SET WINDOW\0\01", /* 24h */
+ "READ CAPACITY\0\03"
+ "\006GET WINDOW\0"
+ "\037FREAD CARD CAPACITY", /* 25h */
+ "\0\0", /* 26h */
+ "\0\0", /* 27h */
+ "READ(10)\0\02"
+ "\011GET MESSAGE(10)", /* 28h */
+ "READ GENERATION\0\01", /* 29h */
+ "WRITE(10)\0\03"
+ "\011SEND(10)\0"
+ "\011SEND MESSAGE(10)", /* 2Ah */
+ "SEEK(10)\0\03"
+ "LOCATE(10)\0"
+ "POSITION TO ELEMENT", /* 2Bh */
+ "ERASE(10)\0\01", /* 2Ch */
+ "READ UPDATED BLOCK\0\01", /* 2Dh */
+ "WRITE AND VERIFY(10)\0\01", /* 2Eh */
+ "VERIFY(10)\0\01", /* 2Fh */
+ "SEARCH DATA HIGH(10)\0\01", /* 30h */
+ "SEARCH DATA EQUAL(10)\0\02"
+ "OBJECT POSITION", /* 31h */
+ "SEARCH DATA LOW(10)\0\01", /* 32h */
+ "SET LIMITS(10)\0\01", /* 33h */
+ "PRE-FETCH(10)\0\03"
+ "READ POSITION\0"
+ "GET DATA BUFFER STATUS", /* 34h */
+ "SYNCHRONIZE CACHE(10)\0\01", /* 35h */
+ "LOCK UNLOCK CACHE(10)\0\01", /* 36h */
+ "READ DEFECT DATA(10)\0\01", /* 37h */
+ "MEDIUM SCAN\0\01", /* 38h */
+ "COMPARE\0\01", /* 39h */
+ "COPY AND VERIFY\0\01", /* 3Ah */
+ "WRITE BUFFER\0\01", /* 3Bh */
+ "READ BUFFER\0\01", /* 3Ch */
+ "UPDATE BLOCK\0\01", /* 3Dh */
+ "READ LONG\0\01", /* 3Eh */
+ "WRITE LONG\0\01", /* 3Fh */
+ "CHANGE DEFINITION\0\01", /* 40h */
+ "WRITE SAME(10)\0\01", /* 41h */
+ "READ SUB-CHANNEL\0\01", /* 42h */
+ "READ TOC/PMA/ATIP\0\01", /* 43h */
+ "REPORT DENSITY SUPPORT\0\02"
+ "\005READ HEADER", /* 44h */
+ "PLAY AUDIO(10)\0\01", /* 45h */
+ "GET CONFIGURATION\0\01", /* 46h */
+ "PLAY AUDIO MSF\0\01", /* 47h */
+ "PLAY AUDIO TRACK INDEX\0\01", /* 48h */
+ "PLAY TRACK RELATIVE(10)\0\01", /* 49h */
+ "GET EVENT STATUS NOTIFICATION\0\01", /* 4Ah */
+ "PAUSE/RESUME\0\01", /* 4Bh */
+ "LOG SELECT\0\01", /* 4Ch */
+ "LOG SENSE\0\01", /* 4Dh */
+ "STOP PLAY/SCAN\0\01", /* 4Eh */
+ "\0\0", /* 4Fh */
+ "XDWRITE(10)\0\01", /* 50h */
+ "XPWRITE(10)\0\02"
+ "READ DISC INFORMATION", /* 51h */
+ "XDREAD(10)\0\02"
+ "READ TRACK INFORMATION", /* 52h */
+ "RESERVE TRACK\0\01", /* 53h */
+ "SEND OPC INFORMATION\0\01", /* 54h */
+ "MODE SELECT(10)\0\01", /* 55h */
+ "RESERVE(10)\0\02"
+ "RESERVE ELEMENT(10)", /* 56h */
+ "RELEASE(10)\0\02"
+ "RELEASE ELEMENT(10)", /* 57h */
+ "REPAIR TRACK\0\01", /* 58h */
+ "READ MASTER CUE\0\01", /* 59h */
+ "MODE SENSE(10)\0\01", /* 5Ah */
+ "CLOSE TRACK/SESSION\0\01", /* 5Bh */
+ "READ BUFFER CAPACITY\0\01", /* 5Ch */
+ "SEND CUE SHEET\0\01", /* 5Dh */
+ "PERSISTENT RESERVE IN\0\01", /* 5Eh */
+ "PERSISTENT RESERVE OUT\0\01", /* 5Fh */
+ "\0\0", /* 60h */
+ "\0\0", /* 61h */
+ "\0\0", /* 62h */
+ "\0\0", /* 63h */
+ "\0\0", /* 64h */
+ "\0\0", /* 65h */
+ "\0\0", /* 66h */
+ "\0\0", /* 67h */
+ "\0\0", /* 68h */
+ "\0\0", /* 69h */
+ "\0\0", /* 6Ah */
+ "\0\0", /* 6Bh */
+ "\0\0", /* 6Ch */
+ "\0\0", /* 6Dh */
+ "\0\0", /* 6Eh */
+ "\0\0", /* 6Fh */
+ "\0\0", /* 70h */
+ "\0\0", /* 71h */
+ "\0\0", /* 72h */
+ "\0\0", /* 73h */
+ "\0\0", /* 74h */
+ "\0\0", /* 75h */
+ "\0\0", /* 76h */
+ "\0\0", /* 77h */
+ "\0\0", /* 78h */
+ "\0\0", /* 79h */
+ "\0\0", /* 7Ah */
+ "\0\0", /* 7Bh */
+ "\0\0", /* 7Ch */
+ "\0\0", /* 7Dh */
+ "\0\0", /* 7Eh */
+ "\0\0", /* 7Fh */
+ "XDWRITE EXTENDED(16)\0\01", /* 80h */
+ "REBUILD(16)\0\01", /* 81h */
+ "REGENERATE(16)\0\01", /* 82h */
+ "EXTENDED COPY\0\01", /* 83h */
+ "RECEIVE COPY RESULTS\0\01", /* 84h */
+ "ACCESS CONTROL IN [proposed]\0\01", /* 86h */
+ "ACCESS CONTROL OUT [proposed]\0\01", /* 87h */
+ "READ(16)\0\01", /* 88h */
+ "DEVICE LOCKS [proposed]\0\01", /* 89h */
+ "WRITE(16)\0\01", /* 8Ah */
+ "\0\0", /* 8Bh */
+ "READ ATTRIBUTES [proposed]\0\01", /* 8Ch */
+ "WRITE ATTRIBUTES [proposed]\0\01", /* 8Dh */
+ "WRITE AND VERIFY(16)\0\01", /* 8Eh */
+ "VERIFY(16)\0\01", /* 8Fh */
+ "PRE-FETCH(16)\0\01", /* 90h */
+ "SYNCHRONIZE CACHE(16)\0\02"
+ "SPACE(16) [1]", /* 91h */
+ "LOCK UNLOCK CACHE(16)\0\02"
+ "LOCATE(16) [1]", /* 92h */
+ "WRITE SAME(16)\0\01", /* 93h */
+ "[usage proposed by SCSI Socket Services project]\0\01", /* 94h */
+ "[usage proposed by SCSI Socket Services project]\0\01", /* 95h */
+ "[usage proposed by SCSI Socket Services project]\0\01", /* 96h */
+ "[usage proposed by SCSI Socket Services project]\0\01", /* 97h */
+ "MARGIN CONTROL [proposed]\0\01", /* 98h */
+ "\0\0", /* 99h */
+ "\0\0", /* 9Ah */
+ "\0\0", /* 9Bh */
+ "\0\0", /* 9Ch */
+ "\0\0", /* 9Dh */
+ "SERVICE ACTION IN [proposed]\0\01", /* 9Eh */
+ "SERVICE ACTION OUT [proposed]\0\01", /* 9Fh */
+ "REPORT LUNS\0\01", /* A0h */
+ "BLANK\0\01", /* A1h */
+ "SEND EVENT\0\01", /* A2h */
+ "MAINTENANCE (IN)\0\02"
+ "SEND KEY", /* A3h */
+ "MAINTENANCE (OUT)\0\02"
+ "REPORT KEY", /* A4h */
+ "MOVE MEDIUM\0\02"
+ "PLAY AUDIO(12)", /* A5h */
+ "EXCHANGE MEDIUM\0\02"
+ "LOAD/UNLOAD C/DVD", /* A6h */
+ "MOVE MEDIUM ATTACHED\0\02"
+ "SET READ AHEAD\0\01", /* A7h */
+ "READ(12)\0\02"
+ "GET MESSAGE(12)", /* A8h */
+ "PLAY TRACK RELATIVE(12)\0\01", /* A9h */
+ "WRITE(12)\0\02"
+ "SEND MESSAGE(12)", /* AAh */
+ "\0\0", /* ABh */
+ "ERASE(12)\0\02"
+ "GET PERFORMANCE", /* ACh */
+ "READ DVD STRUCTURE\0\01", /* ADh */
+ "WRITE AND VERIFY(12)\0\01", /* AEh */
+ "VERIFY(12)\0\01", /* AFh */
+ "SEARCH DATA HIGH(12)\0\01", /* B0h */
+ "SEARCH DATA EQUAL(12)\0\01", /* B1h */
+ "SEARCH DATA LOW(12)\0\01", /* B2h */
+ "SET LIMITS(12)\0\01", /* B3h */
+ "READ ELEMENT STATUS ATTACHED\0\01", /* B4h */
+ "REQUEST VOLUME ELEMENT ADDRESS\0\01", /* B5h */
+ "SEND VOLUME TAG\0\02"
+ "SET STREAMING", /* B6h */
+ "READ DEFECT DATA(12)\0\01", /* B7h */
+ "READ ELEMENT STATUS\0\01", /* B8h */
+ "READ CD MSF\0\01", /* B9h */
+ "REDUNDANCY GROUP (IN)\0\02"
+ "SCAN", /* BAh */
+ "REDUNDANCY GROUP (OUT)\0\02"
+ "SET CD-ROM SPEED", /* BBh */
+ "SPARE (IN)\0\02"
+ "PLAY CD", /* BCh */
+ "SPARE (OUT)\0\02"
+ "MECHANISM STATUS", /* BDh */
+ "VOLUME SET (IN)\0\02"
+ "READ CD", /* BEh */
+ "VOLUME SET (OUT)\0\02"
+ "SEND DVD STRUCTURE", /* BFh */
+ "\0\0", /* C0h */
+ "\0\0", /* C1h */
+ "\0\0", /* C2h */
+ "\0\0", /* C3h */
+ "\0\0", /* C4h */
+ "\0\0", /* C5h */
+ "\0\0", /* C6h */
+ "\0\0", /* C7h */
+ "\0\0", /* C8h */
+ "\0\0", /* C9h */
+ "\0\0", /* CAh */
+ "\0\0", /* CBh */
+ "\0\0", /* CCh */
+ "\0\0", /* CDh */
+ "\0\0", /* CEh */
+ "\0\0", /* CFh */
+ "\0\0", /* D0h */
+ "\0\0", /* D1h */
+ "\0\0", /* D2h */
+ "\0\0", /* D3h */
+ "\0\0", /* D4h */
+ "\0\0", /* D5h */
+ "\0\0", /* D6h */
+ "\0\0", /* D7h */
+ "\0\0", /* D8h */
+ "\0\0", /* D9h */
+ "\0\0", /* DAh */
+ "\0\0", /* DBh */
+ "\0\0", /* DCh */
+ "\0\0", /* DDh */
+ "\0\0", /* DEh */
+ "\0\0", /* DFh */
+ "\0\0", /* E0h */
+ "\0\0", /* E1h */
+ "\0\0", /* E2h */
+ "\0\0", /* E3h */
+ "\0\0", /* E4h */
+ "\0\0", /* E5h */
+ "\0\0", /* E6h */
+ "\0\0", /* E7h */
+ "\0\0", /* E8h */
+ "\0\0", /* E9h */
+ "\0\0", /* EAh */
+ "\0\0", /* EBh */
+ "\0\0", /* ECh */
+ "\0\0", /* EDh */
+ "\0\0", /* EEh */
+ "\0\0", /* EFh */
+ "\0\0", /* F0h */
+ "\0\0", /* F1h */
+ "\0\0", /* F2h */
+ "\0\0", /* F3h */
+ "\0\0", /* F4h */
+ "\0\0", /* F5h */
+ "\0\0", /* F6h */
+ "\0\0", /* F7h */
+ "\0\0", /* F8h */
+ "\0\0", /* F9h */
+ "\0\0", /* FAh */
+ "\0\0", /* FBh */
+ "\0\0", /* FCh */
+ "\0\0", /* FDh */
+ "\0\0", /* FEh */
+ "\0\0" /* FFh */
+};
+
--- /dev/null
+/*
+ * Core I2O structure management
+ *
+ * (C) Copyright 1999-2002 Red Hat Software
+ *
+ * Written by Alan Cox, Building Number Three Ltd
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * A lot of the I2O message side code from this is taken from the
+ * Red Creek RCPCI45 adapter driver by Red Creek Communications
+ *
+ * Fixes/additions:
+ * Philipp Rumpf
+ * Juha Sievänen <Juha.Sievanen@cs.Helsinki.FI>
+ * Auvo Häkkinen <Auvo.Hakkinen@cs.Helsinki.FI>
+ * Deepak Saxena <deepak@plexity.net>
+ * Boji T Kannanthanam <boji.t.kannanthanam@intel.com>
+ * Alan Cox <alan@redhat.com>:
+ * Ported to Linux 2.5.
+ * Markus Lidel <Markus.Lidel@shadowconnect.com>:
+ * Minor fixes for 2.6.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+
+#include <linux/i2o.h>
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/smp_lock.h>
+
+#include <linux/bitops.h>
+#include <linux/wait.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <asm/semaphore.h>
+#include <linux/completion.h>
+#include <linux/workqueue.h>
+
+#include <asm/io.h>
+#include <linux/reboot.h>
+#ifdef CONFIG_MTRR
+#include <asm/mtrr.h>
+#endif /* CONFIG_MTRR */
+
+#include "i2o_lan.h"
+
+//#define DRIVERDEBUG
+
+#ifdef DRIVERDEBUG
+#define dprintk(s, args...) printk(s, ## args)
+#else
+#define dprintk(s, args...)
+#endif
+
+/* OSM table */
+static struct i2o_handler *i2o_handlers[MAX_I2O_MODULES];
+
+/* Controller list */
+static struct i2o_controller *i2o_controllers[MAX_I2O_CONTROLLERS];
+struct i2o_controller *i2o_controller_chain;
+int i2o_num_controllers;
+
+/* Initiator Context for Core message */
+static int core_context;
+
+/* Initialization && shutdown functions */
+void i2o_sys_init(void);
+static void i2o_sys_shutdown(void);
+static int i2o_reset_controller(struct i2o_controller *);
+static int i2o_reboot_event(struct notifier_block *, unsigned long, void *);
+static int i2o_online_controller(struct i2o_controller *);
+static int i2o_init_outbound_q(struct i2o_controller *);
+static int i2o_post_outbound_messages(struct i2o_controller *);
+
+/* Reply handler */
+static void i2o_core_reply(struct i2o_handler *, struct i2o_controller *,
+ struct i2o_message *);
+
+/* Various helper functions */
+static int i2o_lct_get(struct i2o_controller *);
+static int i2o_lct_notify(struct i2o_controller *);
+static int i2o_hrt_get(struct i2o_controller *);
+
+static int i2o_build_sys_table(void);
+static int i2o_systab_send(struct i2o_controller *c);
+
+/* I2O core event handler */
+static int i2o_core_evt(void *);
+static int evt_pid;
+static int evt_running;
+
+/* Dynamic LCT update handler */
+static int i2o_dyn_lct(void *);
+
+void i2o_report_controller_unit(struct i2o_controller *, struct i2o_device *);
+
+static void i2o_pci_dispose(struct i2o_controller *c);
+
+/*
+ * I2O System Table. Contains information about
+ * all the IOPs in the system. Used to inform IOPs
+ * about each other's existence.
+ *
+ * sys_tbl_ver is the CurrentChangeIndicator that is
+ * used by IOPs to track changes.
+ */
+static struct i2o_sys_tbl *sys_tbl;
+static int sys_tbl_ind;
+static int sys_tbl_len;
+
+/*
+ * This spin lock is used to keep a device from being
+ * added and deleted concurrently across CPUs or interrupts.
+ * This can occur when a user creates a device and immediately
+ * deletes it before the new_dev_notify() handler is called.
+ */
+static spinlock_t i2o_dev_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Structures and definitions for synchronous message posting.
+ * See i2o_post_wait() for description.
+ */
+struct i2o_post_wait_data
+{
+ int *status; /* Pointer to status block on caller stack */
+ int *complete; /* Pointer to completion flag on caller stack */
+ u32 id; /* Unique identifier */
+ wait_queue_head_t *wq; /* Wake up for caller (NULL for dead) */
+ struct i2o_post_wait_data *next; /* Chain */
+ void *mem[2]; /* Memory blocks to recover on failure path */
+ dma_addr_t phys[2]; /* Physical address of blocks to recover */
+ u32 size[2]; /* Size of blocks to recover */
+};
+
+static struct i2o_post_wait_data *post_wait_queue;
+static u32 post_wait_id; // Unique ID for each post_wait
+static spinlock_t post_wait_lock = SPIN_LOCK_UNLOCKED;
+static void i2o_post_wait_complete(struct i2o_controller *, u32, int);
+
+/* OSM descriptor handler */
+static struct i2o_handler i2o_core_handler =
+{
+ (void *)i2o_core_reply,
+ NULL,
+ NULL,
+ NULL,
+ "I2O core layer",
+ 0,
+ I2O_CLASS_EXECUTIVE
+};
+
+/*
+ * Used when queueing a reply to be handled later
+ */
+
+struct reply_info
+{
+ struct i2o_controller *iop;
+ u32 msg[MSG_FRAME_SIZE];
+};
+static struct reply_info evt_reply;
+static struct reply_info events[I2O_EVT_Q_LEN];
+static int evt_in;
+static int evt_out;
+static int evt_q_len;
+#define MODINC(x,y) ((x) = ((x) + 1) % (y))
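The MODINC macro above advances a ring-buffer index with wrap-around, which is how the event queue indices `evt_in` and `evt_out` cycle through `I2O_EVT_Q_LEN` slots. A minimal user-space sketch of the same arithmetic (the `ring_advance` helper is illustrative, not part of the driver):

```c
#include <assert.h>

#define MODINC(x, y) ((x) = ((x) + 1) % (y))

/* Advance a ring-buffer index `times` steps and return its final value. */
static int ring_advance(int idx, int len, int times)
{
	while (times-- > 0)
		MODINC(idx, len);
	return idx;
}
```

A full cycle of `len` increments returns the index to its starting slot, which is why the queue can run indefinitely in a fixed-size array.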
+
+/*
+ * I2O configuration lock (a semaphore, not a spinlock). Contention
+ * isn't a big deal here, so we have only one.
+ */
+
+static DECLARE_MUTEX(i2o_configuration_lock);
+
+/*
+ * Event spinlock. Used to keep the event queue sane and to prevent
+ * multiple events from being handled simultaneously.
+ */
+static spinlock_t i2o_evt_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Semaphore used to synchronize event handling thread with
+ * interrupt handler.
+ */
+
+static DECLARE_MUTEX(evt_sem);
+static DECLARE_COMPLETION(evt_dead);
+static DECLARE_WAIT_QUEUE_HEAD(evt_wait);
+
+static struct notifier_block i2o_reboot_notifier =
+{
+ i2o_reboot_event,
+ NULL,
+ 0
+};
+
+/*
+ * Config options
+ */
+
+static int verbose;
+
+#if BITS_PER_LONG == 64
+/**
+ * i2o_context_list_add - append a pointer to the context list and return
+ * a matching context id.
+ * @ptr: pointer to add to the context list
+ * @c: controller to which the context list belongs
+ * returns a context id, which can be used in the transaction context
+ * field.
+ *
+ * Because the context field in I2O is only 32 bits wide, on 64-bit
+ * systems a pointer is too large to fit in the context field. The
+ * i2o_context_list functions map pointers to context ids.
+ */
+u32 i2o_context_list_add(void *ptr, struct i2o_controller *c) {
+ u32 context = 1;
+ struct i2o_context_list_element **entry = &c->context_list;
+ struct i2o_context_list_element *element;
+ unsigned long flags;
+
+ spin_lock_irqsave(&c->context_list_lock, flags);
+ while(*entry && ((*entry)->flags & I2O_CONTEXT_LIST_USED)) {
+ if((*entry)->context >= context)
+ context = (*entry)->context + 1;
+ entry = &((*entry)->next);
+ }
+
+ if(!*entry) {
+ if(unlikely(!context)) {
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ printk(KERN_EMERG "i2o_core: context list overflow\n");
+ return 0;
+ }
+
+ element = kmalloc(sizeof(struct i2o_context_list_element), GFP_KERNEL);
+ if(!element) {
+ printk(KERN_EMERG "i2o_core: could not allocate memory for context list element\n");
+ return 0;
+ }
+ element->context = context;
+ element->next = NULL;
+ *entry = element;
+ } else
+ element = *entry;
+
+ element->ptr = ptr;
+ element->flags = I2O_CONTEXT_LIST_USED;
+
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ dprintk(KERN_DEBUG "i2o_core: add context to list %p -> %d\n", ptr, context);
+ return context;
+}
+
+/**
+ * i2o_context_list_remove - remove a ptr from the context list and return
+ * the matching context id.
+ * @ptr: pointer to be removed from the context list
+ * @c: controller to which the context list belongs
+ * returns the context id, which can be used in the transaction context
+ * field.
+ */
+u32 i2o_context_list_remove(void *ptr, struct i2o_controller *c) {
+ struct i2o_context_list_element **entry = &c->context_list;
+ struct i2o_context_list_element *element;
+ u32 context;
+ unsigned long flags;
+
+ spin_lock_irqsave(&c->context_list_lock, flags);
+ while(*entry && ((*entry)->ptr != ptr))
+ entry = &((*entry)->next);
+
+ if(unlikely(!*entry)) {
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ printk(KERN_WARNING "i2o_core: could not remove nonexistent ptr %p\n", ptr);
+ return 0;
+ }
+
+ element = *entry;
+
+ context = element->context;
+ element->ptr = NULL;
+ element->flags |= I2O_CONTEXT_LIST_DELETED;
+
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ dprintk(KERN_DEBUG "i2o_core: marked as deleted in context list %p -> %d\n", ptr, context);
+ return context;
+}
+
+/**
+ * i2o_context_list_get - get a pointer from the context list and remove
+ * it from the list.
+ * @context: context id to which the pointer belongs
+ * @c: controller to which the context list belongs
+ * returns the pointer matching the context id
+ */
+void *i2o_context_list_get(u32 context, struct i2o_controller *c) {
+ struct i2o_context_list_element **entry = &c->context_list;
+ struct i2o_context_list_element *element;
+ void *ptr;
+ int count = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&c->context_list_lock, flags);
+ while(*entry && ((*entry)->context != context)) {
+ entry = &((*entry)->next);
+ count++;
+ }
+
+ if(unlikely(!*entry)) {
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ printk(KERN_WARNING "i2o_core: context id %d not found\n", context);
+ return NULL;
+ }
+
+ element = *entry;
+ ptr = element->ptr;
+ if(count >= I2O_CONTEXT_LIST_MIN_LENGTH) {
+ *entry = (*entry)->next;
+ kfree(element);
+ } else {
+ element->ptr = NULL;
+ element->flags &= ~I2O_CONTEXT_LIST_USED; /* clear the used bit */
+ }
+
+ spin_unlock_irqrestore(&c->context_list_lock, flags);
+ dprintk(KERN_DEBUG "i2o_core: get ptr from context list %d -> %p\n", context, ptr);
+ return ptr;
+}
+#endif
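The context-list functions above exist because a 64-bit kernel pointer cannot ride in the 32-bit I2O transaction-context field, so the driver hands the IOP a small id and translates it back on reply. The idea can be sketched in user space with a plain table (a simplified illustration under assumed names; the real driver uses a spinlock-protected linked list that recycles deleted slots):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define CTX_MAX 16

/* Slot i holds the pointer registered under context id i+1 (0 = invalid). */
static void *ctx_table[CTX_MAX];

/* Store ptr and return a nonzero 32-bit context id, or 0 on overflow. */
static uint32_t ctx_add(void *ptr)
{
	size_t i;

	for (i = 0; i < CTX_MAX; i++) {
		if (!ctx_table[i]) {
			ctx_table[i] = ptr;
			return (uint32_t)(i + 1);
		}
	}
	return 0;
}

/* Look up the pointer for an id and release the slot; NULL if unknown. */
static void *ctx_get(uint32_t id)
{
	void *ptr;

	if (id == 0 || id > CTX_MAX)
		return NULL;
	ptr = ctx_table[id - 1];
	ctx_table[id - 1] = NULL;
	return ptr;
}
```

As in the driver, an id stays valid only until the reply retrieves it; a second lookup of the same id fails.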
+
+/*
+ * I2O Core reply handler
+ */
+static void i2o_core_reply(struct i2o_handler *h, struct i2o_controller *c,
+ struct i2o_message *m)
+{
+ u32 *msg=(u32 *)m;
+ u32 status;
+ u32 context = msg[2];
+
+ if (msg[0] & MSG_FAIL) // Fail bit is set
+ {
+ u32 *preserved_msg = (u32*)(c->msg_virt + msg[7]);
+
+ i2o_report_status(KERN_INFO, "i2o_core", msg);
+ i2o_dump_message(preserved_msg);
+
+ /* If the failed request needs special treatment,
+ * it should be done here. */
+
+ /* Release the preserved msg by resubmitting it as a NOP */
+
+ preserved_msg[0] = cpu_to_le32(THREE_WORD_MSG_SIZE | SGL_OFFSET_0);
+ preserved_msg[1] = cpu_to_le32(I2O_CMD_UTIL_NOP << 24 | HOST_TID << 12 | 0);
+ preserved_msg[2] = 0;
+ i2o_post_message(c, msg[7]);
+
+ /* If reply to i2o_post_wait failed, return causes a timeout */
+
+ return;
+ }
+
+#ifdef DRIVERDEBUG
+ i2o_report_status(KERN_INFO, "i2o_core", msg);
+#endif
+
+ if(msg[2]&0x80000000) // Post wait message
+ {
+ if (msg[4] >> 24)
+ status = (msg[4] & 0xFFFF);
+ else
+ status = I2O_POST_WAIT_OK;
+
+ i2o_post_wait_complete(c, context, status);
+ return;
+ }
+
+ if(m->function == I2O_CMD_UTIL_EVT_REGISTER)
+ {
+ memcpy(events[evt_in].msg, msg, (msg[0]>>16)<<2);
+ events[evt_in].iop = c;
+
+ spin_lock(&i2o_evt_lock);
+ MODINC(evt_in, I2O_EVT_Q_LEN);
+ if(evt_q_len == I2O_EVT_Q_LEN)
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ else
+ evt_q_len++;
+ spin_unlock(&i2o_evt_lock);
+
+ up(&evt_sem);
+ wake_up_interruptible(&evt_wait);
+ return;
+ }
+
+ if(m->function == I2O_CMD_LCT_NOTIFY)
+ {
+ up(&c->lct_sem);
+ return;
+ }
+
+ /*
+ * If this happens, we want to dump the message to the syslog so
+ * it can be sent back to the card manufacturer by the end user
+ * to aid in debugging.
+ *
+ */
+ printk(KERN_WARNING "%s: Unsolicited message reply sent to core! "
+ "Message dumped to syslog\n",
+ c->name);
+ i2o_dump_message(msg);
+
+ return;
+}
+
+/**
+ * i2o_install_handler - install a message handler
+ * @h: Handler structure
+ *
+ * Install an I2O handler - these handle the asynchronous messaging
+ * from the card once it has initialised. If the table of handlers is
+ * full then -ENOSPC is returned. On a success 0 is returned and the
+ * context field is set by the function. The structure is part of the
+ * system from this time onwards. It must not be freed until it has
+ * been uninstalled
+ */
+
+int i2o_install_handler(struct i2o_handler *h)
+{
+ int i;
+ down(&i2o_configuration_lock);
+ for(i=0;i<MAX_I2O_MODULES;i++)
+ {
+ if(i2o_handlers[i]==NULL)
+ {
+ h->context = i;
+ i2o_handlers[i]=h;
+ up(&i2o_configuration_lock);
+ return 0;
+ }
+ }
+ up(&i2o_configuration_lock);
+ return -ENOSPC;
+}
+
+/**
+ * i2o_remove_handler - remove an i2o message handler
+ * @h: handler
+ *
+ * Remove a message handler previously installed with i2o_install_handler.
+ * After this function returns the handler object can be freed or re-used
+ */
+
+int i2o_remove_handler(struct i2o_handler *h)
+{
+ i2o_handlers[h->context]=NULL;
+ return 0;
+}
+
+
+/*
+ * Each I2O controller has a chain of devices on it.
+ * Each device has a pointer to its LCT entry to be used
+ * for fun purposes.
+ */
+
+/**
+ * i2o_install_device - attach a device to a controller
+ * @c: controller
+ * @d: device
+ *
+ * Add a new device to an i2o controller. This can be called from
+ * non interrupt contexts only. It adds the device and marks it as
+ * unclaimed. The device memory becomes part of the kernel and must
+ * be uninstalled before being freed or reused. Zero is returned
+ * on success.
+ */
+
+int i2o_install_device(struct i2o_controller *c, struct i2o_device *d)
+{
+ int i;
+
+ down(&i2o_configuration_lock);
+ d->controller=c;
+ d->owner=NULL;
+ d->next=c->devices;
+ d->prev=NULL;
+ if (c->devices != NULL)
+ c->devices->prev=d;
+ c->devices=d;
+ *d->dev_name = 0;
+
+ for(i = 0; i < I2O_MAX_MANAGERS; i++)
+ d->managers[i] = NULL;
+
+ up(&i2o_configuration_lock);
+ return 0;
+}
+
+/* we need this version to call out of i2o_delete_controller */
+
+int __i2o_delete_device(struct i2o_device *d)
+{
+ struct i2o_device **p;
+ int i;
+
+ p=&(d->controller->devices);
+
+ /*
+ * Hey we have a driver!
+ * Check to see if the driver wants us to notify it of
+ * device deletion. If it doesn't we assume that it
+ * is unsafe to delete a device with an owner and
+ * fail.
+ */
+ if(d->owner)
+ {
+ if(d->owner->dev_del_notify)
+ {
+ dprintk(KERN_INFO "Device has owner, notifying\n");
+ d->owner->dev_del_notify(d->controller, d);
+ if(d->owner)
+ {
+ printk(KERN_WARNING
+ "Driver \"%s\" did not release device!\n", d->owner->name);
+ return -EBUSY;
+ }
+ }
+ else
+ return -EBUSY;
+ }
+
+ /*
+ * Tell any other users who are talking to this device
+ * that it's going away. We assume that everything works.
+ */
+ for(i=0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(d->managers[i] && d->managers[i]->dev_del_notify)
+ d->managers[i]->dev_del_notify(d->controller, d);
+ }
+
+ while(*p!=NULL)
+ {
+ if(*p==d)
+ {
+ /*
+ * Destroy
+ */
+ *p=d->next;
+ kfree(d);
+ return 0;
+ }
+ p=&((*p)->next);
+ }
+ printk(KERN_ERR "i2o_delete_device: passed invalid device.\n");
+ return -EINVAL;
+}
+
+/**
+ * i2o_delete_device - remove an i2o device
+ * @d: device to remove
+ *
+ * This function unhooks a device from a controller. The device
+ * will not be unhooked if it has an owner who does not wish to free
+ * it, or if the owner lacks a dev_del_notify function. In that case
+ * -EBUSY is returned. On success 0 is returned. Other errors cause
+ * negative errno values to be returned
+ */
+
+int i2o_delete_device(struct i2o_device *d)
+{
+ int ret;
+
+ down(&i2o_configuration_lock);
+
+ /*
+ * Seek, locate
+ */
+
+ ret = __i2o_delete_device(d);
+
+ up(&i2o_configuration_lock);
+
+ return ret;
+}
+
+/**
+ * i2o_install_controller - attach a controller
+ * @c: controller
+ *
+ * Add a new controller to the i2o layer. This can be called from
+ * non interrupt contexts only. It adds the controller and marks it as
+ * unused with no devices. If the tables are full or memory allocations
+ * fail then a negative errno code is returned. On success zero is
+ * returned and the controller is bound to the system. The structure
+ * must not be freed or reused until being uninstalled.
+ */
+
+int i2o_install_controller(struct i2o_controller *c)
+{
+ int i;
+ down(&i2o_configuration_lock);
+ for(i=0;i<MAX_I2O_CONTROLLERS;i++)
+ {
+ if(i2o_controllers[i]==NULL)
+ {
+ c->dlct = (i2o_lct*)pci_alloc_consistent(c->pdev, 8192, &c->dlct_phys);
+ if(c->dlct==NULL)
+ {
+ up(&i2o_configuration_lock);
+ return -ENOMEM;
+ }
+ i2o_controllers[i]=c;
+ c->devices = NULL;
+ c->next=i2o_controller_chain;
+ i2o_controller_chain=c;
+ c->unit = i;
+ c->page_frame = NULL;
+ c->hrt = NULL;
+ c->hrt_len = 0;
+ c->lct = NULL;
+ c->status_block = NULL;
+ sprintf(c->name, "i2o/iop%d", i);
+ i2o_num_controllers++;
+ init_MUTEX_LOCKED(&c->lct_sem);
+ up(&i2o_configuration_lock);
+ return 0;
+ }
+ }
+ printk(KERN_ERR "No free i2o controller slots.\n");
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+}
+
+/**
+ * i2o_delete_controller - delete a controller
+ * @c: controller
+ *
+ * Remove an i2o controller from the system. If the controller or its
+ * devices are busy then -EBUSY is returned. On a failure a negative
+ * errno code is returned. On success zero is returned.
+ */
+
+int i2o_delete_controller(struct i2o_controller *c)
+{
+ struct i2o_controller **p;
+ int users;
+ char name[16];
+ int stat;
+
+ dprintk(KERN_INFO "Deleting controller %s\n", c->name);
+
+ /*
+ * Clear event registration as this can cause weird behavior
+ */
+ if(c->status_block->iop_state == ADAPTER_STATE_OPERATIONAL)
+ i2o_event_register(c, core_context, 0, 0, 0);
+
+ down(&i2o_configuration_lock);
+ if((users=atomic_read(&c->users)))
+ {
+ dprintk(KERN_INFO "I2O: %d users for controller %s\n", users,
+ c->name);
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ while(c->devices)
+ {
+ if(__i2o_delete_device(c->devices)<0)
+ {
+ /* Shouldn't happen */
+ I2O_IRQ_WRITE32(c, 0xFFFFFFFF);
+ c->enabled = 0;
+ up(&i2o_configuration_lock);
+ return -EBUSY;
+ }
+ }
+
+ /*
+ * If this is shutdown time, the thread's already been killed
+ */
+ if(c->lct_running) {
+ stat = kill_proc(c->lct_pid, SIGKILL, 1);
+ if(!stat) {
+ int count = 10 * 100;
+ while(c->lct_running && --count) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule_timeout(1);
+ }
+
+ if(!count)
+ printk(KERN_ERR
+ "%s: LCT thread still running!\n",
+ c->name);
+ }
+ }
+
+ p=&i2o_controller_chain;
+
+ while(*p)
+ {
+ if(*p==c)
+ {
+ /* Ask the IOP to switch to RESET state */
+ i2o_reset_controller(c);
+
+ /* Release IRQ */
+ i2o_pci_dispose(c);
+
+ *p=c->next;
+ up(&i2o_configuration_lock);
+
+ if(c->page_frame)
+ {
+ pci_unmap_single(c->pdev, c->page_frame_map, MSG_POOL_SIZE, PCI_DMA_FROMDEVICE);
+ kfree(c->page_frame);
+ }
+ if(c->hrt)
+ pci_free_consistent(c->pdev, c->hrt_len, c->hrt, c->hrt_phys);
+ if(c->lct)
+ pci_free_consistent(c->pdev, c->lct->table_size << 2, c->lct, c->lct_phys);
+ if(c->status_block)
+ pci_free_consistent(c->pdev, sizeof(i2o_status_block), c->status_block, c->status_block_phys);
+ if(c->dlct)
+ pci_free_consistent(c->pdev, 8192, c->dlct, c->dlct_phys);
+
+ i2o_controllers[c->unit]=NULL;
+ memcpy(name, c->name, strlen(c->name)+1);
+ kfree(c);
+ dprintk(KERN_INFO "%s: Deleted from controller chain.\n", name);
+
+ i2o_num_controllers--;
+ return 0;
+ }
+ p=&((*p)->next);
+ }
+ up(&i2o_configuration_lock);
+ printk(KERN_ERR "i2o_delete_controller: bad pointer!\n");
+ return -ENOENT;
+}
+
+/**
+ * i2o_unlock_controller - unlock a controller
+ * @c: controller to unlock
+ *
+ * Release a lock on an i2o controller, allowing it to be deleted again.
+ * i2o controllers are not refcounted, so deletion of an in-use controller
+ * will fail rather than take effect on the last dereference.
+ */
+
+void i2o_unlock_controller(struct i2o_controller *c)
+{
+ atomic_dec(&c->users);
+}
+
+/**
+ * i2o_find_controller - return a locked controller
+ * @n: controller number
+ *
+ * Returns a pointer to the controller object. The controller is locked
+ * on return. NULL is returned if the controller is not found.
+ */
+
+struct i2o_controller *i2o_find_controller(int n)
+{
+ struct i2o_controller *c;
+
+ if(n<0 || n>=MAX_I2O_CONTROLLERS)
+ return NULL;
+
+ down(&i2o_configuration_lock);
+ c=i2o_controllers[n];
+ if(c!=NULL)
+ atomic_inc(&c->users);
+ up(&i2o_configuration_lock);
+ return c;
+}
+
+/**
+ * i2o_issue_claim - claim or release a device
+ * @cmd: command
+ * @c: controller to claim for
+ * @tid: i2o task id
+ * @type: type of claim
+ *
+ * Issue I2O UTIL_CLAIM and UTIL_RELEASE messages. The message to be sent
+ * is set by cmd. The tid is the task id of the object to claim and the
+ * type is the claim type (see the i2o standard)
+ *
+ * Zero is returned on success.
+ */
+
+static int i2o_issue_claim(u32 cmd, struct i2o_controller *c, int tid, u32 type)
+{
+ u32 msg[5];
+
+ msg[0] = FIVE_WORD_MSG_SIZE | SGL_OFFSET_0;
+ msg[1] = cmd << 24 | HOST_TID<<12 | tid;
+ msg[3] = 0;
+ msg[4] = type;
+
+ return i2o_post_wait(c, msg, sizeof(msg), 60);
+}
+
+/*
+ * i2o_claim_device - claim a device for use by an OSM
+ * @d: device to claim
+ * @h: handler for this device
+ *
+ * Do the leg work to assign a device to a given OSM on Linux. The
+ * kernel updates the internal handler data for the device and then
+ * performs an I2O claim for the device, attempting to claim the
+ * device as primary. If the attempt fails a negative errno code
+ * is returned. On success zero is returned.
+ */
+
+int i2o_claim_device(struct i2o_device *d, struct i2o_handler *h)
+{
+ int ret = 0;
+
+ down(&i2o_configuration_lock);
+ if (d->owner) {
+ printk(KERN_INFO "Device claim called, but dev already owned by %s!",
+ d->owner->name);
+ ret = -EBUSY;
+ goto out;
+ }
+ d->owner=h;
+
+ if(i2o_issue_claim(I2O_CMD_UTIL_CLAIM ,d->controller,d->lct_data.tid,
+ I2O_CLAIM_PRIMARY))
+ {
+ d->owner = NULL;
+ ret = -EBUSY;
+ }
+out:
+ up(&i2o_configuration_lock);
+ return ret;
+}
+
+/**
+ * i2o_release_device - release a device that the OSM is using
+ * @d: device to claim
+ * @h: handler for this device
+ *
+ * Drop a claim by an OSM on a given I2O device. The handler is cleared
+ * and 0 is returned on success.
+ *
+ * AC - some devices seem to want to refuse an unclaim until they have
+ * finished internal processing. It makes sense since you don't want a
+ * new device to go reconfiguring the entire system until you are done.
+ * Thus we are prepared to wait briefly.
+ */
+
+int i2o_release_device(struct i2o_device *d, struct i2o_handler *h)
+{
+ int err = 0;
+ int tries;
+
+ down(&i2o_configuration_lock);
+ if (d->owner != h) {
+ printk(KERN_INFO "Claim release called, but not owned by %s!\n",
+ h->name);
+ up(&i2o_configuration_lock);
+ return -ENOENT;
+ }
+
+ for(tries=0;tries<10;tries++)
+ {
+ d->owner = NULL;
+
+ /*
+ * If the controller takes a nonblocking approach to
+ * releases we have to sleep/poll for a few times.
+ */
+
+ if((err=i2o_issue_claim(I2O_CMD_UTIL_RELEASE, d->controller, d->lct_data.tid, I2O_CLAIM_PRIMARY)) )
+ {
+ err = -ENXIO;
+ current->state = TASK_UNINTERRUPTIBLE;
+ schedule_timeout(HZ);
+ }
+ else
+ {
+ err=0;
+ break;
+ }
+ }
+ up(&i2o_configuration_lock);
+ return err;
+}
+
+/**
+ * i2o_device_notify_on - Enable deletion notifiers
+ * @d: device for notification
+ * @h: handler to install
+ *
+ * Called by OSMs to let the core know that they want to be
+ * notified if the given device is deleted from the system.
+ */
+
+int i2o_device_notify_on(struct i2o_device *d, struct i2o_handler *h)
+{
+ int i;
+
+ if(d->num_managers == I2O_MAX_MANAGERS)
+ return -ENOSPC;
+
+ for(i = 0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(!d->managers[i])
+ {
+ d->managers[i] = h;
+ break;
+ }
+ }
+
+ d->num_managers++;
+
+ return 0;
+}
+
+/**
+ * i2o_device_notify_off - Remove deletion notifiers
+ * @d: device for notification
+ * @h: handler to remove
+ *
+ * Called by OSMs to let the core know that they no longer
+ * are interested in the fate of the given device.
+ */
+int i2o_device_notify_off(struct i2o_device *d, struct i2o_handler *h)
+{
+ int i;
+
+ for(i=0; i < I2O_MAX_MANAGERS; i++)
+ {
+ if(d->managers[i] == h)
+ {
+ d->managers[i] = NULL;
+ d->num_managers--;
+ return 0;
+ }
+ }
+
+ return -ENOENT;
+}
+
+/**
+ * i2o_event_register - register interest in an event
+ * @c: Controller to register interest with
+ * @tid: I2O task id
+ * @init_context: initiator context to use with this notifier
+ * @tr_context: transaction context to use with this notifier
+ * @evt_mask: mask of events
+ *
+ * Creates and posts an event registration message to the task. No reply
+ * is waited for, or expected. Errors in posting will be reported.
+ */
+
+int i2o_event_register(struct i2o_controller *c, u32 tid,
+ u32 init_context, u32 tr_context, u32 evt_mask)
+{
+ u32 msg[5]; // Not performance critical, so we just
+ // i2o_post_this it instead of building it
+ // in IOP memory
+
+ msg[0] = FIVE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_UTIL_EVT_REGISTER<<24 | HOST_TID<<12 | tid;
+ msg[2] = init_context;
+ msg[3] = tr_context;
+ msg[4] = evt_mask;
+
+ return i2o_post_this(c, msg, sizeof(msg));
+}
+
+/*
+ * i2o_event_ack - acknowledge an event
+ * @c: controller
+ * @msg: pointer to the UTIL_EVENT_REGISTER reply we received
+ *
+ * We just take a pointer to the original UTIL_EVENT_REGISTER reply
+ * message and change the function code, since that is how the spec
+ * describes an EventAck message.
+ */
+
+int i2o_event_ack(struct i2o_controller *c, u32 *msg)
+{
+ struct i2o_message *m = (struct i2o_message *)msg;
+
+ m->function = I2O_CMD_UTIL_EVT_ACK;
+
+ return i2o_post_wait(c, msg, m->size * 4, 2);
+}
+
+/*
+ * Core event handler. Runs as a separate thread and is woken
+ * up whenever there is an Executive class event.
+ */
+static int i2o_core_evt(void *reply_data)
+{
+ struct reply_info *reply = (struct reply_info *) reply_data;
+ u32 *msg = reply->msg;
+ struct i2o_controller *c = NULL;
+ unsigned long flags;
+
+ daemonize("i2oevtd");
+ allow_signal(SIGKILL);
+
+ evt_running = 1;
+
+ while(1)
+ {
+ if(down_interruptible(&evt_sem))
+ {
+ dprintk(KERN_INFO "I2O event thread dead\n");
+ printk("exiting...");
+ evt_running = 0;
+ complete_and_exit(&evt_dead, 0);
+ }
+
+ /*
+ * Copy the data out of the queue so that we don't have to lock
+ * around the whole function and just around the qlen update
+ */
+ spin_lock_irqsave(&i2o_evt_lock, flags);
+ memcpy(reply, &events[evt_out], sizeof(struct reply_info));
+ MODINC(evt_out, I2O_EVT_Q_LEN);
+ evt_q_len--;
+ spin_unlock_irqrestore(&i2o_evt_lock, flags);
+
+ c = reply->iop;
+ dprintk(KERN_INFO "I2O IRTOS EVENT: iop%d, event %#10x\n", c->unit, msg[4]);
+
+ /*
+ * We do not attempt to delete/quiesce/etc. the controller if
+ * some sort of error indication occurs. We may want to do
+ * so in the future, but for now we just let the user deal with
+ * it. One reason for this is that what to do with an error
+ * or when to send what error is not really agreed on, so
+ * we get errors that may not be fatal but just look like they
+ * are...so let the user deal with it.
+ */
+ switch(msg[4])
+ {
+ case I2O_EVT_IND_EXEC_RESOURCE_LIMITS:
+ printk(KERN_ERR "%s: Out of resources\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EXEC_POWER_FAIL:
+ printk(KERN_ERR "%s: Power failure\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EXEC_HW_FAIL:
+ {
+ char *fail[] =
+ {
+ "Unknown Error",
+ "Power Lost",
+ "Code Violation",
+ "Parity Error",
+ "Code Execution Exception",
+ "Watchdog Timer Expired"
+ };
+
+ if(msg[5] < 6) /* fail[] has 6 entries, indices 0-5 */
+ printk(KERN_ERR "%s: Hardware Failure: %s\n",
+ c->name, fail[msg[5]]);
+ else
+ printk(KERN_ERR "%s: Unknown Hardware Failure\n", c->name);
+
+ break;
+ }
+
+ /*
+ * New device created
+ * - Create a new i2o_device entry
+ * - Inform all interested drivers about this device's existence
+ */
+ case I2O_EVT_IND_EXEC_NEW_LCT_ENTRY:
+ {
+ struct i2o_device *d = (struct i2o_device *)
+ kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
+ int i;
+
+ if (d == NULL) {
+ printk(KERN_EMERG "i2oevtd: out of memory\n");
+ break;
+ }
+ memcpy(&d->lct_data, &msg[5], sizeof(i2o_lct_entry));
+
+ d->next = NULL;
+ d->controller = c;
+ d->flags = 0;
+
+ i2o_report_controller_unit(c, d);
+ i2o_install_device(c,d);
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] &&
+ i2o_handlers[i]->new_dev_notify &&
+ (i2o_handlers[i]->class&d->lct_data.class_id))
+ {
+ spin_lock(&i2o_dev_lock);
+ i2o_handlers[i]->new_dev_notify(c,d);
+ spin_unlock(&i2o_dev_lock);
+ }
+ }
+
+ break;
+ }
+
+ /*
+ * LCT entry for a device has been modified, so update it
+ * internally.
+ */
+ case I2O_EVT_IND_EXEC_MODIFIED_LCT:
+ {
+ struct i2o_device *d;
+ i2o_lct_entry *new_lct = (i2o_lct_entry *)&msg[5];
+
+ for(d = c->devices; d; d = d->next)
+ {
+ if(d->lct_data.tid == new_lct->tid)
+ {
+ memcpy(&d->lct_data, new_lct, sizeof(i2o_lct_entry));
+ break;
+ }
+ }
+ break;
+ }
+
+ case I2O_EVT_IND_CONFIGURATION_FLAG:
+ printk(KERN_WARNING "%s requires user configuration\n", c->name);
+ break;
+
+ case I2O_EVT_IND_GENERAL_WARNING:
+ printk(KERN_WARNING "%s: Warning notification received! "
+ "Check configuration for errors!\n", c->name);
+ break;
+
+ case I2O_EVT_IND_EVT_MASK_MODIFIED:
+ /* Well I guess that was us hey .. */
+ break;
+
+ default:
+ printk(KERN_WARNING "%s: No handler for event (0x%08x)\n", c->name, msg[4]);
+ break;
+ }
+ }
+
+ return 0;
+}
+
+/*
+ * Dynamic LCT update. This compares the LCT with the currently
+ * installed devices to check for device deletions. This is needed because
+ * there is no DELETED_LCT_ENTRY EventIndicator for the Executive class,
+ * so we can't just have the event handler do this, which is annoying.
+ *
+ * This is a hole in the spec that will hopefully be fixed someday.
+ */
+static int i2o_dyn_lct(void *foo)
+{
+ struct i2o_controller *c = (struct i2o_controller *)foo;
+ struct i2o_device *d = NULL;
+ struct i2o_device *d1 = NULL;
+ int i = 0;
+ int found = 0;
+ int entries;
+ void *tmp;
+
+ daemonize("iop%d_lctd", c->unit);
+ allow_signal(SIGKILL);
+
+ c->lct_running = 1;
+
+ while(1)
+ {
+ down_interruptible(&c->lct_sem);
+ if(signal_pending(current))
+ {
+ dprintk(KERN_ERR "%s: LCT thread dead\n", c->name);
+ c->lct_running = 0;
+ return 0;
+ }
+
+ entries = c->dlct->table_size;
+ entries -= 3;
+ entries /= 9;
+
+ dprintk(KERN_INFO "%s: Dynamic LCT Update\n",c->name);
+ dprintk(KERN_INFO "%s: Dynamic LCT contains %d entries\n", c->name, entries);
+
+ if(!entries)
+ {
+ printk(KERN_INFO "%s: Empty LCT???\n", c->name);
+ continue;
+ }
+
+ /*
+ * Loop through all the devices on the IOP looking for their
+ * LCT data in the LCT. We assume that TIDs are not repeated,
+ * as that is the only way to really tell. It's been confirmed
+ * by the IRTOS vendor(s?) that TIDs are not reused until they
+ * wrap around (4096), and I doubt a system will be up long enough
+ * to create/delete that many devices.
+ */
+ for(d = c->devices; d; )
+ {
+ found = 0;
+ d1 = d->next;
+
+ for(i = 0; i < entries; i++)
+ {
+ if(d->lct_data.tid == c->dlct->lct_entry[i].tid)
+ {
+ found = 1;
+ break;
+ }
+ }
+ if(!found)
+ {
+ dprintk(KERN_INFO "i2o_core: Deleted device!\n");
+ spin_lock(&i2o_dev_lock);
+ i2o_delete_device(d);
+ spin_unlock(&i2o_dev_lock);
+ }
+ d = d1;
+ }
+
+ /*
+ * Tell LCT to renotify us next time there is a change
+ */
+ i2o_lct_notify(c);
+
+ /*
+ * Copy new LCT into public LCT
+ *
+ * Possible race if someone is reading LCT while we are copying
+ * over it. If this happens, we'll fix it then. but I doubt that
+ * the LCT will get updated often enough or will get read by
+ * a user often enough to worry.
+ */
+ if(c->lct->table_size < c->dlct->table_size)
+ {
+ dma_addr_t phys;
+ tmp = c->lct;
+ c->lct = pci_alloc_consistent(c->pdev, c->dlct->table_size<<2, &phys);
+ if(!c->lct)
+ {
+ printk(KERN_ERR "%s: No memory for LCT!\n", c->name);
+ c->lct = tmp;
+ continue;
+ }
+ pci_free_consistent(c->pdev, ((i2o_lct *)tmp)->table_size << 2, tmp, c->lct_phys);
+ c->lct_phys = phys;
+ }
+ memcpy(c->lct, c->dlct, c->dlct->table_size<<2);
+ }
+
+ return 0;
+}
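The entry count computed near the top of i2o_dyn_lct() follows from the LCT layout: the table header occupies 3 dwords and each LCT entry occupies 9 dwords, so `entries = (table_size - 3) / 9`. A tiny sketch of that arithmetic (the constant names here are illustrative, assuming those I2O layout sizes):

```c
#include <assert.h>

#define LCT_HEADER_DWORDS 3	/* LCT header size, in 32-bit words */
#define LCT_ENTRY_DWORDS  9	/* size of one LCT entry, in 32-bit words */

/* Number of device entries in an LCT whose total size is given in dwords. */
static int lct_entries(int table_size)
{
	return (table_size - LCT_HEADER_DWORDS) / LCT_ENTRY_DWORDS;
}
```

A header-only table (size 3) therefore reports zero devices, which is the "Empty LCT???" case the thread warns about.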
+
+/**
+ * i2o_run_queue - process pending events on a controller
+ * @c: controller to process
+ *
+ * This is called by the bus specific driver layer when an interrupt
+ * or poll of this card interface is desired.
+ */
+
+void i2o_run_queue(struct i2o_controller *c)
+{
+ struct i2o_message *m;
+ u32 mv;
+ u32 *msg;
+
+ /*
+ * Old 960 steppings had a bug in the I2O unit that caused
+ * the queue to appear empty when it wasn't.
+ */
+ if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
+ mv=I2O_REPLY_READ32(c);
+
+ while(mv!=0xFFFFFFFF)
+ {
+ struct i2o_handler *i;
+ /* Map the message from the page frame map to kernel virtual */
+ /* m=(struct i2o_message *)(mv - (unsigned long)c->page_frame_map + (unsigned long)c->page_frame); */
+ m=(struct i2o_message *)bus_to_virt(mv);
+ msg=(u32*)m;
+
+ /*
+ * Ensure this message is seen coherently but cacheably by
+ * the processor
+ */
+
+ pci_dma_sync_single_for_cpu(c->pdev, c->page_frame_map, MSG_FRAME_SIZE, PCI_DMA_FROMDEVICE);
+
+ /*
+ * Despatch it
+ */
+
+ i=i2o_handlers[m->initiator_context&(MAX_I2O_MODULES-1)];
+ if(i && i->reply)
+ i->reply(i,c,m);
+ else
+ {
+ printk(KERN_WARNING "I2O: Spurious reply to handler %d\n",
+ m->initiator_context&(MAX_I2O_MODULES-1));
+ }
+ i2o_flush_reply(c,mv);
+ mb();
+
+ /* That 960 bug again... */
+ if((mv=I2O_REPLY_READ32(c))==0xFFFFFFFF)
+ mv=I2O_REPLY_READ32(c);
+ }
+}
+
+
+/**
+ * i2o_get_class_name - do i2o class name lookup
+ * @class: class number
+ *
+ * Return a descriptive string for an i2o class
+ */
+
+const char *i2o_get_class_name(int class)
+{
+ int idx = 16;
+ static char *i2o_class_name[] = {
+ "Executive",
+ "Device Driver Module",
+ "Block Device",
+ "Tape Device",
+ "LAN Interface",
+ "WAN Interface",
+ "Fibre Channel Port",
+ "Fibre Channel Device",
+ "SCSI Device",
+ "ATE Port",
+ "ATE Device",
+ "Floppy Controller",
+ "Floppy Device",
+ "Secondary Bus Port",
+ "Peer Transport Agent",
+ "Peer Transport",
+ "Unknown"
+ };
+
+ switch(class&0xFFF)
+ {
+ case I2O_CLASS_EXECUTIVE:
+ idx = 0; break;
+ case I2O_CLASS_DDM:
+ idx = 1; break;
+ case I2O_CLASS_RANDOM_BLOCK_STORAGE:
+ idx = 2; break;
+ case I2O_CLASS_SEQUENTIAL_STORAGE:
+ idx = 3; break;
+ case I2O_CLASS_LAN:
+ idx = 4; break;
+ case I2O_CLASS_WAN:
+ idx = 5; break;
+ case I2O_CLASS_FIBRE_CHANNEL_PORT:
+ idx = 6; break;
+ case I2O_CLASS_FIBRE_CHANNEL_PERIPHERAL:
+ idx = 7; break;
+ case I2O_CLASS_SCSI_PERIPHERAL:
+ idx = 8; break;
+ case I2O_CLASS_ATE_PORT:
+ idx = 9; break;
+ case I2O_CLASS_ATE_PERIPHERAL:
+ idx = 10; break;
+ case I2O_CLASS_FLOPPY_CONTROLLER:
+ idx = 11; break;
+ case I2O_CLASS_FLOPPY_DEVICE:
+ idx = 12; break;
+ case I2O_CLASS_BUS_ADAPTER_PORT:
+ idx = 13; break;
+ case I2O_CLASS_PEER_TRANSPORT_AGENT:
+ idx = 14; break;
+ case I2O_CLASS_PEER_TRANSPORT:
+ idx = 15; break;
+ }
+
+ return i2o_class_name[idx];
+}
+
+
+/**
+ * i2o_wait_message - obtain an i2o message from the IOP
+ * @c: controller
+ * @why: explanation of what the message will be used for
+ *
+ * This function waits up to 5 seconds for a message slot to become
+ * available. If no message becomes available it logs an error naming
+ * @why, the intended use of the message (eg "get_status"), and
+ * returns 0xFFFFFFFF.
+ *
+ * On a success the message is returned. This is the physical page
+ * frame offset address from the read port. (See the i2o spec)
+ */
+
+u32 i2o_wait_message(struct i2o_controller *c, char *why)
+{
+ long time=jiffies;
+ u32 m;
+ while((m=I2O_POST_READ32(c))==0xFFFFFFFF)
+ {
+ if((jiffies-time)>=5*HZ)
+ {
+ dprintk(KERN_ERR "%s: Timeout waiting for message frame to send %s.\n",
+ c->name, why);
+ return 0xFFFFFFFF;
+ }
+ schedule();
+ barrier();
+ }
+ return m;
+}
+
+/**
+ * i2o_report_controller_unit - print information about a tid
+ * @c: controller
+ * @d: device
+ *
+ * Dump an information block associated with a given unit (TID). The
+ * tables are read and a block of text is output via printk,
+ * formatted for the user.
+ */
+
+void i2o_report_controller_unit(struct i2o_controller *c, struct i2o_device *d)
+{
+ char buf[64];
+ char str[22];
+ int ret;
+ int unit = d->lct_data.tid;
+
+ if(verbose==0)
+ return;
+
+ printk(KERN_INFO "Target ID %d.\n", unit);
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 3, buf, 16))>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Vendor: %s\n", buf);
+ }
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 4, buf, 16))>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Device: %s\n", buf);
+ }
+ if(i2o_query_scalar(c, unit, 0xF100, 5, buf, 16)>=0)
+ {
+ buf[16]=0;
+ printk(KERN_INFO " Description: %s\n", buf);
+ }
+ if((ret=i2o_query_scalar(c, unit, 0xF100, 6, buf, 8))>=0)
+ {
+ buf[8]=0;
+ printk(KERN_INFO " Rev: %s\n", buf);
+ }
+
+ printk(KERN_INFO " Class: ");
+ sprintf(str, "%-21s", i2o_get_class_name(d->lct_data.class_id));
+ printk("%s\n", str);
+
+ printk(KERN_INFO " Subclass: 0x%04X\n", d->lct_data.sub_class);
+ printk(KERN_INFO " Flags: ");
+
+ if(d->lct_data.device_flags&(1<<0))
+ printk("C"); // ConfigDialog requested
+ if(d->lct_data.device_flags&(1<<1))
+ printk("U"); // Multi-user capable
+ if(!(d->lct_data.device_flags&(1<<4)))
+ printk("P"); // Peer service enabled!
+ if(!(d->lct_data.device_flags&(1<<5)))
+ printk("M"); // Mgmt service enabled!
+ printk("\n");
+
+}
+
+
+/*
+ * Parse the hardware resource table. Right now we print it out
+ * and don't do a lot with it. We should collate these and then
+ * interact with the Linux resource allocation block.
+ *
+ * Let's prove we can read it first, eh?
+ *
+ * This is full of endianisms!
+ */
+
+static int i2o_parse_hrt(struct i2o_controller *c)
+{
+#ifdef DRIVERDEBUG
+ u32 *rows=(u32*)c->hrt;
+ u8 *p=(u8 *)c->hrt;
+ u8 *d;
+ int count;
+ int length;
+ int i;
+ int state;
+
+ if(p[3]!=0)
+ {
+ printk(KERN_ERR "%s: HRT table for controller is too new a version.\n",
+ c->name);
+ return -1;
+ }
+
+ count=p[0]|(p[1]<<8);
+ length = p[2];
+
+ printk(KERN_INFO "%s: HRT has %d entries of %d bytes each.\n",
+ c->name, count, length<<2);
+
+ rows+=2;
+
+ for(i=0;i<count;i++)
+ {
+ printk(KERN_INFO "Adapter %08X: ", rows[0]);
+ p=(u8 *)(rows+1);
+ d=(u8 *)(rows+2);
+ state=p[1]<<8|p[0];
+
+ printk("TID %04X:[", state&0xFFF);
+ state>>=12;
+ if(state&(1<<0))
+ printk("H"); /* Hidden */
+ if(state&(1<<2))
+ {
+ printk("P"); /* Present */
+ if(state&(1<<1))
+ printk("C"); /* Controlled */
+ }
+ if(state>9)
+ printk("*"); /* Hard */
+
+ printk("]:");
+
+ switch(p[3]&0xFFFF)
+ {
+ case 0:
+ /* Adapter private bus - easy */
+ printk("Local bus %d: I/O at 0x%04X Mem 0x%08X",
+ p[2], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+ case 1:
+ /* ISA bus */
+ printk("ISA %d: CSN %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[2], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 2: /* EISA bus */
+ printk("EISA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 3: /* MCA bus */
+ printk("MCA %d: Slot %d I/O at 0x%04X Mem 0x%08X",
+ p[2], d[3], d[1]<<8|d[0], *(u32 *)(d+4));
+ break;
+
+ case 4: /* PCI bus */
+ printk("PCI %d: Bus %d Device %d Function %d",
+ p[2], d[2], d[1], d[0]);
+ break;
+
+ case 0x80: /* Other */
+ default:
+ printk("Unsupported bus type.");
+ break;
+ }
+ printk("\n");
+ rows+=length;
+ }
+#endif
+ return 0;
+}
+
+/*
+ * The logical configuration table tells us what we can talk to
+ * on the board. Most of the stuff isn't interesting to us.
+ */
+
+static int i2o_parse_lct(struct i2o_controller *c)
+{
+ int i;
+ int max;
+ int tid;
+ struct i2o_device *d;
+ i2o_lct *lct = c->lct;
+
+ if (lct == NULL) {
+ printk(KERN_ERR "%s: LCT is empty???\n", c->name);
+ return -1;
+ }
+
+ max = lct->table_size;
+ max -= 3;
+ max /= 9;
+
+ printk(KERN_INFO "%s: LCT has %d entries.\n", c->name, max);
+
+ if(lct->iop_flags&(1<<0))
+ printk(KERN_WARNING "%s: Configuration dialog desired.\n", c->name);
+
+ for(i=0;i<max;i++)
+ {
+ d = (struct i2o_device *)kmalloc(sizeof(struct i2o_device), GFP_KERNEL);
+ if(d==NULL)
+ {
+ printk(KERN_CRIT "i2o_core: Out of memory for I2O device data.\n");
+ return -ENOMEM;
+ }
+
+ d->controller = c;
+ d->next = NULL;
+
+ memcpy(&d->lct_data, &lct->lct_entry[i], sizeof(i2o_lct_entry));
+
+ d->flags = 0;
+ tid = d->lct_data.tid;
+
+ i2o_report_controller_unit(c, d);
+
+ i2o_install_device(c, d);
+ }
+ return 0;
+}
+
+
+/**
+ * i2o_quiesce_controller - quiesce controller
+ * @c: controller
+ *
+ * Quiesce an IOP. Causes IOP to make external operation quiescent
+ * (i2o 'READY' state). Internal operation of the IOP continues normally.
+ */
+
+int i2o_quiesce_controller(struct i2o_controller *c)
+{
+ u32 msg[4];
+ int ret;
+
+ i2o_status_get(c);
+
+ /* SysQuiesce discarded if IOP not in READY or OPERATIONAL state */
+
+ if ((c->status_block->iop_state != ADAPTER_STATE_READY) &&
+ (c->status_block->iop_state != ADAPTER_STATE_OPERATIONAL))
+ {
+ return 0;
+ }
+
+ msg[0] = FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1] = I2O_CMD_SYS_QUIESCE<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[3] = 0;
+
+ /* Long timeout needed for quiesce if lots of devices */
+
+ if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
+ printk(KERN_INFO "%s: Unable to quiesce (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Quiesced.\n", c->name);
+
+ i2o_status_get(c); // Entered READY state
+ return ret;
+}
+
+/**
+ * i2o_enable_controller - move controller from ready to operational
+ * @c: controller
+ *
+ * Enable IOP. This allows the IOP to resume external operations and
+ * reverses the effect of a quiesce. In the event of an error a negative
+ * errno code is returned.
+ */
+
+int i2o_enable_controller(struct i2o_controller *c)
+{
+ u32 msg[4];
+ int ret;
+
+ i2o_status_get(c);
+
+ /* Enable only allowed on READY state */
+ if(c->status_block->iop_state != ADAPTER_STATE_READY)
+ return -EINVAL;
+
+ msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_SYS_ENABLE<<24|HOST_TID<<12|ADAPTER_TID;
+
+ /* How long of a timeout do we need? */
+
+ if ((ret = i2o_post_wait(c, msg, sizeof(msg), 240)))
+ printk(KERN_ERR "%s: Could not enable (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Enabled.\n", c->name);
+
+ i2o_status_get(c); // entered OPERATIONAL state
+
+ return ret;
+}
+
+/**
+ * i2o_clear_controller - clear a controller
+ * @c: controller
+ *
+ * Clear an IOP to HOLD state, i.e. terminate external operations, clear all
+ * input queues and prepare for a system restart. IOP's internal operation
+ * continues normally and the outbound queue is alive.
+ * The IOP is not expected to rebuild its LCT.
+ */
+
+int i2o_clear_controller(struct i2o_controller *c)
+{
+ struct i2o_controller *iop;
+ u32 msg[4];
+ int ret;
+
+ /* Quiesce all IOPs first */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ i2o_quiesce_controller(iop);
+
+ msg[0]=FOUR_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_ADAPTER_CLEAR<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[3]=0;
+
+ if ((ret=i2o_post_wait(c, msg, sizeof(msg), 30)))
+ printk(KERN_INFO "%s: Unable to clear (status=%#x).\n",
+ c->name, -ret);
+ else
+ dprintk(KERN_INFO "%s: Cleared.\n",c->name);
+
+ i2o_status_get(c);
+
+ /* Enable other IOPs */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ if (iop != c)
+ i2o_enable_controller(iop);
+
+ return ret;
+}
+
+
+/**
+ * i2o_reset_controller - reset an IOP
+ * @c: controller to reset
+ *
+ * Reset the IOP into INIT state and wait until IOP gets into RESET state.
+ * Terminate all external operations, clear IOP's inbound and outbound
+ * queues, terminate all DDMs, and reload the IOP's operating environment
+ * and all local DDMs. The IOP rebuilds its LCT.
+ */
+
+static int i2o_reset_controller(struct i2o_controller *c)
+{
+ struct i2o_controller *iop;
+ u32 m;
+ u8 *status;
+ dma_addr_t status_phys;
+ u32 *msg;
+ long time;
+
+ /* Quiesce all IOPs first */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ {
+ if(!iop->dpt)
+ i2o_quiesce_controller(iop);
+ }
+
+ m=i2o_wait_message(c, "AdapterReset");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->msg_virt+m);
+
+ status = pci_alloc_consistent(c->pdev, 4, &status_phys);
+ if(status == NULL) {
+ printk(KERN_ERR "IOP reset failed - no free memory.\n");
+ return -ENOMEM;
+ }
+ memset(status, 0, 4);
+
+ msg[0]=EIGHT_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_ADAPTER_RESET<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[2]=core_context;
+ msg[3]=0;
+ msg[4]=0;
+ msg[5]=0;
+ msg[6]=status_phys;
+ msg[7]=0; /* 64bit host FIXME */
+
+ i2o_post_message(c,m);
+
+ /* Wait for a reply */
+ time=jiffies;
+ while(*status==0)
+ {
+ if((jiffies-time)>=20*HZ)
+ {
+ printk(KERN_ERR "IOP reset timeout.\n");
+ /* The controller still may respond and overwrite
+ * status_phys, LEAK it to prevent memory corruption.
+ */
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ }
+
+ if (*status==I2O_CMD_IN_PROGRESS)
+ {
+ /*
+ * Once the reset is sent, the IOP goes into the INIT state
+ * which is indeterminate. We need to wait until the IOP
+ * has rebooted before we can let the system talk to
+ * it. We read the inbound Free_List until a message is
+ * available. If we can't read one in the given amount of
+ * time, we assume the IOP could not reboot properly.
+ */
+
+ dprintk(KERN_INFO "%s: Reset in progress, waiting for reboot...\n",
+ c->name);
+
+ time = jiffies;
+ m = I2O_POST_READ32(c);
+ while(m == 0xFFFFFFFF)
+ {
+ if((jiffies-time) >= 30*HZ)
+ {
+ printk(KERN_ERR "%s: Timeout waiting for IOP reset.\n",
+ c->name);
+ /* The controller still may respond and
+ * overwrite status_phys, LEAK it to prevent
+ * memory corruption.
+ */
+ return -ETIMEDOUT;
+ }
+ schedule();
+ barrier();
+ m = I2O_POST_READ32(c);
+ }
+ i2o_flush_reply(c,m);
+ }
+
+ /* If IopReset was rejected or didn't perform reset, try IopClear */
+
+ i2o_status_get(c);
+ if (status[0] == I2O_CMD_REJECTED ||
+ c->status_block->iop_state != ADAPTER_STATE_RESET)
+ {
+ printk(KERN_WARNING "%s: Reset rejected, trying to clear\n",c->name);
+ i2o_clear_controller(c);
+ }
+ else
+ dprintk(KERN_INFO "%s: Reset completed.\n", c->name);
+
+ /* Enable other IOPs */
+
+ for (iop = i2o_controller_chain; iop; iop = iop->next)
+ if (iop != c)
+ i2o_enable_controller(iop);
+
+ pci_free_consistent(c->pdev, 4, status, status_phys);
+ return 0;
+}
+
+
+/**
+ * i2o_status_get - get the status block for the IOP
+ * @c: controller
+ *
+ * Issue a status query on the controller. This updates the
+ * attached status_block. If the controller fails to reply or an
+ * error occurs then a negative errno code is returned. On success
+ * zero is returned and the status_block is updated.
+ */
+
+int i2o_status_get(struct i2o_controller *c)
+{
+ long time;
+ u32 m;
+ u32 *msg;
+ u8 *status_block;
+
+ if (c->status_block == NULL)
+ {
+ c->status_block = (i2o_status_block *)
+ pci_alloc_consistent(c->pdev, sizeof(i2o_status_block), &c->status_block_phys);
+ if (c->status_block == NULL)
+ {
+ printk(KERN_CRIT "%s: Get Status Block failed; Out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ }
+
+ status_block = (u8*)c->status_block;
+ memset(c->status_block,0,sizeof(i2o_status_block));
+
+ m=i2o_wait_message(c, "StatusGet");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->msg_virt+m);
+
+ msg[0]=NINE_WORD_MSG_SIZE|SGL_OFFSET_0;
+ msg[1]=I2O_CMD_STATUS_GET<<24|HOST_TID<<12|ADAPTER_TID;
+ msg[2]=core_context;
+ msg[3]=0;
+ msg[4]=0;
+ msg[5]=0;
+ msg[6]=c->status_block_phys;
+ msg[7]=0; /* 64bit host FIXME */
+ msg[8]=sizeof(i2o_status_block); /* always 88 bytes */
+
+ i2o_post_message(c,m);
+
+ /* Wait for a reply */
+
+ time=jiffies;
+ while(status_block[87]!=0xFF)
+ {
+ if((jiffies-time)>=5*HZ)
+ {
+ printk(KERN_ERR "%s: Get status timeout.\n",c->name);
+ return -ETIMEDOUT;
+ }
+ yield();
+ barrier();
+ }
+
+#ifdef DRIVERDEBUG
+ printk(KERN_INFO "%s: State = ", c->name);
+ switch (c->status_block->iop_state) {
+ case 0x01:
+ printk("INIT\n");
+ break;
+ case 0x02:
+ printk("RESET\n");
+ break;
+ case 0x04:
+ printk("HOLD\n");
+ break;
+ case 0x05:
+ printk("READY\n");
+ break;
+ case 0x08:
+ printk("OPERATIONAL\n");
+ break;
+ case 0x10:
+ printk("FAILED\n");
+ break;
+ case 0x11:
+ printk("FAULTED\n");
+ break;
+ default:
+ printk("%x (unknown !!)\n",c->status_block->iop_state);
+ }
+#endif
+
+ return 0;
+}
+
+/*
+ * Get the Hardware Resource Table for the device.
+ * The HRT contains information about possible hidden devices
+ * but is mostly useless to us
+ */
+int i2o_hrt_get(struct i2o_controller *c)
+{
+ u32 msg[6];
+ int ret, size = sizeof(i2o_hrt);
+ int loops = 3; /* we only try 3 times to get the HRT, this should be
+ more than enough. Worst case should be 2 times.*/
+
+ /* First read just the header to figure out the real size */
+
+ do {
+ /* first we allocate the memory for the HRT */
+ if (c->hrt == NULL) {
+ c->hrt=pci_alloc_consistent(c->pdev, size, &c->hrt_phys);
+ if (c->hrt == NULL) {
+ printk(KERN_CRIT "%s: Hrt Get failed; Out of memory.\n", c->name);
+ return -ENOMEM;
+ }
+ c->hrt_len = size;
+ }
+
+ msg[0]= SIX_WORD_MSG_SIZE| SGL_OFFSET_4;
+ msg[1]= I2O_CMD_HRT_GET<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[3]= 0;
+ msg[4]= (0xD0000000 | c->hrt_len); /* Simple transaction */
+ msg[5]= c->hrt_phys; /* Dump it here */
+
+ ret = i2o_post_wait_mem(c, msg, sizeof(msg), 20, c->hrt, NULL, c->hrt_phys, 0, c->hrt_len, 0);
+
+ if(ret == -ETIMEDOUT)
+ {
+ /* The HRT block we used is in limbo somewhere. When the iop wakes up
+ we will recover it */
+ c->hrt = NULL;
+ c->hrt_len = 0;
+ return ret;
+ }
+
+ if(ret<0)
+ {
+ printk(KERN_ERR "%s: Unable to get HRT (status=%#x)\n",
+ c->name, -ret);
+ return ret;
+ }
+
+ if (c->hrt->num_entries * c->hrt->entry_len << 2 > c->hrt_len) {
+ size = c->hrt->num_entries * c->hrt->entry_len << 2;
+ pci_free_consistent(c->pdev, c->hrt_len, c->hrt, c->hrt_phys);
+ c->hrt_len = 0;
+ c->hrt = NULL;
+ }
+ loops --;
+ } while (c->hrt == NULL && loops > 0);
+
+ if(c->hrt == NULL)
+ {
+ printk(KERN_ERR "%s: Unable to get HRT after three tries, giving up\n", c->name);
+ return -1;
+ }
+
+ i2o_parse_hrt(c); // just for debugging
+
+ return 0;
+}
+
+/*
+ * Send the I2O System Table to the specified IOP
+ *
+ * The system table contains information about all the IOPs in the
+ * system. It is built and then sent to each IOP so that IOPs can
+ * establish connections between each other.
+ *
+ */
+static int i2o_systab_send(struct i2o_controller *iop)
+{
+ u32 msg[12];
+ dma_addr_t sys_tbl_phys;
+ int ret;
+ struct resource *root;
+ u32 *privbuf = kmalloc(16, GFP_KERNEL);
+ if(privbuf == NULL)
+ return -ENOMEM;
+
+
+ if(iop->status_block->current_mem_size < iop->status_block->desired_mem_size)
+ {
+ struct resource *res = &iop->mem_resource;
+ res->name = iop->pdev->bus->name;
+ res->flags = IORESOURCE_MEM;
+ res->start = 0;
+ res->end = 0;
+ printk("%s: requires private memory resources.\n", iop->name);
+ root = pci_find_parent_resource(iop->pdev, res);
+ if(root==NULL)
+ printk("Can't find parent resource!\n");
+ if(root && allocate_resource(root, res,
+ iop->status_block->desired_mem_size,
+ iop->status_block->desired_mem_size,
+ iop->status_block->desired_mem_size,
+ 1<<20, /* Unspecified, so use 1Mb and play safe */
+ NULL,
+ NULL)>=0)
+ {
+ iop->mem_alloc = 1;
+ iop->status_block->current_mem_size = 1 + res->end - res->start;
+ iop->status_block->current_mem_base = res->start;
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI memory at 0x%08lX.\n",
+ iop->name, 1+res->end-res->start, res->start);
+ }
+ }
+ if(iop->status_block->current_io_size < iop->status_block->desired_io_size)
+ {
+ struct resource *res = &iop->io_resource;
+ res->name = iop->pdev->bus->name;
+ res->flags = IORESOURCE_IO;
+ res->start = 0;
+ res->end = 0;
+ printk("%s: requires private I/O resources.\n", iop->name);
+ root = pci_find_parent_resource(iop->pdev, res);
+ if(root==NULL)
+ printk("Can't find parent resource!\n");
+ if(root && allocate_resource(root, res,
+ iop->status_block->desired_io_size,
+ iop->status_block->desired_io_size,
+ iop->status_block->desired_io_size,
+ 1<<20, /* Unspecified, so use 1Mb and play safe */
+ NULL,
+ NULL)>=0)
+ {
+ iop->io_alloc = 1;
+ iop->status_block->current_io_size = 1 + res->end - res->start;
+ iop->status_block->current_io_base = res->start;
+ printk(KERN_INFO "%s: allocated %ld bytes of PCI I/O at 0x%08lX.\n",
+ iop->name, 1+res->end-res->start, res->start);
+ }
+ }
+ else
+ {
+ privbuf[0] = iop->status_block->current_mem_base;
+ privbuf[1] = iop->status_block->current_mem_size;
+ privbuf[2] = iop->status_block->current_io_base;
+ privbuf[3] = iop->status_block->current_io_size;
+ }
+
+ msg[0] = I2O_MESSAGE_SIZE(12) | SGL_OFFSET_6;
+ msg[1] = I2O_CMD_SYS_TAB_SET<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[3] = 0;
+ msg[4] = (0<<16) | ((iop->unit+2) ); /* Host 0 IOP ID (unit + 2) */
+ msg[5] = 0; /* Segment 0 */
+
+ /*
+ * Provide three SGL-elements:
+ * System table (SysTab), Private memory space declaration and
+ * Private i/o space declaration
+ *
+ * Nasty one here. We can't use pci_alloc_consistent to send the
+ * same table to everyone. We have to go remap it for them all
+ */
+
+ sys_tbl_phys = pci_map_single(iop->pdev, sys_tbl, sys_tbl_len, PCI_DMA_TODEVICE);
+ msg[6] = 0x54000000 | sys_tbl_phys;
+
+ msg[7] = sys_tbl_phys;
+ msg[8] = 0x54000000 | privbuf[1];
+ msg[9] = privbuf[0];
+ msg[10] = 0xD4000000 | privbuf[3];
+ msg[11] = privbuf[2];
+
+ ret=i2o_post_wait(iop, msg, sizeof(msg), 120);
+
+ pci_unmap_single(iop->pdev, sys_tbl_phys, sys_tbl_len, PCI_DMA_TODEVICE);
+
+ if(ret==-ETIMEDOUT)
+ {
+ printk(KERN_ERR "%s: SysTab setup timed out.\n", iop->name);
+ }
+ else if(ret<0)
+ {
+ printk(KERN_ERR "%s: Unable to set SysTab (status=%#x).\n",
+ iop->name, -ret);
+ }
+ else
+ {
+ dprintk(KERN_INFO "%s: SysTab set.\n", iop->name);
+ }
+ i2o_status_get(iop); // Entered READY state
+
+ kfree(privbuf);
+ return ret;
+
+}
+
+/*
+ * Initialize I2O subsystem.
+ */
+void __init i2o_sys_init(void)
+{
+ struct i2o_controller *iop, *niop = NULL;
+
+ printk(KERN_INFO "Activating I2O controllers...\n");
+ printk(KERN_INFO "This may take a few minutes if there are many devices\n");
+
+ /* In INIT state, Activate IOPs */
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ dprintk(KERN_INFO "Calling i2o_activate_controller for %s...\n",
+ iop->name);
+ niop = iop->next;
+ if (i2o_activate_controller(iop) < 0)
+ i2o_delete_controller(iop);
+ }
+
+ /* Active IOPs in HOLD state */
+
+rebuild_sys_tab:
+ if (i2o_controller_chain == NULL)
+ return;
+
+ /*
+ * If build_sys_table fails, we kill everything and bail
+ * as we can't init the IOPs w/o a system table
+ */
+ dprintk(KERN_INFO "i2o_core: Calling i2o_build_sys_table...\n");
+ if (i2o_build_sys_table() < 0) {
+ i2o_sys_shutdown();
+ return;
+ }
+
+ /* If an IOP doesn't come online, we need to rebuild the system table */
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ niop = iop->next;
+ dprintk(KERN_INFO "Calling i2o_online_controller for %s...\n", iop->name);
+ if (i2o_online_controller(iop) < 0) {
+ i2o_delete_controller(iop);
+ goto rebuild_sys_tab;
+ }
+ }
+
+ /* Active IOPs now in OPERATIONAL state */
+
+ /*
+ * Register for status updates from all IOPs
+ */
+ for(iop = i2o_controller_chain; iop; iop=iop->next) {
+
+ /* Create a kernel thread to deal with dynamic LCT updates */
+ iop->lct_pid = kernel_thread(i2o_dyn_lct, iop, CLONE_SIGHAND);
+
+ /* Update change ind on DLCT */
+ iop->dlct->change_ind = iop->lct->change_ind;
+
+ /* Start dynamic LCT updates */
+ i2o_lct_notify(iop);
+
+ /* Register for all events from IRTOS */
+ i2o_event_register(iop, core_context, 0, 0, 0xFFFFFFFF);
+ }
+}
+
+/**
+ * i2o_sys_shutdown - shutdown I2O system
+ *
+ * Bring down each i2o controller and then return. Each controller
+ * is taken through an orderly shutdown
+ */
+
+static void i2o_sys_shutdown(void)
+{
+ struct i2o_controller *iop, *niop;
+
+ /* Delete all IOPs from the controller chain */
+ /* that will reset all IOPs too */
+
+ for (iop = i2o_controller_chain; iop; iop = niop) {
+ niop = iop->next;
+ i2o_delete_controller(iop);
+ }
+}
+
+/**
+ * i2o_activate_controller - bring controller up to HOLD
+ * @iop: controller
+ *
+ * This function brings an I2O controller into HOLD state. The adapter
+ * is reset if necessary and then the queues and resource table
+ * are read. -1 is returned on a failure, 0 on success.
+ *
+ */
+
+int i2o_activate_controller(struct i2o_controller *iop)
+{
+ /* In INIT state, Wait Inbound Q to initialize (in i2o_status_get) */
+ /* In READY state, Get status */
+
+ if (i2o_status_get(iop) < 0) {
+ printk(KERN_INFO "Unable to obtain status of %s, "
+ "attempting a reset.\n", iop->name);
+ if (i2o_reset_controller(iop) < 0)
+ return -1;
+ }
+
+ if(iop->status_block->iop_state == ADAPTER_STATE_FAULTED) {
+ printk(KERN_CRIT "%s: hardware fault\n", iop->name);
+ return -1;
+ }
+
+ if (iop->status_block->i2o_version > I2OVER15) {
+ printk(KERN_ERR "%s: Not running version 1.5 of the I2O Specification.\n",
+ iop->name);
+ return -1;
+ }
+
+ if (iop->status_block->iop_state == ADAPTER_STATE_READY ||
+ iop->status_block->iop_state == ADAPTER_STATE_OPERATIONAL ||
+ iop->status_block->iop_state == ADAPTER_STATE_HOLD ||
+ iop->status_block->iop_state == ADAPTER_STATE_FAILED)
+ {
+ dprintk(KERN_INFO "%s: Already running, trying to reset...\n",
+ iop->name);
+ if (i2o_reset_controller(iop) < 0)
+ return -1;
+ }
+
+ if (i2o_init_outbound_q(iop) < 0)
+ return -1;
+
+ if (i2o_post_outbound_messages(iop))
+ return -1;
+
+ /* In HOLD state */
+
+ if (i2o_hrt_get(iop) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+/**
+ * i2o_init_outbound_q - setup the outbound queue
+ * @c: controller
+ *
+ * Clear and (re)initialize IOP's outbound queue. Returns 0 on
+ * success or a negative errno code on a failure.
+ */
+
+int i2o_init_outbound_q(struct i2o_controller *c)
+{
+ u8 *status;
+ dma_addr_t status_phys;
+ u32 m;
+ u32 *msg;
+ u32 time;
+
+ dprintk(KERN_INFO "%s: Initializing Outbound Queue...\n", c->name);
+ m=i2o_wait_message(c, "OutboundInit");
+ if(m==0xFFFFFFFF)
+ return -ETIMEDOUT;
+ msg=(u32 *)(c->msg_virt+m);
+
+ status = pci_alloc_consistent(c->pdev, 4, &status_phys);
+ if (status==NULL) {
+ printk(KERN_ERR "%s: Outbound Queue initialization failed - no free memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ memset(status, 0, 4);
+
+ msg[0]= EIGHT_WORD_MSG_SIZE| TRL_OFFSET_6;
+ msg[1]= I2O_CMD_OUTBOUND_INIT<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2]= core_context;
+ msg[3]= 0x0106; /* Transaction context */
+ msg[4]= 4096; /* Host page frame size */
+ /* Frame size is in words. 256 bytes a frame for now */
+ msg[5]= MSG_FRAME_SIZE<<16|0x80; /* Outbound msg frame size in words and Initcode */
+ msg[6]= 0xD0000004; /* Simple SG LE, EOB */
+ msg[7]= status_phys;
+
+ i2o_post_message(c,m);
+
+ barrier();
+ time=jiffies;
+ while(status[0] < I2O_CMD_REJECTED)
+ {
+ if((jiffies-time)>=30*HZ)
+ {
+ if(status[0]==0x00)
+ printk(KERN_ERR "%s: Ignored queue initialize request.\n",
+ c->name);
+ else
+ printk(KERN_ERR "%s: Outbound queue initialize timeout.\n",
+ c->name);
+ pci_free_consistent(c->pdev, 4, status, status_phys);
+ return -ETIMEDOUT;
+ }
+ yield();
+ barrier();
+ }
+
+ if(status[0] != I2O_CMD_COMPLETED)
+ {
+ printk(KERN_ERR "%s: IOP outbound initialise failed.\n", c->name);
+ pci_free_consistent(c->pdev, 4, status, status_phys);
+ return -ETIMEDOUT;
+ }
+ pci_free_consistent(c->pdev, 4, status, status_phys);
+ return 0;
+}
+
+/**
+ * i2o_post_outbound_messages - fill message queue
+ * @c: controller
+ *
+ * Allocate a message frame and load the messages into the IOP. The
+ * function returns zero on success or a negative errno code on
+ * failure.
+ */
+
+int i2o_post_outbound_messages(struct i2o_controller *c)
+{
+ int i;
+ u32 m;
+ /* Alloc space for IOP's outbound queue message frames */
+
+ c->page_frame = kmalloc(MSG_POOL_SIZE, GFP_KERNEL);
+ if(c->page_frame==NULL) {
+ printk(KERN_ERR "%s: Outbound Q initialize failed; out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+
+ c->page_frame_map = pci_map_single(c->pdev, c->page_frame, MSG_POOL_SIZE, PCI_DMA_FROMDEVICE);
+
+ if(c->page_frame_map == 0)
+ {
+ kfree(c->page_frame);
+ printk(KERN_ERR "%s: Unable to map outbound queue.\n", c->name);
+ return -ENOMEM;
+ }
+
+ m = c->page_frame_map;
+
+ /* Post frames */
+
+ for(i=0; i< NMBR_MSG_FRAMES; i++) {
+ I2O_REPLY_WRITE32(c,m);
+ mb();
+ m += (MSG_FRAME_SIZE << 2);
+ }
+
+ return 0;
+}
+
+/*
+ * Get the IOP's Logical Configuration Table
+ */
+int i2o_lct_get(struct i2o_controller *c)
+{
+ u32 msg[8];
+ int ret, size = c->status_block->expected_lct_size;
+
+ do {
+ if (c->lct == NULL) {
+ c->lct = pci_alloc_consistent(c->pdev, size, &c->lct_phys);
+ if(c->lct == NULL) {
+ printk(KERN_CRIT "%s: Lct Get failed. Out of memory.\n",
+ c->name);
+ return -ENOMEM;
+ }
+ }
+ memset(c->lct, 0, size);
+
+ msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
+ msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
+ /* msg[2] filled in i2o_post_wait */
+ msg[3] = 0;
+ msg[4] = 0xFFFFFFFF; /* All devices */
+ msg[5] = 0x00000000; /* Report now */
+ msg[6] = 0xD0000000|size;
+ msg[7] = c->lct_phys;
+
+ ret=i2o_post_wait_mem(c, msg, sizeof(msg), 120, c->lct, NULL, c->lct_phys, 0, size, 0);
+
+ if(ret == -ETIMEDOUT)
+ {
+ c->lct = NULL;
+ return ret;
+ }
+
+ if(ret<0)
+ {
+ printk(KERN_ERR "%s: LCT Get failed (status=%#x).\n",
+ c->name, -ret);
+ return ret;
+ }
+
+ if (c->lct->table_size << 2 > size) {
+ int new_size = c->lct->table_size << 2;
+ pci_free_consistent(c->pdev, size, c->lct, c->lct_phys);
+ size = new_size;
+ c->lct = NULL;
+ }
+ } while (c->lct == NULL);
+
+ if ((ret=i2o_parse_lct(c)) < 0)
+ return ret;
+
+ return 0;
+}
+
+/*
+ * Like above, but used for async notification. The main
+ * difference is that we keep track of the CurrentChangeIndicator
+ * so that we only get updates when it actually changes.
+ *
+ */
+int i2o_lct_notify(struct i2o_controller *c)
+{
+ u32 msg[8];
+
+ msg[0] = EIGHT_WORD_MSG_SIZE|SGL_OFFSET_6;
+ msg[1] = I2O_CMD_LCT_NOTIFY<<24 | HOST_TID<<12 | ADAPTER_TID;
+ msg[2] = core_context;
+ msg[3] = 0xDEADBEEF;
+ msg[4] = 0xFFFFFFFF; /* All devices */
+ msg[5] = c->dlct->change_ind+1; /* Next change */
+ msg[6] = 0xD0000000|8192;
+ msg[7] = c->dlct_phys;
+
+ return i2o_post_this(c, msg, sizeof(msg));
+}
+
+/*
+ * Bring a controller online into OPERATIONAL state.
+ */
+
+int i2o_online_controller(struct i2o_controller *iop)
+{
+ u32 v;
+
+ if (i2o_systab_send(iop) < 0)
+ return -1;
+
+ /* In READY state */
+
+ dprintk(KERN_INFO "%s: Attempting to enable...\n", iop->name);
+ if (i2o_enable_controller(iop) < 0)
+ return -1;
+
+ /* In OPERATIONAL state */
+
+ dprintk(KERN_INFO "%s: Attempting to get/parse lct...\n", iop->name);
+ if (i2o_lct_get(iop) < 0)
+ return -1;
+
+ /* Check battery status */
+
+ iop->battery = 0;
+ if(i2o_query_scalar(iop, ADAPTER_TID, 0x0000, 4, &v, 4)>=0)
+ {
+ if(v&16)
+ iop->battery = 1;
+ }
+
+ return 0;
+}
+
+/*
+ * Build system table
+ *
+ * The system table contains information about all the IOPs in the
+ * system (duh) and is used by the Executives on the IOPs to establish
+ * peer2peer connections. We're not supporting peer2peer at the moment,
+ * but this will be needed down the road for things like lan2lan forwarding.
+ */
+static int i2o_build_sys_table(void)
+{
+ struct i2o_controller *iop = NULL;
+ struct i2o_controller *niop = NULL;
+ int count = 0;
+
+ sys_tbl_len = sizeof(struct i2o_sys_tbl) + // Header + IOPs
+ (i2o_num_controllers) *
+ sizeof(struct i2o_sys_tbl_entry);
+
+ if(sys_tbl)
+ kfree(sys_tbl);
+
+ sys_tbl = kmalloc(sys_tbl_len, GFP_KERNEL);
+ if(!sys_tbl) {
+ printk(KERN_CRIT "SysTab Set failed. Out of memory.\n");
+ return -ENOMEM;
+ }
+ memset((void*)sys_tbl, 0, sys_tbl_len);
+
+ sys_tbl->num_entries = i2o_num_controllers;
+ sys_tbl->version = I2OVERSION; /* TODO: Version 2.0 */
+ sys_tbl->change_ind = sys_tbl_ind++;
+
+ for(iop = i2o_controller_chain; iop; iop = niop)
+ {
+ niop = iop->next;
+
+ /*
+ * Get updated IOP state so we have the latest information
+ *
+ * We should delete the controller at this point if it
+ * doesn't respond since if it's not on the system table
+ * it is technically not part of the I2O subsystem...
+ */
+ if(i2o_status_get(iop)) {
+ printk(KERN_ERR "%s: Deleting b/c could not get status while "
+ "attempting to build system table\n", iop->name);
+ i2o_delete_controller(iop);
+ sys_tbl->num_entries--;
+ continue; // try the next one
+ }
+
+ sys_tbl->iops[count].org_id = iop->status_block->org_id;
+ sys_tbl->iops[count].iop_id = iop->unit + 2;
+ sys_tbl->iops[count].seg_num = 0;
+ sys_tbl->iops[count].i2o_version =
+ iop->status_block->i2o_version;
+ sys_tbl->iops[count].iop_state =
+ iop->status_block->iop_state;
+ sys_tbl->iops[count].msg_type =
+ iop->status_block->msg_type;
+ sys_tbl->iops[count].frame_size =
+ iop->status_block->inbound_frame_size;
+ sys_tbl->iops[count].last_changed = sys_tbl_ind - 1; // ??
+ sys_tbl->iops[count].iop_capabilities =
+ iop->status_block->iop_capabilities;
+ sys_tbl->iops[count].inbound_low = (u32)iop->post_port;
+ sys_tbl->iops[count].inbound_high = 0; // FIXME: 64-bit support
+
+ count++;
+ }
+
+#ifdef DRIVERDEBUG
+{
+ u32 *table;
+ table = (u32*)sys_tbl;
+ for(count = 0; count < (sys_tbl_len >>2); count++)
+ printk(KERN_INFO "sys_tbl[%d] = %0#10x\n", count, table[count]);
+}
+#endif
+
+ return 0;
+}
+
+
+/*
+ * Run time support routines
+ */
+
+/*
+ * Generic "post and forget" helpers. This is less efficient - we do
+ * a memcpy for example that isn't strictly needed, but for most uses
+ * this is simply not worth optimising
+ */
+
+int i2o_post_this(struct i2o_controller *c, u32 *data, int len)
+{
+ u32 m;
+ u32 *msg;
+ unsigned long t=jiffies;
+
+ do
+ {
+ mb();
+ m = I2O_POST_READ32(c);
+ }
+ while(m==0xFFFFFFFF && (jiffies-t)<HZ);
+
+ if(m==0xFFFFFFFF)
+ {
+ printk(KERN_ERR "%s: Timeout waiting for message frame!\n",
+ c->name);
+ return -ETIMEDOUT;
+ }
+ msg = (u32 *)(c->msg_virt + m);
+ memcpy_toio(msg, data, len);
+ i2o_post_message(c,m);
+ return 0;
+}
+
+/**
+ * i2o_post_wait_mem - I2O query/reply with DMA buffers
+ * @c: controller
+ * @msg: message to send
+ * @len: length of message
+ * @timeout: time in seconds to wait
+ * @mem1: attached memory buffer 1
+ * @mem2: attached memory buffer 2
+ * @phys1: physical address of buffer 1
+ * @phys2: physical address of buffer 2
+ * @size1: size of buffer 1
+ * @size2: size of buffer 2
+ *
+ * This core API allows an OSM to post a message and then be told whether
+ * or not the system received a successful reply.
+ *
+ * If the message times out then the value '-ETIMEDOUT' is returned. This
+ * is a special case. In this situation the message may (should) complete
+ * at an indefinite time in the future. When it completes it will use the
+ * memory buffers attached to the request. If -ETIMEDOUT is returned then
+ * the memory buffers must not be freed. Instead the event completion will
+ * free them for you. In all other cases the buffers are your problem.
+ *
+ * Pass NULL for unneeded buffers.
+ */
+
+int i2o_post_wait_mem(struct i2o_controller *c, u32 *msg, int len, int timeout, void *mem1, void *mem2, dma_addr_t phys1, dma_addr_t phys2, int size1, int size2)
+{
+ DECLARE_WAIT_QUEUE_HEAD(wq_i2o_post);
+ DECLARE_WAITQUEUE(wait, current);
+ int complete = 0;
+ int status;
+ unsigned long flags = 0;
+ struct i2o_post_wait_data *wait_data =
+ kmalloc(sizeof(struct i2o_post_wait_data), GFP_KERNEL);
+
+ if(!wait_data)
+ return -ENOMEM;
+
+ /*
+ * Create a new notification object
+ */
+ wait_data->status = &status;
+ wait_data->complete = &complete;
+ wait_data->mem[0] = mem1;
+ wait_data->mem[1] = mem2;
+ wait_data->phys[0] = phys1;
+ wait_data->phys[1] = phys2;
+ wait_data->size[0] = size1;
+ wait_data->size[1] = size2;
+
+ /*
+ * Queue the event with its unique id
+ */
+ spin_lock_irqsave(&post_wait_lock, flags);
+
+ wait_data->next = post_wait_queue;
+ post_wait_queue = wait_data;
+ wait_data->id = (++post_wait_id) & 0x7fff;
+ wait_data->wq = &wq_i2o_post;
+
+ spin_unlock_irqrestore(&post_wait_lock, flags);
+
+ /*
+ * Fill in the message id
+ */
+
+ msg[2] = 0x80000000|(u32)core_context|((u32)wait_data->id<<16);
+
+ /*
+ * Post the message to the controller. At some point later it
+ * will return. If we time out before it returns then
+ * complete will be zero. From the point post_this returns
+ * the wait_data may have been deleted.
+ */
+
+ add_wait_queue(&wq_i2o_post, &wait);
+ set_current_state(TASK_INTERRUPTIBLE);
+ if ((status = i2o_post_this(c, msg, len))==0) {
+ schedule_timeout(HZ * timeout);
+ }
+ else
+ {
+ remove_wait_queue(&wq_i2o_post, &wait);
+ return -EIO;
+ }
+ remove_wait_queue(&wq_i2o_post, &wait);
+
+ if(signal_pending(current))
+ status = -EINTR;
+
+ spin_lock_irqsave(&post_wait_lock, flags);
+ barrier(); /* Be sure we see complete as it is locked */
+ if(!complete)
+ {
+ /*
+ * Mark the entry dead. We cannot remove it. This is important.
+ * When it does terminate (which it must do if the controller hasn't
+ * died..) then it will otherwise scribble on stuff.
+ * !complete lets us safely check if the entry is still
+ * allocated and thus we can write into it
+ */
+ wait_data->wq = NULL;
+ status = -ETIMEDOUT;
+ }
+ else
+ {
+ /* Debugging check - remove me soon */
+ if(status == -ETIMEDOUT)
+ {
+ printk("TIMEDOUT BUG!\n");
+ status = -EIO;
+ }
+ }
+ /* And the wait_data is not leaked either! */
+ spin_unlock_irqrestore(&post_wait_lock, flags);
+ return status;
+}
+
+/**
+ * i2o_post_wait - I2O query/reply
+ * @c: controller
+ * @msg: message to send
+ * @len: length of message
+ * @timeout: time in seconds to wait
+ *
+ * This core API allows an OSM to post a message and then be told whether
+ * or not the system received a successful reply.
+ */
+
+int i2o_post_wait(struct i2o_controller *c, u32 *msg, int len, int timeout)
+{
+ return i2o_post_wait_mem(c, msg, len, timeout, NULL, NULL, 0, 0, 0, 0);
+}
+
+/*
+ * i2o_post_wait is completed and we want to wake up the
+ * sleeping process. Called by core's reply handler.
+ */
+
+static void i2o_post_wait_complete(struct i2o_controller *c, u32 context, int status)
+{
+ struct i2o_post_wait_data **p1, *q;
+ unsigned long flags;
+
+ /*
+ * We need to search through the post_wait
+ * queue to see if the given message is still
+ * outstanding. If not, it means that the IOP
+ * took longer to respond to the message than we
+ * had allowed and timer has already expired.
+ * Not much we can do about that except log
+ * it for debug purposes, increase timeout, and recompile
+ *
+ * Lock needed to keep anyone from moving queue pointers
+ * around while we're looking through them.
+ */
+
+ spin_lock_irqsave(&post_wait_lock, flags);
+
+ for(p1 = &post_wait_queue; *p1!=NULL; p1 = &((*p1)->next))
+ {
+ q = (*p1);
+ if(q->id == ((context >> 16) & 0x7fff)) {
+ /*
+ * Delete it
+ */
+
+ *p1 = q->next;
+
+ /*
+ * Live or dead ?
+ */
+
+ if(q->wq)
+ {
+ /* Live entry - wakeup and set status */
+ *q->status = status;
+ *q->complete = 1;
+ wake_up(q->wq);
+ }
+ else
+ {
+ /*
+ * Free resources. Caller is dead
+ */
+
+ if(q->mem[0])
+ pci_free_consistent(c->pdev, q->size[0], q->mem[0], q->phys[0]);
+ if(q->mem[1])
+ pci_free_consistent(c->pdev, q->size[1], q->mem[1], q->phys[1]);
+
+ printk(KERN_WARNING "i2o_post_wait event completed after timeout.\n");
+ }
+ kfree(q);
+ spin_unlock(&post_wait_lock);
+ return;
+ }
+ }
+ spin_unlock(&post_wait_lock);
+
+ printk(KERN_DEBUG "i2o_post_wait: Bogus reply!\n");
+}
+
+/* Issue UTIL_PARAMS_GET or UTIL_PARAMS_SET
+ *
+ * This function can be used for all UtilParamsGet/Set operations.
+ * The OperationList is given in oplist-buffer,
+ * and results are returned in reslist-buffer.
+ * Note that the minimum sized reslist is 8 bytes and contains
+ * ResultCount, ErrorInfoSize, BlockStatus and BlockSize.
+ */
+
+int i2o_issue_params(int cmd, struct i2o_controller *iop, int tid,
+ void *oplist, int oplen, void *reslist, int reslen)
+{
+ u32 msg[9];
+ u32 *res32 = (u32*)reslist;
+ u32 *restmp = (u32*)reslist;
+ int len = 0;
+ int i = 0;
+ int wait_status;
+ u32 *opmem, *resmem;
+ dma_addr_t opmem_phys, resmem_phys;
+
+ /* Get DMAable memory */
+ opmem = pci_alloc_consistent(iop->pdev, oplen, &opmem_phys);
+ if(opmem == NULL)
+ return -ENOMEM;
+ memcpy(opmem, oplist, oplen);
+
+ resmem = pci_alloc_consistent(iop->pdev, reslen, &resmem_phys);
+ if(resmem == NULL)
+ {
+ pci_free_consistent(iop->pdev, oplen, opmem, opmem_phys);
+ return -ENOMEM;
+ }
+
+ msg[0] = NINE_WORD_MSG_SIZE | SGL_OFFSET_5;
+ msg[1] = cmd << 24 | HOST_TID << 12 | tid;
+ msg[3] = 0;
+ msg[4] = 0;
+ msg[5] = 0x54000000 | oplen; /* OperationList */
+ msg[6] = opmem_phys;
+ msg[7] = 0xD0000000 | reslen; /* ResultList */
+ msg[8] = resmem_phys;
+
+ wait_status = i2o_post_wait_mem(iop, msg, sizeof(msg), 10, opmem, resmem, opmem_phys, resmem_phys, oplen, reslen);
+
+ /*
+ * This only looks like a memory leak - don't "fix" it.
+ */
+ if(wait_status == -ETIMEDOUT)
+ return wait_status;
+
+ memcpy(reslist, resmem, reslen);
+ pci_free_consistent(iop->pdev, reslen, resmem, resmem_phys);
+ pci_free_consistent(iop->pdev, oplen, opmem, opmem_phys);
+
+ /* Query failed */
+ if(wait_status != 0)
+ return wait_status;
+ /*
+ * Calculate number of bytes of Result LIST
+ * We need to loop through each Result BLOCK and grab the length
+ */
+ restmp = res32 + 1;
+ len = 1;
+ for(i = 0; i < (res32[0]&0x0000FFFF); i++)
+ {
+ if(restmp[0]&0x00FF0000) /* BlockStatus != SUCCESS */
+ {
+ printk(KERN_WARNING "%s - Error:\n ErrorInfoSize = 0x%02x, "
+ "BlockStatus = 0x%02x, BlockSize = 0x%04x\n",
+ (cmd == I2O_CMD_UTIL_PARAMS_SET) ? "PARAMS_SET"
+ : "PARAMS_GET",
+ res32[1]>>24, (res32[1]>>16)&0xFF, res32[1]&0xFFFF);
+
+ /*
+ * If this is the only request, then we return an error
+ */
+ if((res32[0]&0x0000FFFF) == 1)
+ {
+ return -((res32[1] >> 16) & 0xFF); /* -BlockStatus */
+ }
+ }
+ len += restmp[0] & 0x0000FFFF; /* Length of res BLOCK */
+ restmp += restmp[0] & 0x0000FFFF; /* Skip to next BLOCK */
+ }
+ return (len << 2); /* bytes used by result list */
+}
+
+/*
+ * Query one scalar group value or a whole scalar group.
+ */
+int i2o_query_scalar(struct i2o_controller *iop, int tid,
+ int group, int field, void *buf, int buflen)
+{
+ u16 opblk[] = { 1, 0, I2O_PARAMS_FIELD_GET, group, 1, field };
+ u8 resblk[8+buflen]; /* 8 bytes for header */
+ int size;
+
+ if (field == -1) /* whole group */
+ opblk[4] = -1;
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET, iop, tid,
+ opblk, sizeof(opblk), resblk, sizeof(resblk));
+
+ memcpy(buf, resblk+8, buflen); /* cut off header */
+
+ if(size>buflen)
+ return buflen;
+ return size;
+}
+
+/*
+ * Set a scalar group value or a whole group.
+ */
+int i2o_set_scalar(struct i2o_controller *iop, int tid,
+ int group, int field, void *buf, int buflen)
+{
+ u16 *opblk;
+ u8 resblk[8+buflen]; /* 8 bytes for header */
+ int size;
+
+ opblk = kmalloc(buflen+64, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for operation buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = I2O_PARAMS_FIELD_SET;
+ opblk[3] = group;
+
+ if(field == -1) { /* whole group */
+ opblk[4] = -1;
+ memcpy(opblk+5, buf, buflen);
+ }
+ else /* single field */
+ {
+ opblk[4] = 1;
+ opblk[5] = field;
+ memcpy(opblk+6, buf, buflen);
+ }
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, 12+buflen, resblk, sizeof(resblk));
+
+ kfree(opblk);
+ if(size>buflen)
+ return buflen;
+ return size;
+}
+
+/*
+ * if oper == I2O_PARAMS_TABLE_GET, get from all rows
+ * if fieldcount == -1 return all fields
+ * ibuf and ibuflen are unused (use NULL, 0)
+ * else return specific fields
+ * ibuf contains fieldindexes
+ *
+ * if oper == I2O_PARAMS_LIST_GET, get from specific rows
+ * if fieldcount == -1 return all fields
+ * ibuf contains rowcount, keyvalues
+ * else return specific fields
+ * fieldcount is # of fieldindexes
+ * ibuf contains fieldindexes, rowcount, keyvalues
+ *
+ * You could also use i2o_issue_params() directly.
+ */
+int i2o_query_table(int oper, struct i2o_controller *iop, int tid, int group,
+ int fieldcount, void *ibuf, int ibuflen,
+ void *resblk, int reslen)
+{
+ u16 *opblk;
+ int size;
+
+ opblk = kmalloc(10 + ibuflen, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for query buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = oper;
+ opblk[3] = group;
+ opblk[4] = fieldcount;
+ memcpy(opblk+5, ibuf, ibuflen); /* other params */
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_GET,iop, tid,
+ opblk, 10+ibuflen, resblk, reslen);
+
+ kfree(opblk);
+ if(size>reslen)
+ return reslen;
+ return size;
+}
+
+/*
+ * Clear table group, i.e. delete all rows.
+ */
+int i2o_clear_table(struct i2o_controller *iop, int tid, int group)
+{
+ u16 opblk[] = { 1, 0, I2O_PARAMS_TABLE_CLEAR, group };
+ u8 resblk[32]; /* min 8 bytes for result header */
+
+ return i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, sizeof(opblk), resblk, sizeof(resblk));
+}
+
+/*
+ * Add a new row into a table group.
+ *
+ * if fieldcount==-1 then we add whole rows
+ * buf contains rowcount, keyvalues
+ * else just specific fields are given, rest use defaults
+ * buf contains fieldindexes, rowcount, keyvalues
+ */
+int i2o_row_add_table(struct i2o_controller *iop, int tid,
+ int group, int fieldcount, void *buf, int buflen)
+{
+ u16 *opblk;
+ u8 resblk[32]; /* min 8 bytes for header */
+ int size;
+
+ opblk = kmalloc(buflen+64, GFP_KERNEL);
+ if (opblk == NULL)
+ {
+ printk(KERN_ERR "i2o: no memory for operation buffer.\n");
+ return -ENOMEM;
+ }
+
+ opblk[0] = 1; /* operation count */
+ opblk[1] = 0; /* pad */
+ opblk[2] = I2O_PARAMS_ROW_ADD;
+ opblk[3] = group;
+ opblk[4] = fieldcount;
+ memcpy(opblk+5, buf, buflen);
+
+ size = i2o_issue_params(I2O_CMD_UTIL_PARAMS_SET, iop, tid,
+ opblk, 10+buflen, resblk, sizeof(resblk));
+
+ kfree(opblk);
+ if(size>buflen)
+ return buflen;
+ return size;
+}
+
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following fail status are common to all classes.
+ * The preserved message must be handled in the reply handler.
+ */
+void i2o_report_fail_status(u8 req_status, u32* msg)
+{
+ static char *FAIL_STATUS[] = {
+ "0x80", /* not used */
+ "SERVICE_SUSPENDED", /* 0x81 */
+ "SERVICE_TERMINATED", /* 0x82 */
+ "CONGESTION",
+ "FAILURE",
+ "STATE_ERROR",
+ "TIME_OUT",
+ "ROUTING_FAILURE",
+ "INVALID_VERSION",
+ "INVALID_OFFSET",
+ "INVALID_MSG_FLAGS",
+ "FRAME_TOO_SMALL",
+ "FRAME_TOO_LARGE",
+ "INVALID_TARGET_ID",
+ "INVALID_INITIATOR_ID",
+ "INVALID_INITIATOR_CONTEXT", /* 0x8F */
+ "UNKNOWN_FAILURE" /* 0xFF */
+ };
+
+ if (req_status == I2O_FSC_TRANSPORT_UNKNOWN_FAILURE)
+ printk("TRANSPORT_UNKNOWN_FAILURE (%0#2x).\n", req_status);
+ else
+ printk("TRANSPORT_%s.\n", FAIL_STATUS[req_status & 0x0F]);
+
+ /* Dump some details */
+
+ printk(KERN_ERR " InitiatorId = %d, TargetId = %d\n",
+ (msg[1] >> 12) & 0xFFF, msg[1] & 0xFFF);
+ printk(KERN_ERR " LowestVersion = 0x%02X, HighestVersion = 0x%02X\n",
+ (msg[4] >> 8) & 0xFF, msg[4] & 0xFF);
+ printk(KERN_ERR " FailingHostUnit = 0x%04X, FailingIOP = 0x%03X\n",
+ msg[5] >> 16, msg[5] & 0xFFF);
+
+ printk(KERN_ERR " Severity: 0x%02X ", (msg[4] >> 16) & 0xFF);
+ if (msg[4] & (1<<16))
+ printk("(FormatError), "
+ "this msg can never be delivered/processed.\n");
+ if (msg[4] & (1<<17))
+ printk("(PathError), "
+ "this msg can no longer be delivered/processed.\n");
+ if (msg[4] & (1<<18))
+ printk("(PathState), "
+ "the system state does not allow delivery.\n");
+ if (msg[4] & (1<<19))
+ printk("(Congestion), resources temporarily not available; "
+ "do not retry immediately.\n");
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following reply status are common to all classes.
+ */
+void i2o_report_common_status(u8 req_status)
+{
+ static char *REPLY_STATUS[] = {
+ "SUCCESS",
+ "ABORT_DIRTY",
+ "ABORT_NO_DATA_TRANSFER",
+ "ABORT_PARTIAL_TRANSFER",
+ "ERROR_DIRTY",
+ "ERROR_NO_DATA_TRANSFER",
+ "ERROR_PARTIAL_TRANSFER",
+ "PROCESS_ABORT_DIRTY",
+ "PROCESS_ABORT_NO_DATA_TRANSFER",
+ "PROCESS_ABORT_PARTIAL_TRANSFER",
+ "TRANSACTION_ERROR",
+ "PROGRESS_REPORT"
+ };
+
+ if (req_status >= ARRAY_SIZE(REPLY_STATUS))
+ printk("RequestStatus = %0#2x", req_status);
+ else
+ printk("%s", REPLY_STATUS[req_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Following detailed status are valid for executive class,
+ * utility class, DDM class and for transaction error replies.
+ */
+static void i2o_report_common_dsc(u16 detailed_status)
+{
+ static char *COMMON_DSC[] = {
+ "SUCCESS",
+ "0x01", // not used
+ "BAD_KEY",
+ "TCL_ERROR",
+ "REPLY_BUFFER_FULL",
+ "NO_SUCH_PAGE",
+ "INSUFFICIENT_RESOURCE_SOFT",
+ "INSUFFICIENT_RESOURCE_HARD",
+ "0x08", // not used
+ "CHAIN_BUFFER_TOO_LARGE",
+ "UNSUPPORTED_FUNCTION",
+ "DEVICE_LOCKED",
+ "DEVICE_RESET",
+ "INAPPROPRIATE_FUNCTION",
+ "INVALID_INITIATOR_ADDRESS",
+ "INVALID_MESSAGE_FLAGS",
+ "INVALID_OFFSET",
+ "INVALID_PARAMETER",
+ "INVALID_REQUEST",
+ "INVALID_TARGET_ADDRESS",
+ "MESSAGE_TOO_LARGE",
+ "MESSAGE_TOO_SMALL",
+ "MISSING_PARAMETER",
+ "TIMEOUT",
+ "UNKNOWN_ERROR",
+ "UNKNOWN_FUNCTION",
+ "UNSUPPORTED_VERSION",
+ "DEVICE_BUSY",
+ "DEVICE_NOT_AVAILABLE"
+ };
+
+ if (detailed_status > I2O_DSC_DEVICE_NOT_AVAILABLE)
+ printk(" / DetailedStatus = %0#4x.\n", detailed_status);
+ else
+ printk(" / %s.\n", COMMON_DSC[detailed_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_lan_dsc(u16 detailed_status)
+{
+ static char *LAN_DSC[] = { // Lan detailed status code strings
+ "SUCCESS",
+ "DEVICE_FAILURE",
+ "DESTINATION_NOT_FOUND",
+ "TRANSMIT_ERROR",
+ "TRANSMIT_ABORTED",
+ "RECEIVE_ERROR",
+ "RECEIVE_ABORTED",
+ "DMA_ERROR",
+ "BAD_PACKET_DETECTED",
+ "OUT_OF_MEMORY",
+ "BUCKET_OVERRUN",
+ "IOP_INTERNAL_ERROR",
+ "CANCELED",
+ "INVALID_TRANSACTION_CONTEXT",
+ "DEST_ADDRESS_DETECTED",
+ "DEST_ADDRESS_OMITTED",
+ "PARTIAL_PACKET_RETURNED",
+ "TEMP_SUSPENDED_STATE", // last Lan detailed status code
+ "INVALID_REQUEST" // general detailed status code
+ };
+
+ if (detailed_status > I2O_DSC_INVALID_REQUEST)
+ printk(" / %0#4x.\n", detailed_status);
+ else
+ printk(" / %s.\n", LAN_DSC[detailed_status]);
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_util_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case I2O_CMD_UTIL_NOP:
+ printk("UTIL_NOP, ");
+ break;
+ case I2O_CMD_UTIL_ABORT:
+ printk("UTIL_ABORT, ");
+ break;
+ case I2O_CMD_UTIL_CLAIM:
+ printk("UTIL_CLAIM, ");
+ break;
+ case I2O_CMD_UTIL_RELEASE:
+ printk("UTIL_CLAIM_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_CONFIG_DIALOG:
+ printk("UTIL_CONFIG_DIALOG, ");
+ break;
+ case I2O_CMD_UTIL_DEVICE_RESERVE:
+ printk("UTIL_DEVICE_RESERVE, ");
+ break;
+ case I2O_CMD_UTIL_DEVICE_RELEASE:
+ printk("UTIL_DEVICE_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_EVT_ACK:
+ printk("UTIL_EVENT_ACKNOWLEDGE, ");
+ break;
+ case I2O_CMD_UTIL_EVT_REGISTER:
+ printk("UTIL_EVENT_REGISTER, ");
+ break;
+ case I2O_CMD_UTIL_LOCK:
+ printk("UTIL_LOCK, ");
+ break;
+ case I2O_CMD_UTIL_LOCK_RELEASE:
+ printk("UTIL_LOCK_RELEASE, ");
+ break;
+ case I2O_CMD_UTIL_PARAMS_GET:
+ printk("UTIL_PARAMS_GET, ");
+ break;
+ case I2O_CMD_UTIL_PARAMS_SET:
+ printk("UTIL_PARAMS_SET, ");
+ break;
+ case I2O_CMD_UTIL_REPLY_FAULT_NOTIFY:
+ printk("UTIL_REPLY_FAULT_NOTIFY, ");
+ break;
+ default:
+ printk("Cmd = %0#2x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_exec_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case I2O_CMD_ADAPTER_ASSIGN:
+ printk("EXEC_ADAPTER_ASSIGN, ");
+ break;
+ case I2O_CMD_ADAPTER_READ:
+ printk("EXEC_ADAPTER_READ, ");
+ break;
+ case I2O_CMD_ADAPTER_RELEASE:
+ printk("EXEC_ADAPTER_RELEASE, ");
+ break;
+ case I2O_CMD_BIOS_INFO_SET:
+ printk("EXEC_BIOS_INFO_SET, ");
+ break;
+ case I2O_CMD_BOOT_DEVICE_SET:
+ printk("EXEC_BOOT_DEVICE_SET, ");
+ break;
+ case I2O_CMD_CONFIG_VALIDATE:
+ printk("EXEC_CONFIG_VALIDATE, ");
+ break;
+ case I2O_CMD_CONN_SETUP:
+ printk("EXEC_CONN_SETUP, ");
+ break;
+ case I2O_CMD_DDM_DESTROY:
+ printk("EXEC_DDM_DESTROY, ");
+ break;
+ case I2O_CMD_DDM_ENABLE:
+ printk("EXEC_DDM_ENABLE, ");
+ break;
+ case I2O_CMD_DDM_QUIESCE:
+ printk("EXEC_DDM_QUIESCE, ");
+ break;
+ case I2O_CMD_DDM_RESET:
+ printk("EXEC_DDM_RESET, ");
+ break;
+ case I2O_CMD_DDM_SUSPEND:
+ printk("EXEC_DDM_SUSPEND, ");
+ break;
+ case I2O_CMD_DEVICE_ASSIGN:
+ printk("EXEC_DEVICE_ASSIGN, ");
+ break;
+ case I2O_CMD_DEVICE_RELEASE:
+ printk("EXEC_DEVICE_RELEASE, ");
+ break;
+ case I2O_CMD_HRT_GET:
+ printk("EXEC_HRT_GET, ");
+ break;
+ case I2O_CMD_ADAPTER_CLEAR:
+ printk("EXEC_IOP_CLEAR, ");
+ break;
+ case I2O_CMD_ADAPTER_CONNECT:
+ printk("EXEC_IOP_CONNECT, ");
+ break;
+ case I2O_CMD_ADAPTER_RESET:
+ printk("EXEC_IOP_RESET, ");
+ break;
+ case I2O_CMD_LCT_NOTIFY:
+ printk("EXEC_LCT_NOTIFY, ");
+ break;
+ case I2O_CMD_OUTBOUND_INIT:
+ printk("EXEC_OUTBOUND_INIT, ");
+ break;
+ case I2O_CMD_PATH_ENABLE:
+ printk("EXEC_PATH_ENABLE, ");
+ break;
+ case I2O_CMD_PATH_QUIESCE:
+ printk("EXEC_PATH_QUIESCE, ");
+ break;
+ case I2O_CMD_PATH_RESET:
+ printk("EXEC_PATH_RESET, ");
+ break;
+ case I2O_CMD_STATIC_MF_CREATE:
+ printk("EXEC_STATIC_MF_CREATE, ");
+ break;
+ case I2O_CMD_STATIC_MF_RELEASE:
+ printk("EXEC_STATIC_MF_RELEASE, ");
+ break;
+ case I2O_CMD_STATUS_GET:
+ printk("EXEC_STATUS_GET, ");
+ break;
+ case I2O_CMD_SW_DOWNLOAD:
+ printk("EXEC_SW_DOWNLOAD, ");
+ break;
+ case I2O_CMD_SW_UPLOAD:
+ printk("EXEC_SW_UPLOAD, ");
+ break;
+ case I2O_CMD_SW_REMOVE:
+ printk("EXEC_SW_REMOVE, ");
+ break;
+ case I2O_CMD_SYS_ENABLE:
+ printk("EXEC_SYS_ENABLE, ");
+ break;
+ case I2O_CMD_SYS_MODIFY:
+ printk("EXEC_SYS_MODIFY, ");
+ break;
+ case I2O_CMD_SYS_QUIESCE:
+ printk("EXEC_SYS_QUIESCE, ");
+ break;
+ case I2O_CMD_SYS_TAB_SET:
+ printk("EXEC_SYS_TAB_SET, ");
+ break;
+ default:
+ printk("Cmd = %#02x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes
+ */
+static void i2o_report_lan_cmd(u8 cmd)
+{
+ switch (cmd) {
+ case LAN_PACKET_SEND:
+ printk("LAN_PACKET_SEND, ");
+ break;
+ case LAN_SDU_SEND:
+ printk("LAN_SDU_SEND, ");
+ break;
+ case LAN_RECEIVE_POST:
+ printk("LAN_RECEIVE_POST, ");
+ break;
+ case LAN_RESET:
+ printk("LAN_RESET, ");
+ break;
+ case LAN_SUSPEND:
+ printk("LAN_SUSPEND, ");
+ break;
+ default:
+ printk("Cmd = %0#2x, ",cmd);
+ }
+}
+
+/*
+ * Used for error reporting/debugging purposes.
+ * Report Cmd name, Request status, Detailed Status.
+ */
+void i2o_report_status(const char *severity, const char *str, u32 *msg)
+{
+ u8 cmd = (msg[1]>>24)&0xFF;
+ u8 req_status = (msg[4]>>24)&0xFF;
+ u16 detailed_status = msg[4]&0xFFFF;
+ struct i2o_handler *h = i2o_handlers[msg[2] & (MAX_I2O_MODULES-1)];
+
+ if (cmd == I2O_CMD_UTIL_EVT_REGISTER)
+ return; // No status in this reply
+
+ printk("%s%s: ", severity, str);
+
+ if (cmd < 0x1F) // Utility cmd
+ i2o_report_util_cmd(cmd);
+
+ else if (cmd >= 0xA0 && cmd <= 0xEF) // Executive cmd
+ i2o_report_exec_cmd(cmd);
+
+ else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
+ i2o_report_lan_cmd(cmd); // LAN cmd
+ else
+ printk("Cmd = %0#2x, ", cmd); // Other cmds
+
+ if (msg[0] & MSG_FAIL) {
+ i2o_report_fail_status(req_status, msg);
+ return;
+ }
+
+ i2o_report_common_status(req_status);
+
+ if (cmd < 0x1F || (cmd >= 0xA0 && cmd <= 0xEF))
+ i2o_report_common_dsc(detailed_status);
+ else if (h->class == I2O_CLASS_LAN && cmd >= 0x30 && cmd <= 0x3F)
+ i2o_report_lan_dsc(detailed_status);
+ else
+ printk(" / DetailedStatus = %0#4x.\n", detailed_status);
+}
+
+/* Used to dump a message to syslog during debugging */
+void i2o_dump_message(u32 *msg)
+{
+#ifdef DRIVERDEBUG
+ int i;
+ printk(KERN_INFO "Dumping I2O message size %d @ %p\n",
+ msg[0]>>16&0xffff, msg);
+ for(i = 0; i < ((msg[0]>>16)&0xffff); i++)
+ printk(KERN_INFO " msg[%d] = %0#10x\n", i, msg[i]);
+#endif
+}
+
+/*
+ * I2O reboot/shutdown notification.
+ *
+ * - Call each OSM's reboot notifier (if one exists)
+ * - Quiesce each IOP in the system
+ *
+ * Each IOP has to be quiesced before we can ensure that the system
+ * can be properly shut down, as a transaction that has already been
+ * acknowledged still needs to be placed in permanent store on the IOP.
+ * The SysQuiesce causes the IOP to force all HDMs to complete their
+ * transactions before returning, so only at that point is it safe to
+ * take the system down.
+ *
+ */
+static int i2o_reboot_event(struct notifier_block *n, unsigned long code,
+ void *p)
+{
+ int i = 0;
+ struct i2o_controller *c = NULL;
+
+ if(code != SYS_RESTART && code != SYS_HALT && code != SYS_POWER_OFF)
+ return NOTIFY_DONE;
+
+ printk(KERN_INFO "Shutting down I2O system.\n");
+ printk(KERN_INFO
+ " This could take a few minutes if there are many devices attached\n");
+
+ for(i = 0; i < MAX_I2O_MODULES; i++)
+ {
+ if(i2o_handlers[i] && i2o_handlers[i]->reboot_notify)
+ i2o_handlers[i]->reboot_notify();
+ }
+
+ for(c = i2o_controller_chain; c; c = c->next)
+ {
+ if(i2o_quiesce_controller(c))
+ {
+ printk(KERN_WARNING "i2o: Could not quiesce %s.\n"
+ "Verify setup on next system power up.\n",
+ c->name);
+ }
+ }
+
+ printk(KERN_INFO "I2O system down.\n");
+ return NOTIFY_DONE;
+}
+
+
+
+
+/**
+ * i2o_pci_dispose - Free bus specific resources
+ * @c: I2O controller
+ *
+ * Disable interrupts and then free interrupt, I/O and mtrr resources
+ * used by this controller. Called by the I2O core on unload.
+ */
+
+static void i2o_pci_dispose(struct i2o_controller *c)
+{
+ I2O_IRQ_WRITE32(c,0xFFFFFFFF);
+ if(c->irq > 0)
+ free_irq(c->irq, c);
+ iounmap(c->base_virt);
+ if(c->raptor)
+ iounmap(c->msg_virt);
+
+#ifdef CONFIG_MTRR
+ if(c->mtrr_reg0 > 0)
+ mtrr_del(c->mtrr_reg0, 0, 0);
+ if(c->mtrr_reg1 > 0)
+ mtrr_del(c->mtrr_reg1, 0, 0);
+#endif
+}
+
+/**
+ * i2o_pci_interrupt - Bus specific interrupt handler
+ * @irq: interrupt line
+ * @dev_id: cookie
+ *
+ * Handle an interrupt from a PCI based I2O controller. This turns out
+ * to be rather simple. We keep the controller pointer in the cookie.
+ */
+
+static irqreturn_t i2o_pci_interrupt(int irq, void *dev_id, struct pt_regs *r)
+{
+ struct i2o_controller *c = dev_id;
+ i2o_run_queue(c);
+ return IRQ_HANDLED;
+}
+
+/**
+ * i2o_pci_install - Install a PCI i2o controller
+ * @dev: PCI device of the I2O controller
+ *
+ * Install a PCI (or in theory AGP) i2o controller. Devices are
+ * initialized, configured and registered with the i2o core subsystem. Be
+ * very careful with ordering. There may be pending interrupts.
+ *
+ * To Do: Add support for polled controllers
+ */
+
+int __init i2o_pci_install(struct pci_dev *dev)
+{
+ struct i2o_controller *c=kmalloc(sizeof(struct i2o_controller),
+ GFP_KERNEL);
+ void *bar0_virt;
+ void *bar1_virt;
+ unsigned long bar0_phys = 0;
+ unsigned long bar1_phys = 0;
+ unsigned long bar0_size = 0;
+ unsigned long bar1_size = 0;
+
+ int i;
+
+ if(c==NULL)
+ {
+ printk(KERN_ERR "i2o: Insufficient memory to add controller.\n");
+ return -ENOMEM;
+ }
+ memset(c, 0, sizeof(*c));
+
+ c->irq = -1;
+ c->dpt = 0;
+ c->raptor = 0;
+ c->short_req = 0;
+ c->pdev = dev;
+
+#if BITS_PER_LONG == 64
+ c->context_list_lock = SPIN_LOCK_UNLOCKED;
+#endif
+
+ /*
+ * Cards that fall apart if you hit them with large I/O
+ * loads...
+ */
+
+ if(dev->vendor == PCI_VENDOR_ID_NCR && dev->device == 0x0630)
+ {
+ c->short_req = 1;
+ printk(KERN_INFO "I2O: Symbios FC920 workarounds activated.\n");
+ }
+
+ if(dev->subsystem_vendor == PCI_VENDOR_ID_PROMISE)
+ {
+ c->promise = 1;
+ printk(KERN_INFO "I2O: Promise workarounds activated.\n");
+ }
+
+ /*
+ * Cards that go bananas if you quiesce them before you reset
+ * them
+ */
+
+ if(dev->vendor == PCI_VENDOR_ID_DPT) {
+ c->dpt=1;
+ if(dev->device == 0xA511)
+ c->raptor=1;
+ }
+
+ for(i=0; i<6; i++)
+ {
+ /* Skip I/O spaces */
+ if(!(pci_resource_flags(dev, i) & IORESOURCE_IO))
+ {
+ if(!bar0_phys)
+ {
+ bar0_phys = pci_resource_start(dev, i);
+ bar0_size = pci_resource_len(dev, i);
+ if(!c->raptor)
+ break;
+ }
+ else
+ {
+ bar1_phys = pci_resource_start(dev, i);
+ bar1_size = pci_resource_len(dev, i);
+ break;
+ }
+ }
+ }
+
+ if(i==6)
+ {
+ printk(KERN_ERR "i2o: I2O controller has no memory regions defined.\n");
+ kfree(c);
+ return -EINVAL;
+ }
+
+
+ /* Map the I2O controller */
+ if(!c->raptor)
+ printk(KERN_INFO "i2o: PCI I2O controller at %08lX size=%ld\n", bar0_phys, bar0_size);
+ else
+ printk(KERN_INFO "i2o: PCI I2O controller\n BAR0 at 0x%08lX size=%ld\n BAR1 at 0x%08lX size=%ld\n", bar0_phys, bar0_size, bar1_phys, bar1_size);
+
+ bar0_virt = ioremap(bar0_phys, bar0_size);
+ if(bar0_virt==0)
+ {
+ printk(KERN_ERR "i2o: Unable to map controller.\n");
+ kfree(c);
+ return -EINVAL;
+ }
+
+ if(c->raptor)
+ {
+ bar1_virt = ioremap(bar1_phys, bar1_size);
+ if(bar1_virt==0)
+ {
+ printk(KERN_ERR "i2o: Unable to map controller.\n");
+ kfree(c);
+ iounmap(bar0_virt);
+ return -EINVAL;
+ }
+ } else {
+ bar1_virt = bar0_virt;
+ bar1_phys = bar0_phys;
+ bar1_size = bar0_size;
+ }
+
+ c->irq_mask = bar0_virt+0x34;
+ c->post_port = bar0_virt+0x40;
+ c->reply_port = bar0_virt+0x44;
+
+ c->base_phys = bar0_phys;
+ c->base_virt = bar0_virt;
+ c->msg_phys = bar1_phys;
+ c->msg_virt = bar1_virt;
+
+ /*
+ * Enable Write Combining MTRR for IOP's memory region
+ */
+#ifdef CONFIG_MTRR
+ c->mtrr_reg0 = mtrr_add(c->base_phys, bar0_size, MTRR_TYPE_WRCOMB, 1);
+ /*
+ * If it is an INTEL i960 I/O processor then set the first 64K to
+ * Uncacheable since the region contains the Messaging unit which
+ * shouldn't be cached.
+ */
+ c->mtrr_reg1 = -1;
+ if(dev->vendor == PCI_VENDOR_ID_INTEL || dev->vendor == PCI_VENDOR_ID_DPT)
+ {
+ printk(KERN_INFO "I2O: MTRR workaround for Intel i960 processor\n");
+ c->mtrr_reg1 = mtrr_add(c->base_phys, 65536, MTRR_TYPE_UNCACHABLE, 1);
+ if(c->mtrr_reg1< 0)
+ {
+ printk(KERN_INFO "i2o_pci: Error in setting MTRR_TYPE_UNCACHABLE\n");
+ mtrr_del(c->mtrr_reg0, c->msg_phys, bar1_size);
+ c->mtrr_reg0 = -1;
+ }
+ }
+ if(c->raptor)
+ c->mtrr_reg1 = mtrr_add(c->msg_phys, bar1_size, MTRR_TYPE_WRCOMB, 1);
+
+#endif
+
+ I2O_IRQ_WRITE32(c,0xFFFFFFFF);
+
+ i = i2o_install_controller(c);
+
+ if(i<0)
+ {
+ printk(KERN_ERR "i2o: Unable to install controller.\n");
+ kfree(c);
+ iounmap(bar0_virt);
+ if(c->raptor)
+ iounmap(bar1_virt);
+ return i;
+ }
+
+ c->irq = dev->irq;
+ if(c->irq)
+ {
+ i=request_irq(dev->irq, i2o_pci_interrupt, SA_SHIRQ,
+ c->name, c);
+ if(i<0)
+ {
+ printk(KERN_ERR "%s: unable to allocate interrupt %d.\n",
+ c->name, dev->irq);
+ c->irq = -1;
+ i2o_delete_controller(c);
+ iounmap(bar0_virt);
+ if(c->raptor)
+ iounmap(bar1_virt);
+ return -EBUSY;
+ }
+ }
+
+ printk(KERN_INFO "%s: Installed at IRQ%d\n", c->name, dev->irq);
+ I2O_IRQ_WRITE32(c,0x0);
+ c->enabled = 1;
+ return 0;
+}
+
+/**
+ * i2o_pci_scan - Scan the pci bus for controllers
+ *
+ * Scan the PCI devices on the system looking for any device which is a
+ * member of the Intelligent I/O (I2O) class. We attempt to set up each such device
+ * and register it with the core.
+ *
+ * Returns the number of controllers registered
+ *
+ * Note; Do not change this to a hot plug interface. I2O 1.5 itself
+ * does not support hot plugging.
+ */
+
+int __init i2o_pci_scan(void)
+{
+ struct pci_dev *dev = NULL;
+ int count=0;
+
+ printk(KERN_INFO "i2o: Checking for PCI I2O controllers...\n");
+
+ while ((dev = pci_find_device(PCI_ANY_ID, PCI_ANY_ID, dev)) != NULL)
+ {
+ if((dev->class>>8)!=PCI_CLASS_INTELLIGENT_I2O &&
+ (dev->vendor!=PCI_VENDOR_ID_DPT || dev->device!=0xA511))
+ continue;
+
+ if((dev->class>>8)==PCI_CLASS_INTELLIGENT_I2O &&
+ (dev->class&0xFF)>1)
+ {
+ printk(KERN_INFO "i2o: I2O Controller found but does not support I2O 1.5 (skipping).\n");
+ continue;
+ }
+ if (pci_enable_device(dev))
+ continue;
+ printk(KERN_INFO "i2o: I2O controller on bus %d at %d.\n",
+ dev->bus->number, dev->devfn);
+ if(pci_set_dma_mask(dev, 0xffffffff))
+ {
+ printk(KERN_WARNING "I2O controller on bus %d at %d : No suitable DMA available\n", dev->bus->number, dev->devfn);
+ continue;
+ }
+ pci_set_master(dev);
+ if(i2o_pci_install(dev)==0)
+ count++;
+ }
+ if(count)
+ printk(KERN_INFO "i2o: %d I2O controller%s found and installed.\n", count,
+ count==1?"":"s");
+ return count?count:-ENODEV;
+}
+
+static int i2o_core_init(void)
+{
+ printk(KERN_INFO "I2O Core - (C) Copyright 1999 Red Hat Software\n");
+ if (i2o_install_handler(&i2o_core_handler) < 0)
+ {
+ printk(KERN_ERR "i2o_core: Unable to install core handler.\nI2O stack not loaded!\n");
+ return 0;
+ }
+
+ core_context = i2o_core_handler.context;
+
+ /*
+ * Initialize event handling thread
+ */
+
+ init_MUTEX_LOCKED(&evt_sem);
+ evt_pid = kernel_thread(i2o_core_evt, &evt_reply, CLONE_SIGHAND);
+ if(evt_pid < 0)
+ {
+ printk(KERN_ERR "I2O: Could not create event handler kernel thread\n");
+ i2o_remove_handler(&i2o_core_handler);
+ return 0;
+ }
+ else
+ printk(KERN_INFO "I2O: Event thread created as pid %d\n", evt_pid);
+
+ i2o_pci_scan();
+ if(i2o_num_controllers)
+ i2o_sys_init();
+
+ register_reboot_notifier(&i2o_reboot_notifier);
+
+ return 0;
+}
+
+static void i2o_core_exit(void)
+{
+ int stat;
+
+ unregister_reboot_notifier(&i2o_reboot_notifier);
+
+ if(i2o_num_controllers)
+ i2o_sys_shutdown();
+
+ /*
+ * If this is shutdown time, the thread has already been killed
+ */
+ if(evt_running) {
+ printk("Terminating i2o threads...");
+ stat = kill_proc(evt_pid, SIGKILL, 1);
+ if(!stat) {
+ printk("waiting...\n");
+ wait_for_completion(&evt_dead);
+ }
+ printk("done.\n");
+ }
+ i2o_remove_handler(&i2o_core_handler);
+}
+
+module_init(i2o_core_init);
+module_exit(i2o_core_exit);
+
+MODULE_PARM(verbose, "i");
+MODULE_PARM_DESC(verbose, "Verbose diagnostics");
+
+MODULE_AUTHOR("Red Hat Software");
+MODULE_DESCRIPTION("I2O Core");
+MODULE_LICENSE("GPL");
+
+EXPORT_SYMBOL(i2o_controller_chain);
+EXPORT_SYMBOL(i2o_num_controllers);
+EXPORT_SYMBOL(i2o_find_controller);
+EXPORT_SYMBOL(i2o_unlock_controller);
+EXPORT_SYMBOL(i2o_status_get);
+EXPORT_SYMBOL(i2o_install_handler);
+EXPORT_SYMBOL(i2o_remove_handler);
+EXPORT_SYMBOL(i2o_install_controller);
+EXPORT_SYMBOL(i2o_delete_controller);
+EXPORT_SYMBOL(i2o_run_queue);
+EXPORT_SYMBOL(i2o_claim_device);
+EXPORT_SYMBOL(i2o_release_device);
+EXPORT_SYMBOL(i2o_device_notify_on);
+EXPORT_SYMBOL(i2o_device_notify_off);
+EXPORT_SYMBOL(i2o_post_this);
+EXPORT_SYMBOL(i2o_post_wait);
+EXPORT_SYMBOL(i2o_post_wait_mem);
+EXPORT_SYMBOL(i2o_query_scalar);
+EXPORT_SYMBOL(i2o_set_scalar);
+EXPORT_SYMBOL(i2o_query_table);
+EXPORT_SYMBOL(i2o_clear_table);
+EXPORT_SYMBOL(i2o_row_add_table);
+EXPORT_SYMBOL(i2o_issue_params);
+EXPORT_SYMBOL(i2o_event_register);
+EXPORT_SYMBOL(i2o_event_ack);
+EXPORT_SYMBOL(i2o_report_status);
+EXPORT_SYMBOL(i2o_dump_message);
+EXPORT_SYMBOL(i2o_get_class_name);
+EXPORT_SYMBOL(i2o_context_list_add);
+EXPORT_SYMBOL(i2o_context_list_get);
+EXPORT_SYMBOL(i2o_context_list_remove);
--- /dev/null
+/*
+ * drivers/mtd/maps/chestnut.c
+ *
+ * $Id: chestnut.c,v 1.1 2005/01/05 16:59:50 dwmw2 Exp $
+ *
+ * Flash map driver for IBM Chestnut (750FXGX Eval)
+ *
+ * We chose not to enable the 8-bit flash, as it contains the firmware and
+ * board info; thus only the 32-bit flash is supported.
+ *
+ * Author: <source@mvista.com>
+ *
+ * 2004 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <platforms/chestnut.h>
+
+static struct map_info chestnut32_map = {
+ .name = "User FS",
+ .size = CHESTNUT_32BIT_SIZE,
+ .bankwidth = 4,
+ .phys = CHESTNUT_32BIT_BASE,
+};
+
+static struct mtd_partition chestnut32_partitions[] = {
+ {
+ .name = "User FS",
+ .offset = 0,
+ .size = CHESTNUT_32BIT_SIZE,
+ }
+};
+
+static struct mtd_info *flash32;
+
+int __init init_chestnut(void)
+{
+ /* 32-bit FLASH */
+
+ chestnut32_map.virt = ioremap(chestnut32_map.phys, chestnut32_map.size);
+
+ if (!chestnut32_map.virt) {
+ printk(KERN_NOTICE "Failed to ioremap 32-bit flash\n");
+ return -EIO;
+ }
+
+ simple_map_init(&chestnut32_map);
+
+ flash32 = do_map_probe("cfi_probe", &chestnut32_map);
+ if (flash32) {
+ flash32->owner = THIS_MODULE;
+ add_mtd_partitions(flash32, chestnut32_partitions,
+ ARRAY_SIZE(chestnut32_partitions));
+ } else {
+ printk(KERN_NOTICE "map probe failed for 32-bit flash\n");
+ return -ENXIO;
+ }
+
+ return 0;
+}
+
+static void __exit
+cleanup_chestnut(void)
+{
+ if (flash32) {
+ del_mtd_partitions(flash32);
+ map_destroy(flash32);
+ }
+
+ if (chestnut32_map.virt) {
+ iounmap((void *)chestnut32_map.virt);
+ chestnut32_map.virt = NULL;
+ }
+}
+
+module_init(init_chestnut);
+module_exit(cleanup_chestnut);
+
+MODULE_DESCRIPTION("MTD map and partitions for IBM Chestnut (750fxgx Eval)");
+MODULE_AUTHOR("<source@mvista.com>");
+MODULE_LICENSE("GPL");
--- /dev/null
+/***********************************************************************
+ * FILE NAME : DC390.H *
+ * BY : C.L. Huang *
+ * Description: Device Driver for Tekram DC-390(T) PCI SCSI *
+ * Bus Master Host Adapter *
+ ***********************************************************************/
+/* $Id: dc390.h,v 2.43.2.22 2000/12/20 00:39:36 garloff Exp $ */
+
+/*
+ * DC390/AMD 53C974 driver, header file
+ */
+
+#ifndef DC390_H
+#define DC390_H
+
+#include <linux/version.h>
+
+#define DC390_BANNER "Tekram DC390/AM53C974"
+#define DC390_VERSION "2.1d 2004-05-27"
+
+/* We don't have eh_abort_handler, eh_device_reset_handler,
+ * eh_bus_reset_handler, eh_host_reset_handler yet!
+ * So long: Use old exception handling :-( */
+#define OLD_EH
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION (2,1,70) || defined (OLD_EH)
+# define NEW_EH
+#else
+# define NEW_EH use_new_eh_code: 1,
+# define USE_NEW_EH
+#endif
+#endif /* DC390_H */
--- /dev/null
+/*
+ * linux/drivers/usb/host/ohci-omap.h
+ *
+ * OMAP OHCI USB controller specific defines
+ */
+
+/* OMAP USB OHCI common defines */
+#define OMAP_OHCI_NAME "omap-ohci"
+#define OMAP_OHCI_BASE 0xfffba000
+#define OMAP_OHCI_SIZE 4096
+
+#define HMC_CLEAR (0x3f << 1)
+#define APLL_NDPLL_SWITCH 0x0001
+#define DPLL_PLL_ENABLE 0x0010
+#define DPLL_LOCK 0x0001
+#define SOFT_REQ_REG_REQ 0x0001
+#define USB_MCLK_EN 0x0010
+#define USB_HOST_HHC_UHOST_EN 0x00000200
+#define SOFT_USB_OTG_REQ (1 << 8)
+#define SOFT_USB_REQ (1 << 3)
+#define STATUS_REQ_REG 0xfffe0840
+#define USB_HOST_DPLL_REQ (1 << 8)
+#define SOFT_DPLL_REQ (1 << 0)
+
+/* OMAP-1510 USB OHCI defines */
+#define OMAP1510_LB_MEMSIZE 32 /* Should be same as SDRAM size */
+#define OMAP1510_LB_CLOCK_DIV 0xfffec10c
+#define OMAP1510_LB_MMU_CTL 0xfffec208
+#define OMAP1510_LB_MMU_LCK 0xfffec224
+#define OMAP1510_LB_MMU_LD_TLB 0xfffec228
+#define OMAP1510_LB_MMU_CAM_H 0xfffec22c
+#define OMAP1510_LB_MMU_CAM_L 0xfffec230
+#define OMAP1510_LB_MMU_RAM_H 0xfffec234
+#define OMAP1510_LB_MMU_RAM_L 0xfffec238
+
+/* OMAP-1610 USB OHCI defines */
+#define USB_TRANSCEIVER_CTRL 0xfffe1064
+#define OTG_REV 0xfffb0400
+
+#define OTG_SYSCON_1 0xfffb0404
+#define OTG_IDLE_EN (1 << 15)
+#define DEV_IDLE_EN (1 << 13)
+
+#define OTG_SYSCON_2 0xfffb0408
+#define OTG_CTRL 0xfffb040c
+#define OTG_IRQ_EN 0xfffb0410
+#define OTG_IRQ_SRC 0xfffb0414
+
+#define OTG_EN (1 << 31)
+#define USBX_SYNCHRO (1 << 30)
+#define SRP_VBUS (1 << 12)
+#define OTG_PADEN (1 << 10)
+#define HMC_PADEN (1 << 9)
+#define UHOST_EN (1 << 8)
+
+/* Hardware specific defines */
+#define OMAP1510_FPGA_HOST_CTRL 0xe800020c
--- /dev/null
+/* Driver for Philips webcam
+ Functions that send various control messages to the webcam, including
+ video modes.
+ (C) 1999-2003 Nemosoft Unv. (webcam@smcc.demon.nl)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/*
+ Changes
+ 2001/08/03 Alvarado Added methods for changing white balance and
+ red/green gains
+ */
+
+/* Control functions for the cam; brightness, contrast, video mode, etc. */
+
+#ifdef __KERNEL__
+#include <asm/uaccess.h>
+#endif
+#include <asm/errno.h>
+#include <linux/version.h>
+
+#include "pwc.h"
+#include "pwc-ioctl.h"
+#include "pwc-uncompress.h"
+
+/* Request types: video */
+#define SET_LUM_CTL 0x01
+#define GET_LUM_CTL 0x02
+#define SET_CHROM_CTL 0x03
+#define GET_CHROM_CTL 0x04
+#define SET_STATUS_CTL 0x05
+#define GET_STATUS_CTL 0x06
+#define SET_EP_STREAM_CTL 0x07
+#define GET_EP_STREAM_CTL 0x08
+#define SET_MPT_CTL 0x0D
+#define GET_MPT_CTL 0x0E
+
+/* Selectors for the Luminance controls [GS]ET_LUM_CTL */
+#define AGC_MODE_FORMATTER 0x2000
+#define PRESET_AGC_FORMATTER 0x2100
+#define SHUTTER_MODE_FORMATTER 0x2200
+#define PRESET_SHUTTER_FORMATTER 0x2300
+#define PRESET_CONTOUR_FORMATTER 0x2400
+#define AUTO_CONTOUR_FORMATTER 0x2500
+#define BACK_LIGHT_COMPENSATION_FORMATTER 0x2600
+#define CONTRAST_FORMATTER 0x2700
+#define DYNAMIC_NOISE_CONTROL_FORMATTER 0x2800
+#define FLICKERLESS_MODE_FORMATTER 0x2900
+#define AE_CONTROL_SPEED 0x2A00
+#define BRIGHTNESS_FORMATTER 0x2B00
+#define GAMMA_FORMATTER 0x2C00
+
+/* Selectors for the Chrominance controls [GS]ET_CHROM_CTL */
+#define WB_MODE_FORMATTER 0x1000
+#define AWB_CONTROL_SPEED_FORMATTER 0x1100
+#define AWB_CONTROL_DELAY_FORMATTER 0x1200
+#define PRESET_MANUAL_RED_GAIN_FORMATTER 0x1300
+#define PRESET_MANUAL_BLUE_GAIN_FORMATTER 0x1400
+#define COLOUR_MODE_FORMATTER 0x1500
+#define SATURATION_MODE_FORMATTER1 0x1600
+#define SATURATION_MODE_FORMATTER2 0x1700
+
+/* Selectors for the Status controls [GS]ET_STATUS_CTL */
+#define SAVE_USER_DEFAULTS_FORMATTER 0x0200
+#define RESTORE_USER_DEFAULTS_FORMATTER 0x0300
+#define RESTORE_FACTORY_DEFAULTS_FORMATTER 0x0400
+#define READ_AGC_FORMATTER 0x0500
+#define READ_SHUTTER_FORMATTER 0x0600
+#define READ_RED_GAIN_FORMATTER 0x0700
+#define READ_BLUE_GAIN_FORMATTER 0x0800
+#define SENSOR_TYPE_FORMATTER1 0x0C00
+#define READ_RAW_Y_MEAN_FORMATTER 0x3100
+#define SET_POWER_SAVE_MODE_FORMATTER 0x3200
+#define MIRROR_IMAGE_FORMATTER 0x3300
+#define LED_FORMATTER 0x3400
+#define SENSOR_TYPE_FORMATTER2 0x3700
+
+/* Formatters for the Video Endpoint controls [GS]ET_EP_STREAM_CTL */
+#define VIDEO_OUTPUT_CONTROL_FORMATTER 0x0100
+
+/* Formatters for the motorized pan & tilt [GS]ET_MPT_CTL */
+#define PT_RELATIVE_CONTROL_FORMATTER 0x01
+#define PT_RESET_CONTROL_FORMATTER 0x02
+#define PT_STATUS_FORMATTER 0x03
+
+static char *size2name[PSZ_MAX] =
+{
+ "subQCIF",
+ "QSIF",
+ "QCIF",
+ "SIF",
+ "CIF",
+ "VGA",
+};
+
+/********/
+
+/* Entries for the Nala (645/646) camera; the Nala doesn't have compression
+ preferences, so you either get compressed or non-compressed streams.
+
+ An alternate value of 0 means this mode is not available at all.
+ */
+
+struct Nala_table_entry {
+ char alternate; /* USB alternate setting */
+ int compressed; /* Compressed yes/no */
+
+ unsigned char mode[3]; /* precomputed mode table */
+};
+
+static struct Nala_table_entry Nala_table[PSZ_MAX][8] =
+{
+#include "pwc_nala.h"
+};
+
+/* This table contains entries for the 675/680/690 (Timon) camera, with
+ 4 different qualities (no compression, low, medium, high).
+ It lists the bandwidth requirements for each mode by its alternate interface
+ number. An alternate of 0 means that the mode is unavailable.
+
+ There are 6 * 6 * 4 entries:
+ 6 different resolutions: subqcif, qsif, qcif, sif, cif, vga
+ 6 framerates: 5, 10, 15, 20, 25, 30
+ 4 compression modes: none, low, medium, high
+
+ When an uncompressed mode is not available, the next available compressed
+ mode will be chosen (unless the decompressor is absent). Sometimes only
+ 1 or 2 compressed modes are available; in that case entries are duplicated.
+*/
+struct Timon_table_entry
+{
+ char alternate; /* USB alternate interface */
+ unsigned short packetsize; /* Normal packet size */
+ unsigned short bandlength; /* Bandlength when decompressing */
+ unsigned char mode[13]; /* precomputed mode settings for cam */
+};
+
+static struct Timon_table_entry Timon_table[PSZ_MAX][6][4] =
+{
+#include "pwc_timon.h"
+};
+
+/* Entries for the Kiara (730/740/750) camera */
+
+struct Kiara_table_entry
+{
+ char alternate; /* USB alternate interface */
+ unsigned short packetsize; /* Normal packet size */
+ unsigned short bandlength; /* Bandlength when decompressing */
+ unsigned char mode[12]; /* precomputed mode settings for cam */
+};
+
+static struct Kiara_table_entry Kiara_table[PSZ_MAX][6][4] =
+{
+#include "pwc_kiara.h"
+};
+
+
+/****************************************************************************/
+
+
+#define SendControlMsg(request, value, buflen) \
+ usb_control_msg(pdev->udev, usb_sndctrlpipe(pdev->udev, 0), \
+ request, \
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, \
+ value, \
+ pdev->vcinterface, \
+ &buf, buflen, HZ / 2)
+
+#define RecvControlMsg(request, value, buflen) \
+ usb_control_msg(pdev->udev, usb_rcvctrlpipe(pdev->udev, 0), \
+ request, \
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, \
+ value, \
+ pdev->vcinterface, \
+ &buf, buflen, HZ / 2)
+
+
+#if PWC_DEBUG
+void pwc_hexdump(void *p, int len)
+{
+ int i;
+ unsigned char *s;
+ char buf[100], *d;
+
+ s = (unsigned char *)p;
+ d = buf;
+ *d = '\0';
+ Debug("Doing hexdump @ %p, %d bytes.\n", p, len);
+ for (i = 0; i < len; i++) {
+ d += sprintf(d, "%02X ", *s++);
+ if ((i & 0xF) == 0xF) {
+ Debug("%s\n", buf);
+ d = buf;
+ *d = '\0';
+ }
+ }
+ if ((i & 0xF) != 0)
+ Debug("%s\n", buf);
+}
+#endif
+
+static inline int send_video_command(struct usb_device *udev, int index, void *buf, int buflen)
+{
+ return usb_control_msg(udev,
+ usb_sndctrlpipe(udev, 0),
+ SET_EP_STREAM_CTL,
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ VIDEO_OUTPUT_CONTROL_FORMATTER,
+ index,
+ buf, buflen, HZ);
+}
+
+
+
+static inline int set_video_mode_Nala(struct pwc_device *pdev, int size, int frames)
+{
+ unsigned char buf[3];
+ int ret, fps;
+ struct Nala_table_entry *pEntry;
+ int frames2frames[31] =
+ { /* closest match of framerate */
+ 0, 0, 0, 0, 4, /* 0-4 */
+ 5, 5, 7, 7, 10, /* 5-9 */
+ 10, 10, 12, 12, 15, /* 10-14 */
+ 15, 15, 15, 20, 20, /* 15-19 */
+ 20, 20, 20, 24, 24, /* 20-24 */
+ 24, 24, 24, 24, 24, /* 25-29 */
+ 24 /* 30 */
+ };
+ int frames2table[31] =
+ { 0, 0, 0, 0, 0, /* 0-4 */
+ 1, 1, 1, 2, 2, /* 5-9 */
+ 3, 3, 4, 4, 4, /* 10-14 */
+ 5, 5, 5, 5, 5, /* 15-19 */
+ 6, 6, 6, 6, 7, /* 20-24 */
+ 7, 7, 7, 7, 7, /* 25-29 */
+ 7 /* 30 */
+ };
+
+ if (size < 0 || size > PSZ_CIF || frames < 4 || frames > 25)
+ return -EINVAL;
+ frames = frames2frames[frames];
+ fps = frames2table[frames];
+ pEntry = &Nala_table[size][fps];
+ if (pEntry->alternate == 0)
+ return -EINVAL;
+
+ if (pEntry->compressed && pdev->decompressor == NULL)
+ return -ENOENT; /* Not supported. */
+
+ memcpy(buf, pEntry->mode, 3);
+ ret = send_video_command(pdev->udev, pdev->vendpoint, buf, 3);
+ if (ret < 0) {
+ Debug("Failed to send video command... %d\n", ret);
+ return ret;
+ }
+ if (pEntry->compressed && pdev->decompressor != 0 && pdev->vpalette != VIDEO_PALETTE_RAW)
+ pdev->decompressor->init(pdev->type, pdev->release, buf, pdev->decompress_data);
+
+ pdev->cmd_len = 3;
+ memcpy(pdev->cmd_buf, buf, 3);
+
+ /* Set various parameters */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->valternate = pEntry->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 3) / 2;
+ if (pEntry->compressed) {
+ if (pdev->release < 5) { /* 4 fold compression */
+ pdev->vbandlength = 528;
+ pdev->frame_size /= 4;
+ }
+ else {
+ pdev->vbandlength = 704;
+ pdev->frame_size /= 3;
+ }
+ }
+ else
+ pdev->vbandlength = 0;
+ return 0;
+}
+
+
+static inline int set_video_mode_Timon(struct pwc_device *pdev, int size, int frames, int compression, int snapshot)
+{
+ unsigned char buf[13];
+ struct Timon_table_entry *pChoose;
+ int ret, fps;
+
+ if (size >= PSZ_MAX || frames < 5 || frames > 30 || compression < 0 || compression > 3)
+ return -EINVAL;
+ if (size == PSZ_VGA && frames > 15)
+ return -EINVAL;
+ fps = (frames / 5) - 1;
+
+ /* Find a supported framerate with progressively higher compression ratios
+ if the preferred ratio is not available.
+ */
+ pChoose = NULL;
+ if (pdev->decompressor == NULL) {
+#if PWC_DEBUG
+ Debug("Trying to find uncompressed mode.\n");
+#endif
+ pChoose = &Timon_table[size][fps][0];
+ }
+ else {
+ while (compression <= 3) {
+ pChoose = &Timon_table[size][fps][compression];
+ if (pChoose->alternate != 0)
+ break;
+ compression++;
+ }
+ }
+ if (pChoose == NULL || pChoose->alternate == 0)
+ return -ENOENT; /* Not supported. */
+
+ memcpy(buf, pChoose->mode, 13);
+ if (snapshot)
+ buf[0] |= 0x80;
+ ret = send_video_command(pdev->udev, pdev->vendpoint, buf, 13);
+ if (ret < 0)
+ return ret;
+
+ if (pChoose->bandlength > 0 && pdev->decompressor != 0 && pdev->vpalette != VIDEO_PALETTE_RAW)
+ pdev->decompressor->init(pdev->type, pdev->release, buf, pdev->decompress_data);
+
+ pdev->cmd_len = 13;
+ memcpy(pdev->cmd_buf, buf, 13);
+
+ /* Set various parameters */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->vsnapshot = snapshot;
+ pdev->valternate = pChoose->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->vbandlength = pChoose->bandlength;
+ if (pChoose->bandlength > 0)
+ pdev->frame_size = (pChoose->bandlength * pdev->image.y) / 4;
+ else
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 12) / 8;
+ return 0;
+}
+
+
+static inline int set_video_mode_Kiara(struct pwc_device *pdev, int size, int frames, int compression, int snapshot)
+{
+ struct Kiara_table_entry *pChoose = NULL;
+ int fps, ret;
+ unsigned char buf[12];
+ struct Kiara_table_entry RawEntry = {6, 773, 1272, {0xAD, 0xF4, 0x10, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x03, 0x80}};
+
+ if (size >= PSZ_MAX || frames < 5 || frames > 30 || compression < 0 || compression > 3)
+ return -EINVAL;
+ if (size == PSZ_VGA && frames > 15)
+ return -EINVAL;
+ fps = (frames / 5) - 1;
+
+ /* special case: VGA @ 5 fps and snapshot is raw bayer mode */
+ if (size == PSZ_VGA && frames == 5 && snapshot)
+ {
+ /* Only available when the raw palette is selected or the
+ decompressor is present; this mode only exists in
+ compressed form.
+ */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW || pdev->decompressor != NULL)
+ {
+ Info("Choosing VGA/5 BAYER mode (%d).\n", pdev->vpalette);
+ pChoose = &RawEntry;
+ }
+ else
+ {
+ Info("VGA/5 BAYER mode _must_ have a decompressor available, or use RAW palette.\n");
+ }
+ }
+ else
+ {
+ /* Find a supported framerate with progressively higher compression ratios
+ if the preferred ratio is not available.
+ Skip this step when using RAW modes.
+ */
+ if (pdev->decompressor == NULL && pdev->vpalette != VIDEO_PALETTE_RAW) {
+#if PWC_DEBUG
+ Debug("Trying to find uncompressed mode.\n");
+#endif
+ pChoose = &Kiara_table[size][fps][0];
+ }
+ else {
+ while (compression <= 3) {
+ pChoose = &Kiara_table[size][fps][compression];
+ if (pChoose->alternate != 0)
+ break;
+ compression++;
+ }
+ }
+ }
+ if (pChoose == NULL || pChoose->alternate == 0)
+ return -ENOENT; /* Not supported. */
+
+ /* usb_control_msg won't take statically allocated arrays as an argument? */
+ memcpy(buf, pChoose->mode, 12);
+ if (snapshot)
+ buf[0] |= 0x80;
+
+ /* Firmware bug: video endpoint is 5, but commands are sent to endpoint 4 */
+ ret = send_video_command(pdev->udev, 4 /* pdev->vendpoint */, buf, 12);
+ if (ret < 0)
+ return ret;
+
+ if (pChoose->bandlength > 0 && pdev->decompressor != 0 && pdev->vpalette != VIDEO_PALETTE_RAW)
+ pdev->decompressor->init(pdev->type, pdev->release, buf, pdev->decompress_data);
+
+ pdev->cmd_len = 12;
+ memcpy(pdev->cmd_buf, buf, 12);
+ /* All set and go */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->vsnapshot = snapshot;
+ pdev->valternate = pChoose->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->vbandlength = pChoose->bandlength;
+ if (pdev->vbandlength > 0)
+ pdev->frame_size = (pdev->vbandlength * pdev->image.y) / 4;
+ else
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 12) / 8;
+ return 0;
+}
+
+
+
+/**
+ @pdev: device structure
+ @width: viewport width
+ @height: viewport height
+ @frame: framerate, in fps
+ @compression: preferred compression ratio
+ @snapshot: snapshot mode or streaming
+ */
+int pwc_set_video_mode(struct pwc_device *pdev, int width, int height, int frames, int compression, int snapshot)
+{
+ int ret, size;
+
+ Trace(TRACE_FLOW, "set_video_mode(%dx%d @ %d, palette %d).\n", width, height, frames, pdev->vpalette);
+ size = pwc_decode_size(pdev, width, height);
+ if (size < 0) {
+ Debug("Could not find suitable size.\n");
+ return -ERANGE;
+ }
+ Debug("decode_size = %d.\n", size);
+
+ ret = -EINVAL;
+ switch(pdev->type) {
+ case 645:
+ case 646:
+ ret = set_video_mode_Nala(pdev, size, frames);
+ break;
+
+ case 675:
+ case 680:
+ case 690:
+ ret = set_video_mode_Timon(pdev, size, frames, compression, snapshot);
+ break;
+
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ ret = set_video_mode_Kiara(pdev, size, frames, compression, snapshot);
+ break;
+ }
+ if (ret < 0) {
+ if (ret == -ENOENT)
+ Info("Video mode %s@%d fps is only supported with the decompressor module (pwcx).\n", size2name[size], frames);
+ else {
+ Err("Failed to set video mode %s@%d fps; return code = %d\n", size2name[size], frames, ret);
+ }
+ return ret;
+ }
+ pdev->view.x = width;
+ pdev->view.y = height;
+ pdev->frame_total_size = pdev->frame_size + pdev->frame_header_size + pdev->frame_trailer_size;
+ pwc_set_image_buffer_size(pdev);
+ Trace(TRACE_SIZE, "Set viewport to %dx%d, image size is %dx%d.\n", width, height, pwc_image_sizes[size].x, pwc_image_sizes[size].y);
+ return 0;
+}
+
+
+void pwc_set_image_buffer_size(struct pwc_device *pdev)
+{
+ int i, factor = 0, filler = 0;
+
+ /* for PALETTE_YUV420P */
+ switch(pdev->vpalette)
+ {
+ case VIDEO_PALETTE_YUV420P:
+ factor = 6;
+ filler = 128;
+ break;
+ case VIDEO_PALETTE_RAW:
+ factor = 6; /* can be uncompressed YUV420P */
+ filler = 0;
+ break;
+ }
+
+ /* Set sizes in bytes */
+ pdev->image.size = pdev->image.x * pdev->image.y * factor / 4;
+ pdev->view.size = pdev->view.x * pdev->view.y * factor / 4;
+
+ /* Align offset, or you'll get some very weird results in
+ YUV420 mode... x must be multiple of 4 (to get the Y's in
+ place), and y even (or you'll mix up U & V). This is less of a
+ problem for YUV420P.
+ */
+ pdev->offset.x = ((pdev->view.x - pdev->image.x) / 2) & 0xFFFC;
+ pdev->offset.y = ((pdev->view.y - pdev->image.y) / 2) & 0xFFFE;
+
+ /* Fill buffers with gray or black */
+ for (i = 0; i < MAX_IMAGES; i++) {
+ if (pdev->image_ptr[i] != NULL)
+ memset(pdev->image_ptr[i], filler, pdev->view.size);
+ }
+}
+
+
+
+/* BRIGHTNESS */
+
+int pwc_get_brightness(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, BRIGHTNESS_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 9;
+}
+
+int pwc_set_brightness(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 9) & 0x7f;
+ return SendControlMsg(SET_LUM_CTL, BRIGHTNESS_FORMATTER, 1);
+}
+
+/* CONTRAST */
+
+int pwc_get_contrast(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, CONTRAST_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 10;
+}
+
+int pwc_set_contrast(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 10) & 0x3f;
+ return SendControlMsg(SET_LUM_CTL, CONTRAST_FORMATTER, 1);
+}
+
+/* GAMMA */
+
+int pwc_get_gamma(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, GAMMA_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 11;
+}
+
+int pwc_set_gamma(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 11) & 0x1f;
+ return SendControlMsg(SET_LUM_CTL, GAMMA_FORMATTER, 1);
+}
+
+
+/* SATURATION */
+
+int pwc_get_saturation(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ if (pdev->type < 675)
+ return -1;
+ ret = RecvControlMsg(GET_CHROM_CTL, pdev->type < 730 ? SATURATION_MODE_FORMATTER2 : SATURATION_MODE_FORMATTER1, 1);
+ if (ret < 0)
+ return ret;
+ return 32768 + buf * 327;
+}
+
+int pwc_set_saturation(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (pdev->type < 675)
+ return -EINVAL;
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* saturation ranges from -100 to +100 */
+ buf = (value - 32768) / 327;
+ return SendControlMsg(SET_CHROM_CTL, pdev->type < 730 ? SATURATION_MODE_FORMATTER2 : SATURATION_MODE_FORMATTER1, 1);
+}
+
+/* AGC */
+
+static inline int pwc_set_agc(struct pwc_device *pdev, int mode, int value)
+{
+ char buf;
+ int ret;
+
+ if (mode)
+ buf = 0x0; /* auto */
+ else
+ buf = 0xff; /* fixed */
+
+ ret = SendControlMsg(SET_LUM_CTL, AGC_MODE_FORMATTER, 1);
+
+ if (!mode && ret >= 0) {
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 10) & 0x3F;
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_AGC_FORMATTER, 1);
+ }
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_agc(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, AGC_MODE_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (buf != 0) { /* fixed */
+ ret = RecvControlMsg(GET_LUM_CTL, PRESET_AGC_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ if (buf > 0x3F)
+ buf = 0x3F;
+ *value = (buf << 10);
+ }
+ else { /* auto */
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_AGC_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ /* Gah... this value ranges from 0x00 ... 0x9F */
+ if (buf > 0x9F)
+ buf = 0x9F;
+ *value = -(48 + buf * 409);
+ }
+
+ return 0;
+}
+
+static inline int pwc_set_shutter_speed(struct pwc_device *pdev, int mode, int value)
+{
+ char buf[2];
+ int speed, ret;
+
+
+ if (mode)
+ buf[0] = 0x0; /* auto */
+ else
+ buf[0] = 0xff; /* fixed */
+
+ ret = SendControlMsg(SET_LUM_CTL, SHUTTER_MODE_FORMATTER, 1);
+
+ if (!mode && ret >= 0) {
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ switch(pdev->type) {
+ case 675:
+ case 680:
+ case 690:
+ /* speed ranges from 0x0 to 0x290 (656) */
+ speed = (value / 100);
+ buf[1] = speed >> 8;
+ buf[0] = speed & 0xff;
+ break;
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ /* speed seems to range from 0x0 to 0xff */
+ buf[1] = 0;
+ buf[0] = value >> 8;
+ break;
+ }
+
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_SHUTTER_FORMATTER, 2);
+ }
+ return ret;
+}
+
+
+/* POWER */
+
+int pwc_camera_power(struct pwc_device *pdev, int power)
+{
+ char buf;
+
+ if (pdev->type < 675 || (pdev->type < 730 && pdev->release < 6))
+ return 0; /* Not supported by Nala or Timon < release 6 */
+
+ if (power)
+ buf = 0x00; /* active */
+ else
+ buf = 0xFF; /* power save */
+ return SendControlMsg(SET_STATUS_CTL, SET_POWER_SAVE_MODE_FORMATTER, 1);
+}
+
+
+
+/* private calls */
+
+static inline int pwc_restore_user(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, RESTORE_USER_DEFAULTS_FORMATTER, 0);
+}
+
+static inline int pwc_save_user(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, SAVE_USER_DEFAULTS_FORMATTER, 0);
+}
+
+static inline int pwc_restore_factory(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, RESTORE_FACTORY_DEFAULTS_FORMATTER, 0);
+}
+
+ /* ************************************************* */
+ /* Patch by Alvarado (not in the original version) */
+
+ /*
+ * the camera recognizes modes from 0 to 4:
+ *
+ * 00: indoor (incandescent lighting)
+ * 01: outdoor (sunlight)
+ * 02: fluorescent lighting
+ * 03: manual
+ * 04: auto
+ */
+static inline int pwc_set_awb(struct pwc_device *pdev, int mode)
+{
+ char buf;
+ int ret;
+
+ if (mode < 0)
+ mode = 0;
+
+ if (mode > 4)
+ mode = 4;
+
+ buf = mode & 0x07; /* just the lowest three bits */
+
+ ret = SendControlMsg(SET_CHROM_CTL, WB_MODE_FORMATTER, 1);
+
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_awb(struct pwc_device *pdev)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, WB_MODE_FORMATTER, 1);
+
+ if (ret < 0)
+ return ret;
+ return buf;
+}
+
+static inline int pwc_set_red_gain(struct pwc_device *pdev, int value)
+{
+ unsigned char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* only the msb is considered */
+ buf = value >> 8;
+ return SendControlMsg(SET_CHROM_CTL, PRESET_MANUAL_RED_GAIN_FORMATTER, 1);
+}
+
+static inline int pwc_get_red_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, PRESET_MANUAL_RED_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+static inline int pwc_set_blue_gain(struct pwc_device *pdev, int value)
+{
+ unsigned char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* only the msb is considered */
+ buf = value >> 8;
+ return SendControlMsg(SET_CHROM_CTL, PRESET_MANUAL_BLUE_GAIN_FORMATTER, 1);
+}
+
+static inline int pwc_get_blue_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, PRESET_MANUAL_BLUE_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+/* The following two functions are different, since they only read the
+ internal red/blue gains, which may be different from the manual
+ gains set or read above.
+ */
+static inline int pwc_read_red_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_RED_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+static inline int pwc_read_blue_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_BLUE_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+static inline int pwc_set_wb_speed(struct pwc_device *pdev, int speed)
+{
+ unsigned char buf;
+
+ /* useful range is 0x01..0x20 */
+ buf = speed / 0x7f0;
+ return SendControlMsg(SET_CHROM_CTL, AWB_CONTROL_SPEED_FORMATTER, 1);
+}
+
+static inline int pwc_get_wb_speed(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, AWB_CONTROL_SPEED_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf * 0x7f0;
+ return 0;
+}
+
+
+static inline int pwc_set_wb_delay(struct pwc_device *pdev, int delay)
+{
+ unsigned char buf;
+
+ /* useful range is 0x01..0x3F */
+ buf = (delay >> 10);
+ return SendControlMsg(SET_CHROM_CTL, AWB_CONTROL_DELAY_FORMATTER, 1);
+}
+
+static inline int pwc_get_wb_delay(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, AWB_CONTROL_DELAY_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 10;
+ return 0;
+}
+
+
+int pwc_set_leds(struct pwc_device *pdev, int on_value, int off_value)
+{
+ unsigned char buf[2];
+
+ if (pdev->type < 730)
+ return 0;
+ on_value /= 100;
+ off_value /= 100;
+ if (on_value < 0)
+ on_value = 0;
+ if (on_value > 0xff)
+ on_value = 0xff;
+ if (off_value < 0)
+ off_value = 0;
+ if (off_value > 0xff)
+ off_value = 0xff;
+
+ buf[0] = on_value;
+ buf[1] = off_value;
+
+ return SendControlMsg(SET_STATUS_CTL, LED_FORMATTER, 2);
+}
+
+int pwc_get_leds(struct pwc_device *pdev, int *on_value, int *off_value)
+{
+ unsigned char buf[2];
+ int ret;
+
+ if (pdev->type < 730) {
+ *on_value = -1;
+ *off_value = -1;
+ return 0;
+ }
+
+ ret = RecvControlMsg(GET_STATUS_CTL, LED_FORMATTER, 2);
+ if (ret < 0)
+ return ret;
+ *on_value = buf[0] * 100;
+ *off_value = buf[1] * 100;
+ return 0;
+}
+
+static inline int pwc_set_contour(struct pwc_device *pdev, int contour)
+{
+ unsigned char buf;
+ int ret;
+
+ if (contour < 0)
+ buf = 0xff; /* auto contour on */
+ else
+ buf = 0x0; /* auto contour off */
+ ret = SendControlMsg(SET_LUM_CTL, AUTO_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (contour < 0)
+ return 0;
+ if (contour > 0xffff)
+ contour = 0xffff;
+
+ buf = (contour >> 10); /* contour preset is [0..3f] */
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_contour(struct pwc_device *pdev, int *contour)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, AUTO_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (buf == 0) {
+ /* auto mode off, query current preset value */
+ ret = RecvControlMsg(GET_LUM_CTL, PRESET_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *contour = buf << 10;
+ }
+ else
+ *contour = -1;
+ return 0;
+}
+
+
+static inline int pwc_set_backlight(struct pwc_device *pdev, int backlight)
+{
+ unsigned char buf;
+
+ if (backlight)
+ buf = 0xff;
+ else
+ buf = 0x0;
+ return SendControlMsg(SET_LUM_CTL, BACK_LIGHT_COMPENSATION_FORMATTER, 1);
+}
+
+static inline int pwc_get_backlight(struct pwc_device *pdev, int *backlight)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, BACK_LIGHT_COMPENSATION_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *backlight = buf;
+ return 0;
+}
+
+
+static inline int pwc_set_flicker(struct pwc_device *pdev, int flicker)
+{
+ unsigned char buf;
+
+ if (flicker)
+ buf = 0xff;
+ else
+ buf = 0x0;
+ return SendControlMsg(SET_LUM_CTL, FLICKERLESS_MODE_FORMATTER, 1);
+}
+
+static inline int pwc_get_flicker(struct pwc_device *pdev, int *flicker)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, FLICKERLESS_MODE_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *flicker = buf;
+ return 0;
+}
+
+
+static inline int pwc_set_dynamic_noise(struct pwc_device *pdev, int noise)
+{
+ unsigned char buf;
+
+ if (noise < 0)
+ noise = 0;
+ if (noise > 3)
+ noise = 3;
+ buf = noise;
+ return SendControlMsg(SET_LUM_CTL, DYNAMIC_NOISE_CONTROL_FORMATTER, 1);
+}
+
+static inline int pwc_get_dynamic_noise(struct pwc_device *pdev, int *noise)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, DYNAMIC_NOISE_CONTROL_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *noise = buf;
+ return 0;
+}
+
+int pwc_mpt_reset(struct pwc_device *pdev, int flags)
+{
+ unsigned char buf;
+
+ buf = flags & 0x03; /* only lower two bits are currently used */
+ return SendControlMsg(SET_MPT_CTL, PT_RESET_CONTROL_FORMATTER, 1);
+}
+
+static inline int pwc_mpt_set_angle(struct pwc_device *pdev, int pan, int tilt)
+{
+ unsigned char buf[4];
+
+ /* set new relative angle; angles are expressed in degrees * 100,
+ but the cam has .5 degree resolution, hence divide by 200. Also
+ the angle must be multiplied by 64 before it's sent to
+ the cam (??)
+ */
+ pan = 64 * pan / 100;
+ tilt = -64 * tilt / 100; /* positive tilt is down, which is not what the user would expect */
+ buf[0] = pan & 0xFF;
+ buf[1] = (pan >> 8) & 0xFF;
+ buf[2] = tilt & 0xFF;
+ buf[3] = (tilt >> 8) & 0xFF;
+ return SendControlMsg(SET_MPT_CTL, PT_RELATIVE_CONTROL_FORMATTER, 4);
+}
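+/* A quick sanity check of the scaling above. The 64/100 factor and the
+   tilt sign flip are taken straight from pwc_mpt_set_angle; the exact
+   device units remain unclear (see the "??" in the comment), so this is
+   only an illustrative userspace sketch, not driver code:
+
+   ```c
+   #include <assert.h>
+
+   /* Mirror the scaling in pwc_mpt_set_angle: degrees*100 -> device units.
+      These helper names are illustrative, not part of the driver. */
+   static int scale_pan(int pan)   { return 64 * pan / 100; }
+   static int scale_tilt(int tilt) { return -64 * tilt / 100; }
+
+   int main(void)
+   {
+       assert(scale_pan(100) == 64);      /* 1 degree -> 64 units */
+       assert(scale_pan(-3600) == -2304); /* -36 degrees */
+       assert(scale_tilt(100) == -64);    /* positive tilt is flipped */
+       return 0;
+   }
+   ```
+ */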
+
+static inline int pwc_mpt_get_status(struct pwc_device *pdev, struct pwc_mpt_status *status)
+{
+ int ret;
+ unsigned char buf[5];
+
+ ret = RecvControlMsg(GET_MPT_CTL, PT_STATUS_FORMATTER, 5);
+ if (ret < 0)
+ return ret;
+ status->status = buf[0] & 0x7; /* 3 bits are used for reporting */
+ status->time_pan = (buf[1] << 8) + buf[2];
+ status->time_tilt = (buf[3] << 8) + buf[4];
+ return 0;
+}
+
+
+int pwc_get_cmos_sensor(struct pwc_device *pdev, int *sensor)
+{
+ unsigned char buf;
+ int ret = -1, request;
+
+ if (pdev->type < 675)
+ request = SENSOR_TYPE_FORMATTER1;
+ else if (pdev->type < 730)
+ return -1; /* The Vesta series doesn't have this call */
+ else
+ request = SENSOR_TYPE_FORMATTER2;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, request, 1);
+ if (ret < 0)
+ return ret;
+ if (pdev->type < 675)
+ *sensor = buf | 0x100;
+ else
+ *sensor = buf;
+ return 0;
+}
+
+
+ /* End of Add-Ons */
+ /* ************************************************* */
+
+/* Linux 2.5.something and 2.6 pass direct pointers to arguments of
+ ioctl() calls. With 2.4, you have to do tedious copy_from_user()
+ and copy_to_user() calls. With these macros we circumvent this,
+ and let me maintain only one source file. The functionality is
+ exactly the same otherwise.
+ */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+
+/* define local variable for arg */
+#define ARG_DEF(ARG_type, ARG_name)\
+ ARG_type *ARG_name = arg;
+/* copy arg to local variable */
+#define ARG_IN(ARG_name) /* nothing */
+/* argument itself (referenced) */
+#define ARGR(ARG_name) (*ARG_name)
+/* argument address */
+#define ARGA(ARG_name) ARG_name
+/* copy local variable to arg */
+#define ARG_OUT(ARG_name) /* nothing */
+
+#else
+
+#define ARG_DEF(ARG_type, ARG_name)\
+ ARG_type ARG_name;
+#define ARG_IN(ARG_name)\
+ if (copy_from_user(&ARG_name, arg, sizeof(ARG_name))) {\
+ ret = -EFAULT;\
+ break;\
+ }
+#define ARGR(ARG_name) ARG_name
+#define ARGA(ARG_name) &ARG_name
+#define ARG_OUT(ARG_name)\
+ if (copy_to_user(arg, &ARG_name, sizeof(ARG_name))) {\
+ ret = -EFAULT;\
+ break;\
+ }
+
+#endif
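+/* The macro pair above can be illustrated with a small userspace sketch.
+   Here copy_to_user is stubbed with memcpy and the handler/field names
+   are illustrative, not the kernel API; it shows the 2.4-style expansion
+   of a "get" ioctl case (define local, fill it, copy out):
+
+   ```c
+   #include <assert.h>
+   #include <string.h>
+
+   /* Userspace stand-in for the kernel copy helper, for illustration only. */
+   static int copy_to_user_stub(void *to, const void *from, size_t n)
+   {
+       memcpy(to, from, n);
+       return 0;   /* 0 = success, as with the real copy_to_user() */
+   }
+
+   /* Expansion of the VIDIOCPWCGCQUAL case under the 2.4 macros. */
+   static int handle_gcqual(void *arg, int vcompression)
+   {
+       int ret = 0;
+       int qual;                   /* ARG_DEF(int, qual) */
+       qual = vcompression;        /* ARGR(qual) = pdev->vcompression; */
+       if (copy_to_user_stub(arg, &qual, sizeof(qual)))  /* ARG_OUT(qual) */
+           ret = -14;              /* -EFAULT */
+       return ret;
+   }
+
+   int main(void)
+   {
+       int user_qual = -1;
+       assert(handle_gcqual(&user_qual, 2) == 0);
+       assert(user_qual == 2);     /* value copied back to "user space" */
+       return 0;
+   }
+   ```
+ */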
+
+int pwc_ioctl(struct pwc_device *pdev, unsigned int cmd, void *arg)
+{
+ int ret = 0;
+
+ switch(cmd) {
+ case VIDIOCPWCRUSER:
+ {
+ if (pwc_restore_user(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCSUSER:
+ {
+ if (pwc_save_user(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCFACTORY:
+ {
+ if (pwc_restore_factory(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCSCQUAL:
+ {
+ ARG_DEF(int, qual)
+
+ ARG_IN(qual)
+ if (ARGR(qual) < 0 || ARGR(qual) > 3)
+ ret = -EINVAL;
+ else
+ ret = pwc_try_video_mode(pdev, pdev->view.x, pdev->view.y, pdev->vframes, ARGR(qual), pdev->vsnapshot);
+ if (ret >= 0)
+ pdev->vcompression = ARGR(qual);
+ break;
+ }
+
+ case VIDIOCPWCGCQUAL:
+ {
+ ARG_DEF(int, qual)
+
+ ARGR(qual) = pdev->vcompression;
+ ARG_OUT(qual)
+ break;
+ }
+
+ case VIDIOCPWCPROBE:
+ {
+ ARG_DEF(struct pwc_probe, probe)
+
+ strcpy(ARGR(probe).name, pdev->vdev->name);
+ ARGR(probe).type = pdev->type;
+ ARG_OUT(probe)
+ break;
+ }
+
+ case VIDIOCPWCGSERIAL:
+ {
+ ARG_DEF(struct pwc_serial, serial)
+
+ strcpy(ARGR(serial).serial, pdev->serial);
+ ARG_OUT(serial)
+ break;
+ }
+
+ case VIDIOCPWCSAGC:
+ {
+ ARG_DEF(int, agc)
+
+ ARG_IN(agc)
+ if (pwc_set_agc(pdev, ARGR(agc) < 0 ? 1 : 0, ARGR(agc)))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCGAGC:
+ {
+ ARG_DEF(int, agc)
+
+ if (pwc_get_agc(pdev, ARGA(agc)))
+ ret = -EINVAL;
+ ARG_OUT(agc)
+ break;
+ }
+
+ case VIDIOCPWCSSHUTTER:
+ {
+ ARG_DEF(int, shutter_speed)
+
+ ARG_IN(shutter_speed)
+ ret = pwc_set_shutter_speed(pdev, ARGR(shutter_speed) < 0 ? 1 : 0, ARGR(shutter_speed));
+ break;
+ }
+
+ case VIDIOCPWCSAWB:
+ {
+ ARG_DEF(struct pwc_whitebalance, wb)
+
+ ARG_IN(wb)
+ ret = pwc_set_awb(pdev, ARGR(wb).mode);
+ if (ret >= 0 && ARGR(wb).mode == PWC_WB_MANUAL) {
+ pwc_set_red_gain(pdev, ARGR(wb).manual_red);
+ pwc_set_blue_gain(pdev, ARGR(wb).manual_blue);
+ }
+ break;
+ }
+
+ case VIDIOCPWCGAWB:
+ {
+ ARG_DEF(struct pwc_whitebalance, wb)
+
+ memset(ARGA(wb), 0, sizeof(struct pwc_whitebalance));
+ ARGR(wb).mode = pwc_get_awb(pdev);
+ if (ARGR(wb).mode < 0)
+ ret = -EINVAL;
+ else {
+ if (ARGR(wb).mode == PWC_WB_MANUAL) {
+ ret = pwc_get_red_gain(pdev, &ARGR(wb).manual_red);
+ if (ret < 0)
+ break;
+ ret = pwc_get_blue_gain(pdev, &ARGR(wb).manual_blue);
+ if (ret < 0)
+ break;
+ }
+ if (ARGR(wb).mode == PWC_WB_AUTO) {
+ ret = pwc_read_red_gain(pdev, &ARGR(wb).read_red);
+ if (ret < 0)
+ break;
+ ret = pwc_read_blue_gain(pdev, &ARGR(wb).read_blue);
+ if (ret < 0)
+ break;
+ }
+ }
+ ARG_OUT(wb)
+ break;
+ }
+
+ case VIDIOCPWCSAWBSPEED:
+ {
+ ARG_DEF(struct pwc_wb_speed, wbs)
+
+ if (ARGR(wbs).control_speed > 0) {
+ ret = pwc_set_wb_speed(pdev, ARGR(wbs).control_speed);
+ }
+ if (ARGR(wbs).control_delay > 0) {
+ ret = pwc_set_wb_delay(pdev, ARGR(wbs).control_delay);
+ }
+ break;
+ }
+
+ case VIDIOCPWCGAWBSPEED:
+ {
+ ARG_DEF(struct pwc_wb_speed, wbs)
+
+ ret = pwc_get_wb_speed(pdev, &ARGR(wbs).control_speed);
+ if (ret < 0)
+ break;
+ ret = pwc_get_wb_delay(pdev, &ARGR(wbs).control_delay);
+ if (ret < 0)
+ break;
+ ARG_OUT(wbs)
+ break;
+ }
+
+ case VIDIOCPWCSLED:
+ {
+ ARG_DEF(struct pwc_leds, leds)
+
+ ARG_IN(leds)
+ ret = pwc_set_leds(pdev, ARGR(leds).led_on, ARGR(leds).led_off);
+ break;
+ }
+
+
+ case VIDIOCPWCGLED:
+ {
+ ARG_DEF(struct pwc_leds, leds)
+
+ ret = pwc_get_leds(pdev, &ARGR(leds).led_on, &ARGR(leds).led_off);
+ ARG_OUT(leds)
+ break;
+ }
+
+ case VIDIOCPWCSCONTOUR:
+ {
+ ARG_DEF(int, contour)
+
+ ARG_IN(contour)
+ ret = pwc_set_contour(pdev, ARGR(contour));
+ break;
+ }
+
+ case VIDIOCPWCGCONTOUR:
+ {
+ ARG_DEF(int, contour)
+
+ ret = pwc_get_contour(pdev, ARGA(contour));
+ ARG_OUT(contour)
+ break;
+ }
+
+ case VIDIOCPWCSBACKLIGHT:
+ {
+ ARG_DEF(int, backlight)
+
+ ARG_IN(backlight)
+ ret = pwc_set_backlight(pdev, ARGR(backlight));
+ break;
+ }
+
+ case VIDIOCPWCGBACKLIGHT:
+ {
+ ARG_DEF(int, backlight)
+
+ ret = pwc_get_backlight(pdev, ARGA(backlight));
+ ARG_OUT(backlight)
+ break;
+ }
+
+ case VIDIOCPWCSFLICKER:
+ {
+ ARG_DEF(int, flicker)
+
+ ARG_IN(flicker)
+ ret = pwc_set_flicker(pdev, ARGR(flicker));
+ break;
+ }
+
+ case VIDIOCPWCGFLICKER:
+ {
+ ARG_DEF(int, flicker)
+
+ ret = pwc_get_flicker(pdev, ARGA(flicker));
+ ARG_OUT(flicker)
+ break;
+ }
+
+ case VIDIOCPWCSDYNNOISE:
+ {
+ ARG_DEF(int, dynnoise)
+
+ ARG_IN(dynnoise)
+ ret = pwc_set_dynamic_noise(pdev, ARGR(dynnoise));
+ break;
+ }
+
+ case VIDIOCPWCGDYNNOISE:
+ {
+ ARG_DEF(int, dynnoise)
+
+ ret = pwc_get_dynamic_noise(pdev, ARGA(dynnoise));
+ ARG_OUT(dynnoise)
+ break;
+ }
+
+ case VIDIOCPWCGREALSIZE:
+ {
+ ARG_DEF(struct pwc_imagesize, size)
+
+ ARGR(size).width = pdev->image.x;
+ ARGR(size).height = pdev->image.y;
+ ARG_OUT(size)
+ break;
+ }
+
+ case VIDIOCPWCMPTRESET:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(int, flags)
+
+ ARG_IN(flags)
+ ret = pwc_mpt_reset(pdev, ARGR(flags));
+ if (ret >= 0)
+ {
+ pdev->pan_angle = 0;
+ pdev->tilt_angle = 0;
+ }
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTGRANGE:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_range, range)
+
+ ARGR(range) = pdev->angle_range;
+ ARG_OUT(range)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTSANGLE:
+ {
+ int new_pan, new_tilt;
+
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_angles, angles)
+
+ ARG_IN(angles)
+ /* The camera can only set relative angles, so
+ do some calculations when getting an absolute angle.
+ */
+ if (ARGR(angles).absolute)
+ {
+ new_pan = ARGR(angles).pan;
+ new_tilt = ARGR(angles).tilt;
+ }
+ else
+ {
+ new_pan = pdev->pan_angle + ARGR(angles).pan;
+ new_tilt = pdev->tilt_angle + ARGR(angles).tilt;
+ }
+ /* check absolute ranges */
+ if (new_pan < pdev->angle_range.pan_min ||
+ new_pan > pdev->angle_range.pan_max ||
+ new_tilt < pdev->angle_range.tilt_min ||
+ new_tilt > pdev->angle_range.tilt_max)
+ {
+ ret = -ERANGE;
+ }
+ else
+ {
+ /* go to relative range, check again */
+ new_pan -= pdev->pan_angle;
+ new_tilt -= pdev->tilt_angle;
+ /* angles are specified in degrees * 100, thus the limit = 36000 */
+ if (new_pan < -36000 || new_pan > 36000 || new_tilt < -36000 || new_tilt > 36000)
+ ret = -ERANGE;
+ }
+ if (ret == 0) /* no errors so far */
+ {
+ ret = pwc_mpt_set_angle(pdev, new_pan, new_tilt);
+ if (ret >= 0)
+ {
+ pdev->pan_angle += new_pan;
+ pdev->tilt_angle += new_tilt;
+ }
+ if (ret == -EPIPE) /* stall -> out of range */
+ ret = -ERANGE;
+ }
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTGANGLE:
+ {
+
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_angles, angles)
+
+ ARGR(angles).absolute = 1;
+ ARGR(angles).pan = pdev->pan_angle;
+ ARGR(angles).tilt = pdev->tilt_angle;
+ ARG_OUT(angles)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTSTATUS:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_status, status)
+
+ ret = pwc_mpt_get_status(pdev, ARGA(status));
+ ARG_OUT(status)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCGVIDCMD:
+ {
+ ARG_DEF(struct pwc_video_command, cmd)
+
+ ARGR(cmd).type = pdev->type;
+ ARGR(cmd).release = pdev->release;
+ ARGR(cmd).command_len = pdev->cmd_len;
+ memcpy(&ARGR(cmd).command_buf, pdev->cmd_buf, pdev->cmd_len);
+ ARGR(cmd).bandlength = pdev->vbandlength;
+ ARGR(cmd).frame_size = pdev->frame_size;
+ ARG_OUT(cmd)
+ break;
+ }
+
+ default:
+ ret = -ENOIOCTLCMD;
+ break;
+ }
+
+ if (ret > 0)
+ return 0;
+ return ret;
+}
+
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ USB and Video4Linux interface part.
+ (C) 1999-2004 Nemosoft Unv.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+*/
+
+/*
+ This code forms the interface between the USB layers and the Philips
+ specific stuff. Some advanced stuff of the driver falls under an
+ NDA, signed between me and Philips B.V., Eindhoven, the Netherlands, and
+ is thus not distributed in source form. The binary pwcx.o module
+ contains the code that falls under the NDA.
+
+ In case you're wondering: 'pwc' stands for "Philips WebCam", but
+ I really didn't want to type 'philips_web_cam' every time (I'm lazy as
+ any Linux kernel hacker, but I don't like incomprehensible abbreviations
+ without explanation).
+
+ Oh yes, convention: to distinguish between all the various pointers to
+ device-structures, I use these names for the pointer variables:
+ udev: struct usb_device *
+ vdev: struct video_device *
+ pdev: struct pwc_device *
+*/
+
+/* Contributors:
+ - Alvarado: adding whitebalance code
+ - Alistar Moire: QuickCam 3000 Pro device/product ID
+ - Tony Hoyle: Creative Labs Webcam 5 device/product ID
+ - Mark Burazin: solving hang in VIDIOCSYNC when camera gets unplugged
+ - Jk Fang: Sotec Afina Eye ID
+ - Xavier Roche: QuickCam Pro 4000 ID
+ - Jens Knudsen: QuickCam Zoom ID
+ - J. Debert: QuickCam for Notebooks ID
+*/
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <asm/io.h>
+
+#include "pwc.h"
+#include "pwc-ioctl.h"
+#include "pwc-uncompress.h"
+
+/* Function prototypes and driver templates */
+
+/* hotplug device table support */
+static struct usb_device_id pwc_device_table [] = {
+ { USB_DEVICE(0x0471, 0x0302) }, /* Philips models */
+ { USB_DEVICE(0x0471, 0x0303) },
+ { USB_DEVICE(0x0471, 0x0304) },
+ { USB_DEVICE(0x0471, 0x0307) },
+ { USB_DEVICE(0x0471, 0x0308) },
+ { USB_DEVICE(0x0471, 0x030C) },
+ { USB_DEVICE(0x0471, 0x0310) },
+ { USB_DEVICE(0x0471, 0x0311) },
+ { USB_DEVICE(0x0471, 0x0312) },
+ { USB_DEVICE(0x0471, 0x0313) }, /* the 'new' 720K */
+ { USB_DEVICE(0x069A, 0x0001) }, /* Askey */
+ { USB_DEVICE(0x046D, 0x08B0) }, /* Logitech QuickCam Pro 3000 */
+ { USB_DEVICE(0x046D, 0x08B1) }, /* Logitech QuickCam Notebook Pro */
+ { USB_DEVICE(0x046D, 0x08B2) }, /* Logitech QuickCam Pro 4000 */
+ { USB_DEVICE(0x046D, 0x08B3) }, /* Logitech QuickCam Zoom (old model) */
+ { USB_DEVICE(0x046D, 0x08B4) }, /* Logitech QuickCam Zoom (new model) */
+ { USB_DEVICE(0x046D, 0x08B5) }, /* Logitech QuickCam Orbit/Sphere */
+ { USB_DEVICE(0x046D, 0x08B6) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x046D, 0x08B7) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x046D, 0x08B8) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x055D, 0x9000) }, /* Samsung */
+ { USB_DEVICE(0x055D, 0x9001) },
+ { USB_DEVICE(0x041E, 0x400C) }, /* Creative Webcam 5 */
+ { USB_DEVICE(0x041E, 0x4011) }, /* Creative Webcam Pro Ex */
+ { USB_DEVICE(0x04CC, 0x8116) }, /* Afina Eye */
+ { USB_DEVICE(0x06BE, 0x8116) }, /* new Afina Eye */
+ { USB_DEVICE(0x0d81, 0x1910) }, /* Visionite */
+ { USB_DEVICE(0x0d81, 0x1900) },
+ { }
+};
+MODULE_DEVICE_TABLE(usb, pwc_device_table);
+
+static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id *id);
+static void usb_pwc_disconnect(struct usb_interface *intf);
+
+static struct usb_driver pwc_driver = {
+ .owner = THIS_MODULE,
+ .name = "Philips webcam", /* name */
+ .id_table = pwc_device_table,
+ .probe = usb_pwc_probe, /* probe() */
+ .disconnect = usb_pwc_disconnect, /* disconnect() */
+};
+
+#define MAX_DEV_HINTS 20
+#define MAX_ISOC_ERRORS 20
+
+static int default_size = PSZ_QCIF;
+static int default_fps = 10;
+static int default_fbufs = 3; /* Default number of frame buffers */
+static int default_mbufs = 2; /* Default number of mmap() buffers */
+ int pwc_trace = TRACE_MODULE | TRACE_FLOW | TRACE_PWCX;
+static int power_save = 0;
+static int led_on = 100, led_off = 0; /* defaults to LED that is on while in use */
+ int pwc_preferred_compression = 2; /* 0..3 = uncompressed..high */
+static struct {
+ int type;
+ char serial_number[30];
+ int device_node;
+ struct pwc_device *pdev;
+} device_hint[MAX_DEV_HINTS];
+
+/***/
+
+static int pwc_video_open(struct inode *inode, struct file *file);
+static int pwc_video_close(struct inode *inode, struct file *file);
+static ssize_t pwc_video_read(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos);
+static unsigned int pwc_video_poll(struct file *file, poll_table *wait);
+static int pwc_video_ioctl(struct inode *inode, struct file *file,
+ unsigned int ioctlnr, unsigned long arg);
+static int pwc_video_mmap(struct file *file, struct vm_area_struct *vma);
+
+static struct file_operations pwc_fops = {
+ .owner = THIS_MODULE,
+ .open = pwc_video_open,
+ .release = pwc_video_close,
+ .read = pwc_video_read,
+ .poll = pwc_video_poll,
+ .mmap = pwc_video_mmap,
+ .ioctl = pwc_video_ioctl,
+ .llseek = no_llseek,
+};
+static struct video_device pwc_template = {
+ .owner = THIS_MODULE,
+ .name = "Philips Webcam", /* Filled in later */
+ .type = VID_TYPE_CAPTURE,
+ .hardware = VID_HARDWARE_PWC,
+ .release = video_device_release,
+ .fops = &pwc_fops,
+ .minor = -1,
+};
+
+/***************************************************************************/
+
+/* Okay, this is some magic that I worked out and the reasoning behind it...
+
+ The biggest problem with any USB device is of course: "what to do
+ when the user unplugs the device while it is in use by an application?"
+ We have several options:
+ 1) Curse them with the 7 plagues when they do (requires divine intervention)
+ 2) Tell them not to (won't work: they'll do it anyway)
+ 3) Oops the kernel (this will have a negative effect on a user's uptime)
+ 4) Do something sensible.
+
+ Of course, we go for option 4.
+
+ It happens that this device will be linked to two times, once from
+ usb_device and once from the video_device in their respective 'private'
+ pointers. This is done when the device is probed() and all initialization
+ succeeded. The pwc_device struct links back to both structures.
+
+ When a device is unplugged while in use it will be removed from the
+ list of known USB devices; I also de-register it as a V4L device, but
+ unfortunately I can't free the memory since the struct is still in use
+ by the file descriptor. This freeing is then deferred until the first
+ opportunity. Crude, but it works.
+
+ A small 'advantage' is that if a user unplugs the cam and plugs it back
+ in, it should get assigned the same video device minor, but unfortunately
+ it's non-trivial to re-link the cam back to the video device... (that
+ would surely be magic! :))
+*/
+
+/***************************************************************************/
+/* Private functions */
+
+/* Here we want the physical address of the memory.
+ * This is used when initializing the contents of the area.
+ */
+static inline unsigned long kvirt_to_pa(unsigned long adr)
+{
+ unsigned long kva, ret;
+
+ kva = (unsigned long) page_address(vmalloc_to_page((void *)adr));
+ kva |= adr & (PAGE_SIZE-1); /* restore the offset */
+ ret = __pa(kva);
+ return ret;
+}
+
+static void * rvmalloc(unsigned long size)
+{
+ void * mem;
+ unsigned long adr;
+
+ size=PAGE_ALIGN(size);
+ mem=vmalloc_32(size);
+ if (mem)
+ {
+ memset(mem, 0, size); /* Clear the ram out, no junk to the user */
+ adr=(unsigned long) mem;
+ while (size > 0)
+ {
+ SetPageReserved(vmalloc_to_page((void *)adr));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ }
+ return mem;
+}
+
+static void rvfree(void * mem, unsigned long size)
+{
+ unsigned long adr;
+
+ if (mem)
+ {
+ adr=(unsigned long) mem;
+ while ((long) size > 0)
+ {
+ ClearPageReserved(vmalloc_to_page((void *)adr));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ vfree(mem);
+ }
+}
+
+
+
+
+static int pwc_allocate_buffers(struct pwc_device *pdev)
+{
+ int i;
+ void *kbuf;
+
+ Trace(TRACE_MEMORY, ">> pwc_allocate_buffers(pdev = 0x%p)\n", pdev);
+
+ if (pdev == NULL)
+ return -ENXIO;
+
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("allocate_buffers(): magic failed.\n");
+ return -ENXIO;
+ }
+#endif
+ /* Allocate Isochronous pipe buffers */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ if (pdev->sbuf[i].data == NULL) {
+ kbuf = kmalloc(ISO_BUFFER_SIZE, GFP_KERNEL);
+ if (kbuf == NULL) {
+ Err("Failed to allocate iso buffer %d.\n", i);
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated iso buffer at %p.\n", kbuf);
+ pdev->sbuf[i].data = kbuf;
+ memset(kbuf, 0, ISO_BUFFER_SIZE);
+ }
+ }
+
+ /* Allocate frame buffer structure */
+ if (pdev->fbuf == NULL) {
+ kbuf = kmalloc(default_fbufs * sizeof(struct pwc_frame_buf), GFP_KERNEL);
+ if (kbuf == NULL) {
+ Err("Failed to allocate frame buffer structure.\n");
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated frame buffer structure at %p.\n", kbuf);
+ pdev->fbuf = kbuf;
+ memset(kbuf, 0, default_fbufs * sizeof(struct pwc_frame_buf));
+ }
+ /* create frame buffers, and make circular ring */
+ for (i = 0; i < default_fbufs; i++) {
+ if (pdev->fbuf[i].data == NULL) {
+ kbuf = vmalloc(PWC_FRAME_SIZE); /* need vmalloc since frame buffer > 128K */
+ if (kbuf == NULL) {
+ Err("Failed to allocate frame buffer %d.\n", i);
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated frame buffer %d at %p.\n", i, kbuf);
+ pdev->fbuf[i].data = kbuf;
+ memset(kbuf, 128, PWC_FRAME_SIZE);
+ }
+ }
+
+ /* Allocate decompressor table space */
+ kbuf = NULL;
+ if (pdev->decompressor != NULL) {
+ kbuf = kmalloc(pdev->decompressor->table_size, GFP_KERNEL);
+ if (kbuf == NULL) {
+ Err("Failed to allocate decompress table.\n");
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated decompress table %p.\n", kbuf);
+ }
+ pdev->decompress_data = kbuf;
+
+ /* Allocate image buffer; double buffer for mmap() */
+ kbuf = rvmalloc(default_mbufs * pdev->len_per_image);
+ if (kbuf == NULL) {
+ Err("Failed to allocate image buffer(s).\n");
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated image buffer at %p.\n", kbuf);
+ pdev->image_data = kbuf;
+ for (i = 0; i < default_mbufs; i++)
+ pdev->image_ptr[i] = kbuf + i * pdev->len_per_image;
+ for (; i < MAX_IMAGES; i++)
+ pdev->image_ptr[i] = NULL;
+
+ kbuf = NULL;
+
+ Trace(TRACE_MEMORY, "<< pwc_allocate_buffers()\n");
+ return 0;
+}
+
+static void pwc_free_buffers(struct pwc_device *pdev)
+{
+ int i;
+
+ Trace(TRACE_MEMORY, "Entering free_buffers(%p).\n", pdev);
+
+ if (pdev == NULL)
+ return;
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("free_buffers(): magic failed.\n");
+ return;
+ }
+#endif
+
+ /* Release Iso-pipe buffers */
+ for (i = 0; i < MAX_ISO_BUFS; i++)
+ if (pdev->sbuf[i].data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing ISO buffer at %p.\n", pdev->sbuf[i].data);
+ kfree(pdev->sbuf[i].data);
+ pdev->sbuf[i].data = NULL;
+ }
+
+ /* The same for frame buffers */
+ if (pdev->fbuf != NULL) {
+ for (i = 0; i < default_fbufs; i++) {
+ if (pdev->fbuf[i].data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing frame buffer %d at %p.\n", i, pdev->fbuf[i].data);
+ vfree(pdev->fbuf[i].data);
+ pdev->fbuf[i].data = NULL;
+ }
+ }
+ kfree(pdev->fbuf);
+ pdev->fbuf = NULL;
+ }
+
+ /* Intermediate decompression buffer & tables */
+ if (pdev->decompress_data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing decompression buffer at %p.\n", pdev->decompress_data);
+ kfree(pdev->decompress_data);
+ pdev->decompress_data = NULL;
+ }
+ pdev->decompressor = NULL;
+
+ /* Release image buffers */
+ if (pdev->image_data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing image buffer at %p.\n", pdev->image_data);
+ rvfree(pdev->image_data, default_mbufs * pdev->len_per_image);
+ }
+ pdev->image_data = NULL;
+
+ Trace(TRACE_MEMORY, "Leaving free_buffers().\n");
+}
+
+/* The frame & image buffer mess.
+
+ Yes, this is a mess. Well, it used to be simple, but alas... In this
+ module, 3 buffer schemes are used to get the data from the USB bus to
+ the user program. The first scheme involves the ISO buffers (called thus
+ since they transport ISO data from the USB controller), and is not really
+ interesting. Suffice it to say the data from this buffer is quickly
+ gathered in an interrupt handler (pwc_isoc_handler) and placed into the
+ frame buffer.
+
+ The frame buffer is the second scheme, and is the central element here.
+ It collects the data from a single frame from the camera (hence, the
+ name). Frames are delimited by the USB camera with a short USB packet,
+ so that's easy to detect. The frame buffers form a list that is filled
+ by the camera+USB controller and drained by the user process through
+ either read() or mmap().
+
+ The image buffer is the third scheme, in which frames are decompressed
+ and converted into planar format. For mmap() there is more than
+ one image buffer available.
+
+ The frame buffers provide the image buffering. In case the user process
+ is a bit slow, this introduces lag and some undesired side-effects.
+ The problem arises when the frame buffer is full. I used to drop the last
+ frame, which makes the data in the queue stale very quickly. But dropping
+ the frame at the head of the queue proved to be a little bit more difficult.
+ I tried a circular linked scheme, but this introduced more problems than
+ it solved.
+
+ Because filling and draining are completely asynchronous processes, this
+ requires some fiddling with pointers and mutexes.
+
+ Eventually, I came up with a system with 2 lists: an 'empty' frame list
+ and a 'full' frame list:
+ * Initially, all frame buffers but one are on the 'empty' list; the one
+ remaining buffer is our initial fill frame.
+ * If a frame is needed for filling, we try to take it from the 'empty'
+ list, unless that list is empty, in which case we take the buffer at
+ the head of the 'full' list.
+ * When our fill buffer has been filled, it is appended to the 'full'
+ list.
+ * If a frame is needed by read() or mmap(), it is taken from the head of
+ the 'full' list, handled, and then appended to the 'empty' list. If no
+ buffer is present on the 'full' list, we wait.
+ The advantage is that the buffer that is currently being decompressed/
+ converted, is on neither list, and thus not in our way (any other scheme
+ I tried had the problem of old data lingering in the queue).
+
+ Whatever strategy you choose, it always remains a tradeoff: with more
+ frame buffers the chances of a missed frame are reduced. On the other
+ hand, on slower machines it introduces lag because the queue will
+ always be full.
+ */
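+/* The empty/full rotation described above can be modelled with a small
+   userspace sketch. The list handling mirrors pwc_next_fill_frame, but
+   the field and function names here are illustrative, not the driver's,
+   and locking is omitted:
+
+   ```c
+   #include <assert.h>
+   #include <stddef.h>
+
+   struct frame { struct frame *next; };
+
+   static struct frame *empty_head, *full_head, *full_tail, *fill;
+
+   /* Take the next buffer to fill: from 'empty' if possible, else steal
+      the oldest 'full' frame (an overrun: one frame is dropped). */
+   static int next_fill(void)
+   {
+       int dropped = 0;
+       if (fill) {                 /* append current fill frame to 'full' */
+           fill->next = NULL;
+           if (full_tail)
+               full_tail->next = fill;
+           else
+               full_head = fill;
+           full_tail = fill;
+       }
+       if (empty_head) {
+           fill = empty_head;
+           empty_head = empty_head->next;
+       } else {                    /* empty list drained: reuse oldest full */
+           fill = full_head;
+           full_head = full_head->next;
+           if (!full_head)
+               full_tail = NULL;
+           dropped = 1;
+       }
+       fill->next = NULL;
+       return dropped;
+   }
+
+   int main(void)
+   {
+       struct frame f[3];
+       /* two empty frames plus one initial fill frame, as in the driver */
+       f[0].next = &f[1]; f[1].next = NULL;
+       empty_head = &f[0]; fill = &f[2];
+       assert(next_fill() == 0);   /* empties still available */
+       assert(next_fill() == 0);
+       assert(next_fill() == 1);   /* empty list drained: frame dropped */
+       return 0;
+   }
+   ```
+ */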
+
+/**
+ \brief Find next frame buffer to fill. Take from empty or full list, whichever comes first.
+ */
+static inline int pwc_next_fill_frame(struct pwc_device *pdev)
+{
+ int ret;
+ unsigned long flags;
+
+ ret = 0;
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ if (pdev->fill_frame != NULL) {
+ /* append to 'full' list */
+ if (pdev->full_frames == NULL) {
+ pdev->full_frames = pdev->fill_frame;
+ pdev->full_frames_tail = pdev->full_frames;
+ }
+ else {
+ pdev->full_frames_tail->next = pdev->fill_frame;
+ pdev->full_frames_tail = pdev->fill_frame;
+ }
+ }
+ if (pdev->empty_frames != NULL) {
+ /* We have empty frames available. That's easy */
+ pdev->fill_frame = pdev->empty_frames;
+ pdev->empty_frames = pdev->empty_frames->next;
+ }
+ else {
+ /* Hmm. Take it from the full list */
+#if PWC_DEBUG
+ /* sanity check */
+ if (pdev->full_frames == NULL) {
+ Err("Neither empty nor full frames available!\n");
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return -EINVAL;
+ }
+#endif
+ pdev->fill_frame = pdev->full_frames;
+ pdev->full_frames = pdev->full_frames->next;
+ ret = 1;
+ }
+ pdev->fill_frame->next = NULL;
+#if PWC_DEBUG
+ Trace(TRACE_SEQUENCE, "Assigning sequence number %d.\n", pdev->sequence);
+ pdev->fill_frame->sequence = pdev->sequence++;
+#endif
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return ret;
+}
+
+
+/**
+ \brief Reset all buffers, pointers and lists, except for the image_used[] buffer.
+
+ If the image_used[] buffer is cleared too, mmap()/VIDIOCSYNC will run into trouble.
+ */
+static void pwc_reset_buffers(struct pwc_device *pdev)
+{
+ int i;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ pdev->full_frames = NULL;
+ pdev->full_frames_tail = NULL;
+ for (i = 0; i < default_fbufs; i++) {
+ pdev->fbuf[i].filled = 0;
+ if (i > 0)
+ pdev->fbuf[i].next = &pdev->fbuf[i - 1];
+ else
+ pdev->fbuf->next = NULL;
+ }
+ pdev->empty_frames = &pdev->fbuf[default_fbufs - 1];
+ pdev->empty_frames_tail = pdev->fbuf;
+ pdev->read_frame = NULL;
+ pdev->fill_frame = pdev->empty_frames;
+ pdev->empty_frames = pdev->empty_frames->next;
+
+ pdev->image_read_pos = 0;
+ pdev->fill_image = 0;
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+}
+
+
+/**
+ \brief Do all the handling for getting one frame: get pointer, decompress, advance pointers.
+ */
+static int pwc_handle_frame(struct pwc_device *pdev)
+{
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ /* First grab our read_frame; this is removed from all lists, so
+ we can release the lock after this without problems */
+ if (pdev->read_frame != NULL) {
+ /* This can't theoretically happen */
+ Err("Huh? Read frame still in use?\n");
+ }
+ else {
+ if (pdev->full_frames == NULL) {
+ Err("Whoops. No frames ready.\n");
+ }
+ else {
+ pdev->read_frame = pdev->full_frames;
+ pdev->full_frames = pdev->full_frames->next;
+ pdev->read_frame->next = NULL;
+ }
+
+ if (pdev->read_frame != NULL) {
+#if PWC_DEBUG
+ Trace(TRACE_SEQUENCE, "Decompressing frame %d\n", pdev->read_frame->sequence);
+#endif
+ /* Decompression is a lengthy process, so it's outside of the lock.
+ This gives the isoc_handler the opportunity to fill more frames
+ in the mean time.
+ */
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ ret = pwc_decompress(pdev);
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+
+ /* We're done with read_buffer, tack it to the end of the empty buffer list */
+ if (pdev->empty_frames == NULL) {
+ pdev->empty_frames = pdev->read_frame;
+ pdev->empty_frames_tail = pdev->empty_frames;
+ }
+ else {
+ pdev->empty_frames_tail->next = pdev->read_frame;
+ pdev->empty_frames_tail = pdev->read_frame;
+ }
+ pdev->read_frame = NULL;
+ }
+ }
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return ret;
+}
+
+/**
+ \brief Advance pointers of image buffer (after each user request)
+*/
+static inline void pwc_next_image(struct pwc_device *pdev)
+{
+ pdev->image_used[pdev->fill_image] = 0;
+ pdev->fill_image = (pdev->fill_image + 1) % default_mbufs;
+}
+
+
+/* This gets called for the Isochronous pipe (video). This is done in
+ * interrupt time, so it has to be fast, not crash, and not stall. Neat.
+ */
+static void pwc_isoc_handler(struct urb *urb, struct pt_regs *regs)
+{
+ struct pwc_device *pdev;
+ int i, fst, flen;
+ int awake;
+ struct pwc_frame_buf *fbuf;
+ unsigned char *fillptr = NULL;
+ unsigned char *iso_buf = NULL;
+
+ awake = 0;
+ pdev = (struct pwc_device *)urb->context;
+ if (pdev == NULL) {
+ Err("isoc_handler() called with NULL device?!\n");
+ return;
+ }
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("isoc_handler() called with bad magic!\n");
+ return;
+ }
+#endif
+ if (urb->status == -ENOENT || urb->status == -ECONNRESET) {
+ Trace(TRACE_OPEN, "pwc_isoc_handler(): URB (%p) unlinked %ssynchronously.\n", urb, urb->status == -ENOENT ? "" : "a");
+ return;
+ }
+ if (urb->status != -EINPROGRESS && urb->status != 0) {
+ const char *errmsg;
+
+ errmsg = "Unknown";
+ switch(urb->status) {
+ case -ENOSR: errmsg = "Buffer error (overrun)"; break;
+ case -EPIPE: errmsg = "Stalled (device not responding)"; break;
+ case -EOVERFLOW: errmsg = "Babble (bad cable?)"; break;
+ case -EPROTO: errmsg = "Bit-stuff error (bad cable?)"; break;
+ case -EILSEQ: errmsg = "CRC/Timeout (could be anything)"; break;
+ case -ETIMEDOUT: errmsg = "NAK (device does not respond)"; break;
+ }
+ Trace(TRACE_FLOW, "pwc_isoc_handler() called with status %d [%s].\n", urb->status, errmsg);
+ /* Give up after a number of contiguous errors on the USB bus.
+ Apparently something is wrong, so we simulate an unplug event.
+ */
+ if (++pdev->visoc_errors > MAX_ISOC_ERRORS)
+ {
+ Info("Too many ISOC errors, bailing out.\n");
+ pdev->error_status = EIO;
+ awake = 1;
+ wake_up_interruptible(&pdev->frameq);
+ }
+ goto handler_end; /* ugly, but practical */
+ }
+
+ fbuf = pdev->fill_frame;
+ if (fbuf == NULL) {
+ Err("pwc_isoc_handler without valid fill frame.\n");
+ awake = 1;
+ goto handler_end;
+ }
+ else {
+ fillptr = fbuf->data + fbuf->filled;
+ }
+
+ /* Reset ISOC error counter. We did get here, after all. */
+ pdev->visoc_errors = 0;
+
+ /* vsync: 0 = don't copy data
+ 1 = sync-hunt
+ 2 = synched
+ */
+ /* Compact data */
+ for (i = 0; i < urb->number_of_packets; i++) {
+ fst = urb->iso_frame_desc[i].status;
+ flen = urb->iso_frame_desc[i].actual_length;
+ iso_buf = urb->transfer_buffer + urb->iso_frame_desc[i].offset;
+ if (fst == 0) {
+ if (flen > 0) { /* if valid data... */
+ if (pdev->vsync > 0) { /* ...and we are not sync-hunting... */
+ pdev->vsync = 2;
+
+ /* ...copy data to frame buffer, if possible */
+ if (flen + fbuf->filled > pdev->frame_total_size) {
+ Trace(TRACE_FLOW, "Frame buffer overflow (flen = %d, frame_total_size = %d).\n", flen, pdev->frame_total_size);
+ pdev->vsync = 0; /* Hmm, let's wait for an EOF (end-of-frame) */
+ pdev->vframes_error++;
+ }
+ else {
+ memmove(fillptr, iso_buf, flen);
+ fillptr += flen;
+ }
+ }
+ fbuf->filled += flen;
+ } /* ..flen > 0 */
+
+ if (flen < pdev->vlast_packet_size) {
+ /* Shorter packet... We probably have the end of an image-frame;
+ wake up read() process and let select()/poll() do something.
+ Decompression is done in user time over there.
+ */
+ if (pdev->vsync == 2) {
+ /* The ToUCam Fun CMOS sensor causes the firmware to send 2 or 3 bogus
+ frames on the USB wire after an exposure change. This condition is,
+ however, detected in the cam and a bit is set in the header.
+ */
+ if (pdev->type == 730) {
+ unsigned char *ptr = (unsigned char *)fbuf->data;
+
+ if (ptr[1] == 1 && ptr[0] & 0x10) {
+#if PWC_DEBUG
+ Debug("Hyundai CMOS sensor bug. Dropping frame %d.\n", fbuf->sequence);
+#endif
+ pdev->drop_frames += 2;
+ pdev->vframes_error++;
+ }
+ if ((ptr[0] ^ pdev->vmirror) & 0x01) {
+ if (ptr[0] & 0x01)
+ Info("Snapshot button pressed.\n");
+ else
+ Info("Snapshot button released.\n");
+ }
+ if ((ptr[0] ^ pdev->vmirror) & 0x02) {
+ if (ptr[0] & 0x02)
+ Info("Image is mirrored.\n");
+ else
+ Info("Image is normal.\n");
+ }
+ pdev->vmirror = ptr[0] & 0x03;
+ /* Sometimes the trailer of the 730 is still sent as a 4 byte packet
+ after a short frame; this condition is filtered out specifically. A 4 byte
+ frame doesn't make sense anyway.
+ So we get either this sequence:
+ drop_bit set -> 4 byte frame -> short frame -> good frame
+ Or this one:
+ drop_bit set -> short frame -> good frame
+ So we drop either 3 or 2 frames in all!
+ */
+ if (fbuf->filled == 4)
+ pdev->drop_frames++;
+ }
+
+ /* In case we were instructed to drop the frame, do so silently.
+ The buffer pointers are not updated either (but the counters are reset below).
+ */
+ if (pdev->drop_frames > 0)
+ pdev->drop_frames--;
+ else {
+ /* Check for underflow first */
+ if (fbuf->filled < pdev->frame_total_size) {
+ Trace(TRACE_FLOW, "Frame buffer underflow (%d bytes); discarded.\n", fbuf->filled);
+ pdev->vframes_error++;
+ }
+ else {
+ /* Send only once per EOF */
+ awake = 1; /* delay wake_ups */
+
+ /* Find our next frame to fill. This will always succeed, since we
+ * nick a frame from either empty or full list, but if we had to
+ * take it from the full list, it means a frame got dropped.
+ */
+ if (pwc_next_fill_frame(pdev)) {
+ pdev->vframes_dumped++;
+ if ((pdev->vframe_count > FRAME_LOWMARK) && (pwc_trace & TRACE_FLOW)) {
+ if (pdev->vframes_dumped < 20)
+ Trace(TRACE_FLOW, "Dumping frame %d.\n", pdev->vframe_count);
+ if (pdev->vframes_dumped == 20)
+ Trace(TRACE_FLOW, "Dumping frame %d (last message).\n", pdev->vframe_count);
+ }
+ }
+ fbuf = pdev->fill_frame;
+ }
+ } /* !drop_frames */
+ pdev->vframe_count++;
+ }
+ fbuf->filled = 0;
+ fillptr = fbuf->data;
+ pdev->vsync = 1;
+ } /* .. flen < last_packet_size */
+ pdev->vlast_packet_size = flen;
+ } /* ..status == 0 */
+#if PWC_DEBUG
+ /* This is normally not interesting to the user, unless you are really debugging something */
+ else {
+ static int iso_error = 0;
+ iso_error++;
+ if (iso_error < 20)
+ Trace(TRACE_FLOW, "Iso frame %d of USB has error %d\n", i, fst);
+ }
+#endif
+ }
+
+handler_end:
+ if (awake)
+ wake_up_interruptible(&pdev->frameq);
+
+ urb->dev = pdev->udev;
+ i = usb_submit_urb(urb, GFP_ATOMIC);
+ if (i != 0)
+ Err("Error (%d) re-submitting urb in pwc_isoc_handler.\n", i);
+}
+
+
+static int pwc_isoc_init(struct pwc_device *pdev)
+{
+ struct usb_device *udev;
+ struct urb *urb;
+ int i, j, ret;
+
+ struct usb_interface *intf;
+ struct usb_host_interface *idesc = NULL;
+
+ if (pdev == NULL)
+ return -EFAULT;
+ if (pdev->iso_init)
+ return 0;
+ pdev->vsync = 0;
+ udev = pdev->udev;
+
+ /* Get the current alternate interface, adjust packet size */
+ if (!udev->actconfig)
+ return -EFAULT;
+ intf = usb_ifnum_to_if(udev, 0);
+ if (intf)
+ idesc = usb_altnum_to_altsetting(intf, pdev->valternate);
+ if (!idesc)
+ return -EFAULT;
+
+ /* Search video endpoint */
+ pdev->vmax_packet_size = -1;
+ for (i = 0; i < idesc->desc.bNumEndpoints; i++)
+ if ((idesc->endpoint[i].desc.bEndpointAddress & 0xF) == pdev->vendpoint) {
+ pdev->vmax_packet_size = idesc->endpoint[i].desc.wMaxPacketSize;
+ break;
+ }
+
+ if (pdev->vmax_packet_size < 0 || pdev->vmax_packet_size > ISO_MAX_FRAME_SIZE) {
+ Err("Failed to find packet size for video endpoint in current alternate setting.\n");
+ return -ENFILE; /* Odd error, that should be noticeable */
+ }
+
+ /* Set alternate interface */
+ ret = 0;
+ Trace(TRACE_OPEN, "Setting alternate interface %d\n", pdev->valternate);
+ ret = usb_set_interface(pdev->udev, 0, pdev->valternate);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ urb = usb_alloc_urb(ISO_FRAMES_PER_DESC, GFP_KERNEL);
+ if (urb == NULL) {
+ Err("Failed to allocate urb %d\n", i);
+ ret = -ENOMEM;
+ break;
+ }
+ pdev->sbuf[i].urb = urb;
+ Trace(TRACE_MEMORY, "Allocated URB at 0x%p\n", urb);
+ }
+ if (ret) {
+ /* De-allocate in reverse order */
+ while (i >= 0) {
+ if (pdev->sbuf[i].urb != NULL)
+ usb_free_urb(pdev->sbuf[i].urb);
+ pdev->sbuf[i].urb = NULL;
+ i--;
+ }
+ return ret;
+ }
+
+ /* init URB structure */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ urb = pdev->sbuf[i].urb;
+
+ urb->interval = 1; /* devik */
+ urb->dev = udev;
+ urb->pipe = usb_rcvisocpipe(udev, pdev->vendpoint);
+ urb->transfer_flags = URB_ISO_ASAP;
+ urb->transfer_buffer = pdev->sbuf[i].data;
+ urb->transfer_buffer_length = ISO_BUFFER_SIZE;
+ urb->complete = pwc_isoc_handler;
+ urb->context = pdev;
+ urb->start_frame = 0;
+ urb->number_of_packets = ISO_FRAMES_PER_DESC;
+ for (j = 0; j < ISO_FRAMES_PER_DESC; j++) {
+ urb->iso_frame_desc[j].offset = j * ISO_MAX_FRAME_SIZE;
+ urb->iso_frame_desc[j].length = pdev->vmax_packet_size;
+ }
+ }
+
+ /* link */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ ret = usb_submit_urb(pdev->sbuf[i].urb, GFP_KERNEL);
+ if (ret)
+ Err("isoc_init() submit_urb %d failed with error %d\n", i, ret);
+ else
+ Trace(TRACE_MEMORY, "URB 0x%p submitted.\n", pdev->sbuf[i].urb);
+ }
+
+ /* All is done... */
+ pdev->iso_init = 1;
+ Trace(TRACE_OPEN, "<< pwc_isoc_init()\n");
+ return 0;
+}
+
+static void pwc_isoc_cleanup(struct pwc_device *pdev)
+{
+ int i;
+
+ Trace(TRACE_OPEN, ">> pwc_isoc_cleanup()\n");
+ if (pdev == NULL)
+ return;
+
+ /* Unlinking ISOC buffers one by one */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ struct urb *urb;
+
+ urb = pdev->sbuf[i].urb;
+ if (urb != NULL) {
+ if (pdev->iso_init) {
+ Trace(TRACE_MEMORY, "Unlinking URB %p\n", urb);
+ usb_unlink_urb(urb);
+ }
+ Trace(TRACE_MEMORY, "Freeing URB\n");
+ usb_free_urb(urb);
+ pdev->sbuf[i].urb = NULL;
+ }
+ }
+
+ /* Stop camera, but only if we are sure the camera is still there (unplug
+ is signalled by EPIPE)
+ */
+ if (pdev->error_status && pdev->error_status != EPIPE) {
+ Trace(TRACE_OPEN, "Setting alternate interface 0.\n");
+ usb_set_interface(pdev->udev, 0, 0);
+ }
+
+ pdev->iso_init = 0;
+ Trace(TRACE_OPEN, "<< pwc_isoc_cleanup()\n");
+}
+
+int pwc_try_video_mode(struct pwc_device *pdev, int width, int height, int new_fps, int new_compression, int new_snapshot)
+{
+ int ret, start;
+
+ /* Stop isoc stuff */
+ pwc_isoc_cleanup(pdev);
+ /* Reset parameters */
+ pwc_reset_buffers(pdev);
+ /* Try to set video mode... */
+ start = ret = pwc_set_video_mode(pdev, width, height, new_fps, new_compression, new_snapshot);
+ if (ret) {
+ Trace(TRACE_FLOW, "pwc_set_video_mode attempt 1 failed.\n");
+ /* That failed... restore old mode (we know that worked) */
+ start = pwc_set_video_mode(pdev, pdev->view.x, pdev->view.y, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ if (start) {
+ Trace(TRACE_FLOW, "pwc_set_video_mode attempt 2 failed.\n");
+ }
+ }
+ if (start == 0)
+ {
+ if (pwc_isoc_init(pdev) < 0)
+ {
+ Info("Failed to restart ISOC transfers in pwc_try_video_mode.\n");
+ ret = -EAGAIN; /* let's try again, who knows if it works a second time */
+ }
+ }
+ pdev->drop_frames++; /* try to avoid garbage during switch */
+ return ret; /* Return original error code */
+}
+
+
+/***************************************************************************/
+/* Video4Linux functions */
+
+static int pwc_video_open(struct inode *inode, struct file *file)
+{
+ int i;
+ struct video_device *vdev = video_devdata(file);
+ struct pwc_device *pdev;
+
+ Trace(TRACE_OPEN, ">> video_open called(vdev = 0x%p).\n", vdev);
+
+ pdev = (struct pwc_device *)vdev->priv;
+ if (pdev == NULL)
+ BUG();
+ if (pdev->vopen)
+ return -EBUSY;
+
+ down(&pdev->modlock);
+ if (!pdev->usb_init) {
+ Trace(TRACE_OPEN, "Doing first time initialization.\n");
+ pdev->usb_init = 1;
+
+ if (pwc_trace & TRACE_OPEN)
+ {
+ /* Query sensor type */
+ const char *sensor_type = NULL;
+ int ret;
+
+ ret = pwc_get_cmos_sensor(pdev, &i);
+ if (ret >= 0)
+ {
+ switch(i) {
+ case 0x00: sensor_type = "Hyundai CMOS sensor"; break;
+ case 0x20: sensor_type = "Sony CCD sensor + TDA8787"; break;
+ case 0x2E: sensor_type = "Sony CCD sensor + Exas 98L59"; break;
+ case 0x2F: sensor_type = "Sony CCD sensor + ADI 9804"; break;
+ case 0x30: sensor_type = "Sharp CCD sensor + TDA8787"; break;
+ case 0x3E: sensor_type = "Sharp CCD sensor + Exas 98L59"; break;
+ case 0x3F: sensor_type = "Sharp CCD sensor + ADI 9804"; break;
+ case 0x40: sensor_type = "UPA 1021 sensor"; break;
+ case 0x100: sensor_type = "VGA sensor"; break;
+ case 0x101: sensor_type = "PAL MR sensor"; break;
+ default: sensor_type = "unknown type of sensor"; break;
+ }
+ }
+ if (sensor_type != NULL)
+ Info("This %s camera is equipped with a %s (%d).\n", pdev->vdev->name, sensor_type, i);
+ }
+ }
+
+ /* Turn on camera */
+ if (power_save) {
+ i = pwc_camera_power(pdev, 1);
+ if (i < 0)
+ Info("Failed to restore power to the camera! (%d)\n", i);
+ }
+ /* Set LED on/off time */
+ if (pwc_set_leds(pdev, led_on, led_off) < 0)
+ Info("Failed to set LED on/off time.\n");
+
+ /* Find our decompressor, if any */
+ pdev->decompressor = pwc_find_decompressor(pdev->type);
+#if PWC_DEBUG
+ Debug("Found decompressor for %d at 0x%p\n", pdev->type, pdev->decompressor);
+#endif
+ pwc_construct(pdev); /* set min/max sizes correct */
+
+ /* So far, so good. Allocate memory. */
+ i = pwc_allocate_buffers(pdev);
+ if (i < 0) {
+ Trace(TRACE_OPEN, "Failed to allocate buffer memory.\n");
+ up(&pdev->modlock);
+ return i;
+ }
+
+ /* Reset buffers & parameters */
+ pwc_reset_buffers(pdev);
+ for (i = 0; i < default_mbufs; i++)
+ pdev->image_used[i] = 0;
+ pdev->vframe_count = 0;
+ pdev->vframes_dumped = 0;
+ pdev->vframes_error = 0;
+ pdev->visoc_errors = 0;
+ pdev->error_status = 0;
+#if PWC_DEBUG
+ pdev->sequence = 0;
+#endif
+ pwc_construct(pdev); /* set min/max sizes correct */
+
+ /* Set some defaults */
+ pdev->vsnapshot = 0;
+
+ /* Start iso pipe for video; first try the last used video size
+ (or the default one); if that fails, try QCIF/10 or QSIF/10;
+ if that fails too, give up.
+ */
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[pdev->vsize].x, pwc_image_sizes[pdev->vsize].y, pdev->vframes, pdev->vcompression, 0);
+ if (i) {
+ Trace(TRACE_OPEN, "First attempt at set_video_mode failed.\n");
+ if (pdev->type == 730 || pdev->type == 740 || pdev->type == 750)
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[PSZ_QSIF].x, pwc_image_sizes[PSZ_QSIF].y, 10, pdev->vcompression, 0);
+ else
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[PSZ_QCIF].x, pwc_image_sizes[PSZ_QCIF].y, 10, pdev->vcompression, 0);
+ }
+ if (i) {
+ Trace(TRACE_OPEN, "Second attempt at set_video_mode failed.\n");
+ up(&pdev->modlock);
+ return i;
+ }
+
+ i = pwc_isoc_init(pdev);
+ if (i) {
+ Trace(TRACE_OPEN, "Failed to init ISOC stuff = %d.\n", i);
+ up(&pdev->modlock);
+ return i;
+ }
+
+ pdev->vopen++;
+ file->private_data = vdev;
+ /* lock decompressor; this has a small race condition, since we
+ could in theory unload pwcx.o between pwc_find_decompressor()
+ above and this call. I doubt it's ever going to be a problem.
+ */
+ if (pdev->decompressor != NULL)
+ pdev->decompressor->lock();
+ up(&pdev->modlock);
+ Trace(TRACE_OPEN, "<< video_open() returns 0.\n");
+ return 0;
+}
+
+/* Note that all cleanup is done in the reverse order from _open */
+static int pwc_video_close(struct inode *inode, struct file *file)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ int i;
+
+ Trace(TRACE_OPEN, ">> video_close called(vdev = 0x%p).\n", vdev);
+
+ pdev = (struct pwc_device *)vdev->priv;
+ if (pdev->vopen == 0)
+ Info("video_close() called on closed device?\n");
+
+ /* Dump statistics, but only if a reasonable amount of frames were
+ processed (to prevent endless log-entries in case of snap-shot
+ programs)
+ */
+ if (pdev->vframe_count > 20)
+ Info("Closing video device: %d frames received, dumped %d frames, %d frames with errors.\n", pdev->vframe_count, pdev->vframes_dumped, pdev->vframes_error);
+
+ if (pdev->decompressor != NULL) {
+ pdev->decompressor->exit();
+ pdev->decompressor->unlock();
+ pdev->decompressor = NULL;
+ }
+
+ pwc_isoc_cleanup(pdev);
+ pwc_free_buffers(pdev);
+
+ /* Turn off LEDS and power down camera, but only when not unplugged */
+ if (pdev->error_status != EPIPE) {
+ /* Turn LEDs off */
+ if (pwc_set_leds(pdev, 0, 0) < 0)
+ Info("Failed to set LED on/off time.\n");
+ if (power_save) {
+ i = pwc_camera_power(pdev, 0);
+ if (i < 0)
+ Err("Failed to power down camera (%d)\n", i);
+ }
+ }
+ pdev->vopen = 0;
+ Trace(TRACE_OPEN, "<< video_close()\n");
+ return 0;
+}
+
+/*
+ * FIXME: what about two parallel reads ????
+ * ANSWER: Not supported. You can't open the device more than once,
+ despite what the V4L1 interface says. First, I don't see
+ the need, second there's no mechanism of alerting the
+ 2nd/3rd/... process of events like changing image size.
+ And I don't see the point of blocking that for the
+ 2nd/3rd/... process.
+ In multi-threaded environments, reading in parallel from any
+ device is tricky anyhow.
+ */
+
+static ssize_t pwc_video_read(struct file *file, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ int noblock = file->f_flags & O_NONBLOCK;
+ DECLARE_WAITQUEUE(wait, current);
+ int bytes_to_read;
+
+ Trace(TRACE_READ, "video_read(0x%p, %p, %zd) called.\n", vdev, buf, count);
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+ if (pdev->error_status)
+ return -pdev->error_status; /* Something happened, report what. */
+
+ /* In case we're doing partial reads, we don't have to wait for a frame */
+ if (pdev->image_read_pos == 0) {
+ /* Do wait queueing according to the (doc)book */
+ add_wait_queue(&pdev->frameq, &wait);
+ while (pdev->full_frames == NULL) {
+ /* Check for unplugged/etc. here */
+ if (pdev->error_status) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -pdev->error_status ;
+ }
+ if (noblock) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -EWOULDBLOCK;
+ }
+ if (signal_pending(current)) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -ERESTARTSYS;
+ }
+ schedule();
+ set_current_state(TASK_INTERRUPTIBLE);
+ }
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+
+ /* Decompress and release frame */
+ if (pwc_handle_frame(pdev))
+ return -EFAULT;
+ }
+
+ Trace(TRACE_READ, "Copying data to user space.\n");
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ bytes_to_read = pdev->frame_size;
+ else
+ bytes_to_read = pdev->view.size;
+
+ /* copy bytes to user space; we allow for partial reads */
+ if (count + pdev->image_read_pos > bytes_to_read)
+ count = bytes_to_read - pdev->image_read_pos;
+ if (copy_to_user(buf, pdev->image_ptr[pdev->fill_image] + pdev->image_read_pos, count))
+ return -EFAULT;
+ pdev->image_read_pos += count;
+ if (pdev->image_read_pos >= bytes_to_read) { /* All data has been read */
+ pdev->image_read_pos = 0;
+ pwc_next_image(pdev);
+ }
+ return count;
+}
+
+static unsigned int pwc_video_poll(struct file *file, poll_table *wait)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+
+ poll_wait(file, &pdev->frameq, wait);
+ if (pdev->error_status)
+ return POLLERR;
+ if (pdev->full_frames != NULL) /* we have frames waiting */
+ return (POLLIN | POLLRDNORM);
+
+ return 0;
+}
+
+static int pwc_video_do_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, void *arg)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ DECLARE_WAITQUEUE(wait, current);
+
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+
+ switch (cmd) {
+ /* Query capabilities */
+ case VIDIOCGCAP:
+ {
+ struct video_capability *caps = arg;
+
+ strcpy(caps->name, vdev->name);
+ caps->type = VID_TYPE_CAPTURE;
+ caps->channels = 1;
+ caps->audios = 1;
+ caps->minwidth = pdev->view_min.x;
+ caps->minheight = pdev->view_min.y;
+ caps->maxwidth = pdev->view_max.x;
+ caps->maxheight = pdev->view_max.y;
+ break;
+ }
+
+ /* Channel functions (simulate 1 channel) */
+ case VIDIOCGCHAN:
+ {
+ struct video_channel *v = arg;
+
+ if (v->channel != 0)
+ return -EINVAL;
+ v->flags = 0;
+ v->tuners = 0;
+ v->type = VIDEO_TYPE_CAMERA;
+ strcpy(v->name, "Webcam");
+ return 0;
+ }
+
+ case VIDIOCSCHAN:
+ {
+ /* The spec says the argument is an integer, but
+ the bttv driver uses a video_channel arg, which
+ makes sense because it also has the norm flag.
+ */
+ struct video_channel *v = arg;
+ if (v->channel != 0)
+ return -EINVAL;
+ return 0;
+ }
+
+
+ /* Picture functions; contrast etc. */
+ case VIDIOCGPICT:
+ {
+ struct video_picture *p = arg;
+ int val;
+
+ val = pwc_get_brightness(pdev);
+ if (val >= 0)
+ p->brightness = val;
+ else
+ p->brightness = 0xffff;
+ val = pwc_get_contrast(pdev);
+ if (val >= 0)
+ p->contrast = val;
+ else
+ p->contrast = 0xffff;
+ /* Gamma, Whiteness, what's the difference? :) */
+ val = pwc_get_gamma(pdev);
+ if (val >= 0)
+ p->whiteness = val;
+ else
+ p->whiteness = 0xffff;
+ val = pwc_get_saturation(pdev);
+ if (val >= 0)
+ p->colour = val;
+ else
+ p->colour = 0xffff;
+ p->depth = 24;
+ p->palette = pdev->vpalette;
+ p->hue = 0xFFFF; /* N/A */
+ break;
+ }
+
+ case VIDIOCSPICT:
+ {
+ struct video_picture *p = arg;
+ /*
+ * FIXME: Suppose we are mid read
+ ANSWER: No problem: the firmware of the camera
+ can handle brightness/contrast/etc
+ changes at _any_ time, and the palette
+ is used exactly once in the uncompress
+ routine.
+ */
+ pwc_set_brightness(pdev, p->brightness);
+ pwc_set_contrast(pdev, p->contrast);
+ pwc_set_gamma(pdev, p->whiteness);
+ pwc_set_saturation(pdev, p->colour);
+ if (p->palette && p->palette != pdev->vpalette) {
+ switch (p->palette) {
+ case VIDEO_PALETTE_YUV420P:
+ case VIDEO_PALETTE_RAW:
+ pdev->vpalette = p->palette;
+ return pwc_try_video_mode(pdev, pdev->image.x, pdev->image.y, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ break;
+ default:
+ return -EINVAL;
+ break;
+ }
+ }
+ break;
+ }
+
+ /* Window/size parameters */
+ case VIDIOCGWIN:
+ {
+ struct video_window *vw = arg;
+
+ vw->x = 0;
+ vw->y = 0;
+ vw->width = pdev->view.x;
+ vw->height = pdev->view.y;
+ vw->chromakey = 0;
+ vw->flags = (pdev->vframes << PWC_FPS_SHIFT) |
+ (pdev->vsnapshot ? PWC_FPS_SNAPSHOT : 0);
+ break;
+ }
+
+ case VIDIOCSWIN:
+ {
+ struct video_window *vw = arg;
+ int fps, snapshot, ret;
+
+ fps = (vw->flags & PWC_FPS_FRMASK) >> PWC_FPS_SHIFT;
+ snapshot = vw->flags & PWC_FPS_SNAPSHOT;
+ if (fps == 0)
+ fps = pdev->vframes;
+ if (pdev->view.x == vw->width && pdev->view.y == vw->height && fps == pdev->vframes && snapshot == pdev->vsnapshot)
+ return 0;
+ ret = pwc_try_video_mode(pdev, vw->width, vw->height, fps, pdev->vcompression, snapshot);
+ if (ret)
+ return ret;
+ break;
+ }
+
+ /* We don't have overlay support (yet) */
+ case VIDIOCGFBUF:
+ {
+ struct video_buffer *vb = arg;
+
+ memset(vb,0,sizeof(*vb));
+ break;
+ }
+
+ /* mmap() functions */
+ case VIDIOCGMBUF:
+ {
+ /* Tell the user program how much memory is needed for a mmap() */
+ struct video_mbuf *vm = arg;
+ int i;
+
+ memset(vm, 0, sizeof(*vm));
+ vm->size = default_mbufs * pdev->len_per_image;
+ vm->frames = default_mbufs; /* double buffering should be enough for most applications */
+ for (i = 0; i < default_mbufs; i++)
+ vm->offsets[i] = i * pdev->len_per_image;
+ break;
+ }
+
+ case VIDIOCMCAPTURE:
+ {
+ /* Start capture into a given image buffer (called 'frame' in video_mmap structure) */
+ struct video_mmap *vm = arg;
+
+ Trace(TRACE_READ, "VIDIOCMCAPTURE: %dx%d, frame %d, format %d\n", vm->width, vm->height, vm->frame, vm->format);
+ if (vm->frame < 0 || vm->frame >= default_mbufs)
+ return -EINVAL;
+
+ /* xawtv is nasty. It probes the available palettes
+ by setting a very small image size and trying
+ various palettes... The driver doesn't support
+ such small images, so I'm working around it.
+ */
+ if (vm->format)
+ {
+ switch (vm->format)
+ {
+ case VIDEO_PALETTE_YUV420P:
+ case VIDEO_PALETTE_RAW:
+ break;
+ default:
+ return -EINVAL;
+ break;
+ }
+ }
+
+ if ((vm->width != pdev->view.x || vm->height != pdev->view.y) &&
+ (vm->width >= pdev->view_min.x && vm->height >= pdev->view_min.y)) {
+ int ret;
+
+ Trace(TRACE_OPEN, "VIDIOCMCAPTURE: changing size to please xawtv :-(.\n");
+ ret = pwc_try_video_mode(pdev, vm->width, vm->height, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ if (ret)
+ return ret;
+ } /* ... size mismatch */
+
+ /* FIXME: should we lock here? */
+ if (pdev->image_used[vm->frame])
+ return -EBUSY; /* buffer wasn't available. Bummer */
+ pdev->image_used[vm->frame] = 1;
+
+ /* Okay, we're done here. In the SYNC call we wait until a
+ frame becomes available, then expand the image into the given
+ buffer.
+ In contrast to the CPiA cam the Philips cams deliver a
+ constant stream, almost like a grabber card. Also,
+ we have separate buffers for the rawdata and the image,
+ meaning we can nearly always expand into the requested buffer.
+ */
+ Trace(TRACE_READ, "VIDIOCMCAPTURE done.\n");
+ break;
+ }
+
+ case VIDIOCSYNC:
+ {
+ /* The doc says: "Whenever a buffer is used it should
+ call VIDIOCSYNC to free this frame up and continue."
+
+ The only odd thing about this whole procedure is
+ that MCAPTURE flags the buffer as "in use", and
+ SYNC immediately unmarks it, while it isn't
+ after SYNC that you know that the buffer actually
+ got filled! So you better not start a CAPTURE in
+ the same frame immediately (use double buffering).
+ This is not a problem for this cam, since it has
+ extra intermediate buffers, but a hardware
+ grabber card will then overwrite the buffer
+ you're working on.
+ */
+ int *mbuf = arg;
+ int ret;
+
+ Trace(TRACE_READ, "VIDIOCSYNC called (%d).\n", *mbuf);
+
+ /* bounds check */
+ if (*mbuf < 0 || *mbuf >= default_mbufs)
+ return -EINVAL;
+ /* check if this buffer was requested anyway */
+ if (pdev->image_used[*mbuf] == 0)
+ return -EINVAL;
+
+ /* Add ourselves to the frame wait-queue.
+
+ FIXME: needs auditing for safety.
+ QUESTION: In what respect? I think that using the
+ frameq is safe now.
+ */
+ add_wait_queue(&pdev->frameq, &wait);
+ while (pdev->full_frames == NULL) {
+ if (pdev->error_status) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -pdev->error_status;
+ }
+
+ if (signal_pending(current)) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -ERESTARTSYS;
+ }
+ schedule();
+ set_current_state(TASK_INTERRUPTIBLE);
+ }
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+
+ /* The frame is ready. Expand in the image buffer
+ requested by the user. I don't care if you
+ mmap() 5 buffers and request data in this order:
+ buffer 4 2 3 0 1 2 3 0 4 3 1 . . .
+ Grabber hardware may not be so forgiving.
+ */
+ Trace(TRACE_READ, "VIDIOCSYNC: frame ready.\n");
+ pdev->fill_image = *mbuf; /* tell in which buffer we want the image to be expanded */
+ /* Decompress, etc */
+ ret = pwc_handle_frame(pdev);
+ pdev->image_used[*mbuf] = 0;
+ if (ret)
+ return -EFAULT;
+ break;
+ }
+
+ case VIDIOCGAUDIO:
+ {
+ struct video_audio *v = arg;
+
+ strcpy(v->name, "Microphone");
+ v->audio = -1; /* unknown audio minor */
+ v->flags = 0;
+ v->mode = VIDEO_SOUND_MONO;
+ v->volume = 0;
+ v->bass = 0;
+ v->treble = 0;
+ v->balance = 0x8000;
+ v->step = 1;
+ break;
+ }
+
+ case VIDIOCSAUDIO:
+ {
+ /* Dummy: nothing can be set */
+ break;
+ }
+
+ case VIDIOCGUNIT:
+ {
+ struct video_unit *vu = arg;
+
+ vu->video = pdev->vdev->minor & 0x3F;
+ vu->audio = -1; /* not known yet */
+ vu->vbi = -1;
+ vu->radio = -1;
+ vu->teletext = -1;
+ break;
+ }
+ default:
+ return pwc_ioctl(pdev, cmd, arg);
+ } /* ..switch */
+ return 0;
+}
+
+static int pwc_video_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ return video_usercopy(inode, file, cmd, arg, pwc_video_do_ioctl);
+}
+
+
+static int pwc_video_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ unsigned long start = vma->vm_start;
+ unsigned long size = vma->vm_end-vma->vm_start;
+ unsigned long page, pos;
+
+ Trace(TRACE_MEMORY, "mmap(0x%p, 0x%lx, %lu) called.\n", vdev, start, size);
+ pdev = vdev->priv;
+
+ pos = (unsigned long)pdev->image_data;
+ while (size > 0) {
+ page = kvirt_to_pa(pos);
+ if (remap_page_range(vma, start, page, PAGE_SIZE, PAGE_SHARED))
+ return -EAGAIN;
+
+ start += PAGE_SIZE;
+ pos += PAGE_SIZE;
+ if (size > PAGE_SIZE)
+ size -= PAGE_SIZE;
+ else
+ size = 0;
+ }
+
+ return 0;
+}
+
+/***************************************************************************/
+/* USB functions */
+
+/* This function gets called when a new device is plugged in or the usb core
+ * is loaded.
+ */
+
+static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct pwc_device *pdev = NULL;
+ int vendor_id, product_id, type_id;
+ int i, hint;
+ int features = 0;
+ int video_nr = -1; /* default: use next available device */
+ char serial_number[30], *name;
+
+ /* Check if we can handle this device */
+ Trace(TRACE_PROBE, "probe() called [%04X %04X], if %d\n",
+ udev->descriptor.idVendor, udev->descriptor.idProduct,
+ intf->altsetting->desc.bInterfaceNumber);
+
+ /* The interfaces are probed one by one. We are only interested in the
+ video interface (0) now.
+ Interface 1 is the Audio Control, and interface 2 is Audio itself.
+ */
+ if (intf->altsetting->desc.bInterfaceNumber > 0)
+ return -ENODEV;
+
+ vendor_id = udev->descriptor.idVendor;
+ product_id = udev->descriptor.idProduct;
+
+ if (vendor_id == 0x0471) {
+ switch (product_id) {
+ case 0x0302:
+ Info("Philips PCA645VC USB webcam detected.\n");
+ name = "Philips 645 webcam";
+ type_id = 645;
+ break;
+ case 0x0303:
+ Info("Philips PCA646VC USB webcam detected.\n");
+ name = "Philips 646 webcam";
+ type_id = 646;
+ break;
+ case 0x0304:
+ Info("Askey VC010 type 2 USB webcam detected.\n");
+ name = "Askey VC010 webcam";
+ type_id = 646;
+ break;
+ case 0x0307:
+ Info("Philips PCVC675K (Vesta) USB webcam detected.\n");
+ name = "Philips 675 webcam";
+ type_id = 675;
+ break;
+ case 0x0308:
+ Info("Philips PCVC680K (Vesta Pro) USB webcam detected.\n");
+ name = "Philips 680 webcam";
+ type_id = 680;
+ break;
+ case 0x030C:
+ Info("Philips PCVC690K (Vesta Pro Scan) USB webcam detected.\n");
+ name = "Philips 690 webcam";
+ type_id = 690;
+ break;
+ case 0x0310:
+ Info("Philips PCVC730K (ToUCam Fun)/PCVC830 (ToUCam II) USB webcam detected.\n");
+ name = "Philips 730 webcam";
+ type_id = 730;
+ break;
+ case 0x0311:
+ Info("Philips PCVC740K (ToUCam Pro)/PCVC840 (ToUCam II) USB webcam detected.\n");
+ name = "Philips 740 webcam";
+ type_id = 740;
+ break;
+ case 0x0312:
+ Info("Philips PCVC750K (ToUCam Pro Scan) USB webcam detected.\n");
+ name = "Philips 750 webcam";
+ type_id = 750;
+ break;
+ case 0x0313:
+ Info("Philips PCVC720K/40 (ToUCam XS) USB webcam detected.\n");
+ name = "Philips 720K/40 webcam";
+ type_id = 720;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x069A) {
+ switch(product_id) {
+ case 0x0001:
+ Info("Askey VC010 type 1 USB webcam detected.\n");
+ name = "Askey VC010 webcam";
+ type_id = 645;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x046d) {
+ switch(product_id) {
+ case 0x08b0:
+ Info("Logitech QuickCam Pro 3000 USB webcam detected.\n");
+ name = "Logitech QuickCam Pro 3000";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b1:
+ Info("Logitech QuickCam Notebook Pro USB webcam detected.\n");
+ name = "Logitech QuickCam Notebook Pro";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b2:
+ Info("Logitech QuickCam 4000 Pro USB webcam detected.\n");
+ name = "Logitech QuickCam Pro 4000";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b3:
+ Info("Logitech QuickCam Zoom USB webcam detected.\n");
+ name = "Logitech QuickCam Zoom";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08B4:
+ Info("Logitech QuickCam Zoom (new model) USB webcam detected.\n");
+ name = "Logitech QuickCam Zoom";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b5:
+ Info("Logitech QuickCam Orbit/Sphere USB webcam detected.\n");
+ name = "Logitech QuickCam Orbit";
+ type_id = 740; /* CCD sensor */
+ features |= FEATURE_MOTOR_PANTILT;
+ break;
+ case 0x08b6:
+ case 0x08b7:
+ case 0x08b8:
+ Info("Logitech QuickCam detected (reserved ID).\n");
+ name = "Logitech QuickCam (res.)";
+ type_id = 730; /* Assuming CMOS */
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x055d) {
+ /* I don't know the difference between the C10 and the C30;
+ I suppose the difference is the sensor, but both cameras
+ work equally well with a type_id of 675
+ */
+ switch(product_id) {
+ case 0x9000:
+ Info("Samsung MPC-C10 USB webcam detected.\n");
+ name = "Samsung MPC-C10";
+ type_id = 675;
+ break;
+ case 0x9001:
+ Info("Samsung MPC-C30 USB webcam detected.\n");
+ name = "Samsung MPC-C30";
+ type_id = 675;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x041e) {
+ switch(product_id) {
+ case 0x400c:
+ Info("Creative Labs Webcam 5 detected.\n");
+ name = "Creative Labs Webcam 5";
+ type_id = 730;
+ break;
+ case 0x4011:
+ Info("Creative Labs Webcam Pro Ex detected.\n");
+ name = "Creative Labs Webcam Pro Ex";
+ type_id = 740;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x04cc) {
+ switch(product_id) {
+ case 0x8116:
+ Info("Sotec Afina Eye USB webcam detected.\n");
+ name = "Sotec Afina Eye";
+ type_id = 730;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x06be) {
+ switch(product_id) {
+ case 0x8116:
+ /* Basically the same as the Sotec Afina Eye; the
+ AME Co. Afina Eye shares the same vendor and
+ product IDs, so a second branch for 0x06be
+ below would be unreachable. */
+ Info("AME CU-001 USB webcam detected.\n");
+ name = "AME CU-001";
+ type_id = 730;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x0d81) {
+ switch(product_id) {
+ case 0x1900:
+ Info("Visionite VCS-UC300 USB webcam detected.\n");
+ name = "Visionite VCS-UC300";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x1910:
+ Info("Visionite VCS-UM100 USB webcam detected.\n");
+ name = "Visionite VCS-UM100";
+ type_id = 730; /* CMOS sensor */
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else
+ return -ENODEV; /* Not any of the known types; but the list keeps growing. */
+
+ memset(serial_number, 0, 30);
+ usb_string(udev, udev->descriptor.iSerialNumber, serial_number, 29);
+ Trace(TRACE_PROBE, "Device serial number is %s\n", serial_number);
+
+ if (udev->descriptor.bNumConfigurations > 1)
+ Info("Warning: more than 1 configuration available.\n");
+
+ /* Allocate structure, initialize pointers, mutexes, etc. and link it to the usb_device */
+ pdev = kmalloc(sizeof(struct pwc_device), GFP_KERNEL);
+ if (pdev == NULL) {
+ Err("Oops, could not allocate memory for pwc_device.\n");
+ return -ENOMEM;
+ }
+ memset(pdev, 0, sizeof(struct pwc_device));
+ pdev->type = type_id;
+ pdev->vsize = default_size;
+ pdev->vframes = default_fps;
+ strcpy(pdev->serial, serial_number);
+ pdev->features = features;
+ if (vendor_id == 0x046D && product_id == 0x08B5)
+ {
+ /* Logitech QuickCam Orbit
+ The ranges have been determined experimentally; they may differ from cam to cam.
+ Also, the exact ranges left-right and up-down are different for my cam
+ */
+ pdev->angle_range.pan_min = -7000;
+ pdev->angle_range.pan_max = 7000;
+ pdev->angle_range.tilt_min = -3000;
+ pdev->angle_range.tilt_max = 2500;
+ }
+
+ init_MUTEX(&pdev->modlock);
+ pdev->ptrlock = SPIN_LOCK_UNLOCKED;
+
+ pdev->udev = udev;
+ init_waitqueue_head(&pdev->frameq);
+ pdev->vcompression = pwc_preferred_compression;
+
+ /* Allocate video_device structure */
+ pdev->vdev = video_device_alloc();
+ if (pdev->vdev == NULL)
+ {
+ Err("Cannot allocate video_device structure. Failing probe.\n");
+ kfree(pdev);
+ return -ENOMEM;
+ }
+ memcpy(pdev->vdev, &pwc_template, sizeof(pwc_template));
+ strcpy(pdev->vdev->name, name);
+ pdev->vdev->owner = THIS_MODULE;
+ video_set_drvdata(pdev->vdev, pdev);
+
+ pdev->release = udev->descriptor.bcdDevice;
+ Trace(TRACE_PROBE, "Release: %04x\n", pdev->release);
+
+ /* Now search device_hint[] table for a match, so we can hint a node number. */
+ for (hint = 0; hint < MAX_DEV_HINTS; hint++) {
+ if (((device_hint[hint].type == -1) || (device_hint[hint].type == pdev->type)) &&
+ (device_hint[hint].pdev == NULL)) {
+ /* so far, so good... try serial number */
+ if ((device_hint[hint].serial_number[0] == '*') || !strcmp(device_hint[hint].serial_number, serial_number)) {
+ /* match! */
+ video_nr = device_hint[hint].device_node;
+ Trace(TRACE_PROBE, "Found hint, will try to register as /dev/video%d\n", video_nr);
+ break;
+ }
+ }
+ }
+
+ pdev->vdev->release = video_device_release;
+ i = video_register_device(pdev->vdev, VFL_TYPE_GRABBER, video_nr);
+ if (i < 0) {
+ Err("Failed to register as video device (%d).\n", i);
+ video_device_release(pdev->vdev); /* Drip... drip... drip... */
+ kfree(pdev); /* Oops, no memory leaks please */
+ return -EIO;
+ }
+ else {
+ Info("Registered as /dev/video%d.\n", pdev->vdev->minor & 0x3F);
+ }
+
+ /* occupy slot */
+ if (hint < MAX_DEV_HINTS)
+ device_hint[hint].pdev = pdev;
+
+ Trace(TRACE_PROBE, "probe() function returning struct at 0x%p.\n", pdev);
+ usb_set_intfdata (intf, pdev);
+ return 0;
+}
+
+/* The user yanked out the cable... */
+static void usb_pwc_disconnect(struct usb_interface *intf)
+{
+ struct pwc_device *pdev;
+ int hint;
+
+ lock_kernel();
+ pdev = usb_get_intfdata (intf);
+ usb_set_intfdata (intf, NULL);
+ if (pdev == NULL) {
+ Err("pwc_disconnect() Called without private pointer.\n");
+ goto disconnect_out;
+ }
+ if (pdev->udev == NULL) {
+ Err("pwc_disconnect() already called for %p\n", pdev);
+ goto disconnect_out;
+ }
+ if (pdev->udev != interface_to_usbdev(intf)) {
+ Err("pwc_disconnect() Woops: pointer mismatch udev/pdev.\n");
+ goto disconnect_out;
+ }
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("pwc_disconnect() Magic number failed. Consult your scrolls and try again.\n");
+ goto disconnect_out;
+ }
+#endif
+
+ /* We got unplugged; this is signalled by an EPIPE error code */
+ if (pdev->vopen) {
+ Info("Disconnected while webcam is in use!\n");
+ pdev->error_status = EPIPE;
+ }
+
+ /* Alert waiting processes */
+ wake_up_interruptible(&pdev->frameq);
+ /* Wait until device is closed */
+ while (pdev->vopen)
+ schedule();
+ /* Device is now closed, so we can safely unregister it */
+ Trace(TRACE_PROBE, "Unregistering video device in disconnect().\n");
+ video_unregister_device(pdev->vdev);
+
+ /* Free memory (don't set pdev to 0 just yet) */
+ kfree(pdev);
+
+disconnect_out:
+ /* search device_hint[] table if we occupy a slot, by any chance */
+ for (hint = 0; hint < MAX_DEV_HINTS; hint++)
+ if (device_hint[hint].pdev == pdev)
+ device_hint[hint].pdev = NULL;
+
+ unlock_kernel();
+}
+
+
+/* *grunt* We have to do atoi ourselves :-( */
+static int pwc_atoi(const char *s)
+{
+ int k = 0;
+
+ while (*s >= '0' && *s <= '9') {
+ k = 10 * k + (*s - '0');
+ s++;
+ }
+ return k;
+}
+
+
+/*
+ * Initialization code & module stuff
+ */
+
+static char *size = NULL;
+static int fps = 0;
+static int fbufs = 0;
+static int mbufs = 0;
+static int trace = -1;
+static int compression = -1;
+static int leds[2] = { -1, -1 };
+static char *dev_hint[MAX_DEV_HINTS] = { };
+
+MODULE_PARM(size, "s");
+MODULE_PARM_DESC(size, "Initial image size. One of sqcif, qsif, qcif, sif, cif, vga");
+MODULE_PARM(fps, "i");
+MODULE_PARM_DESC(fps, "Initial frames per second. Varies with model, useful range 5-30");
+MODULE_PARM(fbufs, "i");
+MODULE_PARM_DESC(fbufs, "Number of internal frame buffers to reserve");
+MODULE_PARM(mbufs, "i");
+MODULE_PARM_DESC(mbufs, "Number of external (mmap()ed) image buffers");
+MODULE_PARM(trace, "i");
+MODULE_PARM_DESC(trace, "For debugging purposes");
+MODULE_PARM(power_save, "i");
+MODULE_PARM_DESC(power_save, "Turn power save feature in camera on or off");
+MODULE_PARM(compression, "i");
+MODULE_PARM_DESC(compression, "Preferred compression quality. Range 0 (uncompressed) to 3 (high compression)");
+MODULE_PARM(leds, "2i");
+MODULE_PARM_DESC(leds, "LED on,off time in milliseconds");
+MODULE_PARM(dev_hint, "0-20s");
+MODULE_PARM_DESC(dev_hint, "Device node hints");
+
+MODULE_DESCRIPTION("Philips & OEM USB webcam driver");
+MODULE_AUTHOR("Nemosoft Unv. <webcam@smcc.demon.nl>");
+MODULE_LICENSE("GPL");
+
+static int __init usb_pwc_init(void)
+{
+ int i, sz;
+ char *sizenames[PSZ_MAX] = { "sqcif", "qsif", "qcif", "sif", "cif", "vga" };
+
+ Info("Philips webcam module version " PWC_VERSION " loaded.\n");
+ Info("Supports Philips PCA645/646, PCVC675/680/690, PCVC720[40]/730/740/750 & PCVC830/840.\n");
+ Info("Also supports the Askey VC010, various Logitech Quickcams, Samsung MPC-C10 and MPC-C30,\n");
+ Info("the Creative WebCam 5 & Pro Ex, SOTEC Afina Eye and Visionite VCS-UC300 and VCS-UM100.\n");
+
+ if (fps) {
+ if (fps < 4 || fps > 30) {
+ Err("Framerate out of bounds (4-30).\n");
+ return -EINVAL;
+ }
+ default_fps = fps;
+ Info("Default framerate set to %d.\n", default_fps);
+ }
+
+ if (size) {
+ /* string; try matching with array */
+ for (sz = 0; sz < PSZ_MAX; sz++) {
+ if (!strcmp(sizenames[sz], size)) { /* Found! */
+ default_size = sz;
+ break;
+ }
+ }
+ if (sz == PSZ_MAX) {
+ Err("Size not recognized; try size=[sqcif | qsif | qcif | sif | cif | vga].\n");
+ return -EINVAL;
+ }
+ Info("Default image size set to %s [%dx%d].\n", sizenames[default_size], pwc_image_sizes[default_size].x, pwc_image_sizes[default_size].y);
+ }
+ if (mbufs) {
+ if (mbufs < 1 || mbufs > MAX_IMAGES) {
+ Err("Illegal number of mmap() buffers; use a number between 1 and %d.\n", MAX_IMAGES);
+ return -EINVAL;
+ }
+ default_mbufs = mbufs;
+ Info("Number of image buffers set to %d.\n", default_mbufs);
+ }
+ if (fbufs) {
+ if (fbufs < 2 || fbufs > MAX_FRAMES) {
+ Err("Illegal number of frame buffers; use a number between 2 and %d.\n", MAX_FRAMES);
+ return -EINVAL;
+ }
+ default_fbufs = fbufs;
+ Info("Number of frame buffers set to %d.\n", default_fbufs);
+ }
+ if (trace >= 0) {
+ Info("Trace options: 0x%04x\n", trace);
+ pwc_trace = trace;
+ }
+ if (compression >= 0) {
+ if (compression > 3) {
+ Err("Invalid compression setting; use a number between 0 (uncompressed) and 3 (high).\n");
+ return -EINVAL;
+ }
+ pwc_preferred_compression = compression;
+ Info("Preferred compression set to %d.\n", pwc_preferred_compression);
+ }
+ if (power_save)
+ Info("Enabling power save on open/close.\n");
+ if (leds[0] >= 0)
+ led_on = leds[0];
+ if (leds[1] >= 0)
+ led_off = leds[1];
+
+ /* Big device node whoopla. Basically, it allows you to assign a
+ device node (/dev/videoX) to a camera, based on its type
+ & serial number. The format is [type[.serialnumber]:]node.
+
+ Any camera that isn't matched by these rules gets the next
+ available free device node.
+ */
+ for (i = 0; i < MAX_DEV_HINTS; i++) {
+ char *s, *colon, *dot;
+
+ /* This loop also initializes the array */
+ device_hint[i].pdev = NULL;
+ s = dev_hint[i];
+ if (s != NULL && *s != '\0') {
+ device_hint[i].type = -1; /* wildcard */
+ strcpy(device_hint[i].serial_number, "*");
+
+ /* parse string: chop at ':' & '/' */
+ colon = dot = s;
+ while (*colon != '\0' && *colon != ':')
+ colon++;
+ while (*dot != '\0' && *dot != '.')
+ dot++;
+ /* Few sanity checks */
+ if (*dot != '\0' && dot > colon) {
+ Err("Malformed camera hint: the colon must be after the dot.\n");
+ return -EINVAL;
+ }
+
+ if (*colon == '\0') {
+ /* No colon */
+ if (*dot != '\0') {
+ Err("Malformed camera hint: no colon + device node given.\n");
+ return -EINVAL;
+ }
+ else {
+ /* No type or serial number specified, just a number. */
+ device_hint[i].device_node = pwc_atoi(s);
+ }
+ }
+ else {
+ /* There's a colon, so we have at least a type and a device node */
+ device_hint[i].type = pwc_atoi(s);
+ device_hint[i].device_node = pwc_atoi(colon + 1);
+ if (*dot != '\0') {
+ /* There's a serial number as well */
+ int k;
+
+ dot++;
+ k = 0;
+ while (*dot != ':' && k < 29) {
+ device_hint[i].serial_number[k++] = *dot;
+ dot++;
+ }
+ device_hint[i].serial_number[k] = '\0';
+ }
+ }
+#if PWC_DEBUG
+ Debug("device_hint[%d]:\n", i);
+ Debug(" type : %d\n", device_hint[i].type);
+ Debug(" serial# : %s\n", device_hint[i].serial_number);
+ Debug(" node : %d\n", device_hint[i].device_node);
+#endif
+ }
+ else
+ device_hint[i].type = 0; /* not filled */
+ } /* ..for MAX_DEV_HINTS */
+
+ Trace(TRACE_PROBE, "Registering driver at address 0x%p.\n", &pwc_driver);
+ return usb_register(&pwc_driver);
+}
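The dev_hint strings parsed above follow the grammar `[type[.serialnumber]:]node`. As a rough userspace sketch of that same grammar (the names `hint_parse` and `struct hint` are illustrative only and do not exist in the driver):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct hint {
	int type;           /* -1 = wildcard, as in device_hint[] */
	char serial[30];    /* "*" = any serial number */
	int node;           /* requested /dev/videoX minor */
};

/* Parse "[type[.serialnumber]:]node"; returns 0 on success, -1 on a
   malformed hint (mirroring the sanity checks in usb_pwc_init). */
static int hint_parse(const char *s, struct hint *h)
{
	const char *colon = strchr(s, ':');
	const char *dot = strchr(s, '.');

	h->type = -1;
	strcpy(h->serial, "*");

	if (dot != NULL && (colon == NULL || dot > colon))
		return -1;          /* the dot must precede the colon */
	if (colon == NULL) {
		h->node = atoi(s);  /* bare device node number */
		return 0;
	}
	h->type = atoi(s);          /* atoi() stops at '.' or ':' */
	if (dot != NULL) {
		size_t n = (size_t)(colon - dot - 1);
		if (n > sizeof(h->serial) - 1)
			n = sizeof(h->serial) - 1;
		memcpy(h->serial, dot + 1, n);
		h->serial[n] = '\0';
	}
	h->node = atoi(colon + 1);
	return 0;
}
```

So `"2"` requests node 2 for any camera, `"730:1"` pins type 730 to node 1, and `"740.0123:3"` additionally matches on the serial number.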
+
+static void __exit usb_pwc_exit(void)
+{
+ Trace(TRACE_MODULE, "Deregistering driver.\n");
+ usb_deregister(&pwc_driver);
+ Info("Philips webcam module removed.\n");
+}
+
+module_init(usb_pwc_init);
+module_exit(usb_pwc_exit);
+
--- /dev/null
+#ifndef PWC_IOCTL_H
+#define PWC_IOCTL_H
+
+/* (C) 2001-2004 Nemosoft Unv. webcam@smcc.demon.nl
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* This is pwc-ioctl.h belonging to PWC 8.12.1
+ It contains structures and defines to communicate from user space
+ directly to the driver.
+ */
+
+/*
+ Changes
+ 2001/08/03 Alvarado Added ioctl constants to access methods for
+ changing white balance and red/blue gains
+ 2002/12/15 G. H. Fernandez-Toribio VIDIOCGREALSIZE
+ 2003/12/13 Nemosoft Unv. Some modifications to make interfacing to
+ PWCX easier
+ */
+
+/* These are private ioctl() commands, specific for the Philips webcams.
+ They contain functions not found in other webcams, and settings not
+ specified in the Video4Linux API.
+
+ The #define names are built up like follows:
+ VIDIOC VIDeo IOCtl prefix
+ PWC Philips WebCam
+ G optional: Get
+ S optional: Set
+ ... the function
+ */
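The naming scheme above maps directly onto the standard Linux ioctl number encoding. As an illustrative check (the define is re-derived here only for the example), the `_IOC_*` accessor macros pull the individual fields back out:

```c
#include <assert.h>
#include <linux/ioctl.h>

/* Re-derived from the definition later in this header, purely to show
   how the number decomposes. */
#define VIDIOCPWCSAGC _IOW('v', 200, int)

/* _IOW('v', 200, int) packs direction, argument size, type character
   and function number into a single command word. */
static void show_decomposition(void)
{
	assert(_IOC_TYPE(VIDIOCPWCSAGC) == 'v');         /* VIDIOC class    */
	assert(_IOC_NR(VIDIOCPWCSAGC) == 200);           /* function number */
	assert(_IOC_SIZE(VIDIOCPWCSAGC) == sizeof(int)); /* argument size   */
	assert(_IOC_DIR(VIDIOCPWCSAGC) == _IOC_WRITE);   /* 'S' = set/write */
}
```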
+
+
+ /* Enumeration of image sizes */
+#define PSZ_SQCIF 0x00
+#define PSZ_QSIF 0x01
+#define PSZ_QCIF 0x02
+#define PSZ_SIF 0x03
+#define PSZ_CIF 0x04
+#define PSZ_VGA 0x05
+#define PSZ_MAX 6
+
+
+/* The frame rate is encoded in the video_window.flags parameter using
+ the upper 16 bits, since some flags are defined nowadays. The following
+ defines provide a mask and shift to filter out this value.
+
+ In 'Snapshot' mode the camera freezes its automatic exposure and colour
+ balance controls.
+ */
+#define PWC_FPS_SHIFT 16
+#define PWC_FPS_MASK 0x00FF0000
+#define PWC_FPS_FRMASK 0x003F0000
+#define PWC_FPS_SNAPSHOT 0x00400000
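A short sketch of how a frame rate would be packed into and unpacked from the upper bits of `video_window.flags` using the defines above. The helper names are hypothetical; the driver performs this packing internally:

```c
#include <assert.h>

/* Same values as the defines above */
#define PWC_FPS_SHIFT    16
#define PWC_FPS_FRMASK   0x003F0000
#define PWC_FPS_SNAPSHOT 0x00400000

/* Store a frame rate in the flags word, leaving all other bits
   (including the snapshot bit) untouched. */
static unsigned int pwc_pack_fps(unsigned int flags, unsigned int fps)
{
	return (flags & ~PWC_FPS_FRMASK) | (fps << PWC_FPS_SHIFT);
}

/* Extract the frame rate again. */
static unsigned int pwc_unpack_fps(unsigned int flags)
{
	return (flags & PWC_FPS_FRMASK) >> PWC_FPS_SHIFT;
}
```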
+
+
+/* Structure for transferring x & y coordinates */
+struct pwc_coord
+{
+ int x, y; /* x/y coordinates */
+ int size; /* size, or offset */
+};
+
+
+/* Used with VIDIOCPWCPROBE */
+struct pwc_probe
+{
+ char name[32];
+ int type;
+};
+
+struct pwc_serial
+{
+ char serial[30]; /* String with serial number. Contains terminating 0 */
+};
+
+/* pwc_whitebalance.mode values */
+#define PWC_WB_INDOOR 0
+#define PWC_WB_OUTDOOR 1
+#define PWC_WB_FL 2
+#define PWC_WB_MANUAL 3
+#define PWC_WB_AUTO 4
+
+/* Used with VIDIOCPWC[SG]AWB (Auto White Balance).
+ Set mode to one of the PWC_WB_* values above.
+ *red and *blue are the respective gains of these colour components inside
+ the camera; range 0..65535
+ When 'mode' == PWC_WB_MANUAL, 'manual_red' and 'manual_blue' are set or read;
+ otherwise undefined.
+ 'read_red' and 'read_blue' are read-only.
+*/
+struct pwc_whitebalance
+{
+ int mode;
+ int manual_red, manual_blue; /* R/W */
+ int read_red, read_blue; /* R/O */
+};
+
+/*
+ 'control_speed' and 'control_delay' are used in automatic whitebalance mode,
+ and tell the camera how fast it should react to changes in lighting, and
+ with how much delay. Valid values are 0..65535.
+*/
+struct pwc_wb_speed
+{
+ int control_speed;
+ int control_delay;
+
+};
+
+/* Used with VIDIOCPWC[SG]LED */
+struct pwc_leds
+{
+ int led_on; /* Led on-time; range = 0..25000 */
+ int led_off; /* Led off-time; range = 0..25000 */
+};
+
+/* Image size (used with GREALSIZE) */
+struct pwc_imagesize
+{
+ int width;
+ int height;
+};
+
+/* Defines and structures for Motorized Pan & Tilt */
+#define PWC_MPT_PAN 0x01
+#define PWC_MPT_TILT 0x02
+#define PWC_MPT_TIMEOUT 0x04 /* for status */
+
+/* Set angles; when absolute != 0, the angle is absolute and the
+ driver calculates the relative offset for you. This can only
+ be used with VIDIOCPWCSANGLE; VIDIOCPWCGANGLE always returns
+ absolute angles.
+ */
+struct pwc_mpt_angles
+{
+ int absolute; /* write-only */
+ int pan; /* degrees * 100 */
+ int tilt; /* degrees * 100 */
+};
+
+/* Range of angles of the camera, both horizontally and vertically.
+ */
+struct pwc_mpt_range
+{
+ int pan_min, pan_max; /* degrees * 100 */
+ int tilt_min, tilt_max;
+};
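Given the two structures above, a caller would typically clamp a requested angle against the range reported by `VIDIOCPWCMPTGRANGE` before issuing `VIDIOCPWCMPTSANGLE`. A minimal userspace sketch (structs re-declared locally, helper name hypothetical):

```c
#include <assert.h>

struct mpt_angles { int pan, tilt; };   /* degrees * 100 */
struct mpt_range  { int pan_min, pan_max, tilt_min, tilt_max; };

/* Clamp a requested pan/tilt pair into the camera's reported range. */
static void mpt_clamp(struct mpt_angles *a, const struct mpt_range *r)
{
	if (a->pan  < r->pan_min)  a->pan  = r->pan_min;
	if (a->pan  > r->pan_max)  a->pan  = r->pan_max;
	if (a->tilt < r->tilt_min) a->tilt = r->tilt_min;
	if (a->tilt > r->tilt_max) a->tilt = r->tilt_max;
}
```

With the experimentally determined QuickCam Orbit range from the probe code (pan -7000..7000, tilt -3000..2500), a request of (9000, -5000) would be clamped to (7000, -3000).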
+
+struct pwc_mpt_status
+{
+ int status;
+ int time_pan;
+ int time_tilt;
+};
+
+
+/* This is used for out-of-kernel decompression. With it, you can get
+ all the necessary information to initialize and use the decompressor
+ routines in standalone applications.
+ */
+struct pwc_video_command
+{
+ int type; /* camera type (645, 675, 730, etc.) */
+ int release; /* release number */
+
+ int size; /* one of PSZ_* */
+ int alternate;
+ int command_len; /* length of USB video command */
+ unsigned char command_buf[13]; /* Actual USB video command */
+ int bandlength; /* >0 = compressed */
+ int frame_size; /* Size of one (un)compressed frame */
+};
+
+/* Flags for PWCX subroutines. Not all modules honour all flags. */
+#define PWCX_FLAG_PLANAR 0x0001
+#define PWCX_FLAG_BAYER 0x0008
+
+
+/* IOCTL definitions */
+
+ /* Restore user settings */
+#define VIDIOCPWCRUSER _IO('v', 192)
+ /* Save user settings */
+#define VIDIOCPWCSUSER _IO('v', 193)
+ /* Restore factory settings */
+#define VIDIOCPWCFACTORY _IO('v', 194)
+
+ /* You can manipulate the compression factor. A compression preference of 0
+ means use uncompressed modes when available; 1 is low compression, 2 is
+ medium and 3 is high compression preferred. Of course, the higher the
+ compression, the lower the bandwidth used but more chance of artefacts
+ in the image. The driver automatically chooses a higher compression when
+ the preferred mode is not available.
+ */
+ /* Set preferred compression quality (0 = uncompressed, 3 = highest compression) */
+#define VIDIOCPWCSCQUAL _IOW('v', 195, int)
+ /* Get preferred compression quality */
+#define VIDIOCPWCGCQUAL _IOR('v', 195, int)
+
+
+/* Retrieve serial number of camera */
+#define VIDIOCPWCGSERIAL _IOR('v', 198, struct pwc_serial)
+
+ /* This is a probe function; since so many devices are supported, it
+ becomes difficult to include all the names in programs that want to
+ check for the enhanced Philips stuff. So instead, try this PROBE;
+ it returns a structure with the original name, and the corresponding
+ Philips type.
+ To use, fill the structure with zeroes, call PROBE and if that succeeds,
+ compare the name with that returned from VIDIOCGCAP; they should be the
+ same. If so, you can be assured it is a Philips (OEM) cam and the type
+ is valid.
+ */
+#define VIDIOCPWCPROBE _IOR('v', 199, struct pwc_probe)
+
+ /* Set AGC (Automatic Gain Control); int < 0 = auto, 0..65535 = fixed */
+#define VIDIOCPWCSAGC _IOW('v', 200, int)
+ /* Get AGC; int < 0 = auto; >= 0 = fixed, range 0..65535 */
+#define VIDIOCPWCGAGC _IOR('v', 200, int)
+ /* Set shutter speed; int < 0 = auto; >= 0 = fixed, range 0..65535 */
+#define VIDIOCPWCSSHUTTER _IOW('v', 201, int)
+
+ /* Color compensation (Auto White Balance) */
+#define VIDIOCPWCSAWB _IOW('v', 202, struct pwc_whitebalance)
+#define VIDIOCPWCGAWB _IOR('v', 202, struct pwc_whitebalance)
+
+ /* Auto WB speed */
+#define VIDIOCPWCSAWBSPEED _IOW('v', 203, struct pwc_wb_speed)
+#define VIDIOCPWCGAWBSPEED _IOR('v', 203, struct pwc_wb_speed)
+
+ /* LEDs on/off/blink; int range 0..65535 */
+#define VIDIOCPWCSLED _IOW('v', 205, struct pwc_leds)
+#define VIDIOCPWCGLED _IOR('v', 205, struct pwc_leds)
+
+ /* Contour (sharpness); int < 0 = auto, 0..65535 = fixed */
+#define VIDIOCPWCSCONTOUR _IOW('v', 206, int)
+#define VIDIOCPWCGCONTOUR _IOR('v', 206, int)
+
+ /* Backlight compensation; 0 = off, otherwise on */
+#define VIDIOCPWCSBACKLIGHT _IOW('v', 207, int)
+#define VIDIOCPWCGBACKLIGHT _IOR('v', 207, int)
+
+ /* Flickerless mode; = 0 off, otherwise on */
+#define VIDIOCPWCSFLICKER _IOW('v', 208, int)
+#define VIDIOCPWCGFLICKER _IOR('v', 208, int)
+
+ /* Dynamic noise reduction; 0 off, 3 = high noise reduction */
+#define VIDIOCPWCSDYNNOISE _IOW('v', 209, int)
+#define VIDIOCPWCGDYNNOISE _IOR('v', 209, int)
+
+ /* Real image size as used by the camera; tells you whether or not there's a gray border around the image */
+#define VIDIOCPWCGREALSIZE _IOR('v', 210, struct pwc_imagesize)
+
+ /* Motorized pan & tilt functions */
+#define VIDIOCPWCMPTRESET _IOW('v', 211, int)
+#define VIDIOCPWCMPTGRANGE _IOR('v', 211, struct pwc_mpt_range)
+#define VIDIOCPWCMPTSANGLE _IOW('v', 212, struct pwc_mpt_angles)
+#define VIDIOCPWCMPTGANGLE _IOR('v', 212, struct pwc_mpt_angles)
+#define VIDIOCPWCMPTSTATUS _IOR('v', 213, struct pwc_mpt_status)
+
+ /* Get the USB set-video command; needed for initializing libpwcx */
+#define VIDIOCPWCGVIDCMD _IOR('v', 215, struct pwc_video_command)
+
+#endif
--- /dev/null
+/* Linux driver for Philips webcam
+ Various miscellaneous functions and tables.
+ (C) 1999-2003 Nemosoft Unv. (webcam@smcc.demon.nl)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#include <linux/slab.h>
+
+#include "pwc.h"
+
+struct pwc_coord pwc_image_sizes[PSZ_MAX] =
+{
+ { 128, 96, 0 },
+ { 160, 120, 0 },
+ { 176, 144, 0 },
+ { 320, 240, 0 },
+ { 352, 288, 0 },
+ { 640, 480, 0 },
+};
+
+/* x,y -> PSZ_ */
+int pwc_decode_size(struct pwc_device *pdev, int width, int height)
+{
+ int i, find;
+
+ /* Make sure we don't go beyond our max size.
+ NB: we have different limits for RAW and normal modes. In case
+ you don't have the decompressor loaded or use RAW mode,
+ the maximum viewable size is smaller.
+ */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ {
+ if (width > pdev->abs_max.x || height > pdev->abs_max.y)
+ {
+ Debug("VIDEO_PALETTE_RAW: going beyond abs_max.\n");
+ return -1;
+ }
+ }
+ else
+ {
+ if (width > pdev->view_max.x || height > pdev->view_max.y)
+ {
+ Debug("VIDEO_PALETTE_ not RAW: going beyond view_max.\n");
+ return -1;
+ }
+ }
+
+ /* Find the largest size supported by the camera that fits into the
+ requested size.
+ */
+ find = -1;
+ for (i = 0; i < PSZ_MAX; i++) {
+ if (pdev->image_mask & (1 << i)) {
+ if (pwc_image_sizes[i].x <= width && pwc_image_sizes[i].y <= height)
+ find = i;
+ }
+ }
+ return find;
+}
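The matching loop above can be exercised in isolation: it walks the size table and remembers the last (hence largest) supported size that still fits the request. A userspace replica, with the table mirroring `pwc_image_sizes[]` (names here are illustrative):

```c
#include <assert.h>

/* Mirrors pwc_image_sizes[]: SQCIF, QSIF, QCIF, SIF, CIF, VGA */
static const struct { int x, y; } sizes[6] = {
	{ 128,  96 }, { 160, 120 }, { 176, 144 },
	{ 320, 240 }, { 352, 288 }, { 640, 480 },
};

/* Return the index of the largest size enabled in 'mask' that fits
   within width x height, or -1 if none fits. */
static int decode_size(unsigned int mask, int width, int height)
{
	int i, find = -1;

	for (i = 0; i < 6; i++)
		if ((mask & (1u << i)) &&
		    sizes[i].x <= width && sizes[i].y <= height)
			find = i;
	return find;
}
```

For example, a 200x150 request with all sizes enabled resolves to QCIF (index 2), since SIF (320x240) no longer fits.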
+
+/* Initialize variables depending on type and decompressor */
+void pwc_construct(struct pwc_device *pdev)
+{
+ switch(pdev->type) {
+ case 645:
+ case 646:
+ pdev->view_min.x = 128;
+ pdev->view_min.y = 96;
+ pdev->view_max.x = 352;
+ pdev->view_max.y = 288;
+ pdev->abs_max.x = 352;
+ pdev->abs_max.y = 288;
+ pdev->image_mask = 1 << PSZ_SQCIF | 1 << PSZ_QCIF | 1 << PSZ_CIF;
+ pdev->vcinterface = 2;
+ pdev->vendpoint = 4;
+ pdev->frame_header_size = 0;
+ pdev->frame_trailer_size = 0;
+ break;
+ case 675:
+ case 680:
+ case 690:
+ pdev->view_min.x = 128;
+ pdev->view_min.y = 96;
+ /* Anthill bug #38: PWC always reports max size, even without PWCX */
+ if (pdev->decompressor != NULL) {
+ pdev->view_max.x = 640;
+ pdev->view_max.y = 480;
+ }
+ else {
+ pdev->view_max.x = 352;
+ pdev->view_max.y = 288;
+ }
+ pdev->image_mask = 1 << PSZ_SQCIF | 1 << PSZ_QSIF | 1 << PSZ_QCIF | 1 << PSZ_SIF | 1 << PSZ_CIF | 1 << PSZ_VGA;
+ pdev->abs_max.x = 640;
+ pdev->abs_max.y = 480;
+ pdev->vcinterface = 3;
+ pdev->vendpoint = 4;
+ pdev->frame_header_size = 0;
+ pdev->frame_trailer_size = 0;
+ break;
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ pdev->view_min.x = 160;
+ pdev->view_min.y = 120;
+ /* Anthill bug #38: PWC always reports max size, even without PWCX */
+ if (pdev->decompressor != NULL) {
+ pdev->view_max.x = 640;
+ pdev->view_max.y = 480;
+ }
+ else {
+ /* We use CIF, not SIF since some tools really need CIF. So we cheat a bit. */
+ pdev->view_max.x = 352;
+ pdev->view_max.y = 288;
+ }
+ pdev->image_mask = 1 << PSZ_QSIF | 1 << PSZ_SIF | 1 << PSZ_VGA;
+ pdev->abs_max.x = 640;
+ pdev->abs_max.y = 480;
+ pdev->vcinterface = 3;
+ pdev->vendpoint = 5;
+ pdev->frame_header_size = TOUCAM_HEADER_SIZE;
+ pdev->frame_trailer_size = TOUCAM_TRAILER_SIZE;
+ break;
+ }
+ pdev->vpalette = VIDEO_PALETTE_YUV420P; /* default */
+ pdev->view_min.size = pdev->view_min.x * pdev->view_min.y;
+ pdev->view_max.size = pdev->view_max.x * pdev->view_max.y;
+ /* length of image, in YUV format; always allocate enough memory. */
+ pdev->len_per_image = (pdev->abs_max.x * pdev->abs_max.y * 3) / 2;
+}
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ Decompression frontend.
+ (C) 1999-2003 Nemosoft Unv. (webcam@smcc.demon.nl)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+/*
+ This is where the decompression routines register and unregister
+ themselves. It also has a decompressor wrapper function.
+*/
+
+#include <asm/current.h>
+#include <asm/types.h>
+// #include <linux/sched.h>
+
+#include "pwc.h"
+#include "pwc-uncompress.h"
+
+
+/* This contains a list of all registered decompressors */
+static LIST_HEAD(pwc_decompressor_list);
+
+/* Should the pwc_decompressor structure ever change, we increase the
+ version number so that we don't get nasty surprises, or can
+ dynamically adjust our structure.
+ */
+const int pwc_decompressor_version = PWC_MAJOR;
+
+/* Add decompressor to list, ignoring duplicates */
+void pwc_register_decompressor(struct pwc_decompressor *pwcd)
+{
+ if (pwc_find_decompressor(pwcd->type) == NULL) {
+ Trace(TRACE_PWCX, "Adding decompressor for model %d.\n", pwcd->type);
+ list_add_tail(&pwcd->pwcd_list, &pwc_decompressor_list);
+ }
+}
+
+/* Remove decompressor from list */
+void pwc_unregister_decompressor(int type)
+{
+ struct pwc_decompressor *find;
+
+ find = pwc_find_decompressor(type);
+ if (find != NULL) {
+ Trace(TRACE_PWCX, "Removing decompressor for model %d.\n", type);
+ list_del(&find->pwcd_list);
+ }
+}
+
+/* Find decompressor in list */
+struct pwc_decompressor *pwc_find_decompressor(int type)
+{
+ struct list_head *tmp;
+ struct pwc_decompressor *pwcd;
+
+ list_for_each(tmp, &pwc_decompressor_list) {
+ pwcd = list_entry(tmp, struct pwc_decompressor, pwcd_list);
+ if (pwcd->type == type)
+ return pwcd;
+ }
+ return NULL;
+}
+
+
+
+int pwc_decompress(struct pwc_device *pdev)
+{
+ struct pwc_frame_buf *fbuf;
+ int n, line, col, stride;
+ void *yuv, *image;
+ u16 *src;
+ u16 *dsty, *dstu, *dstv;
+
+ if (pdev == NULL)
+ return -EFAULT;
+#if defined(__KERNEL__) && defined(PWC_MAGIC)
+ if (pdev->magic != PWC_MAGIC) {
+ Err("pwc_decompress(): magic failed.\n");
+ return -EFAULT;
+ }
+#endif
+
+ fbuf = pdev->read_frame;
+ if (fbuf == NULL)
+ return -EFAULT;
+ image = pdev->image_ptr[pdev->fill_image];
+ if (!image)
+ return -EFAULT;
+
+ yuv = fbuf->data + pdev->frame_header_size; /* Skip header */
+
+ /* Raw format; that's easy... */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ {
+ memcpy(image, yuv, pdev->frame_size);
+ return 0;
+ }
+
+ if (pdev->vbandlength == 0) {
+ /* Uncompressed mode. We copy the data into the output buffer,
+ using the viewport size (which may be larger than the image
+ size). Unfortunately we have to do a bit of byte stuffing
+ to get the desired output format/size.
+ */
+ /*
+ * We do some byte shuffling here to go from the
+ * native format to YUV420P.
+ */
+ src = (u16 *)yuv;
+ n = pdev->view.x * pdev->view.y;
+
+ /* offset in Y plane */
+ stride = pdev->view.x * pdev->offset.y + pdev->offset.x;
+ dsty = (u16 *)(image + stride);
+
+ /* offsets in U/V planes */
+ stride = pdev->view.x * pdev->offset.y / 4 + pdev->offset.x / 2;
+ dstu = (u16 *)(image + n + stride);
+ dstv = (u16 *)(image + n + n / 4 + stride);
+
+ /* increment after each line */
+ stride = (pdev->view.x - pdev->image.x) / 2; /* u16 is 2 bytes */
+
+ for (line = 0; line < pdev->image.y; line++) {
+ for (col = 0; col < pdev->image.x; col += 4) {
+ *dsty++ = *src++;
+ *dsty++ = *src++;
+ if (line & 1)
+ *dstv++ = *src++;
+ else
+ *dstu++ = *src++;
+ }
+ dsty += stride;
+ if (line & 1)
+ dstv += (stride >> 1);
+ else
+ dstu += (stride >> 1);
+ }
+ }
+ else {
+ /* Compressed; the decompressor routines will write the data
+ in planar format immediately.
+ */
+ int flags;
+
+ flags = PWCX_FLAG_PLANAR;
+ if (pdev->vsize == PSZ_VGA && pdev->vframes == 5 && pdev->vsnapshot)
+ flags |= PWCX_FLAG_BAYER;
+
+ if (pdev->decompressor)
+ pdev->decompressor->decompress(
+ &pdev->image, &pdev->view, &pdev->offset,
+ yuv, image,
+ flags,
+ pdev->decompress_data, pdev->vbandlength);
+ else
+ return -ENXIO; /* No such device or address: missing decompressor */
+ }
+ return 0;
+}
+
+/* Make sure these functions are available for the decompressor plugin,
+ whether this code is compiled into the kernel or as a module.
+ */
+
+EXPORT_SYMBOL_NOVERS(pwc_decompressor_version);
+EXPORT_SYMBOL(pwc_register_decompressor);
+EXPORT_SYMBOL(pwc_unregister_decompressor);
--- /dev/null
+/* (C) 1999-2003 Nemosoft Unv. (webcam@smcc.demon.nl)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* This file is the bridge between the kernel module and the plugin; it
+ describes the structures and datatypes used in both modules. Any
+ significant change should be reflected by increasing the
+ pwc_decompressor_version major number.
+ */
+#ifndef PWC_UNCOMPRESS_H
+#define PWC_UNCOMPRESS_H
+
+#include <linux/config.h>
+#include <linux/linkage.h>
+#include <linux/list.h>
+
+#include "pwc-ioctl.h"
+
+/* from pwc-dec.h */
+#define PWCX_FLAG_PLANAR 0x0001
+/* */
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* The decompressor structure.
+ Every type of decompressor registers itself with the main module.
+ When a device is opened, it looks up the correct decompressor, and
+ uses that when a compressed video mode is requested.
+ */
+struct pwc_decompressor
+{
+ int type; /* type of camera (645, 680, etc) */
+ int table_size; /* memory needed */
+
+ void (* init)(int type, int release, void *buffer, void *table); /* Initialization routine; should be called after each set_video_mode */
+ void (* exit)(void); /* Cleanup routine */
+ void (* decompress)(struct pwc_coord *image, struct pwc_coord *view,
+ struct pwc_coord *offset,
+ void *src, void *dst, int flags,
+ void *table, int bandlength);
+ void (* lock)(void); /* make sure module cannot be unloaded */
+ void (* unlock)(void); /* release lock on module */
+
+ struct list_head pwcd_list;
+};
+
+
+/* Our structure version number; set to the major version number */
+extern const int pwc_decompressor_version;
+
+/* Adds a decompressor to the list, keyed on its 'type' field (which matches the 'type' field in pwc_device); duplicate registrations are ignored */
+extern void pwc_register_decompressor(struct pwc_decompressor *pwcd);
+/* Removes decompressor, based on the type number */
+extern void pwc_unregister_decompressor(int type);
+/* Returns pointer to decompressor struct, or NULL if it doesn't exist */
+extern struct pwc_decompressor *pwc_find_decompressor(int type);
+
+#ifdef CONFIG_USB_PWCX
+/* If the decompressor is compiled in, we must call these manually */
+extern int usb_pwcx_init(void);
+extern void usb_pwcx_exit(void);
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
--- /dev/null
+/* (C) 1999-2003 Nemosoft Unv. (webcam@smcc.demon.nl)
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef PWC_H
+#define PWC_H
+
+#include <linux/version.h>
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+#include <linux/spinlock.h>
+#include <linux/videodev.h>
+#include <linux/wait.h>
+#include <linux/smp_lock.h>
+#include <asm/semaphore.h>
+#include <asm/errno.h>
+
+#include "pwc-uncompress.h"
+#include "pwc-ioctl.h"
+
+/* Defines and structures for the Philips webcam */
+/* Used for checking memory corruption/pointer validation */
+#define PWC_MAGIC 0x89DC10ABUL
+#undef PWC_MAGIC
+
+/* Turn some debugging options on/off */
+#define PWC_DEBUG 0
+
+/* Trace certain actions in the driver */
+#define TRACE_MODULE 0x0001
+#define TRACE_PROBE 0x0002
+#define TRACE_OPEN 0x0004
+#define TRACE_READ 0x0008
+#define TRACE_MEMORY 0x0010
+#define TRACE_FLOW 0x0020
+#define TRACE_SIZE 0x0040
+#define TRACE_PWCX 0x0080
+#define TRACE_SEQUENCE 0x1000
+
+#define Trace(R, A...) if (pwc_trace & R) printk(KERN_DEBUG PWC_NAME " " A)
+#define Debug(A...) printk(KERN_DEBUG PWC_NAME " " A)
+#define Info(A...) printk(KERN_INFO PWC_NAME " " A)
+#define Err(A...) printk(KERN_ERR PWC_NAME " " A)
+
+
+/* Defines for ToUCam cameras */
+#define TOUCAM_HEADER_SIZE 8
+#define TOUCAM_TRAILER_SIZE 4
+
+#define FEATURE_MOTOR_PANTILT 0x0001
+
+/* Version block */
+#define PWC_MAJOR 9
+#define PWC_MINOR 0
+#define PWC_VERSION "9.0.1"
+#define PWC_NAME "pwc"
+
+/* Turn certain features on/off */
+#define PWC_INT_PIPE 0
+
+/* Ignore errors in the first N frames, to allow for startup delays */
+#define FRAME_LOWMARK 5
+
+/* Size and number of buffers for the ISO pipe. */
+#define MAX_ISO_BUFS 2
+#define ISO_FRAMES_PER_DESC 10
+#define ISO_MAX_FRAME_SIZE 960
+#define ISO_BUFFER_SIZE (ISO_FRAMES_PER_DESC * ISO_MAX_FRAME_SIZE)
+
+/* Frame buffers: contains compressed or uncompressed video data. */
+#define MAX_FRAMES 5
+/* Maximum size after decompression is 640x480 YUV data, 1.5 * 640 * 480 */
+#define PWC_FRAME_SIZE (460800 + TOUCAM_HEADER_SIZE + TOUCAM_TRAILER_SIZE)
+
+/* Absolute maximum number of buffers available for mmap() */
+#define MAX_IMAGES 10
+
+/* The following structures were based on cpia.h. Why reinvent the wheel? :-) */
+struct pwc_iso_buf
+{
+ void *data;
+ int length;
+ int read;
+ struct urb *urb;
+};
+
+/* intermediate buffers with raw data from the USB cam */
+struct pwc_frame_buf
+{
+ void *data;
+ volatile int filled; /* number of bytes filled */
+ struct pwc_frame_buf *next; /* list */
+#if PWC_DEBUG
+ int sequence; /* Sequence number */
+#endif
+};
+
+struct pwc_device
+{
+ struct video_device *vdev;
+#ifdef PWC_MAGIC
+ int magic;
+#endif
+ /* Pointer to our usb_device */
+ struct usb_device *udev;
+
+ int type; /* type of cam (645, 646, 675, 680, 690, 720, 730, 740, 750) */
+ int release; /* release number */
+ int features; /* feature bits */
+ char serial[30]; /* serial number (string) */
+ int error_status; /* set when something goes wrong with the cam (unplugged, USB errors) */
+ int usb_init; /* set when the cam has been initialized over USB */
+
+ /*** Video data ***/
+ int vopen; /* flag */
+ int vendpoint; /* video isoc endpoint */
+ int vcinterface; /* video control interface */
+ int valternate; /* alternate interface needed */
+ int vframes, vsize; /* frames-per-second & size (see PSZ_*) */
+ int vpalette; /* palette: 420P, RAW or RGBBAYER */
+ int vframe_count; /* received frames */
+ int vframes_dumped; /* counter for dumped frames */
+ int vframes_error; /* frames received in error */
+ int vmax_packet_size; /* USB maxpacket size */
+ int vlast_packet_size; /* for frame synchronisation */
+ int visoc_errors; /* number of contiguous ISOC errors */
+ int vcompression; /* desired compression factor */
+ int vbandlength; /* compressed band length; 0 is uncompressed */
+ char vsnapshot; /* snapshot mode */
+ char vsync; /* used by isoc handler */
+ char vmirror; /* for ToUCaM series */
+
+ int cmd_len;
+ unsigned char cmd_buf[13];
+
+ /* The image acquisition requires 3 to 4 steps:
+ 1. data is gathered in short packets from the USB controller
+ 2. data is synchronized and packed into a frame buffer
+ 3a. in case data is compressed, decompress it directly into image buffer
+ 3b. in case data is uncompressed, copy into image buffer with viewport
+ 4. data is transferred to the user process
+
+ Note that MAX_ISO_BUFS != MAX_FRAMES != MAX_IMAGES....
+ We have in effect a back-to-back-double-buffer system.
+ */
+ /* 1: isoc */
+ struct pwc_iso_buf sbuf[MAX_ISO_BUFS];
+ char iso_init;
+
+ /* 2: frame */
+ struct pwc_frame_buf *fbuf; /* all frames */
+ struct pwc_frame_buf *empty_frames, *empty_frames_tail; /* all empty frames */
+ struct pwc_frame_buf *full_frames, *full_frames_tail; /* all filled frames */
+ struct pwc_frame_buf *fill_frame; /* frame currently being filled */
+ struct pwc_frame_buf *read_frame; /* frame currently read by user process */
+ int frame_header_size, frame_trailer_size;
+ int frame_size;
+ int frame_total_size; /* including header & trailer */
+ int drop_frames;
+#if PWC_DEBUG
+ int sequence; /* Debugging aid */
+#endif
+
+ /* 3: decompression */
+ struct pwc_decompressor *decompressor; /* function block with decompression routines */
+ void *decompress_data; /* private data for decompression engine */
+
+ /* 4: image */
+ /* We have an 'image' and a 'view', where 'image' is the fixed-size image
+ as delivered by the camera, and 'view' is the size requested by the
+ program. The camera image is centered in this viewport, laced with
+ a gray or black border. view_min <= image <= view <= view_max;
+ */
+ int image_mask; /* bitmask of supported sizes */
+ struct pwc_coord view_min, view_max; /* minimum and maximum viewable sizes */
+ struct pwc_coord abs_max; /* maximum supported size with compression */
+ struct pwc_coord image, view; /* image and viewport size */
+ struct pwc_coord offset; /* offset within the viewport */
+
+ void *image_data; /* total buffer, which is subdivided into ... */
+ void *image_ptr[MAX_IMAGES]; /* ...several images... */
+ int fill_image; /* ...which are rotated. */
+ int len_per_image; /* length per image */
+ int image_read_pos; /* In case we read data in pieces, keep track of where we are in the image buffer */
+ int image_used[MAX_IMAGES]; /* For MCAPTURE and SYNC */
+
+ struct semaphore modlock; /* to prevent races in video_open(), etc */
+ spinlock_t ptrlock; /* for manipulating the buffer pointers */
+
+ /*** motorized pan/tilt feature */
+ struct pwc_mpt_range angle_range;
+ int pan_angle; /* in degrees * 100 */
+ int tilt_angle; /* absolute angle; 0,0 is home position */
+
+ /*** Misc. data ***/
+ wait_queue_head_t frameq; /* When waiting for a frame to finish... */
+#if PWC_INT_PIPE
+ void *usb_int_handler; /* for the interrupt endpoint */
+#endif
+};
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Global variables */
+extern int pwc_trace;
+extern int pwc_preferred_compression;
+
+/** functions in pwc-if.c */
+int pwc_try_video_mode(struct pwc_device *pdev, int width, int height, int new_fps, int new_compression, int new_snapshot);
+
+/** Functions in pwc-misc.c */
+/* sizes in pixels */
+extern struct pwc_coord pwc_image_sizes[PSZ_MAX];
+
+int pwc_decode_size(struct pwc_device *pdev, int width, int height);
+void pwc_construct(struct pwc_device *pdev);
+
+/** Functions in pwc-ctrl.c */
+/* Request a certain video mode. Returns < 0 if not possible */
+extern int pwc_set_video_mode(struct pwc_device *pdev, int width, int height, int frames, int compression, int snapshot);
+/* Calculate the number of bytes per image (not frame) */
+extern void pwc_set_image_buffer_size(struct pwc_device *pdev);
+
+/* Various controls; should be obvious. Value 0..65535, or < 0 on error */
+extern int pwc_get_brightness(struct pwc_device *pdev);
+extern int pwc_set_brightness(struct pwc_device *pdev, int value);
+extern int pwc_get_contrast(struct pwc_device *pdev);
+extern int pwc_set_contrast(struct pwc_device *pdev, int value);
+extern int pwc_get_gamma(struct pwc_device *pdev);
+extern int pwc_set_gamma(struct pwc_device *pdev, int value);
+extern int pwc_get_saturation(struct pwc_device *pdev);
+extern int pwc_set_saturation(struct pwc_device *pdev, int value);
+extern int pwc_set_leds(struct pwc_device *pdev, int on_value, int off_value);
+extern int pwc_get_leds(struct pwc_device *pdev, int *on_value, int *off_value);
+extern int pwc_get_cmos_sensor(struct pwc_device *pdev, int *sensor);
+
+/* Power down or up the camera; not supported by all models */
+extern int pwc_camera_power(struct pwc_device *pdev, int power);
+
+/* Private ioctl()s; see pwc-ioctl.h */
+extern int pwc_ioctl(struct pwc_device *pdev, unsigned int cmd, void *arg);
+
+
+/** pwc-uncompress.c */
+/* Expand frame to image, possibly including decompression. Uses read_frame and fill_image */
+extern int pwc_decompress(struct pwc_device *pdev);
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif
--- /dev/null
+ /* SQCIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* QSIF */
+ {
+ /* 5 fps */
+ {
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {2, 291, 0, {0x1C, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x23, 0x01, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {3, 437, 0, {0x1B, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xB5, 0x01, 0x80}},
+ {2, 292, 640, {0x13, 0xF4, 0x30, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x20, 0x24, 0x01, 0x80}},
+ {2, 292, 640, {0x13, 0xF4, 0x30, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x20, 0x24, 0x01, 0x80}},
+ {1, 192, 420, {0x13, 0xF4, 0x30, 0x0D, 0x1B, 0x0C, 0x53, 0x1E, 0x18, 0xC0, 0x00, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {4, 589, 0, {0x1A, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x4D, 0x02, 0x80}},
+ {3, 448, 730, {0x12, 0xF4, 0x30, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x18, 0xC0, 0x01, 0x80}},
+ {2, 292, 476, {0x12, 0xF4, 0x30, 0x0E, 0xD8, 0x0E, 0x10, 0x19, 0x18, 0x24, 0x01, 0x80}},
+ {1, 192, 312, {0x12, 0xF4, 0x50, 0x09, 0xB3, 0x08, 0xEB, 0x1E, 0x18, 0xC0, 0x00, 0x80}},
+ },
+ /* 25 fps */
+ {
+ {5, 703, 0, {0x19, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xBF, 0x02, 0x80}},
+ {3, 447, 610, {0x11, 0xF4, 0x30, 0x13, 0x0B, 0x12, 0x43, 0x14, 0x28, 0xBF, 0x01, 0x80}},
+ {2, 292, 398, {0x11, 0xF4, 0x50, 0x0C, 0x6C, 0x0B, 0xA4, 0x1E, 0x28, 0x24, 0x01, 0x80}},
+ {1, 193, 262, {0x11, 0xF4, 0x50, 0x08, 0x23, 0x07, 0x5B, 0x1E, 0x28, 0xC1, 0x00, 0x80}},
+ },
+ /* 30 fps */
+ {
+ {8, 874, 0, {0x18, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x6A, 0x03, 0x80}},
+ {5, 704, 730, {0x10, 0xF4, 0x30, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x28, 0xC0, 0x02, 0x80}},
+ {3, 448, 492, {0x10, 0xF4, 0x30, 0x0F, 0x5D, 0x0E, 0x95, 0x15, 0x28, 0xC0, 0x01, 0x80}},
+ {2, 292, 320, {0x10, 0xF4, 0x50, 0x09, 0xFB, 0x09, 0x33, 0x1E, 0x28, 0x24, 0x01, 0x80}},
+ },
+ },
+ /* QCIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* SIF */
+ {
+ /* 5 fps */
+ {
+ {4, 582, 0, {0x0D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x46, 0x02, 0x80}},
+ {3, 387, 1276, {0x05, 0xF4, 0x30, 0x27, 0xD8, 0x26, 0x48, 0x03, 0x10, 0x83, 0x01, 0x80}},
+ {2, 291, 960, {0x05, 0xF4, 0x30, 0x1D, 0xF2, 0x1C, 0x62, 0x04, 0x10, 0x23, 0x01, 0x80}},
+ {1, 191, 630, {0x05, 0xF4, 0x50, 0x13, 0xA9, 0x12, 0x19, 0x05, 0x18, 0xBF, 0x00, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {6, 775, 1278, {0x04, 0xF4, 0x30, 0x27, 0xE8, 0x26, 0x58, 0x05, 0x30, 0x07, 0x03, 0x80}},
+ {3, 447, 736, {0x04, 0xF4, 0x30, 0x16, 0xFB, 0x15, 0x6B, 0x05, 0x28, 0xBF, 0x01, 0x80}},
+ {2, 292, 480, {0x04, 0xF4, 0x70, 0x0E, 0xF9, 0x0D, 0x69, 0x09, 0x28, 0x24, 0x01, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 955, 1050, {0x03, 0xF4, 0x30, 0x20, 0xCF, 0x1F, 0x3F, 0x06, 0x48, 0xBB, 0x03, 0x80}},
+ {4, 592, 650, {0x03, 0xF4, 0x30, 0x14, 0x44, 0x12, 0xB4, 0x08, 0x30, 0x50, 0x02, 0x80}},
+ {3, 448, 492, {0x03, 0xF4, 0x50, 0x0F, 0x52, 0x0D, 0xC2, 0x09, 0x38, 0xC0, 0x01, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 958, 782, {0x02, 0xF4, 0x30, 0x18, 0x6A, 0x16, 0xDA, 0x0B, 0x58, 0xBE, 0x03, 0x80}},
+ {5, 703, 574, {0x02, 0xF4, 0x50, 0x11, 0xE7, 0x10, 0x57, 0x0B, 0x40, 0xBF, 0x02, 0x80}},
+ {3, 446, 364, {0x02, 0xF4, 0x90, 0x0B, 0x5C, 0x09, 0xCC, 0x0E, 0x38, 0xBE, 0x01, 0x80}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 958, 654, {0x01, 0xF4, 0x30, 0x14, 0x66, 0x12, 0xD6, 0x0B, 0x50, 0xBE, 0x03, 0x80}},
+ {6, 776, 530, {0x01, 0xF4, 0x50, 0x10, 0x8C, 0x0E, 0xFC, 0x0C, 0x48, 0x08, 0x03, 0x80}},
+ {4, 592, 404, {0x01, 0xF4, 0x70, 0x0C, 0x96, 0x0B, 0x06, 0x0B, 0x48, 0x50, 0x02, 0x80}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x00, 0xF4, 0x50, 0x10, 0x68, 0x0E, 0xD8, 0x0D, 0x58, 0xBD, 0x03, 0x80}},
+ {6, 775, 426, {0x00, 0xF4, 0x70, 0x0D, 0x48, 0x0B, 0xB8, 0x0F, 0x50, 0x07, 0x03, 0x80}},
+ {4, 590, 324, {0x00, 0x7A, 0x88, 0x0A, 0x1C, 0x08, 0xB4, 0x0E, 0x50, 0x4E, 0x02, 0x80}},
+ },
+ },
+ /* CIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* VGA */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {6, 773, 1272, {0x25, 0xF4, 0x30, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x03, 0x80}},
+ {4, 592, 976, {0x25, 0xF4, 0x50, 0x1E, 0x78, 0x1B, 0x58, 0x03, 0x30, 0x50, 0x02, 0x80}},
+ {3, 448, 738, {0x25, 0xF4, 0x90, 0x17, 0x0C, 0x13, 0xEC, 0x04, 0x30, 0xC0, 0x01, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 956, 788, {0x24, 0xF4, 0x70, 0x18, 0x9C, 0x15, 0x7C, 0x03, 0x48, 0xBC, 0x03, 0x80}},
+ {6, 776, 640, {0x24, 0xF4, 0xB0, 0x13, 0xFC, 0x11, 0x2C, 0x04, 0x48, 0x08, 0x03, 0x80}},
+ {4, 592, 488, {0x24, 0x7A, 0xE8, 0x0F, 0x3C, 0x0C, 0x6C, 0x06, 0x48, 0x50, 0x02, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x23, 0x7A, 0xE8, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x03, 0x80}},
+ {9, 957, 526, {0x23, 0x7A, 0xE8, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x03, 0x80}},
+ {8, 895, 492, {0x23, 0x7A, 0xE8, 0x0F, 0x5D, 0x0C, 0x8D, 0x06, 0x58, 0x7F, 0x03, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
--- /dev/null
+ /* SQCIF */
+ {
+ {0, 0, {0x04, 0x01, 0x03}},
+ {8, 0, {0x05, 0x01, 0x03}},
+ {7, 0, {0x08, 0x01, 0x03}},
+ {7, 0, {0x0A, 0x01, 0x03}},
+ {6, 0, {0x0C, 0x01, 0x03}},
+ {5, 0, {0x0F, 0x01, 0x03}},
+ {4, 0, {0x14, 0x01, 0x03}},
+ {3, 0, {0x18, 0x01, 0x03}},
+ },
+ /* QSIF */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
+ /* QCIF */
+ {
+ {0, 0, {0x04, 0x01, 0x02}},
+ {8, 0, {0x05, 0x01, 0x02}},
+ {7, 0, {0x08, 0x01, 0x02}},
+ {6, 0, {0x0A, 0x01, 0x02}},
+ {5, 0, {0x0C, 0x01, 0x02}},
+ {4, 0, {0x0F, 0x01, 0x02}},
+ {1, 0, {0x14, 0x01, 0x02}},
+ {1, 0, {0x18, 0x01, 0x02}},
+ },
+ /* SIF */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
+ /* CIF */
+ {
+ {4, 0, {0x04, 0x01, 0x01}},
+ {7, 1, {0x05, 0x03, 0x01}},
+ {6, 1, {0x08, 0x03, 0x01}},
+ {4, 1, {0x0A, 0x03, 0x01}},
+ {3, 1, {0x0C, 0x03, 0x01}},
+ {2, 1, {0x0F, 0x03, 0x01}},
+ {0},
+ {0},
+ },
+ /* VGA */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
--- /dev/null
+ /* SQCIF */
+ {
+ /* 5 fps */
+ {
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ },
+ },
+ /* QSIF */
+ {
+ /* 5 fps */
+ {
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {2, 291, 0, {0x2C, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {3, 437, 0, {0x2B, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xB5, 0x6D, 0xC0, 0x02}},
+ {2, 291, 640, {0x2B, 0xF4, 0x05, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {2, 291, 640, {0x2B, 0xF4, 0x05, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 191, 420, {0x2B, 0xF4, 0x0D, 0x0D, 0x1B, 0x0C, 0x53, 0x1E, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {4, 588, 0, {0x2A, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x4C, 0x52, 0xC0, 0x02}},
+ {3, 447, 730, {0x2A, 0xF4, 0x05, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 476, {0x2A, 0xF4, 0x0D, 0x0E, 0xD8, 0x0E, 0x10, 0x19, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 192, 312, {0x2A, 0xF4, 0x1D, 0x09, 0xB3, 0x08, 0xEB, 0x1E, 0x18, 0xC0, 0xF4, 0xC0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {5, 703, 0, {0x29, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xBF, 0x42, 0xC0, 0x02}},
+ {3, 447, 610, {0x29, 0xF4, 0x05, 0x13, 0x0B, 0x12, 0x43, 0x14, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 398, {0x29, 0xF4, 0x0D, 0x0C, 0x6C, 0x0B, 0xA4, 0x1E, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 192, 262, {0x29, 0xF4, 0x25, 0x08, 0x23, 0x07, 0x5B, 0x1E, 0x18, 0xC0, 0xF4, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {8, 873, 0, {0x28, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x69, 0x37, 0xC0, 0x02}},
+ {5, 704, 774, {0x28, 0xF4, 0x05, 0x18, 0x21, 0x17, 0x59, 0x0F, 0x18, 0xC0, 0x42, 0xC0, 0x02}},
+ {3, 448, 492, {0x28, 0xF4, 0x05, 0x0F, 0x5D, 0x0E, 0x95, 0x15, 0x18, 0xC0, 0x69, 0xC0, 0x02}},
+ {2, 291, 320, {0x28, 0xF4, 0x1D, 0x09, 0xFB, 0x09, 0x33, 0x1E, 0x18, 0x23, 0xA1, 0xC0, 0x02}},
+ },
+ },
+ /* QCIF */
+ {
+ /* 5 fps */
+ {
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {3, 385, 0, {0x0C, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x81, 0x79, 0xC0, 0x02}},
+ {2, 291, 800, {0x0C, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x11, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {2, 291, 800, {0x0C, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x11, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 194, 532, {0x0C, 0xF4, 0x05, 0x10, 0x9A, 0x0F, 0xBE, 0x1B, 0x08, 0xC2, 0xF0, 0xC0, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {4, 577, 0, {0x0B, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x41, 0x52, 0xC0, 0x02}},
+ {3, 447, 818, {0x0B, 0xF4, 0x05, 0x19, 0x89, 0x18, 0xAD, 0x0F, 0x10, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 534, {0x0B, 0xF4, 0x05, 0x10, 0xA3, 0x0F, 0xC7, 0x19, 0x10, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 195, 356, {0x0B, 0xF4, 0x15, 0x0B, 0x11, 0x0A, 0x35, 0x1E, 0x10, 0xC3, 0xF0, 0xC0, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {6, 776, 0, {0x0A, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x08, 0x3F, 0xC0, 0x02}},
+ {4, 591, 804, {0x0A, 0xF4, 0x05, 0x19, 0x1E, 0x18, 0x42, 0x0F, 0x18, 0x4F, 0x4E, 0xC0, 0x02}},
+ {3, 447, 608, {0x0A, 0xF4, 0x05, 0x12, 0xFD, 0x12, 0x21, 0x15, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 291, 396, {0x0A, 0xF4, 0x15, 0x0C, 0x5E, 0x0B, 0x82, 0x1E, 0x18, 0x23, 0xA1, 0xC0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {9, 928, 0, {0x09, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xA0, 0x33, 0xC0, 0x02}},
+ {5, 703, 800, {0x09, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x10, 0x18, 0xBF, 0x42, 0xC0, 0x02}},
+ {3, 447, 508, {0x09, 0xF4, 0x0D, 0x0F, 0xD2, 0x0E, 0xF6, 0x1B, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 332, {0x09, 0xF4, 0x1D, 0x0A, 0x5A, 0x09, 0x7E, 0x1E, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 956, 876, {0x08, 0xF4, 0x05, 0x1B, 0x58, 0x1A, 0x7C, 0x0E, 0x20, 0xBC, 0x33, 0x10, 0x02}},
+ {4, 592, 542, {0x08, 0xF4, 0x05, 0x10, 0xE4, 0x10, 0x08, 0x17, 0x20, 0x50, 0x4E, 0x10, 0x02}},
+ {2, 291, 266, {0x08, 0xF4, 0x25, 0x08, 0x48, 0x07, 0x6C, 0x1E, 0x20, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ },
+ /* SIF */
+ {
+ /* 5 fps */
+ {
+ {4, 582, 0, {0x35, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x46, 0x52, 0x60, 0x02}},
+ {3, 387, 1276, {0x35, 0xF4, 0x05, 0x27, 0xD8, 0x26, 0x48, 0x03, 0x10, 0x83, 0x79, 0x60, 0x02}},
+ {2, 291, 960, {0x35, 0xF4, 0x0D, 0x1D, 0xF2, 0x1C, 0x62, 0x04, 0x10, 0x23, 0xA1, 0x60, 0x02}},
+ {1, 191, 630, {0x35, 0xF4, 0x1D, 0x13, 0xA9, 0x12, 0x19, 0x05, 0x08, 0xBF, 0xF4, 0x60, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {6, 775, 1278, {0x34, 0xF4, 0x05, 0x27, 0xE8, 0x26, 0x58, 0x05, 0x30, 0x07, 0x3F, 0x10, 0x02}},
+ {3, 447, 736, {0x34, 0xF4, 0x15, 0x16, 0xFB, 0x15, 0x6B, 0x05, 0x18, 0xBF, 0x69, 0x10, 0x02}},
+ {2, 291, 480, {0x34, 0xF4, 0x2D, 0x0E, 0xF9, 0x0D, 0x69, 0x09, 0x18, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 955, 1050, {0x33, 0xF4, 0x05, 0x20, 0xCF, 0x1F, 0x3F, 0x06, 0x48, 0xBB, 0x33, 0x10, 0x02}},
+ {4, 591, 650, {0x33, 0xF4, 0x15, 0x14, 0x44, 0x12, 0xB4, 0x08, 0x30, 0x4F, 0x4E, 0x10, 0x02}},
+ {3, 448, 492, {0x33, 0xF4, 0x25, 0x0F, 0x52, 0x0D, 0xC2, 0x09, 0x28, 0xC0, 0x69, 0x10, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 958, 782, {0x32, 0xF4, 0x0D, 0x18, 0x6A, 0x16, 0xDA, 0x0B, 0x58, 0xBE, 0x33, 0xD0, 0x02}},
+ {5, 703, 574, {0x32, 0xF4, 0x1D, 0x11, 0xE7, 0x10, 0x57, 0x0B, 0x40, 0xBF, 0x42, 0xD0, 0x02}},
+ {3, 446, 364, {0x32, 0xF4, 0x3D, 0x0B, 0x5C, 0x09, 0xCC, 0x0E, 0x30, 0xBE, 0x69, 0xD0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 958, 654, {0x31, 0xF4, 0x15, 0x14, 0x66, 0x12, 0xD6, 0x0B, 0x50, 0xBE, 0x33, 0x90, 0x02}},
+ {6, 776, 530, {0x31, 0xF4, 0x25, 0x10, 0x8C, 0x0E, 0xFC, 0x0C, 0x48, 0x08, 0x3F, 0x90, 0x02}},
+ {4, 592, 404, {0x31, 0xF4, 0x35, 0x0C, 0x96, 0x0B, 0x06, 0x0B, 0x38, 0x50, 0x4E, 0x90, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x30, 0xF4, 0x25, 0x10, 0x68, 0x0E, 0xD8, 0x0D, 0x58, 0xBD, 0x33, 0x60, 0x02}},
+ {6, 775, 426, {0x30, 0xF4, 0x35, 0x0D, 0x48, 0x0B, 0xB8, 0x0F, 0x50, 0x07, 0x3F, 0x60, 0x02}},
+ {4, 590, 324, {0x30, 0x7A, 0x4B, 0x0A, 0x1C, 0x08, 0xB4, 0x0E, 0x40, 0x4E, 0x52, 0x60, 0x02}},
+ },
+ },
+ /* CIF */
+ {
+ /* 5 fps */
+ {
+ {6, 771, 0, {0x15, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x03, 0x3F, 0x80, 0x02}},
+ {4, 465, 1278, {0x15, 0xF4, 0x05, 0x27, 0xEE, 0x26, 0x36, 0x03, 0x18, 0xD1, 0x65, 0x80, 0x02}},
+ {2, 291, 800, {0x15, 0xF4, 0x15, 0x18, 0xF4, 0x17, 0x3C, 0x05, 0x18, 0x23, 0xA1, 0x80, 0x02}},
+ {1, 193, 528, {0x15, 0xF4, 0x2D, 0x10, 0x7E, 0x0E, 0xC6, 0x0A, 0x18, 0xC1, 0xF4, 0x80, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 932, 1278, {0x14, 0xF4, 0x05, 0x27, 0xEE, 0x26, 0x36, 0x04, 0x30, 0xA4, 0x33, 0x10, 0x02}},
+ {4, 591, 812, {0x14, 0xF4, 0x15, 0x19, 0x56, 0x17, 0x9E, 0x06, 0x28, 0x4F, 0x4E, 0x10, 0x02}},
+ {2, 291, 400, {0x14, 0xF4, 0x3D, 0x0C, 0x7A, 0x0A, 0xC2, 0x0E, 0x28, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 956, 876, {0x13, 0xF4, 0x0D, 0x1B, 0x58, 0x19, 0xA0, 0x05, 0x38, 0xBC, 0x33, 0x60, 0x02}},
+ {5, 703, 644, {0x13, 0xF4, 0x1D, 0x14, 0x1C, 0x12, 0x64, 0x08, 0x38, 0xBF, 0x42, 0x60, 0x02}},
+ {3, 448, 410, {0x13, 0xF4, 0x3D, 0x0C, 0xC4, 0x0B, 0x0C, 0x0E, 0x38, 0xC0, 0x69, 0x60, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 956, 650, {0x12, 0xF4, 0x1D, 0x14, 0x4A, 0x12, 0x92, 0x09, 0x48, 0xBC, 0x33, 0x10, 0x03}},
+ {6, 776, 528, {0x12, 0xF4, 0x2D, 0x10, 0x7E, 0x0E, 0xC6, 0x0A, 0x40, 0x08, 0x3F, 0x10, 0x03}},
+ {4, 591, 402, {0x12, 0xF4, 0x3D, 0x0C, 0x8F, 0x0A, 0xD7, 0x0E, 0x40, 0x4F, 0x4E, 0x10, 0x03}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 956, 544, {0x11, 0xF4, 0x25, 0x10, 0xF4, 0x0F, 0x3C, 0x0A, 0x48, 0xBC, 0x33, 0xC0, 0x02}},
+ {7, 840, 478, {0x11, 0xF4, 0x2D, 0x0E, 0xEB, 0x0D, 0x33, 0x0B, 0x48, 0x48, 0x3B, 0xC0, 0x02}},
+ {5, 703, 400, {0x11, 0xF4, 0x3D, 0x0C, 0x7A, 0x0A, 0xC2, 0x0E, 0x48, 0xBF, 0x42, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 956, 438, {0x10, 0xF4, 0x35, 0x0D, 0xAC, 0x0B, 0xF4, 0x0D, 0x50, 0xBC, 0x33, 0x10, 0x02}},
+ {7, 838, 384, {0x10, 0xF4, 0x45, 0x0B, 0xFD, 0x0A, 0x45, 0x0F, 0x50, 0x46, 0x3B, 0x10, 0x02}},
+ {6, 773, 354, {0x10, 0x7A, 0x4B, 0x0B, 0x0C, 0x09, 0x80, 0x10, 0x50, 0x05, 0x3F, 0x10, 0x02}},
+ },
+ },
+ /* VGA */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {6, 773, 1272, {0x1D, 0xF4, 0x15, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x3F, 0x10, 0x02}},
+ {4, 592, 976, {0x1D, 0xF4, 0x25, 0x1E, 0x78, 0x1B, 0x58, 0x03, 0x30, 0x50, 0x4E, 0x10, 0x02}},
+ {3, 448, 738, {0x1D, 0xF4, 0x3D, 0x17, 0x0C, 0x13, 0xEC, 0x04, 0x30, 0xC0, 0x69, 0x10, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 956, 788, {0x1C, 0xF4, 0x35, 0x18, 0x9C, 0x15, 0x7C, 0x03, 0x48, 0xBC, 0x33, 0x10, 0x02}},
+ {6, 776, 640, {0x1C, 0x7A, 0x53, 0x13, 0xFC, 0x11, 0x2C, 0x04, 0x48, 0x08, 0x3F, 0x10, 0x02}},
+ {4, 592, 488, {0x1C, 0x7A, 0x6B, 0x0F, 0x3C, 0x0C, 0x6C, 0x06, 0x48, 0x50, 0x4E, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x1B, 0x7A, 0x63, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x33, 0x80, 0x02}},
+ {9, 957, 526, {0x1B, 0x7A, 0x63, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x33, 0x80, 0x02}},
+ {8, 895, 492, {0x1B, 0x7A, 0x6B, 0x0F, 0x5D, 0x0C, 0x8D, 0x06, 0x58, 0x7F, 0x37, 0x80, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
To compile this as a module, choose M here: the module will be called
ramfs.
+config RELAYFS_FS
+ tristate "Relayfs file system support"
+ ---help---
+ Relayfs is a high-speed data relay filesystem designed to provide
+ an efficient mechanism for tools and facilities to relay large
+ amounts of data from kernel space to user space. It's not useful
+ on its own, and should only be enabled if other facilities that
+ need it are enabled, such as klog or the Linux Trace Toolkit.
+
+ See <file:Documentation/filesystems/relayfs.txt> for further
+ information.
+
+ This file system is also available as a module ( = code which can be
+ inserted in and removed from the running kernel whenever you want).
+ The module is called relayfs. If you want to compile it as a
+ module, say M here and read <file:Documentation/modules.txt>.
+
+ If unsure, say N.
+
+config KLOG_CHANNEL
+ bool "Enable klog debugging support"
+ depends on RELAYFS_FS
+ help
+ If you say Y to this, a relayfs channel named klog will be created
+ in the root of the relayfs file system. You can write to the klog
+ channel using klog() or klog_raw() from within the kernel or
+ kernel modules, and read from the klog channel by mounting relayfs
+ and using read(2) to read from it (or using cat). If you're not
+ sure, say N.
+
+config KLOG_CHANNEL_AUTOENABLE
+ bool "Enable klog logging on startup"
+ depends on KLOG_CHANNEL
+ default y
+ help
+ If you say Y to this, the klog channel will be automatically enabled
+ on startup. Otherwise, to turn klog logging on, you need to use
+ sysctl (fs.relayfs.klog_enabled). This option is used in cases where
+ you don't actually want the channel to be written to until it's
+ enabled. If you're not sure, say Y.
+
+config KLOG_CHANNEL_SHIFT
+ depends on KLOG_CHANNEL
+ int "klog debugging channel size (14 => 16KB, 22 => 4MB)"
+ range 14 22
+ default 21
+ help
+ Select klog debugging channel size as a power of 2.
+
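+# Illustrative note (not from the original patch): with
+# KLOG_CHANNEL_AUTOENABLE=n, klog logging can be switched on at
+# runtime via the sysctl named in the help text above, e.g.:
+#
+#   sysctl -w fs.relayfs.klog_enabled=1
+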
endmenu
menu "Miscellaneous filesystems"
To compile this file system support as a module, choose M here: the
module will be called hpfs. If unsure, say N.
-
-
config QNX4FS_FS
tristate "QNX4 file system support (read only)"
help
It's currently broken, so for now:
answer N.
-
-
config SYSV_FS
tristate "System V/Xenix/V7/Coherent file system support"
help
If you haven't heard about all of this before, it's safe to say N.
-
-
config UFS_FS
tristate "UFS file system support (read only)"
help
obj-$(CONFIG_CRAMFS) += cramfs/
obj-$(CONFIG_RAMFS) += ramfs/
obj-$(CONFIG_HUGETLBFS) += hugetlbfs/
+obj-$(CONFIG_RELAYFS_FS) += relayfs/
obj-$(CONFIG_CODA_FS) += coda/
obj-$(CONFIG_MINIX_FS) += minix/
obj-$(CONFIG_FAT_FS) += fat/
#include <linux/mount.h>
#include <linux/tty.h>
#include <linux/devpts_fs.h>
+#include <linux/vs_base.h>
#include <linux/xattr.h>
extern struct xattr_handler devpts_xattr_security_handler;
return lookup_one_len(s, root, sprintf(s, "%d", num));
}
+
int devpts_pty_new(struct tty_struct *tty)
{
int number = tty->index;
#include <linux/security.h>
#include <linux/syscalls.h>
#include <linux/rmap.h>
+#include <linux/vs_memory.h>
#include <linux/acct.h>
#include <linux/vs_memory.h>
atomic_set(&newsighand->count, 1);
memcpy(newsighand->action, oldsighand->action,
sizeof(newsighand->action));
-
write_lock_irq(&tasklist_lock);
spin_lock(&oldsighand->siglock);
spin_lock(&newsighand->siglock);
-
current->sighand = newsighand;
recalc_sigpending();
-
spin_unlock(&newsighand->siglock);
spin_unlock(&oldsighand->siglock);
write_unlock_irq(&tasklist_lock);
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/fs.h>
+#include <linux/namei.h>
+#include <linux/vs_base.h>
#include "ext2.h"
#include "xattr.h"
#include "acl.h"
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/fs.h>
+#include <linux/namei.h>
#include <linux/ext3_jbd.h>
#include <linux/ext3_fs.h>
+#include <linux/vs_base.h>
#include "xattr.h"
#include "acl.h"
#include <linux/quotaops.h>
#include <linux/buffer_head.h>
#include <linux/random.h>
+#include <linux/vs_dlimit.h>
#include <linux/bitops.h>
#include <linux/vs_dlimit.h>
#include <linux/vserver/xid.h>
if (!test_opt(inode->i_sb, RESERVATION) ||!S_ISREG(inode->i_mode))
return -ENOTTY;
- if (IS_RDONLY(inode) ||
- (filp && MNT_IS_RDONLY(filp->f_vfsmnt)))
+ if (IS_RDONLY(inode))
return -EROFS;
if ((current->fsuid != inode->i_uid) && !capable(CAP_FOWNER))
if (!capable(CAP_SYS_RESOURCE))
return -EPERM;
- if (IS_RDONLY(inode) ||
- (filp && MNT_IS_RDONLY(filp->f_vfsmnt)))
+ if (IS_RDONLY(inode))
return -EROFS;
if (get_user(n_blocks_count, (__u32 __user *)arg))
if (!capable(CAP_SYS_RESOURCE))
return -EPERM;
- if (IS_RDONLY(inode) ||
- (filp && MNT_IS_RDONLY(filp->f_vfsmnt)))
+ if (IS_RDONLY(inode))
return -EROFS;
if (copy_from_user(&input, (struct ext3_new_group_input __user *)arg,
/* fixme: if stealth, return -ENOTTY */
if (!capable(CAP_CONTEXT))
return -EPERM;
- if (IS_RDONLY(inode) ||
- (filp && MNT_IS_RDONLY(filp->f_vfsmnt)))
+ if (IS_RDONLY(inode))
return -EROFS;
if (!(inode->i_sb->s_flags & MS_TAGXID))
return -ENOSYS;
#include <linux/pagemap.h>
#include <linux/cdev.h>
#include <linux/bootmem.h>
+#include <linux/vs_base.h>
/*
* This is needed for the following functions:
int permission(struct inode *inode, int mask, struct nameidata *nd)
{
int retval, submask;
+ umode_t mode = inode->i_mode;
if (mask & MAY_WRITE) {
- umode_t mode = inode->i_mode;
-
/*
* Nobody gets write access to a read-only fs.
*/
- if ((IS_RDONLY(inode) || (nd && MNT_IS_RDONLY(nd->mnt))) &&
+ if (IS_RDONLY(inode) &&
(S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode)))
return -EROFS;
return -EACCES;
}
-
/* Ordinary permission routines do not understand MAY_APPEND. */
submask = mask & ~MAY_APPEND;
+ if (nd && (mask & MAY_WRITE) && MNT_IS_RDONLY(nd->mnt) &&
+ (S_ISREG(mode) || S_ISDIR(mode) || S_ISLNK(mode)))
+ return -EROFS;
if ((retval = xid_permission(inode, mask, nd)))
return retval;
if (inode->i_op && inode->i_op->permission)
inode = dentry->d_inode;
if (!inode)
goto done;
- if (inode->i_sb->s_magic == PROC_SUPER_MAGIC) {
- struct proc_dir_entry *de = PDE(inode);
-
- if (de && !vx_hide_check(0, de->vx_flags))
- goto hidden;
- }
#ifdef CONFIG_VSERVER_FILESHARING
/* MEF: PlanetLab FS module assumes that any file that can be
* named (e.g., via a cross mount) is not hidden from another
#endif
if (!vx_check(inode->i_xid, VX_WATCH|VX_ADMIN|VX_HOSTID|VX_IDENT))
goto hidden;
+ if (inode->i_sb->s_magic == PROC_SUPER_MAGIC) {
+ struct proc_dir_entry *de = PDE(inode);
+
+ if (de && !vx_hide_check(0, de->vx_flags))
+ goto hidden;
+ }
done:
path->mnt = mnt;
path->dentry = dentry;
* 10. We don't allow removal of NFS sillyrenamed files; it's handled by
* nfs_async_unlink().
*/
-static inline int may_delete(struct inode *dir, struct dentry *victim,
- int isdir, struct nameidata *nd)
+static inline int may_delete(struct inode *dir, struct dentry *victim, int isdir)
{
int error;
BUG_ON(victim->d_parent->d_inode != dir);
- error = permission(dir,MAY_WRITE | MAY_EXEC, nd);
+ error = permission(dir,MAY_WRITE | MAY_EXEC, NULL);
if (error)
return error;
if (IS_APPEND(dir))
return permission(dir,MAY_WRITE | MAY_EXEC, nd);
}
+static inline int mnt_may_create(struct vfsmount *mnt, struct inode *dir, struct dentry *child)
+{
+ if (child->d_inode)
+ return -EEXIST;
+ if (IS_DEADDIR(dir))
+ return -ENOENT;
+ if (mnt->mnt_flags & MNT_RDONLY)
+ return -EROFS;
+ return 0;
+}
+
+static inline int mnt_may_unlink(struct vfsmount *mnt, struct inode *dir, struct dentry *child)
+{
+ if (!child->d_inode)
+ return -ENOENT;
+ if (mnt->mnt_flags & MNT_RDONLY)
+ return -EROFS;
+ return 0;
+}
+
/*
* Special case: O_CREAT|O_EXCL implies O_NOFOLLOW for security
* reasons.
return -EACCES;
flag &= ~O_TRUNC;
- } else if ((IS_RDONLY(inode) || MNT_IS_RDONLY(nd->mnt))
+ } else if ((IS_RDONLY(inode) || (nd && MNT_IS_RDONLY(nd->mnt)))
&& (flag & FMODE_WRITE))
return -EROFS;
/*
struct dentry *lookup_create(struct nameidata *nd, int is_dir)
{
struct dentry *dentry;
+ int error;
down(&nd->dentry->d_inode->i_sem);
- dentry = ERR_PTR(-EEXIST);
+ error = -EEXIST;
if (nd->last_type != LAST_NORM)
- goto fail;
+ goto out;
nd->flags &= ~LOOKUP_PARENT;
dentry = lookup_hash(&nd->last, nd->dentry);
if (IS_ERR(dentry))
+ goto ret;
+ error = mnt_may_create(nd->mnt, nd->dentry->d_inode, dentry);
+ if (error)
goto fail;
+ error = -ENOENT;
if (!is_dir && nd->last.name[nd->last.len] && !dentry->d_inode)
- goto enoent;
+ goto fail;
+ret:
return dentry;
-enoent:
- dput(dentry);
- dentry = ERR_PTR(-ENOENT);
fail:
- return dentry;
+ dput(dentry);
+out:
+ return ERR_PTR(error);
}
EXPORT_SYMBOL_GPL(lookup_create);
-int vfs_mknod(struct inode *dir, struct dentry *dentry,
- int mode, dev_t dev, struct nameidata *nd)
+int vfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
{
- int error = may_create(dir, dentry, nd);
+ int error = may_create(dir, dentry, NULL);
if (error)
return error;
goto out;
dentry = lookup_create(&nd, 0);
error = PTR_ERR(dentry);
+
if (!IS_POSIXACL(nd.dentry->d_inode))
mode &= ~current->fs->umask;
if (!IS_ERR(dentry)) {
error = vfs_create(nd.dentry->d_inode,dentry,mode,&nd);
break;
case S_IFCHR: case S_IFBLK:
- error = vfs_mknod(nd.dentry->d_inode, dentry, mode,
- new_decode_dev(dev), &nd);
+ error = vfs_mknod(nd.dentry->d_inode, dentry, mode,
+ new_decode_dev(dev));
break;
case S_IFIFO: case S_IFSOCK:
- error = vfs_mknod(nd.dentry->d_inode, dentry, mode,
- 0, &nd);
+ error = vfs_mknod(nd.dentry->d_inode, dentry, mode, 0);
break;
case S_IFDIR:
error = -EPERM;
return error;
}
-int vfs_mkdir(struct inode *dir, struct dentry *dentry,
- int mode, struct nameidata *nd)
+int vfs_mkdir(struct inode *dir, struct dentry *dentry, int mode)
{
- int error = may_create(dir, dentry, nd);
+ int error = may_create(dir, dentry, NULL);
if (error)
return error;
if (!IS_ERR(dentry)) {
if (!IS_POSIXACL(nd.dentry->d_inode))
mode &= ~current->fs->umask;
- error = vfs_mkdir(nd.dentry->d_inode, dentry,
- mode, &nd);
+ error = vfs_mkdir(nd.dentry->d_inode, dentry, mode);
dput(dentry);
}
up(&nd.dentry->d_inode->i_sem);
spin_unlock(&dcache_lock);
}
-int vfs_rmdir(struct inode *dir, struct dentry *dentry,
- struct nameidata *nd)
+int vfs_rmdir(struct inode *dir, struct dentry *dentry)
{
- int error = may_delete(dir, dentry, 1, nd);
+ int error = may_delete(dir, dentry, 1);
if (error)
return error;
dentry = lookup_hash(&nd.last, nd.dentry);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
- error = vfs_rmdir(nd.dentry->d_inode, dentry, &nd);
+ error = mnt_may_unlink(nd.mnt, nd.dentry->d_inode, dentry);
+ if (error)
+ goto exit2;
+ error = vfs_rmdir(nd.dentry->d_inode, dentry);
+ exit2:
dput(dentry);
}
up(&nd.dentry->d_inode->i_sem);
return error;
}
-int vfs_unlink(struct inode *dir, struct dentry *dentry,
- struct nameidata *nd)
+int vfs_unlink(struct inode *dir, struct dentry *dentry)
{
- int error = may_delete(dir, dentry, 0, nd);
+ int error = may_delete(dir, dentry, 0);
if (error)
return error;
/* Why not before? Because we want correct error value */
if (nd.last.name[nd.last.len])
goto slashes;
+ error = mnt_may_unlink(nd.mnt, nd.dentry->d_inode, dentry);
+ if (error)
+ goto exit2;
inode = dentry->d_inode;
if (inode)
atomic_inc(&inode->i_count);
- error = vfs_unlink(nd.dentry->d_inode, dentry, &nd);
+ error = vfs_unlink(nd.dentry->d_inode, dentry);
exit2:
dput(dentry);
}
goto exit2;
}
-int vfs_symlink(struct inode *dir, struct dentry *dentry,
- const char *oldname, int mode, struct nameidata *nd)
+int vfs_symlink(struct inode *dir, struct dentry *dentry, const char *oldname, int mode)
{
- int error = may_create(dir, dentry, nd);
+ int error = may_create(dir, dentry, NULL);
if (error)
return error;
dentry = lookup_create(&nd, 0);
error = PTR_ERR(dentry);
if (!IS_ERR(dentry)) {
- error = vfs_symlink(nd.dentry->d_inode, dentry,
- from, S_IALLUGO, &nd);
+ error = vfs_symlink(nd.dentry->d_inode, dentry, from, S_IALLUGO);
dput(dentry);
}
up(&nd.dentry->d_inode->i_sem);
return error;
}
-int vfs_link(struct dentry *old_dentry, struct inode *dir,
- struct dentry *new_dentry, struct nameidata *nd)
+int vfs_link(struct dentry *old_dentry, struct inode *dir, struct dentry *new_dentry)
{
struct inode *inode = old_dentry->d_inode;
int error;
if (!inode)
return -ENOENT;
- error = may_create(dir, new_dentry, nd);
+ error = may_create(dir, new_dentry, NULL);
if (error)
return error;
new_dentry = lookup_create(&nd, 0);
error = PTR_ERR(new_dentry);
if (!IS_ERR(new_dentry)) {
- error = vfs_link(old_nd.dentry, nd.dentry->d_inode,
- new_dentry, &nd);
+ error = vfs_link(old_nd.dentry, nd.dentry->d_inode, new_dentry);
dput(new_dentry);
}
up(&nd.dentry->d_inode->i_sem);
if (old_dentry->d_inode == new_dentry->d_inode)
return 0;
- error = may_delete(old_dir, old_dentry, is_dir, NULL);
+ error = may_delete(old_dir, old_dentry, is_dir);
if (error)
return error;
if (!new_dentry->d_inode)
error = may_create(new_dir, new_dentry, NULL);
else
- error = may_delete(new_dir, new_dentry, is_dir, NULL);
+ error = may_delete(new_dir, new_dentry, is_dir);
if (error)
return error;
struct dentry *root, *point;
int ret;
+ if (!mnt)
+ return 1;
if (mnt == mnt->mnt_namespace->root)
return 1;
struct vfsmount *mnt = v;
int err = 0;
static struct proc_fs_info {
- int s_flag;
- int mnt_flag;
- char *set_str;
- char *unset_str;
+ int flag;
+ char *str;
} fs_info[] = {
- { MS_RDONLY, MNT_RDONLY, "ro", "rw" },
- { MS_SYNCHRONOUS, 0, ",sync", NULL },
- { MS_DIRSYNC, 0, ",dirsync", NULL },
- { MS_MANDLOCK, 0, ",mand", NULL },
- { MS_NOATIME, MNT_NOATIME, ",noatime", NULL },
- { MS_NODIRATIME, MNT_NODIRATIME, ",nodiratime", NULL },
- { MS_TAGXID, MS_TAGXID, ",tagxid", NULL },
- { 0, MNT_NOSUID, ",nosuid", NULL },
- { 0, MNT_NODEV, ",nodev", NULL },
- { 0, MNT_NOEXEC, ",noexec", NULL },
- { 0, 0, NULL, NULL }
+ { MS_SYNCHRONOUS, ",sync" },
+ { MS_DIRSYNC, ",dirsync" },
+ { MS_MANDLOCK, ",mand" },
+ { MS_NOATIME, ",noatime" },
+ { MS_NODIRATIME, ",nodiratime" },
+ { MS_TAGXID, ",tagxid" },
+ { 0, NULL }
+ };
+ static struct proc_fs_info mnt_info[] = {
+ { MNT_NOSUID, ",nosuid" },
+ { MNT_NODEV, ",nodev" },
+ { MNT_NOEXEC, ",noexec" },
+ { 0, NULL }
};
- struct proc_fs_info *p;
- unsigned long s_flags = mnt->mnt_sb->s_flags;
- int mnt_flags = mnt->mnt_flags;
+
+ struct proc_fs_info *fs_infop;
if (vx_flags(VXF_HIDE_MOUNT, 0))
return 0;
seq_putc(m, ' ');
}
mangle(m, mnt->mnt_sb->s_type->name);
- seq_putc(m, ' ');
- for (p = fs_info; (p->s_flag | p->mnt_flag) ; p++) {
- if ((s_flags & p->s_flag) || (mnt_flags & p->mnt_flag)) {
- if (p->set_str)
- seq_puts(m, p->set_str);
- } else {
- if (p->unset_str)
- seq_puts(m, p->unset_str);
- }
+ seq_puts(m, mnt->mnt_sb->s_flags & MS_RDONLY ? " ro" : " rw");
+ for (fs_infop = fs_info; fs_infop->flag; fs_infop++) {
+ if (mnt->mnt_sb->s_flags & fs_infop->flag)
+ seq_puts(m, fs_infop->str);
+ }
+ for (fs_infop = mnt_info; fs_infop->flag; fs_infop++) {
+ if (mnt->mnt_flags & fs_infop->flag)
+ seq_puts(m, fs_infop->str);
}
if (mnt->mnt_flags & MNT_XID)
seq_printf(m, ",xid=%d", mnt->mnt_xid);
if (nd->flags & LOOKUP_DIRECTORY)
return 0;
/* Are we trying to write to a read only partition? */
- if ((IS_RDONLY(dir) || MNT_IS_RDONLY(nd->mnt)) &&
+ if ((IS_RDONLY(dir) || (nd && MNT_IS_RDONLY(nd->mnt))) &&
(nd->intent.open.flags & (O_CREAT|O_TRUNC|FMODE_WRITE)))
return 0;
return 1;
goto out_fail;
}
- clnt->cl_intr = 1;
- clnt->cl_softrtry = 1;
- clnt->cl_tagxid = 1;
+ clnt->cl_intr = (server->flags & NFS_MOUNT_INTR) ? 1 : 0;
+ clnt->cl_softrtry = (server->flags & NFS_MOUNT_SOFT) ? 1 : 0;
+ clnt->cl_tagxid = (server->flags & NFS_MOUNT_TAGXID) ? 1 : 0;
clnt->cl_chatty = 1;
return clnt;
{ NFS_MOUNT_NOCTO, ",nocto", "" },
{ NFS_MOUNT_NOAC, ",noac", "" },
{ NFS_MOUNT_NONLM, ",nolock", ",lock" },
+ { NFS_MOUNT_BROKEN_SUID, ",broken_suid", "" },
{ NFS_MOUNT_TAGXID, ",tagxid", "" },
{ 0, NULL, NULL }
};
out:
return inode;
-
+/* FIXME
+fail_dlim:
+ make_bad_inode(inode);
+ iput(inode);
+ inode = NULL;
+*/
out_no_inode:
printk("nfs_fhget: iget failed\n");
goto out;
Opt_soft, Opt_hard, Opt_intr,
Opt_nointr, Opt_posix, Opt_noposix, Opt_cto, Opt_nocto, Opt_ac,
Opt_noac, Opt_lock, Opt_nolock, Opt_v2, Opt_v3, Opt_udp, Opt_tcp,
- Opt_tagxid,
+ Opt_broken_suid, Opt_tagxid,
/* Error token */
Opt_err
};
{Opt_udp, "udp"},
{Opt_tcp, "proto=tcp"},
{Opt_tcp, "tcp"},
+ {Opt_broken_suid, "broken_suid"},
{Opt_tagxid, "tagxid"},
{Opt_err, NULL}
case Opt_tcp:
nfs_data.flags |= NFS_MOUNT_TCP;
break;
+ case Opt_broken_suid:
+ nfs_data.flags |= NFS_MOUNT_BROKEN_SUID;
+ break;
case Opt_tagxid:
nfs_data.flags |= NFS_MOUNT_TAGXID;
break;
err = vfs_create(dirp, dchild, iap->ia_mode, NULL);
break;
case S_IFDIR:
- err = vfs_mkdir(dirp, dchild, iap->ia_mode, NULL);
+ err = vfs_mkdir(dirp, dchild, iap->ia_mode);
break;
case S_IFCHR:
case S_IFBLK:
case S_IFIFO:
case S_IFSOCK:
- err = vfs_mknod(dirp, dchild, iap->ia_mode, rdev, NULL);
+ err = vfs_mknod(dirp, dchild, iap->ia_mode, rdev);
break;
default:
printk("nfsd: bad file type %o in nfsd_create\n", type);
else {
strncpy(path_alloced, path, plen);
path_alloced[plen] = 0;
- err = vfs_symlink(dentry->d_inode, dnew,
- path_alloced, mode, NULL);
+ err = vfs_symlink(dentry->d_inode, dnew, path_alloced, mode);
kfree(path_alloced);
}
} else
- err = vfs_symlink(dentry->d_inode, dnew,
- path, mode, NULL);
+ err = vfs_symlink(dentry->d_inode, dnew, path, mode);
if (!err) {
if (EX_ISSYNC(fhp->fh_export))
dold = tfhp->fh_dentry;
dest = dold->d_inode;
- err = vfs_link(dold, dirp, dnew, NULL);
+ err = vfs_link(dold, dirp, dnew);
if (!err) {
if (EX_ISSYNC(ffhp->fh_export)) {
nfsd_sync_dir(ddir);
err = nfserr_perm;
} else
#endif
- err = vfs_unlink(dirp, rdentry, NULL);
+ err = vfs_unlink(dirp, rdentry);
} else { /* It's RMDIR */
- err = vfs_rmdir(dirp, rdentry, NULL);
+ err = vfs_rmdir(dirp, rdentry);
}
dput(rdentry);
if (!(acc & MAY_LOCAL_ACCESS))
if (acc & (MAY_WRITE | MAY_SATTR | MAY_TRUNC)) {
if (EX_RDONLY(exp) || IS_RDONLY(inode)
- || MNT_IS_RDONLY(exp->ex_mnt))
+ || (exp && MNT_IS_RDONLY(exp->ex_mnt)))
return nfserr_rofs;
if (/* (acc & MAY_WRITE) && */ IS_IMMUTABLE(inode))
return nfserr_perm;
#include <asm/uaccess.h>
#include <linux/fs.h>
#include <linux/pagemap.h>
+#include <linux/vs_base.h>
+#include <linux/vs_limit.h>
+#include <linux/vs_dlimit.h>
+#include <linux/vserver/xid.h>
#include <linux/syscalls.h>
#include <linux/vs_limit.h>
#include <linux/vs_dlimit.h>
inode = dentry->d_inode;
err = -EROFS;
- if (IS_RDONLY(inode) || MNT_IS_RDONLY(file->f_vfsmnt))
+ if (IS_RDONLY(inode) || (file && MNT_IS_RDONLY(file->f_vfsmnt)))
goto out_putf;
err = -EPERM;
if (IS_IMMUTABLE(inode) || IS_APPEND(inode))
#include <linux/init.h>
#include <linux/idr.h>
#include <linux/namei.h>
+#include <linux/vs_base.h>
+#include <linux/vserver/inode.h>
#include <linux/bitops.h>
#include <linux/vserver/inode.h>
#include <asm/uaccess.h>
return 1;
}
+static int proc_revalidate_dentry(struct dentry *de, struct nameidata *nd)
+{
+ /* maybe add a check if it's really necessary? */
+ return 0;
+}
+
static struct dentry_operations proc_dentry_operations =
{
+ .d_revalidate = proc_revalidate_dentry,
.d_delete = proc_delete_dentry,
};
#include <linux/sysrq.h>
#include <linux/vmalloc.h>
#include <linux/crash_dump.h>
+#include <linux/vs_base.h>
+#include <linux/vs_cvirt.h>
#include <asm/uaccess.h>
#include <asm/pgtable.h>
if (dir->d_inode->i_nlink <= 2) {
root = get_xa_root (inode->i_sb);
reiserfs_write_lock_xattrs (inode->i_sb);
- err = vfs_rmdir (root->d_inode, dir, NULL);
+ err = vfs_rmdir (root->d_inode, dir);
reiserfs_write_unlock_xattrs (inode->i_sb);
dput (root);
} else {
{
umode_t mode = inode->i_mode;
+ /* Prevent vservers from escaping chroot() barriers */
+ if (IS_BARRIER(inode) && !vx_check(0, VX_ADMIN))
+ return -EACCES;
+
if (mask & MAY_WRITE) {
/*
* Nobody gets write access to a read-only fs.
--- /dev/null
+#
+# relayfs Makefile
+#
+
+obj-$(CONFIG_RELAYFS_FS) += relayfs.o
+
+relayfs-y := relay.o relay_lockless.o relay_locking.o inode.o resize.o
+relayfs-$(CONFIG_KLOG_CHANNEL) += klog.o
--- /dev/null
+/*
+ * VFS-related code for RelayFS, a high-speed data relay filesystem.
+ *
+ * Copyright (C) 2003 - Tom Zanussi <zanussi@us.ibm.com>, IBM Corp
+ * Copyright (C) 2003 - Karim Yaghmour <karim@opersys.com>
+ *
+ * Based on ramfs, Copyright (C) 2002 - Linus Torvalds
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/module.h>
+#include <linux/fs.h>
+#include <linux/mount.h>
+#include <linux/pagemap.h>
+#include <linux/highmem.h>
+#include <linux/init.h>
+#include <linux/string.h>
+#include <linux/smp_lock.h>
+#include <linux/backing-dev.h>
+#include <linux/namei.h>
+#include <linux/poll.h>
+#include <asm/uaccess.h>
+#include <asm/relay.h>
+
+#define RELAYFS_MAGIC 0x26F82121
+
+static struct super_operations relayfs_ops;
+static struct address_space_operations relayfs_aops;
+static struct inode_operations relayfs_file_inode_operations;
+static struct file_operations relayfs_file_operations;
+static struct inode_operations relayfs_dir_inode_operations;
+
+static struct vfsmount * relayfs_mount;
+static int relayfs_mount_count;
+
+static struct backing_dev_info relayfs_backing_dev_info = {
+ .ra_pages = 0, /* No readahead */
+ .memory_backed = 1, /* Does not contribute to dirty memory */
+};
+
+static struct inode *
+relayfs_get_inode(struct super_block *sb, int mode, dev_t dev)
+{
+ struct inode * inode;
+
+ inode = new_inode(sb);
+
+ if (inode) {
+ inode->i_mode = mode;
+ inode->i_uid = current->fsuid;
+ inode->i_gid = current->fsgid;
+ inode->i_blksize = PAGE_CACHE_SIZE;
+ inode->i_blocks = 0;
+ inode->i_mapping->a_ops = &relayfs_aops;
+ inode->i_mapping->backing_dev_info = &relayfs_backing_dev_info;
+ inode->i_atime = inode->i_mtime = inode->i_ctime = CURRENT_TIME;
+ switch (mode & S_IFMT) {
+ default:
+ init_special_inode(inode, mode, dev);
+ break;
+ case S_IFREG:
+ inode->i_op = &relayfs_file_inode_operations;
+ inode->i_fop = &relayfs_file_operations;
+ break;
+ case S_IFDIR:
+ inode->i_op = &relayfs_dir_inode_operations;
+ inode->i_fop = &simple_dir_operations;
+
+ /* directory inodes start off with i_nlink == 2 (for "." entry) */
+ inode->i_nlink++;
+ break;
+ case S_IFLNK:
+ inode->i_op = &page_symlink_inode_operations;
+ break;
+ }
+ }
+ return inode;
+}
+
+/*
+ * File creation. Allocate an inode, and we're done..
+ */
+/* SMP-safe */
+static int
+relayfs_mknod(struct inode *dir, struct dentry *dentry, int mode, dev_t dev)
+{
+ struct inode * inode;
+ int error = -ENOSPC;
+
+ inode = relayfs_get_inode(dir->i_sb, mode, dev);
+
+ if (inode) {
+ d_instantiate(dentry, inode);
+ dget(dentry); /* Extra count - pin the dentry in core */
+ error = 0;
+ }
+ return error;
+}
+
+static int
+relayfs_mkdir(struct inode * dir, struct dentry * dentry, int mode)
+{
+ int retval;
+
+ retval = relayfs_mknod(dir, dentry, mode | S_IFDIR, 0);
+
+ if (!retval)
+ dir->i_nlink++;
+ return retval;
+}
+
+static int
+relayfs_create(struct inode *dir, struct dentry *dentry, int mode, struct nameidata *nd)
+{
+ return relayfs_mknod(dir, dentry, mode | S_IFREG, 0);
+}
+
+static int
+relayfs_symlink(struct inode * dir, struct dentry *dentry, const char * symname)
+{
+ struct inode *inode;
+ int error = -ENOSPC;
+
+ inode = relayfs_get_inode(dir->i_sb, S_IFLNK|S_IRWXUGO, 0);
+
+ if (inode) {
+ int l = strlen(symname)+1;
+ error = page_symlink(inode, symname, l);
+ if (!error) {
+ d_instantiate(dentry, inode);
+ dget(dentry);
+ } else
+ iput(inode);
+ }
+ return error;
+}
+
+/**
+ * relayfs_create_entry - create a relayfs directory or file
+ * @name: the name of the file to create
+ * @parent: parent directory
+ * @dentry: result dentry
+ * @entry_type: type of file to create (S_IFREG, S_IFDIR)
+ * @mode: mode
+ * @data: data to associate with the file
+ *
+ * Creates a file or directory with the specified permissions.
+ */
+static int
+relayfs_create_entry(const char * name, struct dentry * parent, struct dentry **dentry, int entry_type, int mode, void * data)
+{
+ struct qstr qname;
+ struct dentry * d;
+
+ int error = 0;
+
+ error = simple_pin_fs("relayfs", &relayfs_mount, &relayfs_mount_count);
+ if (error) {
+ printk(KERN_ERR "Couldn't mount relayfs: errcode %d\n", error);
+ return error;
+ }
+
+ qname.name = name;
+ qname.len = strlen(name);
+ qname.hash = full_name_hash(name, qname.len);
+
+ if (parent == NULL)
+ if (relayfs_mount && relayfs_mount->mnt_sb)
+ parent = relayfs_mount->mnt_sb->s_root;
+
+ if (parent == NULL) {
+ simple_release_fs(&relayfs_mount, &relayfs_mount_count);
+ return -EINVAL;
+ }
+
+ parent = dget(parent);
+ down(&parent->d_inode->i_sem);
+ d = lookup_hash(&qname, parent);
+ if (IS_ERR(d)) {
+ error = PTR_ERR(d);
+ goto release_mount;
+ }
+
+ if (d->d_inode) {
+ error = -EEXIST;
+ goto release_mount;
+ }
+
+ if (entry_type == S_IFREG)
+ error = relayfs_create(parent->d_inode, d, entry_type | mode, NULL);
+ else
+ error = relayfs_mkdir(parent->d_inode, d, entry_type | mode);
+ if (error)
+ goto release_mount;
+
+ if ((entry_type == S_IFREG) && data) {
+ d->d_inode->u.generic_ip = data;
+ goto exit; /* don't release mount for regular files */
+ }
+
+release_mount:
+ simple_release_fs(&relayfs_mount, &relayfs_mount_count);
+exit:
+ *dentry = d;
+ up(&parent->d_inode->i_sem);
+ dput(parent);
+
+ return error;
+}
+
+/**
+ * relayfs_create_file - create a file in the relay filesystem
+ * @name: the name of the file to create
+ * @parent: parent directory
+ * @dentry: result dentry
+ * @data: data to associate with the file
+ * @mode: mode; if not specified, the default perms are used
+ *
+ * The file will be created user rw on behalf of current user.
+ */
+int
+relayfs_create_file(const char * name, struct dentry * parent, struct dentry **dentry, void * data, int mode)
+{
+ if (!mode)
+ mode = S_IRUSR | S_IWUSR;
+
+ return relayfs_create_entry(name, parent, dentry, S_IFREG,
+ mode, data);
+}
+
+/**
+ * relayfs_create_dir - create a directory in the relay filesystem
+ * @name: the name of the directory to create
+ * @parent: parent directory
+ * @dentry: result dentry
+ *
+ * The directory will be created rwx for the owner and r-x for group
+ * and other, on behalf of the current user.
+ */
+int
+relayfs_create_dir(const char * name, struct dentry * parent, struct dentry **dentry)
+{
+ return relayfs_create_entry(name, parent, dentry, S_IFDIR,
+ S_IRWXU | S_IRUGO | S_IXUGO, NULL);
+}
+
+/**
+ * relayfs_remove_file - remove a file in the relay filesystem
+ * @dentry: file dentry
+ *
+ * Remove a file previously created by relayfs_create_file.
+ */
+int
+relayfs_remove_file(struct dentry *dentry)
+{
+ struct dentry *parent;
+ int is_reg;
+
+ parent = dentry->d_parent;
+ if (parent == NULL)
+ return -EINVAL;
+
+ is_reg = S_ISREG(dentry->d_inode->i_mode);
+
+ parent = dget(parent);
+ down(&parent->d_inode->i_sem);
+ if (dentry->d_inode) {
+ simple_unlink(parent->d_inode, dentry);
+ d_delete(dentry);
+ }
+ dput(dentry);
+ up(&parent->d_inode->i_sem);
+ dput(parent);
+
+ if (is_reg)
+ simple_release_fs(&relayfs_mount, &relayfs_mount_count);
+
+ return 0;
+}
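+
+/*
+ * Illustrative usage sketch (not part of this patch): a kernel client
+ * would pair the two helpers above roughly as follows; "my_chan" and
+ * "my_data" are hypothetical names.
+ *
+ *	struct dentry *dentry;
+ *	int err = relayfs_create_file("my_chan", NULL, &dentry, my_data, 0);
+ *	if (!err) {
+ *		... use the file ...
+ *		relayfs_remove_file(dentry);
+ *	}
+ */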
+
+/**
+ * relayfs_open - open file op for relayfs files
+ * @inode: the inode
+ * @filp: the file
+ *
+ * Associates the channel with the file, and increments the
+ * channel refcount. Reads will be 'auto-consuming'.
+ */
+int
+relayfs_open(struct inode *inode, struct file *filp)
+{
+ struct rchan *rchan;
+ struct rchan_reader *reader;
+ int retval = 0;
+
+ if (inode->u.generic_ip) {
+ rchan = (struct rchan *)inode->u.generic_ip;
+ if (rchan == NULL)
+ return -EACCES;
+ reader = __add_rchan_reader(rchan, filp, 1, 0);
+ if (reader == NULL)
+ return -ENOMEM;
+ filp->private_data = reader;
+ retval = rchan->callbacks->fileop_notify(rchan->id, filp,
+ RELAY_FILE_OPEN);
+ if (retval == 0)
+ /* Inc relay channel refcount for file */
+ rchan_get(rchan->id);
+ else {
+ __remove_rchan_reader(reader);
+ retval = -EPERM;
+ }
+ }
+
+ return retval;
+}
+
+/**
+ * relayfs_mmap - mmap file op for relayfs files
+ * @filp: the file
+ * @vma: the vma describing what to map
+ *
+ * Calls upon relay_mmap_buffer to map the file into user space.
+ */
+int
+relayfs_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct rchan *rchan;
+
+ rchan = ((struct rchan_reader *)filp->private_data)->rchan;
+
+ return __relay_mmap_buffer(rchan, vma);
+}
+
+/**
+ * relayfs_file_read - read file op for relayfs files
+ * @filp: the file
+ * @buf: user buf to read into
+ * @count: bytes requested
+ * @offset: offset into file
+ *
+ * Reads count bytes from the channel, or as much as is available within
+ * the sub-buffer currently being read. Reads are 'auto-consuming'.
+ * See relay_read() for details.
+ *
+ * Returns bytes read on success, 0 or -EAGAIN if nothing available,
+ * negative otherwise.
+ */
+ssize_t
+relayfs_file_read(struct file *filp, char * buf, size_t count, loff_t *offset)
+{
+ size_t read_count;
+ struct rchan_reader *reader;
+ u32 dummy; /* all VFS readers are auto-consuming */
+
+ if (offset != &filp->f_pos) /* pread, seeking not supported */
+ return -ESPIPE;
+
+ if (count == 0)
+ return 0;
+
+ reader = (struct rchan_reader *)filp->private_data;
+ read_count = relay_read(reader, buf, count,
+ filp->f_flags & (O_NDELAY | O_NONBLOCK) ? 0 : 1, &dummy);
+
+ return read_count;
+}
+
+/**
+ * relayfs_file_write - write file op for relayfs files
+ * @filp: the file
+ * @buf: user buf to write from
+ * @count: bytes to write
+ * @offset: offset into file
+ *
+ * Reserves a slot in the relay buffer and writes count bytes
+ * into it. The current limit for a single write is 2 pages
+ * worth. The user_deliver() channel callback will be invoked on
+ * completion of the write.
+ *
+ * Returns bytes written on success, 0 or -EAGAIN if nothing available,
+ * negative otherwise.
+ */
+ssize_t
+relayfs_file_write(struct file *filp, const char *buf, size_t count, loff_t *offset)
+{
+ int write_count;
+ char * write_buf;
+ struct rchan *rchan;
+ int err = 0;
+ void *wrote_pos;
+ struct rchan_reader *reader;
+
+ reader = (struct rchan_reader *)filp->private_data;
+ if (reader == NULL)
+ return -EPERM;
+
+ rchan = reader->rchan;
+ if (rchan == NULL)
+ return -EPERM;
+
+ if (count == 0)
+ return 0;
+
+ /* Change this if need to write more than 2 pages at once */
+ if (count > 2 * PAGE_SIZE)
+ return -EINVAL;
+
+ write_buf = (char *)__get_free_pages(GFP_KERNEL, 1);
+ if (write_buf == NULL)
+ return -ENOMEM;
+
+ if (copy_from_user(write_buf, buf, count)) {
+ free_pages((unsigned long)write_buf, 1);
+ return -EFAULT;
+ }
+
+ if (filp->f_flags & (O_NDELAY | O_NONBLOCK)) {
+ write_count = relay_write(rchan->id, write_buf, count, -1, &wrote_pos);
+ if (write_count == 0) {
+ free_pages((unsigned long)write_buf, 1);
+ return -EAGAIN;
+ }
+ } else {
+ err = wait_event_interruptible(rchan->write_wait,
+ (write_count = relay_write(rchan->id, write_buf, count, -1, &wrote_pos)));
+ if (err) {
+ free_pages((unsigned long)write_buf, 1);
+ return err;
+ }
+ }
+
+ free_pages((unsigned long)write_buf, 1);
+
+ rchan->callbacks->user_deliver(rchan->id, wrote_pos, write_count);
+
+ return write_count;
+}
+
+/**
+ * relayfs_ioctl - ioctl file op for relayfs files
+ * @inode: the inode
+ * @filp: the file
+ * @cmd: the command
+ * @arg: command arg
+ *
+ * Passes the specified cmd/arg to the kernel client. arg may be a
+ * pointer to user-space data, in which case the kernel client is
+ * responsible for copying the data to/from user space appropriately.
+ * The kernel client is also responsible for returning a meaningful
+ * return value for ioctl calls.
+ *
+ * Returns result of relay channel callback, -EPERM if unsuccessful.
+ */
+int
+relayfs_ioctl(struct inode *inode, struct file *filp, unsigned int cmd, unsigned long arg)
+{
+ struct rchan *rchan;
+ struct rchan_reader *reader;
+
+ reader = (struct rchan_reader *)filp->private_data;
+ if (reader == NULL)
+ return -EPERM;
+
+ rchan = reader->rchan;
+ if (rchan == NULL)
+ return -EPERM;
+
+ return rchan->callbacks->ioctl(rchan->id, cmd, arg);
+}
+
+/**
+ * relayfs_poll - poll file op for relayfs files
+ * @filp: the file
+ * @wait: poll table
+ *
+ * Poll implementation.
+ */
+static unsigned int
+relayfs_poll(struct file *filp, poll_table *wait)
+{
+ struct rchan_reader *reader;
+ unsigned int mask = 0;
+
+ reader = (struct rchan_reader *)filp->private_data;
+
+ if (reader->rchan->finalized)
+ return POLLERR;
+
+ if (filp->f_mode & FMODE_READ) {
+ poll_wait(filp, &reader->rchan->read_wait, wait);
+ if (!rchan_empty(reader))
+ mask |= POLLIN | POLLRDNORM;
+ }
+
+ if (filp->f_mode & FMODE_WRITE) {
+ poll_wait(filp, &reader->rchan->write_wait, wait);
+ if (!rchan_full(reader))
+ mask |= POLLOUT | POLLWRNORM;
+ }
+
+ return mask;
+}
+
+/**
+ * relayfs_release - release file op for relayfs files
+ * @inode: the inode
+ * @filp: the file
+ *
+ * Decrements the channel refcount, as the filesystem is
+ * no longer using it.
+ */
+int
+relayfs_release(struct inode *inode, struct file *filp)
+{
+ struct rchan_reader *reader;
+ struct rchan *rchan;
+
+ reader = (struct rchan_reader *)filp->private_data;
+ if (reader == NULL || reader->rchan == NULL)
+ return 0;
+ rchan = reader->rchan;
+
+ rchan->callbacks->fileop_notify(reader->rchan->id, filp,
+ RELAY_FILE_CLOSE);
+ __remove_rchan_reader(reader);
+ /* The channel is no longer in use as far as this file is concerned */
+ rchan_put(rchan);
+
+ return 0;
+}
+
+static struct address_space_operations relayfs_aops = {
+ .readpage = simple_readpage,
+ .prepare_write = simple_prepare_write,
+ .commit_write = simple_commit_write
+};
+
+static struct file_operations relayfs_file_operations = {
+ .open = relayfs_open,
+ .read = relayfs_file_read,
+ .write = relayfs_file_write,
+ .ioctl = relayfs_ioctl,
+ .poll = relayfs_poll,
+ .mmap = relayfs_mmap,
+ .fsync = simple_sync_file,
+ .release = relayfs_release,
+};
+
+static struct inode_operations relayfs_file_inode_operations = {
+ .getattr = simple_getattr,
+};
+
+static struct inode_operations relayfs_dir_inode_operations = {
+ .create = relayfs_create,
+ .lookup = simple_lookup,
+ .link = simple_link,
+ .unlink = simple_unlink,
+ .symlink = relayfs_symlink,
+ .mkdir = relayfs_mkdir,
+ .rmdir = simple_rmdir,
+ .mknod = relayfs_mknod,
+ .rename = simple_rename,
+};
+
+static struct super_operations relayfs_ops = {
+ .statfs = simple_statfs,
+ .drop_inode = generic_delete_inode,
+};
+
+static int
+relayfs_fill_super(struct super_block * sb, void * data, int silent)
+{
+ struct inode * inode;
+ struct dentry * root;
+
+ sb->s_blocksize = PAGE_CACHE_SIZE;
+ sb->s_blocksize_bits = PAGE_CACHE_SHIFT;
+ sb->s_magic = RELAYFS_MAGIC;
+ sb->s_op = &relayfs_ops;
+ inode = relayfs_get_inode(sb, S_IFDIR | 0755, 0);
+
+ if (!inode)
+ return -ENOMEM;
+
+ root = d_alloc_root(inode);
+ if (!root) {
+ iput(inode);
+ return -ENOMEM;
+ }
+ sb->s_root = root;
+
+ return 0;
+}
+
+static struct super_block *
+relayfs_get_sb(struct file_system_type *fs_type,
+ int flags, const char *dev_name, void *data)
+{
+ return get_sb_single(fs_type, flags, data, relayfs_fill_super);
+}
+
+static struct file_system_type relayfs_fs_type = {
+ .owner = THIS_MODULE,
+ .name = "relayfs",
+ .get_sb = relayfs_get_sb,
+ .kill_sb = kill_litter_super,
+};
+
+static int __init
+init_relayfs_fs(void)
+{
+ int err = register_filesystem(&relayfs_fs_type);
+#ifdef CONFIG_KLOG_CHANNEL
+ if (!err)
+ create_klog_channel();
+#endif
+ return err;
+}
+
+static void __exit
+exit_relayfs_fs(void)
+{
+#ifdef CONFIG_KLOG_CHANNEL
+ remove_klog_channel();
+#endif
+ unregister_filesystem(&relayfs_fs_type);
+}
+
+module_init(init_relayfs_fs)
+module_exit(exit_relayfs_fs)
+
+MODULE_AUTHOR("Tom Zanussi <zanussi@us.ibm.com> and Karim Yaghmour <karim@opersys.com>");
+MODULE_DESCRIPTION("Relay Filesystem");
+MODULE_LICENSE("GPL");
+
--- /dev/null
+/*
+ * KLOG Generic Logging facility built upon the relayfs infrastructure
+ *
+ * Authors: Hubertus Franke (frankeh@us.ibm.com)
+ * Tom Zanussi (zanussi@us.ibm.com)
+ *
+ * Please direct all questions/comments to zanussi@us.ibm.com
+ *
+ * Copyright (C) 2003, IBM Corp
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/kernel.h>
+#include <linux/smp_lock.h>
+#include <linux/console.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/smp.h>
+#include <linux/sysctl.h>
+#include <linux/relayfs_fs.h>
+#include <linux/klog.h>
+
+/* klog channel id */
+static int klog_channel = -1;
+
+/* maximum size of klog formatting buffer beyond which truncation will occur */
+#define KLOG_BUF_SIZE (512)
+/* per-cpu klog formatting buffer */
+static char buf[NR_CPUS][KLOG_BUF_SIZE];
+
+/*
+ * klog_enabled determines whether klog()/klog_raw() actually do write
+ * to the klog channel at any given time. If klog_enabled == 1 they do,
+ * otherwise they don't. Settable using sysctl fs.relayfs.klog_enabled.
+ */
+#ifdef CONFIG_KLOG_CHANNEL_AUTOENABLE
+static int klog_enabled = 1;
+#else
+static int klog_enabled = 0;
+#endif
+
+/**
+ * klog - write a formatted string into the klog channel
+ * @fmt: format string
+ *
+ * Returns number of bytes written, negative number on failure.
+ */
+int klog(const char *fmt, ...)
+{
+ va_list args;
+ int len, err;
+ char *cbuf;
+ unsigned long flags;
+
+ if (!klog_enabled || klog_channel < 0)
+ return 0;
+
+ local_irq_save(flags);
+ cbuf = buf[smp_processor_id()];
+
+ va_start(args, fmt);
+ len = vsnprintf(cbuf, KLOG_BUF_SIZE, fmt, args);
+ va_end(args);
+
+ /* vsnprintf returns the untruncated length; clamp to what was stored */
+ if (len >= KLOG_BUF_SIZE)
+ len = KLOG_BUF_SIZE - 1;
+
+ err = relay_write(klog_channel, cbuf, len, -1, NULL);
+ local_irq_restore(flags);
+
+ return err;
+}
+
+/**
+ * klog_raw - directly write into the klog channel
+ * @buf: buffer containing data to write
+ * @len: # bytes to write
+ *
+ * Returns number of bytes written, negative number on failure.
+ */
+int klog_raw(const char *buf, int len)
+{
+ int err = 0;
+
+ if (klog_enabled && klog_channel >= 0)
+ err = relay_write(klog_channel, buf, len, -1, NULL);
+
+ return err;
+}
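+
+/*
+ * Illustrative sketch (not part of this facility's code): once the klog
+ * channel exists and fs.relayfs.klog_enabled is set, kernel code can log
+ * either printk-style formatted strings or raw binary records. The
+ * variables used below are made-up examples:
+ *
+ *	klog("context switch: prev=%d next=%d\n", prev->pid, next->pid);
+ *	klog_raw(binary_record, record_len);
+ */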
+
+/**
+ * relayfs sysctl data
+ *
+ * Only /proc/sys/fs/relayfs/klog_enabled for now.
+ */
+#define CTL_ENABLE_KLOG 100
+#define CTL_RELAYFS 100
+
+static struct ctl_table_header *relayfs_ctl_table_header;
+
+static struct ctl_table relayfs_table[] =
+{
+ {
+ .ctl_name = CTL_ENABLE_KLOG,
+ .procname = "klog_enabled",
+ .data = &klog_enabled,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+ {
+ 0
+ }
+};
+
+static struct ctl_table relayfs_dir_table[] =
+{
+ {
+ .ctl_name = CTL_RELAYFS,
+ .procname = "relayfs",
+ .data = NULL,
+ .maxlen = 0,
+ .mode = 0555,
+ .child = relayfs_table,
+ },
+ {
+ 0
+ }
+};
+
+static struct ctl_table relayfs_root_table[] =
+{
+ {
+ .ctl_name = CTL_FS,
+ .procname = "fs",
+ .data = NULL,
+ .maxlen = 0,
+ .mode = 0555,
+ .child = relayfs_dir_table,
+ },
+ {
+ 0
+ }
+};
+
+/**
+ * create_klog_channel - creates channel /mnt/relay/klog
+ *
+ * Returns channel id on success, negative otherwise.
+ */
+int
+create_klog_channel(void)
+{
+ u32 bufsize, nbufs;
+ u32 channel_flags;
+
+ channel_flags = RELAY_DELIVERY_PACKET | RELAY_USAGE_GLOBAL;
+ channel_flags |= RELAY_SCHEME_ANY | RELAY_TIMESTAMP_ANY;
+
+ bufsize = 1 << (CONFIG_KLOG_CHANNEL_SHIFT - 2);
+ nbufs = 4;
+
+ klog_channel = relay_open("klog",
+ bufsize,
+ nbufs,
+ channel_flags,
+ NULL,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ 0,
+ NULL,
+ 0);
+
+ if (klog_channel < 0)
+ printk(KERN_ERR "klog channel creation failed, errcode: %d\n", klog_channel);
+ else {
+ printk(KERN_INFO "klog channel created (%u bytes)\n", 1 << CONFIG_KLOG_CHANNEL_SHIFT);
+ relayfs_ctl_table_header = register_sysctl_table(relayfs_root_table, 1);
+ }
+
+ return klog_channel;
+}
+
+/**
+ * remove_klog_channel - destroys channel /mnt/relay/klog
+ *
+ * Returns 0, negative otherwise.
+ */
+int
+remove_klog_channel(void)
+{
+ if (relayfs_ctl_table_header)
+ unregister_sysctl_table(relayfs_ctl_table_header);
+
+ return relay_close(klog_channel);
+}
+
+EXPORT_SYMBOL(klog);
+EXPORT_SYMBOL(klog_raw);
+
--- /dev/null
+/*
+ * Public API and common code for RelayFS.
+ *
+ * Please see Documentation/filesystems/relayfs.txt for API description.
+ *
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 1999, 2000, 2001, 2002 - Karim Yaghmour (karim@opersys.com)
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/stddef.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/time.h>
+#include <linux/page-flags.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <linux/mman.h>
+#include <linux/delay.h>
+
+#include <asm/io.h>
+#include <asm/current.h>
+#include <asm/uaccess.h>
+#include <asm/bitops.h>
+#include <asm/pgtable.h>
+#include <asm/relay.h>
+#include <asm/hardirq.h>
+
+#include "relay_lockless.h"
+#include "relay_locking.h"
+#include "resize.h"
+
+/* Relay channel table, indexed by channel id */
+static struct rchan * rchan_table[RELAY_MAX_CHANNELS];
+static rwlock_t rchan_table_lock = RW_LOCK_UNLOCKED;
+
+/* Relay operation structs, one per scheme */
+static struct relay_ops lockless_ops = {
+ .reserve = lockless_reserve,
+ .commit = lockless_commit,
+ .get_offset = lockless_get_offset,
+ .finalize = lockless_finalize,
+ .reset = lockless_reset,
+ .reset_index = lockless_reset_index
+};
+
+static struct relay_ops locking_ops = {
+ .reserve = locking_reserve,
+ .commit = locking_commit,
+ .get_offset = locking_get_offset,
+ .finalize = locking_finalize,
+ .reset = locking_reset,
+ .reset_index = locking_reset_index
+};
+
+/*
+ * Low-level relayfs kernel API. These functions should not normally be
+ * used by clients. See high-level kernel API below.
+ */
+
+/**
+ * rchan_get - get channel associated with id, incrementing refcount
+ * @rchan_id: the channel id
+ *
+ * Returns channel if successful, NULL otherwise.
+ */
+struct rchan *
+rchan_get(int rchan_id)
+{
+ struct rchan *rchan;
+
+ if ((rchan_id < 0) || (rchan_id >= RELAY_MAX_CHANNELS))
+ return NULL;
+
+ read_lock(&rchan_table_lock);
+ rchan = rchan_table[rchan_id];
+ if (rchan)
+ atomic_inc(&rchan->refcount);
+ read_unlock(&rchan_table_lock);
+
+ return rchan;
+}
+
+/**
+ * clear_readers - clear non-VFS readers
+ * @rchan: the channel
+ *
+ * Clear the channel pointers of all non-VFS readers open on the channel.
+ */
+static inline void
+clear_readers(struct rchan *rchan)
+{
+ struct list_head *p;
+ struct rchan_reader *reader;
+
+ read_lock(&rchan->open_readers_lock);
+ list_for_each(p, &rchan->open_readers) {
+ reader = list_entry(p, struct rchan_reader, list);
+ if (!reader->vfs_reader)
+ reader->rchan = NULL;
+ }
+ read_unlock(&rchan->open_readers_lock);
+}
+
+/**
+ * rchan_alloc_id - reserve a channel id and store associated channel
+ * @rchan: the channel
+ *
+ * Returns channel id if successful, -1 otherwise.
+ */
+static inline int
+rchan_alloc_id(struct rchan *rchan)
+{
+ int i;
+ int rchan_id = -1;
+
+ if (rchan == NULL)
+ return -1;
+
+ write_lock(&rchan_table_lock);
+ for (i = 0; i < RELAY_MAX_CHANNELS; i++) {
+ if (rchan_table[i] == NULL) {
+ rchan_table[i] = rchan;
+ rchan_id = rchan->id = i;
+ break;
+ }
+ }
+ if (rchan_id != -1)
+ atomic_inc(&rchan->refcount);
+ write_unlock(&rchan_table_lock);
+
+ return rchan_id;
+}
+
+/**
+ * rchan_free_id - revoke a channel id and remove associated channel
+ * @rchan_id: the channel id
+ */
+static inline void
+rchan_free_id(int rchan_id)
+{
+ struct rchan *rchan;
+
+ if ((rchan_id < 0) || (rchan_id >= RELAY_MAX_CHANNELS))
+ return;
+
+ write_lock(&rchan_table_lock);
+ rchan = rchan_table[rchan_id];
+ rchan_table[rchan_id] = NULL;
+ write_unlock(&rchan_table_lock);
+}
+
+/**
+ * rchan_destroy_buf - destroy the current channel buffer
+ * @rchan: the channel
+ */
+static inline void
+rchan_destroy_buf(struct rchan *rchan)
+{
+ if (rchan->buf && !rchan->init_buf)
+ free_rchan_buf(rchan->buf,
+ rchan->buf_page_array,
+ rchan->buf_page_count);
+}
+
+/**
+ * relay_release - destroy the channel and release its resources
+ * @rchan: the channel
+ *
+ * Returns 0 if successful, negative otherwise.
+ *
+ * Releases the channel buffer, destroys the channel, and removes the
+ * relay file from the relayfs filesystem. Should only be called from
+ * rchan_put(). If we're here, it means by definition refcount is 0.
+ */
+static int
+relay_release(struct rchan *rchan)
+{
+ if (rchan == NULL)
+ return -EBADF;
+
+ rchan_destroy_buf(rchan);
+ rchan_free_id(rchan->id);
+ relayfs_remove_file(rchan->dentry);
+ clear_readers(rchan);
+ kfree(rchan);
+
+ return 0;
+}
+
+/**
+ * rchan_put - decrement channel refcount, releasing it if 0
+ * @rchan: the channel
+ *
+ * If the refcount reaches 0, the channel will be destroyed.
+ */
+void
+rchan_put(struct rchan *rchan)
+{
+ if (atomic_dec_and_test(&rchan->refcount))
+ relay_release(rchan);
+}
+
+/**
+ * relay_reserve - reserve a slot in the channel buffer
+ * @rchan: the channel
+ * @len: the length of the slot to reserve
+ * @ts: variable to receive the time the slot was reserved
+ * @td: the time delta between buffer start and current write, or TSC
+ * @err: receives the result flags
+ * @interrupting: 1 if interrupting previous, used only in locking scheme
+ *
+ * Returns pointer to the beginning of the reserved slot, NULL if error.
+ *
+ * The err value contains the result flags and is an ORed combination
+ * of the following:
+ *
+ * RELAY_BUFFER_SWITCH_NONE - no buffer switch occurred
+ * RELAY_EVENT_DISCARD_NONE - event should not be discarded
+ * RELAY_BUFFER_SWITCH - buffer switch occurred
+ * RELAY_EVENT_DISCARD - event should be discarded (all buffers are full)
+ * RELAY_EVENT_TOO_LONG - event won't fit into even an empty buffer
+ *
+ * buffer_start and buffer_end callbacks are triggered at this point
+ * if applicable.
+ */
+char *
+relay_reserve(struct rchan *rchan,
+ u32 len,
+ struct timeval *ts,
+ u32 *td,
+ int *err,
+ int *interrupting)
+{
+ if (rchan == NULL)
+ return NULL;
+
+ *interrupting = 0;
+
+ return rchan->relay_ops->reserve(rchan, len, ts, td, err, interrupting);
+}
+
+
+/**
+ * wakeup_readers - wake up VFS readers waiting on a channel
+ * @private: the channel
+ *
+ * This is the work function used to defer reader waking. Waking is
+ * deferred because waking readers directly from commit causes
+ * problems if the write being committed came from, say, the scheduler.
+ */
+static void
+wakeup_readers(void *private)
+{
+ struct rchan *rchan = (struct rchan *)private;
+
+ wake_up_interruptible(&rchan->read_wait);
+}
+
+
+/**
+ * relay_commit - commit a reserved slot in the buffer
+ * @rchan: the channel
+ * @from: commit the length starting here
+ * @len: length committed
+ * @reserve_code: the result flags returned by the matching relay_reserve()
+ * @interrupting: 1 if interrupting previous, used only in locking scheme
+ *
+ * After the write into the reserved buffer has been completed, this
+ * function must be called in order for the relay to determine whether
+ * buffers are complete and to wake up VFS readers.
+ *
+ * delivery callback is triggered at this point if applicable.
+ */
+void
+relay_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int reserve_code,
+ int interrupting)
+{
+ int deliver;
+
+ if (rchan == NULL)
+ return;
+
+ deliver = packet_delivery(rchan) ||
+ (reserve_code & RELAY_BUFFER_SWITCH);
+
+ rchan->relay_ops->commit(rchan, from, len, deliver, interrupting);
+
+ /* The params are always the same, so no worry about re-queuing */
+ if (deliver && waitqueue_active(&rchan->read_wait)) {
+ PREPARE_WORK(&rchan->wake_readers, wakeup_readers, rchan);
+ schedule_delayed_work(&rchan->wake_readers, 1);
+ }
+}
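+
+/*
+ * Illustrative sketch (not part of this file's code): a kernel client
+ * would typically pair relay_reserve() and relay_commit() as below,
+ * where my_rchan, data and len are hypothetical client variables:
+ *
+ *	struct timeval ts;
+ *	u32 td;
+ *	int err, interrupting;
+ *	char *slot;
+ *
+ *	slot = relay_reserve(my_rchan, len, &ts, &td, &err, &interrupting);
+ *	if (slot != NULL && !(err & RELAY_EVENT_DISCARD)) {
+ *		memcpy(slot, data, len);
+ *		relay_commit(my_rchan, slot, len, err, interrupting);
+ *	}
+ */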
+
+/**
+ * relay_get_offset - get current and max channel buffer offsets
+ * @rchan: the channel
+ * @max_offset: if non-NULL, receives the maximum channel offset
+ *
+ * Returns the current channel buffer offset.
+ */
+u32
+relay_get_offset(struct rchan *rchan, u32 *max_offset)
+{
+ return rchan->relay_ops->get_offset(rchan, max_offset);
+}
+
+/**
+ * reset_index - try once to reset the current channel index
+ * @rchan: the channel
+ * @old_index: the index read before reset
+ *
+ * Attempts once to reset the channel index to 0. Returns 0 on
+ * success, negative otherwise.
+ */
+int
+reset_index(struct rchan *rchan, u32 old_index)
+{
+ return rchan->relay_ops->reset_index(rchan, old_index);
+}
+
+/*
+ * close() vm_op implementation for relayfs file mapping.
+ */
+static void
+relay_file_mmap_close(struct vm_area_struct *vma)
+{
+ struct file *filp = vma->vm_file;
+ struct rchan_reader *reader;
+ struct rchan *rchan;
+
+ reader = (struct rchan_reader *)filp->private_data;
+ rchan = reader->rchan;
+
+ atomic_dec(&rchan->mapped);
+
+ rchan->callbacks->fileop_notify(reader->rchan->id, filp,
+ RELAY_FILE_UNMAP);
+}
+
+/*
+ * vm_ops for relay file mappings.
+ */
+static struct vm_operations_struct relay_file_mmap_ops = {
+ .close = relay_file_mmap_close
+};
+
+/* \begin{Code inspired by BTTV driver} */
+static inline unsigned long
+kvirt_to_pa(unsigned long adr)
+{
+ unsigned long kva, ret;
+
+ kva = (unsigned long) page_address(vmalloc_to_page((void *) adr));
+ kva |= adr & (PAGE_SIZE - 1);
+ ret = __pa(kva);
+ return ret;
+}
+
+static int
+relay_mmap_region(struct vm_area_struct *vma,
+ const char *adr,
+ const char *start_pos,
+ unsigned long size)
+{
+ unsigned long start = (unsigned long) adr;
+ unsigned long page, pos;
+
+ pos = (unsigned long) start_pos;
+
+ while (size > 0) {
+ page = kvirt_to_pa(pos);
+ if (remap_page_range(vma, start, page, PAGE_SIZE, PAGE_SHARED))
+ return -EAGAIN;
+ start += PAGE_SIZE;
+ pos += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ return 0;
+}
+/* \end{Code inspired by BTTV driver} */
+
+/**
+ * __relay_mmap_buffer - mmap channel buffer to process address space
+ * @rchan: the relay channel
+ * @vma: vm_area_struct describing memory to be mapped
+ *
+ * Returns:
+ * 0 if ok
+ * -EAGAIN, when remap failed
+ * -EINVAL, invalid requested length
+ *
+ * Caller should already have grabbed mmap_sem.
+ */
+int
+__relay_mmap_buffer(struct rchan *rchan,
+ struct vm_area_struct *vma)
+{
+ int err = 0;
+ unsigned long length = vma->vm_end - vma->vm_start;
+ struct file *filp = vma->vm_file;
+
+ if (rchan == NULL) {
+ err = -EBADF;
+ goto exit;
+ }
+
+ if (rchan->init_buf) {
+ err = -EPERM;
+ goto exit;
+ }
+
+ if (length != (unsigned long)rchan->alloc_size) {
+ err = -EINVAL;
+ goto exit;
+ }
+
+ err = relay_mmap_region(vma,
+ (char *)vma->vm_start,
+ rchan->buf,
+ rchan->alloc_size);
+
+ if (err == 0) {
+ vma->vm_ops = &relay_file_mmap_ops;
+ err = rchan->callbacks->fileop_notify(rchan->id, filp,
+ RELAY_FILE_MAP);
+ if (err == 0)
+ atomic_inc(&rchan->mapped);
+ }
+exit:
+ return err;
+}
+
+/*
+ * High-level relayfs kernel API. See Documentation/filesystems/relayfs.txt.
+ */
+
+/*
+ * rchan_callback implementations defining default channel behavior. Used
+ * in place of corresponding NULL values in client callback struct.
+ */
+
+/*
+ * buffer_end() default callback. Does nothing.
+ */
+static int
+buffer_end_default_callback(int rchan_id,
+ char *current_write_pos,
+ char *end_of_buffer,
+ struct timeval end_time,
+ u32 end_tsc,
+ int using_tsc)
+{
+ return 0;
+}
+
+/*
+ * buffer_start() default callback. Does nothing.
+ */
+static int
+buffer_start_default_callback(int rchan_id,
+ char *current_write_pos,
+ u32 buffer_id,
+ struct timeval start_time,
+ u32 start_tsc,
+ int using_tsc)
+{
+ return 0;
+}
+
+/*
+ * deliver() default callback. Does nothing.
+ */
+static void
+deliver_default_callback(int rchan_id, char *from, u32 len)
+{
+}
+
+/*
+ * user_deliver() default callback. Does nothing.
+ */
+static void
+user_deliver_default_callback(int rchan_id, char *from, u32 len)
+{
+}
+
+/*
+ * needs_resize() default callback. Does nothing.
+ */
+static void
+needs_resize_default_callback(int rchan_id,
+ int resize_type,
+ u32 suggested_buf_size,
+ u32 suggested_n_bufs)
+{
+}
+
+/*
+ * fileop_notify() default callback. Does nothing.
+ */
+static int
+fileop_notify_default_callback(int rchan_id,
+ struct file *filp,
+ enum relay_fileop fileop)
+{
+ return 0;
+}
+
+/*
+ * ioctl() default callback. Does nothing.
+ */
+static int
+ioctl_default_callback(int rchan_id,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ return 0;
+}
+
+/* relay channel default callbacks */
+static struct rchan_callbacks default_channel_callbacks = {
+ .buffer_start = buffer_start_default_callback,
+ .buffer_end = buffer_end_default_callback,
+ .deliver = deliver_default_callback,
+ .user_deliver = user_deliver_default_callback,
+ .needs_resize = needs_resize_default_callback,
+ .fileop_notify = fileop_notify_default_callback,
+ .ioctl = ioctl_default_callback,
+};
+
+/**
+ * check_attribute_flags - check sanity of channel attributes
+ * @attribute_flags: channel attributes
+ * @resizeable: 1 if the channel is resizeable
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static int
+check_attribute_flags(u32 *attribute_flags, int resizeable)
+{
+ u32 flags = *attribute_flags;
+
+ if (!(flags & RELAY_DELIVERY_BULK) && !(flags & RELAY_DELIVERY_PACKET))
+ return -EINVAL; /* Delivery mode must be specified */
+
+ if (!(flags & RELAY_USAGE_SMP) && !(flags & RELAY_USAGE_GLOBAL))
+ return -EINVAL; /* Usage must be specified */
+
+ if (resizeable) { /* Resizeable can never be continuous */
+ *attribute_flags &= ~RELAY_MODE_CONTINUOUS;
+ *attribute_flags |= RELAY_MODE_NO_OVERWRITE;
+ }
+
+ if ((flags & RELAY_MODE_CONTINUOUS) &&
+ (flags & RELAY_MODE_NO_OVERWRITE))
+ return -EINVAL; /* Can't have it both ways */
+
+ if (!(flags & RELAY_MODE_CONTINUOUS) &&
+ !(flags & RELAY_MODE_NO_OVERWRITE))
+ *attribute_flags |= RELAY_MODE_CONTINUOUS; /* Default to continuous */
+
+ if (!(flags & RELAY_SCHEME_ANY))
+ return -EINVAL; /* One or both must be specified */
+ else if (flags & RELAY_SCHEME_LOCKLESS) {
+ if (have_cmpxchg())
+ *attribute_flags &= ~RELAY_SCHEME_LOCKING;
+ else if (flags & RELAY_SCHEME_LOCKING)
+ *attribute_flags &= ~RELAY_SCHEME_LOCKLESS;
+ else
+ return -EINVAL; /* Locking scheme not an alternative */
+ }
+
+ if (!(flags & RELAY_TIMESTAMP_ANY))
+ return -EINVAL; /* One or both must be specified */
+ else if (flags & RELAY_TIMESTAMP_TSC) {
+ if (have_tsc())
+ *attribute_flags &= ~RELAY_TIMESTAMP_GETTIMEOFDAY;
+ else if (flags & RELAY_TIMESTAMP_GETTIMEOFDAY)
+ *attribute_flags &= ~RELAY_TIMESTAMP_TSC;
+ else
+ return -EINVAL; /* gettimeofday not an alternative */
+ }
+
+ return 0;
+}
+
+/*
+ * High-level API functions.
+ */
+
+/**
+ * __relay_reset - internal reset function
+ * @rchan: the channel
+ * @init: 1 if this is a first-time channel initialization
+ *
+ * See relay_reset for description of effect.
+ */
+void
+__relay_reset(struct rchan *rchan, int init)
+{
+ int i;
+
+ if (init) {
+ rchan->version = RELAYFS_CHANNEL_VERSION;
+ init_MUTEX(&rchan->resize_sem);
+ init_waitqueue_head(&rchan->read_wait);
+ init_waitqueue_head(&rchan->write_wait);
+ atomic_set(&rchan->refcount, 0);
+ INIT_LIST_HEAD(&rchan->open_readers);
+ rchan->open_readers_lock = RW_LOCK_UNLOCKED;
+ }
+
+ rchan->buf_id = rchan->buf_idx = 0;
+ atomic_set(&rchan->suspended, 0);
+ atomic_set(&rchan->mapped, 0);
+ rchan->half_switch = 0;
+ rchan->bufs_produced = 0;
+ rchan->bufs_consumed = 0;
+ rchan->bytes_consumed = 0;
+ rchan->initialized = 0;
+ rchan->finalized = 0;
+ rchan->resize_min = rchan->resize_max = 0;
+ rchan->resizing = 0;
+ rchan->replace_buffer = 0;
+ rchan->resize_buf = NULL;
+ rchan->resize_buf_size = 0;
+ rchan->resize_alloc_size = 0;
+ rchan->resize_n_bufs = 0;
+ rchan->resize_err = 0;
+ rchan->resize_failures = 0;
+ rchan->resize_order = 0;
+
+ rchan->expand_page_array = NULL;
+ rchan->expand_page_count = 0;
+ rchan->shrink_page_array = NULL;
+ rchan->shrink_page_count = 0;
+ rchan->resize_page_array = NULL;
+ rchan->resize_page_count = 0;
+ rchan->old_buf_page_array = NULL;
+ rchan->expand_buf_id = 0;
+
+ INIT_WORK(&rchan->wake_readers, NULL, NULL);
+ INIT_WORK(&rchan->wake_writers, NULL, NULL);
+
+ for (i = 0; i < RELAY_MAX_BUFS; i++)
+ rchan->unused_bytes[i] = 0;
+
+ rchan->relay_ops->reset(rchan, init);
+}
+
+/**
+ * relay_reset - reset the channel
+ * @rchan: the channel
+ *
+ * Returns 0 if successful, negative if not.
+ *
+ * This has the effect of erasing all data from the buffer and
+ * restarting the channel in its initial state. The buffer itself
+ * is not freed, so any mappings are still in effect.
+ *
+ * NOTE: Care should be taken that the channel isn't actually
+ * being used by anything when this call is made.
+ */
+int
+relay_reset(int rchan_id)
+{
+ struct rchan *rchan;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ __relay_reset(rchan, 0);
+ update_readers_consumed(rchan, 0, 0);
+
+ rchan_put(rchan);
+
+ return 0;
+}
+
+/**
+ * check_init_buf - check the sanity of init_buf, if present
+ * @init_buf: the initbuf
+ * @init_buf_size: the total initbuf size
+ * @bufsize: the channel's sub-buffer size
+ * @nbufs: the number of sub-buffers in the channel
+ *
+ * Returns 0 if ok, negative otherwise.
+ */
+static int
+check_init_buf(char *init_buf, u32 init_buf_size, u32 bufsize, u32 nbufs)
+{
+ int err = 0;
+
+ if (init_buf && nbufs == 1) /* 1 sub-buffer makes no sense */
+ err = -EINVAL;
+
+ if (init_buf && (bufsize * nbufs != init_buf_size))
+ err = -EINVAL;
+
+ return err;
+}
+
+/**
+ * rchan_create_buf - allocate the initial channel buffer
+ * @rchan: the channel
+ * @size_alloc: the total size of the channel buffer
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static inline int
+rchan_create_buf(struct rchan *rchan, int size_alloc)
+{
+ struct page **page_array;
+ int page_count;
+
+ if ((rchan->buf = (char *)alloc_rchan_buf(size_alloc, &page_array, &page_count)) == NULL) {
+ rchan->buf_page_array = NULL;
+ rchan->buf_page_count = 0;
+ return -ENOMEM;
+ }
+
+ rchan->buf_page_array = page_array;
+ rchan->buf_page_count = page_count;
+
+ return 0;
+}
+
+/**
+ * rchan_create - allocate and initialize a channel, including buffer
+ * @chanpath: path specifying the relayfs channel file to create
+ * @bufsize: the size of the sub-buffers within the channel buffer
+ * @nbufs: the number of sub-buffers within the channel buffer
+ * @rchan_flags: flags specifying buffer attributes
+ * @init_buf: initial memory buffer to use, NULL if N/A
+ * @init_buf_size: size of the initial buffer, 0 if N/A
+ * @err: receives the error code
+ *
+ * Returns channel if successful, NULL otherwise, err receives errcode.
+ *
+ * Allocates a struct rchan representing a relay channel, according
+ * to the attributes passed in via rchan_flags. Does some basic sanity
+ * checking but doesn't try to do anything smart. In particular, the
+ * number of buffers must be a power of 2, and if the lockless scheme
+ * is being used, the sub-buffer size must also be a power of 2. The
+ * locking scheme can use buffers of any size.
+ */
+static struct rchan *
+rchan_create(const char *chanpath,
+ int bufsize,
+ int nbufs,
+ u32 rchan_flags,
+ char *init_buf,
+ u32 init_buf_size,
+ int *err)
+{
+ int size_alloc;
+ struct rchan *rchan = NULL;
+
+ *err = 0;
+
+ rchan = (struct rchan *)kmalloc(sizeof(struct rchan), GFP_KERNEL);
+ if (rchan == NULL) {
+ *err = -ENOMEM;
+ return NULL;
+ }
+ rchan->buf = rchan->init_buf = NULL;
+
+ *err = check_init_buf(init_buf, init_buf_size, bufsize, nbufs);
+ if (*err)
+ goto exit;
+
+ if (nbufs == 1 && bufsize) {
+ rchan->n_bufs = nbufs;
+ rchan->buf_size = bufsize;
+ size_alloc = bufsize;
+ goto alloc;
+ }
+
+ if (bufsize <= 0 ||
+ (rchan_flags & RELAY_SCHEME_LOCKLESS && hweight32(bufsize) != 1) ||
+ hweight32(nbufs) != 1 ||
+ nbufs < RELAY_MIN_BUFS ||
+ nbufs > RELAY_MAX_BUFS) {
+ *err = -EINVAL;
+ goto exit;
+ }
+
+ size_alloc = FIX_SIZE(bufsize * nbufs);
+ if (size_alloc > RELAY_MAX_BUF_SIZE) {
+ *err = -EINVAL;
+ goto exit;
+ }
+ rchan->n_bufs = nbufs;
+ rchan->buf_size = bufsize;
+
+ if (rchan_flags & RELAY_SCHEME_LOCKLESS) {
+ offset_bits(rchan) = ffs(bufsize) - 1;
+ offset_mask(rchan) = RELAY_BUF_OFFSET_MASK(offset_bits(rchan));
+ bufno_bits(rchan) = ffs(nbufs) - 1;
+ }
+alloc:
+ if (rchan_alloc_id(rchan) == -1) {
+ *err = -ENOMEM;
+ goto exit;
+ }
+
+ if (init_buf == NULL) {
+ *err = rchan_create_buf(rchan, size_alloc);
+ if (*err) {
+ rchan_free_id(rchan->id);
+ goto exit;
+ }
+ } else
+ rchan->buf = rchan->init_buf = init_buf;
+
+ rchan->alloc_size = size_alloc;
+
+ if (rchan_flags & RELAY_SCHEME_LOCKLESS)
+ rchan->relay_ops = &lockless_ops;
+ else
+ rchan->relay_ops = &locking_ops;
+
+exit:
+ if (*err) {
+ kfree(rchan);
+ rchan = NULL;
+ }
+
+ return rchan;
+}
+
+
+static char tmpname[NAME_MAX];
+
+/**
+ * rchan_create_dir - create directory for file
+ * @chanpath: path to file, including filename
+ * @residual: filename remaining after parse
+ * @topdir: the directory the file should be created in
+ *
+ * Returns 0 if successful, negative otherwise.
+ *
+ * Inspired by xlate_proc_name() in procfs. Given a file path which
+ * includes the filename, creates any and all directories necessary
+ * to create the file.
+ */
+static int
+rchan_create_dir(const char * chanpath,
+ const char **residual,
+ struct dentry **topdir)
+{
+ const char *cp = chanpath, *next;
+ struct dentry *parent = NULL;
+ int len, err = 0;
+
+ while (1) {
+ next = strchr(cp, '/');
+ if (!next)
+ break;
+
+ len = next - cp;
+
+ strncpy(tmpname, cp, len);
+ tmpname[len] = '\0';
+ err = relayfs_create_dir(tmpname, parent, &parent);
+ if (err && (err != -EEXIST))
+ return err;
+ cp += len + 1;
+ }
+
+ *residual = cp;
+ *topdir = parent;
+
+ return err;
+}
+
+/**
+ * rchan_create_file - create file, including parent directories
+ * @chanpath: path to file, including filename
+ * @dentry: result dentry
+ * @data: data to associate with the file
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static int
+rchan_create_file(const char * chanpath,
+ struct dentry **dentry,
+ struct rchan * data,
+ int mode)
+{
+ int err;
+ const char * fname;
+ struct dentry *topdir;
+
+ err = rchan_create_dir(chanpath, &fname, &topdir);
+ if (err && (err != -EEXIST))
+ return err;
+
+ err = relayfs_create_file(fname, topdir, dentry, (void *)data, mode);
+
+ return err;
+}
+
+/**
+ * relay_open - create a new file/channel buffer in relayfs
+ * @chanpath: name of file to create, including path
+ * @bufsize: size of sub-buffers
+ * @nbufs: number of sub-buffers
+ * @flags: channel attributes
+ * @channel_callbacks: client callback functions
+ * @start_reserve: number of bytes to reserve at start of each sub-buffer
+ * @end_reserve: number of bytes to reserve at end of each sub-buffer
+ * @rchan_start_reserve: additional reserve at start of first sub-buffer
+ * @resize_min: minimum total buffer size, if set
+ * @resize_max: maximum total buffer size, if set
+ * @mode: the perms to be given to the relayfs file, 0 to accept defaults
+ * @init_buf: initial memory buffer to start out with, NULL if N/A
+ * @init_buf_size: initial memory buffer size to start out with, 0 if N/A
+ *
+ * Returns channel id if successful, negative otherwise.
+ *
+ * Creates a relay channel using the sizes and attributes specified.
+ * The default permissions, used if mode == 0, are S_IRUSR | S_IWUSR. See
+ * Documentation/filesystems/relayfs.txt for details.
+ */
+int
+relay_open(const char *chanpath,
+ int bufsize,
+ int nbufs,
+ u32 flags,
+ struct rchan_callbacks *channel_callbacks,
+ u32 start_reserve,
+ u32 end_reserve,
+ u32 rchan_start_reserve,
+ u32 resize_min,
+ u32 resize_max,
+ int mode,
+ char *init_buf,
+ u32 init_buf_size)
+{
+ int err;
+ struct rchan *rchan;
+ struct dentry *dentry;
+ struct rchan_callbacks *callbacks = NULL;
+
+ if (chanpath == NULL)
+ return -EINVAL;
+
+ if (nbufs != 1) {
+ err = check_attribute_flags(&flags, resize_min ? 1 : 0);
+ if (err)
+ return err;
+ }
+
+ rchan = rchan_create(chanpath, bufsize, nbufs, flags, init_buf, init_buf_size, &err);
+
+ if (err < 0)
+ return err;
+
+ /* Create file in fs */
+ if ((err = rchan_create_file(chanpath, &dentry, rchan, mode)) < 0) {
+ rchan_destroy_buf(rchan);
+ rchan_free_id(rchan->id);
+ kfree(rchan);
+ return err;
+ }
+
+ rchan->dentry = dentry;
+
+ if (channel_callbacks == NULL)
+ callbacks = &default_channel_callbacks;
+ else
+ callbacks = channel_callbacks;
+
+ if (callbacks->buffer_end == NULL)
+ callbacks->buffer_end = buffer_end_default_callback;
+ if (callbacks->buffer_start == NULL)
+ callbacks->buffer_start = buffer_start_default_callback;
+ if (callbacks->deliver == NULL)
+ callbacks->deliver = deliver_default_callback;
+ if (callbacks->user_deliver == NULL)
+ callbacks->user_deliver = user_deliver_default_callback;
+ if (callbacks->needs_resize == NULL)
+ callbacks->needs_resize = needs_resize_default_callback;
+ if (callbacks->fileop_notify == NULL)
+ callbacks->fileop_notify = fileop_notify_default_callback;
+ if (callbacks->ioctl == NULL)
+ callbacks->ioctl = ioctl_default_callback;
+ rchan->callbacks = callbacks;
+
+ /* Just to let the client know the sizes used */
+ rchan->callbacks->needs_resize(rchan->id,
+ RELAY_RESIZE_REPLACED,
+ rchan->buf_size,
+ rchan->n_bufs);
+
+ rchan->flags = flags;
+ rchan->start_reserve = start_reserve;
+ rchan->end_reserve = end_reserve;
+ rchan->rchan_start_reserve = rchan_start_reserve;
+
+ __relay_reset(rchan, 1);
+
+ if (resize_min > 0 && resize_max > 0 &&
+ resize_max < RELAY_MAX_TOTAL_BUF_SIZE) {
+ rchan->resize_min = resize_min;
+ rchan->resize_max = resize_max;
+ init_shrink_timer(rchan);
+ }
+
+ rchan_get(rchan->id);
+
+ return rchan->id;
+}
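As an illustration (a hypothetical client; the path, sizes, and wrapper function are assumptions, and the 0/NULL arguments accept the defaults described above), opening a channel might look like:

```c
/* Hypothetical relayfs client: open a channel of four 32KB
 * sub-buffers with default callbacks, attributes, and permissions.
 * Everything here except the relay_open() signature is illustrative. */
static int my_chan_id;

static int my_client_init(void)
{
	my_chan_id = relay_open("my_client/log",   /* path within relayfs */
				32768,             /* bufsize */
				4,                 /* nbufs */
				0,                 /* flags: defaults */
				NULL,              /* callbacks: defaults */
				0, 0, 0,           /* no reserves */
				0, 0,              /* no auto-resize */
				0,                 /* mode: S_IRUSR | S_IWUSR */
				NULL, 0);          /* no initial buffer */

	return my_chan_id < 0 ? my_chan_id : 0;
}
```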
+
+/**
+ * relay_discard_init_buf - alloc channel buffer and copy init_buf into it
+ * @rchan_id: the channel id
+ *
+ * Returns 0 if successful, negative otherwise.
+ *
+ * NOTE: May sleep. It should be called only when the channel isn't
+ * actively being written into.
+ */
+int
+relay_discard_init_buf(int rchan_id)
+{
+ struct rchan *rchan;
+ int err = 0;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ if (rchan->init_buf == NULL) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ err = rchan_create_buf(rchan, rchan->alloc_size);
+ if (err)
+ goto out;
+
+ memcpy(rchan->buf, rchan->init_buf, rchan->n_bufs * rchan->buf_size);
+ rchan->init_buf = NULL;
+out:
+ rchan_put(rchan);
+
+ return err;
+}
+
+/**
+ * relay_finalize - perform end-of-buffer processing for last buffer
+ * @rchan_id: the channel id
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static int
+relay_finalize(int rchan_id)
+{
+ struct rchan *rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ if (rchan->finalized == 0) {
+ rchan->relay_ops->finalize(rchan);
+ rchan->finalized = 1;
+ }
+
+ if (waitqueue_active(&rchan->read_wait)) {
+ PREPARE_WORK(&rchan->wake_readers, wakeup_readers, rchan);
+ schedule_delayed_work(&rchan->wake_readers, 1);
+ }
+
+ rchan_put(rchan);
+
+ return 0;
+}
+
+/**
+ * restore_callbacks - restore default channel callbacks
+ * @rchan: the channel
+ *
+ * Restore callbacks to the default versions.
+ */
+static inline void
+restore_callbacks(struct rchan *rchan)
+{
+ if (rchan->callbacks != &default_channel_callbacks)
+ rchan->callbacks = &default_channel_callbacks;
+}
+
+/**
+ * relay_close - close the channel
+ * @rchan_id: relay channel id
+ *
+ * Returns 0 if successful, negative otherwise.
+ *
+ * Finalizes the last sub-buffer and marks the channel as finalized.
+ * The channel buffer and channel data structure are then freed
+ * automatically when the last reference to the channel is given up.
+ */
+int
+relay_close(int rchan_id)
+{
+ int err;
+ struct rchan *rchan;
+
+ if ((rchan_id < 0) || (rchan_id >= RELAY_MAX_CHANNELS))
+ return -EBADF;
+
+ err = relay_finalize(rchan_id);
+
+ if (!err) {
+ read_lock(&rchan_table_lock);
+ rchan = rchan_table[rchan_id];
+ read_unlock(&rchan_table_lock);
+
+ if (rchan) {
+ restore_callbacks(rchan);
+ if (rchan->resize_min)
+ del_timer(&rchan->shrink_timer);
+ rchan_put(rchan);
+ }
+ }
+
+ return err;
+}
+
+/**
+ * relay_write - reserve a slot in the channel and write data into it
+ * @rchan_id: relay channel id
+ * @data_ptr: data to be written into reserved slot
+ * @count: number of bytes to write
+ * @td_offset: optional offset where time delta should be written
+ * @wrote_pos: optional ptr returning buf pos written to, ignored if NULL
+ *
+ * Returns the number of bytes written, 0 or negative on failure.
+ *
+ * Reserves space in the channel and writes count bytes of data_ptr
+ * to it. Automatically performs any necessary locking, depending
+ * on the scheme and SMP usage in effect (no locking is done for the
+ * lockless scheme regardless of usage).
+ *
+ * If td_offset is >= 0, the internal time delta calculated when the
+ * slot was reserved will be written at that offset.
+ *
+ * If wrote_pos is non-NULL, it will receive the location the data
+ * was written to, which may be needed for some applications but is not
+ * normally interesting.
+ */
+int
+relay_write(int rchan_id,
+ const void *data_ptr,
+ size_t count,
+ int td_offset,
+ void **wrote_pos)
+{
+ unsigned long flags;
+ char *reserved, *write_pos;
+ int bytes_written = 0;
+ int reserve_code, interrupting;
+ struct timeval ts;
+ u32 td;
+ struct rchan *rchan;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ relay_lock_channel(rchan, flags); /* nop for lockless */
+
+ write_pos = reserved = relay_reserve(rchan, count, &ts, &td,
+ &reserve_code, &interrupting);
+
+ if (reserved != NULL) {
+ relay_write_direct(write_pos, data_ptr, count);
+ if ((td_offset >= 0) && (count >= sizeof(td)) &&
+     ((size_t)td_offset < count - sizeof(td)))
+ *((u32 *)(reserved + td_offset)) = td;
+ bytes_written = count;
+ } else if (reserve_code == RELAY_WRITE_TOO_LONG)
+ bytes_written = -EINVAL;
+
+ if (bytes_written > 0)
+ relay_commit(rchan, reserved, bytes_written, reserve_code, interrupting);
+
+ relay_unlock_channel(rchan, flags); /* nop for lockless */
+
+ rchan_put(rchan);
+
+ if (wrote_pos)
+ *wrote_pos = reserved;
+
+ return bytes_written;
+}
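A minimal sketch of a writer built on the above (the event struct and channel id are assumptions; -1 for td_offset skips the time-delta write, NULL for wrote_pos ignores the written position):

```c
/* Hypothetical event logger: write a fixed-size event to an already
 * opened channel.  A negative return means the write failed; 0 means
 * the event was discarded (e.g. buffers full in no-overwrite mode). */
static int log_event(int chan_id, u32 code, u32 arg)
{
	struct {
		u32 code;
		u32 arg;
	} ev = { code, arg };

	return relay_write(chan_id, &ev, sizeof(ev), -1, NULL);
}
```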
+
+/**
+ * wakeup_writers - wake up VFS writers waiting on a channel
+ * @private: the channel
+ *
+ * This is the work function used to defer writer waking. The
+ * reason waking is deferred is that waking writers directly from
+ * buffers_consumed causes problems if you're writing from, say,
+ * the scheduler.
+ */
+static void
+wakeup_writers(void *private)
+{
+ struct rchan *rchan = (struct rchan *)private;
+
+ wake_up_interruptible(&rchan->write_wait);
+}
+
+
+/**
+ * __relay_buffers_consumed - internal version of relay_buffers_consumed
+ * @rchan: the relay channel
+ * @bufs_consumed: number of buffers to add to current count for channel
+ *
+ * Internal - updates the channel's consumed buffer count.
+ */
+static void
+__relay_buffers_consumed(struct rchan *rchan, u32 bufs_consumed)
+{
+ rchan->bufs_consumed += bufs_consumed;
+
+ if (rchan->bufs_consumed > rchan->bufs_produced)
+ rchan->bufs_consumed = rchan->bufs_produced;
+
+ atomic_set(&rchan->suspended, 0);
+
+ PREPARE_WORK(&rchan->wake_writers, wakeup_writers, rchan);
+ schedule_delayed_work(&rchan->wake_writers, 1);
+}
+
+/**
+ * __reader_buffers_consumed - update reader/channel consumed buffer count
+ * @reader: channel reader
+ * @bufs_consumed: number of buffers to add to current count for channel
+ *
+ * Internal - updates the reader's consumed buffer count. If the reader's
+ * resulting total is greater than the channel's, update the channel's.
+ */
+static void
+__reader_buffers_consumed(struct rchan_reader *reader, u32 bufs_consumed)
+{
+ reader->bufs_consumed += bufs_consumed;
+
+ if (reader->bufs_consumed > reader->rchan->bufs_consumed)
+ __relay_buffers_consumed(reader->rchan, bufs_consumed);
+}
+
+/**
+ * relay_buffers_consumed - add to the # buffers consumed for the channel
+ * @reader: channel reader
+ * @bufs_consumed: number of buffers to add to current count for channel
+ *
+ * Adds to the channel's consumed buffer count. buffers_consumed should
+ * be the number of buffers newly consumed, not the total number consumed.
+ *
+ * NOTE: kernel clients don't need to call this function if the reader
+ * is auto-consuming or the channel is MODE_CONTINUOUS.
+ */
+void
+relay_buffers_consumed(struct rchan_reader *reader, u32 bufs_consumed)
+{
+ if (reader && reader->rchan)
+ __reader_buffers_consumed(reader, bufs_consumed);
+}
+
+/**
+ * __relay_bytes_consumed - internal version of relay_bytes_consumed
+ * @rchan: the relay channel
+ * @bytes_consumed: number of bytes to add to current count for channel
+ * @read_offset: where the bytes were consumed from
+ *
+ * Internal - updates the channel's consumed count.
+ */
+static void
+__relay_bytes_consumed(struct rchan *rchan, u32 bytes_consumed, u32 read_offset)
+{
+ u32 consuming_idx;
+ u32 unused;
+
+ consuming_idx = read_offset / rchan->buf_size;
+
+ if (consuming_idx >= rchan->n_bufs)
+ consuming_idx = rchan->n_bufs - 1;
+ rchan->bytes_consumed += bytes_consumed;
+
+ unused = rchan->unused_bytes[consuming_idx];
+
+ if (rchan->bytes_consumed + unused >= rchan->buf_size) {
+ __relay_buffers_consumed(rchan, 1);
+ rchan->bytes_consumed = 0;
+ }
+}
+
+/**
+ * __reader_bytes_consumed - update reader/channel consumed count
+ * @reader: channel reader
+ * @bytes_consumed: number of bytes to add to current count for channel
+ * @read_offset: where the bytes were consumed from
+ *
+ * Internal - updates the reader's consumed count. If the reader's
+ * resulting total is greater than the channel's, update the channel's.
+ */
+static void
+__reader_bytes_consumed(struct rchan_reader *reader, u32 bytes_consumed, u32 read_offset)
+{
+ u32 consuming_idx;
+ u32 unused;
+
+ consuming_idx = read_offset / reader->rchan->buf_size;
+
+ if (consuming_idx >= reader->rchan->n_bufs)
+ consuming_idx = reader->rchan->n_bufs - 1;
+
+ reader->bytes_consumed += bytes_consumed;
+
+ unused = reader->rchan->unused_bytes[consuming_idx];
+
+ if (reader->bytes_consumed + unused >= reader->rchan->buf_size) {
+ reader->bufs_consumed++;
+ reader->bytes_consumed = 0;
+ }
+
+ if ((reader->bufs_consumed > reader->rchan->bufs_consumed) ||
+ ((reader->bufs_consumed == reader->rchan->bufs_consumed) &&
+ (reader->bytes_consumed > reader->rchan->bytes_consumed)))
+ __relay_bytes_consumed(reader->rchan, bytes_consumed, read_offset);
+}
+
+/**
+ * relay_bytes_consumed - add to the # bytes consumed for the channel
+ * @reader: channel reader
+ * @bytes_consumed: number of bytes to add to current count for channel
+ * @read_offset: where the bytes were consumed from
+ *
+ * Adds to the channel's consumed count. bytes_consumed should be the
+ * number of bytes actually read e.g. return value of relay_read() and
+ * the read_offset should be the actual offset the bytes were read from
+ * e.g. the actual_read_offset set by relay_read(). See
+ * Documentation/filesystems/relayfs.txt for more details.
+ *
+ * NOTE: kernel clients don't need to call this function if the reader
+ * is auto-consuming or the channel is MODE_CONTINUOUS.
+ */
+void
+relay_bytes_consumed(struct rchan_reader *reader, u32 bytes_consumed, u32 read_offset)
+{
+ if (reader && reader->rchan)
+ __reader_bytes_consumed(reader, bytes_consumed, read_offset);
+}
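For a non-auto-consuming reader on a MODE_NO_OVERWRITE channel, the consumption protocol described above might be sketched as follows (a hedged example; the reader and buffer are assumed to come from elsewhere, and buf is a user-space pointer since relay_read() copies with copy_to_user()):

```c
/* Hypothetical consuming read: read what's available without
 * blocking, then report the bytes consumed so the channel can
 * reclaim the space. */
static ssize_t read_and_consume(struct rchan_reader *reader,
				char __user *buf, size_t len)
{
	u32 actual_read_offset;
	ssize_t n;

	n = relay_read(reader, buf, len, 0, &actual_read_offset);
	if (n > 0)
		relay_bytes_consumed(reader, n, actual_read_offset);

	return n;
}
```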
+
+/**
+ * update_readers_consumed - apply consumed counts to all readers
+ * @rchan: the channel
+ * @bufs_consumed: the buffers-consumed count to set for each reader
+ * @bytes_consumed: the bytes-consumed count to set for each reader
+ *
+ * Apply the consumed counts to all readers open on the channel.
+ */
+void
+update_readers_consumed(struct rchan *rchan, u32 bufs_consumed, u32 bytes_consumed)
+{
+ struct list_head *p;
+ struct rchan_reader *reader;
+
+ read_lock(&rchan->open_readers_lock);
+ list_for_each(p, &rchan->open_readers) {
+ reader = list_entry(p, struct rchan_reader, list);
+ reader->bufs_consumed = bufs_consumed;
+ reader->bytes_consumed = bytes_consumed;
+ if (reader->vfs_reader)
+ reader->pos.file->f_pos = 0;
+ else
+ reader->pos.f_pos = 0;
+ reader->offset_changed = 1;
+ }
+ read_unlock(&rchan->open_readers_lock);
+}
+
+/**
+ * do_read - utility function to do the actual read to user
+ * @rchan: the channel
+ * @buf: user buf to read into, NULL if just getting info
+ * @count: bytes requested
+ * @read_offset: offset into channel
+ * @new_offset: new offset into channel after read
+ * @actual_read_offset: read offset actually used
+ *
+ * Returns the number of bytes read, 0 if none, negative on failure.
+ */
+static ssize_t
+do_read(struct rchan *rchan, char *buf, size_t count, u32 read_offset, u32 *new_offset, u32 *actual_read_offset)
+{
+ u32 read_bufno, cur_bufno;
+ u32 avail_offset, cur_idx, max_offset, buf_end_offset;
+ u32 avail_count, buf_size;
+ int unused_bytes = 0;
+ size_t read_count = 0;
+ u32 last_buf_byte_offset;
+
+ *actual_read_offset = read_offset;
+
+ buf_size = rchan->buf_size;
+ BUG_ON(!buf_size);
+
+ read_bufno = read_offset / buf_size;
+ BUG_ON(read_bufno >= RELAY_MAX_BUFS);
+ unused_bytes = rchan->unused_bytes[read_bufno];
+
+ avail_offset = cur_idx = relay_get_offset(rchan, &max_offset);
+
+ if (cur_idx == read_offset) {
+ if (atomic_read(&rchan->suspended) == 1) {
+ read_offset += 1;
+ if (read_offset >= max_offset)
+ read_offset = 0;
+ *actual_read_offset = read_offset;
+ } else {
+ *new_offset = read_offset;
+ return 0;
+ }
+ } else {
+ last_buf_byte_offset = (read_bufno + 1) * buf_size - 1;
+ if (read_offset == last_buf_byte_offset) {
+ if (unused_bytes != 1) {
+ read_offset += 1;
+ if (read_offset >= max_offset)
+ read_offset = 0;
+ *actual_read_offset = read_offset;
+ }
+ }
+ }
+
+ read_bufno = read_offset / buf_size;
+ BUG_ON(read_bufno >= RELAY_MAX_BUFS);
+ unused_bytes = rchan->unused_bytes[read_bufno];
+
+ cur_bufno = cur_idx / buf_size;
+
+ buf_end_offset = (read_bufno + 1) * buf_size - unused_bytes;
+ if (avail_offset > buf_end_offset)
+ avail_offset = buf_end_offset;
+ else if (avail_offset < read_offset)
+ avail_offset = buf_end_offset;
+ avail_count = avail_offset - read_offset;
+ read_count = avail_count >= count ? count : avail_count;
+
+ if (read_count && buf != NULL)
+ if (copy_to_user(buf, rchan->buf + read_offset, read_count))
+ return -EFAULT;
+
+ if ((read_bufno == cur_bufno) && read_count &&
+     (read_offset + read_count >= buf_end_offset) &&
+     (read_offset + read_count <= cur_idx)) {
+ *new_offset = cur_idx;
+ return read_count;
+ }
+
+ if (read_offset + read_count + unused_bytes > max_offset)
+ *new_offset = 0;
+ else if (read_offset + read_count >= buf_end_offset)
+ *new_offset = read_offset + read_count + unused_bytes;
+ else
+ *new_offset = read_offset + read_count;
+
+ return read_count;
+}
+
+/**
+ * __relay_read - read bytes from channel, relative to current reader pos
+ * @reader: channel reader
+ * @buf: user buf to read into, NULL if just getting info
+ * @count: bytes requested
+ * @read_offset: offset into channel
+ * @new_offset: new offset into channel after read
+ * @actual_read_offset: read offset actually used
+ * @wait: if non-zero, wait for something to read
+ *
+ * Internal - see relay_read() for details.
+ *
+ * Returns the number of bytes read, 0 if none, negative on failure.
+ */
+static ssize_t
+__relay_read(struct rchan_reader *reader, char *buf, size_t count, u32 read_offset, u32 *new_offset, u32 *actual_read_offset, int wait)
+{
+ int err = 0;
+ ssize_t read_count = 0;
+ struct rchan *rchan = reader->rchan;
+
+ if (!wait && !rchan->initialized)
+ return -EAGAIN;
+
+ if (using_lockless(rchan))
+ read_offset &= idx_mask(rchan);
+
+ if (read_offset >= rchan->n_bufs * rchan->buf_size) {
+ *new_offset = 0;
+ if (!wait)
+ return -EAGAIN;
+ else
+ return -EINTR;
+ }
+
+ if (buf != NULL && wait) {
+ err = wait_event_interruptible(rchan->read_wait,
+ ((rchan->finalized == 1) ||
+ (atomic_read(&rchan->suspended) == 1) ||
+ (relay_get_offset(rchan, NULL) != read_offset)));
+
+ if (rchan->finalized)
+ return 0;
+
+ if (reader->offset_changed) {
+ reader->offset_changed = 0;
+ return -EINTR;
+ }
+
+ if (err)
+ return err;
+ }
+
+ read_count = do_read(rchan, buf, count, read_offset, new_offset, actual_read_offset);
+
+ return read_count;
+}
+
+/**
+ * relay_read - read bytes from channel, relative to current reader pos
+ * @reader: channel reader
+ * @buf: user buf to read into, NULL if just getting info
+ * @count: bytes requested
+ * @wait: if non-zero, wait for something to read
+ * @actual_read_offset: set read offset actually used, must not be NULL
+ *
+ * Reads count bytes from the channel, or as much as is available within
+ * the sub-buffer currently being read. The read offset that will be
+ * read from is the position contained within the reader object. If the
+ * wait flag is set, buf is non-NULL, and there is nothing available,
+ * it will wait until there is. If the wait flag is 0 and there is
+ * nothing available, -EAGAIN is returned. If buf is NULL, the value
+ * returned is the number of bytes that would have been read.
+ * actual_read_offset is the value that should be passed as the read
+ * offset to relay_bytes_consumed, needed only if the reader is not
+ * auto-consuming and the channel is MODE_NO_OVERWRITE, but in any case,
+ * it must not be NULL. See Documentation/filesystems/relayfs.txt for
+ * more details.
+ */
+ssize_t
+relay_read(struct rchan_reader *reader, char *buf, size_t count, int wait, u32 *actual_read_offset)
+{
+ u32 new_offset;
+ u32 read_offset;
+ ssize_t read_count;
+
+ if (reader == NULL || reader->rchan == NULL)
+ return -EBADF;
+
+ if (actual_read_offset == NULL)
+ return -EINVAL;
+
+ if (reader->vfs_reader)
+ read_offset = (u32)(reader->pos.file->f_pos);
+ else
+ read_offset = reader->pos.f_pos;
+ *actual_read_offset = read_offset;
+
+ read_count = __relay_read(reader, buf, count, read_offset,
+ &new_offset, actual_read_offset, wait);
+
+ if (read_count < 0)
+ return read_count;
+
+ if (reader->vfs_reader)
+ reader->pos.file->f_pos = new_offset;
+ else
+ reader->pos.f_pos = new_offset;
+
+ if (reader->auto_consume && ((read_count) || (new_offset != read_offset)))
+ __reader_bytes_consumed(reader, read_count, *actual_read_offset);
+
+ if (read_count == 0 && !wait)
+ return -EAGAIN;
+
+ return read_count;
+}
+
+/**
+ * relay_bytes_avail - number of bytes available in current sub-buffer
+ * @reader: channel reader
+ *
+ * Returns the number of bytes available relative to the reader's
+ * current read position within the corresponding sub-buffer, 0 if
+ * there is nothing available. See Documentation/filesystems/relayfs.txt
+ * for more details.
+ */
+ssize_t
+relay_bytes_avail(struct rchan_reader *reader)
+{
+ u32 f_pos;
+ u32 new_offset;
+ u32 actual_read_offset;
+ ssize_t bytes_read;
+
+ if (reader == NULL || reader->rchan == NULL)
+ return -EBADF;
+
+ if (reader->vfs_reader)
+ f_pos = (u32)reader->pos.file->f_pos;
+ else
+ f_pos = reader->pos.f_pos;
+ new_offset = f_pos;
+
+ bytes_read = __relay_read(reader, NULL, reader->rchan->buf_size,
+ f_pos, &new_offset, &actual_read_offset, 0);
+
+ if ((new_offset != f_pos) &&
+ ((bytes_read == -EINTR) || (bytes_read == 0)))
+ bytes_read = -EAGAIN;
+ else if ((bytes_read < 0) && (bytes_read != -EAGAIN))
+ bytes_read = 0;
+
+ return bytes_read;
+}
+
+/**
+ * rchan_empty - boolean, is the channel empty wrt reader?
+ * @reader: channel reader
+ *
+ * Returns 1 if the channel is empty, 0 otherwise.
+ */
+int
+rchan_empty(struct rchan_reader *reader)
+{
+ ssize_t avail_count;
+ u32 buffers_ready;
+ struct rchan *rchan = reader->rchan;
+ u32 cur_idx, curbuf_bytes;
+ int mapped;
+
+ if (atomic_read(&rchan->suspended) == 1)
+ return 0;
+
+ mapped = atomic_read(&rchan->mapped);
+
+ if (mapped && bulk_delivery(rchan)) {
+ buffers_ready = rchan->bufs_produced - rchan->bufs_consumed;
+ return buffers_ready ? 0 : 1;
+ }
+
+ if (mapped && packet_delivery(rchan)) {
+ buffers_ready = rchan->bufs_produced - rchan->bufs_consumed;
+ if (buffers_ready)
+ return 0;
+ else {
+ cur_idx = relay_get_offset(rchan, NULL);
+ curbuf_bytes = cur_idx % rchan->buf_size;
+ return curbuf_bytes == rchan->bytes_consumed ? 1 : 0;
+ }
+ }
+
+ avail_count = relay_bytes_avail(reader);
+
+ return avail_count ? 0 : 1;
+}
+
+/**
+ * rchan_full - boolean, is the channel full wrt consuming reader?
+ * @reader: channel reader
+ *
+ * Returns 1 if the channel is full, 0 otherwise.
+ */
+int
+rchan_full(struct rchan_reader *reader)
+{
+ u32 buffers_ready;
+ struct rchan *rchan = reader->rchan;
+
+ if (mode_continuous(rchan))
+ return 0;
+
+ buffers_ready = rchan->bufs_produced - rchan->bufs_consumed;
+
+ return buffers_ready > reader->rchan->n_bufs - 1 ? 1 : 0;
+}
+
+/**
+ * relay_info - get status and other information about a relay channel
+ * @rchan_id: relay channel id
+ * @rchan_info: pointer to the rchan_info struct to be filled in
+ *
+ * Fills in an rchan_info struct with channel status and attribute
+ * information. See Documentation/filesystems/relayfs.txt for details.
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+int
+relay_info(int rchan_id, struct rchan_info *rchan_info)
+{
+ int i;
+ struct rchan *rchan;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ rchan_info->flags = rchan->flags;
+ rchan_info->buf_size = rchan->buf_size;
+ rchan_info->buf_addr = rchan->buf;
+ rchan_info->alloc_size = rchan->alloc_size;
+ rchan_info->n_bufs = rchan->n_bufs;
+ rchan_info->cur_idx = relay_get_offset(rchan, NULL);
+ rchan_info->bufs_produced = rchan->bufs_produced;
+ rchan_info->bufs_consumed = rchan->bufs_consumed;
+ rchan_info->buf_id = rchan->buf_id;
+
+ for (i = 0; i < rchan->n_bufs; i++) {
+ rchan_info->unused_bytes[i] = rchan->unused_bytes[i];
+ if (using_lockless(rchan))
+ rchan_info->buffer_complete[i] = (atomic_read(&fill_count(rchan, i)) == rchan->buf_size);
+ else
+ rchan_info->buffer_complete[i] = 0;
+ }
+
+ rchan_put(rchan);
+
+ return 0;
+}
+
+/**
+ * __add_rchan_reader - creates and adds a reader to a channel
+ * @rchan: relay channel
+ * @filp: the file associated with rchan, if applicable
+ * @auto_consume: boolean, whether reader's reads automatically consume
+ * @map_reader: boolean, whether reader's reading via a channel mapping
+ *
+ * Returns a pointer to the reader object created, NULL if unsuccessful
+ *
+ * Creates and initializes an rchan_reader object for reading the channel.
+ * If filp is non-NULL, the reader is a VFS reader, otherwise not.
+ *
+ * If the reader is a map reader, it isn't considered a VFS reader for
+ * our purposes. Also, map_readers can't be auto-consuming.
+ */
+struct rchan_reader *
+__add_rchan_reader(struct rchan *rchan, struct file *filp, int auto_consume, int map_reader)
+{
+ struct rchan_reader *reader;
+ u32 will_read;
+
+ reader = kmalloc(sizeof(struct rchan_reader), GFP_KERNEL);
+
+ if (reader) {
+ write_lock(&rchan->open_readers_lock);
+ reader->rchan = rchan;
+ if (filp) {
+ reader->vfs_reader = 1;
+ reader->pos.file = filp;
+ } else {
+ reader->vfs_reader = 0;
+ reader->pos.f_pos = 0;
+ }
+ reader->map_reader = map_reader;
+ reader->auto_consume = auto_consume;
+
+ if (!map_reader) {
+ will_read = rchan->bufs_produced % rchan->n_bufs;
+ if (!will_read && atomic_read(&rchan->suspended))
+ will_read = rchan->n_bufs;
+ reader->bufs_consumed = rchan->bufs_produced - will_read;
+ rchan->bufs_consumed = reader->bufs_consumed;
+ rchan->bytes_consumed = reader->bytes_consumed = 0;
+ reader->offset_changed = 0;
+ }
+
+ list_add(&reader->list, &rchan->open_readers);
+ write_unlock(&rchan->open_readers_lock);
+ }
+
+ return reader;
+}
+
+/**
+ * add_rchan_reader - create a reader for a channel
+ * @rchan_id: relay channel handle
+ * @auto_consume: boolean, whether reader's reads automatically consume
+ *
+ * Returns a pointer to the reader object created, NULL if unsuccessful
+ *
+ * Creates and initializes an rchan_reader object for reading the channel.
+ * This function is useful only for non-VFS readers.
+ */
+struct rchan_reader *
+add_rchan_reader(int rchan_id, int auto_consume)
+{
+ struct rchan *rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return NULL;
+
+ return __add_rchan_reader(rchan, NULL, auto_consume, 0);
+}
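Putting the pieces together for an in-kernel (non-VFS) consumer — a hedged sketch with minimal error handling; the destination buffer is assumed to be user-space, as relay_read() copies with copy_to_user():

```c
/* Hypothetical reader lifecycle: create an auto-consuming reader,
 * drain one chunk, and tear the reader down.  With auto_consume set
 * there is no need to call relay_bytes_consumed() afterwards. */
static ssize_t drain_once(int chan_id, char __user *buf, size_t len)
{
	struct rchan_reader *reader;
	u32 actual_read_offset;
	ssize_t n;

	reader = add_rchan_reader(chan_id, 1);	/* auto-consuming */
	if (!reader)
		return -ENOMEM;

	/* wait == 0: returns -EAGAIN if nothing is available */
	n = relay_read(reader, buf, len, 0, &actual_read_offset);

	remove_rchan_reader(reader);
	return n;
}
```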
+
+/**
+ * add_map_reader - create a map reader for a channel
+ * @rchan_id: relay channel handle
+ *
+ * Returns a pointer to the reader object created, NULL if unsuccessful
+ *
+ * Creates and initializes an rchan_reader object for reading the channel.
+ * This function is useful only for map readers.
+ */
+struct rchan_reader *
+add_map_reader(int rchan_id)
+{
+ struct rchan *rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return NULL;
+
+ return __add_rchan_reader(rchan, NULL, 0, 1);
+}
+
+/**
+ * __remove_rchan_reader - destroy a channel reader
+ * @reader: channel reader
+ *
+ * Internal - removes reader from the open readers list, and frees it.
+ */
+void
+__remove_rchan_reader(struct rchan_reader *reader)
+{
+ struct list_head *p;
+ struct rchan_reader *found_reader = NULL;
+
+ write_lock(&reader->rchan->open_readers_lock);
+ list_for_each(p, &reader->rchan->open_readers) {
+ found_reader = list_entry(p, struct rchan_reader, list);
+ if (found_reader == reader) {
+ list_del(&found_reader->list);
+ break;
+ }
+ }
+ write_unlock(&reader->rchan->open_readers_lock);
+
+ if (found_reader)
+ kfree(found_reader);
+}
+
+/**
+ * remove_rchan_reader - destroy a channel reader
+ * @reader: channel reader
+ *
+ * Finds and removes the given reader from the channel. This function
+ * is useful only for non-VFS readers.
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+int
+remove_rchan_reader(struct rchan_reader *reader)
+{
+ int err = 0;
+
+ if (reader) {
+ rchan_put(reader->rchan);
+ __remove_rchan_reader(reader);
+ } else
+ err = -EINVAL;
+
+ return err;
+}
+
+/**
+ * remove_map_reader - destroy a map reader
+ * @reader: channel reader
+ *
+ * Finds and removes the given map reader from the channel. This function
+ * is useful only for map readers.
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+int
+remove_map_reader(struct rchan_reader *reader)
+{
+ return remove_rchan_reader(reader);
+}
+
+EXPORT_SYMBOL(relay_open);
+EXPORT_SYMBOL(relay_close);
+EXPORT_SYMBOL(relay_reset);
+EXPORT_SYMBOL(relay_reserve);
+EXPORT_SYMBOL(relay_commit);
+EXPORT_SYMBOL(relay_read);
+EXPORT_SYMBOL(relay_write);
+EXPORT_SYMBOL(relay_bytes_avail);
+EXPORT_SYMBOL(relay_buffers_consumed);
+EXPORT_SYMBOL(relay_bytes_consumed);
+EXPORT_SYMBOL(relay_info);
+EXPORT_SYMBOL(relay_discard_init_buf);
+
+
--- /dev/null
+/*
+ * RelayFS locking scheme implementation.
+ *
+ * Copyright (C) 1999, 2000, 2001, 2002 - Karim Yaghmour (karim@opersys.com)
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ *
+ * This file is released under the GPL.
+ */
+
+#include <asm/relay.h>
+#include "relay_locking.h"
+#include "resize.h"
+
+/**
+ * switch_buffers - switches between read and write buffers
+ * @cur_time: current time
+ * @cur_tsc: the TSC associated with cur_time, if applicable
+ * @rchan: the channel
+ * @finalizing: if true, don't start a new buffer
+ * @resetting: if true, reset the channel to its beginning state
+ * @finalize_buffer_only: if true, only finalize the current buffer,
+ * don't start a new one
+ *
+ * This should be called with interrupts disabled.
+ */
+static void
+switch_buffers(struct timeval cur_time,
+ u32 cur_tsc,
+ struct rchan *rchan,
+ int finalizing,
+ int resetting,
+ int finalize_buffer_only)
+{
+ char *chan_buf_end;
+ int bytes_written;
+
+ if (!rchan->half_switch) {
+ bytes_written = rchan->callbacks->buffer_end(rchan->id,
+ cur_write_pos(rchan), write_buf_end(rchan),
+ cur_time, cur_tsc, using_tsc(rchan));
+ if (bytes_written == 0)
+ rchan->unused_bytes[rchan->buf_idx % rchan->n_bufs] =
+ write_buf_end(rchan) - cur_write_pos(rchan);
+ }
+
+ if (finalize_buffer_only) {
+ rchan->bufs_produced++;
+ return;
+ }
+
+ chan_buf_end = rchan->buf + rchan->n_bufs * rchan->buf_size;
+ if ((write_buf(rchan) + rchan->buf_size >= chan_buf_end) || resetting)
+ write_buf(rchan) = rchan->buf;
+ else
+ write_buf(rchan) += rchan->buf_size;
+ write_buf_end(rchan) = write_buf(rchan) + rchan->buf_size;
+ write_limit(rchan) = write_buf_end(rchan) - rchan->end_reserve;
+ cur_write_pos(rchan) = write_buf(rchan);
+
+ rchan->buf_start_time = cur_time;
+ rchan->buf_start_tsc = cur_tsc;
+
+ if (resetting)
+ rchan->buf_idx = 0;
+ else
+ rchan->buf_idx++;
+ rchan->buf_id++;
+
+ if (!packet_delivery(rchan))
+ rchan->unused_bytes[rchan->buf_idx % rchan->n_bufs] = 0;
+
+ if (resetting) {
+ rchan->bufs_produced = rchan->bufs_produced + rchan->n_bufs;
+ rchan->bufs_produced -= rchan->bufs_produced % rchan->n_bufs;
+ rchan->bufs_consumed = rchan->bufs_produced;
+ rchan->bytes_consumed = 0;
+ update_readers_consumed(rchan, rchan->bufs_consumed, rchan->bytes_consumed);
+ } else if (!rchan->half_switch)
+ rchan->bufs_produced++;
+
+ rchan->half_switch = 0;
+
+ if (!finalizing) {
+ bytes_written = rchan->callbacks->buffer_start(rchan->id,
+     cur_write_pos(rchan), rchan->buf_id,
+     cur_time, cur_tsc, using_tsc(rchan));
+ cur_write_pos(rchan) += bytes_written;
+ }
+}
+
+/**
+ * locking_reserve - reserve a slot in the buffer for an event.
+ * @rchan: the channel
+ * @slot_len: the length of the slot to reserve
+ * @ts: variable that will receive the time the slot was reserved
+ * @tsc: the timestamp counter associated with time
+ * @err: receives the result flags
+ * @interrupting: if this write is interrupting another, set to non-zero
+ *
+ * Returns pointer to the beginning of the reserved slot, NULL if error.
+ *
+ * The err value contains the result flags and is an ORed combination
+ * of the following:
+ *
+ * RELAY_BUFFER_SWITCH_NONE - no buffer switch occurred
+ * RELAY_BUFFER_SWITCH - buffer switch occurred
+ * RELAY_WRITE_DISCARD - event should be discarded (all buffers are full)
+ * RELAY_WRITE_TOO_LONG - event won't fit into even an empty buffer
+ */
+inline char *
+locking_reserve(struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *ts,
+ u32 *tsc,
+ int *err,
+ int *interrupting)
+{
+ u32 buffers_ready;
+ int bytes_written;
+
+ *err = RELAY_BUFFER_SWITCH_NONE;
+
+ if (slot_len >= rchan->buf_size) {
+ *err = RELAY_WRITE_DISCARD | RELAY_WRITE_TOO_LONG;
+ return NULL;
+ }
+
+ if (rchan->initialized == 0) {
+ rchan->initialized = 1;
+ get_timestamp(&rchan->buf_start_time,
+ &rchan->buf_start_tsc, rchan);
+ rchan->unused_bytes[0] = 0;
+ bytes_written = rchan->callbacks->buffer_start(
+ rchan->id, cur_write_pos(rchan),
+ rchan->buf_id, rchan->buf_start_time,
+ rchan->buf_start_tsc, using_tsc(rchan));
+ cur_write_pos(rchan) += bytes_written;
+ *tsc = get_time_delta(ts, rchan);
+ return cur_write_pos(rchan);
+ }
+
+ *tsc = get_time_delta(ts, rchan);
+
+ if (in_progress_event_size(rchan)) {
+ interrupted_pos(rchan) = cur_write_pos(rchan);
+ cur_write_pos(rchan) = in_progress_event_pos(rchan)
+ + in_progress_event_size(rchan)
+ + interrupting_size(rchan);
+ *interrupting = 1;
+ } else {
+ in_progress_event_pos(rchan) = cur_write_pos(rchan);
+ in_progress_event_size(rchan) = slot_len;
+ interrupting_size(rchan) = 0;
+ }
+
+ if (cur_write_pos(rchan) + slot_len > write_limit(rchan)) {
+ if (atomic_read(&rchan->suspended) == 1) {
+ in_progress_event_pos(rchan) = NULL;
+ in_progress_event_size(rchan) = 0;
+ interrupting_size(rchan) = 0;
+ *err = RELAY_WRITE_DISCARD;
+ return NULL;
+ }
+
+ buffers_ready = rchan->bufs_produced - rchan->bufs_consumed;
+ if (buffers_ready == rchan->n_bufs - 1) {
+ if (!mode_continuous(rchan)) {
+ atomic_set(&rchan->suspended, 1);
+ in_progress_event_pos(rchan) = NULL;
+ in_progress_event_size(rchan) = 0;
+ interrupting_size(rchan) = 0;
+ get_timestamp(ts, tsc, rchan);
+ switch_buffers(*ts, *tsc, rchan, 0, 0, 1);
+ recalc_time_delta(ts, tsc, rchan);
+ rchan->half_switch = 1;
+
+ cur_write_pos(rchan) = write_buf_end(rchan) - 1;
+ *err = RELAY_BUFFER_SWITCH | RELAY_WRITE_DISCARD;
+ return NULL;
+ }
+ }
+
+ get_timestamp(ts, tsc, rchan);
+ switch_buffers(*ts, *tsc, rchan, 0, 0, 0);
+ recalc_time_delta(ts, tsc, rchan);
+ *err = RELAY_BUFFER_SWITCH;
+ }
+
+ return cur_write_pos(rchan);
+}
+
+/**
+ * locking_commit - commit a reserved slot in the buffer
+ * @rchan: the channel
+ * @from: commit the length starting here
+ * @len: length committed
+ * @deliver: if true, deliver the sub-buffer via the deliver callback
+ * @interrupting: true if this commit is for an interrupting write
+ *
+ * Commits len bytes and calls deliver callback if applicable.
+ */
+inline void
+locking_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int deliver,
+ int interrupting)
+{
+ cur_write_pos(rchan) += len;
+
+ if (interrupting) {
+ cur_write_pos(rchan) = interrupted_pos(rchan);
+ interrupting_size(rchan) += len;
+ } else {
+ in_progress_event_size(rchan) = 0;
+ if (interrupting_size(rchan)) {
+ cur_write_pos(rchan) += interrupting_size(rchan);
+ interrupting_size(rchan) = 0;
+ }
+ }
+
+ if (deliver) {
+ if (bulk_delivery(rchan)) {
+ u32 cur_idx = cur_write_pos(rchan) - rchan->buf;
+ u32 cur_bufno = cur_idx / rchan->buf_size;
+ from = rchan->buf + cur_bufno * rchan->buf_size;
+ len = cur_idx - cur_bufno * rchan->buf_size;
+ }
+ rchan->callbacks->deliver(rchan->id, from, len);
+ expand_check(rchan);
+ }
+}
+
+/**
+ * locking_finalize - finalize the last buffer at end of channel use
+ * @rchan: the channel
+ */
+inline void
+locking_finalize(struct rchan *rchan)
+{
+ unsigned long int flags;
+ struct timeval time;
+ u32 tsc;
+
+ local_irq_save(flags);
+ get_timestamp(&time, &tsc, rchan);
+ switch_buffers(time, tsc, rchan, 1, 0, 0);
+ local_irq_restore(flags);
+}
+
+/**
+ * locking_get_offset - get current and max 'file' offsets for VFS
+ * @rchan: the channel
+ * @max_offset: maximum channel offset
+ *
+ * Returns the current and maximum buffer offsets in VFS terms.
+ */
+u32
+locking_get_offset(struct rchan *rchan,
+ u32 *max_offset)
+{
+ if (max_offset)
+ *max_offset = rchan->buf_size * rchan->n_bufs - 1;
+
+ return cur_write_pos(rchan) - rchan->buf;
+}
+
+/**
+ * locking_reset - reset the channel
+ * @rchan: the channel
+ * @init: 1 if this is a first-time channel initialization
+ */
+void locking_reset(struct rchan *rchan, int init)
+{
+ if (init)
+ channel_lock(rchan) = SPIN_LOCK_UNLOCKED;
+ write_buf(rchan) = rchan->buf;
+ write_buf_end(rchan) = write_buf(rchan) + rchan->buf_size;
+ cur_write_pos(rchan) = write_buf(rchan);
+ write_limit(rchan) = write_buf_end(rchan) - rchan->end_reserve;
+ in_progress_event_pos(rchan) = NULL;
+ in_progress_event_size(rchan) = 0;
+ interrupted_pos(rchan) = NULL;
+ interrupting_size(rchan) = 0;
+}
+
+/**
+ * locking_reset_index - atomically set channel index to the beginning
+ * @rchan: the channel
+ * @old_idx: the channel index read by the caller
+ *
+ * If this fails, it means that something else just logged something
+ * and therefore we probably no longer want to do this. It's up to the
+ * caller anyway...
+ *
+ * Returns 0 if the index was successfully set, negative otherwise
+ */
+int
+locking_reset_index(struct rchan *rchan, u32 old_idx)
+{
+ unsigned long flags;
+ struct timeval time;
+ u32 tsc;
+ u32 cur_idx;
+
+ relay_lock_channel(rchan, flags);
+ cur_idx = locking_get_offset(rchan, NULL);
+ if (cur_idx != old_idx) {
+ relay_unlock_channel(rchan, flags);
+ return -1;
+ }
+
+ get_timestamp(&time, &tsc, rchan);
+ switch_buffers(time, tsc, rchan, 0, 1, 0);
+
+ relay_unlock_channel(rchan, flags);
+
+ return 0;
+}
--- /dev/null
+#ifndef _RELAY_LOCKING_H
+#define _RELAY_LOCKING_H
+
+extern char *
+locking_reserve(struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *time_stamp,
+ u32 *tsc,
+ int *err,
+ int *interrupting);
+
+extern void
+locking_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int deliver,
+ int interrupting);
+
+extern void
+locking_resume(struct rchan *rchan);
+
+extern void
+locking_finalize(struct rchan *rchan);
+
+extern u32
+locking_get_offset(struct rchan *rchan, u32 *max_offset);
+
+extern void
+locking_reset(struct rchan *rchan, int init);
+
+extern int
+locking_reset_index(struct rchan *rchan, u32 old_idx);
+
+#endif /* _RELAY_LOCKING_H */
--- /dev/null
+/*
+ * RelayFS lockless scheme implementation.
+ *
+ * Copyright (C) 1999, 2000, 2001, 2002 - Karim Yaghmour (karim@opersys.com)
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 2002, 2003 - Bob Wisniewski (bob@watson.ibm.com), IBM Corp
+ *
+ * This file is released under the GPL.
+ */
+
+#include <asm/relay.h>
+#include "relay_lockless.h"
+#include "resize.h"
+
+/**
+ * compare_and_store_volatile - atomically store nval in *ptr if *ptr == oval
+ * @ptr: ptr to the word that will receive the new value
+ * @oval: the value we think is currently in *ptr
+ * @nval: the value *ptr will get if we were right
+ *
+ * Returns 1 if the new value was stored, 0 otherwise.
+ */
+inline int
+compare_and_store_volatile(volatile u32 *ptr,
+ u32 oval,
+ u32 nval)
+{
+ u32 prev;
+
+ barrier();
+ prev = cmpxchg(ptr, oval, nval);
+ barrier();
+
+ return (prev == oval);
+}
+
+/**
+ * atomic_set_volatile - atomically set the value in ptr to nval.
+ * @ptr: ptr to the word that will receive the new value
+ * @nval: the new value
+ */
+inline void
+atomic_set_volatile(atomic_t *ptr,
+ u32 nval)
+{
+ barrier();
+ atomic_set(ptr, (int)nval);
+ barrier();
+}
+
+/**
+ * atomic_add_volatile - atomically add val to the value at ptr.
+ * @ptr: ptr to the word that will receive the addition
+ * @val: the value to add to *ptr
+ */
+inline void
+atomic_add_volatile(atomic_t *ptr, u32 val)
+{
+ barrier();
+ atomic_add((int)val, ptr);
+ barrier();
+}
+
+/**
+ * atomic_sub_volatile - atomically subtract val from the value at ptr.
+ * @ptr: ptr to the word that will receive the subtraction
+ * @val: the value to subtract from *ptr
+ */
+inline void
+atomic_sub_volatile(atomic_t *ptr, s32 val)
+{
+ barrier();
+ atomic_sub((int)val, ptr);
+ barrier();
+}
+
+/**
+ * lockless_commit - commit a reserved slot in the buffer
+ * @rchan: the channel
+ * @from: commit the length starting here
+ * @len: length committed
+ * @deliver: if non-zero, invoke the deliver callback
+ * @interrupting: not used
+ *
+ * Commits len bytes and calls deliver callback if applicable.
+ */
+inline void
+lockless_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int deliver,
+ int interrupting)
+{
+ u32 bufno, idx;
+
+ idx = from - rchan->buf;
+
+ if (len > 0) {
+ bufno = RELAY_BUFNO_GET(idx, offset_bits(rchan));
+ atomic_add_volatile(&fill_count(rchan, bufno), len);
+ }
+
+ if (deliver) {
+ u32 mask = offset_mask(rchan);
+ if (bulk_delivery(rchan)) {
+ from = rchan->buf + RELAY_BUF_OFFSET_CLEAR(idx, mask);
+ len += RELAY_BUF_OFFSET_GET(idx, mask);
+ }
+ rchan->callbacks->deliver(rchan->id, from, len);
+ expand_check(rchan);
+ }
+}
+
+/**
+ * get_buffer_end - get the address of the end of buffer
+ * @rchan: the channel
+ * @buf_idx: index into channel corresponding to address
+ */
+static inline char *
+get_buffer_end(struct rchan *rchan, u32 buf_idx)
+{
+ return rchan->buf
+ + RELAY_BUF_OFFSET_CLEAR(buf_idx, offset_mask(rchan))
+ + RELAY_BUF_SIZE(offset_bits(rchan));
+}
+
+
+/**
+ * finalize_buffer - utility function consolidating end-of-buffer tasks.
+ * @rchan: the channel
+ * @end_idx: index into buffer to write the end-buffer event at
+ * @size_lost: number of unused bytes at the end of the buffer
+ * @time_stamp: the time of the end-buffer event
+ * @tsc: the timestamp counter associated with time
+ * @resetting: are we resetting the channel?
+ *
+ * This function must be called with local irqs disabled.
+ */
+static inline void
+finalize_buffer(struct rchan *rchan,
+ u32 end_idx,
+ u32 size_lost,
+ struct timeval *time_stamp,
+ u32 *tsc,
+ int resetting)
+{
+ char* cur_write_pos;
+ char* write_buf_end;
+ u32 bufno;
+ int bytes_written;
+
+ cur_write_pos = rchan->buf + end_idx;
+ write_buf_end = get_buffer_end(rchan, end_idx - 1);
+
+ bytes_written = rchan->callbacks->buffer_end(rchan->id, cur_write_pos,
+ write_buf_end, *time_stamp, *tsc, using_tsc(rchan));
+ if (bytes_written == 0)
+ rchan->unused_bytes[rchan->buf_idx % rchan->n_bufs] = size_lost;
+
+ bufno = RELAY_BUFNO_GET(end_idx, offset_bits(rchan));
+ atomic_add_volatile(&fill_count(rchan, bufno), size_lost);
+ if (resetting) {
+ rchan->bufs_produced = rchan->bufs_produced + rchan->n_bufs;
+ rchan->bufs_produced -= rchan->bufs_produced % rchan->n_bufs;
+ rchan->bufs_consumed = rchan->bufs_produced;
+ rchan->bytes_consumed = 0;
+ update_readers_consumed(rchan, rchan->bufs_consumed, rchan->bytes_consumed);
+ } else
+ rchan->bufs_produced++;
+}
+
+/**
+ * lockless_finalize - finalize the last buffer at end of channel use
+ * @rchan: the channel
+ */
+inline void
+lockless_finalize(struct rchan *rchan)
+{
+ u32 event_end_idx;
+ u32 size_lost;
+ unsigned long int flags;
+ struct timeval time;
+ u32 tsc;
+
+ event_end_idx = RELAY_BUF_OFFSET_GET(idx(rchan), offset_mask(rchan));
+ size_lost = RELAY_BUF_SIZE(offset_bits(rchan)) - event_end_idx;
+
+ local_irq_save(flags);
+ get_timestamp(&time, &tsc, rchan);
+ finalize_buffer(rchan, idx(rchan) & idx_mask(rchan), size_lost,
+ &time, &tsc, 0);
+ local_irq_restore(flags);
+}
+
+/**
+ * discard_check: - determine whether a write should be discarded
+ * @rchan: the channel
+ * @old_idx: index into buffer where check for space should begin
+ * @write_len: the length of the write to check
+ * @time_stamp: the time of the end-buffer event
+ * @tsc: the timestamp counter associated with time
+ *
+ * The return value contains the result flags and is an ORed combination
+ * of the following:
+ *
+ * RELAY_WRITE_DISCARD_NONE - write should not be discarded
+ * RELAY_BUFFER_SWITCH - buffer switch occurred
+ * RELAY_WRITE_DISCARD - write should be discarded (all buffers are full)
+ * RELAY_WRITE_TOO_LONG - write won't fit into even an empty buffer
+ */
+static inline int
+discard_check(struct rchan *rchan,
+ u32 old_idx,
+ u32 write_len,
+ struct timeval *time_stamp,
+ u32 *tsc)
+{
+ u32 buffers_ready;
+ u32 offset_mask = offset_mask(rchan);
+ u8 offset_bits = offset_bits(rchan);
+ u32 idx_mask = idx_mask(rchan);
+ u32 size_lost;
+ unsigned long int flags;
+
+ if (write_len > RELAY_BUF_SIZE(offset_bits))
+ return RELAY_WRITE_DISCARD | RELAY_WRITE_TOO_LONG;
+
+ if (mode_continuous(rchan))
+ return RELAY_WRITE_DISCARD_NONE;
+
+ local_irq_save(flags);
+ if (atomic_read(&rchan->suspended) == 1) {
+ local_irq_restore(flags);
+ return RELAY_WRITE_DISCARD;
+ }
+ if (rchan->half_switch) {
+ local_irq_restore(flags);
+ return RELAY_WRITE_DISCARD_NONE;
+ }
+ buffers_ready = rchan->bufs_produced - rchan->bufs_consumed;
+ if (buffers_ready == rchan->n_bufs - 1) {
+ atomic_set(&rchan->suspended, 1);
+ size_lost = RELAY_BUF_SIZE(offset_bits)
+ - RELAY_BUF_OFFSET_GET(old_idx, offset_mask);
+ finalize_buffer(rchan, old_idx & idx_mask, size_lost,
+ time_stamp, tsc, 0);
+ rchan->half_switch = 1;
+ idx(rchan) = RELAY_BUF_OFFSET_CLEAR((old_idx & idx_mask), offset_mask(rchan)) + RELAY_BUF_SIZE(offset_bits) - 1;
+ local_irq_restore(flags);
+
+ return RELAY_BUFFER_SWITCH | RELAY_WRITE_DISCARD;
+ }
+ local_irq_restore(flags);
+
+ return RELAY_WRITE_DISCARD_NONE;
+}
+
+/**
+ * switch_buffers - switch over to a new sub-buffer
+ * @rchan: the channel
+ * @slot_len: the length of the slot needed for the current write
+ * @offset: the offset calculated for the new index
+ * @ts: timestamp
+ * @tsc: the timestamp counter associated with time
+ * @new_idx: the new calculated value of the buffer control index
+ * @old_idx: the value of the buffer control index when we were called
+ * @resetting: are we resetting the channel?
+ */
+static inline void
+switch_buffers(struct rchan *rchan,
+ u32 slot_len,
+ u32 offset,
+ struct timeval *ts,
+ u32 *tsc,
+ u32 new_idx,
+ u32 old_idx,
+ int resetting)
+{
+ u32 size_lost = rchan->end_reserve;
+ unsigned long int flags;
+ u32 idx_mask = idx_mask(rchan);
+ u8 offset_bits = offset_bits(rchan);
+ char *cur_write_pos;
+ u32 new_buf_no;
+ u32 start_reserve = rchan->start_reserve;
+
+ if (resetting)
+ size_lost = RELAY_BUF_SIZE(offset_bits(rchan)) - old_idx % rchan->buf_size;
+
+ if (offset > 0)
+ size_lost += slot_len - offset;
+ else
+ old_idx += slot_len;
+
+ local_irq_save(flags);
+ if (!rchan->half_switch)
+ finalize_buffer(rchan, old_idx & idx_mask, size_lost,
+ ts, tsc, resetting);
+ rchan->half_switch = 0;
+ rchan->buf_start_time = *ts;
+ rchan->buf_start_tsc = *tsc;
+ local_irq_restore(flags);
+
+ cur_write_pos = rchan->buf + RELAY_BUF_OFFSET_CLEAR((new_idx
+ & idx_mask), offset_mask(rchan));
+ if (resetting)
+ rchan->buf_idx = 0;
+ else
+ rchan->buf_idx++;
+ rchan->buf_id++;
+
+ rchan->unused_bytes[rchan->buf_idx % rchan->n_bufs] = 0;
+
+ rchan->callbacks->buffer_start(rchan->id, cur_write_pos,
+ rchan->buf_id, *ts, *tsc, using_tsc(rchan));
+ new_buf_no = RELAY_BUFNO_GET(new_idx & idx_mask, offset_bits);
+ atomic_sub_volatile(&fill_count(rchan, new_buf_no),
+ RELAY_BUF_SIZE(offset_bits) - start_reserve);
+ if (atomic_read(&fill_count(rchan, new_buf_no)) < start_reserve)
+ atomic_set_volatile(&fill_count(rchan, new_buf_no),
+ start_reserve);
+}
+
+/**
+ * lockless_reserve_slow - the slow reserve path in the lockless scheme
+ * @rchan: the channel
+ * @slot_len: the length of the slot to reserve
+ * @ts: variable that will receive the time the slot was reserved
+ * @tsc: the timestamp counter associated with time
+ * @old_idx: the value of the buffer control index when we were called
+ * @err: receives the result flags
+ *
+ * Returns pointer to the beginning of the reserved slot, NULL if error.
+ *
+ * err values same as for lockless_reserve.
+ */
+static inline char *
+lockless_reserve_slow(struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *ts,
+ u32 *tsc,
+ u32 old_idx,
+ int *err)
+{
+ u32 new_idx, offset;
+ unsigned long int flags;
+ u32 offset_mask = offset_mask(rchan);
+ u32 idx_mask = idx_mask(rchan);
+ u32 start_reserve = rchan->start_reserve;
+ u32 end_reserve = rchan->end_reserve;
+ int discard_event;
+ u32 reserved_idx;
+ char *cur_write_pos;
+ int initializing = 0;
+
+ *err = RELAY_BUFFER_SWITCH_NONE;
+
+ discard_event = discard_check(rchan, old_idx, slot_len, ts, tsc);
+ if (discard_event != RELAY_WRITE_DISCARD_NONE) {
+ *err = discard_event;
+ return NULL;
+ }
+
+ local_irq_save(flags);
+ if (rchan->initialized == 0) {
+ rchan->initialized = initializing = 1;
+ idx(rchan) = rchan->start_reserve + rchan->rchan_start_reserve;
+ }
+ local_irq_restore(flags);
+
+ do {
+ old_idx = idx(rchan);
+ new_idx = old_idx + slot_len;
+
+ offset = RELAY_BUF_OFFSET_GET(new_idx + end_reserve,
+ offset_mask);
+ if ((offset < slot_len) && (offset > 0)) {
+ reserved_idx = RELAY_BUF_OFFSET_CLEAR(new_idx
+ + end_reserve, offset_mask) + start_reserve;
+ new_idx = reserved_idx + slot_len;
+ } else if (offset < slot_len) {
+ reserved_idx = old_idx;
+ new_idx = RELAY_BUF_OFFSET_CLEAR(new_idx
+ + end_reserve, offset_mask) + start_reserve;
+ } else
+ reserved_idx = old_idx;
+ get_timestamp(ts, tsc, rchan);
+ } while (!compare_and_store_volatile(&idx(rchan), old_idx, new_idx));
+
+ reserved_idx &= idx_mask;
+
+ if (initializing == 1) {
+ cur_write_pos = rchan->buf
+ + RELAY_BUF_OFFSET_CLEAR((old_idx & idx_mask),
+ offset_mask(rchan));
+ rchan->buf_start_time = *ts;
+ rchan->buf_start_tsc = *tsc;
+ rchan->unused_bytes[0] = 0;
+
+ rchan->callbacks->buffer_start(rchan->id, cur_write_pos,
+ rchan->buf_id, *ts, *tsc, using_tsc(rchan));
+ }
+
+ if (offset < slot_len) {
+ switch_buffers(rchan, slot_len, offset, ts, tsc, new_idx,
+ old_idx, 0);
+ *err = RELAY_BUFFER_SWITCH;
+ }
+
+ /* If not using TSC, need to calc time delta */
+ recalc_time_delta(ts, tsc, rchan);
+
+ return rchan->buf + reserved_idx;
+}
+
+/**
+ * lockless_reserve - reserve a slot in the buffer for an event.
+ * @rchan: the channel
+ * @slot_len: the length of the slot to reserve
+ * @ts: variable that will receive the time the slot was reserved
+ * @tsc: the timestamp counter associated with time
+ * @err: receives the result flags
+ * @interrupting: not used
+ *
+ * Returns pointer to the beginning of the reserved slot, NULL if error.
+ *
+ * The err value contains the result flags and is an ORed combination
+ * of the following:
+ *
+ * RELAY_BUFFER_SWITCH_NONE - no buffer switch occurred
+ * RELAY_EVENT_DISCARD_NONE - event should not be discarded
+ * RELAY_BUFFER_SWITCH - buffer switch occurred
+ * RELAY_EVENT_DISCARD - event should be discarded (all buffers are full)
+ * RELAY_EVENT_TOO_LONG - event won't fit into even an empty buffer
+ */
+inline char *
+lockless_reserve(struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *ts,
+ u32 *tsc,
+ int *err,
+ int *interrupting)
+{
+ u32 old_idx, new_idx, offset;
+ u32 offset_mask = offset_mask(rchan);
+
+ do {
+ old_idx = idx(rchan);
+ new_idx = old_idx + slot_len;
+
+ offset = RELAY_BUF_OFFSET_GET(new_idx + rchan->end_reserve,
+ offset_mask);
+ if (offset < slot_len)
+ return lockless_reserve_slow(rchan, slot_len,
+ ts, tsc, old_idx, err);
+ get_time_or_tsc(ts, tsc, rchan);
+ } while (!compare_and_store_volatile(&idx(rchan), old_idx, new_idx));
+
+ /* If not using TSC, need to calc time delta */
+ recalc_time_delta(ts, tsc, rchan);
+
+ *err = RELAY_BUFFER_SWITCH_NONE;
+
+ return rchan->buf + (old_idx & idx_mask(rchan));
+}
+
+/**
+ * lockless_get_offset - get current and max channel offsets
+ * @rchan: the channel
+ * @max_offset: maximum channel offset
+ *
+ * Returns the current and maximum channel offsets.
+ */
+u32
+lockless_get_offset(struct rchan *rchan,
+ u32 *max_offset)
+{
+ if (max_offset)
+ *max_offset = rchan->buf_size * rchan->n_bufs - 1;
+
+ return rchan->initialized ? idx(rchan) & idx_mask(rchan) : 0;
+}
+
+/**
+ * lockless_reset - reset the channel
+ * @rchan: the channel
+ * @init: 1 if this is a first-time channel initialization
+ */
+void lockless_reset(struct rchan *rchan, int init)
+{
+ int i;
+
+ /* Start first buffer at 0 - (end_reserve + 1) so that it
+ gets initialized via buffer_start callback as well. */
+ idx(rchan) = 0UL - (rchan->end_reserve + 1);
+ idx_mask(rchan) =
+ (1UL << (bufno_bits(rchan) + offset_bits(rchan))) - 1;
+ atomic_set(&fill_count(rchan, 0),
+ (int)rchan->start_reserve +
+ (int)rchan->rchan_start_reserve);
+ for (i = 1; i < rchan->n_bufs; i++)
+ atomic_set(&fill_count(rchan, i),
+ (int)RELAY_BUF_SIZE(offset_bits(rchan)));
+}
+
+/**
+ * lockless_reset_index - atomically set channel index to the beginning
+ * @rchan: the channel
+ * @old_idx: the current index
+ *
+ * If this fails, it means that something else just logged something
+ * and therefore we probably no longer want to do this. It's up to the
+ * caller anyway...
+ *
+ * Returns 0 if the index was successfully set, negative otherwise
+ */
+int
+lockless_reset_index(struct rchan *rchan, u32 old_idx)
+{
+ struct timeval ts;
+ u32 tsc;
+ u32 new_idx;
+
+ if (compare_and_store_volatile(&idx(rchan), old_idx, 0)) {
+ new_idx = rchan->start_reserve;
+ switch_buffers(rchan, 0, 0, &ts, &tsc, new_idx, old_idx, 1);
+ return 0;
+ } else
+ return -1;
+}
--- /dev/null
+#ifndef _RELAY_LOCKLESS_H
+#define _RELAY_LOCKLESS_H
+
+extern char *
+lockless_reserve(struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *time_stamp,
+ u32 *tsc,
+ int *err,
+ int *interrupting);
+
+extern void
+lockless_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int deliver,
+ int interrupting);
+
+extern void
+lockless_resume(struct rchan *rchan);
+
+extern void
+lockless_finalize(struct rchan *rchan);
+
+extern u32
+lockless_get_offset(struct rchan *rchan, u32 *max_offset);
+
+extern void
+lockless_reset(struct rchan *rchan, int init);
+
+extern int
+lockless_reset_index(struct rchan *rchan, u32 old_idx);
+
+#endif /* _RELAY_LOCKLESS_H */
--- /dev/null
+/*
+ * RelayFS buffer management and resizing code.
+ *
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 1999, 2000, 2001, 2002 - Karim Yaghmour (karim@opersys.com)
+ *
+ * This file is released under the GPL.
+ */
+
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+#include <linux/mm.h>
+#include <asm/relay.h>
+#include "resize.h"
+
+/**
+ * alloc_page_array - alloc array to hold pages, but not pages
+ * @size: the total size of the memory represented by the page array
+ * @page_count: the number of pages the array can hold
+ * @err: receives 0 on success, negative otherwise
+ *
+ * Returns a pointer to the page array if successful, NULL otherwise.
+ */
+static struct page **
+alloc_page_array(int size, int *page_count, int *err)
+{
+ int n_pages;
+ struct page **page_array;
+ int page_array_size;
+
+ *err = 0;
+
+ size = PAGE_ALIGN(size);
+ n_pages = size >> PAGE_SHIFT;
+ page_array_size = n_pages * sizeof(struct page *);
+ page_array = kmalloc(page_array_size, GFP_KERNEL);
+ if (page_array == NULL) {
+ *err = -ENOMEM;
+ return NULL;
+ }
+ *page_count = n_pages;
+ memset(page_array, 0, page_array_size);
+
+ return page_array;
+}
+
+/**
+ * free_page_array - free array to hold pages, but not pages
+ * @page_array: pointer to the page array
+ */
+static inline void
+free_page_array(struct page **page_array)
+{
+ kfree(page_array);
+}
+
+/**
+ * depopulate_page_array - free and unreserve all pages in the array
+ * @page_array: pointer to the page array
+ * @page_count: number of pages to free
+ */
+static void
+depopulate_page_array(struct page **page_array, int page_count)
+{
+ int i;
+
+ for (i = 0; i < page_count; i++) {
+ ClearPageReserved(page_array[i]);
+ __free_page(page_array[i]);
+ }
+}
+
+/**
+ * populate_page_array - allocate and reserve pages
+ * @page_array: pointer to the page array
+ * @page_count: number of pages to allocate
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static int
+populate_page_array(struct page **page_array, int page_count)
+{
+ int i;
+
+ for (i = 0; i < page_count; i++) {
+ page_array[i] = alloc_page(GFP_KERNEL);
+ if (unlikely(!page_array[i])) {
+ depopulate_page_array(page_array, i);
+ return -ENOMEM;
+ }
+ SetPageReserved(page_array[i]);
+ }
+ return 0;
+}
+
+/**
+ * alloc_rchan_buf - allocate the initial channel buffer
+ * @size: total size of the buffer
+ * @page_array: receives a pointer to the buffer's page array
+ * @page_count: receives the number of pages allocated
+ *
+ * Returns a pointer to the resulting buffer, NULL if unsuccessful
+ */
+void *
+alloc_rchan_buf(unsigned long size, struct page ***page_array, int *page_count)
+{
+ void *mem;
+ int err;
+
+ *page_array = alloc_page_array(size, page_count, &err);
+ if (!*page_array)
+ return NULL;
+
+ err = populate_page_array(*page_array, *page_count);
+ if (err) {
+ free_page_array(*page_array);
+ *page_array = NULL;
+ return NULL;
+ }
+
+ mem = vmap(*page_array, *page_count, GFP_KERNEL, PAGE_KERNEL);
+ if (!mem) {
+ depopulate_page_array(*page_array, *page_count);
+ free_page_array(*page_array);
+ *page_array = NULL;
+ return NULL;
+ }
+ memset(mem, 0, size);
+
+ return mem;
+}
+
+/**
+ * free_rchan_buf - free a channel buffer
+ * @buf: pointer to the buffer to free
+ * @page_array: pointer to the buffer's page array
+ * @page_count: number of pages in page array
+ */
+void
+free_rchan_buf(void *buf, struct page **page_array, int page_count)
+{
+ vunmap(buf);
+ depopulate_page_array(page_array, page_count);
+ free_page_array(page_array);
+}
+
+/**
+ * expand_check - check whether the channel needs expanding
+ * @rchan: the channel
+ *
+ * If the channel needs expanding, the needs_resize callback is
+ * called with RELAY_RESIZE_EXPAND and the suggested number of
+ * sub-buffers for the new buffer.
+ */
+void
+expand_check(struct rchan *rchan)
+{
+ u32 active_bufs;
+ u32 new_n_bufs = 0;
+ u32 threshold = rchan->n_bufs * RESIZE_THRESHOLD;
+
+ if (rchan->init_buf)
+ return;
+
+ if (rchan->resize_min == 0)
+ return;
+
+ if (rchan->resizing || rchan->replace_buffer)
+ return;
+
+ active_bufs = rchan->bufs_produced - rchan->bufs_consumed + 1;
+
+ if (rchan->resize_max && active_bufs == threshold) {
+ new_n_bufs = rchan->n_bufs * 2;
+ }
+
+ if (new_n_bufs && (new_n_bufs * rchan->buf_size <= rchan->resize_max))
+ rchan->callbacks->needs_resize(rchan->id,
+ RELAY_RESIZE_EXPAND,
+ rchan->buf_size,
+ new_n_bufs);
+}
+
+/**
+ * can_shrink - check whether the channel can shrink
+ * @rchan: the channel
+ * @cur_idx: the current channel index
+ *
+ * Returns the suggested number of sub-buffers for the new
+ * buffer, 0 if the buffer is not shrinkable.
+ */
+static inline u32
+can_shrink(struct rchan *rchan, u32 cur_idx)
+{
+ u32 active_bufs = rchan->bufs_produced - rchan->bufs_consumed + 1;
+ u32 new_n_bufs = 0;
+ u32 cur_bufno_bytes = cur_idx % rchan->buf_size;
+
+ if (rchan->resize_min == 0 ||
+ rchan->resize_min >= rchan->n_bufs * rchan->buf_size)
+ goto out;
+
+ if (active_bufs > 1)
+ goto out;
+
+ if (cur_bufno_bytes != rchan->bytes_consumed)
+ goto out;
+
+ new_n_bufs = rchan->resize_min / rchan->buf_size;
+out:
+ return new_n_bufs;
+}
+
+/**
+ * shrink_check - timer function checking whether the channel can shrink
+ * @data: the channel, cast to an unsigned long
+ *
+ * Every SHRINK_TIMER_SECS seconds, check whether the channel is shrinkable.
+ * If so, we attempt to atomically reset the channel to the beginning.
+ * The needs_resize callback is then called with RELAY_RESIZE_SHRINK.
+ * If the reset fails, it means we really shouldn't be shrinking now
+ * and need to wait until the next time around.
+ */
+static void
+shrink_check(unsigned long data)
+{
+ struct rchan *rchan = (struct rchan *)data;
+ u32 shrink_to_nbufs, cur_idx;
+
+ del_timer(&rchan->shrink_timer);
+ rchan->shrink_timer.expires = jiffies + SHRINK_TIMER_SECS * HZ;
+ add_timer(&rchan->shrink_timer);
+
+ if (rchan->init_buf)
+ return;
+
+ if (rchan->resizing || rchan->replace_buffer)
+ return;
+
+ if (using_lockless(rchan))
+ cur_idx = idx(rchan);
+ else
+ cur_idx = relay_get_offset(rchan, NULL);
+
+ shrink_to_nbufs = can_shrink(rchan, cur_idx);
+ if (shrink_to_nbufs != 0 && reset_index(rchan, cur_idx) == 0) {
+ update_readers_consumed(rchan, rchan->bufs_consumed, 0);
+ rchan->callbacks->needs_resize(rchan->id,
+ RELAY_RESIZE_SHRINK,
+ rchan->buf_size,
+ shrink_to_nbufs);
+ }
+}
+
+/**
+ * init_shrink_timer - start the timer used to check shrinkability.
+ * @rchan: the channel
+ */
+void
+init_shrink_timer(struct rchan *rchan)
+{
+ if (rchan->resize_min) {
+ init_timer(&rchan->shrink_timer);
+ rchan->shrink_timer.function = shrink_check;
+ rchan->shrink_timer.data = (unsigned long)rchan;
+ rchan->shrink_timer.expires = jiffies + SHRINK_TIMER_SECS * HZ;
+ add_timer(&rchan->shrink_timer);
+ }
+}
+
+
+/**
+ * alloc_new_pages - allocate new pages for expanding buffer
+ * @rchan: the channel
+ *
+ * Returns 0 on success, negative otherwise.
+ */
+static int
+alloc_new_pages(struct rchan *rchan)
+{
+ int new_pages_size, err;
+
+ BUG_ON(rchan->expand_page_array);
+
+ new_pages_size = rchan->resize_alloc_size - rchan->alloc_size;
+ rchan->expand_page_array = alloc_page_array(new_pages_size,
+ &rchan->expand_page_count, &err);
+ if (rchan->expand_page_array == NULL) {
+ rchan->resize_err = -ENOMEM;
+ return -ENOMEM;
+ }
+
+ err = populate_page_array(rchan->expand_page_array,
+ rchan->expand_page_count);
+ if (err) {
+ rchan->resize_err = -ENOMEM;
+ free_page_array(rchan->expand_page_array);
+ rchan->expand_page_array = NULL;
+ }
+
+ return err;
+}
+
+/**
+ * clear_resize_offset - helper function for buffer resizing
+ * @rchan: the channel
+ *
+ * Clear the saved offset change.
+ */
+static inline void
+clear_resize_offset(struct rchan *rchan)
+{
+ rchan->resize_offset.ge = 0UL;
+ rchan->resize_offset.le = 0UL;
+ rchan->resize_offset.delta = 0;
+}
+
+/**
+ * save_resize_offset - helper function for buffer resizing
+ * @rchan: the channel
+ * @ge: affected region ge this
+ * @le: affected region le this
+ * @delta: apply this delta
+ *
+ * Save a resize offset.
+ */
+static inline void
+save_resize_offset(struct rchan *rchan, u32 ge, u32 le, int delta)
+{
+ rchan->resize_offset.ge = ge;
+ rchan->resize_offset.le = le;
+ rchan->resize_offset.delta = delta;
+}
+
+/**
+ * update_file_offset - apply offset change to reader
+ * @reader: the channel reader
+ *
+ * Returns non-zero if the offset was applied.
+ *
+ * Apply the channel's saved offset delta to the reader's
+ * current read position.
+ */
+static inline int
+update_file_offset(struct rchan_reader *reader)
+{
+ int applied = 0;
+ struct rchan *rchan = reader->rchan;
+ u32 f_pos;
+ int delta = reader->rchan->resize_offset.delta;
+
+ if (reader->vfs_reader)
+ f_pos = (u32)reader->pos.file->f_pos;
+ else
+ f_pos = reader->pos.f_pos;
+
+ if (f_pos == relay_get_offset(rchan, NULL))
+ return 0;
+
+ if ((f_pos >= rchan->resize_offset.ge - 1) &&
+ (f_pos <= rchan->resize_offset.le)) {
+ if (reader->vfs_reader)
+ reader->pos.file->f_pos += delta;
+ else
+ reader->pos.f_pos += delta;
+ applied = 1;
+ }
+
+ return applied;
+}
+
+/**
+ * update_file_offsets - apply offset change to readers
+ * @rchan: the channel
+ *
+ * Apply the saved offset deltas to all files open on the channel.
+ */
+static inline void
+update_file_offsets(struct rchan *rchan)
+{
+ struct list_head *p;
+ struct rchan_reader *reader;
+
+ read_lock(&rchan->open_readers_lock);
+ list_for_each(p, &rchan->open_readers) {
+ reader = list_entry(p, struct rchan_reader, list);
+ if (update_file_offset(reader))
+ reader->offset_changed = 1;
+ }
+ read_unlock(&rchan->open_readers_lock);
+}
+
+/**
+ * setup_expand_buf - setup expand buffer for replacement
+ * @rchan: the channel
+ * @newsize: the size of the new buffer
+ * @oldsize: the size of the old buffer
+ * @old_n_bufs: the number of sub-buffers in the old buffer
+ *
+ * Inserts new pages into the old buffer to create a larger
+ * new channel buffer, splitting them at old_cur_idx, the bottom
+ * half of the old buffer going to the bottom of the new, likewise
+ * for the top half.
+ */
+static void
+setup_expand_buf(struct rchan *rchan, int newsize, int oldsize, u32 old_n_bufs)
+{
+ u32 cur_idx;
+ int cur_bufno, delta, i, j;
+ u32 ge, le;
+ int cur_pageno;
+ u32 free_bufs, free_pages;
+ u32 free_pages_in_cur_buf;
+ u32 free_bufs_to_end;
+ u32 cur_pages = rchan->alloc_size >> PAGE_SHIFT;
+ u32 pages_per_buf = cur_pages / rchan->n_bufs;
+ u32 bufs_ready = rchan->bufs_produced - rchan->bufs_consumed;
+
+ if (!rchan->resize_page_array || !rchan->expand_page_array ||
+ !rchan->buf_page_array)
+ return;
+
+ if (bufs_ready >= rchan->n_bufs) {
+ bufs_ready = rchan->n_bufs;
+ free_bufs = 0;
+ } else
+ free_bufs = rchan->n_bufs - bufs_ready - 1;
+
+ cur_idx = relay_get_offset(rchan, NULL);
+ cur_pageno = cur_idx / PAGE_SIZE;
+ cur_bufno = cur_idx / rchan->buf_size;
+
+ free_pages_in_cur_buf = (pages_per_buf - 1) - (cur_pageno % pages_per_buf);
+ free_pages = free_bufs * pages_per_buf + free_pages_in_cur_buf;
+ free_bufs_to_end = (rchan->n_bufs - 1) - cur_bufno;
+ if (free_bufs >= free_bufs_to_end) {
+ free_pages = free_bufs_to_end * pages_per_buf + free_pages_in_cur_buf;
+ free_bufs = free_bufs_to_end;
+ }
+
+ for (i = 0, j = 0; i <= cur_pageno + free_pages; i++, j++)
+ rchan->resize_page_array[j] = rchan->buf_page_array[i];
+ for (i = 0; i < rchan->expand_page_count; i++, j++)
+ rchan->resize_page_array[j] = rchan->expand_page_array[i];
+ for (i = cur_pageno + free_pages + 1; i < rchan->buf_page_count; i++, j++)
+ rchan->resize_page_array[j] = rchan->buf_page_array[i];
+
+ delta = newsize - oldsize;
+ ge = (cur_pageno + 1 + free_pages) * PAGE_SIZE;
+ le = oldsize;
+ save_resize_offset(rchan, ge, le, delta);
+
+ rchan->expand_buf_id = rchan->buf_id + 1 + free_bufs;
+}
+
+/**
+ * setup_shrink_buf - setup shrink buffer for replacement
+ * @rchan: the channel
+ *
+ * Removes pages from the old buffer to create a smaller
+ * new channel buffer.
+ */
+static void
+setup_shrink_buf(struct rchan *rchan)
+{
+ int i;
+ int copy_end_page;
+
+ if (!rchan->resize_page_array || !rchan->shrink_page_array ||
+ !rchan->buf_page_array)
+ return;
+
+ copy_end_page = rchan->resize_alloc_size / PAGE_SIZE;
+
+ for (i = 0; i < copy_end_page; i++)
+ rchan->resize_page_array[i] = rchan->buf_page_array[i];
+}
+
+/**
+ * cleanup_failed_alloc - relaybuf_alloc helper
+ * @rchan: the channel
+ */
+static void
+cleanup_failed_alloc(struct rchan *rchan)
+{
+ if (rchan->expand_page_array) {
+ depopulate_page_array(rchan->expand_page_array,
+ rchan->expand_page_count);
+ free_page_array(rchan->expand_page_array);
+ rchan->expand_page_array = NULL;
+ rchan->expand_page_count = 0;
+ } else if (rchan->shrink_page_array) {
+ free_page_array(rchan->shrink_page_array);
+ rchan->shrink_page_array = NULL;
+ rchan->shrink_page_count = 0;
+ }
+
+ if (rchan->resize_page_array) {
+ free_page_array(rchan->resize_page_array);
+ rchan->resize_page_array = NULL;
+ rchan->resize_page_count = 0;
+ }
+}
+
+/**
+ * relaybuf_alloc - allocate a new resized channel buffer
+ * @private: pointer to the channel struct
+ *
+ * Internal - manages the allocation and remapping of new channel
+ * buffers.
+ */
+static void
+relaybuf_alloc(void *private)
+{
+ struct rchan *rchan = (struct rchan *)private;
+ int i, j, err;
+ u32 old_cur_idx;
+ int free_size;
+ int free_start_page, free_end_page;
+ u32 newsize, oldsize;
+
+ if (rchan->resize_alloc_size > rchan->alloc_size) {
+ err = alloc_new_pages(rchan);
+ if (err)
+ goto cleanup;
+ } else {
+ free_size = rchan->alloc_size - rchan->resize_alloc_size;
+ BUG_ON(free_size <= 0);
+ rchan->shrink_page_array = alloc_page_array(free_size,
+ &rchan->shrink_page_count, &err);
+ if (rchan->shrink_page_array == NULL)
+ goto cleanup;
+ free_start_page = rchan->resize_alloc_size / PAGE_SIZE;
+ free_end_page = rchan->alloc_size / PAGE_SIZE;
+ for (i = 0, j = free_start_page; j < free_end_page; i++, j++)
+ rchan->shrink_page_array[i] = rchan->buf_page_array[j];
+ }
+
+ rchan->resize_page_array = alloc_page_array(rchan->resize_alloc_size,
+ &rchan->resize_page_count, &err);
+ if (rchan->resize_page_array == NULL)
+ goto cleanup;
+
+ old_cur_idx = relay_get_offset(rchan, NULL);
+ clear_resize_offset(rchan);
+ newsize = rchan->resize_alloc_size;
+ oldsize = rchan->alloc_size;
+ if (newsize > oldsize)
+ setup_expand_buf(rchan, newsize, oldsize, rchan->n_bufs);
+ else
+ setup_shrink_buf(rchan);
+
+ rchan->resize_buf = vmap(rchan->resize_page_array,
+ rchan->resize_page_count, GFP_KERNEL, PAGE_KERNEL);
+
+ if (rchan->resize_buf == NULL)
+ goto cleanup;
+
+ rchan->replace_buffer = 1;
+ rchan->resizing = 0;
+
+ rchan->callbacks->needs_resize(rchan->id, RELAY_RESIZE_REPLACE, 0, 0);
+ return;
+
+cleanup:
+ cleanup_failed_alloc(rchan);
+ rchan->resize_err = -ENOMEM;
+ return;
+}
+
+/**
+ * relaybuf_free - free a resized channel buffer
+ * @private: pointer to the free_rchan_buf struct
+ *
+ * Internal - manages the de-allocation and unmapping of old channel
+ * buffers.
+ */
+static void
+relaybuf_free(void *private)
+{
+ struct free_rchan_buf *free_buf = (struct free_rchan_buf *)private;
+ int i;
+
+ if (free_buf->unmap_buf)
+ vunmap(free_buf->unmap_buf);
+
+ for (i = 0; i < 3; i++) {
+ if (!free_buf->page_array[i].array)
+ continue;
+ if (free_buf->page_array[i].count)
+ depopulate_page_array(free_buf->page_array[i].array,
+ free_buf->page_array[i].count);
+ free_page_array(free_buf->page_array[i].array);
+ }
+
+ kfree(free_buf);
+}
+
+/**
+ * calc_order - determine the power-of-2 order of a resize
+ * @high: the larger size
+ * @low: the smaller size
+ *
+ * Returns the power-of-2 order of high relative to low, or 0 on
+ * invalid input.
+ */
+static inline int
+calc_order(u32 high, u32 low)
+{
+ int order = 0;
+
+ if (!high || !low || high <= low)
+ return 0;
+
+ while (high > low) {
+ order++;
+ high /= 2;
+ }
+
+ return order;
+}
+
+/**
+ * check_size - check the sanity of the requested channel size
+ * @rchan: the channel
+ * @nbufs: the new number of sub-buffers
+ * @err: return code
+ *
+ * Returns the non-zero total buffer size on success, otherwise
+ * returns 0 and sets the errcode.
+ */
+static inline u32
+check_size(struct rchan *rchan, u32 nbufs, int *err)
+{
+ u32 new_channel_size = 0;
+
+ *err = 0;
+
+ if (nbufs > rchan->n_bufs) {
+ rchan->resize_order = calc_order(nbufs, rchan->n_bufs);
+ if (!rchan->resize_order) {
+ *err = -EINVAL;
+ goto out;
+ }
+
+ new_channel_size = rchan->buf_size * nbufs;
+ if (new_channel_size > rchan->resize_max) {
+ *err = -EINVAL;
+ goto out;
+ }
+ } else if (nbufs < rchan->n_bufs) {
+ if (rchan->n_bufs < 2) {
+ *err = -EINVAL;
+ goto out;
+ }
+ rchan->resize_order = -calc_order(rchan->n_bufs, nbufs);
+ if (!rchan->resize_order) {
+ *err = -EINVAL;
+ goto out;
+ }
+
+ new_channel_size = rchan->buf_size * nbufs;
+ if (new_channel_size < rchan->resize_min) {
+ *err = -EINVAL;
+ goto out;
+ }
+ } else
+ *err = -EINVAL;
+out:
+ return new_channel_size;
+}
+
+/**
+ * __relay_realloc_buffer - allocate a new channel buffer
+ * @rchan: the channel
+ * @new_nbufs: the new number of sub-buffers
+ * @async: do the allocation using a work queue
+ *
+ * Internal - see relay_realloc_buffer() for details.
+ */
+static int
+__relay_realloc_buffer(struct rchan *rchan, u32 new_nbufs, int async)
+{
+ u32 new_channel_size;
+ int err = 0;
+
+ if (new_nbufs == rchan->n_bufs)
+ return -EINVAL;
+
+ if (down_trylock(&rchan->resize_sem))
+ return -EBUSY;
+
+ if (rchan->init_buf) {
+ err = -EPERM;
+ goto out;
+ }
+
+ if (rchan->replace_buffer) {
+ err = -EBUSY;
+ goto out;
+ }
+
+ if (rchan->resizing) {
+ err = -EBUSY;
+ goto out;
+ } else
+ rchan->resizing = 1;
+
+ if (rchan->resize_failures > MAX_RESIZE_FAILURES) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ new_channel_size = check_size(rchan, new_nbufs, &err);
+ if (err)
+ goto out;
+
+ rchan->resize_n_bufs = new_nbufs;
+ rchan->resize_buf_size = rchan->buf_size;
+ rchan->resize_alloc_size = FIX_SIZE(new_channel_size);
+
+ if (async) {
+ INIT_WORK(&rchan->work, relaybuf_alloc, rchan);
+ schedule_delayed_work(&rchan->work, 1);
+ } else
+ relaybuf_alloc((void *)rchan);
+out:
+ up(&rchan->resize_sem);
+
+ return err;
+}
+
+/**
+ * relay_realloc_buffer - allocate a new channel buffer
+ * @rchan_id: the channel id
+ * @new_nbufs: the new number of sub-buffers
+ * @async: if non-zero, do the allocation using a work queue
+ *
+ * Allocates a new channel buffer using the specified sub-buffer size
+ * and count. If async is non-zero, the allocation is done in the
+ * background using a work queue. When the allocation has completed,
+ * the needs_resize() callback is called with a resize_type of
+ * RELAY_RESIZE_REPLACE. This function doesn't replace the old buffer
+ * with the new - see relay_replace_buffer(). See
+ * Documentation/filesystems/relayfs.txt for more details.
+ *
+ * Returns 0 on success, or errcode if the channel is busy or if
+ * the allocation couldn't happen for some reason.
+ */
+int
+relay_realloc_buffer(int rchan_id, u32 new_nbufs, int async)
+{
+ int err;
+
+ struct rchan *rchan;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ err = __relay_realloc_buffer(rchan, new_nbufs, async);
+
+ rchan_put(rchan);
+
+ return err;
+}
+
+/**
+ * expand_cancel_check - check whether the current expand needs canceling
+ * @rchan: the channel
+ *
+ * Returns 1 if the expand should be canceled, 0 otherwise.
+ */
+static int
+expand_cancel_check(struct rchan *rchan)
+{
+ return rchan->buf_id >= rchan->expand_buf_id;
+}
+
+/**
+ * shrink_cancel_check - check whether the current shrink needs canceling
+ * @rchan: the channel
+ * @newsize: the new channel buffer size
+ *
+ * Returns 1 if the shrink should be canceled, 0 otherwise.
+ */
+static int
+shrink_cancel_check(struct rchan *rchan, u32 newsize)
+{
+ u32 active_bufs = rchan->bufs_produced - rchan->bufs_consumed + 1;
+ u32 cur_idx = relay_get_offset(rchan, NULL);
+
+ if (cur_idx >= newsize)
+ return 1;
+
+ if (active_bufs > 1)
+ return 1;
+
+ return 0;
+}
+
+/**
+ * switch_rchan_buf - do_replace_buffer helper
+ */
+static void
+switch_rchan_buf(struct rchan *rchan,
+ int newsize,
+ int oldsize,
+ u32 old_nbufs,
+ u32 cur_idx)
+{
+ u32 newbufs, cur_bufno;
+ int i;
+
+ cur_bufno = cur_idx / rchan->buf_size;
+
+ rchan->buf = rchan->resize_buf;
+ rchan->alloc_size = rchan->resize_alloc_size;
+ rchan->n_bufs = rchan->resize_n_bufs;
+
+ if (newsize > oldsize) {
+ u32 ge = rchan->resize_offset.ge;
+ u32 moved_buf = ge / rchan->buf_size;
+
+ newbufs = (newsize - oldsize) / rchan->buf_size;
+ for (i = moved_buf; i < old_nbufs; i++) {
+ if (using_lockless(rchan))
+ atomic_set(&fill_count(rchan, i + newbufs),
+ atomic_read(&fill_count(rchan, i)));
+ rchan->unused_bytes[i + newbufs] = rchan->unused_bytes[i];
+ }
+ for (i = moved_buf; i < moved_buf + newbufs; i++) {
+ if (using_lockless(rchan))
+ atomic_set(&fill_count(rchan, i),
+ (int)RELAY_BUF_SIZE(offset_bits(rchan)));
+ rchan->unused_bytes[i] = 0;
+ }
+ }
+
+ rchan->buf_idx = cur_bufno;
+
+ if (!using_lockless(rchan)) {
+ cur_write_pos(rchan) = rchan->buf + cur_idx;
+ write_buf(rchan) = rchan->buf + cur_bufno * rchan->buf_size;
+ write_buf_end(rchan) = write_buf(rchan) + rchan->buf_size;
+ write_limit(rchan) = write_buf_end(rchan) - rchan->end_reserve;
+ } else {
+ idx(rchan) &= idx_mask(rchan);
+ bufno_bits(rchan) += rchan->resize_order;
+ idx_mask(rchan) =
+ (1UL << (bufno_bits(rchan) + offset_bits(rchan))) - 1;
+ }
+}
+
+/**
+ * do_replace_buffer - does the work of channel buffer replacement
+ * @rchan: the channel
+ * @newsize: new channel buffer size
+ * @oldsize: old channel buffer size
+ * @old_n_bufs: old channel sub-buffer count
+ *
+ * Returns 0 if replacement happened, -EAGAIN if canceled
+ *
+ * Does the work of switching buffers and fixing everything up
+ * so the channel can continue with a new size.
+ */
+static int
+do_replace_buffer(struct rchan *rchan,
+ int newsize,
+ int oldsize,
+ u32 old_nbufs)
+{
+ u32 cur_idx;
+ int err = 0;
+ int canceled;
+
+ cur_idx = relay_get_offset(rchan, NULL);
+
+ if (newsize > oldsize)
+ canceled = expand_cancel_check(rchan);
+ else
+ canceled = shrink_cancel_check(rchan, newsize);
+
+ if (canceled) {
+ err = -EAGAIN;
+ goto out;
+ }
+
+ switch_rchan_buf(rchan, newsize, oldsize, old_nbufs, cur_idx);
+
+ if (rchan->resize_offset.delta)
+ update_file_offsets(rchan);
+
+ atomic_set(&rchan->suspended, 0);
+
+ rchan->old_buf_page_array = rchan->buf_page_array;
+ rchan->buf_page_array = rchan->resize_page_array;
+ rchan->buf_page_count = rchan->resize_page_count;
+ rchan->resize_page_array = NULL;
+ rchan->resize_page_count = 0;
+ rchan->resize_buf = NULL;
+ rchan->resize_buf_size = 0;
+ rchan->resize_alloc_size = 0;
+ rchan->resize_n_bufs = 0;
+ rchan->resize_err = 0;
+ rchan->resize_order = 0;
+out:
+ rchan->callbacks->needs_resize(rchan->id,
+ RELAY_RESIZE_REPLACED,
+ rchan->buf_size,
+ rchan->n_bufs);
+ return err;
+}
+
+/**
+ * add_free_page_array - add a page_array to be freed
+ * @free_rchan_buf: the free_rchan_buf struct
+ * @page_array: the page array to free
+ * @page_count: the number of pages to free, 0 to free the array only
+ *
+ * Internal - used to add page arrays to be freed asynchronously.
+ */
+static inline void
+add_free_page_array(struct free_rchan_buf *free_rchan_buf,
+ struct page **page_array, int page_count)
+{
+ int cur = free_rchan_buf->cur++;
+
+ free_rchan_buf->page_array[cur].array = page_array;
+ free_rchan_buf->page_array[cur].count = page_count;
+}
+
+/**
+ * free_replaced_buffer - free a channel's old buffer
+ * @rchan: the channel
+ * @oldbuf: the old buffer
+ * @oldsize: old buffer size
+ *
+ * Frees a channel buffer via work queue.
+ */
+static int
+free_replaced_buffer(struct rchan *rchan, char *oldbuf, int oldsize)
+{
+ struct free_rchan_buf *free_buf;
+
+ free_buf = kmalloc(sizeof(struct free_rchan_buf), GFP_ATOMIC);
+ if (!free_buf)
+ return -ENOMEM;
+ memset(free_buf, 0, sizeof(struct free_rchan_buf));
+
+ free_buf->unmap_buf = oldbuf;
+ add_free_page_array(free_buf, rchan->old_buf_page_array, 0);
+ rchan->old_buf_page_array = NULL;
+ add_free_page_array(free_buf, rchan->expand_page_array, 0);
+ add_free_page_array(free_buf, rchan->shrink_page_array, rchan->shrink_page_count);
+
+ rchan->expand_page_array = NULL;
+ rchan->expand_page_count = 0;
+ rchan->shrink_page_array = NULL;
+ rchan->shrink_page_count = 0;
+
+ INIT_WORK(&free_buf->work, relaybuf_free, free_buf);
+ schedule_delayed_work(&free_buf->work, 1);
+
+ return 0;
+}
+
+/**
+ * free_canceled_resize - free buffers allocated for a canceled resize
+ * @rchan: the channel
+ *
+ * Frees canceled buffers via work queue.
+ */
+static int
+free_canceled_resize(struct rchan *rchan)
+{
+ struct free_rchan_buf *free_buf;
+
+ free_buf = kmalloc(sizeof(struct free_rchan_buf), GFP_ATOMIC);
+ if (!free_buf)
+ return -ENOMEM;
+ memset(free_buf, 0, sizeof(struct free_rchan_buf));
+
+ if (rchan->resize_alloc_size > rchan->alloc_size)
+ add_free_page_array(free_buf, rchan->expand_page_array, rchan->expand_page_count);
+ else
+ add_free_page_array(free_buf, rchan->shrink_page_array, 0);
+
+ add_free_page_array(free_buf, rchan->resize_page_array, 0);
+ free_buf->unmap_buf = rchan->resize_buf;
+
+ rchan->expand_page_array = NULL;
+ rchan->expand_page_count = 0;
+ rchan->shrink_page_array = NULL;
+ rchan->shrink_page_count = 0;
+ rchan->resize_page_array = NULL;
+ rchan->resize_page_count = 0;
+ rchan->resize_buf = NULL;
+
+ INIT_WORK(&free_buf->work, relaybuf_free, free_buf);
+ schedule_delayed_work(&free_buf->work, 1);
+
+ return 0;
+}
+
+/**
+ * __relay_replace_buffer - replace channel buffer with new buffer
+ * @rchan: the channel
+ *
+ * Internal - see relay_replace_buffer() for details.
+ *
+ * Returns 0 if successful, negative otherwise.
+ */
+static int
+__relay_replace_buffer(struct rchan *rchan)
+{
+ int oldsize;
+ int err = 0;
+ char *oldbuf;
+
+ if (down_trylock(&rchan->resize_sem))
+ return -EBUSY;
+
+ if (rchan->init_buf) {
+ err = -EPERM;
+ goto out;
+ }
+
+ if (!rchan->replace_buffer)
+ goto out;
+
+ if (rchan->resizing) {
+ err = -EBUSY;
+ goto out;
+ }
+
+ if (rchan->resize_buf == NULL) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ oldbuf = rchan->buf;
+ oldsize = rchan->alloc_size;
+
+ err = do_replace_buffer(rchan, rchan->resize_alloc_size,
+ oldsize, rchan->n_bufs);
+ if (err == 0)
+ err = free_replaced_buffer(rchan, oldbuf, oldsize);
+ else
+ err = free_canceled_resize(rchan);
+out:
+ rchan->replace_buffer = 0;
+ up(&rchan->resize_sem);
+
+ return err;
+}
+
+/**
+ * relay_replace_buffer - replace channel buffer with new buffer
+ * @rchan_id: the channel id
+ *
+ * Replaces the current channel buffer with the new buffer allocated
+ * by relay_alloc_buffer and contained in the channel struct. When the
+ * replacement is complete, the needs_resize() callback is called with
+ * RELAY_RESIZE_REPLACED.
+ *
+ * Returns 0 on success, or errcode if the channel is busy or if
+ * the replacement or previous allocation didn't happen for some reason.
+ */
+int
+relay_replace_buffer(int rchan_id)
+{
+ int err;
+
+ struct rchan *rchan;
+
+ rchan = rchan_get(rchan_id);
+ if (rchan == NULL)
+ return -EBADF;
+
+ err = __relay_replace_buffer(rchan);
+
+ rchan_put(rchan);
+
+ return err;
+}
+
+EXPORT_SYMBOL(relay_realloc_buffer);
+EXPORT_SYMBOL(relay_replace_buffer);
+
--- /dev/null
+#ifndef _RELAY_RESIZE_H
+#define _RELAY_RESIZE_H
+
+/*
+ * If the channel usage has been below the low water mark for more than
+ * this amount of time, we can shrink the buffer if necessary.
+ */
+#define SHRINK_TIMER_SECS 60
+
+/* This is inspired by rtai/shmem */
+#define FIX_SIZE(x) ((((x) - 1) & PAGE_MASK) + PAGE_SIZE)
+
+/* Don't attempt resizing again after this many failures */
+#define MAX_RESIZE_FAILURES 1
+
+/*
+ * Trigger resizing if a resizable channel is this full. Deliberately
+ * unparenthesized so that n * RESIZE_THRESHOLD expands to n * 3 / 4.
+ */
+#define RESIZE_THRESHOLD 3 / 4
+
+/*
+ * Used for deferring resized channel free
+ */
+struct free_rchan_buf
+{
+ char *unmap_buf;
+ struct
+ {
+ struct page **array;
+ int count;
+ } page_array[3];
+
+ int cur;
+ struct work_struct work; /* resize de-allocation work struct */
+};
+
+extern void *
+alloc_rchan_buf(unsigned long size,
+ struct page ***page_array,
+ int *page_count);
+
+extern void
+free_rchan_buf(void *buf,
+ struct page **page_array,
+ int page_count);
+
+extern void
+expand_check(struct rchan *rchan);
+
+extern void
+init_shrink_timer(struct rchan *rchan);
+
+#endif /* _RELAY_RESIZE_H */
*/
static long
setxattr(struct dentry *d, char __user *name, void __user *value,
- size_t size, int flags, struct vfsmount *mnt)
+ size_t size, int flags)
{
int error;
void *kvalue = NULL;
error = security_inode_setxattr(d, kname, kvalue, size, flags);
if (error)
goto out;
- error = -EROFS;
- if (MNT_IS_RDONLY(mnt))
- goto out;
error = d->d_inode->i_op->setxattr(d, kname, kvalue, size, flags);
if (!error)
security_inode_post_setxattr(d, kname, kvalue, size, flags);
error = user_path_walk(path, &nd);
if (error)
return error;
- error = setxattr(nd.dentry, name, value, size, flags, nd.mnt);
+ error = setxattr(nd.dentry, name, value, size, flags);
path_release(&nd);
return error;
}
error = user_path_walk_link(path, &nd);
if (error)
return error;
- error = setxattr(nd.dentry, name, value, size, flags, nd.mnt);
+ error = setxattr(nd.dentry, name, value, size, flags);
path_release(&nd);
return error;
}
f = fget(fd);
if (!f)
return error;
- error = setxattr(f->f_dentry, name, value, size, flags, f->f_vfsmnt);
+ error = setxattr(f->f_dentry, name, value, size, flags);
fput(f);
return error;
}
* Extended attribute REMOVE operations
*/
static long
-removexattr(struct dentry *d, char __user *name, struct vfsmount *mnt)
+removexattr(struct dentry *d, char __user *name)
{
int error;
char kname[XATTR_NAME_MAX + 1];
error = security_inode_removexattr(d, kname);
if (error)
goto out;
- error = -EROFS;
- if (MNT_IS_RDONLY(mnt))
- goto out;
down(&d->d_inode->i_sem);
error = d->d_inode->i_op->removexattr(d, kname);
up(&d->d_inode->i_sem);
error = user_path_walk(path, &nd);
if (error)
return error;
- error = removexattr(nd.dentry, name, nd.mnt);
+ error = removexattr(nd.dentry, name);
path_release(&nd);
return error;
}
error = user_path_walk_link(path, &nd);
if (error)
return error;
- error = removexattr(nd.dentry, name, nd.mnt);
+ error = removexattr(nd.dentry, name);
path_release(&nd);
return error;
}
f = fget(fd);
if (!f)
return error;
- error = removexattr(f->f_dentry, name, f->f_vfsmnt);
+ error = removexattr(f->f_dentry, name);
fput(f);
return error;
}
attr_flags = 0;
if (filp->f_flags & (O_NDELAY|O_NONBLOCK))
attr_flags |= ATTR_NONBLOCK;
-
+
va.va_mask = XFS_AT_XFLAGS | XFS_AT_EXTSIZE;
va.va_xflags = fa.fsx_xflags;
va.va_extsize = fa.fsx_extsize;
--- /dev/null
+/*
+ * Copyright (c) 2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+STATIC int xfs_cap_allow_set(vnode_t *);
+
+
+/*
+ * Test for existence of capability attribute as efficiently as possible.
+ */
+int
+xfs_cap_vhascap(
+ vnode_t *vp)
+{
+ int error;
+ int len = sizeof(xfs_cap_set_t);
+ int flags = ATTR_KERNOVAL|ATTR_ROOT;
+
+ VOP_ATTR_GET(vp, SGI_CAP_LINUX, NULL, &len, flags, sys_cred, error);
+ return (error == 0);
+}
+
+/*
+ * Convert from extended attribute representation to in-memory for XFS.
+ */
+STATIC int
+posix_cap_xattr_to_xfs(
+ posix_cap_xattr *src,
+ size_t size,
+ xfs_cap_set_t *dest)
+{
+ if (!src || !dest)
+ return EINVAL;
+
+ if (size < sizeof(posix_cap_xattr))
+ return EINVAL;
+
+ if (src->c_version != cpu_to_le32(POSIX_CAP_XATTR_VERSION))
+ return EINVAL;
+ if (src->c_abiversion != cpu_to_le32(_LINUX_CAPABILITY_VERSION))
+ return EINVAL;
+
+ ASSERT(sizeof(dest->cap_effective) == sizeof(src->c_effective));
+
+ dest->cap_effective = src->c_effective;
+ dest->cap_permitted = src->c_permitted;
+ dest->cap_inheritable = src->c_inheritable;
+
+ return 0;
+}
+
+/*
+ * Convert from in-memory XFS to extended attribute representation.
+ */
+STATIC int
+posix_cap_xfs_to_xattr(
+ xfs_cap_set_t *src,
+ posix_cap_xattr *xattr_cap,
+ size_t size)
+{
+ size_t new_size = posix_cap_xattr_size();
+
+ if (size < new_size)
+ return -ERANGE;
+
+ ASSERT(sizeof(xattr_cap->c_effective) == sizeof(src->cap_effective));
+
+ xattr_cap->c_version = cpu_to_le32(POSIX_CAP_XATTR_VERSION);
+ xattr_cap->c_abiversion = cpu_to_le32(_LINUX_CAPABILITY_VERSION);
+ xattr_cap->c_effective = src->cap_effective;
+ xattr_cap->c_permitted = src->cap_permitted;
+ xattr_cap->c_inheritable= src->cap_inheritable;
+
+ return new_size;
+}
+
+int
+xfs_cap_vget(
+ vnode_t *vp,
+ void *cap,
+ size_t size)
+{
+ int error;
+ int len = sizeof(xfs_cap_set_t);
+ int flags = ATTR_ROOT;
+ xfs_cap_set_t xfs_cap = { 0 };
+ posix_cap_xattr *xattr_cap = cap;
+ char *data = (char *)&xfs_cap;
+
+ VN_HOLD(vp);
+ if ((error = _MAC_VACCESS(vp, NULL, VREAD)))
+ goto out;
+
+ if (!size) {
+ flags |= ATTR_KERNOVAL;
+ data = NULL;
+ }
+ VOP_ATTR_GET(vp, SGI_CAP_LINUX, data, &len, flags, sys_cred, error);
+ if (error)
+ goto out;
+ ASSERT(len == sizeof(xfs_cap_set_t));
+
+ error = (size)? -posix_cap_xfs_to_xattr(&xfs_cap, xattr_cap, size) :
+ -posix_cap_xattr_size();
+out:
+ VN_RELE(vp);
+ return -error;
+}
+
+int
+xfs_cap_vremove(
+ vnode_t *vp)
+{
+ int error;
+
+ VN_HOLD(vp);
+ error = xfs_cap_allow_set(vp);
+ if (!error) {
+ VOP_ATTR_REMOVE(vp, SGI_CAP_LINUX, ATTR_ROOT, sys_cred, error);
+ if (error == ENOATTR)
+ error = 0; /* 'scool */
+ }
+ VN_RELE(vp);
+ return -error;
+}
+
+int
+xfs_cap_vset(
+ vnode_t *vp,
+ void *cap,
+ size_t size)
+{
+ posix_cap_xattr *xattr_cap = cap;
+ xfs_cap_set_t xfs_cap;
+ int error;
+
+ if (!cap)
+ return -EINVAL;
+
+ error = posix_cap_xattr_to_xfs(xattr_cap, size, &xfs_cap);
+ if (error)
+ return -error;
+
+ VN_HOLD(vp);
+ error = xfs_cap_allow_set(vp);
+ if (error)
+ goto out;
+
+ VOP_ATTR_SET(vp, SGI_CAP_LINUX, (char *)&xfs_cap,
+ sizeof(xfs_cap_set_t), ATTR_ROOT, sys_cred, error);
+out:
+ VN_RELE(vp);
+ return -error;
+}
+
+STATIC int
+xfs_cap_allow_set(
+ vnode_t *vp)
+{
+ vattr_t va;
+ int error;
+
+ if (vp->v_vfsp->vfs_flag & VFS_RDONLY)
+ return EROFS;
+ if (vp->v_inode.i_flags & (S_IMMUTABLE|S_APPEND))
+ return EPERM;
+ if ((error = _MAC_VACCESS(vp, NULL, VWRITE)))
+ return error;
+ va.va_mask = XFS_AT_UID;
+ VOP_GETATTR(vp, &va, 0, NULL, error);
+ if (error)
+ return error;
+ if (va.va_uid != current->fsuid && !capable(CAP_FOWNER))
+ return EPERM;
+ return error;
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+
+static xfs_mac_label_t *mac_low_high_lp;
+static xfs_mac_label_t *mac_high_low_lp;
+static xfs_mac_label_t *mac_admin_high_lp;
+static xfs_mac_label_t *mac_equal_equal_lp;
+
+/*
+ * Test for the existence of a MAC label as efficiently as possible.
+ */
+int
+xfs_mac_vhaslabel(
+ vnode_t *vp)
+{
+ int error;
+ int len = sizeof(xfs_mac_label_t);
+ int flags = ATTR_KERNOVAL|ATTR_ROOT;
+
+ VOP_ATTR_GET(vp, SGI_MAC_FILE, NULL, &len, flags, sys_cred, error);
+ return (error == 0);
+}
+
+int
+xfs_mac_iaccess(xfs_inode_t *ip, mode_t mode, struct cred *cr)
+{
+ xfs_mac_label_t mac;
+ xfs_mac_label_t *mp = mac_high_low_lp;
+
+ if (cr == NULL || sys_cred == NULL ) {
+ return EACCES;
+ }
+
+ if (xfs_attr_fetch(ip, SGI_MAC_FILE, (char *)&mac, sizeof(mac)) == 0) {
+ if ((mp = mac_add_label(&mac)) == NULL) {
+ return mac_access(mac_high_low_lp, cr, mode);
+ }
+ }
+
+ return mac_access(mp, cr, mode);
+}
--- /dev/null
+#ifndef _ASM_ALPHA_RELAY_H
+#define _ASM_ALPHA_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-iop310/irqs.h
+ *
+ * Author: Nicolas Pitre
+ * Copyright: (C) 2001 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * 06/13/01: Added 80310 on-chip interrupt sources <dsaxena@mvista.com>
+ *
+ */
+#include <linux/config.h>
+
+/*
+ * XS80200 specific IRQs
+ */
+#define IRQ_XS80200_BCU 0 /* Bus Control Unit */
+#define IRQ_XS80200_PMU 1 /* Performance Monitoring Unit */
+#define IRQ_XS80200_EXTIRQ 2 /* external IRQ signal */
+#define IRQ_XS80200_EXTFIQ 3 /* external FIQ signal */
+
+#define NR_XS80200_IRQS 4
+
+#define XSCALE_PMU_IRQ IRQ_XS80200_PMU
+
+/*
+ * IOP80310 chipset interrupts
+ */
+#define IOP310_IRQ_OFS NR_XS80200_IRQS
+#define IOP310_IRQ(x) (IOP310_IRQ_OFS + (x))
+
+/*
+ * On FIQ1ISR register
+ */
+#define IRQ_IOP310_DMA0 IOP310_IRQ(0) /* DMA Channel 0 */
+#define IRQ_IOP310_DMA1 IOP310_IRQ(1) /* DMA Channel 1 */
+#define IRQ_IOP310_DMA2 IOP310_IRQ(2) /* DMA Channel 2 */
+#define IRQ_IOP310_PMON IOP310_IRQ(3) /* Bus performance Unit */
+#define IRQ_IOP310_AAU IOP310_IRQ(4) /* Application Accelerator Unit */
+
+/*
+ * On FIQ2ISR register
+ */
+#define IRQ_IOP310_I2C IOP310_IRQ(5) /* I2C unit */
+#define IRQ_IOP310_MU IOP310_IRQ(6) /* messaging unit */
+
+#define NR_IOP310_IRQS (IOP310_IRQ(6) + 1)
+
+#define NR_IRQS NR_IOP310_IRQS
+
+
+/*
+ * Interrupts available on the Cyclone IQ80310 board
+ */
+#ifdef CONFIG_ARCH_IQ80310
+
+#define IQ80310_IRQ_OFS NR_IOP310_IRQS
+#define IQ80310_IRQ(y) ((IQ80310_IRQ_OFS) + (y))
+
+#define IRQ_IQ80310_TIMER IQ80310_IRQ(0) /* Timer Interrupt */
+#define IRQ_IQ80310_I82559 IQ80310_IRQ(1) /* I82559 Ethernet Interrupt */
+#define IRQ_IQ80310_UART1 IQ80310_IRQ(2) /* UART1 Interrupt */
+#define IRQ_IQ80310_UART2 IQ80310_IRQ(3) /* UART2 Interrupt */
+#define IRQ_IQ80310_INTD IQ80310_IRQ(4) /* PCI INTD */
+
+
+/*
+ * ONLY AVAILABLE ON REV F OR NEWER BOARDS!
+ */
+#define IRQ_IQ80310_INTA IQ80310_IRQ(5) /* PCI INTA */
+#define IRQ_IQ80310_INTB IQ80310_IRQ(6) /* PCI INTB */
+#define IRQ_IQ80310_INTC IQ80310_IRQ(7) /* PCI INTC */
+
+#undef NR_IRQS
+#define NR_IRQS (IQ80310_IRQ(7) + 1)
+
+#endif /* CONFIG_ARCH_IQ80310 */
+
--- /dev/null
+/*
+ * linux/include/asm/arch-iop3xx/iop310.h
+ *
+ * Intel IOP310 Companion Chip definitions
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _IOP310_HW_H_
+#define _IOP310_HW_H_
+
+/*
+ * This is needed for mixed drivers that need to work on all
+ * IOP3xx variants but behave slightly differently on each.
+ */
+#ifndef __ASSEMBLY__
+#define iop_is_310() ((processor_id & 0xffffe3f0) == 0x69052000)
+#endif
+
+/*
+ * IOP310 I/O and Mem space regions for PCI autoconfiguration
+ */
+#define IOP310_PCISEC_LOWER_IO 0x90010000
+#define IOP310_PCISEC_UPPER_IO 0x9001ffff
+#define IOP310_PCISEC_LOWER_MEM 0x88000000
+#define IOP310_PCISEC_UPPER_MEM 0x8bffffff
+
+#define IOP310_PCIPRI_LOWER_IO 0x90000000
+#define IOP310_PCIPRI_UPPER_IO 0x9000ffff
+#define IOP310_PCIPRI_LOWER_MEM 0x80000000
+#define IOP310_PCIPRI_UPPER_MEM 0x83ffffff
+
+#define IOP310_PCI_WINDOW_SIZE (64 * 0x100000)
+
+/*
+ * IOP310 chipset registers
+ */
+#define IOP310_VIRT_MEM_BASE 0xe8001000 /* chip virtual mem address*/
+#define IOP310_PHY_MEM_BASE 0x00001000 /* chip physical memory address */
+#define IOP310_REG_ADDR(reg) (IOP310_VIRT_MEM_BASE | IOP310_PHY_MEM_BASE | (reg))
+
+/* PCI-to-PCI Bridge Unit 0x00001000 through 0x000010FF */
+#define IOP310_VIDR (volatile u16 *)IOP310_REG_ADDR(0x00001000)
+#define IOP310_DIDR (volatile u16 *)IOP310_REG_ADDR(0x00001002)
+#define IOP310_PCR (volatile u16 *)IOP310_REG_ADDR(0x00001004)
+#define IOP310_PSR (volatile u16 *)IOP310_REG_ADDR(0x00001006)
+#define IOP310_RIDR (volatile u8 *)IOP310_REG_ADDR(0x00001008)
+#define IOP310_CCR (volatile u32 *)IOP310_REG_ADDR(0x00001009)
+#define IOP310_CLSR (volatile u8 *)IOP310_REG_ADDR(0x0000100C)
+#define IOP310_PLTR (volatile u8 *)IOP310_REG_ADDR(0x0000100D)
+#define IOP310_HTR (volatile u8 *)IOP310_REG_ADDR(0x0000100E)
+/* Reserved 0x0000100F through 0x00001017 */
+#define IOP310_PBNR (volatile u8 *)IOP310_REG_ADDR(0x00001018)
+#define IOP310_SBNR (volatile u8 *)IOP310_REG_ADDR(0x00001019)
+#define IOP310_SUBBNR (volatile u8 *)IOP310_REG_ADDR(0x0000101A)
+#define IOP310_SLTR (volatile u8 *)IOP310_REG_ADDR(0x0000101B)
+#define IOP310_IOBR (volatile u8 *)IOP310_REG_ADDR(0x0000101C)
+#define IOP310_IOLR (volatile u8 *)IOP310_REG_ADDR(0x0000101D)
+#define IOP310_SSR (volatile u16 *)IOP310_REG_ADDR(0x0000101E)
+#define IOP310_MBR (volatile u16 *)IOP310_REG_ADDR(0x00001020)
+#define IOP310_MLR (volatile u16 *)IOP310_REG_ADDR(0x00001022)
+#define IOP310_PMBR (volatile u16 *)IOP310_REG_ADDR(0x00001024)
+#define IOP310_PMLR (volatile u16 *)IOP310_REG_ADDR(0x00001026)
+/* Reserved 0x00001028 through 0x00001033 */
+#define IOP310_CAPR (volatile u8 *)IOP310_REG_ADDR(0x00001034)
+/* Reserved 0x00001035 through 0x0000103D */
+#define IOP310_BCR (volatile u16 *)IOP310_REG_ADDR(0x0000103E)
+#define IOP310_EBCR (volatile u16 *)IOP310_REG_ADDR(0x00001040)
+#define IOP310_SISR (volatile u16 *)IOP310_REG_ADDR(0x00001042)
+#define IOP310_PBISR (volatile u32 *)IOP310_REG_ADDR(0x00001044)
+#define IOP310_SBISR (volatile u32 *)IOP310_REG_ADDR(0x00001048)
+#define IOP310_SACR (volatile u32 *)IOP310_REG_ADDR(0x0000104C)
+#define IOP310_PIRSR (volatile u32 *)IOP310_REG_ADDR(0x00001050)
+#define IOP310_SIOBR (volatile u8 *)IOP310_REG_ADDR(0x00001054)
+#define IOP310_SIOLR (volatile u8 *)IOP310_REG_ADDR(0x00001055)
+#define IOP310_SCDR (volatile u8 *)IOP310_REG_ADDR(0x00001056)
+
+#define IOP310_SMBR (volatile u16 *)IOP310_REG_ADDR(0x00001058)
+#define IOP310_SMLR (volatile u16 *)IOP310_REG_ADDR(0x0000105A)
+#define IOP310_SDER (volatile u16 *)IOP310_REG_ADDR(0x0000105C)
+#define IOP310_QCR (volatile u16 *)IOP310_REG_ADDR(0x0000105E)
+#define IOP310_CAPID (volatile u8 *)IOP310_REG_ADDR(0x00001068)
+#define IOP310_NIPTR (volatile u8 *)IOP310_REG_ADDR(0x00001069)
+#define IOP310_PMCR (volatile u16 *)IOP310_REG_ADDR(0x0000106A)
+#define IOP310_PMCSR (volatile u16 *)IOP310_REG_ADDR(0x0000106C)
+#define IOP310_PMCSRBSE (volatile u8 *)IOP310_REG_ADDR(0x0000106E)
+/* Reserved 0x00001070 through 0x000010FF */
+
+/* Performance monitoring unit 0x00001100 through 0x000011FF*/
+#define IOP310_PMONGTMR (volatile u32 *)IOP310_REG_ADDR(0x00001100)
+#define IOP310_PMONESR (volatile u32 *)IOP310_REG_ADDR(0x00001104)
+#define IOP310_PMONEMISR (volatile u32 *)IOP310_REG_ADDR(0x00001108)
+#define IOP310_PMONGTSR (volatile u32 *)IOP310_REG_ADDR(0x00001110)
+#define IOP310_PMONPECR1 (volatile u32 *)IOP310_REG_ADDR(0x00001114)
+#define IOP310_PMONPECR2 (volatile u32 *)IOP310_REG_ADDR(0x00001118)
+#define IOP310_PMONPECR3 (volatile u32 *)IOP310_REG_ADDR(0x0000111C)
+#define IOP310_PMONPECR4 (volatile u32 *)IOP310_REG_ADDR(0x00001120)
+#define IOP310_PMONPECR5 (volatile u32 *)IOP310_REG_ADDR(0x00001124)
+#define IOP310_PMONPECR6 (volatile u32 *)IOP310_REG_ADDR(0x00001128)
+#define IOP310_PMONPECR7 (volatile u32 *)IOP310_REG_ADDR(0x0000112C)
+#define IOP310_PMONPECR8 (volatile u32 *)IOP310_REG_ADDR(0x00001130)
+#define IOP310_PMONPECR9 (volatile u32 *)IOP310_REG_ADDR(0x00001134)
+#define IOP310_PMONPECR10 (volatile u32 *)IOP310_REG_ADDR(0x00001138)
+#define IOP310_PMONPECR11 (volatile u32 *)IOP310_REG_ADDR(0x0000113C)
+#define IOP310_PMONPECR12 (volatile u32 *)IOP310_REG_ADDR(0x00001140)
+#define IOP310_PMONPECR13 (volatile u32 *)IOP310_REG_ADDR(0x00001144)
+#define IOP310_PMONPECR14 (volatile u32 *)IOP310_REG_ADDR(0x00001148)
+
+/* Address Translation Unit 0x00001200 through 0x000012FF */
+#define IOP310_ATUVID (volatile u16 *)IOP310_REG_ADDR(0x00001200)
+#define IOP310_ATUDID (volatile u16 *)IOP310_REG_ADDR(0x00001202)
+#define IOP310_PATUCMD (volatile u16 *)IOP310_REG_ADDR(0x00001204)
+#define IOP310_PATUSR (volatile u16 *)IOP310_REG_ADDR(0x00001206)
+#define IOP310_ATURID (volatile u8 *)IOP310_REG_ADDR(0x00001208)
+#define IOP310_ATUCCR (volatile u32 *)IOP310_REG_ADDR(0x00001209)
+#define IOP310_ATUCLSR (volatile u8 *)IOP310_REG_ADDR(0x0000120C)
+#define IOP310_ATULT (volatile u8 *)IOP310_REG_ADDR(0x0000120D)
+#define IOP310_ATUHTR (volatile u8 *)IOP310_REG_ADDR(0x0000120E)
+
+#define IOP310_PIABAR (volatile u32 *)IOP310_REG_ADDR(0x00001210)
+/* Reserved 0x00001214 through 0x0000122B */
+#define IOP310_ASVIR (volatile u16 *)IOP310_REG_ADDR(0x0000122C)
+#define IOP310_ASIR (volatile u16 *)IOP310_REG_ADDR(0x0000122E)
+#define IOP310_ERBAR (volatile u32 *)IOP310_REG_ADDR(0x00001230)
+#define IOP310_ATUCAPPTR (volatile u8 *)IOP310_REG_ADDR(0x00001234)
+/* Reserved 0x00001235 through 0x0000123B */
+#define IOP310_ATUILR (volatile u8 *)IOP310_REG_ADDR(0x0000123C)
+#define IOP310_ATUIPR (volatile u8 *)IOP310_REG_ADDR(0x0000123D)
+#define IOP310_ATUMGNT (volatile u8 *)IOP310_REG_ADDR(0x0000123E)
+#define IOP310_ATUMLAT (volatile u8 *)IOP310_REG_ADDR(0x0000123F)
+#define IOP310_PIALR (volatile u32 *)IOP310_REG_ADDR(0x00001240)
+#define IOP310_PIATVR (volatile u32 *)IOP310_REG_ADDR(0x00001244)
+#define IOP310_SIABAR (volatile u32 *)IOP310_REG_ADDR(0x00001248)
+#define IOP310_SIALR (volatile u32 *)IOP310_REG_ADDR(0x0000124C)
+#define IOP310_SIATVR (volatile u32 *)IOP310_REG_ADDR(0x00001250)
+#define IOP310_POMWVR (volatile u32 *)IOP310_REG_ADDR(0x00001254)
+/* Reserved 0x00001258 through 0x0000125B */
+#define IOP310_POIOWVR (volatile u32 *)IOP310_REG_ADDR(0x0000125C)
+#define IOP310_PODWVR (volatile u32 *)IOP310_REG_ADDR(0x00001260)
+#define IOP310_POUDR (volatile u32 *)IOP310_REG_ADDR(0x00001264)
+#define IOP310_SOMWVR (volatile u32 *)IOP310_REG_ADDR(0x00001268)
+#define IOP310_SOIOWVR (volatile u32 *)IOP310_REG_ADDR(0x0000126C)
+/* Reserved 0x00001270 through 0x00001273 */
+#define IOP310_ERLR (volatile u32 *)IOP310_REG_ADDR(0x00001274)
+#define IOP310_ERTVR (volatile u32 *)IOP310_REG_ADDR(0x00001278)
+/* Reserved 0x00001279 through 0x0000127C */
+#define IOP310_ATUCAPID (volatile u8 *)IOP310_REG_ADDR(0x00001280)
+#define IOP310_ATUNIPTR (volatile u8 *)IOP310_REG_ADDR(0x00001281)
+#define IOP310_APMCR (volatile u16 *)IOP310_REG_ADDR(0x00001282)
+#define IOP310_APMCSR (volatile u16 *)IOP310_REG_ADDR(0x00001284)
+/* Reserved 0x00001286 through 0x00001287 */
+#define IOP310_ATUCR (volatile u32 *)IOP310_REG_ADDR(0x00001288)
+/* Reserved 0x00001289 through 0x0000128C */
+#define IOP310_PATUISR (volatile u32 *)IOP310_REG_ADDR(0x00001290)
+#define IOP310_SATUISR (volatile u32 *)IOP310_REG_ADDR(0x00001294)
+#define IOP310_SATUCMD (volatile u16 *)IOP310_REG_ADDR(0x00001298)
+#define IOP310_SATUSR (volatile u16 *)IOP310_REG_ADDR(0x0000129A)
+#define IOP310_SODWVR (volatile u32 *)IOP310_REG_ADDR(0x0000129C)
+#define IOP310_SOUDR (volatile u32 *)IOP310_REG_ADDR(0x000012A0)
+#define IOP310_POCCAR (volatile u32 *)IOP310_REG_ADDR(0x000012A4)
+#define IOP310_SOCCAR (volatile u32 *)IOP310_REG_ADDR(0x000012A8)
+#define IOP310_POCCDR (volatile u32 *)IOP310_REG_ADDR(0x000012AC)
+#define IOP310_SOCCDR (volatile u32 *)IOP310_REG_ADDR(0x000012B0)
+#define IOP310_PAQCR (volatile u32 *)IOP310_REG_ADDR(0x000012B4)
+#define IOP310_SAQCR (volatile u32 *)IOP310_REG_ADDR(0x000012B8)
+#define IOP310_PATUIMR (volatile u32 *)IOP310_REG_ADDR(0x000012BC)
+#define IOP310_SATUIMR (volatile u32 *)IOP310_REG_ADDR(0x000012C0)
+/* Reserved 0x000012C4 through 0x000012FF */
+/* Messaging Unit 0x00001300 through 0x000013FF */
+#define IOP310_MUIMR0 (volatile u32 *)IOP310_REG_ADDR(0x00001310)
+#define IOP310_MUIMR1 (volatile u32 *)IOP310_REG_ADDR(0x00001314)
+#define IOP310_MUOMR0 (volatile u32 *)IOP310_REG_ADDR(0x00001318)
+#define IOP310_MUOMR1 (volatile u32 *)IOP310_REG_ADDR(0x0000131C)
+#define IOP310_MUIDR (volatile u32 *)IOP310_REG_ADDR(0x00001320)
+#define IOP310_MUIISR (volatile u32 *)IOP310_REG_ADDR(0x00001324)
+#define IOP310_MUIIMR (volatile u32 *)IOP310_REG_ADDR(0x00001328)
+#define IOP310_MUODR (volatile u32 *)IOP310_REG_ADDR(0x0000132C)
+#define IOP310_MUOISR (volatile u32 *)IOP310_REG_ADDR(0x00001330)
+#define IOP310_MUOIMR (volatile u32 *)IOP310_REG_ADDR(0x00001334)
+#define IOP310_MUMUCR (volatile u32 *)IOP310_REG_ADDR(0x00001350)
+#define IOP310_MUQBAR (volatile u32 *)IOP310_REG_ADDR(0x00001354)
+#define IOP310_MUIFHPR (volatile u32 *)IOP310_REG_ADDR(0x00001360)
+#define IOP310_MUIFTPR (volatile u32 *)IOP310_REG_ADDR(0x00001364)
+#define IOP310_MUIPHPR (volatile u32 *)IOP310_REG_ADDR(0x00001368)
+#define IOP310_MUIPTPR (volatile u32 *)IOP310_REG_ADDR(0x0000136C)
+#define IOP310_MUOFHPR (volatile u32 *)IOP310_REG_ADDR(0x00001370)
+#define IOP310_MUOFTPR (volatile u32 *)IOP310_REG_ADDR(0x00001374)
+#define IOP310_MUOPHPR (volatile u32 *)IOP310_REG_ADDR(0x00001378)
+#define IOP310_MUOPTPR (volatile u32 *)IOP310_REG_ADDR(0x0000137C)
+#define IOP310_MUIAR (volatile u32 *)IOP310_REG_ADDR(0x00001380)
+/* DMA Controller 0x00001400 through 0x000014FF */
+#define IOP310_DMA0CCR (volatile u32 *)IOP310_REG_ADDR(0x00001400)
+#define IOP310_DMA0CSR (volatile u32 *)IOP310_REG_ADDR(0x00001404)
+/* Reserved 0x00001408 through 0x0000140B */
+#define IOP310_DMA0DAR (volatile u32 *)IOP310_REG_ADDR(0x0000140C)
+#define IOP310_DMA0NDAR (volatile u32 *)IOP310_REG_ADDR(0x00001410)
+#define IOP310_DMA0PADR (volatile u32 *)IOP310_REG_ADDR(0x00001414)
+#define IOP310_DMA0PUADR (volatile u32 *)IOP310_REG_ADDR(0x00001418)
+#define IOP310_DMA0LADR (volatile u32 *)IOP310_REG_ADDR(0x0000141C)
+#define IOP310_DMA0BCR (volatile u32 *)IOP310_REG_ADDR(0x00001420)
+#define IOP310_DMA0DCR (volatile u32 *)IOP310_REG_ADDR(0x00001424)
+/* Reserved 0x00001428 through 0x0000143F */
+#define IOP310_DMA1CCR (volatile u32 *)IOP310_REG_ADDR(0x00001440)
+#define IOP310_DMA1CSR (volatile u32 *)IOP310_REG_ADDR(0x00001444)
+/* Reserved 0x00001448 through 0x0000144B */
+#define IOP310_DMA1DAR (volatile u32 *)IOP310_REG_ADDR(0x0000144C)
+#define IOP310_DMA1NDAR (volatile u32 *)IOP310_REG_ADDR(0x00001450)
+#define IOP310_DMA1PADR (volatile u32 *)IOP310_REG_ADDR(0x00001454)
+#define IOP310_DMA1PUADR (volatile u32 *)IOP310_REG_ADDR(0x00001458)
+#define IOP310_DMA1LADR (volatile u32 *)IOP310_REG_ADDR(0x0000145C)
+#define IOP310_DMA1BCR (volatile u32 *)IOP310_REG_ADDR(0x00001460)
+#define IOP310_DMA1DCR (volatile u32 *)IOP310_REG_ADDR(0x00001464)
+/* Reserved 0x00001468 through 0x0000147F */
+#define IOP310_DMA2CCR (volatile u32 *)IOP310_REG_ADDR(0x00001480)
+#define IOP310_DMA2CSR (volatile u32 *)IOP310_REG_ADDR(0x00001484)
+/* Reserved 0x00001488 through 0x0000148B */
+#define IOP310_DMA2DAR (volatile u32 *)IOP310_REG_ADDR(0x0000148C)
+#define IOP310_DMA2NDAR (volatile u32 *)IOP310_REG_ADDR(0x00001490)
+#define IOP310_DMA2PADR (volatile u32 *)IOP310_REG_ADDR(0x00001494)
+#define IOP310_DMA2PUADR (volatile u32 *)IOP310_REG_ADDR(0x00001498)
+#define IOP310_DMA2LADR (volatile u32 *)IOP310_REG_ADDR(0x0000149C)
+#define IOP310_DMA2BCR (volatile u32 *)IOP310_REG_ADDR(0x000014A0)
+#define IOP310_DMA2DCR (volatile u32 *)IOP310_REG_ADDR(0x000014A4)
+
+/* Memory controller 0x00001500 through 0x000015FF */
+
+/* core interface unit 0x00001640 - 0x0000167F */
+#define IOP310_CIUISR (volatile u32 *)IOP310_REG_ADDR(0x00001644)
+
+/* PCI and Peripheral Interrupt Controller 0x00001700 - 0x0000171B */
+#define IOP310_IRQISR (volatile u32 *)IOP310_REG_ADDR(0x00001700)
+#define IOP310_FIQ2ISR (volatile u32 *)IOP310_REG_ADDR(0x00001704)
+#define IOP310_FIQ1ISR (volatile u32 *)IOP310_REG_ADDR(0x00001708)
+#define IOP310_PDIDR (volatile u32 *)IOP310_REG_ADDR(0x00001710)
+
+/* AAU registers 0x00001800 - 0x00001838 */
+#define IOP310_AAUACR (volatile u32 *)IOP310_REG_ADDR(0x00001800)
+#define IOP310_AAUASR (volatile u32 *)IOP310_REG_ADDR(0x00001804)
+#define IOP310_AAUADAR (volatile u32 *)IOP310_REG_ADDR(0x00001808)
+#define IOP310_AAUANDAR (volatile u32 *)IOP310_REG_ADDR(0x0000180C)
+#define IOP310_AAUSAR1 (volatile u32 *)IOP310_REG_ADDR(0x00001810)
+#define IOP310_AAUSAR2 (volatile u32 *)IOP310_REG_ADDR(0x00001814)
+#define IOP310_AAUSAR3 (volatile u32 *)IOP310_REG_ADDR(0x00001818)
+#define IOP310_AAUSAR4 (volatile u32 *)IOP310_REG_ADDR(0x0000181C)
+#define IOP310_AAUDAR (volatile u32 *)IOP310_REG_ADDR(0x00001820)
+#define IOP310_AAUABCR (volatile u32 *)IOP310_REG_ADDR(0x00001824)
+#define IOP310_AAUADCR (volatile u32 *)IOP310_REG_ADDR(0x00001828)
+#define IOP310_AAUSAR5 (volatile u32 *)IOP310_REG_ADDR(0x0000182C)
+#define IOP310_AAUSAR6 (volatile u32 *)IOP310_REG_ADDR(0x00001830)
+#define IOP310_AAUSAR7 (volatile u32 *)IOP310_REG_ADDR(0x00001834)
+#define IOP310_AAUSAR8 (volatile u32 *)IOP310_REG_ADDR(0x00001838)
+
+#endif /* _IOP310_HW_H_ */
--- /dev/null
+/*
+ * linux/include/asm/arch-iop80310/iq80310.h
+ *
+ * Intel IQ-80310 evaluation board registers
+ */
+
+#ifndef _IQ80310_H_
+#define _IQ80310_H_
+
+#define IQ80310_RAMBASE 0xa0000000
+#define IQ80310_UART1 0xfe800000 /* UART #1 */
+#define IQ80310_UART2 0xfe810000 /* UART #2 */
+#define IQ80310_INT_STAT 0xfe820000 /* Interrupt (XINT3#) Status */
+#define IQ80310_BOARD_REV 0xfe830000 /* Board revision register */
+#define IQ80310_CPLD_REV 0xfe840000 /* CPLD revision register */
+#define IQ80310_7SEG_1 0xfe840000 /* 7-Segment MSB */
+#define IQ80310_7SEG_0 0xfe850000 /* 7-Segment LSB (WO) */
+#define IQ80310_PCI_INT_STAT 0xfe850000 /* PCI Interrupt Status */
+#define IQ80310_INT_MASK 0xfe860000 /* Interrupt (XINT3#) Mask */
+#define IQ80310_BACKPLANE 0xfe870000 /* Backplane Detect */
+#define IQ80310_TIMER_LA0 0xfe880000 /* Timer LA0 */
+#define IQ80310_TIMER_LA1 0xfe890000 /* Timer LA1 */
+#define IQ80310_TIMER_LA2 0xfe8a0000 /* Timer LA2 */
+#define IQ80310_TIMER_LA3 0xfe8b0000 /* Timer LA3 */
+#define IQ80310_TIMER_EN 0xfe8c0000 /* Timer Enable */
+#define IQ80310_ROTARY_SW 0xfe8d0000 /* Rotary Switch */
+#define IQ80310_JTAG 0xfe8e0000 /* JTAG Port Access */
+#define IQ80310_BATT_STAT 0xfe8f0000 /* Battery Status */
+
+#endif /* _IQ80310_H_ */
--- /dev/null
+/*
+ * Definitions for XScale 80312 PMON
+ * (C) 2001 Intel Corporation
+ * Author: Chen Chen(chen.chen@intel.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef _IOP310_PMON_H_
+#define _IOP310_PMON_H_
+
+/*
+ * Different modes for Event Select Register for intel 80312
+ */
+
+#define IOP310_PMON_MODE0 0x00000000
+#define IOP310_PMON_MODE1 0x00000001
+#define IOP310_PMON_MODE2 0x00000002
+#define IOP310_PMON_MODE3 0x00000003
+#define IOP310_PMON_MODE4 0x00000004
+#define IOP310_PMON_MODE5 0x00000005
+#define IOP310_PMON_MODE6 0x00000006
+#define IOP310_PMON_MODE7 0x00000007
+
+typedef struct _iop310_pmon_result
+{
+ u32 timestamp; /* Global Time Stamp Register */
+ u32 timestamp_overflow; /* Time Stamp overflow count */
+ u32 event_count[14]; /* Programmable Event Counter
+ Registers 1-14 */
+ u32 event_overflow[14]; /* Overflow counter for PECR1-14 */
+} iop310_pmon_res_t;
+
+/* function prototypes */
+
+/* Claim IQ80312 PMON for usage */
+int iop310_pmon_claim(void);
+
+/* Start IQ80312 PMON */
+int iop310_pmon_start(int, int);
+
+/* Stop Performance Monitor Unit */
+int iop310_pmon_stop(iop310_pmon_res_t *);
+
+/* Release IQ80312 PMON */
+int iop310_pmon_release(int);
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-omap/bus.h
+ *
+ * Virtual bus for OMAP. Allows better power management, such as managing
+ * shared clocks, and mapping of bus addresses to Local Bus addresses.
+ *
+ * See drivers/usb/host/ohci-omap.c or drivers/video/omap/omapfb.c for
+ * examples on how to register drivers to this bus.
+ *
+ * Copyright (C) 2003 - 2004 Nokia Corporation
+ * Written by Tony Lindgren <tony@atomide.com>
+ * Portions of code based on sa1111.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __ASM_ARM_ARCH_OMAP_BUS_H
+#define __ASM_ARM_ARCH_OMAP_BUS_H
+
+extern struct bus_type omap_bus_types[];
+
+/*
+ * Description for physical device
+ */
+struct omap_dev {
+ struct device dev; /* Standard device description */
+ char *name;
+ unsigned int devid; /* OMAP device id */
+ unsigned int busid; /* OMAP virtual busid */
+ struct resource res; /* Standard resource description */
+ void *mapbase; /* OMAP physical address */
+ unsigned int irq[6]; /* OMAP interrupts */
+ u64 *dma_mask; /* Used by USB OHCI only */
+ u64 coherent_dma_mask; /* Used by USB OHCI only */
+};
+
+#define OMAP_DEV(_d) container_of((_d), struct omap_dev, dev)
+
+#define omap_get_drvdata(d) dev_get_drvdata(&(d)->dev)
+#define omap_set_drvdata(d,p) dev_set_drvdata(&(d)->dev, p)
+
+/*
+ * Description for device driver
+ */
+struct omap_driver {
+ struct device_driver drv; /* Standard driver description */
+ unsigned int devid; /* OMAP device id for bus */
+ unsigned int busid; /* OMAP virtual busid */
+ unsigned int clocks; /* OMAP shared clocks */
+ int (*probe)(struct omap_dev *);
+ int (*remove)(struct omap_dev *);
+ int (*suspend)(struct omap_dev *, u32);
+ int (*resume)(struct omap_dev *);
+};
+
+#define OMAP_DRV(_d) container_of((_d), struct omap_driver, drv)
+#define OMAP_DRIVER_NAME(_omapdev) ((_omapdev)->dev.driver->name)
+
+/*
+ * Device ID numbers for bus types
+ */
+#define OMAP_OCP_DEVID_USB 0
+
+#define OMAP_TIPB_DEVID_OHCI 0
+#define OMAP_TIPB_DEVID_LCD 1
+#define OMAP_TIPB_DEVID_MMC 2
+#define OMAP_TIPB_DEVID_OTG 3
+#define OMAP_TIPB_DEVID_UDC 4
+
+/*
+ * Virtual bus definitions for OMAP
+ */
+#define OMAP_NR_BUSES 2
+
+#define OMAP_BUS_NAME_TIPB "tipb"
+#define OMAP_BUS_NAME_LBUS "lbus"
+
+enum {
+ OMAP_BUS_TIPB = 0,
+ OMAP_BUS_LBUS,
+};
+
+/* See arch/arm/mach-omap/bus.c for the rest of the bus definitions. */
+
+extern int omap_driver_register(struct omap_driver *driver);
+extern void omap_driver_unregister(struct omap_driver *driver);
+extern int omap_device_register(struct omap_dev *odev);
+extern void omap_device_unregister(struct omap_dev *odev);
+
+#endif
--- /dev/null
+#ifndef _ASM_ARM_RELAY_H
+#define _ASM_ARM_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_ARM_RELAY_H
+#define _ASM_ARM_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_CRIS_RELAY_H
+#define _ASM_CRIS_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_GENERIC_RELAY_H
+#define _ASM_GENERIC_RELAY_H
+/*
+ * linux/include/asm-generic/relay.h
+ *
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 2002 - Karim Yaghmour (karim@opersys.com)
+ *
+ * Architecture-independent definitions for relayfs
+ */
+
+#include <linux/relayfs_fs.h>
+
+/**
+ * get_time_delta - utility function for getting time delta
+ * @now: pointer to a timeval struct that will be filled with the current time
+ * @rchan: the channel
+ *
+ * Returns the time difference between the current time and the buffer
+ * start time.
+ */
+static inline u32
+get_time_delta(struct timeval *now, struct rchan *rchan)
+{
+ u32 time_delta;
+
+ do_gettimeofday(now);
+ time_delta = calc_time_delta(now, &rchan->buf_start_time);
+
+ return time_delta;
+}
+
+/**
+ * get_timestamp - utility function for getting a time and TSC pair
+ * @now: current time
+ * @tsc: the TSC associated with now
+ * @rchan: the channel
+ *
+ * Sets the value pointed to by now to the current time. Value pointed to
+ * by tsc is not set since there is no generic TSC support.
+ */
+static inline void
+get_timestamp(struct timeval *now,
+ u32 *tsc,
+ struct rchan *rchan)
+{
+ do_gettimeofday(now);
+}
+
+/**
+ * get_time_or_tsc - utility function for getting a time or a TSC
+ * @now: current time
+ * @tsc: current TSC
+ * @rchan: the channel
+ *
+ * Sets the value pointed to by now to the current time.
+ */
+static inline void
+get_time_or_tsc(struct timeval *now,
+ u32 *tsc,
+ struct rchan *rchan)
+{
+ do_gettimeofday(now);
+}
+
+/**
+ * have_tsc - does this platform have a useable TSC?
+ *
+ * Returns 0.
+ */
+static inline int
+have_tsc(void)
+{
+ return 0;
+}
+#endif
--- /dev/null
+#ifndef _ASM_H8300_RELAY_H
+#define _ASM_H8300_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef __ASM_SOFTIRQ_H
+#define __ASM_SOFTIRQ_H
+
+#include <linux/preempt.h>
+#include <asm/hardirq.h>
+
+#define local_bh_disable() \
+ do { preempt_count() += SOFTIRQ_OFFSET; barrier(); } while (0)
+#define __local_bh_enable() \
+ do { barrier(); preempt_count() -= SOFTIRQ_OFFSET; } while (0)
+
+#define local_bh_enable() \
+do { \
+ __local_bh_enable(); \
+ if (unlikely(!in_interrupt() && softirq_pending(smp_processor_id()))) \
+ do_softirq(); \
+ preempt_check_resched(); \
+} while (0)
+
+#endif /* __ASM_SOFTIRQ_H */
--- /dev/null
+/*
+ * atomic_kmap.h: temporary virtual kernel memory mappings
+ *
+ * Copyright (C) 2003 Ingo Molnar <mingo@redhat.com>
+ */
+
+#ifndef _ASM_ATOMIC_KMAP_H
+#define _ASM_ATOMIC_KMAP_H
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <asm/tlbflush.h>
+
+#ifdef CONFIG_DEBUG_HIGHMEM
+#define HIGHMEM_DEBUG 1
+#else
+#define HIGHMEM_DEBUG 0
+#endif
+
+extern pte_t *kmap_pte;
+#define kmap_prot PAGE_KERNEL
+#define kmap_prot_nocache PAGE_KERNEL_NOCACHE
+
+#define PKMAP_BASE (0xff000000UL)
+#define NR_SHARED_PMDS ((0xffffffff-PKMAP_BASE+1)/PMD_SIZE)
+
+static inline unsigned long __kmap_atomic_vaddr(enum km_type type)
+{
+ enum fixed_addresses idx;
+
+ idx = type + KM_TYPE_NR*smp_processor_id();
+ return __fix_to_virt(FIX_KMAP_BEGIN + idx);
+}
+
+static inline void *__kmap_atomic_noflush(struct page *page, enum km_type type)
+{
+ enum fixed_addresses idx;
+ unsigned long vaddr;
+
+ idx = type + KM_TYPE_NR*smp_processor_id();
+ vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+ /*
+ * NOTE: entries that rely on some secondary TLB-flush
+ * effect must not be global:
+ */
+ set_pte(kmap_pte-idx, mk_pte(page, PAGE_KERNEL));
+
+ return (void*) vaddr;
+}
+
+static inline void *__kmap_atomic(struct page *page, enum km_type type)
+{
+ enum fixed_addresses idx;
+ unsigned long vaddr;
+
+ idx = type + KM_TYPE_NR*smp_processor_id();
+ vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
+#if HIGHMEM_DEBUG
+ BUG_ON(!pte_none(*(kmap_pte-idx)));
+#else
+ /*
+ * Performance optimization - do not flush if the new
+ * pte is the same as the old one:
+ */
+ if (pte_val(*(kmap_pte-idx)) == pte_val(mk_pte(page, kmap_prot)))
+ return (void *) vaddr;
+#endif
+ set_pte(kmap_pte-idx, mk_pte(page, kmap_prot));
+ __flush_tlb_one(vaddr);
+
+ return (void*) vaddr;
+}
+
+static inline void __kunmap_atomic(void *kvaddr, enum km_type type)
+{
+#if HIGHMEM_DEBUG
+ unsigned long vaddr = (unsigned long) kvaddr & PAGE_MASK;
+ enum fixed_addresses idx = type + KM_TYPE_NR*smp_processor_id();
+
+ BUG_ON(vaddr != __fix_to_virt(FIX_KMAP_BEGIN+idx));
+ /*
+	 * Force other mappings to Oops if they try to access
+	 * this pte without first remapping it:
+ */
+ pte_clear(kmap_pte-idx);
+ __flush_tlb_one(vaddr);
+#endif
+}
+
+#define __kunmap_atomic_type(type) \
+ __kunmap_atomic((void *)__kmap_atomic_vaddr(type), (type))
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_ATOMIC_KMAP_H */
--- /dev/null
+/*
+ * Kernel header file for Linux crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sgi.com)
+ *
+ * Copyright 1999 Silicon Graphics, Inc. All rights reserved.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/* This header file holds the architecture specific crash dump header */
+#ifndef _ASM_DUMP_H
+#define _ASM_DUMP_H
+
+/* necessary header files */
+#include <asm/ptrace.h>
+#include <asm/page.h>
+#include <linux/threads.h>
+#include <linux/mm.h>
+
+/* definitions */
+#define DUMP_ASM_MAGIC_NUMBER 0xdeaddeadULL /* magic number */
+#define DUMP_ASM_VERSION_NUMBER 0x3 /* version number */
+
+/* max number of cpus */
+#define DUMP_MAX_NUM_CPUS 32
+
+/*
+ * Structure: __dump_header_asm
+ * Function: This is the header for architecture-specific stuff. It
+ * follows right after the dump header.
+ */
+struct __dump_header_asm {
+ /* the dump magic number -- unique to verify dump is valid */
+ u64 dha_magic_number;
+
+ /* the version number of this dump */
+ u32 dha_version;
+
+ /* the size of this header (in case we can't read it) */
+ u32 dha_header_size;
+
+ /* the esp for i386 systems */
+ u32 dha_esp;
+
+ /* the eip for i386 systems */
+ u32 dha_eip;
+
+ /* the dump registers */
+ struct pt_regs dha_regs;
+
+ /* smp specific */
+ u32 dha_smp_num_cpus;
+ u32 dha_dumping_cpu;
+ struct pt_regs dha_smp_regs[DUMP_MAX_NUM_CPUS];
+ u32 dha_smp_current_task[DUMP_MAX_NUM_CPUS];
+ u32 dha_stack[DUMP_MAX_NUM_CPUS];
+ u32 dha_stack_ptr[DUMP_MAX_NUM_CPUS];
+} __attribute__((packed));
+
+#ifdef __KERNEL__
+
+extern struct __dump_header_asm dump_header_asm;
+
+#ifdef CONFIG_SMP
+extern cpumask_t irq_affinity[];
+extern int (*dump_ipi_function_ptr)(struct pt_regs *);
+extern void dump_send_ipi(void);
+#else
+#define dump_send_ipi() do { } while(0)
+#endif
+
+static inline void get_current_regs(struct pt_regs *regs)
+{
+ __asm__ __volatile__("movl %%ebx,%0" : "=m"(regs->ebx));
+ __asm__ __volatile__("movl %%ecx,%0" : "=m"(regs->ecx));
+ __asm__ __volatile__("movl %%edx,%0" : "=m"(regs->edx));
+ __asm__ __volatile__("movl %%esi,%0" : "=m"(regs->esi));
+ __asm__ __volatile__("movl %%edi,%0" : "=m"(regs->edi));
+ __asm__ __volatile__("movl %%ebp,%0" : "=m"(regs->ebp));
+ __asm__ __volatile__("movl %%eax,%0" : "=m"(regs->eax));
+ __asm__ __volatile__("movl %%esp,%0" : "=m"(regs->esp));
+ __asm__ __volatile__("movw %%ss, %%ax;" :"=a"(regs->xss));
+ __asm__ __volatile__("movw %%cs, %%ax;" :"=a"(regs->xcs));
+ __asm__ __volatile__("movw %%ds, %%ax;" :"=a"(regs->xds));
+ __asm__ __volatile__("movw %%es, %%ax;" :"=a"(regs->xes));
+ __asm__ __volatile__("pushfl; popl %0" :"=m"(regs->eflags));
+ regs->eip = (unsigned long)current_text_addr();
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_DUMP_H */
D(11) KM_SOFTIRQ0,
D(12) KM_SOFTIRQ1,
D(13) KM_CRASHDUMP,
-D(14) KM_UNUSED,
+D(14) KM_NETDUMP,
D(15) KM_TYPE_NR
};
-
#undef D
-
#endif
--- /dev/null
+#ifndef _ASM_I386_RELAY_H
+#define _ASM_I386_RELAY_H
+/*
+ * linux/include/asm-i386/relay.h
+ *
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 2002 - Karim Yaghmour (karim@opersys.com)
+ *
+ * i386 definitions for relayfs
+ */
+
+#include <linux/relayfs_fs.h>
+
+#ifdef CONFIG_X86_TSC
+#include <asm/msr.h>
+
+/**
+ * get_time_delta - utility function for getting time delta
+ * @now: pointer to a timeval struct that may be filled with the current time
+ * @rchan: the channel
+ *
+ * Returns the current TSC value if TSCs are being used, or the time
+ * difference between the current time and the buffer start time if
+ * TSCs are not being used.
+ */
+static inline u32
+get_time_delta(struct timeval *now, struct rchan *rchan)
+{
+ u32 time_delta;
+
+ if ((using_tsc(rchan) == 1) && cpu_has_tsc)
+ rdtscl(time_delta);
+ else {
+ do_gettimeofday(now);
+ time_delta = calc_time_delta(now, &rchan->buf_start_time);
+ }
+
+ return time_delta;
+}
+
+/**
+ * get_timestamp - utility function for getting a time and TSC pair
+ * @now: current time
+ * @tsc: the TSC associated with now
+ * @rchan: the channel
+ *
+ * Sets the value pointed to by now to the current time and the value
+ * pointed to by tsc to the tsc associated with that time, if the
+ * platform supports TSC.
+ */
+static inline void
+get_timestamp(struct timeval *now,
+ u32 *tsc,
+ struct rchan *rchan)
+{
+ do_gettimeofday(now);
+
+ if ((using_tsc(rchan) == 1) && cpu_has_tsc)
+ rdtscl(*tsc);
+}
+
+/**
+ * get_time_or_tsc - utility function for getting a time or a TSC
+ * @now: current time
+ * @tsc: current TSC
+ * @rchan: the channel
+ *
+ * Sets the value pointed to by now to the current time or the value
+ * pointed to by tsc to the current tsc, depending on whether we're
+ * using TSCs or not.
+ */
+static inline void
+get_time_or_tsc(struct timeval *now,
+ u32 *tsc,
+ struct rchan *rchan)
+{
+ if ((using_tsc(rchan) == 1) && cpu_has_tsc)
+ rdtscl(*tsc);
+ else
+ do_gettimeofday(now);
+}
+
+/**
+ * have_tsc - does this platform have a useable TSC?
+ *
+ * Returns 1 if this platform has a useable TSC counter for
+ * timestamping purposes, 0 otherwise.
+ */
+static inline int
+have_tsc(void)
+{
+ if (cpu_has_tsc)
+ return 1;
+ else
+ return 0;
+}
+
+#else /* No TSC support (#ifdef CONFIG_X86_TSC) */
+#include <asm-generic/relay.h>
+#endif /* #ifdef CONFIG_X86_TSC */
+#endif
extern cpumask_t cpu_core_map[];
extern void smp_flush_tlb(void);
+extern void dump_send_ipi(void);
extern void smp_message_irq(int cpl, void *dev_id, struct pt_regs *regs);
extern void smp_invalidate_rcv(void); /* Process an NMI */
extern void (*mtrr_hook) (void);
--- /dev/null
+#ifndef _ASM_IA64_RELAY_H
+#define _ASM_IA64_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_CDL_H
+#define _ASM_IA64_SN_CDL_H
+
+#ifdef __KERNEL__
+#include <asm/sn/sgi.h>
+#endif
+
+struct cdl {
+	int part_num;			/* Part number */
+ int mfg_num; /* Part MFG number */
+ int (*attach)(vertex_hdl_t); /* Attach routine */
+};
+
+
+/*
+ * cdl: connection/driver list
+ *
+ * support code for bus infrastructure on busses
+ * that have self-identifying devices; initially
+ * constructed for xtalk, pciio and gioio modules.
+ */
+typedef struct cdl *cdl_p;
+
+/*
+ * cdl_add_connpt: add a connection point
+ *
+ * Calls the attach routines of all the drivers on
+ * the list that match this connection point, in
+ * the order that they were added to the list.
+ */
+extern int cdl_add_connpt(int key1,
+ int key2,
+ vertex_hdl_t conn,
+ int drv_flags);
+#endif /* _ASM_IA64_SN_CDL_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_DMAMAP_H
+#define _ASM_IA64_SN_DMAMAP_H
+
+/*
+ * Definitions for allocating, freeing, and using DMA maps
+ */
+
+/*
+ * DMA map types
+ */
+#define DMA_SCSI 0
+#define DMA_A24VME 1 /* Challenge/Onyx only */
+#define DMA_A32VME 2 /* Challenge/Onyx only */
+#define DMA_A64VME 3 /* SN0/Racer */
+
+#define DMA_EISA 4
+
+#define DMA_PCI32 5 /* SN0/Racer */
+#define DMA_PCI64 6 /* SN0/Racer */
+
+/*
+ * DMA map structure as returned by dma_mapalloc()
+ */
+typedef struct dmamap {
+ int dma_type; /* Map type (see above) */
+ int dma_adap; /* I/O adapter */
+ int dma_index; /* Beginning map register to use */
+ int dma_size; /* Number of map registers to use */
+ paddr_t dma_addr; /* Corresponding bus addr for A24/A32 */
+ unsigned long dma_virtaddr; /* Beginning virtual address that is mapped */
+} dmamap_t;
+
+/* standard flags values for pio_map routines,
+ * including {xtalk,pciio}_dmamap calls.
+ * NOTE: try to keep these in step with PIOMAP flags.
+ */
+#define DMAMAP_FIXED 0x1
+#define DMAMAP_NOSLEEP 0x2
+#define DMAMAP_INPLACE 0x4
+
+#define DMAMAP_FLAGS 0x7
+
+#endif /* _ASM_IA64_SN_DMAMAP_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_DRIVER_H
+#define _ASM_IA64_SN_DRIVER_H
+
+#include <asm/sn/sgi.h>
+#include <asm/types.h>
+
+/*
+** Interface for device driver handle management.
+**
+** These functions are mostly for use by the loadable driver code, and
+** for use by I/O bus infrastructure code.
+*/
+
+typedef struct device_driver_s *device_driver_t;
+
+/* == Driver thread priority support == */
+typedef int ilvl_t;
+
+struct eframe_s;
+struct piomap;
+struct dmamap;
+
+typedef unsigned long iobush_t;
+
+/* interrupt function */
+typedef void *intr_arg_t;
+typedef void intr_func_f(intr_arg_t);
+typedef intr_func_f *intr_func_t;
+
+#define INTR_ARG(n) ((intr_arg_t)(__psunsigned_t)(n))
+
+/* system interrupt resource handle -- returned from intr_alloc */
+typedef struct intr_s *intr_t;
+#define INTR_HANDLE_NONE ((intr_t)0)
+
+/*
+ * restore interrupt level value, returned from intr_block_level
+ * for use with intr_unblock_level.
+ */
+typedef void *rlvl_t;
+
+
+/*
+ * A basic, platform-independent description of I/O requirements for
+ * a device. This structure is usually formed by lboot based on information
+ * in configuration files. It contains information about PIO, DMA, and
+ * interrupt requirements for a specific instance of a device.
+ *
+ * The pio description is currently unused.
+ *
+ * The dma description describes bandwidth characteristics and bandwidth
+ * allocation requirements. (TBD)
+ *
+ * The Interrupt information describes the priority of interrupt, desired
+ * destination, policy (TBD), whether this is an error interrupt, etc.
+ * For now, interrupts are targeted to specific CPUs.
+ */
+
+typedef struct device_desc_s {
+ /* pio description (currently none) */
+
+ /* dma description */
+	/* TBD: allocated bandwidth requirements */
+
+ /* interrupt description */
+ vertex_hdl_t intr_target; /* Hardware locator string */
+ int intr_policy; /* TBD */
+ ilvl_t intr_swlevel; /* software level for blocking intr */
+ char *intr_name; /* name of interrupt, if any */
+
+ int flags;
+} *device_desc_t;
+
+/* flag values */
+#define D_INTR_ISERR 0x1 /* interrupt is for error handling */
+#define D_IS_ASSOC 0x2 /* descriptor is associated with a dev */
+#define D_INTR_NOTHREAD 0x4 /* Interrupt handler isn't threaded. */
+
+#define INTR_SWLEVEL_NOTHREAD_DEFAULT 0 /* Default
+ * Interrupt level in case of
+ * non-threaded interrupt
+ * handlers
+ */
+#endif /* _ASM_IA64_SN_DRIVER_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_HCL_H
+#define _ASM_IA64_SN_HCL_H
+
+#include <linux/fs.h>
+#include <asm/sn/sgi.h>
+
+extern vertex_hdl_t hwgraph_root;
+extern vertex_hdl_t linux_busnum;
+
+void hwgraph_debug(char *, const char *, int, vertex_hdl_t, vertex_hdl_t, char *, ...);
+
+#if 1
+#define HWGRAPH_DEBUG(args...) hwgraph_debug(args)
+#else
+#define HWGRAPH_DEBUG(args)
+#endif
+
+typedef long labelcl_info_place_t;
+typedef long arbitrary_info_t;
+typedef long arb_info_desc_t;
+
+
+/*
+ * Reserve room in every vertex for 2 pieces of fast access indexed information
+ * Note that we do not save a pointer to the bdevsw or cdevsw[] tables anymore.
+ */
+#define HWGRAPH_NUM_INDEX_INFO 2 /* MAX Entries */
+#define HWGRAPH_CONNECTPT	0	/* connect point (parent) */
+#define HWGRAPH_FASTINFO 1 /* callee's private handle */
+
+/*
+ * Reserved edge_place_t values, used as the "place" parameter to edge_get_next.
+ * Every vertex in the hwgraph has up to 2 *implicit* edges. There is an implicit
+ * edge called "." that points to the current vertex. There is an implicit edge
+ * called ".." that points to the vertex' connect point.
+ */
+#define EDGE_PLACE_WANT_CURRENT 0 /* "." */
+#define EDGE_PLACE_WANT_CONNECTPT 1 /* ".." */
+#define EDGE_PLACE_WANT_REAL_EDGES 2 /* Get the first real edge */
+#define HWGRAPH_RESERVED_PLACES 2
+
+
+/*
+ * Special pre-defined edge labels.
+ */
+#define HWGRAPH_EDGELBL_HW "hw"
+#define HWGRAPH_EDGELBL_DOT "."
+#define HWGRAPH_EDGELBL_DOTDOT ".."
+
+#include <asm/sn/labelcl.h>
+#define hwgraph_fastinfo_set(a,b) labelcl_info_replace_IDX(a, HWGRAPH_FASTINFO, b, NULL)
+#define hwgraph_connectpt_set labelcl_info_connectpt_set
+#define hwgraph_generate_path hwgfs_generate_path
+#define hwgraph_path_to_vertex(a) hwgfs_find_handle(NULL, a, 0, 0, 0, 1)
+#define hwgraph_vertex_unref(a)
+
+/*
+ * External declarations of EXPORTED SYMBOLS in hcl.c
+ */
+extern vertex_hdl_t hwgraph_register(vertex_hdl_t, const char *,
+ unsigned int, unsigned int, unsigned int, unsigned int,
+ umode_t, uid_t, gid_t, struct file_operations *, void *);
+
+extern int hwgraph_mk_symlink(vertex_hdl_t, const char *, unsigned int,
+ unsigned int, const char *, unsigned int, vertex_hdl_t *, void *);
+
+extern int hwgraph_vertex_destroy(vertex_hdl_t);
+
+extern int hwgraph_edge_add(vertex_hdl_t, vertex_hdl_t, char *);
+extern int hwgraph_edge_get(vertex_hdl_t, char *, vertex_hdl_t *);
+
+extern arbitrary_info_t hwgraph_fastinfo_get(vertex_hdl_t);
+extern vertex_hdl_t hwgraph_mk_dir(vertex_hdl_t, const char *, unsigned int, void *);
+
+extern int hwgraph_connectpt_set(vertex_hdl_t, vertex_hdl_t);
+extern vertex_hdl_t hwgraph_connectpt_get(vertex_hdl_t);
+extern int hwgraph_edge_get_next(vertex_hdl_t, char *, vertex_hdl_t *, unsigned int *);
+
+extern graph_error_t hwgraph_traverse(vertex_hdl_t, char *, vertex_hdl_t *);
+
+extern int hwgraph_vertex_get_next(vertex_hdl_t *, vertex_hdl_t *);
+extern int hwgraph_path_add(vertex_hdl_t, char *, vertex_hdl_t *);
+extern vertex_hdl_t hwgraph_path_to_dev(char *);
+extern vertex_hdl_t hwgraph_block_device_get(vertex_hdl_t);
+extern vertex_hdl_t hwgraph_char_device_get(vertex_hdl_t);
+extern graph_error_t hwgraph_char_device_add(vertex_hdl_t, char *, char *, vertex_hdl_t *);
+extern int hwgraph_info_add_LBL(vertex_hdl_t, char *, arbitrary_info_t);
+extern int hwgraph_info_get_LBL(vertex_hdl_t, char *, arbitrary_info_t *);
+extern int hwgraph_info_replace_LBL(vertex_hdl_t, char *, arbitrary_info_t,
+ arbitrary_info_t *);
+extern int hwgraph_info_get_exported_LBL(vertex_hdl_t, char *, int *, arbitrary_info_t *);
+extern int hwgraph_info_get_next_LBL(vertex_hdl_t, char *, arbitrary_info_t *,
+ labelcl_info_place_t *);
+extern int hwgraph_info_export_LBL(vertex_hdl_t, char *, int);
+extern int hwgraph_info_unexport_LBL(vertex_hdl_t, char *);
+extern int hwgraph_info_remove_LBL(vertex_hdl_t, char *, arbitrary_info_t *);
+extern char *vertex_to_name(vertex_hdl_t, char *, unsigned int);
+
+#endif /* _ASM_IA64_SN_HCL_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_HCL_UTIL_H
+#define _ASM_IA64_SN_HCL_UTIL_H
+
+#include <asm/sn/sgi.h>
+
+extern char * dev_to_name(vertex_hdl_t, char *, unsigned int);
+extern int device_master_set(vertex_hdl_t, vertex_hdl_t);
+extern vertex_hdl_t device_master_get(vertex_hdl_t);
+extern cnodeid_t master_node_get(vertex_hdl_t);
+extern cnodeid_t nodevertex_to_cnodeid(vertex_hdl_t);
+extern void mark_nodevertex_as_node(vertex_hdl_t, cnodeid_t);
+
+#endif /* _ASM_IA64_SN_HCL_UTIL_H */
--- /dev/null
+#ifndef _ASM_IA64_SN_HWGFS_H
+#define _ASM_IA64_SN_HWGFS_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <asm/types.h>
+
+typedef struct dentry *hwgfs_handle_t;
+
+extern hwgfs_handle_t hwgfs_register(hwgfs_handle_t dir, const char *name,
+ unsigned int flags,
+ unsigned int major, unsigned int minor,
+ umode_t mode, void *ops, void *info);
+extern int hwgfs_mk_symlink(hwgfs_handle_t dir, const char *name,
+ unsigned int flags, const char *link,
+ hwgfs_handle_t *handle, void *info);
+extern hwgfs_handle_t hwgfs_mk_dir(hwgfs_handle_t dir, const char *name,
+ void *info);
+extern void hwgfs_unregister(hwgfs_handle_t de);
+
+extern hwgfs_handle_t hwgfs_find_handle(hwgfs_handle_t dir, const char *name,
+				unsigned int major, unsigned int minor,
+ char type, int traverse_symlinks);
+extern hwgfs_handle_t hwgfs_get_parent(hwgfs_handle_t de);
+extern int hwgfs_generate_path(hwgfs_handle_t de, char *path, int buflen);
+
+extern void *hwgfs_get_info(hwgfs_handle_t de);
+extern int hwgfs_set_info(hwgfs_handle_t de, void *info);
+
+#endif /* _ASM_IA64_SN_HWGFS_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_IFCONFIG_NET_H
+#define _ASM_IA64_SN_IFCONFIG_NET_H
+
+#define NETCONFIG_FILE "/tmp/ifconfig_net"
+#define POUND_CHAR '#'
+#define MAX_LINE_LEN 128
+#define MAXPATHLEN 128
+
+struct ifname_num {
+ long next_eth;
+ long next_fddi;
+ long next_hip;
+ long next_tr;
+ long next_fc;
+ long size;
+};
+
+struct ifname_MAC {
+ char name[16];
+ unsigned char dev_addr[7];
+ unsigned char addr_len; /* hardware address length */
+};
+
+#endif /* _ASM_IA64_SN_IFCONFIG_NET_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2002-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#ifndef _ASM_IA64_SN_IOC4_H
+#define _ASM_IA64_SN_IOC4_H
+
+/*
+ * Bytebus device space
+ */
+#define IOC4_BYTEBUS_DEV0 0x80000L /* Addressed using pci_bar0 */
+#define IOC4_BYTEBUS_DEV1 0xA0000L /* Addressed using pci_bar0 */
+#define IOC4_BYTEBUS_DEV2 0xC0000L /* Addressed using pci_bar0 */
+#define IOC4_BYTEBUS_DEV3 0xE0000L /* Addressed using pci_bar0 */
+
+#endif /* _ASM_IA64_SN_IOC4_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#ifndef _ASM_IA64_SN_IOCONFIG_BUS_H
+#define _ASM_IA64_SN_IOCONFIG_BUS_H
+
+#define IOCONFIG_PCIBUS "/boot/efi/ioconfig_pcibus"
+#define POUND_CHAR '#'
+#define MAX_LINE_LEN 128
+#define MAXPATHLEN 128
+
+struct ioconfig_parm {
+ unsigned long ioconfig_activated;
+ unsigned long number;
+ void *buffer;
+};
+
+struct ascii_moduleid {
+ unsigned char io_moduleid[8]; /* pci path name */
+};
+
+#endif /* _ASM_IA64_SN_IOCONFIG_BUS_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_IOERROR_H
+#define _ASM_IA64_SN_IOERROR_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/types.h>
+#include <asm/sn/types.h>
+
+/*
+ * Macros defining the various Errors to be handled as part of
+ * IO Error handling.
+ */
+
+/*
+ * List of errors to be handled by each subsystem.
+ * "error_code" field will take one of these values.
+ * The error code is built up of single bits expressing
+ * our confidence that the error was that type; note
+ * that it is possible to have a PIO or DMA error where
+ * we don't know whether it was a READ or a WRITE, or
+ * even a READ or WRITE error that we're not sure whether
+ * to call a PIO or DMA.
+ *
+ * It is also possible to set both PIO and DMA, and possible
+ * to set both READ and WRITE; the first may be nonsensical
+ * but the second *could* be used to designate an access
+ * that is known to be a read-modify-write cycle. It is
+ * quite possible that nobody will ever use PIO|DMA or
+ * READ|WRITE ... but being flexible is good.
+ */
+#define IOECODE_UNSPEC 0
+#define IOECODE_READ 1
+#define IOECODE_WRITE 2
+#define IOECODE_PIO 4
+#define IOECODE_DMA 8
+
+#define IOECODE_PIO_READ (IOECODE_PIO|IOECODE_READ)
+#define IOECODE_PIO_WRITE (IOECODE_PIO|IOECODE_WRITE)
+#define IOECODE_DMA_READ (IOECODE_DMA|IOECODE_READ)
+#define IOECODE_DMA_WRITE (IOECODE_DMA|IOECODE_WRITE)
+
+/* support older names, but try to move everything
+ * to using new names that identify which package
+ * controls their values ...
+ */
+#define PIO_READ_ERROR IOECODE_PIO_READ
+#define PIO_WRITE_ERROR IOECODE_PIO_WRITE
+#define DMA_READ_ERROR IOECODE_DMA_READ
+#define DMA_WRITE_ERROR IOECODE_DMA_WRITE
+
+/*
+ * List of error numbers returned by error handling sub-system.
+ */
+
+#define IOERROR_HANDLED 0 /* Error Properly handled. */
+#define IOERROR_NODEV 0x1 /* No such device attached */
+#define IOERROR_BADHANDLE 0x2 /* Received bad handle */
+#define IOERROR_BADWIDGETNUM 0x3 /* Bad widget number */
+#define IOERROR_BADERRORCODE 0x4 /* Bad error code passed in */
+#define IOERROR_INVALIDADDR 0x5 /* Invalid address specified */
+
+#define IOERROR_WIDGETLEVEL 0x6 /* Some failure at widget level */
+#define IOERROR_XTALKLEVEL	0x7	/* Some failure at xtalk level */
+
+#define IOERROR_HWGRAPH_LOOKUP 0x8 /* hwgraph lookup failed for path */
+#define IOERROR_UNHANDLED 0x9 /* handler rejected error */
+
+#define IOERROR_PANIC 0xA /* subsidiary handler has already
+ * started decode: continue error
+ * data dump, and panic from top
+ * caller in error chain.
+ */
+
+/*
+ * IO errors at the bus/device driver level
+ */
+
+#define IOERROR_DEV_NOTFOUND 0x10 /* Device matching bus addr not found */
+#define IOERROR_DEV_SHUTDOWN 0x11 /* Device has been shutdown */
+
+/*
+ * Type of address.
+ * Indicates the direction of transfer that caused the error.
+ */
+#define IOERROR_ADDR_PIO 1 /* Error Address generated due to PIO */
+#define IOERROR_ADDR_DMA 2 /* Error address generated due to DMA */
+
+/*
+ * IO error structure.
+ *
+ * This structure would expand to hold the information retrieved from
+ * all IO related error registers.
+ *
+ * This structure is defined to hold all system specific
+ * information related to a single error.
+ *
+ * This serves a couple of purposes.
+ * - Error handling often involves translating one form of address to other
+ * form. So, instead of having different data structures at each level,
+ * we have a single structure, and the appropriate fields get filled in
+ * at each layer.
+ * - This provides a way to dump all error related information in any layer
+ *   of error handling (debugging aid).
+ *
+ * A second possibility is to allow each layer to define its own error
+ * data structure, and fill in the proper fields. This has the advantage
+ * of isolating the layers.
+ * A big concern is the potential stack usage (and overflow) if each layer
+ * defines these structures on the stack (assuming we don't want to kmalloc).
+ *
+ * Any layer wishing to pass extra information to a layer next to it in
+ * error handling hierarchy, can do so as a separate parameter.
+ */
+
+typedef struct io_error_s {
+ /* Bit fields indicating which structure fields are valid */
+ union {
+ struct {
+ unsigned ievb_errortype:1;
+ unsigned ievb_widgetnum:1;
+ unsigned ievb_widgetdev:1;
+ unsigned ievb_srccpu:1;
+ unsigned ievb_srcnode:1;
+ unsigned ievb_errnode:1;
+ unsigned ievb_sysioaddr:1;
+ unsigned ievb_xtalkaddr:1;
+ unsigned ievb_busspace:1;
+ unsigned ievb_busaddr:1;
+ unsigned ievb_vaddr:1;
+ unsigned ievb_memaddr:1;
+ unsigned ievb_epc:1;
+ unsigned ievb_ef:1;
+ unsigned ievb_tnum:1;
+ } iev_b;
+ unsigned iev_a;
+ } ie_v;
+
+ short ie_errortype; /* error type: extra info about error */
+ short ie_widgetnum; /* Widget number that's in error */
+ short ie_widgetdev; /* Device within widget in error */
+ cpuid_t ie_srccpu; /* CPU on srcnode generating error */
+ cnodeid_t ie_srcnode; /* Node which caused the error */
+ cnodeid_t ie_errnode; /* Node where error was noticed */
+ iopaddr_t ie_sysioaddr; /* Sys specific IO address */
+ iopaddr_t ie_xtalkaddr; /* Xtalk (48bit) addr of Error */
+ iopaddr_t ie_busspace; /* Bus specific address space */
+ iopaddr_t ie_busaddr; /* Bus specific address */
+ caddr_t ie_vaddr; /* Virtual address of error */
+ paddr_t ie_memaddr; /* Physical memory address */
+ caddr_t ie_epc; /* pc when error reported */
+ caddr_t ie_ef; /* eframe when error reported */
+ short ie_tnum; /* Xtalk TNUM field */
+} ioerror_t;
+
+#define IOERROR_INIT(e) do { (e)->ie_v.iev_a = 0; } while (0)
+#define IOERROR_SETVALUE(e,f,v) do { (e)->ie_ ## f = (v); (e)->ie_v.iev_b.ievb_ ## f = 1; } while (0)
+#define IOERROR_FIELDVALID(e,f) ((unsigned long long)((e)->ie_v.iev_b.ievb_ ## f) != (unsigned long long) 0)
+#define IOERROR_NOGETVALUE(e,f) (ASSERT(IOERROR_FIELDVALID(e,f)), ((e)->ie_ ## f))
+#define IOERROR_GETVALUE(p,e,f) ASSERT(IOERROR_FIELDVALID(e,f)); p=((e)->ie_ ## f)
+
+/* hub code likes to call the SysAD address "hubaddr" ... */
+#define ie_hubaddr ie_sysioaddr
+#define ievb_hubaddr ievb_sysioaddr
+#endif /* __ASSEMBLY__ */
+
+/*
+ * Error handling Modes.
+ */
+typedef enum {
+ MODE_DEVPROBE, /* Probing mode. Errors not fatal */
+ MODE_DEVERROR, /* Error while system is running */
+ MODE_DEVUSERERROR, /* Device Error created due to user mode access */
+ MODE_DEVREENABLE /* Reenable pass */
+} ioerror_mode_t;
+
+
+typedef int error_handler_f(void *, int, ioerror_mode_t, ioerror_t *);
+typedef void *error_handler_arg_t;
+
+#ifdef ERROR_DEBUG
+#define IOERR_PRINTF(x) (x)
+#else
+#define IOERR_PRINTF(x)
+#endif /* ERROR_DEBUG */
+
+#endif /* _ASM_IA64_SN_IOERROR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_IOERROR_HANDLING_H
+#define _ASM_IA64_SN_IOERROR_HANDLING_H
+
+#include <linux/types.h>
+#include <asm/sn/sgi.h>
+
+#ifdef __KERNEL__
+
+/*
+ * Basic types required for io error handling interfaces.
+ */
+
+/*
+ * Return code from the io error handling interfaces.
+ */
+
+enum error_return_code_e {
+ /* Success */
+ ERROR_RETURN_CODE_SUCCESS,
+
+ /* Unknown failure */
+ ERROR_RETURN_CODE_GENERAL_FAILURE,
+
+ /* Nth error noticed while handling the first error */
+ ERROR_RETURN_CODE_NESTED_CALL,
+
+ /* State of the vertex is invalid */
+ ERROR_RETURN_CODE_INVALID_STATE,
+
+ /* Invalid action */
+ ERROR_RETURN_CODE_INVALID_ACTION,
+
+	/* Valid action but cannot set it */
+ ERROR_RETURN_CODE_CANNOT_SET_ACTION,
+
+ /* Valid action but not possible for the current state */
+ ERROR_RETURN_CODE_CANNOT_PERFORM_ACTION,
+
+ /* Valid state but cannot change the state of the vertex to it */
+ ERROR_RETURN_CODE_CANNOT_SET_STATE,
+
+ /* ??? */
+ ERROR_RETURN_CODE_DUPLICATE,
+
+ /* Reached the root of the system critical graph */
+ ERROR_RETURN_CODE_SYS_CRITICAL_GRAPH_BEGIN,
+
+ /* Reached the leaf of the system critical graph */
+ ERROR_RETURN_CODE_SYS_CRITICAL_GRAPH_ADD,
+
+ /* Cannot shutdown the device in hw/sw */
+ ERROR_RETURN_CODE_SHUTDOWN_FAILED,
+
+ /* Cannot restart the device in hw/sw */
+ ERROR_RETURN_CODE_RESET_FAILED,
+
+ /* Cannot failover the io subsystem */
+ ERROR_RETURN_CODE_FAILOVER_FAILED,
+
+ /* No Jump Buffer exists */
+ ERROR_RETURN_CODE_NO_JUMP_BUFFER
+};
+
+typedef uint64_t error_return_code_t;
+
+/*
+ * State of the vertex during error handling.
+ */
+enum error_state_e {
+ /* Ignore state */
+ ERROR_STATE_IGNORE,
+
+ /* Invalid state */
+ ERROR_STATE_NONE,
+
+ /* Trying to decipher the error bits */
+ ERROR_STATE_LOOKUP,
+
+	/* Trying to carry out the action decided upon after
+ * looking at the error bits
+ */
+ ERROR_STATE_ACTION,
+
+	/* Do not allow any other operations on this vertex from
+	 * other parts of the kernel. This is also used to indicate
+ * that the device has been software shutdown.
+ */
+ ERROR_STATE_SHUTDOWN,
+
+ /* This is a transitory state when no new requests are accepted
+ * on behalf of the device. This is usually used when trying to
+ * quiesce all the outstanding operations and preparing the
+ * device for a failover / shutdown etc.
+ */
+ ERROR_STATE_SHUTDOWN_IN_PROGRESS,
+
+ /* This is the state when there is absolutely no activity going
+	 * on with respect to the device.
+ */
+ ERROR_STATE_SHUTDOWN_COMPLETE,
+
+ /* This is the state when the device has issued a retry. */
+ ERROR_STATE_RETRY,
+
+ /* This is the normal state. This can also be used to indicate
+	 * that the device has been software-enabled after a previous
+	 * software shutdown.
+ */
+ ERROR_STATE_NORMAL
+
+};
+
+typedef uint64_t error_state_t;
+
+/*
+ * Generic error classes. This is used to classify errors after looking
+ * at the error bits, and is helpful in deciding on the action.
+ */
+enum error_class_e {
+ /* Unclassified error */
+ ERROR_CLASS_UNKNOWN,
+
+ /* LLP transmit error */
+ ERROR_CLASS_LLP_XMIT,
+
+ /* LLP receive error */
+ ERROR_CLASS_LLP_RECV,
+
+ /* Credit error */
+ ERROR_CLASS_CREDIT,
+
+ /* Timeout error */
+ ERROR_CLASS_TIMEOUT,
+
+ /* Access error */
+ ERROR_CLASS_ACCESS,
+
+ /* System coherency error */
+ ERROR_CLASS_SYS_COHERENCY,
+
+ /* Bad data error (ecc / parity etc) */
+ ERROR_CLASS_BAD_DATA,
+
+ /* Illegal request packet */
+ ERROR_CLASS_BAD_REQ_PKT,
+
+ /* Illegal response packet */
+ ERROR_CLASS_BAD_RESP_PKT
+};
+
+#endif /* __KERNEL__ */
+#endif /* _ASM_IA64_SN_IOERROR_HANDLING_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_IOGRAPH_H
+#define _ASM_IA64_SN_IOGRAPH_H
+
+#include <asm/sn/xtalk/xbow.h>	/* For MAX_PORT_NUM */
+
+/*
+ * During initialization, platform-dependent kernel code establishes some
+ * basic elements of the hardware graph. This file contains edge and
+ * info labels that are used across various platforms -- it serves as an
+ * ad-hoc registry.
+ */
+
+/* edges names */
+#define EDGE_LBL_BUS "bus"
+#define EDGE_LBL_CONN ".connection"
+#define EDGE_LBL_GUEST ".guest" /* For IOC3 */
+#define EDGE_LBL_HOST ".host" /* For IOC3 */
+#define EDGE_LBL_PERFMON "mon"
+#define EDGE_LBL_USRPCI "usrpci"
+#define EDGE_LBL_BLOCK "block"
+#define EDGE_LBL_BOARD "board"
+#define EDGE_LBL_CHAR "char"
+#define EDGE_LBL_CONTROLLER "controller"
+#define EDGE_LBL_CPU "cpu"
+#define EDGE_LBL_CPUNUM "cpunum"
+#define EDGE_LBL_DIRECT "direct"
+#define EDGE_LBL_DISABLED "disabled"
+#define EDGE_LBL_DISK "disk"
+#define EDGE_LBL_HUB "hub" /* For SN0 */
+#define EDGE_LBL_HW "hw"
+#define EDGE_LBL_INTERCONNECT "link"
+#define EDGE_LBL_IO "io"
+#define EDGE_LBL_LUN "lun"
+#define EDGE_LBL_LINUX "linux"
+#define EDGE_LBL_LINUX_BUS EDGE_LBL_LINUX "/bus/pci-x"
+#define EDGE_LBL_MACHDEP	"machdep"	/* Platform dependent devices */
+#define EDGE_LBL_MASTER ".master"
+#define EDGE_LBL_MEMORY "memory"
+#define EDGE_LBL_META_ROUTER "metarouter"
+#define EDGE_LBL_MIDPLANE "midplane"
+#define EDGE_LBL_MODULE "module"
+#define EDGE_LBL_NODE "node"
+#define EDGE_LBL_NODENUM "nodenum"
+#define EDGE_LBL_NVRAM "nvram"
+#define EDGE_LBL_PARTITION "partition"
+#define EDGE_LBL_PCI "pci"
+#define EDGE_LBL_PCIX "pci-x"
+#define EDGE_LBL_PCIX_0 EDGE_LBL_PCIX "/0"
+#define EDGE_LBL_PCIX_1 EDGE_LBL_PCIX "/1"
+#define EDGE_LBL_AGP "agp"
+#define EDGE_LBL_AGP_0 EDGE_LBL_AGP "/0"
+#define EDGE_LBL_AGP_1 EDGE_LBL_AGP "/1"
+#define EDGE_LBL_PORT "port"
+#define EDGE_LBL_PROM "prom"
+#define EDGE_LBL_RACK "rack"
+#define EDGE_LBL_RDISK "rdisk"
+#define EDGE_LBL_REPEATER_ROUTER "repeaterrouter"
+#define EDGE_LBL_ROUTER "router"
+#define EDGE_LBL_RPOS "bay" /* Position in rack */
+#define EDGE_LBL_SCSI "scsi"
+#define EDGE_LBL_SCSI_CTLR "scsi_ctlr"
+#define EDGE_LBL_SLOT "slot"
+#define EDGE_LBL_TARGET "target"
+#define EDGE_LBL_UNKNOWN "unknown"
+#define EDGE_LBL_XBOW "xbow"
+#define EDGE_LBL_XIO "xio"
+#define EDGE_LBL_XSWITCH ".xswitch"
+#define EDGE_LBL_XTALK "xtalk"
+#define EDGE_LBL_XWIDGET "xwidget"
+#define EDGE_LBL_ELSC "elsc"
+#define EDGE_LBL_L1 "L1"
+#define EDGE_LBL_XPLINK "xplink" /* Cross partition */
+#define EDGE_LBL_XPLINK_NET "net" /* XP network devs */
+#define EDGE_LBL_XPLINK_RAW "raw" /* XP Raw devs */
+#define EDGE_LBL_SLAB "slab" /* Slab of a module */
+#define EDGE_LBL_XPLINK_KERNEL "kernel" /* XP kernel devs */
+#define EDGE_LBL_XPLINK_ADMIN "admin" /* Partition admin */
+#define EDGE_LBL_IOBRICK "iobrick"
+#define EDGE_LBL_PXBRICK "PXbrick"
+#define EDGE_LBL_OPUSBRICK "onboardio"
+#define EDGE_LBL_IXBRICK "IXbrick"
+#define EDGE_LBL_CGBRICK "CGbrick"
+#define EDGE_LBL_CPUBUS "cpubus" /* CPU Interfaces (SysAd) */
+
+/* vertex info labels in hwgraph */
+#define INFO_LBL_CNODEID "_cnodeid"
+#define INFO_LBL_CONTROLLER_NAME "_controller_name"
+#define INFO_LBL_CPUBUS "_cpubus"
+#define INFO_LBL_CPUID "_cpuid"
+#define INFO_LBL_CPU_INFO "_cpu"
+#define INFO_LBL_DETAIL_INVENT "_detail_invent" /* inventory data*/
+#define INFO_LBL_DIAGVAL "_diag_reason" /* Reason disabled */
+#define INFO_LBL_DRIVER "_driver" /* points to attached device_driver_t */
+#define INFO_LBL_ELSC "_elsc"
+#define INFO_LBL_SUBCH "_subch" /* system controller subchannel */
+#define INFO_LBL_HUB_INFO "_hubinfo"
+#define INFO_LBL_HWGFSLIST "_hwgfs_list"
+#define INFO_LBL_TRAVERSE "_hwg_traverse" /* hwgraph traverse function */
+#define INFO_LBL_MODULE_INFO "_module" /* module data ptr */
+#define INFO_LBL_MDPERF_DATA "_mdperf" /* mdperf monitoring*/
+#define INFO_LBL_NODE_INFO "_node"
+#define INFO_LBL_PCIBR_HINTS "_pcibr_hints"
+#define INFO_LBL_PCIIO "_pciio"
+#define INFO_LBL_PFUNCS "_pciio_ops" /* ops vector for gio providers */
+#define INFO_LBL_PERMISSIONS "_permissions" /* owner, uid, gid */
+#define INFO_LBL_ROUTER_INFO "_router"
+#define INFO_LBL_SUBDEVS "_subdevs" /* subdevice enable bits */
+#define INFO_LBL_XSWITCH "_xswitch"
+#define INFO_LBL_XSWITCH_ID "_xswitch_id"
+#define INFO_LBL_XSWITCH_VOL "_xswitch_volunteer"
+#define INFO_LBL_XFUNCS "_xtalk_ops" /* ops vector for gio providers */
+#define INFO_LBL_XWIDGET "_xwidget"
+
+
+#ifdef __KERNEL__
+void init_all_devices(void);
+#endif /* __KERNEL__ */
+
+int io_brick_map_widget(int, int);
+
+/*
+ * Map a brick's widget number to a meaningful int
+ */
+
+struct io_brick_map_s {
+ int ibm_type; /* brick type */
+ int ibm_map_wid[MAX_PORT_NUM]; /* wid to int map */
+};
+
+#endif /* _ASM_IA64_SN_IOGRAPH_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Derived from IRIX <sys/SN/kldir.h>, revision 1.21.
+ *
+ * Copyright (C) 1992-1997,1999,2001-2003 Silicon Graphics, Inc. All Rights Reserved.
+ * Copyright (C) 1999 by Ralf Baechle
+ */
+#ifndef _ASM_IA64_SN_KLDIR_H
+#define _ASM_IA64_SN_KLDIR_H
+
+#include <linux/types.h>
+
+/*
+ * The kldir memory area resides at a fixed place in each node's memory and
+ * provides pointers to most other IP27 memory areas. This allows us to
+ * resize and/or relocate memory areas at a later time without breaking all
+ * firmware and kernels that use them. Indices in the array are
+ * permanently dedicated to areas listed below. Some memory areas (marked
+ * below) reside at a permanently fixed location, but are included in the
+ * directory for completeness.
+ */
+
+#define KLDIR_MAGIC 0x434d5f53505f5357
+
+/*
+ * The upper portion of the memory map applies during boot
+ * only and is overwritten by IRIX/SYMMON.
+ *
+ * MEMORY MAP PER NODE
+ *
+ * 0x2000000 (32M) +-----------------------------------------+
+ * | IO6 BUFFERS FOR FLASH ENET IOC3 |
+ * 0x1F80000 (31.5M) +-----------------------------------------+
+ * | IO6 TEXT/DATA/BSS/stack |
+ * 0x1E00000 (30M) +-----------------------------------------+
+ * | IO6 PROM DEBUG TEXT/DATA/BSS/stack |
+ * 0x1C00000 (28M) +-----------------------------------------+
+ * | IP27 PROM TEXT/DATA/BSS/stack |
+ * 0x1B00000 (27M) +-----------------------------------------+
+ * | IP27 CFG |
+ * 0x1A00000 (26M) +-----------------------------------------+
+ * | Graphics PROM |
+ * 0x1800000 (24M) +-----------------------------------------+
+ * | 3rd Party PROM drivers |
+ * 0x1600000 (22M) +-----------------------------------------+
+ * | |
+ * | Free |
+ * | |
+ * +-----------------------------------------+
+ * | UNIX DEBUG Version |
+ * 0x190000 (<2M) +-----------------------------------------+
+ * | SYMMON |
+ * | (For UNIX Debug only) |
+ * 0x34000 (208K) +-----------------------------------------+
+ * | SYMMON STACK [NUM_CPU_PER_NODE] |
+ * | (For UNIX Debug only) |
+ * 0x25000 (148K) +-----------------------------------------+
+ * | KLCONFIG - II (temp) |
+ * | |
+ * | ---------------------------- |
+ * | |
+ * | UNIX NON-DEBUG Version |
+ * 0x19000 (100K) +-----------------------------------------+
+ *
+ *
+ * The lower portion of the memory map contains information that is
+ * permanent and is used by the IP27PROM, IO6PROM and IRIX.
+ *
+ * 0x19000 (100K) +-----------------------------------------+
+ * | |
+ * | PI Error Spools (32K) |
+ * | |
+ * 0x12000 (72K) +-----------------------------------------+
+ * | Unused |
+ * 0x11c00 (71K) +-----------------------------------------+
+ * | CPU 1 NMI Eframe area |
+ * 0x11a00 (70.5K) +-----------------------------------------+
+ * | CPU 0 NMI Eframe area |
+ * 0x11800 (70K) +-----------------------------------------+
+ * | CPU 1 NMI Register save area |
+ * 0x11600 (69.5K) +-----------------------------------------+
+ * | CPU 0 NMI Register save area |
+ * 0x11400 (69K) +-----------------------------------------+
+ * | GDA (1k) |
+ * 0x11000 (68K) +-----------------------------------------+
+ * | Early cache Exception stack |
+ * | and/or |
+ * | kernel/io6prom nmi registers |
+ * 0x10800 (66k) +-----------------------------------------+
+ * | cache error eframe |
+ * 0x10400 (65K) +-----------------------------------------+
+ * | Exception Handlers (UALIAS copy) |
+ * 0x10000 (64K) +-----------------------------------------+
+ * | |
+ * | |
+ * | KLCONFIG - I (permanent) (48K) |
+ * | |
+ * | |
+ * | |
+ * 0x4000 (16K) +-----------------------------------------+
+ * | NMI Handler (Protected Page) |
+ * 0x3000 (12K) +-----------------------------------------+
+ * | ARCS PVECTORS (master node only) |
+ * 0x2c00 (11K) +-----------------------------------------+
+ * | ARCS TVECTORS (master node only) |
+ * 0x2800 (10K) +-----------------------------------------+
+ * | LAUNCH [NUM_CPU] |
+ * 0x2400 (9K) +-----------------------------------------+
+ * | Low memory directory (KLDIR) |
+ * 0x2000 (8K) +-----------------------------------------+
+ * | ARCS SPB (1K) |
+ * 0x1000 (4K) +-----------------------------------------+
+ * | Early cache Exception stack |
+ * | and/or |
+ * | kernel/io6prom nmi registers |
+ * 0x800 (2k) +-----------------------------------------+
+ * | cache error eframe |
+ * 0x400 (1K) +-----------------------------------------+
+ * | Exception Handlers |
+ * 0x0 (0K) +-----------------------------------------+
+ */
+
+#ifdef __ASSEMBLY__
+#define KLDIR_OFF_MAGIC 0x00
+#define KLDIR_OFF_OFFSET 0x08
+#define KLDIR_OFF_POINTER 0x10
+#define KLDIR_OFF_SIZE 0x18
+#define KLDIR_OFF_COUNT 0x20
+#define KLDIR_OFF_STRIDE 0x28
+#endif /* __ASSEMBLY__ */
+
+#ifndef __ASSEMBLY__
+typedef struct kldir_ent_s {
+ u64 magic; /* Indicates validity of entry */
+ off_t offset; /* Offset from start of node space */
+ unsigned long pointer; /* Pointer to area in some cases */
+ size_t size; /* Size in bytes */
+ u64 count; /* Repeat count if array, 1 if not */
+ size_t stride; /* Stride if array, 0 if not */
+ char rsvd[16]; /* Pad entry to 0x40 bytes */
+ /* NOTE: These 16 bytes are used in the Partition KLDIR
+ entry to store partition info. Refer to klpart.h for this. */
+} kldir_ent_t;
+#endif /* __ASSEMBLY__ */
+
+
+#define KLDIR_ENT_SIZE 0x40
+#define KLDIR_MAX_ENTRIES (0x400 / 0x40)
+
+
+
+/*
+ * The upper portion of the memory map applies during boot
+ * only and is overwritten by IRIX/SYMMON. The minimum memory bank
+ * size on IP35 is 64M, which provides a limit on the amount of space
+ * the PROM can assume it has available.
+ *
+ * Most of the addresses below are defined as macros in this file, or
+ * in SN/addrs.h or SN/SN1/addrs.h.
+ *
+ * MEMORY MAP PER NODE
+ *
+ * 0x4000000 (64M) +-----------------------------------------+
+ * | |
+ * | |
+ * | IO7 TEXT/DATA/BSS/stack |
+ * 0x3000000 (48M) +-----------------------------------------+
+ * | Free |
+ * 0x2102000 (>33M) +-----------------------------------------+
+ * | IP35 Topology (PCFG) + misc data |
+ * 0x2000000 (32M) +-----------------------------------------+
+ * | IO7 BUFFERS FOR FLASH ENET IOC3 |
+ * 0x1F80000 (31.5M) +-----------------------------------------+
+ * | Free |
+ * 0x1C00000 (28M) +-----------------------------------------+
+ * | IP35 PROM TEXT/DATA/BSS/stack |
+ * 0x1A00000 (26M) +-----------------------------------------+
+ * | Routing temp. space |
+ * 0x1800000 (24M) +-----------------------------------------+
+ * | Diagnostics temp. space |
+ * 0x1500000 (21M) +-----------------------------------------+
+ * | Free |
+ * 0x1400000 (20M) +-----------------------------------------+
+ * | IO7 PROM temporary copy |
+ * 0x1300000 (19M) +-----------------------------------------+
+ * | |
+ * | Free |
+ * | (UNIX DATA starts above 0x1000000) |
+ * | |
+ * +-----------------------------------------+
+ * | UNIX DEBUG Version |
+ * 0x0310000 (3.1M) +-----------------------------------------+
+ * | SYMMON, loaded just below UNIX |
+ * | (For UNIX Debug only) |
+ * | |
+ * | |
+ * 0x006C000 (432K) +-----------------------------------------+
+ * | SYMMON STACK [NUM_CPU_PER_NODE] |
+ * | (For UNIX Debug only) |
+ * 0x004C000 (304K) +-----------------------------------------+
+ * | |
+ * | |
+ * | UNIX NON-DEBUG Version |
+ * 0x0040000 (256K) +-----------------------------------------+
+ *
+ *
+ * The lower portion of the memory map contains information that is
+ * permanent and is used by the IP35PROM, IO7PROM and IRIX.
+ *
+ * 0x40000 (256K) +-----------------------------------------+
+ * | |
+ * | KLCONFIG (64K) |
+ * | |
+ * 0x30000 (192K) +-----------------------------------------+
+ * | |
+ * | PI Error Spools (64K) |
+ * | |
+ * 0x20000 (128K) +-----------------------------------------+
+ * | |
+ * | Unused |
+ * | |
+ * 0x19000 (100K) +-----------------------------------------+
+ * | Early cache Exception stack (CPU 3)|
+ * 0x18800 (98K) +-----------------------------------------+
+ * | cache error eframe (CPU 3) |
+ * 0x18400 (97K) +-----------------------------------------+
+ * | Exception Handlers (CPU 3) |
+ * 0x18000 (96K) +-----------------------------------------+
+ * | |
+ * | Unused |
+ * | |
+ * 0x13c00 (79K) +-----------------------------------------+
+ * | GPDA (8k) |
+ * 0x11c00 (71K) +-----------------------------------------+
+ * | Early cache Exception stack (CPU 2)|
+ * 0x10800 (66k) +-----------------------------------------+
+ * | cache error eframe (CPU 2) |
+ * 0x10400 (65K) +-----------------------------------------+
+ * | Exception Handlers (CPU 2) |
+ * 0x10000 (64K) +-----------------------------------------+
+ * | |
+ * | Unused |
+ * | |
+ * 0x0b400 (45K) +-----------------------------------------+
+ * | GDA (1k) |
+ * 0x0b000 (44K) +-----------------------------------------+
+ * | NMI Eframe areas (4) |
+ * 0x0a000 (40K) +-----------------------------------------+
+ * | NMI Register save areas (4) |
+ * 0x09000 (36K) +-----------------------------------------+
+ * | Early cache Exception stack (CPU 1)|
+ * 0x08800 (34K) +-----------------------------------------+
+ * | cache error eframe (CPU 1) |
+ * 0x08400 (33K) +-----------------------------------------+
+ * | Exception Handlers (CPU 1) |
+ * 0x08000 (32K) +-----------------------------------------+
+ * | |
+ * | |
+ * | Unused |
+ * | |
+ * | |
+ * 0x04000 (16K) +-----------------------------------------+
+ * | NMI Handler (Protected Page) |
+ * 0x03000 (12K) +-----------------------------------------+
+ * | ARCS PVECTORS (master node only) |
+ * 0x02c00 (11K) +-----------------------------------------+
+ * | ARCS TVECTORS (master node only) |
+ * 0x02800 (10K) +-----------------------------------------+
+ * | LAUNCH [NUM_CPU] |
+ * 0x02400 (9K) +-----------------------------------------+
+ * | Low memory directory (KLDIR) |
+ * 0x02000 (8K) +-----------------------------------------+
+ * | ARCS SPB (1K) |
+ * 0x01000 (4K) +-----------------------------------------+
+ * | Early cache Exception stack (CPU 0)|
+ * 0x00800 (2k) +-----------------------------------------+
+ * | cache error eframe (CPU 0) |
+ * 0x00400 (1K) +-----------------------------------------+
+ * | Exception Handlers (CPU 0) |
+ * 0x00000 (0K) +-----------------------------------------+
+ */
+
+/*
+ * NOTE: To change the kernel load address, you must update:
+ * - the appropriate elspec files in irix/kern/master.d
+ * - NODEBUGUNIX_ADDR in SN/SN1/addrs.h
+ * - IP27_FREEMEM_OFFSET below
+ * - KERNEL_START_OFFSET below (if supporting cells)
+ */
+
+
+/*
+ * This is defined here because IP27_SYMMON_STK_SIZE must be at least as
+ * large as the value defined here; since it's set up in the prom, we can't
+ * redefine it later and expect more space to be allocated. The way to find
+ * out the true size of the symmon stacks is to divide SYMMON_STK_SIZE by
+ * SYMMON_STK_STRIDE for a particular node.
+ */
+#define SYMMON_STACK_SIZE 0x8000
+
+#if defined (PROM) || defined (SABLE)
+
+/*
+ * These defines are prom version dependent. No code other than the IP35
+ * prom should attempt to use these values.
+ */
+#define IP27_LAUNCH_OFFSET 0x2400
+#define IP27_LAUNCH_SIZE 0x400
+#define IP27_LAUNCH_COUNT 4
+#define IP27_LAUNCH_STRIDE 0x100 /* could be as small as 0x80 */
+
+#define IP27_KLCONFIG_OFFSET 0x30000
+#define IP27_KLCONFIG_SIZE 0x10000
+#define IP27_KLCONFIG_COUNT 1
+#define IP27_KLCONFIG_STRIDE 0
+
+#define IP27_NMI_OFFSET 0x3000
+#define IP27_NMI_SIZE 0x100
+#define IP27_NMI_COUNT 4
+#define IP27_NMI_STRIDE 0x40
+
+#define IP27_PI_ERROR_OFFSET 0x20000
+#define IP27_PI_ERROR_SIZE 0x10000
+#define IP27_PI_ERROR_COUNT 1
+#define IP27_PI_ERROR_STRIDE 0
+
+#define IP27_SYMMON_STK_OFFSET 0x4c000
+#define IP27_SYMMON_STK_SIZE 0x20000
+#define IP27_SYMMON_STK_COUNT 4
+/* IP27_SYMMON_STK_STRIDE must be >= SYMMON_STACK_SIZE */
+#define IP27_SYMMON_STK_STRIDE 0x8000
+
+#define IP27_FREEMEM_OFFSET 0x40000
+#define IP27_FREEMEM_SIZE (-1)
+#define IP27_FREEMEM_COUNT 1
+#define IP27_FREEMEM_STRIDE 0
+
+#endif /* PROM || SABLE */
+/*
+ * There will be only one of these in a partition so the IO7 must set it up.
+ */
+#define IO6_GDA_OFFSET 0xb000
+#define IO6_GDA_SIZE 0x400
+#define IO6_GDA_COUNT 1
+#define IO6_GDA_STRIDE 0
+
+/*
+ * save area of kernel nmi regs in the prom format
+ */
+#define IP27_NMI_KREGS_OFFSET 0x9000
+#define IP27_NMI_KREGS_CPU_SIZE 0x400
+/*
+ * save area of kernel nmi regs in eframe format
+ */
+#define IP27_NMI_EFRAME_OFFSET 0xa000
+#define IP27_NMI_EFRAME_SIZE 0x400
+
+#define GPDA_OFFSET 0x11c00
+
+#endif /* _ASM_IA64_SN_KLDIR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_KSYS_ELSC_H
+#define _ASM_IA64_SN_KSYS_ELSC_H
+
+/*
+ * Error codes
+ *
+ * The possible ELSC error codes are a superset of the I2C error codes,
+ * so ELSC error codes begin at -100.
+ */
+
+#define ELSC_ERROR_NONE 0
+
+#define ELSC_ERROR_CMD_SEND (-100) /* Error sending command */
+#define ELSC_ERROR_CMD_CHECKSUM (-101) /* Command checksum bad */
+#define ELSC_ERROR_CMD_UNKNOWN (-102) /* Unknown command */
+#define ELSC_ERROR_CMD_ARGS (-103) /* Invalid argument(s) */
+#define ELSC_ERROR_CMD_PERM (-104) /* Permission denied */
+#define ELSC_ERROR_CMD_STATE (-105) /* not allowed in this state*/
+
+#define ELSC_ERROR_RESP_TIMEOUT (-110) /* ELSC response timeout */
+#define ELSC_ERROR_RESP_CHECKSUM (-111) /* Response checksum bad */
+#define ELSC_ERROR_RESP_FORMAT (-112) /* Response format error */
+#define ELSC_ERROR_RESP_DIR (-113) /* Response direction error */
+
+#define ELSC_ERROR_MSG_LOST (-120) /* Queue full; msg. lost */
+#define ELSC_ERROR_LOCK_TIMEOUT (-121) /* ELSC lock timeout */
+#define ELSC_ERROR_DATA_SEND (-122) /* Error sending data */
+#define ELSC_ERROR_NIC (-123) /* NIC processing error */
+#define ELSC_ERROR_NVMAGIC (-124) /* Bad magic no. in NVRAM */
+#define ELSC_ERROR_MODULE (-125) /* Moduleid processing err */
+
+#endif /* _ASM_IA64_SN_KSYS_ELSC_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#ifndef _ASM_IA64_SN_KSYS_L1_H
+#define _ASM_IA64_SN_KSYS_L1_H
+
+#include <asm/sn/types.h>
+
+/* L1 Target Addresses */
+/*
+ * L1 commands and responses use source/target addresses that are
+ * 32 bits long. These are broken up into multiple bitfields that
+ * specify the type of the target controller (could actually be L2
+ * or L3, not just L1), the rack and bay of the target, and the task
+ * id (L1 functionality is divided into several independent "tasks"
+ * that can each receive command requests and transmit responses).
+ */
+#define L1_ADDR_TYPE_L1 0x00 /* L1 system controller */
+#define L1_ADDR_TYPE_L2 0x01 /* L2 system controller */
+#define L1_ADDR_TYPE_L3 0x02 /* L3 system controller */
+#define L1_ADDR_TYPE_CBRICK 0x03 /* attached C brick */
+#define L1_ADDR_TYPE_IOBRICK 0x04 /* attached I/O brick */
+#define L1_ADDR_TASK_SHFT 0
+#define L1_ADDR_TASK_MASK 0x0000001F
+#define L1_ADDR_TASK_INVALID 0x00 /* invalid task */
+#define L1_ADDR_TASK_IROUTER 0x01 /* iRouter */
+#define L1_ADDR_TASK_SYS_MGMT 0x02 /* system management port */
+#define L1_ADDR_TASK_CMD 0x03 /* command interpreter */
+#define L1_ADDR_TASK_ENV 0x04 /* environmental monitor */
+#define L1_ADDR_TASK_BEDROCK 0x05 /* bedrock */
+#define L1_ADDR_TASK_GENERAL 0x06 /* general requests */
+
+/* response argument types */
+#define L1_ARG_INT 0x00 /* 4-byte integer (big-endian) */
+#define L1_ARG_ASCII 0x01 /* null-terminated ASCII string */
+#define L1_ARG_UNKNOWN 0x80 /* unknown data type. The low
+ * 7 bits will contain the data
+ * length. */
+
+/* response codes */
+#define L1_RESP_OK 0 /* no problems encountered */
+#define L1_RESP_IROUTER (-1) /* iRouter error */
+#define L1_RESP_ARGC (-100) /* arg count mismatch */
+#define L1_RESP_REQC (-101) /* bad request code */
+#define L1_RESP_NAVAIL (-104) /* requested data not available */
+#define L1_RESP_ARGVAL (-105) /* arg value out of range */
+#define L1_RESP_INVAL (-107) /* requested data invalid */
+
+/* L1 general requests */
+
+/* request codes */
+#define L1_REQ_RDBG 0x0001 /* read debug switches */
+#define L1_REQ_RRACK 0x0002 /* read brick rack & bay */
+#define L1_REQ_RRBT 0x0003 /* read brick rack, bay & type */
+#define L1_REQ_SER_NUM 0x0004 /* read brick serial number */
+#define L1_REQ_FW_REV 0x0005 /* read L1 firmware revision */
+#define L1_REQ_EEPROM 0x0006 /* read EEPROM info */
+#define L1_REQ_EEPROM_FMT 0x0007 /* get EEPROM data format & size */
+#define L1_REQ_SYS_SERIAL 0x0008 /* read system serial number */
+#define L1_REQ_PARTITION_GET 0x0009 /* read partition id */
+#define L1_REQ_PORTSPEED 0x000a /* get ioport speed */
+
+#define L1_REQ_CONS_SUBCH 0x1002 /* select this node's console
+ subchannel */
+#define L1_REQ_CONS_NODE 0x1003 /* volunteer to be the master
+ (console-hosting) node */
+#define L1_REQ_DISP1 0x1004 /* write line 1 of L1 display */
+#define L1_REQ_DISP2 0x1005 /* write line 2 of L1 display */
+#define L1_REQ_PARTITION_SET 0x1006 /* set partition id */
+#define L1_REQ_EVENT_SUBCH 0x1007 /* set the subchannel for system
+ controller event transmission */
+
+#define L1_REQ_RESET 0x2000 /* request a full system reset */
+#define L1_REQ_PCI_UP 0x2001 /* power up pci slot or bus */
+#define L1_REQ_PCI_DOWN 0x2002 /* power down pci slot or bus */
+#define L1_REQ_PCI_RESET 0x2003 /* reset pci bus or slot */
+
+/* L1 command interpreter requests */
+
+/* request codes */
+#define L1_REQ_EXEC_CMD 0x0000 /* interpret and execute an ASCII
+ command string */
+
+/* brick type response codes */
+#define L1_BRICKTYPE_PX 0x23 /* # */
+#define L1_BRICKTYPE_PE 0x25 /* % */
+#define L1_BRICKTYPE_N_p0 0x26 /* & */
+#define L1_BRICKTYPE_IP45 0x34 /* 4 */
+#define L1_BRICKTYPE_IP41 0x35 /* 5 */
+#define L1_BRICKTYPE_TWISTER 0x36 /* 6 */ /* IP53 & ROUTER */
+#define L1_BRICKTYPE_IX 0x3d /* = */
+#define L1_BRICKTYPE_IP34 0x61 /* a */
+#define L1_BRICKTYPE_C 0x63 /* c */
+#define L1_BRICKTYPE_I 0x69 /* i */
+#define L1_BRICKTYPE_N 0x6e /* n */
+#define L1_BRICKTYPE_OPUS 0x6f /* o */
+#define L1_BRICKTYPE_P 0x70 /* p */
+#define L1_BRICKTYPE_R 0x72 /* r */
+#define L1_BRICKTYPE_CHI_CG 0x76 /* v */
+#define L1_BRICKTYPE_X 0x78 /* x */
+#define L1_BRICKTYPE_X2 0x79 /* y */
+
+/* EEPROM codes (for the "read EEPROM" request) */
+/* c brick */
+#define L1_EEP_NODE 0x00 /* node board */
+#define L1_EEP_PIMM0 0x01
+#define L1_EEP_PIMM(x) (L1_EEP_PIMM0+(x))
+#define L1_EEP_DIMM0 0x03
+#define L1_EEP_DIMM(x) (L1_EEP_DIMM0+(x))
+
+/* other brick types */
+#define L1_EEP_POWER 0x00 /* power board */
+#define L1_EEP_LOGIC 0x01 /* logic board */
+
+/* info area types */
+#define L1_EEP_CHASSIS 1 /* chassis info area */
+#define L1_EEP_BOARD 2 /* board info area */
+#define L1_EEP_IUSE 3 /* internal use area */
+#define L1_EEP_SPD 4 /* serial presence detect record */
+
+#define L1_DISPLAY_LINE_LENGTH 12 /* L1 display characters/line */
+
+#ifdef L1_DISP_2LINES
+#define L1_DISPLAY_LINES 2 /* number of L1 display lines */
+#else
+#define L1_DISPLAY_LINES 1 /* number of L1 display lines available
+ * to system software */
+#endif
+
+int elsc_display_line(nasid_t nasid, char *line, int lnum);
+int iobrick_rack_bay_type_get( nasid_t nasid, unsigned int *rack,
+ unsigned int *bay, unsigned int *brick_type );
+int iomoduleid_get( nasid_t nasid );
+
+
+#endif /* _ASM_IA64_SN_KSYS_L1_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_LABELCL_H
+#define _ASM_IA64_SN_LABELCL_H
+
+#define LABELCL_MAGIC 0x4857434c /* 'HWLC' */
+#define LABEL_LENGTH_MAX 256 /* Includes NULL char */
+#define INFO_DESC_PRIVATE (-1) /* default */
+#define INFO_DESC_EXPORT 0 /* export info itself */
+
+/*
+ * Description of a label entry.
+ */
+typedef struct label_info_s {
+ char *name;
+ arb_info_desc_t desc;
+ arbitrary_info_t info;
+} label_info_t;
+
+/*
+ * Definition of the data structure that provides the link to
+ * the hwgraph fastinfo and the label entries associated with a
+ * particular hwgraph entry.
+ */
+typedef struct labelcl_info_s {
+ unsigned long hwcl_magic;
+ unsigned long num_labels;
+ void *label_list;
+ arbitrary_info_t IDX_list[HWGRAPH_NUM_INDEX_INFO];
+} labelcl_info_t;
+
+/*
+ * Definitions for the string table that holds the actual names
+ * of the labels.
+ */
+struct string_table_item {
+ struct string_table_item *next;
+ char string[1];
+};
+
+struct string_table {
+ struct string_table_item *string_table_head;
+ long string_table_generation;
+};
+
+
+#define STRTBL_BASIC_SIZE ((size_t)(((struct string_table_item *)0)->string))
+#define STRTBL_ITEM_SIZE(str_length) (STRTBL_BASIC_SIZE + (str_length) + 1)
+
+#define STRTBL_ALLOC(str_length) \
+ ((struct string_table_item *)kmalloc(STRTBL_ITEM_SIZE(str_length), GFP_KERNEL))
+
+#define STRTBL_FREE(ptr) kfree(ptr)
+
+
+extern labelcl_info_t *labelcl_info_create(void);
+extern int labelcl_info_destroy(labelcl_info_t *);
+extern int labelcl_info_add_LBL(vertex_hdl_t, char *, arb_info_desc_t, arbitrary_info_t);
+extern int labelcl_info_remove_LBL(vertex_hdl_t, char *, arb_info_desc_t *, arbitrary_info_t *);
+extern int labelcl_info_replace_LBL(vertex_hdl_t, char *, arb_info_desc_t,
+ arbitrary_info_t, arb_info_desc_t *, arbitrary_info_t *);
+extern int labelcl_info_get_LBL(vertex_hdl_t, char *, arb_info_desc_t *,
+ arbitrary_info_t *);
+extern int labelcl_info_get_next_LBL(vertex_hdl_t, char *, arb_info_desc_t *,
+ arbitrary_info_t *, labelcl_info_place_t *);
+extern int labelcl_info_replace_IDX(vertex_hdl_t, int, arbitrary_info_t,
+ arbitrary_info_t *);
+extern int labelcl_info_connectpt_set(vertex_hdl_t, vertex_hdl_t);
+extern int labelcl_info_get_IDX(vertex_hdl_t, int, arbitrary_info_t *);
+
+#endif /* _ASM_IA64_SN_LABELCL_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_SN_PCI_BRIDGE_H
+#define _ASM_SN_PCI_BRIDGE_H
+
+
+/*
+ * bridge.h - header file for bridge chip and bridge portion of xbridge chip
+ *
+ * Also including offsets for unique PIC registers.
+ * The PIC asic is a follow-on to Xbridge and most of its registers are
+ * identical to those of Xbridge. PIC is different from Xbridge in that
+ * it will accept 64-bit register access and that, in some cases, data
+ * is kept in bits 63:32. PIC registers that are identical to Xbridge
+ * may be accessed identically to the Xbridge registers, allowing for lots
+ * of code reuse. Here are the access rules as described in the PIC
+ * manual:
+ *
+ * o Read a word on a DW boundary returns D31:00 of reg.
+ * o Read a DW on a DW boundary returns D63:00 of reg.
+ * o Write a word on a DW boundary loads D31:00 of reg.
+ * o Write a DW on a DW boundary loads D63:00 of reg.
+ * o No support for word boundary access that is not double word
+ * aligned.
+ *
+ * So we can reuse a lot of bridge_s for PIC. bridge_s includes
+ * #define tags and unions for 64-bit access to PIC registers.
+ * For a detailed PIC register layout see pic.h.
+ */
+
+#include <linux/config.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/pci/pic.h>
+
+#define BRIDGE_REG_GET32(reg) \
+ __swab32( *(volatile uint32_t *) (((uint64_t)reg)^4) )
+
+#define BRIDGE_REG_SET32(reg) \
+ *(volatile uint32_t *) (((uint64_t)reg)^4)
+
+/* I/O page size */
+
+#if PAGE_SIZE == 4096
+#define IOPFNSHIFT 12 /* 4K per mapped page */
+#else
+#define IOPFNSHIFT 14 /* 16K per mapped page */
+#endif /* PAGE_SIZE */
+
+#define IOPGSIZE (1 << IOPFNSHIFT)
+#define IOPG(x) ((x) >> IOPFNSHIFT)
+#define IOPGOFF(x) ((x) & (IOPGSIZE-1))
+
+/* Bridge RAM sizes */
+
+#define BRIDGE_INTERNAL_ATES 128
+#define XBRIDGE_INTERNAL_ATES 1024
+
+#define BRIDGE_ATE_RAM_SIZE (BRIDGE_INTERNAL_ATES<<3) /* 1kB ATE */
+#define XBRIDGE_ATE_RAM_SIZE (XBRIDGE_INTERNAL_ATES<<3) /* 8kB ATE */
+
+#define PIC_WR_REQ_BUFSIZE 256
+
+#define BRIDGE_CONFIG_BASE 0x20000 /* start of bridge's */
+ /* map to each device's */
+ /* config space */
+#define BRIDGE_CONFIG1_BASE 0x28000 /* type 1 device config space */
+#define BRIDGE_CONFIG_END 0x30000
+#define BRIDGE_CONFIG_SLOT_SIZE 0x1000 /* each map == 4k */
+
+#define BRIDGE_SSRAM_512K 0x00080000 /* 512kB */
+#define BRIDGE_SSRAM_128K 0x00020000 /* 128kB */
+#define BRIDGE_SSRAM_64K 0x00010000 /* 64kB */
+#define BRIDGE_SSRAM_0K 0x00000000 /* 0kB */
+
+/* ========================================================================
+ * Bridge address map
+ */
+
+#ifndef __ASSEMBLY__
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/*
+ * All accesses to bridge hardware registers must be done
+ * using 32-bit loads and stores.
+ */
+typedef uint32_t bridgereg_t;
+
+typedef uint64_t bridge_ate_t;
+
+/* pointers to bridge ATEs
+ * are always "pointer to volatile"
+ */
+typedef volatile bridge_ate_t *bridge_ate_p;
+
+/*
+ * It is generally preferred that hardware registers on the bridge
+ * are located from C code via this structure.
+ *
+ * Generated from Bridge spec dated 04oct95
+ */
+
+
+/*
+ * pic_widget_cfg_s is a local definition of widget_cfg_t but with
+ * a union of 64bit & 32bit registers, since PIC has 64bit widget
+ * registers but BRIDGE and XBRIDGE have 32bit. PIC registers that
+ * have valid bits (i.e. not just reserved) in the upper 32bits are
+ * defined as a union so we can access them as 64bit for PIC and
+ * as 32bit for BRIDGE and XBRIDGE.
+ */
+typedef volatile struct pic_widget_cfg_s {
+ bridgereg_t _b_wid_id; /* 0x000004 */
+ bridgereg_t _pad_000000;
+
+ union {
+ picreg_t _p_wid_stat; /* 0x000008 */
+ struct {
+ bridgereg_t _b_wid_stat; /* 0x00000C */
+ bridgereg_t _b_pad_000008;
+ } _b;
+ } u_wid_stat;
+ #define __p_wid_stat_64 u_wid_stat._p_wid_stat
+ #define __b_wid_stat u_wid_stat._b._b_wid_stat
+
+ bridgereg_t _b_wid_err_upper; /* 0x000014 */
+ bridgereg_t _pad_000010;
+
+ union {
+ picreg_t _p_wid_err_lower; /* 0x000018 */
+ struct {
+ bridgereg_t _b_wid_err_lower; /* 0x00001C */
+ bridgereg_t _b_pad_000018;
+ } _b;
+ } u_wid_err_lower;
+ #define __p_wid_err_64 u_wid_err_lower._p_wid_err_lower
+ #define __b_wid_err_lower u_wid_err_lower._b._b_wid_err_lower
+
+ union {
+ picreg_t _p_wid_control; /* 0x000020 */
+ struct {
+ bridgereg_t _b_wid_control; /* 0x000024 */
+ bridgereg_t _b_pad_000020;
+ } _b;
+ } u_wid_control;
+ #define __p_wid_control_64 u_wid_control._p_wid_control
+ #define __b_wid_control u_wid_control._b._b_wid_control
+
+ bridgereg_t _b_wid_req_timeout; /* 0x00002C */
+ bridgereg_t _pad_000028;
+
+ bridgereg_t _b_wid_int_upper; /* 0x000034 */
+ bridgereg_t _pad_000030;
+
+ union {
+ picreg_t _p_wid_int_lower; /* 0x000038 */
+ struct {
+ bridgereg_t _b_wid_int_lower; /* 0x00003C */
+ bridgereg_t _b_pad_000038;
+ } _b;
+ } u_wid_int_lower;
+ #define __p_wid_int_64 u_wid_int_lower._p_wid_int_lower
+ #define __b_wid_int_lower u_wid_int_lower._b._b_wid_int_lower
+
+ bridgereg_t _b_wid_err_cmdword; /* 0x000044 */
+ bridgereg_t _pad_000040;
+
+ bridgereg_t _b_wid_llp; /* 0x00004C */
+ bridgereg_t _pad_000048;
+
+ bridgereg_t _b_wid_tflush; /* 0x000054 */
+ bridgereg_t _pad_000050;
+} pic_widget_cfg_t;
+
+/*
+ * BRIDGE, XBRIDGE, PIC register definitions. NOTE: Prior to PIC, registers
+ * were a 32bit quantity and double word aligned (and only accessible as a
+ * 32bit word. PIC registers are 64bits and accessible as words or double
+ * words. PIC registers that have valid bits (ie. not just reserved) in the
+ * upper 32bits are defined as a union of one 64bit picreg_t and two 32bit
+ * bridgereg_t so we can access them both ways.
+ *
+ * It is generally preferred that hardware registers on the bridge are
+ * located from C code via this structure.
+ *
+ * Generated from Bridge spec dated 04oct95
+ */
+
+typedef volatile struct bridge_s {
+
+ /* 0x000000-0x00FFFF -- Local Registers */
+
+ /* 0x000000-0x000057 -- Standard Widget Configuration */
+ union {
+ widget_cfg_t xtalk_widget_def; /* 0x000000 */
+ pic_widget_cfg_t local_widget_def; /* 0x000000 */
+ } u_wid;
+
+ /* 32bit widget register access via the widget_cfg_t */
+ #define b_widget u_wid.xtalk_widget_def
+
+ /* 32bit widget register access via the pic_widget_cfg_t */
+ #define b_wid_id u_wid.local_widget_def._b_wid_id
+ #define b_wid_stat u_wid.local_widget_def.__b_wid_stat
+ #define b_wid_err_upper u_wid.local_widget_def._b_wid_err_upper
+ #define b_wid_err_lower u_wid.local_widget_def.__b_wid_err_lower
+ #define b_wid_control u_wid.local_widget_def.__b_wid_control
+ #define b_wid_req_timeout u_wid.local_widget_def._b_wid_req_timeout
+ #define b_wid_int_upper u_wid.local_widget_def._b_wid_int_upper
+ #define b_wid_int_lower u_wid.local_widget_def.__b_wid_int_lower
+ #define b_wid_err_cmdword u_wid.local_widget_def._b_wid_err_cmdword
+ #define b_wid_llp u_wid.local_widget_def._b_wid_llp
+ #define b_wid_tflush u_wid.local_widget_def._b_wid_tflush
+
+ /* 64bit widget register access via the pic_widget_cfg_t */
+ #define p_wid_stat_64 u_wid.local_widget_def.__p_wid_stat_64
+ #define p_wid_err_64 u_wid.local_widget_def.__p_wid_err_64
+ #define p_wid_control_64 u_wid.local_widget_def.__p_wid_control_64
+ #define p_wid_int_64 u_wid.local_widget_def.__p_wid_int_64
+
+ /* 0x000058-0x00007F -- Bridge-specific Widget Configuration */
+ bridgereg_t b_wid_aux_err; /* 0x00005C */
+ bridgereg_t _pad_000058;
+
+ bridgereg_t b_wid_resp_upper; /* 0x000064 */
+ bridgereg_t _pad_000060;
+
+ union {
+ picreg_t _p_wid_resp_lower; /* 0x000068 */
+ struct {
+ bridgereg_t _b_wid_resp_lower; /* 0x00006C */
+ bridgereg_t _b_pad_000068;
+ } _b;
+ } u_wid_resp_lower;
+ #define p_wid_resp_64 u_wid_resp_lower._p_wid_resp_lower
+ #define b_wid_resp_lower u_wid_resp_lower._b._b_wid_resp_lower
+
+ bridgereg_t b_wid_tst_pin_ctrl; /* 0x000074 */
+ bridgereg_t _pad_000070;
+
+ union {
+ picreg_t _p_addr_lkerr; /* 0x000078 */
+ struct {
+ bridgereg_t _b_pad_00007C;
+ bridgereg_t _b_pad_000078;
+ } _b;
+ } u_addr_lkerr;
+ #define p_addr_lkerr_64 u_addr_lkerr._p_addr_lkerr
+
+ /* 0x000080-0x00008F -- PMU */
+ bridgereg_t b_dir_map; /* 0x000084 */
+ bridgereg_t _pad_000080;
+
+ bridgereg_t _pad_00008C;
+ bridgereg_t _pad_000088;
+
+ /* 0x000090-0x00009F -- SSRAM */
+ bridgereg_t b_ram_perr_or_map_fault;/* 0x000094 */
+ bridgereg_t _pad_000090;
+ #define b_ram_perr b_ram_perr_or_map_fault /* Bridge */
+ #define b_map_fault b_ram_perr_or_map_fault /* Xbridge & PIC */
+
+ bridgereg_t _pad_00009C;
+ bridgereg_t _pad_000098;
+
+ /* 0x0000A0-0x0000AF -- Arbitration */
+ bridgereg_t b_arb; /* 0x0000A4 */
+ bridgereg_t _pad_0000A0;
+
+ bridgereg_t _pad_0000AC;
+ bridgereg_t _pad_0000A8;
+
+ /* 0x0000B0-0x0000BF -- Number In A Can or ATE Parity Error */
+ union {
+ picreg_t _p_ate_parity_err; /* 0x0000B0 */
+ struct {
+ bridgereg_t _b_nic; /* 0x0000B4 */
+ bridgereg_t _b_pad_0000B0;
+ } _b;
+ } u_ate_parity_err_or_nic;
+ #define p_ate_parity_err_64 u_ate_parity_err_or_nic._p_ate_parity_err
+ #define b_nic u_ate_parity_err_or_nic._b._b_nic
+
+ bridgereg_t _pad_0000BC;
+ bridgereg_t _pad_0000B8;
+
+ /* 0x0000C0-0x0000FF -- PCI/GIO */
+ bridgereg_t b_bus_timeout; /* 0x0000C4 */
+ bridgereg_t _pad_0000C0;
+ #define b_pci_bus_timeout b_bus_timeout
+
+ bridgereg_t b_pci_cfg; /* 0x0000CC */
+ bridgereg_t _pad_0000C8;
+
+ bridgereg_t b_pci_err_upper; /* 0x0000D4 */
+ bridgereg_t _pad_0000D0;
+ #define b_gio_err_upper b_pci_err_upper
+
+ union {
+ picreg_t _p_pci_err_lower; /* 0x0000D8 */
+ struct {
+ bridgereg_t _b_pci_err_lower; /* 0x0000DC */
+ bridgereg_t _b_pad_0000D8;
+ } _b;
+ } u_pci_err_lower;
+ #define p_pci_err_64 u_pci_err_lower._p_pci_err_lower
+ #define b_pci_err_lower u_pci_err_lower._b._b_pci_err_lower
+ #define b_gio_err_lower b_pci_err_lower
+
+ bridgereg_t _pad_0000E0[8];
+
+ /* 0x000100-0x0001FF -- Interrupt */
+ union {
+ picreg_t _p_int_status; /* 0x000100 */
+ struct {
+ bridgereg_t _b_int_status; /* 0x000104 */
+ bridgereg_t _b_pad_000100;
+ } _b;
+ } u_int_status;
+ #define p_int_status_64 u_int_status._p_int_status
+ #define b_int_status u_int_status._b._b_int_status
+
+ union {
+ picreg_t _p_int_enable; /* 0x000108 */
+ struct {
+ bridgereg_t _b_int_enable; /* 0x00010C */
+ bridgereg_t _b_pad_000108;
+ } _b;
+ } u_int_enable;
+ #define p_int_enable_64 u_int_enable._p_int_enable
+ #define b_int_enable u_int_enable._b._b_int_enable
+
+ union {
+ picreg_t _p_int_rst_stat; /* 0x000110 */
+ struct {
+ bridgereg_t _b_int_rst_stat; /* 0x000114 */
+ bridgereg_t _b_pad_000110;
+ } _b;
+ } u_int_rst_stat;
+ #define p_int_rst_stat_64 u_int_rst_stat._p_int_rst_stat
+ #define b_int_rst_stat u_int_rst_stat._b._b_int_rst_stat
+
+ bridgereg_t b_int_mode; /* 0x00011C */
+ bridgereg_t _pad_000118;
+
+ bridgereg_t b_int_device; /* 0x000124 */
+ bridgereg_t _pad_000120;
+
+ bridgereg_t b_int_host_err; /* 0x00012C */
+ bridgereg_t _pad_000128;
+
+ union {
+ picreg_t _p_int_addr[8]; /* 0x0001{30,,,68} */
+ struct {
+ bridgereg_t addr; /* 0x0001{34,,,6C} */
+ bridgereg_t _b_pad;
+ } _b[8];
+ } u_int_addr;
+ #define p_int_addr_64 u_int_addr._p_int_addr
+ #define b_int_addr u_int_addr._b
+
+ union {
+ picreg_t _p_err_int_view; /* 0x000170 */
+ struct {
+ bridgereg_t _b_err_int_view; /* 0x000174 */
+ bridgereg_t _b_pad_000170;
+ } _b;
+ } u_err_int_view;
+ #define p_err_int_view_64 u_err_int_view._p_err_int_view
+ #define b_err_int_view u_err_int_view._b._b_err_int_view
+
+ union {
+ picreg_t _p_mult_int; /* 0x000178 */
+ struct {
+ bridgereg_t _b_mult_int; /* 0x00017C */
+ bridgereg_t _b_pad_000178;
+ } _b;
+ } u_mult_int;
+ #define p_mult_int_64 u_mult_int._p_mult_int
+ #define b_mult_int u_mult_int._b._b_mult_int
+
+ struct {
+ bridgereg_t intr; /* 0x0001{84,,,BC} */
+ bridgereg_t __pad;
+ } b_force_always[8];
+
+ struct {
+ bridgereg_t intr; /* 0x0001{C4,,,FC} */
+ bridgereg_t __pad;
+ } b_force_pin[8];
+
+ /* 0x000200-0x0003FF -- Device */
+ struct {
+ bridgereg_t reg; /* 0x0002{04,,,3C} */
+ bridgereg_t __pad;
+ } b_device[8];
+
+ struct {
+ bridgereg_t reg; /* 0x0002{44,,,7C} */
+ bridgereg_t __pad;
+ } b_wr_req_buf[8];
+
+ struct {
+ bridgereg_t reg; /* 0x0002{84,,,8C} */
+ bridgereg_t __pad;
+ } b_rrb_map[2];
+ #define b_even_resp b_rrb_map[0].reg /* 0x000284 */
+ #define b_odd_resp b_rrb_map[1].reg /* 0x00028C */
+
+ bridgereg_t b_resp_status; /* 0x000294 */
+ bridgereg_t _pad_000290;
+
+ bridgereg_t b_resp_clear; /* 0x00029C */
+ bridgereg_t _pad_000298;
+
+ bridgereg_t _pad_0002A0[24];
+
+ /* Xbridge/PIC only */
+ union {
+ struct {
+ picreg_t lower; /* 0x0003{08,,,F8} */
+ picreg_t upper; /* 0x0003{00,,,F0} */
+ } _p[16];
+ struct {
+ bridgereg_t upper; /* 0x0003{04,,,F4} */
+ bridgereg_t _b_pad1;
+ bridgereg_t lower; /* 0x0003{0C,,,FC} */
+ bridgereg_t _b_pad2;
+ } _b[16];
+ } u_buf_addr_match;
+ #define p_buf_addr_match_64 u_buf_addr_match._p
+ #define b_buf_addr_match u_buf_addr_match._b
+
+ /* 0x000400-0x0005FF -- Performance Monitor Registers (even only) */
+ struct {
+ bridgereg_t flush_w_touch; /* 0x000{404,,,5C4} */
+ bridgereg_t __pad1;
+ bridgereg_t flush_wo_touch; /* 0x000{40C,,,5CC} */
+ bridgereg_t __pad2;
+ bridgereg_t inflight; /* 0x000{414,,,5D4} */
+ bridgereg_t __pad3;
+ bridgereg_t prefetch; /* 0x000{41C,,,5DC} */
+ bridgereg_t __pad4;
+ bridgereg_t total_pci_retry; /* 0x000{424,,,5E4} */
+ bridgereg_t __pad5;
+ bridgereg_t max_pci_retry; /* 0x000{42C,,,5EC} */
+ bridgereg_t __pad6;
+ bridgereg_t max_latency; /* 0x000{434,,,5F4} */
+ bridgereg_t __pad7;
+ bridgereg_t clear_all; /* 0x000{43C,,,5FC} */
+ bridgereg_t __pad8;
+ } b_buf_count[8];
+
+ /*
+ * "PCI/X registers that are specific to PIC". See pic.h.
+ */
+
+ /* 0x000600-0x0009FF -- PCI/X registers */
+ picreg_t p_pcix_bus_err_addr_64; /* 0x000600 */
+ picreg_t p_pcix_bus_err_attr_64; /* 0x000608 */
+ picreg_t p_pcix_bus_err_data_64; /* 0x000610 */
+ picreg_t p_pcix_pio_split_addr_64; /* 0x000618 */
+ picreg_t p_pcix_pio_split_attr_64; /* 0x000620 */
+ picreg_t p_pcix_dma_req_err_attr_64; /* 0x000628 */
+ picreg_t p_pcix_dma_req_err_addr_64; /* 0x000630 */
+ picreg_t p_pcix_timeout_64; /* 0x000638 */
+
+ picreg_t _pad_000600[120];
+
+ /* 0x000A00-0x000BFF -- PCI/X Read&Write Buffer */
+ struct {
+ picreg_t p_buf_attr; /* 0X000{A08,,,AF8} */
+ picreg_t p_buf_addr; /* 0x000{A00,,,AF0} */
+ } p_pcix_read_buf_64[16];
+
+ struct {
+ picreg_t p_buf_attr; /* 0x000{B08,,,BE8} */
+ picreg_t p_buf_addr; /* 0x000{B00,,,BE0} */
+ picreg_t __pad1; /* 0x000{B18,,,BF8} */
+ picreg_t p_buf_valid; /* 0x000{B10,,,BF0} */
+ } p_pcix_write_buf_64[8];
+
+ /*
+ * end "PCI/X registers that are specific to PIC"
+ */
+
+ char _pad_000c00[0x010000 - 0x000c00];
+
+ /* 0x010000-0x011fff -- Internal Address Translation Entry RAM */
+ /*
+ * Xbridge and PIC have 1024 internal ATE's and the Bridge has 128.
+ * Make enough room for the Xbridge/PIC ATE's and depend on runtime
+ * checks to limit access to bridge ATE's.
+ *
+ * In [X]bridge the internal ATE Ram is written as double words only,
+ * but due to internal design issues it is read back as single words.
+ * i.e:
+ * b_int_ate_ram[index].hi.rd << 32 | xb_int_ate_ram_lo[index].rd
+ */
+ union {
+ bridge_ate_t wr; /* write-only */ /* 0x01{0000,,,1FF8} */
+ struct {
+ bridgereg_t rd; /* read-only */ /* 0x01{0004,,,1FFC} */
+ bridgereg_t _p_pad;
+ } hi;
+ } b_int_ate_ram[XBRIDGE_INTERNAL_ATES];
+ #define b_int_ate_ram_lo(idx) b_int_ate_ram[idx+512].hi.rd
+
+ /* 0x012000-0x013fff -- Internal Address Translation Entry RAM LOW */
+ struct {
+ bridgereg_t rd; /* read-only */ /* 0x01{2004,,,3FFC} */
+ bridgereg_t _p_pad;
+ } xb_int_ate_ram_lo[XBRIDGE_INTERNAL_ATES];
+
+ char _pad_014000[0x18000 - 0x014000];
+
+ /* 0x18000-0x197F8 -- PIC Write Request Ram */
+ /* 0x18000 - 0x187F8 */
+ picreg_t p_wr_req_lower[PIC_WR_REQ_BUFSIZE];
+ /* 0x18800 - 0x18FF8 */
+ picreg_t p_wr_req_upper[PIC_WR_REQ_BUFSIZE];
+ /* 0x19000 - 0x197F8 */
+ picreg_t p_wr_req_parity[PIC_WR_REQ_BUFSIZE];
+
+ char _pad_019800[0x20000 - 0x019800];
+
+ /* 0x020000-0x027FFF -- PCI Device Configuration Spaces */
+ union { /* make all access sizes available. */
+ unsigned char c[0x1000 / 1]; /* 0x02{0000,,,7FFF} */
+ uint16_t s[0x1000 / 2]; /* 0x02{0000,,,7FFF} */
+ uint32_t l[0x1000 / 4]; /* 0x02{0000,,,7FFF} */
+ uint64_t d[0x1000 / 8]; /* 0x02{0000,,,7FFF} */
+ union {
+ unsigned char c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } b_type0_cfg_dev[8]; /* 0x02{0000,,,7FFF} */
+
+ /* 0x028000-0x028FFF -- PCI Type 1 Configuration Space */
+ union { /* make all access sizes available. */
+ unsigned char c[0x1000 / 1];
+ uint16_t s[0x1000 / 2];
+ uint32_t l[0x1000 / 4];
+ uint64_t d[0x1000 / 8];
+ union {
+ unsigned char c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } b_type1_cfg; /* 0x028000-0x029000 */
+
+ char _pad_029000[0x007000]; /* 0x029000-0x030000 */
+
+ /* 0x030000-0x030007 -- PCI Interrupt Acknowledge Cycle */
+ union {
+ unsigned char c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } b_pci_iack; /* 0x030000-0x030007 */
+
+ unsigned char _pad_030007[0x04fff8]; /* 0x030008-0x07FFFF */
+
+ /* 0x080000-0x0FFFFF -- External Address Translation Entry RAM */
+ bridge_ate_t b_ext_ate_ram[0x10000];
+
+ /* 0x100000-0x1FFFFF -- Reserved */
+ char _pad_100000[0x200000-0x100000];
+
+ /* 0x200000-0xBFFFFF -- PCI/GIO Device Spaces */
+ union { /* make all access sizes available. */
+ unsigned char c[0x100000 / 1];
+ uint16_t s[0x100000 / 2];
+ uint32_t l[0x100000 / 4];
+ uint64_t d[0x100000 / 8];
+ } b_devio_raw[10];
+
+ /* b_devio macro is a bit strange; it reflects the
+ * fact that the Bridge ASIC provides 2M for the
+ * first two DevIO windows and 1M for the other six.
+ */
+ #define b_devio(n) b_devio_raw[((n)<2)?((n)*2):((n)+2)]
+
+ /* 0xC00000-0xFFFFFF -- External Flash Proms 1,0 */
+ union { /* make all access sizes available. */
+ unsigned char c[0x400000 / 1]; /* read-only */
+ uint16_t s[0x400000 / 2]; /* read-write */
+ uint32_t l[0x400000 / 4]; /* read-only */
+ uint64_t d[0x400000 / 8]; /* read-only */
+ } b_external_flash;
+} bridge_t;
+
+#define berr_field berr_un.berr_st
+#endif /* __ASSEMBLY__ */
+
+/*
+ * The values of these macros can and should be cross-checked
+ * regularly against the offsets of the like-named fields
+ * within the "bridge_t" structure above.
+ */
+
+/* Byte offset macros for Bridge internal registers */
+
+#define BRIDGE_WID_ID WIDGET_ID
+#define BRIDGE_WID_STAT WIDGET_STATUS
+#define BRIDGE_WID_ERR_UPPER WIDGET_ERR_UPPER_ADDR
+#define BRIDGE_WID_ERR_LOWER WIDGET_ERR_LOWER_ADDR
+#define BRIDGE_WID_CONTROL WIDGET_CONTROL
+#define BRIDGE_WID_REQ_TIMEOUT WIDGET_REQ_TIMEOUT
+#define BRIDGE_WID_INT_UPPER WIDGET_INTDEST_UPPER_ADDR
+#define BRIDGE_WID_INT_LOWER WIDGET_INTDEST_LOWER_ADDR
+#define BRIDGE_WID_ERR_CMDWORD WIDGET_ERR_CMD_WORD
+#define BRIDGE_WID_LLP WIDGET_LLP_CFG
+#define BRIDGE_WID_TFLUSH WIDGET_TFLUSH
+
+#define BRIDGE_WID_AUX_ERR 0x00005C /* Aux Error Command Word */
+#define BRIDGE_WID_RESP_UPPER 0x000064 /* Response Buf Upper Addr */
+#define BRIDGE_WID_RESP_LOWER 0x00006C /* Response Buf Lower Addr */
+#define BRIDGE_WID_TST_PIN_CTRL 0x000074 /* Test pin control */
+
+#define BRIDGE_DIR_MAP 0x000084 /* Direct Map reg */
+
+/* Bridge has SSRAM Parity Error and Xbridge has Map Fault here */
+#define BRIDGE_RAM_PERR 0x000094 /* SSRAM Parity Error */
+#define BRIDGE_MAP_FAULT 0x000094 /* Map Fault */
+
+#define BRIDGE_ARB 0x0000A4 /* Arbitration Priority reg */
+
+#define BRIDGE_NIC 0x0000B4 /* Number In A Can */
+
+#define BRIDGE_BUS_TIMEOUT 0x0000C4 /* Bus Timeout Register */
+#define BRIDGE_PCI_BUS_TIMEOUT BRIDGE_BUS_TIMEOUT
+#define BRIDGE_PCI_CFG 0x0000CC /* PCI Type 1 Config reg */
+#define BRIDGE_PCI_ERR_UPPER 0x0000D4 /* PCI error Upper Addr */
+#define BRIDGE_PCI_ERR_LOWER 0x0000DC /* PCI error Lower Addr */
+
+#define BRIDGE_INT_STATUS 0x000104 /* Interrupt Status */
+#define BRIDGE_INT_ENABLE 0x00010C /* Interrupt Enables */
+#define BRIDGE_INT_RST_STAT 0x000114 /* Reset Intr Status */
+#define BRIDGE_INT_MODE 0x00011C /* Interrupt Mode */
+#define BRIDGE_INT_DEVICE 0x000124 /* Interrupt Device */
+#define BRIDGE_INT_HOST_ERR 0x00012C /* Host Error Field */
+
+#define BRIDGE_INT_ADDR0 0x000134 /* Host Address Reg */
+#define BRIDGE_INT_ADDR_OFF 0x000008 /* Host Addr offset (1..7) */
+#define BRIDGE_INT_ADDR(x) (BRIDGE_INT_ADDR0+(x)*BRIDGE_INT_ADDR_OFF)
+
+#define BRIDGE_INT_VIEW 0x000174 /* Interrupt view */
+#define BRIDGE_MULTIPLE_INT 0x00017c /* Multiple interrupt occurred */
+
+#define BRIDGE_FORCE_ALWAYS0 0x000184 /* Force an interrupt (always)*/
+#define BRIDGE_FORCE_ALWAYS_OFF 0x000008 /* Force Always offset */
+#define BRIDGE_FORCE_ALWAYS(x) (BRIDGE_FORCE_ALWAYS0+(x)*BRIDGE_FORCE_ALWAYS_OFF)
+
+#define BRIDGE_FORCE_PIN0 0x0001c4 /* Force an interrupt */
+#define BRIDGE_FORCE_PIN_OFF 0x000008 /* Force Pin offset */
+#define BRIDGE_FORCE_PIN(x) (BRIDGE_FORCE_PIN0+(x)*BRIDGE_FORCE_PIN_OFF)
+
+#define BRIDGE_DEVICE0 0x000204 /* Device 0 */
+#define BRIDGE_DEVICE_OFF 0x000008 /* Device offset (1..7) */
+#define BRIDGE_DEVICE(x) (BRIDGE_DEVICE0+(x)*BRIDGE_DEVICE_OFF)
+
+#define BRIDGE_WR_REQ_BUF0 0x000244 /* Write Request Buffer 0 */
+#define BRIDGE_WR_REQ_BUF_OFF 0x000008 /* Buffer Offset (1..7) */
+#define BRIDGE_WR_REQ_BUF(x) (BRIDGE_WR_REQ_BUF0+(x)*BRIDGE_WR_REQ_BUF_OFF)
+
+#define BRIDGE_EVEN_RESP 0x000284 /* Even Device Response Buf */
+#define BRIDGE_ODD_RESP 0x00028C /* Odd Device Response Buf */
+
+#define BRIDGE_RESP_STATUS 0x000294 /* Read Response Status reg */
+#define BRIDGE_RESP_CLEAR 0x00029C /* Read Response Clear reg */
+
+#define BRIDGE_BUF_ADDR_UPPER0 0x000304
+#define BRIDGE_BUF_ADDR_UPPER_OFF 0x000010 /* PCI Buffer Upper Offset */
+#define BRIDGE_BUF_ADDR_UPPER(x) (BRIDGE_BUF_ADDR_UPPER0+(x)*BRIDGE_BUF_ADDR_UPPER_OFF)
+
+#define BRIDGE_BUF_ADDR_LOWER0 0x00030c
+#define BRIDGE_BUF_ADDR_LOWER_OFF 0x000010 /* PCI Buffer Lower Offset */
+#define BRIDGE_BUF_ADDR_LOWER(x) (BRIDGE_BUF_ADDR_LOWER0+(x)*BRIDGE_BUF_ADDR_LOWER_OFF)
+
+/*
+ * Performance Monitor Registers.
+ *
+ * The Performance registers are those registers which are associated with
+ * monitoring the performance of PCI-generated reads to the host
+ * environment. Because of the size of the register file, only the even
+ * registers were instrumented.
+ */
+
+#define BRIDGE_BUF_OFF 0x40
+#define BRIDGE_BUF_NEXT(base, off) (base+((off)*BRIDGE_BUF_OFF))
+
+/*
+ * Buffer (x) Flush Count with Data Touch Register.
+ *
+ * This counter is incremented each time the corresponding response buffer
+ * is flushed after at least a single data element in the buffer is used.
+ * A word write to this address clears the count.
+ */
+
+#define BRIDGE_BUF_0_FLUSH_TOUCH 0x000404
+#define BRIDGE_BUF_2_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 1)
+#define BRIDGE_BUF_4_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 2)
+#define BRIDGE_BUF_6_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 3)
+#define BRIDGE_BUF_8_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 4)
+#define BRIDGE_BUF_10_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 5)
+#define BRIDGE_BUF_12_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 6)
+#define BRIDGE_BUF_14_FLUSH_TOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_TOUCH, 7)
+
+/*
+ * Buffer (x) Flush Count w/o Data Touch Register
+ *
+ * This counter is incremented each time the corresponding response buffer
+ * is flushed without any data element in the buffer being used. A word
+ * write to this address clears the count.
+ */
+
+#define BRIDGE_BUF_0_FLUSH_NOTOUCH 0x00040c
+#define BRIDGE_BUF_2_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 1)
+#define BRIDGE_BUF_4_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 2)
+#define BRIDGE_BUF_6_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 3)
+#define BRIDGE_BUF_8_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 4)
+#define BRIDGE_BUF_10_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 5)
+#define BRIDGE_BUF_12_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 6)
+#define BRIDGE_BUF_14_FLUSH_NOTOUCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_FLUSH_NOTOUCH, 7)
+
+/*
+ * Buffer (x) Request in Flight Count Register
+ *
+ * This counter is incremented on each bus clock while the request is in
+ * flight. A word write to this address clears the count.
+ */
+
+#define BRIDGE_BUF_0_INFLIGHT 0x000414
+#define BRIDGE_BUF_2_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 1)
+#define BRIDGE_BUF_4_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 2)
+#define BRIDGE_BUF_6_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 3)
+#define BRIDGE_BUF_8_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 4)
+#define BRIDGE_BUF_10_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 5)
+#define BRIDGE_BUF_12_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 6)
+#define BRIDGE_BUF_14_INFLIGHT BRIDGE_BUF_NEXT(BRIDGE_BUF_0_INFLIGHT, 7)
+
+/*
+ * Buffer (x) Prefetch Request Count Register
+ *
+ * This counter is incremented each time the request using this buffer was
+ * generated from the prefetcher. A word write to this address clears the
+ * count.
+ */
+
+#define BRIDGE_BUF_0_PREFETCH 0x00041C
+#define BRIDGE_BUF_2_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 1)
+#define BRIDGE_BUF_4_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 2)
+#define BRIDGE_BUF_6_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 3)
+#define BRIDGE_BUF_8_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 4)
+#define BRIDGE_BUF_10_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 5)
+#define BRIDGE_BUF_12_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 6)
+#define BRIDGE_BUF_14_PREFETCH BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PREFETCH, 7)
+
+/*
+ * Buffer (x) Total PCI Retry Count Register
+ *
+ * This counter is incremented each time a PCI bus retry occurs and the
+ * address matches the tag for the selected buffer. The buffer must also
+ * have this request in-flight. A word write to this address clears the count.
+ */
+
+#define BRIDGE_BUF_0_PCI_RETRY 0x000424
+#define BRIDGE_BUF_2_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 1)
+#define BRIDGE_BUF_4_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 2)
+#define BRIDGE_BUF_6_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 3)
+#define BRIDGE_BUF_8_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 4)
+#define BRIDGE_BUF_10_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 5)
+#define BRIDGE_BUF_12_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 6)
+#define BRIDGE_BUF_14_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_PCI_RETRY, 7)
+
+/*
+ * Buffer (x) Max PCI Retry Count Register
+ *
+ * This counter contains the maximum retry count for a single request
+ * which was in-flight for this buffer. A word write to this address
+ * clears the count.
+ */
+
+#define BRIDGE_BUF_0_MAX_PCI_RETRY 0x00042C
+#define BRIDGE_BUF_2_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 1)
+#define BRIDGE_BUF_4_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 2)
+#define BRIDGE_BUF_6_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 3)
+#define BRIDGE_BUF_8_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 4)
+#define BRIDGE_BUF_10_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 5)
+#define BRIDGE_BUF_12_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 6)
+#define BRIDGE_BUF_14_MAX_PCI_RETRY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_PCI_RETRY, 7)
+
+/*
+ * Buffer (x) Max Latency Count Register
+ *
+ * This counter contains the maximum count (in bus clocks) for a single
+ * request which was in-flight for this buffer. A word write to this
+ * address clears the count.
+ */
+
+#define BRIDGE_BUF_0_MAX_LATENCY 0x000434
+#define BRIDGE_BUF_2_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 1)
+#define BRIDGE_BUF_4_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 2)
+#define BRIDGE_BUF_6_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 3)
+#define BRIDGE_BUF_8_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 4)
+#define BRIDGE_BUF_10_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 5)
+#define BRIDGE_BUF_12_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 6)
+#define BRIDGE_BUF_14_MAX_LATENCY BRIDGE_BUF_NEXT(BRIDGE_BUF_0_MAX_LATENCY, 7)
+
+/*
+ * Buffer (x) Clear All Register
+ *
+ * Any access to this register clears all the count values for the (x)
+ * registers.
+ */
+
+#define BRIDGE_BUF_0_CLEAR_ALL 0x00043C
+#define BRIDGE_BUF_2_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 1)
+#define BRIDGE_BUF_4_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 2)
+#define BRIDGE_BUF_6_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 3)
+#define BRIDGE_BUF_8_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 4)
+#define BRIDGE_BUF_10_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 5)
+#define BRIDGE_BUF_12_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 6)
+#define BRIDGE_BUF_14_CLEAR_ALL BRIDGE_BUF_NEXT(BRIDGE_BUF_0_CLEAR_ALL, 7)
+
+/* end of Performance Monitor Registers */
+
+/* Byte offset macros for Bridge I/O space.
+ *
+ * NOTE: Where applicable please use the PCIBR_xxx or PCIBRIDGE_xxx
+ * macros (below) as they will handle [X]Bridge and PIC. For example,
+ * PCIBRIDGE_TYPE0_CFG_DEV0() vs BRIDGE_TYPE0_CFG_DEV0
+ */
+
+#define BRIDGE_ATE_RAM 0x00010000 /* Internal Addr Xlat Ram */
+
+#define BRIDGE_TYPE0_CFG_DEV0 0x00020000 /* Type 0 Cfg, Device 0 */
+#define BRIDGE_TYPE0_CFG_SLOT_OFF 0x00001000 /* Type 0 Cfg Slot Offset (1..7) */
+#define BRIDGE_TYPE0_CFG_FUNC_OFF 0x00000100 /* Type 0 Cfg Func Offset (1..7) */
+#define BRIDGE_TYPE0_CFG_DEV(s) (BRIDGE_TYPE0_CFG_DEV0+\
+ (s)*BRIDGE_TYPE0_CFG_SLOT_OFF)
+#define BRIDGE_TYPE0_CFG_DEVF(s,f) (BRIDGE_TYPE0_CFG_DEV0+\
+ (s)*BRIDGE_TYPE0_CFG_SLOT_OFF+\
+ (f)*BRIDGE_TYPE0_CFG_FUNC_OFF)
+
+#define BRIDGE_TYPE1_CFG 0x00028000 /* Type 1 Cfg space */
+
+#define BRIDGE_PCI_IACK 0x00030000 /* PCI Interrupt Ack */
+#define BRIDGE_EXT_SSRAM 0x00080000 /* Extern SSRAM (ATE) */
+
+/* Byte offset macros for Bridge device IO spaces */
+
+#define BRIDGE_DEV_CNT 8 /* Up to 8 devices per bridge */
+#define BRIDGE_DEVIO0 0x00200000 /* Device IO 0 Addr */
+#define BRIDGE_DEVIO1 0x00400000 /* Device IO 1 Addr */
+#define BRIDGE_DEVIO2 0x00600000 /* Device IO 2 Addr */
+#define BRIDGE_DEVIO_OFF 0x00100000 /* Device IO Offset (3..7) */
+
+#define BRIDGE_DEVIO_2MB 0x00200000 /* Device IO Offset (0..1) */
+#define BRIDGE_DEVIO_1MB 0x00100000 /* Device IO Offset (2..7) */
+
+#ifndef __ASSEMBLY__
+
+#define BRIDGE_DEVIO(x) ((x)<=1 ? BRIDGE_DEVIO0+(x)*BRIDGE_DEVIO_2MB : BRIDGE_DEVIO2+((x)-2)*BRIDGE_DEVIO_1MB)
+
+/*
+ * The device space macros for PIC are more complicated because the PIC has
+ * two PCI/X bridges under the same widget. For PIC bus 0, the addresses are
+ * basically the same as for the [X]Bridge. For PIC bus 1, the addresses are
+ * offset by 0x800000. Two sets of macros are provided: "PCIBRIDGE_xxx"
+ * macros that return the address based on the supplied bus number, and
+ * equivalent "PCIBR_xxx" macros that may be used with a pcibr_soft_s
+ * structure. Both should work with all bridges.
+ */
+#define PIC_BUS1_OFFSET 0x800000
+
+#define PCIBRIDGE_TYPE0_CFG_DEV0(busnum) \
+ ((busnum) ? BRIDGE_TYPE0_CFG_DEV0 + PIC_BUS1_OFFSET : \
+ BRIDGE_TYPE0_CFG_DEV0)
+#define PCIBRIDGE_TYPE1_CFG(busnum) \
+ ((busnum) ? BRIDGE_TYPE1_CFG + PIC_BUS1_OFFSET : BRIDGE_TYPE1_CFG)
+#define PCIBRIDGE_TYPE0_CFG_DEV(busnum, s) \
+ (PCIBRIDGE_TYPE0_CFG_DEV0(busnum)+\
+ (s)*BRIDGE_TYPE0_CFG_SLOT_OFF)
+#define PCIBRIDGE_TYPE0_CFG_DEVF(busnum, s, f) \
+ (PCIBRIDGE_TYPE0_CFG_DEV0(busnum)+\
+ (s)*BRIDGE_TYPE0_CFG_SLOT_OFF+\
+ (f)*BRIDGE_TYPE0_CFG_FUNC_OFF)
+#define PCIBRIDGE_DEVIO0(busnum) ((busnum) ? \
+ (BRIDGE_DEVIO0 + PIC_BUS1_OFFSET) : BRIDGE_DEVIO0)
+#define PCIBRIDGE_DEVIO1(busnum) ((busnum) ? \
+ (BRIDGE_DEVIO1 + PIC_BUS1_OFFSET) : BRIDGE_DEVIO1)
+#define PCIBRIDGE_DEVIO2(busnum) ((busnum) ? \
+ (BRIDGE_DEVIO2 + PIC_BUS1_OFFSET) : BRIDGE_DEVIO2)
+#define PCIBRIDGE_DEVIO(busnum, x) \
+ ((x)<=1 ? PCIBRIDGE_DEVIO0(busnum)+(x)*BRIDGE_DEVIO_2MB : \
+ PCIBRIDGE_DEVIO2(busnum)+((x)-2)*BRIDGE_DEVIO_1MB)
+
+#define PCIBR_BRIDGE_DEVIO0(ps) PCIBRIDGE_DEVIO0((ps)->bs_busnum)
+#define PCIBR_BRIDGE_DEVIO1(ps) PCIBRIDGE_DEVIO1((ps)->bs_busnum)
+#define PCIBR_BRIDGE_DEVIO2(ps) PCIBRIDGE_DEVIO2((ps)->bs_busnum)
+#define PCIBR_BRIDGE_DEVIO(ps, s) PCIBRIDGE_DEVIO((ps)->bs_busnum, s)
+
+#define PCIBR_TYPE1_CFG(ps) PCIBRIDGE_TYPE1_CFG((ps)->bs_busnum)
+#define PCIBR_BUS_TYPE0_CFG_DEV0(ps) PCIBR_TYPE0_CFG_DEV(ps, 0)
+#define PCIBR_TYPE0_CFG_DEV(ps, s) PCIBRIDGE_TYPE0_CFG_DEV((ps)->bs_busnum, s+1)
+#define PCIBR_BUS_TYPE0_CFG_DEVF(ps,s,f) PCIBRIDGE_TYPE0_CFG_DEVF((ps)->bs_busnum,(s+1),f)
+
+/* NOTE: 's' is the internal device number, not the external slot number */
+#define PCIBR_BUS_TYPE0_CFG_DEV(ps, s) \
+ PCIBRIDGE_TYPE0_CFG_DEV((ps)->bs_busnum, s+1)
+
+#endif /* __ASSEMBLY__ */
+
+#define BRIDGE_EXTERNAL_FLASH 0x00C00000 /* External Flash PROMS */
+
+/* ========================================================================
+ * Bridge register bit field definitions
+ */
+
+/* Widget part number of bridge */
+#define BRIDGE_WIDGET_PART_NUM 0xc002
+#define XBRIDGE_WIDGET_PART_NUM 0xd002
+
+/* Manufacturer of bridge */
+#define BRIDGE_WIDGET_MFGR_NUM 0x036
+#define XBRIDGE_WIDGET_MFGR_NUM 0x024
+
+/* Revision numbers for known [X]Bridge revisions */
+#define BRIDGE_REV_A 0x1
+#define BRIDGE_REV_B 0x2
+#define BRIDGE_REV_C 0x3
+#define BRIDGE_REV_D 0x4
+#define XBRIDGE_REV_A 0x1
+#define XBRIDGE_REV_B 0x2
+
+/* macros to determine bridge type. 'wid' == widget identification */
+#define IS_PIC_BUS0(wid) (XWIDGET_PART_NUM(wid) == PIC_WIDGET_PART_NUM_BUS0 && \
+ XWIDGET_MFG_NUM(wid) == PIC_WIDGET_MFGR_NUM)
+#define IS_PIC_BUS1(wid) (XWIDGET_PART_NUM(wid) == PIC_WIDGET_PART_NUM_BUS1 && \
+ XWIDGET_MFG_NUM(wid) == PIC_WIDGET_MFGR_NUM)
+#define IS_PIC_BRIDGE(wid) (IS_PIC_BUS0(wid) || IS_PIC_BUS1(wid))
+
+/* Part + Rev numbers allow distinction and an ascending sequence */
+#define BRIDGE_PART_REV_A (BRIDGE_WIDGET_PART_NUM << 4 | BRIDGE_REV_A)
+#define BRIDGE_PART_REV_B (BRIDGE_WIDGET_PART_NUM << 4 | BRIDGE_REV_B)
+#define BRIDGE_PART_REV_C (BRIDGE_WIDGET_PART_NUM << 4 | BRIDGE_REV_C)
+#define BRIDGE_PART_REV_D (BRIDGE_WIDGET_PART_NUM << 4 | BRIDGE_REV_D)
+#define XBRIDGE_PART_REV_A (XBRIDGE_WIDGET_PART_NUM << 4 | XBRIDGE_REV_A)
+#define XBRIDGE_PART_REV_B (XBRIDGE_WIDGET_PART_NUM << 4 | XBRIDGE_REV_B)
+
+/* Bridge widget status register bits definition */
+#define PIC_STAT_PCIX_SPEED (0x3ull << 34)
+#define PIC_STAT_PCIX_ACTIVE (0x1ull << 33)
+#define BRIDGE_STAT_LLP_REC_CNT (0xFFu << 24)
+#define BRIDGE_STAT_LLP_TX_CNT (0xFF << 16)
+#define BRIDGE_STAT_FLASH_SELECT (0x1 << 6)
+#define BRIDGE_STAT_PCI_GIO_N (0x1 << 5)
+#define BRIDGE_STAT_PENDING (0x1F << 0)
+
+/* Bridge widget control register bits definition */
+#define PIC_CTRL_NO_SNOOP (0x1ull << 62)
+#define PIC_CTRL_RELAX_ORDER (0x1ull << 61)
+#define PIC_CTRL_BUS_NUM(x) ((unsigned long long)(x) << 48)
+#define PIC_CTRL_BUS_NUM_MASK (PIC_CTRL_BUS_NUM(0xff))
+#define PIC_CTRL_DEV_NUM(x) ((unsigned long long)(x) << 43)
+#define PIC_CTRL_DEV_NUM_MASK (PIC_CTRL_DEV_NUM(0x1f))
+#define PIC_CTRL_FUN_NUM(x) ((unsigned long long)(x) << 40)
+#define PIC_CTRL_FUN_NUM_MASK (PIC_CTRL_FUN_NUM(0x7))
+#define PIC_CTRL_PAR_EN_REQ (0x1ull << 29)
+#define PIC_CTRL_PAR_EN_RESP (0x1ull << 30)
+#define PIC_CTRL_PAR_EN_ATE (0x1ull << 31)
+#define BRIDGE_CTRL_FLASH_WR_EN (0x1ul << 31) /* bridge only */
+#define BRIDGE_CTRL_EN_CLK50 (0x1 << 30)
+#define BRIDGE_CTRL_EN_CLK40 (0x1 << 29)
+#define BRIDGE_CTRL_EN_CLK33 (0x1 << 28)
+#define BRIDGE_CTRL_RST(n) ((n) << 24)
+#define BRIDGE_CTRL_RST_MASK (BRIDGE_CTRL_RST(0xF))
+#define BRIDGE_CTRL_RST_PIN(x) (BRIDGE_CTRL_RST(0x1 << (x)))
+#define BRIDGE_CTRL_IO_SWAP (0x1 << 23)
+#define BRIDGE_CTRL_MEM_SWAP (0x1 << 22)
+#define BRIDGE_CTRL_PAGE_SIZE (0x1 << 21)
+#define BRIDGE_CTRL_SS_PAR_BAD (0x1 << 20)
+#define BRIDGE_CTRL_SS_PAR_EN (0x1 << 19)
+#define BRIDGE_CTRL_SSRAM_SIZE(n) ((n) << 17)
+#define BRIDGE_CTRL_SSRAM_SIZE_MASK (BRIDGE_CTRL_SSRAM_SIZE(0x3))
+#define BRIDGE_CTRL_SSRAM_512K (BRIDGE_CTRL_SSRAM_SIZE(0x3))
+#define BRIDGE_CTRL_SSRAM_128K (BRIDGE_CTRL_SSRAM_SIZE(0x2))
+#define BRIDGE_CTRL_SSRAM_64K (BRIDGE_CTRL_SSRAM_SIZE(0x1))
+#define BRIDGE_CTRL_SSRAM_1K (BRIDGE_CTRL_SSRAM_SIZE(0x0))
+#define BRIDGE_CTRL_F_BAD_PKT (0x1 << 16)
+#define BRIDGE_CTRL_LLP_XBAR_CRD(n) ((n) << 12)
+#define BRIDGE_CTRL_LLP_XBAR_CRD_MASK (BRIDGE_CTRL_LLP_XBAR_CRD(0xf))
+#define BRIDGE_CTRL_CLR_RLLP_CNT (0x1 << 11)
+#define BRIDGE_CTRL_CLR_TLLP_CNT (0x1 << 10)
+#define BRIDGE_CTRL_SYS_END (0x1 << 9)
+#define BRIDGE_CTRL_PCI_SPEED (0x3 << 4)
+
+#define BRIDGE_CTRL_BUS_SPEED(n) ((n) << 4)
+#define BRIDGE_CTRL_BUS_SPEED_MASK (BRIDGE_CTRL_BUS_SPEED(0x3))
+#define BRIDGE_CTRL_BUS_SPEED_33 0x00
+#define BRIDGE_CTRL_BUS_SPEED_66 0x10
+#define BRIDGE_CTRL_MAX_TRANS(n) ((n) << 4)
+#define BRIDGE_CTRL_MAX_TRANS_MASK (BRIDGE_CTRL_MAX_TRANS(0x1f))
+#define BRIDGE_CTRL_WIDGET_ID(n) ((n) << 0)
+#define BRIDGE_CTRL_WIDGET_ID_MASK (BRIDGE_CTRL_WIDGET_ID(0xf))
+
+/* Bridge Response buffer Error Upper Register bit fields definition */
+#define BRIDGE_RESP_ERRUPPR_DEVNUM_SHFT (20)
+#define BRIDGE_RESP_ERRUPPR_DEVNUM_MASK (0x7 << BRIDGE_RESP_ERRUPPR_DEVNUM_SHFT)
+#define BRIDGE_RESP_ERRUPPR_BUFNUM_SHFT (16)
+#define BRIDGE_RESP_ERRUPPR_BUFNUM_MASK (0xF << BRIDGE_RESP_ERRUPPR_BUFNUM_SHFT)
+#define BRIDGE_RESP_ERRUPPR_BUFMASK (0xFFFF)
+
+#define BRIDGE_RESP_ERRUPPR_BUFNUM(x) \
+ (((x) & BRIDGE_RESP_ERRUPPR_BUFNUM_MASK) >> \
+ BRIDGE_RESP_ERRUPPR_BUFNUM_SHFT)
+
+#define BRIDGE_RESP_ERRUPPR_DEVICE(x) \
+ (((x) & BRIDGE_RESP_ERRUPPR_DEVNUM_MASK) >> \
+ BRIDGE_RESP_ERRUPPR_DEVNUM_SHFT)
+
+/* Bridge direct mapping register bits definition */
+#define BRIDGE_DIRMAP_W_ID_SHFT 20
+#define BRIDGE_DIRMAP_W_ID (0xf << BRIDGE_DIRMAP_W_ID_SHFT)
+#define BRIDGE_DIRMAP_RMF_64 (0x1 << 18)
+#define BRIDGE_DIRMAP_ADD512 (0x1 << 17)
+#define BRIDGE_DIRMAP_OFF (0x1ffff << 0)
+#define BRIDGE_DIRMAP_OFF_ADDRSHFT (31) /* lsbit of DIRMAP_OFF is xtalk address bit 31 */
+
+/* Bridge Arbitration register bits definition */
+#define BRIDGE_ARB_REQ_WAIT_TICK(x) ((x) << 16)
+#define BRIDGE_ARB_REQ_WAIT_TICK_MASK BRIDGE_ARB_REQ_WAIT_TICK(0x3)
+#define BRIDGE_ARB_REQ_WAIT_EN(x) ((x) << 8)
+#define BRIDGE_ARB_REQ_WAIT_EN_MASK BRIDGE_ARB_REQ_WAIT_EN(0xff)
+#define BRIDGE_ARB_FREEZE_GNT (1 << 6)
+#define BRIDGE_ARB_HPRI_RING_B2 (1 << 5)
+#define BRIDGE_ARB_HPRI_RING_B1 (1 << 4)
+#define BRIDGE_ARB_HPRI_RING_B0 (1 << 3)
+#define BRIDGE_ARB_LPRI_RING_B2 (1 << 2)
+#define BRIDGE_ARB_LPRI_RING_B1 (1 << 1)
+#define BRIDGE_ARB_LPRI_RING_B0 (1 << 0)
+
+/* Bridge Bus time-out register bits definition */
+#define BRIDGE_BUS_PCI_RETRY_HLD(x) ((x) << 16)
+#define BRIDGE_BUS_PCI_RETRY_HLD_MASK BRIDGE_BUS_PCI_RETRY_HLD(0x1f)
+#define BRIDGE_BUS_GIO_TIMEOUT (1 << 12)
+#define BRIDGE_BUS_PCI_RETRY_CNT(x) ((x) << 0)
+#define BRIDGE_BUS_PCI_RETRY_MASK BRIDGE_BUS_PCI_RETRY_CNT(0x3ff)
+
+/* Bridge interrupt status register bits definition */
+#define PIC_ISR_PCIX_SPLIT_MSG_PE (0x1ull << 45)
+#define PIC_ISR_PCIX_SPLIT_EMSG (0x1ull << 44)
+#define PIC_ISR_PCIX_SPLIT_TO (0x1ull << 43)
+#define PIC_ISR_PCIX_UNEX_COMP (0x1ull << 42)
+#define PIC_ISR_INT_RAM_PERR (0x1ull << 41)
+#define PIC_ISR_PCIX_ARB_ERR (0x1ull << 40)
+#define PIC_ISR_PCIX_REQ_TOUT (0x1ull << 39)
+#define PIC_ISR_PCIX_TABORT (0x1ull << 38)
+#define PIC_ISR_PCIX_PERR (0x1ull << 37)
+#define PIC_ISR_PCIX_SERR (0x1ull << 36)
+#define PIC_ISR_PCIX_MRETRY (0x1ull << 35)
+#define PIC_ISR_PCIX_MTOUT (0x1ull << 34)
+#define PIC_ISR_PCIX_DA_PARITY (0x1ull << 33)
+#define PIC_ISR_PCIX_AD_PARITY (0x1ull << 32)
+#define BRIDGE_ISR_MULTI_ERR (0x1u << 31) /* bridge only */
+#define BRIDGE_ISR_PMU_ESIZE_FAULT (0x1 << 30) /* bridge only */
+#define BRIDGE_ISR_PAGE_FAULT (0x1 << 30) /* xbridge only */
+#define BRIDGE_ISR_UNEXP_RESP (0x1 << 29)
+#define BRIDGE_ISR_BAD_XRESP_PKT (0x1 << 28)
+#define BRIDGE_ISR_BAD_XREQ_PKT (0x1 << 27)
+#define BRIDGE_ISR_RESP_XTLK_ERR (0x1 << 26)
+#define BRIDGE_ISR_REQ_XTLK_ERR (0x1 << 25)
+#define BRIDGE_ISR_INVLD_ADDR (0x1 << 24)
+#define BRIDGE_ISR_UNSUPPORTED_XOP (0x1 << 23)
+#define BRIDGE_ISR_XREQ_FIFO_OFLOW (0x1 << 22)
+#define BRIDGE_ISR_LLP_REC_SNERR (0x1 << 21)
+#define BRIDGE_ISR_LLP_REC_CBERR (0x1 << 20)
+#define BRIDGE_ISR_LLP_RCTY (0x1 << 19)
+#define BRIDGE_ISR_LLP_TX_RETRY (0x1 << 18)
+#define BRIDGE_ISR_LLP_TCTY (0x1 << 17)
+#define BRIDGE_ISR_SSRAM_PERR (0x1 << 16)
+#define BRIDGE_ISR_PCI_ABORT (0x1 << 15)
+#define BRIDGE_ISR_PCI_PARITY (0x1 << 14)
+#define BRIDGE_ISR_PCI_SERR (0x1 << 13)
+#define BRIDGE_ISR_PCI_PERR (0x1 << 12)
+#define BRIDGE_ISR_PCI_MST_TIMEOUT (0x1 << 11)
+#define BRIDGE_ISR_GIO_MST_TIMEOUT BRIDGE_ISR_PCI_MST_TIMEOUT
+#define BRIDGE_ISR_PCI_RETRY_CNT (0x1 << 10)
+#define BRIDGE_ISR_XREAD_REQ_TIMEOUT (0x1 << 9)
+#define BRIDGE_ISR_GIO_B_ENBL_ERR (0x1 << 8)
+#define BRIDGE_ISR_INT_MSK (0xff << 0)
+#define BRIDGE_ISR_INT(x) (0x1 << (x))
+
+#define BRIDGE_ISR_LINK_ERROR \
+ (BRIDGE_ISR_LLP_REC_SNERR|BRIDGE_ISR_LLP_REC_CBERR| \
+ BRIDGE_ISR_LLP_RCTY|BRIDGE_ISR_LLP_TX_RETRY| \
+ BRIDGE_ISR_LLP_TCTY)
+
+#define BRIDGE_ISR_PCIBUS_PIOERR \
+ (BRIDGE_ISR_PCI_MST_TIMEOUT|BRIDGE_ISR_PCI_ABORT| \
+ PIC_ISR_PCIX_MTOUT|PIC_ISR_PCIX_TABORT)
+
+#define BRIDGE_ISR_PCIBUS_ERROR \
+ (BRIDGE_ISR_PCIBUS_PIOERR|BRIDGE_ISR_PCI_PERR| \
+ BRIDGE_ISR_PCI_SERR|BRIDGE_ISR_PCI_RETRY_CNT| \
+ BRIDGE_ISR_PCI_PARITY|PIC_ISR_PCIX_PERR| \
+ PIC_ISR_PCIX_SERR|PIC_ISR_PCIX_MRETRY| \
+ PIC_ISR_PCIX_AD_PARITY|PIC_ISR_PCIX_DA_PARITY| \
+ PIC_ISR_PCIX_REQ_TOUT|PIC_ISR_PCIX_UNEX_COMP| \
+ PIC_ISR_PCIX_SPLIT_TO|PIC_ISR_PCIX_SPLIT_EMSG| \
+ PIC_ISR_PCIX_SPLIT_MSG_PE)
+
+#define BRIDGE_ISR_XTALK_ERROR \
+ (BRIDGE_ISR_XREAD_REQ_TIMEOUT|BRIDGE_ISR_XREQ_FIFO_OFLOW|\
+ BRIDGE_ISR_UNSUPPORTED_XOP|BRIDGE_ISR_INVLD_ADDR| \
+ BRIDGE_ISR_REQ_XTLK_ERR|BRIDGE_ISR_RESP_XTLK_ERR| \
+ BRIDGE_ISR_BAD_XREQ_PKT|BRIDGE_ISR_BAD_XRESP_PKT| \
+ BRIDGE_ISR_UNEXP_RESP)
+
+#define BRIDGE_ISR_ERRORS \
+ (BRIDGE_ISR_LINK_ERROR|BRIDGE_ISR_PCIBUS_ERROR| \
+ BRIDGE_ISR_XTALK_ERROR|BRIDGE_ISR_SSRAM_PERR| \
+ BRIDGE_ISR_PMU_ESIZE_FAULT|PIC_ISR_INT_RAM_PERR)
+
+/*
+ * List of errors which are fatal and kill the system
+ */
+#define BRIDGE_ISR_ERROR_FATAL \
+ ((BRIDGE_ISR_XTALK_ERROR & ~BRIDGE_ISR_XREAD_REQ_TIMEOUT)|\
+ BRIDGE_ISR_PCI_SERR|BRIDGE_ISR_PCI_PARITY| \
+ PIC_ISR_PCIX_SERR|PIC_ISR_PCIX_AD_PARITY| \
+ PIC_ISR_PCIX_DA_PARITY| \
+ PIC_ISR_INT_RAM_PERR|PIC_ISR_PCIX_SPLIT_MSG_PE )
+
+#define BRIDGE_ISR_ERROR_DUMP \
+ (BRIDGE_ISR_PCIBUS_ERROR|BRIDGE_ISR_PMU_ESIZE_FAULT| \
+ BRIDGE_ISR_XTALK_ERROR|BRIDGE_ISR_SSRAM_PERR| \
+ PIC_ISR_PCIX_ARB_ERR|PIC_ISR_INT_RAM_PERR)
+
+/* Bridge interrupt enable register bits definition */
+#define PIC_IMR_PCIX_SPLIT_MSG_PE PIC_ISR_PCIX_SPLIT_MSG_PE
+#define PIC_IMR_PCIX_SPLIT_EMSG PIC_ISR_PCIX_SPLIT_EMSG
+#define PIC_IMR_PCIX_SPLIT_TO PIC_ISR_PCIX_SPLIT_TO
+#define PIC_IMR_PCIX_UNEX_COMP PIC_ISR_PCIX_UNEX_COMP
+#define PIC_IMR_INT_RAM_PERR PIC_ISR_INT_RAM_PERR
+#define PIC_IMR_PCIX_ARB_ERR PIC_ISR_PCIX_ARB_ERR
+#define PIC_IMR_PCIX_REQ_TOUT PIC_ISR_PCIX_REQ_TOUT
+#define PIC_IMR_PCIX_TABORT PIC_ISR_PCIX_TABORT
+#define PIC_IMR_PCIX_PERR PIC_ISR_PCIX_PERR
+#define PIC_IMR_PCIX_SERR PIC_ISR_PCIX_SERR
+#define PIC_IMR_PCIX_MRETRY PIC_ISR_PCIX_MRETRY
+#define PIC_IMR_PCIX_MTOUT PIC_ISR_PCIX_MTOUT
+#define PIC_IMR_PCIX_DA_PARITY PIC_ISR_PCIX_DA_PARITY
+#define PIC_IMR_PCIX_AD_PARITY PIC_ISR_PCIX_AD_PARITY
+#define BRIDGE_IMR_UNEXP_RESP BRIDGE_ISR_UNEXP_RESP
+#define BRIDGE_IMR_PMU_ESIZE_FAULT BRIDGE_ISR_PMU_ESIZE_FAULT
+#define BRIDGE_IMR_BAD_XRESP_PKT BRIDGE_ISR_BAD_XRESP_PKT
+#define BRIDGE_IMR_BAD_XREQ_PKT BRIDGE_ISR_BAD_XREQ_PKT
+#define BRIDGE_IMR_RESP_XTLK_ERR BRIDGE_ISR_RESP_XTLK_ERR
+#define BRIDGE_IMR_REQ_XTLK_ERR BRIDGE_ISR_REQ_XTLK_ERR
+#define BRIDGE_IMR_INVLD_ADDR BRIDGE_ISR_INVLD_ADDR
+#define BRIDGE_IMR_UNSUPPORTED_XOP BRIDGE_ISR_UNSUPPORTED_XOP
+#define BRIDGE_IMR_XREQ_FIFO_OFLOW BRIDGE_ISR_XREQ_FIFO_OFLOW
+#define BRIDGE_IMR_LLP_REC_SNERR BRIDGE_ISR_LLP_REC_SNERR
+#define BRIDGE_IMR_LLP_REC_CBERR BRIDGE_ISR_LLP_REC_CBERR
+#define BRIDGE_IMR_LLP_RCTY BRIDGE_ISR_LLP_RCTY
+#define BRIDGE_IMR_LLP_TX_RETRY BRIDGE_ISR_LLP_TX_RETRY
+#define BRIDGE_IMR_LLP_TCTY BRIDGE_ISR_LLP_TCTY
+#define BRIDGE_IMR_SSRAM_PERR BRIDGE_ISR_SSRAM_PERR
+#define BRIDGE_IMR_PCI_ABORT BRIDGE_ISR_PCI_ABORT
+#define BRIDGE_IMR_PCI_PARITY BRIDGE_ISR_PCI_PARITY
+#define BRIDGE_IMR_PCI_SERR BRIDGE_ISR_PCI_SERR
+#define BRIDGE_IMR_PCI_PERR BRIDGE_ISR_PCI_PERR
+#define BRIDGE_IMR_PCI_MST_TIMEOUT BRIDGE_ISR_PCI_MST_TIMEOUT
+#define BRIDGE_IMR_GIO_MST_TIMEOUT BRIDGE_ISR_GIO_MST_TIMEOUT
+#define BRIDGE_IMR_PCI_RETRY_CNT BRIDGE_ISR_PCI_RETRY_CNT
+#define BRIDGE_IMR_XREAD_REQ_TIMEOUT BRIDGE_ISR_XREAD_REQ_TIMEOUT
+#define BRIDGE_IMR_GIO_B_ENBL_ERR BRIDGE_ISR_GIO_B_ENBL_ERR
+#define BRIDGE_IMR_INT_MSK BRIDGE_ISR_INT_MSK
+#define BRIDGE_IMR_INT(x) BRIDGE_ISR_INT(x)
+
+/*
+ * Bridge interrupt reset register bits definition. Note: PIC can
+ * reset individual error interrupts; BRIDGE & XBRIDGE can only reset
+ * groups of them.
+ */
+#define PIC_IRR_PCIX_SPLIT_MSG_PE PIC_ISR_PCIX_SPLIT_MSG_PE
+#define PIC_IRR_PCIX_SPLIT_EMSG PIC_ISR_PCIX_SPLIT_EMSG
+#define PIC_IRR_PCIX_SPLIT_TO PIC_ISR_PCIX_SPLIT_TO
+#define PIC_IRR_PCIX_UNEX_COMP PIC_ISR_PCIX_UNEX_COMP
+#define PIC_IRR_INT_RAM_PERR PIC_ISR_INT_RAM_PERR
+#define PIC_IRR_PCIX_ARB_ERR PIC_ISR_PCIX_ARB_ERR
+#define PIC_IRR_PCIX_REQ_TOUT PIC_ISR_PCIX_REQ_TOUT
+#define PIC_IRR_PCIX_TABORT PIC_ISR_PCIX_TABORT
+#define PIC_IRR_PCIX_PERR PIC_ISR_PCIX_PERR
+#define PIC_IRR_PCIX_SERR PIC_ISR_PCIX_SERR
+#define PIC_IRR_PCIX_MRETRY PIC_ISR_PCIX_MRETRY
+#define PIC_IRR_PCIX_MTOUT PIC_ISR_PCIX_MTOUT
+#define PIC_IRR_PCIX_DA_PARITY PIC_ISR_PCIX_DA_PARITY
+#define PIC_IRR_PCIX_AD_PARITY PIC_ISR_PCIX_AD_PARITY
+#define PIC_IRR_PAGE_FAULT BRIDGE_ISR_PAGE_FAULT
+#define PIC_IRR_UNEXP_RESP BRIDGE_ISR_UNEXP_RESP
+#define PIC_IRR_BAD_XRESP_PKT BRIDGE_ISR_BAD_XRESP_PKT
+#define PIC_IRR_BAD_XREQ_PKT BRIDGE_ISR_BAD_XREQ_PKT
+#define PIC_IRR_RESP_XTLK_ERR BRIDGE_ISR_RESP_XTLK_ERR
+#define PIC_IRR_REQ_XTLK_ERR BRIDGE_ISR_REQ_XTLK_ERR
+#define PIC_IRR_INVLD_ADDR BRIDGE_ISR_INVLD_ADDR
+#define PIC_IRR_UNSUPPORTED_XOP BRIDGE_ISR_UNSUPPORTED_XOP
+#define PIC_IRR_XREQ_FIFO_OFLOW BRIDGE_ISR_XREQ_FIFO_OFLOW
+#define PIC_IRR_LLP_REC_SNERR BRIDGE_ISR_LLP_REC_SNERR
+#define PIC_IRR_LLP_REC_CBERR BRIDGE_ISR_LLP_REC_CBERR
+#define PIC_IRR_LLP_RCTY BRIDGE_ISR_LLP_RCTY
+#define PIC_IRR_LLP_TX_RETRY BRIDGE_ISR_LLP_TX_RETRY
+#define PIC_IRR_LLP_TCTY BRIDGE_ISR_LLP_TCTY
+#define PIC_IRR_PCI_ABORT BRIDGE_ISR_PCI_ABORT
+#define PIC_IRR_PCI_PARITY BRIDGE_ISR_PCI_PARITY
+#define PIC_IRR_PCI_SERR BRIDGE_ISR_PCI_SERR
+#define PIC_IRR_PCI_PERR BRIDGE_ISR_PCI_PERR
+#define PIC_IRR_PCI_MST_TIMEOUT BRIDGE_ISR_PCI_MST_TIMEOUT
+#define PIC_IRR_PCI_RETRY_CNT BRIDGE_ISR_PCI_RETRY_CNT
+#define PIC_IRR_XREAD_REQ_TIMEOUT BRIDGE_ISR_XREAD_REQ_TIMEOUT
+#define BRIDGE_IRR_MULTI_CLR (0x1 << 6)
+#define BRIDGE_IRR_CRP_GRP_CLR (0x1 << 5)
+#define BRIDGE_IRR_RESP_BUF_GRP_CLR (0x1 << 4)
+#define BRIDGE_IRR_REQ_DSP_GRP_CLR (0x1 << 3)
+#define BRIDGE_IRR_LLP_GRP_CLR (0x1 << 2)
+#define BRIDGE_IRR_SSRAM_GRP_CLR (0x1 << 1)
+#define BRIDGE_IRR_PCI_GRP_CLR (0x1 << 0)
+#define BRIDGE_IRR_GIO_GRP_CLR (0x1 << 0)
+#define BRIDGE_IRR_ALL_CLR 0x7f
+
+#define BRIDGE_IRR_CRP_GRP (BRIDGE_ISR_UNEXP_RESP | \
+ BRIDGE_ISR_XREQ_FIFO_OFLOW)
+#define BRIDGE_IRR_RESP_BUF_GRP (BRIDGE_ISR_BAD_XRESP_PKT | \
+ BRIDGE_ISR_RESP_XTLK_ERR | \
+ BRIDGE_ISR_XREAD_REQ_TIMEOUT)
+#define BRIDGE_IRR_REQ_DSP_GRP (BRIDGE_ISR_UNSUPPORTED_XOP | \
+ BRIDGE_ISR_BAD_XREQ_PKT | \
+ BRIDGE_ISR_REQ_XTLK_ERR | \
+ BRIDGE_ISR_INVLD_ADDR)
+#define BRIDGE_IRR_LLP_GRP (BRIDGE_ISR_LLP_REC_SNERR | \
+ BRIDGE_ISR_LLP_REC_CBERR | \
+ BRIDGE_ISR_LLP_RCTY | \
+ BRIDGE_ISR_LLP_TX_RETRY | \
+ BRIDGE_ISR_LLP_TCTY)
+#define BRIDGE_IRR_SSRAM_GRP (BRIDGE_ISR_SSRAM_PERR | \
+ BRIDGE_ISR_PMU_ESIZE_FAULT)
+#define BRIDGE_IRR_PCI_GRP (BRIDGE_ISR_PCI_ABORT | \
+ BRIDGE_ISR_PCI_PARITY | \
+ BRIDGE_ISR_PCI_SERR | \
+ BRIDGE_ISR_PCI_PERR | \
+ BRIDGE_ISR_PCI_MST_TIMEOUT | \
+ BRIDGE_ISR_PCI_RETRY_CNT)
+
+#define BRIDGE_IRR_GIO_GRP (BRIDGE_ISR_GIO_B_ENBL_ERR | \
+ BRIDGE_ISR_GIO_MST_TIMEOUT)
+
+#define PIC_IRR_RAM_GRP PIC_ISR_INT_RAM_PERR
+
+#define PIC_PCIX_GRP_CLR (PIC_IRR_PCIX_AD_PARITY | \
+ PIC_IRR_PCIX_DA_PARITY | \
+ PIC_IRR_PCIX_MTOUT | \
+ PIC_IRR_PCIX_MRETRY | \
+ PIC_IRR_PCIX_SERR | \
+ PIC_IRR_PCIX_PERR | \
+ PIC_IRR_PCIX_TABORT | \
+ PIC_ISR_PCIX_REQ_TOUT | \
+ PIC_ISR_PCIX_UNEX_COMP | \
+ PIC_ISR_PCIX_SPLIT_TO | \
+ PIC_ISR_PCIX_SPLIT_EMSG | \
+ PIC_ISR_PCIX_SPLIT_MSG_PE)
+
+/* Bridge INT_DEV register bits definition */
+#define BRIDGE_INT_DEV_SHFT(n) ((n)*3)
+#define BRIDGE_INT_DEV_MASK(n) (0x7 << BRIDGE_INT_DEV_SHFT(n))
+#define BRIDGE_INT_DEV_SET(_dev, _line) (_dev << BRIDGE_INT_DEV_SHFT(_line))
+
+/* Bridge interrupt(x) register bits definition */
+#define BRIDGE_INT_ADDR_HOST 0x0003FF00
+#define BRIDGE_INT_ADDR_FLD 0x000000FF
+
+/* PIC interrupt(x) register bits definition */
+#define PIC_INT_ADDR_FLD 0x00FF000000000000
+#define PIC_INT_ADDR_HOST 0x0000FFFFFFFFFFFF
+
+#define BRIDGE_TMO_PCI_RETRY_HLD_MASK 0x1f0000
+#define BRIDGE_TMO_GIO_TIMEOUT_MASK 0x001000
+#define BRIDGE_TMO_PCI_RETRY_CNT_MASK 0x0003ff
+
+#define BRIDGE_TMO_PCI_RETRY_CNT_MAX 0x3ff
+
+/* Bridge device(x) register bits definition */
+#define BRIDGE_DEV_ERR_LOCK_EN (1ull << 28)
+#define BRIDGE_DEV_PAGE_CHK_DIS (1ull << 27)
+#define BRIDGE_DEV_FORCE_PCI_PAR (1ull << 26)
+#define BRIDGE_DEV_VIRTUAL_EN (1ull << 25)
+#define BRIDGE_DEV_PMU_WRGA_EN (1ull << 24)
+#define BRIDGE_DEV_DIR_WRGA_EN (1ull << 23)
+#define BRIDGE_DEV_DEV_SIZE (1ull << 22)
+#define BRIDGE_DEV_RT (1ull << 21)
+#define BRIDGE_DEV_SWAP_PMU (1ull << 20)
+#define BRIDGE_DEV_SWAP_DIR (1ull << 19)
+#define BRIDGE_DEV_PREF (1ull << 18)
+#define BRIDGE_DEV_PRECISE (1ull << 17)
+#define BRIDGE_DEV_COH (1ull << 16)
+#define BRIDGE_DEV_BARRIER (1ull << 15)
+#define BRIDGE_DEV_GBR (1ull << 14)
+#define BRIDGE_DEV_DEV_SWAP (1ull << 13)
+#define BRIDGE_DEV_DEV_IO_MEM (1ull << 12)
+#define BRIDGE_DEV_OFF_MASK 0x00000fff
+#define BRIDGE_DEV_OFF_ADDR_SHFT 20
+
+#define XBRIDGE_DEV_PMU_BITS BRIDGE_DEV_PMU_WRGA_EN
+#define BRIDGE_DEV_PMU_BITS (BRIDGE_DEV_PMU_WRGA_EN | \
+ BRIDGE_DEV_SWAP_PMU)
+#define BRIDGE_DEV_D32_BITS (BRIDGE_DEV_DIR_WRGA_EN | \
+ BRIDGE_DEV_SWAP_DIR | \
+ BRIDGE_DEV_PREF | \
+ BRIDGE_DEV_PRECISE | \
+ BRIDGE_DEV_COH | \
+ BRIDGE_DEV_BARRIER)
+#define XBRIDGE_DEV_D64_BITS (BRIDGE_DEV_DIR_WRGA_EN | \
+ BRIDGE_DEV_COH | \
+ BRIDGE_DEV_BARRIER)
+#define BRIDGE_DEV_D64_BITS (BRIDGE_DEV_DIR_WRGA_EN | \
+ BRIDGE_DEV_SWAP_DIR | \
+ BRIDGE_DEV_COH | \
+ BRIDGE_DEV_BARRIER)
+
+/* Bridge Error Upper register bit field definition */
+#define BRIDGE_ERRUPPR_DEVMASTER (0x1 << 20) /* Device was master */
+#define BRIDGE_ERRUPPR_PCIVDEV (0x1 << 19) /* Virtual Req value */
+#define BRIDGE_ERRUPPR_DEVNUM_SHFT (16)
+#define BRIDGE_ERRUPPR_DEVNUM_MASK (0x7 << BRIDGE_ERRUPPR_DEVNUM_SHFT)
+#define BRIDGE_ERRUPPR_DEVICE(err) (((err) >> BRIDGE_ERRUPPR_DEVNUM_SHFT) & 0x7)
+#define BRIDGE_ERRUPPR_ADDRMASK (0xFFFF)
+
+/* Bridge interrupt mode register bits definition */
+#define BRIDGE_INTMODE_CLR_PKT_EN(x) (0x1 << (x))
+
+/* this should be written to the xbow's link_control(x) register */
+#define BRIDGE_CREDIT 3
+
+/* RRB assignment register */
+#define BRIDGE_RRB_EN 0x8 /* after shifting down */
+#define BRIDGE_RRB_DEV 0x7 /* after shifting down */
+#define BRIDGE_RRB_VDEV 0x4 /* after shifting down, 2 virtual channels */
+#define BRIDGE_RRB_PDEV 0x3 /* after shifting down, 8 devices */
+
+#define PIC_RRB_EN 0x8 /* after shifting down */
+#define PIC_RRB_DEV 0x7 /* after shifting down */
+#define PIC_RRB_VDEV 0x6 /* after shifting down, 4 virtual channels */
+#define PIC_RRB_PDEV 0x1 /* after shifting down, 4 devices */
+
+/* RRB status register */
+#define BRIDGE_RRB_VALID(r) (0x00010000<<(r))
+#define BRIDGE_RRB_INUSE(r) (0x00000001<<(r))
+
+/* RRB clear register */
+#define BRIDGE_RRB_CLEAR(r) (0x00000001<<(r))
+
+/* Defines for the virtual channels so we don't hardcode 0-3 within code */
+#define VCHAN0 0 /* virtual channel 0 (i.e. the "normal" channel) */
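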
+#define VCHAN1 1 /* virtual channel 1 */
+#define VCHAN2 2 /* virtual channel 2 - PIC only */
+#define VCHAN3 3 /* virtual channel 3 - PIC only */
+
+/* PIC: PCI-X Read Buffer Attribute Register (RBAR) */
+#define NUM_RBAR 16 /* number of RBAR registers */
+
+/* xbox system controller declarations */
+#define XBOX_BRIDGE_WID 8
+#define FLASH_PROM1_BASE 0xE00000 /* To read the xbox sysctlr status */
+#define XBOX_RPS_EXISTS (1 << 6) /* RPS bit in status register */
+#define XBOX_RPS_FAIL (1 << 4) /* RPS status bit in register */
+
+/* ========================================================================
+ */
+/*
+ * Macros for Xtalk to Bridge bus (PCI/GIO) PIO
+ * refer to section 4.2.1 of Bridge Spec for xtalk to PCI/GIO PIO mappings
+ */
+/* XTALK addresses that map into Bridge Bus addr space */
+#define BRIDGE_PIO32_XTALK_ALIAS_BASE 0x000040000000L
+#define BRIDGE_PIO32_XTALK_ALIAS_LIMIT 0x00007FFFFFFFL
+#define BRIDGE_PIO64_XTALK_ALIAS_BASE 0x000080000000L
+#define BRIDGE_PIO64_XTALK_ALIAS_LIMIT 0x0000BFFFFFFFL
+#define BRIDGE_PCIIO_XTALK_ALIAS_BASE 0x000100000000L
+#define BRIDGE_PCIIO_XTALK_ALIAS_LIMIT 0x0001FFFFFFFFL
+
+/* Ranges of PCI bus space that can be accessed via PIO from xtalk */
+#define BRIDGE_MIN_PIO_ADDR_MEM 0x00000000 /* 1G PCI memory space */
+#define BRIDGE_MAX_PIO_ADDR_MEM 0x3fffffff
+#define BRIDGE_MIN_PIO_ADDR_IO 0x00000000 /* 4G PCI IO space */
+#define BRIDGE_MAX_PIO_ADDR_IO 0xffffffff
+
+/* XTALK addresses that map into PCI addresses */
+#define BRIDGE_PCI_MEM32_BASE BRIDGE_PIO32_XTALK_ALIAS_BASE
+#define BRIDGE_PCI_MEM32_LIMIT BRIDGE_PIO32_XTALK_ALIAS_LIMIT
+#define BRIDGE_PCI_MEM64_BASE BRIDGE_PIO64_XTALK_ALIAS_BASE
+#define BRIDGE_PCI_MEM64_LIMIT BRIDGE_PIO64_XTALK_ALIAS_LIMIT
+#define BRIDGE_PCI_IO_BASE BRIDGE_PCIIO_XTALK_ALIAS_BASE
+#define BRIDGE_PCI_IO_LIMIT BRIDGE_PCIIO_XTALK_ALIAS_LIMIT
+
+/*
+ * Macros for Xtalk to Bridge bus (PCI) PIO
+ * refer to section 5.2.1 Figure 4 of the "PCI Interface Chip (PIC) Volume II
+ * Programmer's Reference" (Revision 0.8 as of this writing).
+ *
+ * These are PIC bridge specific. A separate set of macros was defined
+ * because PIC deviates from Bridge/Xbridge by not supporting a big-window
+ * alias for PCI I/O space, and also redefines XTALK addresses
+ * 0x0000C0000000L and 0x000100000000L to be PCI MEM aliases for the second
+ * bus.
+ */
+
+/* XTALK addresses that map into PIC Bridge Bus addr space */
+#define PICBRIDGE0_PIO32_XTALK_ALIAS_BASE 0x000040000000L
+#define PICBRIDGE0_PIO32_XTALK_ALIAS_LIMIT 0x00007FFFFFFFL
+#define PICBRIDGE0_PIO64_XTALK_ALIAS_BASE 0x000080000000L
+#define PICBRIDGE0_PIO64_XTALK_ALIAS_LIMIT 0x0000BFFFFFFFL
+#define PICBRIDGE1_PIO32_XTALK_ALIAS_BASE 0x0000C0000000L
+#define PICBRIDGE1_PIO32_XTALK_ALIAS_LIMIT 0x0000FFFFFFFFL
+#define PICBRIDGE1_PIO64_XTALK_ALIAS_BASE 0x000100000000L
+#define PICBRIDGE1_PIO64_XTALK_ALIAS_LIMIT 0x00013FFFFFFFL
+
+/* XTALK addresses that map into PCI addresses */
+#define PICBRIDGE0_PCI_MEM32_BASE PICBRIDGE0_PIO32_XTALK_ALIAS_BASE
+#define PICBRIDGE0_PCI_MEM32_LIMIT PICBRIDGE0_PIO32_XTALK_ALIAS_LIMIT
+#define PICBRIDGE0_PCI_MEM64_BASE PICBRIDGE0_PIO64_XTALK_ALIAS_BASE
+#define PICBRIDGE0_PCI_MEM64_LIMIT PICBRIDGE0_PIO64_XTALK_ALIAS_LIMIT
+#define PICBRIDGE1_PCI_MEM32_BASE PICBRIDGE1_PIO32_XTALK_ALIAS_BASE
+#define PICBRIDGE1_PCI_MEM32_LIMIT PICBRIDGE1_PIO32_XTALK_ALIAS_LIMIT
+#define PICBRIDGE1_PCI_MEM64_BASE PICBRIDGE1_PIO64_XTALK_ALIAS_BASE
+#define PICBRIDGE1_PCI_MEM64_LIMIT PICBRIDGE1_PIO64_XTALK_ALIAS_LIMIT
+
+/*
+ * Macros for Bridge bus (PCI/GIO) to Xtalk DMA
+ */
+/* Bridge Bus DMA addresses */
+#define BRIDGE_LOCAL_BASE 0
+#define BRIDGE_DMA_MAPPED_BASE 0x40000000
+#define BRIDGE_DMA_MAPPED_SIZE 0x40000000 /* 1G Bytes */
+#define BRIDGE_DMA_DIRECT_BASE 0x80000000
+#define BRIDGE_DMA_DIRECT_SIZE 0x80000000 /* 2G Bytes */
+
+#define PCI32_LOCAL_BASE BRIDGE_LOCAL_BASE
+
+/* PCI addresses of regions decoded by Bridge for DMA */
+#define PCI32_MAPPED_BASE BRIDGE_DMA_MAPPED_BASE
+#define PCI32_DIRECT_BASE BRIDGE_DMA_DIRECT_BASE
+
+#ifndef __ASSEMBLY__
+
+#define IS_PCI32_LOCAL(x) ((uint64_t)(x) < PCI32_MAPPED_BASE)
+#define IS_PCI32_MAPPED(x) ((uint64_t)(x) < PCI32_DIRECT_BASE && \
+ (uint64_t)(x) >= PCI32_MAPPED_BASE)
+#define IS_PCI32_DIRECT(x) ((uint64_t)(x) >= PCI32_DIRECT_BASE)
+#define IS_PCI64(x) ((uint64_t)(x) >= PCI64_BASE)
+#endif /* __ASSEMBLY__ */
+
+/*
+ * The GIO address space.
+ */
+/* Xtalk to GIO PIO */
+#define BRIDGE_GIO_MEM32_BASE BRIDGE_PIO32_XTALK_ALIAS_BASE
+#define BRIDGE_GIO_MEM32_LIMIT BRIDGE_PIO32_XTALK_ALIAS_LIMIT
+
+#define GIO_LOCAL_BASE BRIDGE_LOCAL_BASE
+
+/* GIO addresses of regions decoded by Bridge for DMA */
+#define GIO_MAPPED_BASE BRIDGE_DMA_MAPPED_BASE
+#define GIO_DIRECT_BASE BRIDGE_DMA_DIRECT_BASE
+
+#ifndef __ASSEMBLY__
+
+#define IS_GIO_LOCAL(x) ((uint64_t)(x) < GIO_MAPPED_BASE)
+#define IS_GIO_MAPPED(x) ((uint64_t)(x) < GIO_DIRECT_BASE && \
+ (uint64_t)(x) >= GIO_MAPPED_BASE)
+#define IS_GIO_DIRECT(x) ((uint64_t)(x) >= GIO_DIRECT_BASE)
+#endif /* __ASSEMBLY__ */
+
+/* PCI to xtalk mapping */
+
+/* given a DIR_OFF value and a pci/gio 32 bits direct address, determine
+ * which xtalk address is accessed
+ */
+#define BRIDGE_DIRECT_32_SEG_SIZE BRIDGE_DMA_DIRECT_SIZE
+#define BRIDGE_DIRECT_32_TO_XTALK(dir_off,adr) \
+ ((dir_off) * BRIDGE_DIRECT_32_SEG_SIZE + \
+ ((adr) & (BRIDGE_DIRECT_32_SEG_SIZE - 1)) + PHYS_RAMBASE)
+
+/* 64-bit address attribute masks */
+#define PCI64_ATTR_TARG_MASK 0xf000000000000000
+#define PCI64_ATTR_TARG_SHFT 60
+#define PCI64_ATTR_PREF (1ull << 59)
+#define PCI64_ATTR_PREC (1ull << 58)
+#define PCI64_ATTR_VIRTUAL (1ull << 57)
+#define PCI64_ATTR_BAR (1ull << 56)
+#define PCI64_ATTR_SWAP (1ull << 55)
+#define PCI64_ATTR_RMF_MASK 0x00ff000000000000
+#define PCI64_ATTR_RMF_SHFT 48
+
+#ifndef __ASSEMBLY__
+/* Address translation entry for mapped pci32 accesses */
+typedef union ate_u {
+ uint64_t ent;
+ struct xb_ate_s { /* xbridge */
+ uint64_t :16;
+ uint64_t addr:36;
+ uint64_t targ:4;
+ uint64_t reserved:2;
+ uint64_t swap:1;
+ uint64_t barrier:1;
+ uint64_t prefetch:1;
+ uint64_t precise:1;
+ uint64_t coherent:1;
+ uint64_t valid:1;
+ } xb_field;
+ struct ate_s { /* bridge */
+ uint64_t rmf:16;
+ uint64_t addr:36;
+ uint64_t targ:4;
+ uint64_t reserved:3;
+ uint64_t barrier:1;
+ uint64_t prefetch:1;
+ uint64_t precise:1;
+ uint64_t coherent:1;
+ uint64_t valid:1;
+ } field;
+} ate_t;
+#endif /* __ASSEMBLY__ */
+
+#define ATE_V (1 << 0)
+#define ATE_CO (1 << 1)
+#define ATE_PREC (1 << 2)
+#define ATE_PREF (1 << 3)
+#define ATE_BAR (1 << 4)
+#define ATE_SWAP (1 << 5)
+
+#define ATE_PFNSHIFT 12
+#define ATE_TIDSHIFT 8
+#define ATE_RMFSHIFT 48
+
+#define mkate(xaddr, xid, attr) (((xaddr) & 0x0000fffffffff000ULL) | \
+ ((xid)<<ATE_TIDSHIFT) | \
+ (attr))
+
+/*
+ * For xbridge, bit 29 of the PCI address is the swap bit.
+ */
+#define ATE_SWAPSHIFT 29
+#define ATE_SWAP_ON(x) ((x) |= (1 << ATE_SWAPSHIFT))
+#define ATE_SWAP_OFF(x) ((x) &= ~(1 << ATE_SWAPSHIFT))
+
+/* extern declarations */
+
+#ifndef __ASSEMBLY__
+
+/* ========================================================================
+ */
+
+#ifdef MACROFIELD_LINE
+/*
+ * This table forms a relation between the byte offset macros normally
+ * used for ASM coding and the calculated byte offsets of the fields
+ * in the C structure.
+ *
+ * See bridge_check.c and bridge_html.c for further details.
+ */
+#ifndef MACROFIELD_LINE_BITFIELD
+#define MACROFIELD_LINE_BITFIELD(m) /* ignored */
+#endif
+
+struct macrofield_s bridge_macrofield[] =
+{
+
+ MACROFIELD_LINE(BRIDGE_WID_ID, b_wid_id)
+ MACROFIELD_LINE_BITFIELD(WIDGET_REV_NUM)
+ MACROFIELD_LINE_BITFIELD(WIDGET_PART_NUM)
+ MACROFIELD_LINE_BITFIELD(WIDGET_MFG_NUM)
+ MACROFIELD_LINE(BRIDGE_WID_STAT, b_wid_stat)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_STAT_LLP_REC_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_STAT_LLP_TX_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_STAT_FLASH_SELECT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_STAT_PCI_GIO_N)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_STAT_PENDING)
+ MACROFIELD_LINE(BRIDGE_WID_ERR_UPPER, b_wid_err_upper)
+ MACROFIELD_LINE(BRIDGE_WID_ERR_LOWER, b_wid_err_lower)
+ MACROFIELD_LINE(BRIDGE_WID_CONTROL, b_wid_control)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_FLASH_WR_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_EN_CLK50)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_EN_CLK40)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_EN_CLK33)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_RST_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_IO_SWAP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_MEM_SWAP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_PAGE_SIZE)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_SS_PAR_BAD)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_SS_PAR_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_SSRAM_SIZE_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_F_BAD_PKT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_LLP_XBAR_CRD_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_CLR_RLLP_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_CLR_TLLP_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_SYS_END)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_MAX_TRANS_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_CTRL_WIDGET_ID_MASK)
+ MACROFIELD_LINE(BRIDGE_WID_REQ_TIMEOUT, b_wid_req_timeout)
+ MACROFIELD_LINE(BRIDGE_WID_INT_UPPER, b_wid_int_upper)
+ MACROFIELD_LINE_BITFIELD(WIDGET_INT_VECTOR)
+ MACROFIELD_LINE_BITFIELD(WIDGET_TARGET_ID)
+ MACROFIELD_LINE_BITFIELD(WIDGET_UPP_ADDR)
+ MACROFIELD_LINE(BRIDGE_WID_INT_LOWER, b_wid_int_lower)
+ MACROFIELD_LINE(BRIDGE_WID_ERR_CMDWORD, b_wid_err_cmdword)
+ MACROFIELD_LINE_BITFIELD(WIDGET_DIDN)
+ MACROFIELD_LINE_BITFIELD(WIDGET_SIDN)
+ MACROFIELD_LINE_BITFIELD(WIDGET_PACTYP)
+ MACROFIELD_LINE_BITFIELD(WIDGET_TNUM)
+ MACROFIELD_LINE_BITFIELD(WIDGET_COHERENT)
+ MACROFIELD_LINE_BITFIELD(WIDGET_DS)
+ MACROFIELD_LINE_BITFIELD(WIDGET_GBR)
+ MACROFIELD_LINE_BITFIELD(WIDGET_VBPM)
+ MACROFIELD_LINE_BITFIELD(WIDGET_ERROR)
+ MACROFIELD_LINE_BITFIELD(WIDGET_BARRIER)
+ MACROFIELD_LINE(BRIDGE_WID_LLP, b_wid_llp)
+ MACROFIELD_LINE_BITFIELD(WIDGET_LLP_MAXRETRY)
+ MACROFIELD_LINE_BITFIELD(WIDGET_LLP_NULLTIMEOUT)
+ MACROFIELD_LINE_BITFIELD(WIDGET_LLP_MAXBURST)
+ MACROFIELD_LINE(BRIDGE_WID_TFLUSH, b_wid_tflush)
+ MACROFIELD_LINE(BRIDGE_WID_AUX_ERR, b_wid_aux_err)
+ MACROFIELD_LINE(BRIDGE_WID_RESP_UPPER, b_wid_resp_upper)
+ MACROFIELD_LINE(BRIDGE_WID_RESP_LOWER, b_wid_resp_lower)
+ MACROFIELD_LINE(BRIDGE_WID_TST_PIN_CTRL, b_wid_tst_pin_ctrl)
+ MACROFIELD_LINE(BRIDGE_DIR_MAP, b_dir_map)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DIRMAP_W_ID)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DIRMAP_RMF_64)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DIRMAP_ADD512)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DIRMAP_OFF)
+ MACROFIELD_LINE(BRIDGE_RAM_PERR, b_ram_perr)
+ MACROFIELD_LINE(BRIDGE_ARB, b_arb)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_REQ_WAIT_TICK_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_REQ_WAIT_EN_MASK)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_FREEZE_GNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_HPRI_RING_B2)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_HPRI_RING_B1)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_HPRI_RING_B0)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_LPRI_RING_B2)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_LPRI_RING_B1)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ARB_LPRI_RING_B0)
+ MACROFIELD_LINE(BRIDGE_NIC, b_nic)
+ MACROFIELD_LINE(BRIDGE_PCI_BUS_TIMEOUT, b_pci_bus_timeout)
+ MACROFIELD_LINE(BRIDGE_PCI_CFG, b_pci_cfg)
+ MACROFIELD_LINE(BRIDGE_PCI_ERR_UPPER, b_pci_err_upper)
+ MACROFIELD_LINE(BRIDGE_PCI_ERR_LOWER, b_pci_err_lower)
+ MACROFIELD_LINE(BRIDGE_INT_STATUS, b_int_status)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_MULTI_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PMU_ESIZE_FAULT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_UNEXP_RESP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_BAD_XRESP_PKT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_BAD_XREQ_PKT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_RESP_XTLK_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_REQ_XTLK_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_INVLD_ADDR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_UNSUPPORTED_XOP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_XREQ_FIFO_OFLOW)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_LLP_REC_SNERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_LLP_REC_CBERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_LLP_RCTY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_LLP_TX_RETRY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_LLP_TCTY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_SSRAM_PERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_ABORT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_PARITY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_SERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_PERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_MST_TIMEOUT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_PCI_RETRY_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_XREAD_REQ_TIMEOUT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_GIO_B_ENBL_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_ISR_INT_MSK)
+ MACROFIELD_LINE(BRIDGE_INT_ENABLE, b_int_enable)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_UNEXP_RESP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PMU_ESIZE_FAULT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_BAD_XRESP_PKT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_BAD_XREQ_PKT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_RESP_XTLK_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_REQ_XTLK_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_INVLD_ADDR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_UNSUPPORTED_XOP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_XREQ_FIFO_OFLOW)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_LLP_REC_SNERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_LLP_REC_CBERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_LLP_RCTY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_LLP_TX_RETRY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_LLP_TCTY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_SSRAM_PERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_ABORT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_PARITY)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_SERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_PERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_MST_TIMEOUT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_PCI_RETRY_CNT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_XREAD_REQ_TIMEOUT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_GIO_B_ENBL_ERR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IMR_INT_MSK)
+ MACROFIELD_LINE(BRIDGE_INT_RST_STAT, b_int_rst_stat)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_ALL_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_MULTI_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_CRP_GRP_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_RESP_BUF_GRP_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_REQ_DSP_GRP_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_LLP_GRP_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_SSRAM_GRP_CLR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_IRR_PCI_GRP_CLR)
+ MACROFIELD_LINE(BRIDGE_INT_MODE, b_int_mode)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(7))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(6))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(5))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(4))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(3))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(2))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(1))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INTMODE_CLR_PKT_EN(0))
+ MACROFIELD_LINE(BRIDGE_INT_DEVICE, b_int_device)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(7))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(6))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(5))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(4))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(3))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(2))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(1))
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_DEV_MASK(0))
+ MACROFIELD_LINE(BRIDGE_INT_HOST_ERR, b_int_host_err)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_ADDR_HOST)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_INT_ADDR_FLD)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR0, b_int_addr[0].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(0), b_int_addr[0].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(1), b_int_addr[1].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(2), b_int_addr[2].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(3), b_int_addr[3].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(4), b_int_addr[4].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(5), b_int_addr[5].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(6), b_int_addr[6].addr)
+ MACROFIELD_LINE(BRIDGE_INT_ADDR(7), b_int_addr[7].addr)
+ MACROFIELD_LINE(BRIDGE_DEVICE0, b_device[0].reg)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_ERR_LOCK_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_PAGE_CHK_DIS)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_FORCE_PCI_PAR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_VIRTUAL_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_PMU_WRGA_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_DIR_WRGA_EN)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_DEV_SIZE)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_RT)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_SWAP_PMU)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_SWAP_DIR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_PREF)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_PRECISE)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_COH)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_BARRIER)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_GBR)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_DEV_SWAP)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_DEV_IO_MEM)
+ MACROFIELD_LINE_BITFIELD(BRIDGE_DEV_OFF_MASK)
+ MACROFIELD_LINE(BRIDGE_DEVICE(0), b_device[0].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(1), b_device[1].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(2), b_device[2].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(3), b_device[3].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(4), b_device[4].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(5), b_device[5].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(6), b_device[6].reg)
+ MACROFIELD_LINE(BRIDGE_DEVICE(7), b_device[7].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF0, b_wr_req_buf[0].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(0), b_wr_req_buf[0].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(1), b_wr_req_buf[1].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(2), b_wr_req_buf[2].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(3), b_wr_req_buf[3].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(4), b_wr_req_buf[4].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(5), b_wr_req_buf[5].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(6), b_wr_req_buf[6].reg)
+ MACROFIELD_LINE(BRIDGE_WR_REQ_BUF(7), b_wr_req_buf[7].reg)
+ MACROFIELD_LINE(BRIDGE_EVEN_RESP, b_even_resp)
+ MACROFIELD_LINE(BRIDGE_ODD_RESP, b_odd_resp)
+ MACROFIELD_LINE(BRIDGE_RESP_STATUS, b_resp_status)
+ MACROFIELD_LINE(BRIDGE_RESP_CLEAR, b_resp_clear)
+ MACROFIELD_LINE(BRIDGE_ATE_RAM, b_int_ate_ram)
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV0, b_type0_cfg_dev[0])
+
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(0), b_type0_cfg_dev[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,0), b_type0_cfg_dev[0].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,1), b_type0_cfg_dev[0].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,2), b_type0_cfg_dev[0].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,3), b_type0_cfg_dev[0].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,4), b_type0_cfg_dev[0].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,5), b_type0_cfg_dev[0].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,6), b_type0_cfg_dev[0].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(0,7), b_type0_cfg_dev[0].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(1), b_type0_cfg_dev[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,0), b_type0_cfg_dev[1].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,1), b_type0_cfg_dev[1].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,2), b_type0_cfg_dev[1].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,3), b_type0_cfg_dev[1].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,4), b_type0_cfg_dev[1].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,5), b_type0_cfg_dev[1].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,6), b_type0_cfg_dev[1].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(1,7), b_type0_cfg_dev[1].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(2), b_type0_cfg_dev[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,0), b_type0_cfg_dev[2].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,1), b_type0_cfg_dev[2].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,2), b_type0_cfg_dev[2].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,3), b_type0_cfg_dev[2].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,4), b_type0_cfg_dev[2].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,5), b_type0_cfg_dev[2].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,6), b_type0_cfg_dev[2].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(2,7), b_type0_cfg_dev[2].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(3), b_type0_cfg_dev[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,0), b_type0_cfg_dev[3].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,1), b_type0_cfg_dev[3].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,2), b_type0_cfg_dev[3].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,3), b_type0_cfg_dev[3].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,4), b_type0_cfg_dev[3].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,5), b_type0_cfg_dev[3].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,6), b_type0_cfg_dev[3].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(3,7), b_type0_cfg_dev[3].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(4), b_type0_cfg_dev[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,0), b_type0_cfg_dev[4].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,1), b_type0_cfg_dev[4].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,2), b_type0_cfg_dev[4].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,3), b_type0_cfg_dev[4].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,4), b_type0_cfg_dev[4].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,5), b_type0_cfg_dev[4].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,6), b_type0_cfg_dev[4].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(4,7), b_type0_cfg_dev[4].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(5), b_type0_cfg_dev[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,0), b_type0_cfg_dev[5].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,1), b_type0_cfg_dev[5].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,2), b_type0_cfg_dev[5].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,3), b_type0_cfg_dev[5].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,4), b_type0_cfg_dev[5].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,5), b_type0_cfg_dev[5].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,6), b_type0_cfg_dev[5].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(5,7), b_type0_cfg_dev[5].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(6), b_type0_cfg_dev[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,0), b_type0_cfg_dev[6].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,1), b_type0_cfg_dev[6].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,2), b_type0_cfg_dev[6].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,3), b_type0_cfg_dev[6].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,4), b_type0_cfg_dev[6].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,5), b_type0_cfg_dev[6].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,6), b_type0_cfg_dev[6].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(6,7), b_type0_cfg_dev[6].f[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEV(7), b_type0_cfg_dev[7])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,0), b_type0_cfg_dev[7].f[0])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,1), b_type0_cfg_dev[7].f[1])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,2), b_type0_cfg_dev[7].f[2])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,3), b_type0_cfg_dev[7].f[3])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,4), b_type0_cfg_dev[7].f[4])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,5), b_type0_cfg_dev[7].f[5])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,6), b_type0_cfg_dev[7].f[6])
+ MACROFIELD_LINE(BRIDGE_TYPE0_CFG_DEVF(7,7), b_type0_cfg_dev[7].f[7])
+
+ MACROFIELD_LINE(BRIDGE_TYPE1_CFG, b_type1_cfg)
+ MACROFIELD_LINE(BRIDGE_PCI_IACK, b_pci_iack)
+ MACROFIELD_LINE(BRIDGE_EXT_SSRAM, b_ext_ate_ram)
+ MACROFIELD_LINE(BRIDGE_DEVIO0, b_devio(0))
+ MACROFIELD_LINE(BRIDGE_DEVIO(0), b_devio(0))
+ MACROFIELD_LINE(BRIDGE_DEVIO(1), b_devio(1))
+ MACROFIELD_LINE(BRIDGE_DEVIO(2), b_devio(2))
+ MACROFIELD_LINE(BRIDGE_DEVIO(3), b_devio(3))
+ MACROFIELD_LINE(BRIDGE_DEVIO(4), b_devio(4))
+ MACROFIELD_LINE(BRIDGE_DEVIO(5), b_devio(5))
+ MACROFIELD_LINE(BRIDGE_DEVIO(6), b_devio(6))
+ MACROFIELD_LINE(BRIDGE_DEVIO(7), b_devio(7))
+ MACROFIELD_LINE(BRIDGE_EXTERNAL_FLASH, b_external_flash)
+};
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+#endif /* C or C++ */
+
+#endif /* _ASM_SN_PCI_BRIDGE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_CVLINK_H
+#define _ASM_IA64_SN_PCI_CVLINK_H
+
+#include <asm/sn/types.h>
+#include <asm/sn/sgi.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/iograph.h>
+#include <asm/param.h>
+#include <asm/sn/pio.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/sn_private.h>
+#include <asm/sn/addrs.h>
+#include <asm/sn/hcl.h>
+#include <asm/sn/hcl_util.h>
+#include <asm/sn/intr.h>
+#include <asm/sn/xtalk/xtalkaddrs.h>
+#include <asm/sn/klconfig.h>
+#include <asm/sn/io.h>
+
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pcibr_private.h>
+
+#define MAX_PCI_XWIDGET 256
+#define MAX_ATE_MAPS 1024
+
+#define SN_DEVICE_SYSDATA(dev) \
+ ((struct sn_device_sysdata *) \
+ (((struct pci_controller *) ((dev)->sysdata))->platform_data))
+
+#define IS_PCI32G(dev) ((dev)->dma_mask >= 0xffffffff)
+#define IS_PCI32L(dev) ((dev)->dma_mask < 0xffffffff)
+
+#define PCIDEV_VERTEX(pci_dev) \
+ ((SN_DEVICE_SYSDATA(pci_dev))->vhdl)
+
+struct sn_widget_sysdata {
+ vertex_hdl_t vhdl;
+};
+
+struct sn_device_sysdata {
+ vertex_hdl_t vhdl;
+ pciio_provider_t *pci_provider;
+ pciio_intr_t intr_handle;
+ struct sn_flush_device_list *dma_flush_list;
+ pciio_piomap_t pio_map[PCI_ROM_RESOURCE];
+};
+
+struct ioports_to_tlbs_s {
+ unsigned long p:1,
+ rv_1:1,
+ ma:3,
+ a:1,
+ d:1,
+ pl:2,
+ ar:3,
+ ppn:38,
+ rv_2:2,
+ ed:1,
+ ig:11;
+};
+
+#endif /* _ASM_IA64_SN_PCI_CVLINK_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCI_DEFS_H
+#define _ASM_IA64_SN_PCI_PCI_DEFS_H
+
+/* defines for the PCI bus architecture */
+
+/* Bit layout of address fields for Type-0
+ * Configuration Space cycles.
+ */
+#define PCI_TYPE0_SLOT_MASK 0xFFFFF800
+#define PCI_TYPE0_FUNC_MASK 0x00000700
+#define PCI_TYPE0_REG_MASK 0x000000FF
+
+#define PCI_TYPE0_SLOT_SHFT 11
+#define PCI_TYPE0_FUNC_SHFT 8
+#define PCI_TYPE0_REG_SHFT 0
+
+#define PCI_TYPE0_FUNC(a) (((a) & PCI_TYPE0_FUNC_MASK) >> PCI_TYPE0_FUNC_SHFT)
+#define PCI_TYPE0_REG(a) (((a) & PCI_TYPE0_REG_MASK) >> PCI_TYPE0_REG_SHFT)
+
+#define PCI_TYPE0(s,f,r) ((((1<<(s)) << PCI_TYPE0_SLOT_SHFT) & PCI_TYPE0_SLOT_MASK) |\
+ (((f) << PCI_TYPE0_FUNC_SHFT) & PCI_TYPE0_FUNC_MASK) |\
+ (((r) << PCI_TYPE0_REG_SHFT) & PCI_TYPE0_REG_MASK))
+
+/* Bit layout of address fields for Type-1
+ * Configuration Space cycles.
+ * NOTE: I'm including the byte offset within
+ * the 32-bit word as part of the register
+ * number as an extension of the layout in
+ * the PCI spec.
+ */
+#define PCI_TYPE1_BUS_MASK 0x00FF0000
+#define PCI_TYPE1_SLOT_MASK 0x0000F800
+#define PCI_TYPE1_FUNC_MASK 0x00000700
+#define PCI_TYPE1_REG_MASK 0x000000FF
+
+#define PCI_TYPE1_BUS_SHFT 16
+#define PCI_TYPE1_SLOT_SHFT 11
+#define PCI_TYPE1_FUNC_SHFT 8
+#define PCI_TYPE1_REG_SHFT 0
+
+#define PCI_TYPE1_BUS(a) (((a) & PCI_TYPE1_BUS_MASK) >> PCI_TYPE1_BUS_SHFT)
+#define PCI_TYPE1_SLOT(a) (((a) & PCI_TYPE1_SLOT_MASK) >> PCI_TYPE1_SLOT_SHFT)
+#define PCI_TYPE1_FUNC(a) (((a) & PCI_TYPE1_FUNC_MASK) >> PCI_TYPE1_FUNC_SHFT)
+#define PCI_TYPE1_REG(a) (((a) & PCI_TYPE1_REG_MASK) >> PCI_TYPE1_REG_SHFT)
+
+#define PCI_TYPE1(b,s,f,r) ((((b) << PCI_TYPE1_BUS_SHFT) & PCI_TYPE1_BUS_MASK) |\
+ (((s) << PCI_TYPE1_SLOT_SHFT) & PCI_TYPE1_SLOT_MASK) |\
+ (((f) << PCI_TYPE1_FUNC_SHFT) & PCI_TYPE1_FUNC_MASK) |\
+ (((r) << PCI_TYPE1_REG_SHFT) & PCI_TYPE1_REG_MASK))
+
+/* Byte offsets of registers in CFG space
+ */
+#define PCI_CFG_VENDOR_ID 0x00 /* Vendor ID (2 bytes) */
+#define PCI_CFG_DEVICE_ID 0x02 /* Device ID (2 bytes) */
+
+#define PCI_CFG_COMMAND 0x04 /* Command (2 bytes) */
+#define PCI_CFG_STATUS 0x06 /* Status (2 bytes) */
+
+/* NOTE: if you are using a C "switch" statement to
+ * differentiate between the Config space registers, be
+ * aware that PCI_CFG_CLASS_CODE and PCI_CFG_PROG_IF
+ * are the same offset.
+ */
+#define PCI_CFG_REV_ID 0x08 /* Revision Id (1 byte) */
+#define PCI_CFG_CLASS_CODE 0x09 /* Class Code (3 bytes) */
+#define PCI_CFG_PROG_IF 0x09 /* Prog Interface (1 byte) */
+#define PCI_CFG_SUB_CLASS 0x0A /* Sub Class (1 byte) */
+#define PCI_CFG_BASE_CLASS 0x0B /* Base Class (1 byte) */
+
+#define PCI_CFG_CACHE_LINE 0x0C /* Cache line size (1 byte) */
+#define PCI_CFG_LATENCY_TIMER 0x0D /* Latency Timer (1 byte) */
+#define PCI_CFG_HEADER_TYPE 0x0E /* Header Type (1 byte) */
+#define PCI_CFG_BIST 0x0F /* Built In Self Test */
+
+#define PCI_CFG_BASE_ADDR_0 0x10 /* Base Address (4 bytes) */
+#define PCI_CFG_BASE_ADDR_1 0x14 /* Base Address (4 bytes) */
+#define PCI_CFG_BASE_ADDR_2 0x18 /* Base Address (4 bytes) */
+#define PCI_CFG_BASE_ADDR_3 0x1C /* Base Address (4 bytes) */
+#define PCI_CFG_BASE_ADDR_4 0x20 /* Base Address (4 bytes) */
+#define PCI_CFG_BASE_ADDR_5 0x24 /* Base Address (4 bytes) */
+
+#define PCI_CFG_BASE_ADDR_OFF 0x04 /* Base Address Offset (1..5)*/
+#define PCI_CFG_BASE_ADDR(n) (PCI_CFG_BASE_ADDR_0 + (n)*PCI_CFG_BASE_ADDR_OFF)
+#define PCI_CFG_BASE_ADDRS 6 /* up to this many BASE regs */
+
+#define PCI_CFG_CARDBUS_CIS 0x28 /* Cardbus CIS Pointer (4B) */
+
+#define PCI_CFG_SUBSYS_VEND_ID 0x2C /* Subsystem Vendor ID (2B) */
+#define PCI_CFG_SUBSYS_ID 0x2E /* Subsystem ID */
+
+#define PCI_EXPANSION_ROM 0x30 /* Expansion Rom Base (4B) */
+#define PCI_CAPABILITIES_PTR 0x34 /* Capabilities Pointer */
+
+#define PCI_INTR_LINE 0x3C /* Interrupt Line (1B) */
+#define PCI_INTR_PIN 0x3D /* Interrupt Pin (1B) */
+
+#define PCI_CFG_VEND_SPECIFIC 0x40 /* first vendor specific reg */
+
+/* layout for Type 0x01 headers */
+
+#define PCI_CFG_PPB_BUS_PRI 0x18 /* immediate upstream bus # */
+#define PCI_CFG_PPB_BUS_SEC 0x19 /* immediate downstream bus # */
+#define PCI_CFG_PPB_BUS_SUB 0x1A /* last downstream bus # */
+#define PCI_CFG_PPB_SEC_LAT 0x1B /* latency timer for SEC bus */
+#define PCI_CFG_PPB_IOBASE 0x1C /* IO Base Addr bits 12..15 */
+#define PCI_CFG_PPB_IOLIM 0x1D /* IO Limit Addr bits 12..15 */
+#define PCI_CFG_PPB_SEC_STAT 0x1E /* Secondary Status */
+#define PCI_CFG_PPB_MEMBASE 0x20 /* MEM Base Addr bits 16..31 */
+#define PCI_CFG_PPB_MEMLIM 0x22 /* MEM Limit Addr bits 16..31 */
+#define PCI_CFG_PPB_MEMPFBASE 0x24 /* PfMEM Base Addr bits 16..31 */
+#define PCI_CFG_PPB_MEMPFLIM 0x26 /* PfMEM Limit Addr bits 16..31 */
+#define PCI_CFG_PPB_MEMPFBASEHI 0x28 /* PfMEM Base Addr bits 32..63 */
+#define PCI_CFG_PPB_MEMPFLIMHI 0x2C /* PfMEM Limit Addr bits 32..63 */
+#define PCI_CFG_PPB_IOBASEHI 0x30 /* IO Base Addr bits 16..31 */
+#define PCI_CFG_PPB_IOLIMHI 0x32 /* IO Limit Addr bits 16..31 */
+#define PCI_CFG_PPB_SUB_VENDOR 0x34 /* Subsystem Vendor ID */
+#define PCI_CFG_PPB_SUB_DEVICE 0x36 /* Subsystem Device ID */
+#define PCI_CFG_PPB_ROM_BASE 0x38 /* ROM base address */
+#define PCI_CFG_PPB_INT_LINE 0x3C /* Interrupt Line */
+#define PCI_CFG_PPB_INT_PIN 0x3D /* Interrupt Pin */
+#define PCI_CFG_PPB_BRIDGE_CTRL 0x3E /* Bridge Control */
+ /* XXX- these might be DEC 21152 specific */
+#define PCI_CFG_PPB_CHIP_CTRL 0x40
+#define PCI_CFG_PPB_DIAG_CTRL 0x41
+#define PCI_CFG_PPB_ARB_CTRL 0x42
+#define PCI_CFG_PPB_SERR_DISABLE 0x64
+#define PCI_CFG_PPB_CLK2_CTRL 0x68
+#define PCI_CFG_PPB_SERR_STATUS 0x6A
+
+/* Command Register layout (0x04) */
+#define PCI_CMD_IO_SPACE 0x001 /* I/O Space device */
+#define PCI_CMD_MEM_SPACE 0x002 /* Memory Space */
+#define PCI_CMD_BUS_MASTER 0x004 /* Bus Master */
+#define PCI_CMD_SPEC_CYCLES 0x008 /* Special Cycles */
+#define PCI_CMD_MEMW_INV_ENAB 0x010 /* Memory Write Inv Enable */
+#define PCI_CMD_VGA_PALETTE_SNP 0x020 /* VGA Palette Snoop */
+#define PCI_CMD_PAR_ERR_RESP 0x040 /* Parity Error Response */
+#define PCI_CMD_WAIT_CYCLE_CTL 0x080 /* Wait Cycle Control */
+#define PCI_CMD_SERR_ENABLE 0x100 /* SERR# Enable */
+#define PCI_CMD_F_BK_BK_ENABLE 0x200 /* Fast Back-to-Back Enable */
+
+/* Status Register Layout (0x06) */
+#define PCI_STAT_PAR_ERR_DET 0x8000 /* Detected Parity Error */
+#define PCI_STAT_SYS_ERR 0x4000 /* Signaled System Error */
+#define PCI_STAT_RCVD_MSTR_ABT 0x2000 /* Received Master Abort */
+#define PCI_STAT_RCVD_TGT_ABT 0x1000 /* Received Target Abort */
+#define PCI_STAT_SGNL_TGT_ABT 0x0800 /* Signaled Target Abort */
+
+#define PCI_STAT_DEVSEL_TIMING 0x0600 /* DEVSEL Timing Mask */
+#define DEVSEL_TIMING(_x) (((_x) >> 9) & 3) /* devsel tim macro */
+#define DEVSEL_FAST 0 /* Fast timing */
+#define DEVSEL_MEDIUM 1 /* Medium timing */
+#define DEVSEL_SLOW 2 /* Slow timing */
+
+#define PCI_STAT_DATA_PAR_ERR 0x0100 /* Data Parity Err Detected */
+#define PCI_STAT_F_BK_BK_CAP 0x0080 /* Fast Back-to-Back Capable */
+#define PCI_STAT_UDF_SUPP 0x0040 /* UDF Supported */
+#define PCI_STAT_66MHZ_CAP 0x0020 /* 66 MHz Capable */
+#define PCI_STAT_CAP_LIST 0x0010 /* Capabilities List */
+
+/* BIST Register Layout (0x0F) */
+#define PCI_BIST_BIST_CAP 0x80 /* BIST Capable */
+#define PCI_BIST_START_BIST 0x40 /* Start BIST */
+#define PCI_BIST_CMPLTION_MASK 0x0F /* COMPLETION MASK */
+#define PCI_BIST_CMPL_OK 0x00 /* 0 value is completion OK */
+
+/* Base Address Register 0x10 */
+#define PCI_BA_IO_CODEMASK 0x3 /* bottom 2 bits encode I/O BAR type */
+#define PCI_BA_IO_SPACE 0x1 /* I/O Space Marker */
+
+#define PCI_BA_MEM_CODEMASK 0xf /* bottom 4 bits encode MEM BAR type */
+#define PCI_BA_MEM_LOCATION 0x6 /* 2 bits for location avail */
+#define PCI_BA_MEM_32BIT 0x0 /* Anywhere in 32bit space */
+#define PCI_BA_MEM_1MEG 0x2 /* Locate below 1 Meg */
+#define PCI_BA_MEM_64BIT 0x4 /* Anywhere in 64bit space */
+#define PCI_BA_PREFETCH 0x8 /* Prefetchable, no side effects on reads, merges */
+
+#define PCI_BA_ROM_CODEMASK 0x1 /* bottom bit control expansion ROM enable */
+#define PCI_BA_ROM_ENABLE 0x1 /* enable expansion ROM */
+
+/* Bridge Control Register 0x3e */
+#define PCI_BCTRL_DTO_SERR 0x0800 /* Discard Timer timeout generates SERR on primary bus */
+#define PCI_BCTRL_DTO 0x0400 /* Discard Timer timeout status */
+#define PCI_BCTRL_DTO_SEC 0x0200 /* Secondary Discard Timer: 0 => 2^15 PCI clock cycles, 1 => 2^10 */
+#define PCI_BCTRL_DTO_PRI 0x0100 /* Primary Discard Timer: 0 => 2^15 PCI clock cycles, 1 => 2^10 */
+#define PCI_BCTRL_F_BK_BK_ENABLE 0x0080 /* Enable Fast Back-to-Back on secondary bus */
+#define PCI_BCTRL_RESET_SEC 0x0040 /* Reset Secondary bus */
+#define PCI_BCTRL_MSTR_ABT_MODE 0x0020 /* Master Abort Mode: 0 => do not report Master-Aborts */
+#define PCI_BCTRL_VGA_AF_ENABLE 0x0008 /* Enable VGA Address Forwarding */
+#define PCI_BCTRL_ISA_AF_ENABLE 0x0004 /* Enable ISA Address Forwarding */
+#define PCI_BCTRL_SERR_ENABLE 0x0002 /* Enable forwarding of SERR from secondary bus to primary bus */
+#define PCI_BCTRL_PAR_ERR_RESP 0x0001 /* Enable Parity Error Response reporting on secondary interface */
+
+/*
+ * PCI 2.2 introduces the concept of ``capability lists.'' Capability lists
+ * provide a flexible mechanism for a device or bridge to advertise one or
+ * more standardized capabilities such as the presence of a power management
+ * interface, etc. The presence of a capability list is indicated by
+ * PCI_STAT_CAP_LIST being non-zero in the PCI_CFG_STATUS register. If
+ * PCI_STAT_CAP_LIST is set, then PCI_CFG_CAP_PTR is a ``pointer'' into the
+ * device-specific portion of the configuration header where the first
+ * capability block is stored. This ``pointer'' is a single byte which
+ * contains an offset from the beginning of the configuration header. The
+ * bottom two bits of the pointer are reserved and should be masked off to
+ * determine the offset. Each capability block contains a capability ID, a
+ * ``pointer'' to the next capability (another offset where a zero terminates
+ * the list) and capability-specific data. Each capability block starts with
+ * the capability ID and the ``next capability pointer.'' All data following
+ * this are capability-dependent.
+ */
+#define PCI_CAP_ID 0x00 /* Capability ID (1B) */
+#define PCI_CAP_PTR 0x01 /* Capability ``pointer'' (1B) */
+
+/* PCI Capability IDs */
+#define PCI_CAP_PM 0x01 /* PCI Power Management */
+#define PCI_CAP_AGP 0x02 /* Accelerated Graphics Port */
+#define PCI_CAP_VPD 0x03 /* Vital Product Data (VPD) */
+#define PCI_CAP_SID 0x04 /* Slot Identification */
+#define PCI_CAP_MSI 0x05 /* Message Signaled Intr */
+#define PCI_CAP_HS 0x06 /* CompactPCI Hot Swap */
+#define PCI_CAP_PCIX 0x07 /* PCI-X */
+#define PCI_CAP_ID_HT 0x08 /* HyperTransport */
+
+
+/* PIO interface macros */
+
+#ifndef IOC3_EMULATION
+
+#define PCI_INB(x) (*((volatile char *)(x)))
+#define PCI_INH(x) (*((volatile short *)(x)))
+#define PCI_INW(x) (*((volatile int *)(x)))
+#define PCI_OUTB(x,y) (*((volatile char *)(x)) = (y))
+#define PCI_OUTH(x,y) (*((volatile short *)(x)) = (y))
+#define PCI_OUTW(x,y) (*((volatile int *)(x)) = (y))
+
+#else
+
+extern unsigned int pci_read(void * address, int type);
+extern void pci_write(void * address, int data, int type);
+
+#define BYTE 1
+#define HALF 2
+#define WORD 4
+
+#define PCI_INB(x) pci_read((void *)(x),BYTE)
+#define PCI_INH(x) pci_read((void *)(x),HALF)
+#define PCI_INW(x) pci_read((void *)(x),WORD)
+#define PCI_OUTB(x,y) pci_write((void *)(x),(y),BYTE)
+#define PCI_OUTH(x,y) pci_write((void *)(x),(y),HALF)
+#define PCI_OUTW(x,y) pci_write((void *)(x),(y),WORD)
+
+#endif /* !IOC3_EMULATION */
+
+/*
+ * Definition of address layouts for PCI Config mechanism #1
+ * XXX- These largely duplicate PCI_TYPE1 constants at the top
+ * of the file; the two groups should probably be combined.
+ */
+
+#define CFG1_ADDR_REGISTER_MASK 0x000000fc
+#define CFG1_ADDR_FUNCTION_MASK 0x00000700
+#define CFG1_ADDR_DEVICE_MASK 0x0000f800
+#define CFG1_ADDR_BUS_MASK 0x00ff0000
+
+#define CFG1_REGISTER_SHIFT 2
+#define CFG1_FUNCTION_SHIFT 8
+#define CFG1_DEVICE_SHIFT 11
+#define CFG1_BUS_SHIFT 16
+
+/*
+ * Class codes
+ */
+#define PCI_CFG_CLASS_PRE20 0x00
+#define PCI_CFG_CLASS_STORAGE 0x01
+#define PCI_CFG_CLASS_NETWORK 0x02
+#define PCI_CFG_CLASS_DISPLAY 0x03
+#define PCI_CFG_CLASS_MMEDIA 0x04
+#define PCI_CFG_CLASS_MEMORY 0x05
+#define PCI_CFG_CLASS_BRIDGE 0x06
+#define PCI_CFG_CLASS_COMM 0x07
+#define PCI_CFG_CLASS_BASE 0x08
+#define PCI_CFG_CLASS_INPUT 0x09
+#define PCI_CFG_CLASS_DOCK 0x0A
+#define PCI_CFG_CLASS_PROC 0x0B
+#define PCI_CFG_CLASS_SERIALBUS 0x0C
+#define PCI_CFG_CLASS_OTHER 0xFF
+
+/*
+ * Important Subclasses
+ */
+#define PCI_CFG_SUBCLASS_BRIDGE_HOST 0x00
+#define PCI_CFG_SUBCLASS_BRIDGE_ISA 0x01
+#define PCI_CFG_SUBCLASS_BRIDGE_EISA 0x02
+#define PCI_CFG_SUBCLASS_BRIDGE_MC 0x03
+#define PCI_CFG_SUBCLASS_BRIDGE_PCI 0x04
+#define PCI_CFG_SUBCLASS_BRIDGE_PCMCIA 0x05
+#define PCI_CFG_SUBCLASS_BRIDGE_NUBUS 0x06
+#define PCI_CFG_SUBCLASS_BRIDGE_CARDBUS 0x07
+#define PCI_CFG_SUBCLASS_BRIDGE_OTHER 0x80
+
+#ifndef __ASSEMBLY__
+
+/*
+ * PCI config space definition
+ */
+typedef volatile struct pci_cfg_s {
+ uint16_t vendor_id;
+ uint16_t dev_id;
+ uint16_t cmd;
+ uint16_t status;
+ uint8_t rev;
+ uint8_t prog_if;
+ uint8_t sub_class;
+ uint8_t class;
+ uint8_t line_size;
+ uint8_t lt;
+ uint8_t hdr_type;
+ uint8_t bist;
+ uint32_t bar[6];
+ uint32_t cardbus;
+ uint16_t subsys_vendor_id;
+ uint16_t subsys_dev_id;
+ uint32_t exp_rom;
+ uint32_t res[2];
+ uint8_t int_line;
+ uint8_t int_pin;
+ uint8_t min_gnt;
+ uint8_t max_lat;
+} pci_cfg_t;
+
+/*
+ * PCI Type 1 config space definition for PCI to PCI Bridges (PPBs)
+ */
+typedef volatile struct pci_cfg1_s {
+ uint16_t vendor_id;
+ uint16_t dev_id;
+ uint16_t cmd;
+ uint16_t status;
+ uint8_t rev;
+ uint8_t prog_if;
+ uint8_t sub_class;
+ uint8_t class;
+ uint8_t line_size;
+ uint8_t lt;
+ uint8_t hdr_type;
+ uint8_t bist;
+ uint32_t bar[2];
+ uint8_t pri_bus_num;
+ uint8_t snd_bus_num;
+ uint8_t sub_bus_num;
+ uint8_t slt;
+ uint8_t io_base;
+ uint8_t io_limit;
+ uint16_t snd_status;
+ uint16_t mem_base;
+ uint16_t mem_limit;
+ uint16_t pmem_base;
+ uint16_t pmem_limit;
+ uint32_t pmem_base_upper;
+ uint32_t pmem_limit_upper;
+ uint16_t io_base_upper;
+ uint16_t io_limit_upper;
+ uint32_t res;
+ uint32_t exp_rom;
+ uint8_t int_line;
+ uint8_t int_pin;
+ uint16_t ppb_control;
+
+} pci_cfg1_t;
+
+/*
+ * PCI-X Capability
+ */
+typedef volatile struct cap_pcix_cmd_reg_s {
+ uint16_t data_parity_enable: 1,
+ enable_relaxed_order: 1,
+ max_mem_read_cnt: 2,
+ max_split: 3,
+ reserved1: 9;
+} cap_pcix_cmd_reg_t;
+
+typedef volatile struct cap_pcix_stat_reg_s {
+ uint32_t func_num: 3,
+ dev_num: 5,
+ bus_num: 8,
+ bit64_device: 1,
+ mhz133_capable: 1,
+ split_complt_discard: 1,
+ unexpect_split_complt: 1,
+ device_complex: 1,
+ max_mem_read_cnt: 2,
+ max_out_split: 3,
+ max_cum_read: 3,
+ split_complt_err: 1,
+ reserved1: 2;
+} cap_pcix_stat_reg_t;
+
+typedef volatile struct cap_pcix_type0_s {
+ uint8_t pcix_cap_id;
+ uint8_t pcix_cap_nxt;
+ cap_pcix_cmd_reg_t pcix_type0_command;
+ cap_pcix_stat_reg_t pcix_type0_status;
+} cap_pcix_type0_t;
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_IA64_SN_PCI_PCI_DEFS_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIBR_H
+#define _ASM_IA64_SN_PCI_PCIBR_H
+
+#if defined(__KERNEL__)
+
+#include <linux/config.h>
+#include <asm/sn/dmamap.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/pio.h>
+
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/bridge.h>
+
+/* =====================================================================
+ * symbolic constants used by pcibr's xtalk bus provider
+ */
+
+#define PCIBR_PIOMAP_BUSY 0x80000000
+
+#define PCIBR_DMAMAP_BUSY 0x80000000
+#define PCIBR_DMAMAP_SSRAM 0x40000000
+
+#define PCIBR_INTR_BLOCKED 0x40000000
+#define PCIBR_INTR_BUSY 0x80000000
+
+#ifndef __ASSEMBLY__
+
+/* =====================================================================
+ * opaque types used by pcibr's xtalk bus provider
+ */
+
+typedef struct pcibr_piomap_s *pcibr_piomap_t;
+typedef struct pcibr_dmamap_s *pcibr_dmamap_t;
+typedef struct pcibr_intr_s *pcibr_intr_t;
+
+/* =====================================================================
+ * bus provider function table
+ *
+ * Normally, this table is only handed off explicitly
+ * during provider initialization, and the PCI generic
+ * layer will stash a pointer to it in the vertex; however,
+ * exporting it explicitly enables a performance hack in
+ * the generic PCI provider where if we know at compile
+ * time that the only possible PCI provider is a
+ * pcibr, we can go directly to this ops table.
+ */
+
+extern pciio_provider_t pci_pic_provider;
+
+/* =====================================================================
+ * secondary entry points: pcibr PCI bus provider
+ *
+ * These functions are normally exported explicitly by
+ * a direct call from the pcibr initialization routine
+ * into the generic crosstalk provider; they are included
+ * here to enable a more aggressive performance hack in
+ * the generic crosstalk layer, where if we know that the
+ * only possible crosstalk provider is pcibr, and we can
+ * guarantee that all entry points are properly named, and
+ * we can deal with the implicit casting properly, then
+ * we can turn many of the generic provider routines into
+ * plain branches, or even eliminate them (given sufficient
+ * smarts on the part of the compilation system).
+ */
+
+extern pcibr_piomap_t pcibr_piomap_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ iopaddr_t pci_addr,
+ size_t byte_count,
+ size_t byte_count_max,
+ unsigned flags);
+
+extern void pcibr_piomap_free(pcibr_piomap_t piomap);
+
+extern caddr_t pcibr_piomap_addr(pcibr_piomap_t piomap,
+ iopaddr_t xtalk_addr,
+ size_t byte_count);
+
+extern void pcibr_piomap_done(pcibr_piomap_t piomap);
+
+extern int pcibr_piomap_probe(pcibr_piomap_t piomap,
+ off_t offset,
+ int len,
+ void *valp);
+
+extern caddr_t pcibr_piotrans_addr(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ iopaddr_t pci_addr,
+ size_t byte_count,
+ unsigned flags);
+
+extern iopaddr_t pcibr_piospace_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ pciio_space_t space,
+ size_t byte_count,
+ size_t alignment);
+extern void pcibr_piospace_free(vertex_hdl_t dev,
+ pciio_space_t space,
+ iopaddr_t pciaddr,
+ size_t byte_count);
+
+extern pcibr_dmamap_t pcibr_dmamap_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ size_t byte_count_max,
+ unsigned flags);
+
+extern void pcibr_dmamap_free(pcibr_dmamap_t dmamap);
+
+extern iopaddr_t pcibr_dmamap_addr(pcibr_dmamap_t dmamap,
+ paddr_t paddr,
+ size_t byte_count);
+
+extern void pcibr_dmamap_done(pcibr_dmamap_t dmamap);
+
+/*
+ * pcibr_get_dmatrans_node() will return the compact node id to which
+ * all 32-bit Direct Mapping memory accesses will be directed.
+ * (This node id can be different for each PCI bus.)
+ */
+
+extern cnodeid_t pcibr_get_dmatrans_node(vertex_hdl_t pconn_vhdl);
+
+extern iopaddr_t pcibr_dmatrans_addr(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ paddr_t paddr,
+ size_t byte_count,
+ unsigned flags);
+
+extern void pcibr_dmamap_drain(pcibr_dmamap_t map);
+
+extern void pcibr_dmaaddr_drain(vertex_hdl_t vhdl,
+ paddr_t addr,
+ size_t bytes);
+
+typedef unsigned pcibr_intr_ibit_f(pciio_info_t info,
+ pciio_intr_line_t lines);
+
+extern void pcibr_intr_ibit_set(vertex_hdl_t, pcibr_intr_ibit_f *);
+
+extern pcibr_intr_t pcibr_intr_alloc(vertex_hdl_t dev,
+ device_desc_t dev_desc,
+ pciio_intr_line_t lines,
+ vertex_hdl_t owner_dev);
+
+extern void pcibr_intr_free(pcibr_intr_t intr);
+
+extern int pcibr_intr_connect(pcibr_intr_t intr, intr_func_t, intr_arg_t);
+
+extern void pcibr_intr_disconnect(pcibr_intr_t intr);
+
+extern vertex_hdl_t pcibr_intr_cpu_get(pcibr_intr_t intr);
+
+extern void pcibr_provider_startup(vertex_hdl_t pcibr);
+
+extern void pcibr_provider_shutdown(vertex_hdl_t pcibr);
+
+extern int pcibr_reset(vertex_hdl_t dev);
+
+extern pciio_endian_t pcibr_endian_set(vertex_hdl_t dev,
+ pciio_endian_t device_end,
+ pciio_endian_t desired_end);
+
+extern uint64_t pcibr_config_get(vertex_hdl_t conn,
+ unsigned reg,
+ unsigned size);
+
+extern void pcibr_config_set(vertex_hdl_t conn,
+ unsigned reg,
+ unsigned size,
+ uint64_t value);
+
+extern pciio_slot_t pcibr_error_extract(vertex_hdl_t pcibr_vhdl,
+ pciio_space_t *spacep,
+ iopaddr_t *addrp);
+
+extern int pcibr_wrb_flush(vertex_hdl_t pconn_vhdl);
+extern int pcibr_rrb_check(vertex_hdl_t pconn_vhdl,
+ int *count_vchan0,
+ int *count_vchan1,
+ int *count_reserved,
+ int *count_pool);
+
+extern int pcibr_alloc_all_rrbs(vertex_hdl_t vhdl, int even_odd,
+ int dev_1_rrbs, int virt1,
+ int dev_2_rrbs, int virt2,
+ int dev_3_rrbs, int virt3,
+ int dev_4_rrbs, int virt4);
+
+typedef void
+rrb_alloc_funct_f (vertex_hdl_t xconn_vhdl,
+ int *vendor_list);
+
+typedef rrb_alloc_funct_f *rrb_alloc_funct_t;
+
+void pcibr_set_rrb_callback(vertex_hdl_t xconn_vhdl,
+ rrb_alloc_funct_f *func);
+
+extern int pcibr_device_unregister(vertex_hdl_t);
+extern void pcibr_driver_reg_callback(vertex_hdl_t, int, int, int);
+extern void pcibr_driver_unreg_callback(vertex_hdl_t,
+ int, int, int);
+
+
+extern void * pcibr_bridge_ptr_get(vertex_hdl_t, int);
+
+/*
+ * Bridge-specific flags that can be set via pcibr_device_flags_set
+ * and cleared via pcibr_device_flags_clear. Other flags are
+ * more generic and are manipulated through PCI-generic interfaces.
+ *
+ * Note that all PCI implementation-specific flags (Bridge flags, in
+ * this case) are in bits 15-31. The lower 15 bits are reserved
+ * for PCI-generic flags.
+ *
+ * Some of these flags have been "promoted" to the
+ * generic layer, so they can be used without having
+ * to "know" that the PCI bus is hosted by a Bridge.
+ *
+ * PCIBR_NO_ATE_ROUNDUP: Request that no rounding up be done when
+ * allocating ATE's. ATE count computation will assume that the
+ * address to be mapped will start on a page boundary.
+ */
+#define PCIBR_NO_ATE_ROUNDUP 0x00008000
+#define PCIBR_WRITE_GATHER 0x00010000 /* please use PCIIO version */
+#define PCIBR_NOWRITE_GATHER 0x00020000 /* please use PCIIO version */
+#define PCIBR_PREFETCH 0x00040000 /* please use PCIIO version */
+#define PCIBR_NOPREFETCH 0x00080000 /* please use PCIIO version */
+#define PCIBR_PRECISE 0x00100000
+#define PCIBR_NOPRECISE 0x00200000
+#define PCIBR_BARRIER 0x00400000
+#define PCIBR_NOBARRIER 0x00800000
+#define PCIBR_VCHAN0 0x01000000
+#define PCIBR_VCHAN1 0x02000000
+#define PCIBR_64BIT 0x04000000
+#define PCIBR_NO64BIT 0x08000000
+#define PCIBR_SWAP 0x10000000
+#define PCIBR_NOSWAP 0x20000000
+
+#define PCIBR_EXTERNAL_ATES 0x40000000 /* uses external ATEs */
+#define PCIBR_ACTIVE 0x80000000 /* need a "done" */
+
+/* Flags that have meaning to pcibr_device_flags_{set,clear} */
+#define PCIBR_DEVICE_FLAGS ( \
+ PCIBR_WRITE_GATHER |\
+ PCIBR_NOWRITE_GATHER |\
+ PCIBR_PREFETCH |\
+ PCIBR_NOPREFETCH |\
+ PCIBR_PRECISE |\
+ PCIBR_NOPRECISE |\
+ PCIBR_BARRIER |\
+ PCIBR_NOBARRIER \
+)
+
+/* Flags that have meaning to *_dmamap_alloc, *_dmatrans_{addr,list} */
+#define PCIBR_DMA_FLAGS ( \
+ PCIBR_PREFETCH |\
+ PCIBR_NOPREFETCH |\
+ PCIBR_PRECISE |\
+ PCIBR_NOPRECISE |\
+ PCIBR_BARRIER |\
+ PCIBR_NOBARRIER |\
+ PCIBR_VCHAN0 |\
+ PCIBR_VCHAN1 \
+)
+
+typedef int pcibr_device_flags_t;
+
+#define MINIMAL_ATES_REQUIRED(addr, size) \
+ (IOPG(IOPGOFF(addr) + (size) - 1) == IOPG((size) - 1))
+
+#define MINIMAL_ATE_FLAG(addr, size) \
+ (MINIMAL_ATES_REQUIRED((u_long)addr, size) ? PCIBR_NO_ATE_ROUNDUP : 0)
+
+/*
+ * Set bits in the Bridge Device(x) register for this device.
+ * "flags" are defined above. NOTE: this includes turning
+ * things *OFF* as well as turning them *ON* ...
+ */
+extern int pcibr_device_flags_set(vertex_hdl_t dev,
+ pcibr_device_flags_t flags);
+
+/*
+ * Allocate Read Response Buffers for use by the specified device.
+ * count_vchan0 is the total number of buffers desired for the
+ * "normal" channel. count_vchan1 is the total number of buffers
+ * desired for the "virtual" channel. Returns 0 on success, or
+ * <0 on failure, which occurs when we're unable to allocate any
+ * buffers to a channel that desires at least one buffer.
+ */
+extern int pcibr_rrb_alloc(vertex_hdl_t pconn_vhdl,
+ int *count_vchan0,
+ int *count_vchan1);
+
+/*
+ * Get the starting PCIbus address out of the given DMA map.
+ * This function is supposed to be used by a close friend of PCI bridge
+ * since it relies on the fact that the starting address of the map is fixed at
+ * the allocation time in the current implementation of PCI bridge.
+ */
+extern iopaddr_t pcibr_dmamap_pciaddr_get(pcibr_dmamap_t);
+extern void pcibr_hints_fix_rrbs(vertex_hdl_t);
+extern void pcibr_hints_dualslot(vertex_hdl_t, pciio_slot_t, pciio_slot_t);
+extern void pcibr_hints_subdevs(vertex_hdl_t, pciio_slot_t, ulong);
+extern void pcibr_hints_handsoff(vertex_hdl_t);
+
+typedef unsigned pcibr_intr_bits_f(pciio_info_t, pciio_intr_line_t, int);
+extern void pcibr_hints_intr_bits(vertex_hdl_t, pcibr_intr_bits_f *);
+
+extern int pcibr_asic_rev(vertex_hdl_t);
+
+#endif /* __ASSEMBLY__ */
+#endif /* #if defined(__KERNEL__) */
+/*
+ * Some useful ioctls into the pcibr driver
+ */
+#define PCIBR 'p'
+#define _PCIBR(x) ((PCIBR << 8) | (x))
+
+/*
+ * Bit definitions for variable slot_status in struct
+ * pcibr_soft_slot_s. They are here so that the user
+ * hot-plug utility can interpret the slot's power
+ * status.
+ */
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+#define PCI_SLOT_ENABLE_CMPLT 0x01
+#define PCI_SLOT_ENABLE_INCMPLT 0x02
+#define PCI_SLOT_DISABLE_CMPLT 0x04
+#define PCI_SLOT_DISABLE_INCMPLT 0x08
+#define PCI_SLOT_POWER_ON 0x10
+#define PCI_SLOT_POWER_OFF 0x20
+#define PCI_SLOT_IS_SYS_CRITICAL 0x40
+#define PCI_SLOT_PCIBA_LOADED 0x80
+
+#define PCI_SLOT_STATUS_MASK (PCI_SLOT_ENABLE_CMPLT | \
+ PCI_SLOT_ENABLE_INCMPLT | \
+ PCI_SLOT_DISABLE_CMPLT | \
+ PCI_SLOT_DISABLE_INCMPLT)
+#define PCI_SLOT_POWER_MASK (PCI_SLOT_POWER_ON | PCI_SLOT_POWER_OFF)
+
+/*
+ * Bit definitions for variable slot_status in struct
+ * pcibr_soft_slot_s. They are here so that both
+ * the pcibr driver and the pciconfig command can
+ * reference them.
+ */
+#define SLOT_STARTUP_CMPLT 0x01
+#define SLOT_STARTUP_INCMPLT 0x02
+#define SLOT_SHUTDOWN_CMPLT 0x04
+#define SLOT_SHUTDOWN_INCMPLT 0x08
+#define SLOT_POWER_UP 0x10
+#define SLOT_POWER_DOWN 0x20
+#define SLOT_IS_SYS_CRITICAL 0x40
+
+#define SLOT_STATUS_MASK (SLOT_STARTUP_CMPLT | SLOT_STARTUP_INCMPLT | \
+ SLOT_SHUTDOWN_CMPLT | SLOT_SHUTDOWN_INCMPLT)
+#define SLOT_POWER_MASK (SLOT_POWER_UP | SLOT_POWER_DOWN)
+
+/*
+ * Bit definitions for variable resp_f_status.
+ * They are here so that both the pcibr driver
+ * and the pciconfig command can reference them.
+ */
+#define FUNC_IS_VALID 0x01
+#define FUNC_IS_SYS_CRITICAL 0x02
+
+/*
+ * L1 slot power operations for PCI hot-plug
+ */
+#define PCI_REQ_SLOT_POWER_ON 1
+#define PCI_L1_QSIZE 128 /* our L1 message buffer size */
+
+
+#define L1_QSIZE 128 /* our L1 message buffer size */
+
+enum pcibr_slot_disable_action_e {
+ PCI_REQ_SLOT_ELIGIBLE,
+ PCI_REQ_SLOT_DISABLE
+};
+
+
+struct pcibr_slot_up_resp_s {
+ int resp_sub_errno;
+ char resp_l1_msg[L1_QSIZE + 1];
+};
+
+struct pcibr_slot_down_resp_s {
+ int resp_sub_errno;
+ char resp_l1_msg[L1_QSIZE + 1];
+};
+
+struct pcibr_slot_info_resp_s {
+ short resp_bs_bridge_type;
+ short resp_bs_bridge_mode;
+ int resp_has_host;
+ char resp_host_slot;
+ vertex_hdl_t resp_slot_conn;
+ char resp_slot_conn_name[MAXDEVNAME];
+ int resp_slot_status;
+ int resp_l1_bus_num;
+ int resp_bss_ninfo;
+ char resp_bss_devio_bssd_space[16];
+ iopaddr_t resp_bss_devio_bssd_base;
+ uint64_t resp_bss_device;
+ int resp_bss_pmu_uctr;
+ int resp_bss_d32_uctr;
+ int resp_bss_d64_uctr;
+ iopaddr_t resp_bss_d64_base;
+ unsigned resp_bss_d64_flags;
+ iopaddr_t resp_bss_d32_base;
+ unsigned resp_bss_d32_flags;
+ volatile unsigned *resp_bss_cmd_pointer;
+ unsigned resp_bss_cmd_shadow;
+ int resp_bs_rrb_valid;
+ int resp_bs_rrb_valid_v1;
+ int resp_bs_rrb_valid_v2;
+ int resp_bs_rrb_valid_v3;
+ int resp_bs_rrb_res;
+ uint64_t resp_b_resp;
+ uint64_t resp_b_int_device;
+ uint64_t resp_b_int_enable;
+ uint64_t resp_b_int_host;
+ struct pcibr_slot_func_info_resp_s {
+ int resp_f_status;
+ char resp_f_slot_name[MAXDEVNAME];
+ char resp_f_bus;
+ char resp_f_slot;
+ char resp_f_func;
+ char resp_f_master_name[MAXDEVNAME];
+ void *resp_f_pops;
+ error_handler_f *resp_f_efunc;
+ error_handler_arg_t resp_f_einfo;
+ int resp_f_vendor;
+ int resp_f_device;
+
+ struct {
+ char resp_w_space[16];
+ iopaddr_t resp_w_base;
+ size_t resp_w_size;
+ } resp_f_window[6];
+
+ unsigned resp_f_rbase;
+ unsigned resp_f_rsize;
+ int resp_f_ibit[4];
+ int resp_f_att_det_error;
+
+ } resp_func[8];
+};
+
+struct pcibr_slot_req_s {
+ int req_slot;
+ union {
+ enum pcibr_slot_disable_action_e up;
+ struct pcibr_slot_down_resp_s *down;
+ struct pcibr_slot_info_resp_s *query;
+ void *any;
+ } req_respp;
+ int req_size;
+};
+
+struct pcibr_slot_enable_resp_s {
+ int resp_sub_errno;
+ char resp_l1_msg[PCI_L1_QSIZE + 1];
+};
+
+struct pcibr_slot_disable_resp_s {
+ int resp_sub_errno;
+ char resp_l1_msg[PCI_L1_QSIZE + 1];
+};
+
+struct pcibr_slot_enable_req_s {
+ pciio_slot_t req_device;
+ struct pcibr_slot_enable_resp_s req_resp;
+};
+
+struct pcibr_slot_disable_req_s {
+ pciio_slot_t req_device;
+ enum pcibr_slot_disable_action_e req_action;
+ struct pcibr_slot_disable_resp_s req_resp;
+};
+
+struct pcibr_slot_info_req_s {
+ pciio_slot_t req_device;
+ struct pcibr_slot_info_resp_s req_resp;
+};
+
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+
+
+/*
+ * PCI specific errors, interpreted by pciconfig command
+ */
+
+/* EPERM 1 */
+#define PCI_SLOT_ALREADY_UP 2 /* slot already up */
+#define PCI_SLOT_ALREADY_DOWN 3 /* slot already down */
+#define PCI_IS_SYS_CRITICAL 4 /* slot is system critical */
+/* EIO 5 */
+/* ENXIO 6 */
+#define PCI_L1_ERR 7 /* L1 console command error */
+#define PCI_NOT_A_BRIDGE 8 /* device is not a bridge */
+#define PCI_SLOT_IN_SHOEHORN 9 /* slot is in a shoehorn */
+#define PCI_NOT_A_SLOT 10 /* slot is invalid */
+#define PCI_RESP_AREA_TOO_SMALL 11 /* response area too small */
+/* ENOMEM 12 */
+#define PCI_NO_DRIVER 13 /* no driver for device */
+/* EFAULT 14 */
+#define PCI_EMPTY_33MHZ 15 /* empty 33 MHz bus */
+/* EBUSY 16 */
+#define PCI_SLOT_RESET_ERR 17 /* slot reset error */
+#define PCI_SLOT_INFO_INIT_ERR 18 /* slot info init error */
+/* ENODEV 19 */
+#define PCI_SLOT_ADDR_INIT_ERR 20 /* slot addr space init error */
+#define PCI_SLOT_DEV_INIT_ERR 21 /* slot device init error */
+/* EINVAL 22 */
+#define PCI_SLOT_GUEST_INIT_ERR 23 /* slot guest info init error */
+#define PCI_SLOT_RRB_ALLOC_ERR 24 /* slot initial rrb alloc error */
+#define PCI_SLOT_DRV_ATTACH_ERR 25 /* driver attach error */
+#define PCI_SLOT_DRV_DETACH_ERR 26 /* driver detach error */
+/* EFBIG 27 */
+#define PCI_MULTI_FUNC_ERR 28 /* multi-function card error */
+#define PCI_SLOT_RBAR_ALLOC_ERR 29 /* slot PCI-X RBAR alloc error */
+/* ERANGE 34 */
+/* EUNATCH 42 */
+
+#endif /* _ASM_IA64_SN_PCI_PCIBR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIBR_PRIVATE_H
+#define _ASM_IA64_SN_PCI_PCIBR_PRIVATE_H
+
+/*
+ * pcibr_private.h -- private definitions for pcibr
+ * only the pcibr driver (and its closest friends)
+ * should ever peek into this file.
+ */
+
+#include <linux/pci.h>
+#include <asm/sn/pci/pcibr.h>
+#include <asm/sn/pci/pciio_private.h>
+
+/*
+ * convenience typedefs
+ */
+
+typedef uint64_t pcibr_DMattr_t;
+typedef uint32_t pcibr_ATEattr_t;
+
+typedef struct pcibr_info_s *pcibr_info_t, **pcibr_info_h;
+typedef struct pcibr_soft_s *pcibr_soft_t;
+typedef struct pcibr_soft_slot_s *pcibr_soft_slot_t;
+typedef struct pcibr_hints_s *pcibr_hints_t;
+typedef struct pcibr_intr_list_s *pcibr_intr_list_t;
+typedef struct pcibr_intr_wrap_s *pcibr_intr_wrap_t;
+typedef struct pcibr_intr_cbuf_s *pcibr_intr_cbuf_t;
+
+typedef volatile unsigned int *cfg_p;
+typedef volatile bridgereg_t *reg_p;
+
+/*
+ * extern functions
+ */
+cfg_p pcibr_slot_config_addr(pcibr_soft_t, pciio_slot_t, int);
+cfg_p pcibr_func_config_addr(pcibr_soft_t, pciio_bus_t bus, pciio_slot_t, pciio_function_t, int);
+void pcibr_debug(uint32_t, vertex_hdl_t, char *, ...);
+void pcibr_func_config_set(pcibr_soft_t, pciio_slot_t, pciio_function_t, int, unsigned);
+/*
+ * pcireg_ externs
+ */
+
+extern uint64_t pcireg_id_get(pcibr_soft_t);
+extern uint64_t pcireg_bridge_id_get(void *);
+extern uint64_t pcireg_bus_err_get(pcibr_soft_t);
+extern uint64_t pcireg_control_get(pcibr_soft_t);
+extern uint64_t pcireg_bridge_control_get(void *);
+extern void pcireg_control_set(pcibr_soft_t, uint64_t);
+extern void pcireg_control_bit_clr(pcibr_soft_t, uint64_t);
+extern void pcireg_control_bit_set(pcibr_soft_t, uint64_t);
+extern void pcireg_req_timeout_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_dst_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_dst_target_id_get(pcibr_soft_t);
+extern void pcireg_intr_dst_target_id_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_dst_addr_get(pcibr_soft_t);
+extern void pcireg_intr_dst_addr_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_cmdword_err_get(pcibr_soft_t);
+extern uint64_t pcireg_llp_cfg_get(pcibr_soft_t);
+extern void pcireg_llp_cfg_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_tflush_get(pcibr_soft_t);
+extern uint64_t pcireg_linkside_err_get(pcibr_soft_t);
+extern uint64_t pcireg_resp_err_get(pcibr_soft_t);
+extern uint64_t pcireg_resp_err_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_resp_err_buf_get(pcibr_soft_t);
+extern uint64_t pcireg_resp_err_dev_get(pcibr_soft_t);
+extern uint64_t pcireg_linkside_err_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_dirmap_get(pcibr_soft_t);
+extern void pcireg_dirmap_set(pcibr_soft_t, uint64_t);
+extern void pcireg_dirmap_wid_set(pcibr_soft_t, uint64_t);
+extern void pcireg_dirmap_diroff_set(pcibr_soft_t, uint64_t);
+extern void pcireg_dirmap_add512_set(pcibr_soft_t);
+extern void pcireg_dirmap_add512_clr(pcibr_soft_t);
+extern uint64_t pcireg_map_fault_get(pcibr_soft_t);
+extern uint64_t pcireg_arbitration_get(pcibr_soft_t);
+extern void pcireg_arbitration_set(pcibr_soft_t, uint64_t);
+extern void pcireg_arbitration_bit_clr(pcibr_soft_t, uint64_t);
+extern void pcireg_arbitration_bit_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_parity_err_get(pcibr_soft_t);
+extern uint64_t pcireg_type1_cntr_get(pcibr_soft_t);
+extern void pcireg_type1_cntr_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_timeout_get(pcibr_soft_t);
+extern void pcireg_timeout_set(pcibr_soft_t, uint64_t);
+extern void pcireg_timeout_bit_clr(pcibr_soft_t, uint64_t);
+extern void pcireg_timeout_bit_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_pci_bus_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_pci_bus_addr_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_intr_status_get(pcibr_soft_t);
+extern uint64_t pcireg_intr_enable_get(pcibr_soft_t);
+extern void pcireg_intr_enable_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_enable_bit_clr(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_enable_bit_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_reset_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_reset_bit_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_mode_get(pcibr_soft_t);
+extern void pcireg_intr_mode_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_mode_bit_clr(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_device_get(pcibr_soft_t);
+extern void pcireg_intr_device_set(pcibr_soft_t, uint64_t);
+extern void pcireg_intr_device_bit_set(pcibr_soft_t, uint64_t);
+extern void pcireg_bridge_intr_device_bit_set(void *, uint64_t);
+extern void pcireg_intr_device_bit_clr(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_host_err_get(pcibr_soft_t);
+extern void pcireg_intr_host_err_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_intr_addr_get(pcibr_soft_t, int);
+extern void pcireg_intr_addr_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_bridge_intr_addr_set(void *, int, uint64_t);
+extern void * pcireg_intr_addr_addr(pcibr_soft_t, int);
+extern void pcireg_intr_addr_vect_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_bridge_intr_addr_vect_set(void *, int, uint64_t);
+extern uint64_t pcireg_intr_addr_addr_get(pcibr_soft_t, int);
+extern void pcireg_intr_addr_addr_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_bridge_intr_addr_addr_set(void *, int, uint64_t);
+extern uint64_t pcireg_intr_view_get(pcibr_soft_t);
+extern uint64_t pcireg_intr_multiple_get(pcibr_soft_t);
+extern void pcireg_force_always_set(pcibr_soft_t, int);
+extern void * pcireg_bridge_force_always_addr_get(void *, int);
+extern void * pcireg_force_always_addr_get(pcibr_soft_t, int);
+extern void pcireg_force_intr_set(pcibr_soft_t, int);
+extern uint64_t pcireg_device_get(pcibr_soft_t, int);
+extern void pcireg_device_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_device_bit_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_device_bit_clr(pcibr_soft_t, int, uint64_t);
+extern uint64_t pcireg_rrb_get(pcibr_soft_t, int);
+extern void pcireg_rrb_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_rrb_bit_set(pcibr_soft_t, int, uint64_t);
+extern void pcireg_rrb_bit_clr(pcibr_soft_t, int, uint64_t);
+extern uint64_t pcireg_rrb_status_get(pcibr_soft_t);
+extern void pcireg_rrb_clear_set(pcibr_soft_t, uint64_t);
+extern uint64_t pcireg_wrb_flush_get(pcibr_soft_t, int);
+extern uint64_t pcireg_pcix_bus_err_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_bus_err_attr_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_bus_err_data_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_req_err_attr_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_req_err_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_pio_split_addr_get(pcibr_soft_t);
+extern uint64_t pcireg_pcix_pio_split_attr_get(pcibr_soft_t);
+extern cfg_p pcireg_type1_cfg_addr(pcibr_soft_t, pciio_function_t,
+ int);
+extern cfg_p pcireg_type0_cfg_addr(pcibr_soft_t, pciio_slot_t,
+ pciio_function_t, int);
+extern bridge_ate_t pcireg_int_ate_get(pcibr_soft_t, int);
+extern void pcireg_int_ate_set(pcibr_soft_t, int, bridge_ate_t);
+extern bridge_ate_p pcireg_int_ate_addr(pcibr_soft_t, int);
+
+extern uint64_t pcireg_speed_get(pcibr_soft_t);
+extern uint64_t pcireg_mode_get(pcibr_soft_t);
+
+/*
+ * PCIBR_DEBUG() macro and debug bitmask defines
+ */
+/* low frequency debug events (i.e. initialization, resource allocation, ...) */
+#define PCIBR_DEBUG_INIT 0x00000001 /* bridge init */
+#define PCIBR_DEBUG_HINTS 0x00000002 /* bridge hints */
+#define PCIBR_DEBUG_ATTACH 0x00000004 /* bridge attach */
+#define PCIBR_DEBUG_DETACH 0x00000008 /* bridge detach */
+#define PCIBR_DEBUG_ATE 0x00000010 /* bridge ATE allocation */
+#define PCIBR_DEBUG_RRB 0x00000020 /* bridge RRB allocation */
+#define PCIBR_DEBUG_RBAR 0x00000040 /* bridge RBAR allocation */
+#define PCIBR_DEBUG_PROBE 0x00000080 /* bridge device probing */
+#define PCIBR_DEBUG_INTR_ERROR 0x00000100 /* bridge error interrupt */
+#define PCIBR_DEBUG_ERROR_HDLR 0x00000200 /* bridge error handler */
+#define PCIBR_DEBUG_CONFIG 0x00000400 /* device's config space */
+#define PCIBR_DEBUG_BAR 0x00000800 /* device's BAR allocations */
+#define PCIBR_DEBUG_INTR_ALLOC 0x00001000 /* device's intr allocation */
+#define PCIBR_DEBUG_DEV_ATTACH 0x00002000 /* device's attach */
+#define PCIBR_DEBUG_DEV_DETACH 0x00004000 /* device's detach */
+#define PCIBR_DEBUG_HOTPLUG 0x00008000 /* bridge hotplug events */
+
+/* high frequency debug events (i.e. map allocation, direct translation, ...) */
+#define PCIBR_DEBUG_DEVREG 0x04000000 /* bridge's device reg sets */
+#define PCIBR_DEBUG_PIOMAP 0x08000000 /* pcibr_piomap */
+#define PCIBR_DEBUG_PIODIR 0x10000000 /* pcibr_piotrans */
+#define PCIBR_DEBUG_DMAMAP 0x20000000 /* pcibr_dmamap */
+#define PCIBR_DEBUG_DMADIR 0x40000000 /* pcibr_dmatrans */
+#define PCIBR_DEBUG_INTR 0x80000000 /* interrupts */
+
+extern char *pcibr_debug_module;
+extern int pcibr_debug_widget;
+extern int pcibr_debug_slot;
+extern uint32_t pcibr_debug_mask;
+
+/* For low frequency events (i.e. initialization, resource allocation, ...) */
+#define PCIBR_DEBUG_ALWAYS(args) pcibr_debug args ;
+
+/* XXX: habeck: maybe make PCIBR_DEBUG() always available, even in non-
+ * debug kernels? If tracing isn't enabled (i.e. pcibr_debug_mask isn't
+ * set), then the overhead for this macro is just an extra 'if' check.
+ */
+/* For high frequency events (i.e. map allocation, direct translation, ...) */
+#if DEBUG
+#define PCIBR_DEBUG(args) PCIBR_DEBUG_ALWAYS(args)
+#else /* DEBUG */
+#define PCIBR_DEBUG(args)
+#endif /* DEBUG */
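The gating above can be sketched in isolation. Everything below (`pcibr_debug_stub`, `MY_DEBUG*`, the call counter) is an illustrative stand-in, not the kernel's symbols: with `DEBUG` undefined, `MY_DEBUG()` expands to nothing, so the call and its argument evaluation vanish, while `MY_DEBUG_ALWAYS()` always emits the call. The double parentheses at the call site are what let a whole argument list pass through a single macro parameter.

```c
#include <assert.h>

/* Illustrative stand-ins (hypothetical names), mirroring the macros above. */
static int debug_calls;

static void pcibr_debug_stub(unsigned int mask)
{
    (void)mask;
    debug_calls++;          /* count invocations instead of printing */
}

/* MY_DEBUG_ALWAYS((0x1u)) expands to pcibr_debug_stub (0x1u). */
#define MY_DEBUG_ALWAYS(args) pcibr_debug_stub args
#if DEBUG
#define MY_DEBUG(args) MY_DEBUG_ALWAYS(args)
#else
#define MY_DEBUG(args)          /* compiled out in non-debug builds */
#endif
```

Since `DEBUG` is not defined here, `MY_DEBUG((0x1u));` compiles to an empty statement and the counter never moves.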
+
+/*
+ * Bridge sets up PIO using this information.
+ */
+struct pcibr_piomap_s {
+ struct pciio_piomap_s bp_pp; /* generic stuff */
+
+#define bp_flags bp_pp.pp_flags /* PCIBR_PIOMAP flags */
+#define bp_dev bp_pp.pp_dev /* associated pci card */
+#define bp_slot bp_pp.pp_slot /* which slot the card is in */
+#define bp_space bp_pp.pp_space /* which address space */
+#define bp_pciaddr bp_pp.pp_pciaddr /* starting offset of mapping */
+#define bp_mapsz bp_pp.pp_mapsz /* size of this mapping */
+#define bp_kvaddr bp_pp.pp_kvaddr /* kernel virtual address to use */
+
+ iopaddr_t bp_xtalk_addr; /* corresponding xtalk address */
+ xtalk_piomap_t bp_xtalk_pio; /* corresponding xtalk resource */
+ pcibr_piomap_t bp_next; /* Next piomap on the list */
+ pcibr_soft_t bp_soft; /* backpointer to bridge soft data */
+ atomic_t bp_toc; /* PCI timeout counter */
+
+};
+
+/*
+ * Bridge sets up DMA using this information.
+ */
+struct pcibr_dmamap_s {
+ struct pciio_dmamap_s bd_pd;
+#define bd_flags bd_pd.pd_flags /* PCIBR_DMAMAP flags */
+#define bd_dev bd_pd.pd_dev /* associated pci card */
+#define bd_slot bd_pd.pd_slot /* which slot the card is in */
+ struct pcibr_soft_s *bd_soft; /* pcibr soft state backptr */
+ xtalk_dmamap_t bd_xtalk; /* associated xtalk resources */
+
+ size_t bd_max_size; /* maximum size of mapping */
+ xwidgetnum_t bd_xio_port; /* target XIO port */
+ iopaddr_t bd_xio_addr; /* target XIO address */
+ iopaddr_t bd_pci_addr; /* via PCI address */
+
+ int bd_ate_index; /* Address Translation Entry Index */
+ int bd_ate_count; /* number of ATE's allocated */
+ bridge_ate_p bd_ate_ptr; /* where to write first ATE */
+ bridge_ate_t bd_ate_proto; /* prototype ATE (for xioaddr=0) */
+ bridge_ate_t bd_ate_prime; /* value of 1st ATE written */
+ dma_addr_t bd_dma_addr; /* Linux dma handle */
+ struct resource resource;
+};
+
+#define IBUFSIZE 5 /* size of circular buffer (holds 4) */
+
+/*
+ * Circular buffer used for interrupt processing
+ */
+struct pcibr_intr_cbuf_s {
+ spinlock_t ib_lock; /* cbuf 'put' lock */
+ int ib_in; /* index of next free entry */
+ int ib_out; /* index of next full entry */
+ pcibr_intr_wrap_t ib_cbuf[IBUFSIZE]; /* circular buffer of wrap */
+};
+
+/*
+ * Bridge sets up interrupts using this information.
+ */
+
+struct pcibr_intr_s {
+ struct pciio_intr_s bi_pi;
+#define bi_flags bi_pi.pi_flags /* PCIBR_INTR flags */
+#define bi_dev bi_pi.pi_dev /* associated pci card */
+#define bi_lines bi_pi.pi_lines /* which PCI interrupt line(s) */
+#define bi_func bi_pi.pi_func /* handler function (when connected) */
+#define bi_arg bi_pi.pi_arg /* handler parameter (when connected) */
+#define bi_mustruncpu bi_pi.pi_mustruncpu /* Where we must run. */
+#define bi_irq bi_pi.pi_irq /* IRQ assigned. */
+#define bi_cpu bi_pi.pi_cpu /* cpu assigned. */
+ unsigned int bi_ibits; /* which Bridge interrupt bit(s) */
+ pcibr_soft_t bi_soft; /* shortcut to soft info */
+ struct pcibr_intr_cbuf_s bi_ibuf; /* circular buffer of wrap ptrs */
+ unsigned bi_last_intr; /* For Shub lb lost intr. bug */
+};
+
+
+/*
+ * PCIBR_INFO_SLOT_GET_EXT returns the external slot number that the card
+ * resides in. (i.e. the slot number silk screened on the back of the I/O
+ * brick). PCIBR_INFO_SLOT_GET_INT returns the internal slot (or device)
+ * number used by the pcibr code to represent that external slot (i.e. to
+ * set bit patterns in BRIDGE/PIC registers to represent the device, or to
+ * offset into an array, or ...).
+ *
+ * In BRIDGE and XBRIDGE the external slot and internal device numbering
+ * are the same (0->0, 1->1, 2->2, ... 7->7), BUT in the PIC the external
+ * slot number is always 1 greater than the internal device number (1->0,
+ * 2->1, 3->2, 4->3). This is because the PCI-X spec requires that the
+ * 'bridge' (i.e. PIC) be designated as 'device 0', so external slot
+ * numbering can't start at zero.
+ *
+ * PCIBR_DEVICE_TO_SLOT converts an internal device number to an external
+ * slot number. NOTE: PCIIO_SLOT_NONE stays as PCIIO_SLOT_NONE.
+ *
+ * PCIBR_SLOT_TO_DEVICE converts an external slot number to an internal
+ * device number. NOTE: PCIIO_SLOT_NONE stays as PCIIO_SLOT_NONE.
+ */
+#define PCIBR_INFO_SLOT_GET_EXT(info) (((pcibr_info_t)info)->f_slot)
+#define PCIBR_INFO_SLOT_GET_INT(info) (((pcibr_info_t)info)->f_dev)
+
+#define PCIBR_DEVICE_TO_SLOT(pcibr_soft, dev_num) \
+ (((dev_num) != PCIIO_SLOT_NONE) ? ((dev_num) + 1) : PCIIO_SLOT_NONE)
+
+#define PCIBR_SLOT_TO_DEVICE(pcibr_soft, slot) \
+ (((slot) != PCIIO_SLOT_NONE) ? ((slot) - 1) : PCIIO_SLOT_NONE)
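On PIC, then, external slot k maps to internal device k-1 and back, with `PCIIO_SLOT_NONE` passed through unchanged. A minimal, self-contained sketch of the conversion (the types are stand-ins for the real kernel definitions, and the unused `pcibr_soft` argument takes a dummy value):

```c
#include <assert.h>

/* Minimal stand-ins for the real kernel types, for illustration only. */
typedef unsigned char pciio_slot_t;
#define PCIIO_SLOT_NONE ((pciio_slot_t)255)

/* Same shape as the macros above; pcibr_soft is unused by the arithmetic. */
#define PCIBR_DEVICE_TO_SLOT(pcibr_soft, dev_num) \
    (((dev_num) != PCIIO_SLOT_NONE) ? ((dev_num) + 1) : PCIIO_SLOT_NONE)
#define PCIBR_SLOT_TO_DEVICE(pcibr_soft, slot) \
    (((slot) != PCIIO_SLOT_NONE) ? ((slot) - 1) : PCIIO_SLOT_NONE)
```

So the PIC bridge itself occupies device 0, and the first usable external slot is slot 1.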
+
+/*
+ * per-connect point pcibr data, including standard pciio data in-line:
+ */
+struct pcibr_info_s {
+ struct pciio_info_s f_c; /* MUST BE FIRST. */
+#define f_vertex f_c.c_vertex /* back pointer to vertex */
+#define f_bus f_c.c_bus /* which bus the card is in */
+#define f_slot f_c.c_slot /* which slot the card is in */
+#define f_func f_c.c_func /* which func (on multi-func cards) */
+#define f_vendor f_c.c_vendor /* PCI card "vendor" code */
+#define f_device f_c.c_device /* PCI card "device" code */
+#define f_master f_c.c_master /* PCI bus provider */
+#define f_mfast f_c.c_mfast /* cached fastinfo from c_master */
+#define f_pops f_c.c_pops /* cached provider from c_master */
+#define f_efunc f_c.c_efunc /* error handling function */
+#define f_einfo f_c.c_einfo /* first parameter for efunc */
+#define f_window f_c.c_window /* state of BASE regs */
+#define f_rwindow f_c.c_rwindow /* expansion ROM BASE regs */
+#define f_rbase f_c.c_rbase /* expansion ROM base */
+#define f_rsize f_c.c_rsize /* expansion ROM size */
+#define f_piospace f_c.c_piospace /* additional I/O spaces allocated */
+
+ /* pcibr-specific connection state */
+ int f_ibit[4]; /* Bridge bit for each INTx */
+ pcibr_piomap_t f_piomap;
+ int f_att_det_error;
+ pciio_slot_t f_dev; /* which device the card represents */
+ cap_pcix_type0_t *f_pcix_cap; /* pointer to the pcix capability */
+};
+
+/* =====================================================================
+ * Shared Interrupt Information
+ */
+
+struct pcibr_intr_list_s {
+ pcibr_intr_list_t il_next;
+ pcibr_intr_t il_intr;
+ pcibr_soft_t il_soft;
+ pciio_slot_t il_slot;
+};
+
+/* =====================================================================
+ * Interrupt Wrapper Data
+ */
+struct pcibr_intr_wrap_s {
+ pcibr_soft_t iw_soft; /* which bridge */
+ volatile bridgereg_t *iw_stat; /* ptr to b_int_status */
+ bridgereg_t iw_ibit; /* bit in b_int_status */
+ pcibr_intr_list_t iw_list; /* ghostbusters! */
+ int iw_hdlrcnt; /* running handler count */
+ int iw_shared; /* if Bridge bit is shared */
+ int iw_connected; /* if already connected */
+};
+
+#define PCIBR_ISR_ERR_START 8
+#define PCIBR_ISR_MAX_ERRS_BRIDGE 32
+#define PCIBR_ISR_MAX_ERRS_PIC 45
+#define PCIBR_ISR_MAX_ERRS PCIBR_ISR_MAX_ERRS_PIC
+
+/*
+ * PCI Base Address Register window allocation constants.
+ * To reduce the size of the internal resource mapping structures, do
+ * not use the entire PCI bus I/O address space
+ */
+#define PCIBR_BUS_IO_BASE 0x200000
+#define PCIBR_BUS_IO_MAX 0x0FFFFFFF
+#define PCIBR_BUS_IO_PAGE 0x100000
+
+#define PCIBR_BUS_SWIN_BASE PAGE_SIZE
+#define PCIBR_BUS_SWIN_MAX 0x000FFFFF
+#define PCIBR_BUS_SWIN_PAGE PAGE_SIZE
+
+#define PCIBR_BUS_MEM_BASE 0x200000
+#define PCIBR_BUS_MEM_MAX 0x3FFFFFFF
+#define PCIBR_BUS_MEM_PAGE 0x100000
+
+/* defines for pcibr_soft_s->bs_bridge_type */
+#define PCIBR_BRIDGETYPE_PIC 2
+#define IS_PIC_BUSNUM_SOFT(ps, bus) ((ps)->bs_busnum == (bus))
+
+/*
+ * Runtime checks for workarounds.
+ */
+#define PCIBR_WAR_ENABLED(pv, pcibr_soft) \
+ ((1 << XWIDGET_PART_REV_NUM_REV(pcibr_soft->bs_rev_num)) & pv)
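The check reads: `pv` is a bitmask with one bit per ASIC revision that needs the workaround, and the macro tests the bit corresponding to the bridge's current revision. A self-contained sketch, under the simplifying assumption that `rev` is already the extracted revision number (the real macro pulls it out of `bs_rev_num` first):

```c
#include <assert.h>

/* Hypothetical simplified form of PCIBR_WAR_ENABLED: 'rev' is assumed to be
 * the plain part revision number, not the raw bs_rev_num field. */
#define WAR_ENABLED(pv, rev) (((1u << (rev)) & (pv)) != 0)
```

A workaround mask of `0x6`, for instance, enables the workaround only on revisions 1 and 2.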
+
+/* defines for pcibr_soft_s->bs_bridge_mode */
+#define PCIBR_BRIDGEMODE_PCI_33 0x0
+#define PCIBR_BRIDGEMODE_PCI_66 0x2
+#define PCIBR_BRIDGEMODE_PCIX_66 0x3
+#define PCIBR_BRIDGEMODE_PCIX_100 0x5
+#define PCIBR_BRIDGEMODE_PCIX_133 0x7
+#define BUSSPEED_MASK 0x6
+#define BUSTYPE_MASK 0x1
+
+#define IS_PCI(ps) (!IS_PCIX(ps))
+#define IS_PCIX(ps) ((ps)->bs_bridge_mode & BUSTYPE_MASK)
+
+#define IS_33MHZ(ps) ((ps)->bs_bridge_mode == PCIBR_BRIDGEMODE_PCI_33)
+#define IS_66MHZ(ps) (((ps)->bs_bridge_mode == PCIBR_BRIDGEMODE_PCI_66) || \
+ ((ps)->bs_bridge_mode == PCIBR_BRIDGEMODE_PCIX_66))
+#define IS_100MHZ(ps) ((ps)->bs_bridge_mode == PCIBR_BRIDGEMODE_PCIX_100)
+#define IS_133MHZ(ps) ((ps)->bs_bridge_mode == PCIBR_BRIDGEMODE_PCIX_133)
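Putting the mode bits together: bit 0 (`BUSTYPE_MASK`) selects PCI vs PCI-X, and the remaining bits encode the clock. A sketch decoding a raw `bs_bridge_mode` value, with illustrative function names:

```c
#include <assert.h>

#define SKETCH_BUSTYPE_MASK 0x1     /* mirrors BUSTYPE_MASK above */

static int mode_is_pcix(unsigned int mode)
{
    return (mode & SKETCH_BUSTYPE_MASK) != 0;
}

/* Map the full mode value to a bus clock in MHz; -1 for unknown encodings. */
static int mode_speed_mhz(unsigned int mode)
{
    switch (mode) {
    case 0x0: return 33;    /* PCIBR_BRIDGEMODE_PCI_33   */
    case 0x2: return 66;    /* PCIBR_BRIDGEMODE_PCI_66   */
    case 0x3: return 66;    /* PCIBR_BRIDGEMODE_PCIX_66  */
    case 0x5: return 100;   /* PCIBR_BRIDGEMODE_PCIX_100 */
    case 0x7: return 133;   /* PCIBR_BRIDGEMODE_PCIX_133 */
    default:  return -1;
    }
}
```

Note that 66 MHz is the one clock reachable in both modes, which is why `IS_66MHZ()` above must test two mode values.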
+
+
+/* Number of PCI slots. NOTE: this works as long as the first slot
+ * is zero. Otherwise use ((ps->bs_max_slot+1) - ps->bs_min_slot)
+ */
+#define PCIBR_NUM_SLOTS(ps) ((ps)->bs_max_slot+1)
+
+/* =====================================================================
+ * Bridge Device State structure
+ *
+ * one instance of this structure is kept for each
+ * Bridge ASIC in the system.
+ */
+
+struct pcibr_soft_s {
+ vertex_hdl_t bs_conn; /* xtalk connection point */
+ vertex_hdl_t bs_vhdl; /* vertex owned by pcibr */
+ uint64_t bs_int_enable; /* Mask of enabled intrs */
+ void *bs_base; /* PIO pointer to Bridge chip */
+ char *bs_name; /* hw graph name */
+ char bs_asic_name[16]; /* ASIC name */
+ xwidgetnum_t bs_xid; /* Bridge's xtalk ID number */
+ vertex_hdl_t bs_master; /* xtalk master vertex */
+ xwidgetnum_t bs_mxid; /* master's xtalk ID number */
+ pciio_slot_t bs_first_slot; /* first existing slot */
+ pciio_slot_t bs_last_slot; /* last existing slot */
+ pciio_slot_t bs_last_reset; /* last slot to reset */
+ uint32_t bs_unused_slot; /* unavailable slots bitmask */
+ pciio_slot_t bs_min_slot; /* lowest possible slot */
+ pciio_slot_t bs_max_slot; /* highest possible slot */
+ pcibr_soft_t bs_peers_soft; /* PIC's other bus's soft */
+ int bs_busnum; /* PIC has two pci busses */
+
+ iopaddr_t bs_dir_xbase; /* xtalk address for 32-bit PCI direct map */
+ xwidgetnum_t bs_dir_xport; /* xtalk port for 32-bit PCI direct map */
+
+ struct resource bs_int_ate_resource;/* root resource for internal ATEs */
+ struct resource bs_ext_ate_resource;/* root resource for external ATEs */
+ void *bs_allocated_ate_res;/* resource struct allocated */
+ short bs_int_ate_size; /* number of internal ates */
+ short bs_bridge_type; /* see defines above */
+ short bs_bridge_mode; /* see defines above */
+
+ int bs_rev_num; /* revision number of Bridge */
+
+ /* bs_dma_flags are the forced dma flags used on all DMAs. Used for
+ * working around ASIC rev issues and protocol specific requirements
+ */
+ unsigned int bs_dma_flags; /* forced DMA flags */
+
+ nasid_t bs_nasid; /* nasid this bus is on */
+ moduleid_t bs_moduleid; /* io brick moduleid */
+ short bs_bricktype; /* io brick type */
+
+ /*
+ * Lock used primarily to get mutual exclusion while managing any
+ * bridge resources.
+ */
+ spinlock_t bs_lock;
+
+ vertex_hdl_t bs_noslot_conn; /* NO-SLOT connection point */
+ pcibr_info_t bs_noslot_info;
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ /* Linux PCI bus structure pointer */
+ struct pci_bus *bs_pci_bus;
+#endif
+
+ struct pcibr_soft_slot_s {
+ /* information we keep about each CFG slot */
+
+ /* some devices (ioc3 in non-slotted
+ * configurations, sometimes) make use
+ * of more than one set of REQ/GNT/INT*
+ * signals. The slot corresponding to the
+ * IDSEL that the device responds to is
+ * called the host slot; the slot
+ * numbers that the device is stealing
+ * REQ/GNT/INT bits from are known as
+ * the guest slots.
+ */
+ int has_host;
+ pciio_slot_t host_slot;
+ vertex_hdl_t slot_conn;
+
+#ifdef CONFIG_HOTPLUG_PCI_SGI
+ /* PCI Hot-Plug status word */
+ int slot_status;
+
+ /* PCI Hot-Plug core structure pointer */
+ struct hotplug_slot *bss_hotplug_slot;
+#endif /* CONFIG_HOTPLUG_PCI_SGI */
+
+ /* Potentially several connection points
+ * for this slot. bss_ninfo is how many,
+ * and bss_infos is a pointer to an
+ * array of pcibr_info_t values (which are
+ * pointers to pcibr_info structs, stored
+ * as device_info in connection points).
+ */
+ int bss_ninfo;
+ pcibr_info_h bss_infos;
+
+ /* Temporary Compatibility Macros, for
+ * stuff that has moved out of bs_slot
+ * and into the info structure. These
+ * will go away when their users have
+ * converted over to multifunction-
+ * friendly use of bss_{ninfo,infos}.
+ */
+#define bss_vendor_id bss_infos[0]->f_vendor
+#define bss_device_id bss_infos[0]->f_device
+#define bss_window bss_infos[0]->f_window
+#define bssw_space w_space
+#define bssw_base w_base
+#define bssw_size w_size
+
+ /* Where is DevIO(x) pointing? */
+ /* bssd_space is NONE if it is not assigned. */
+ struct {
+ pciio_space_t bssd_space;
+ iopaddr_t bssd_base;
+ int bssd_ref_cnt;
+ } bss_devio;
+
+ /* Shadow value for Device(x) register,
+ * so we don't have to go to the chip.
+ */
+ uint64_t bss_device;
+
+ /* Number of sets on GBR/REALTIME bit outstanding
+ * Used by Priority I/O for tracking reservations
+ */
+ int bss_pri_uctr;
+
+ /* Number of "uses" of PMU, 32-bit direct,
+ * and 64-bit direct DMA (0:none, <0: trans,
+ * >0: how many dmamaps). Device(x) bits
+ * controlling attribute of each kind of
+ * channel can't be changed by dmamap_alloc
+ * or dmatrans if the controlling counter
+ * is nonzero. dmatrans is forever.
+ */
+ int bss_pmu_uctr;
+ int bss_d32_uctr;
+ int bss_d64_uctr;
+
+ /* When the contents of the mapping configuration
+ * information are locked down by dmatrans,
+ * repeated checks of the same flags should
+ * be short-circuited for efficiency.
+ */
+ iopaddr_t bss_d64_base;
+ unsigned bss_d64_flags;
+ iopaddr_t bss_d32_base;
+ unsigned bss_d32_flags;
+ } bs_slot[8];
+
+ pcibr_intr_bits_f *bs_intr_bits;
+
+ /* PIC PCI-X Read Buffer Management :
+ * bs_pcix_num_funcs: the total number of PCI-X functions
+ * on the bus
+ * bs_pcix_split_tot: total number of outstanding split
+ * transactions requested by all functions on the bus
+ * bs_pcix_rbar_percent_allowed: the percentage of the
+ * total number of buffers a function requested that are
+ * available to it, not including the 1 RBAR guaranteed
+ * to it.
+ * bs_pcix_rbar_inuse: number of RBARs in use.
+ * bs_pcix_rbar_avail: number of RBARs available. NOTE:
+ * this value can go negative if we oversubscribe the
+ * RBARs. (i.e. We have 16 RBARs but 17 functions).
+ */
+ int bs_pcix_num_funcs;
+ int bs_pcix_split_tot;
+ int bs_pcix_rbar_percent_allowed;
+
+ int bs_pcix_rbar_inuse;
+ int bs_pcix_rbar_avail;
+
+
+ /* RRB MANAGEMENT
+ * bs_rrb_fixed: bitmap of slots whose RRB
+ * allocations we should not "automatically" change
+ * bs_rrb_avail: number of RRBs that have not
+ * been allocated or reserved for {even,odd} slots
+ * bs_rrb_res: number of RRBs currently reserved for the
+ * use of the index slot number
+ * bs_rrb_res_dflt: number of RRBs reserved at boot
+ * time for the use of the index slot number
+ * bs_rrb_valid: number of RRBs currently marked valid
+ * for the indexed slot/vchan number; array[slot][vchan]
+ * bs_rrb_valid_dflt: number of RRBs marked valid at boot
+ * time for the indexed slot/vchan number; array[slot][vchan]
+ */
+ int bs_rrb_fixed;
+ int bs_rrb_avail[2];
+ int bs_rrb_res[8];
+ int bs_rrb_res_dflt[8];
+ int bs_rrb_valid[8][4];
+ int bs_rrb_valid_dflt[8][4];
+ struct {
+ /* Each Bridge interrupt bit has a single XIO
+ * interrupt channel allocated.
+ */
+ xtalk_intr_t bsi_xtalk_intr;
+ /*
+ * A wrapper structure is associated with each
+ * Bridge interrupt bit.
+ */
+ struct pcibr_intr_wrap_s bsi_pcibr_intr_wrap;
+ /* The bus and interrupt bit, used for pcibr_setpciint().
+ * The PCI bus number is bit 3; the interrupt bits are bits 2:0.
+ */
+ uint32_t bsi_int_bit;
+
+ } bs_intr[8];
+
+ xtalk_intr_t bsi_err_intr;
+
+ /*
+ * We stash away some information in this structure on getting
+ * an error interrupt. This information is used during PIO read/
+ * write error handling.
+ *
+ * As it stands now, we do not re-enable the error interrupt
+ * until the error is resolved. Error resolution happens either at
+ * bus error time for PIO read errors (~100 microseconds), or at
+ * the scheduled timeout time for PIO write errors (~milliseconds).
+ * If this delay causes problems, we may need to move towards
+ * a different scheme.
+ *
+ * Note that there is no locking while looking at this data structure.
+ * There should not be any race between the bus error code and the
+ * error interrupt code; we will look into this if needed.
+ *
+ * NOTE: The above discussion of error interrupt processing is
+ * no longer true. Whether it should again be true, is
+ * being looked into.
+ */
+ struct br_errintr_info {
+ int bserr_toutcnt;
+ iopaddr_t bserr_addr; /* Address where error occurred */
+ uint64_t bserr_intstat; /* interrupts active at error dump */
+ } bs_errinfo;
+
+ /*
+ * PCI Bus Space allocation data structure.
+ *
+ * The resource mapping functions rmalloc() and rmfree() are used
+ * to manage the PCI bus I/O, small window, and memory address
+ * spaces.
+ *
+ * This info is used to assign PCI bus space addresses to cards
+ * via their BARs and to the callers of the pcibr_piospace_alloc()
+ * interface.
+ *
+ * Users of the pcibr_piospace_alloc() interface, such as the VME
+ * Universe chip, need PCI bus space that is not acquired by BARs.
+ * Most of these users need "large" amounts of PIO space (typically
+ * megabytes), and they generally tend to allocate once and never
+ * release.
+ */
+ struct pciio_win_map_s bs_io_win_map; /* I/O addr space */
+ struct pciio_win_map_s bs_swin_map; /* Small window addr space */
+ struct pciio_win_map_s bs_mem_win_map; /* Memory addr space */
+
+ struct resource bs_io_win_root_resource; /* I/O addr space */
+ struct resource bs_swin_root_resource; /* Small window addr space */
+ struct resource bs_mem_win_root_resource; /* Memory addr space */
+
+ int bs_bus_addr_status; /* Bus space status */
+
+#define PCIBR_BUS_ADDR_MEM_FREED 1 /* Reserved PROM mem addr freed */
+#define PCIBR_BUS_ADDR_IO_FREED 2 /* Reserved PROM I/O addr freed */
+
+ struct bs_errintr_stat_s {
+ uint32_t bs_errcount_total;
+ uint32_t bs_lasterr_timestamp;
+ uint32_t bs_lasterr_snapshot;
+ } bs_errintr_stat[PCIBR_ISR_MAX_ERRS];
+
+ /*
+ * Bridge-wide endianness control for
+ * large-window PIO mappings
+ *
+ * These fields are set to PCIIO_BYTE_SWAP
+ * or PCIIO_WORD_VALUES once the swapper
+ * has been configured, one way or the other,
+ * for the direct windows. If they are zero,
+ * nobody has a PIO mapping through that window,
+ * and the swapper can be set either way.
+ */
+ unsigned bs_pio_end_io;
+ unsigned bs_pio_end_mem;
+};
+
+#define PCIBR_ERRTIME_THRESHOLD (100)
+#define PCIBR_ERRRATE_THRESHOLD (100)
+
+/*
+ * pcibr will respond to hints dropped in its vertex
+ * using the following structure.
+ */
+struct pcibr_hints_s {
+ /* ph_host_slot is actually +1 so "0" means "no host" */
+ pciio_slot_t ph_host_slot[8]; /* REQ/GNT/INT in use by ... */
+ unsigned int ph_rrb_fixed; /* do not change RRB allocations */
+ unsigned int ph_hands_off; /* prevent further pcibr operations */
+ rrb_alloc_funct_t rrb_alloc_funct; /* do dynamic rrb allocation */
+ pcibr_intr_bits_f *ph_intr_bits; /* map PCI INT[ABCD] to Bridge Int(n) */
+};
+
+/*
+ * Number of bridge non-fatal error interrupts we can see before
+ * we decide to disable that interrupt.
+ */
+#define PCIBR_ERRINTR_DISABLE_LEVEL 10000
+
+/* =====================================================================
+ * Bridge (pcibr) state management functions
+ *
+ * pcibr_soft_get is here because we do it in a lot
+ * of places and I want to make sure they all stay
+ * in step with each other.
+ *
+ * pcibr_soft_set is here because I want it to be
+ * closely associated with pcibr_soft_get, even
+ * though it is only called in one place.
+ */
+
+#define pcibr_soft_get(v) ((pcibr_soft_t)hwgraph_fastinfo_get((v)))
+#define pcibr_soft_set(v,i) (hwgraph_fastinfo_set((v), (arbitrary_info_t)(i)))
+
+/*
+ * Additional PIO spaces per slot are
+ * recorded in this structure.
+ */
+struct pciio_piospace_s {
+ pciio_piospace_t next; /* another space for this device */
+ char free; /* 1 if free, 0 if in use */
+ pciio_space_t space; /* Which space is in use */
+ iopaddr_t start; /* Starting address of the PIO space */
+ size_t count; /* size of PIO space */
+};
+
+/*
+ * pcibr_soft structure locking macros
+ */
+inline static unsigned long
+pcibr_lock(pcibr_soft_t pcibr_soft)
+{
+ unsigned long flag;
+ spin_lock_irqsave(&pcibr_soft->bs_lock, flag);
+ return flag;
+}
+#define pcibr_unlock(pcibr_soft, flag) spin_unlock_irqrestore(&pcibr_soft->bs_lock, flag)
+
+#define PCIBR_VALID_SLOT(ps, s) ((s) < PCIBR_NUM_SLOTS(ps))
+#define PCIBR_D64_BASE_UNSET (0xFFFFFFFFFFFFFFFF)
+#define PCIBR_D32_BASE_UNSET (0xFFFFFFFF)
+#define INFO_LBL_PCIBR_ASIC_REV "_pcibr_asic_rev"
+
+#define PCIBR_SOFT_LIST 1
+#if PCIBR_SOFT_LIST
+typedef struct pcibr_list_s *pcibr_list_p;
+struct pcibr_list_s {
+ pcibr_list_p bl_next;
+ pcibr_soft_t bl_soft;
+ vertex_hdl_t bl_vhdl;
+};
+#endif /* PCIBR_SOFT_LIST */
+
+/* Devices per widget: 2 buses, 2 slots per bus, 8 functions per slot. */
+#define DEV_PER_WIDGET (2*2*8)
+
+struct sn_flush_device_list {
+ int bus;
+ int slot;
+ int pin;
+ struct bar_list {
+ unsigned long start;
+ unsigned long end;
+ } bar_list[PCI_ROM_RESOURCE];
+ unsigned long force_int_addr;
+ volatile unsigned long flush_addr;
+ spinlock_t flush_lock;
+};
+
+struct sn_flush_nasid_entry {
+ struct sn_flush_device_list **widget_p;
+ unsigned long iio_itte[8];
+};
+
+#endif /* _ASM_SN_PCI_PCIBR_PRIVATE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIIO_H
+#define _ASM_IA64_SN_PCI_PCIIO_H
+
+/*
+ * pciio.h -- platform-independent PCI interface
+ */
+
+#ifdef __KERNEL__
+#include <linux/ioport.h>
+#include <asm/sn/ioerror.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/hcl.h>
+#else
+#include <linux/ioport.h>
+#include <ioerror.h>
+#include <driver.h>
+#include <hcl.h>
+#endif
+
+#ifndef __ASSEMBLY__
+
+#ifdef __KERNEL__
+#include <asm/sn/dmamap.h>
+#else
+#include <dmamap.h>
+#endif
+
+typedef int pciio_vendor_id_t;
+
+#define PCIIO_VENDOR_ID_NONE (-1)
+
+typedef int pciio_device_id_t;
+
+#define PCIIO_DEVICE_ID_NONE (-1)
+
+typedef uint8_t pciio_bus_t; /* PCI bus number (0..255) */
+typedef uint8_t pciio_slot_t; /* PCI slot number (0..31, 255) */
+typedef uint8_t pciio_function_t; /* PCI func number (0..7, 255) */
+
+#define PCIIO_SLOTS ((pciio_slot_t)32)
+#define PCIIO_FUNCS ((pciio_function_t)8)
+
+#define PCIIO_SLOT_NONE ((pciio_slot_t)255)
+#define PCIIO_FUNC_NONE ((pciio_function_t)255)
+
+typedef int pciio_intr_line_t; /* PCI interrupt line(s) */
+
+#define PCIIO_INTR_LINE(n) (0x1 << (n))
+#define PCIIO_INTR_LINE_A (0x1)
+#define PCIIO_INTR_LINE_B (0x2)
+#define PCIIO_INTR_LINE_C (0x4)
+#define PCIIO_INTR_LINE_D (0x8)
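The line macros are one-hot: `PCIIO_INTR_LINE(n)` produces the mask for INTA..INTD (n = 0..3), so multiple lines can be OR-ed into one `pciio_intr_line_t`. A quick self-contained sketch, with the definitions reproduced so it stands alone:

```c
#include <assert.h>

/* Same definitions as above, reproduced for the sketch. */
#define PCIIO_INTR_LINE(n) (0x1 << (n))
#define PCIIO_INTR_LINE_A  (0x1)
#define PCIIO_INTR_LINE_B  (0x2)
#define PCIIO_INTR_LINE_C  (0x4)
#define PCIIO_INTR_LINE_D  (0x8)
```

A device wired to both INTA and INTC, for example, carries the mask `0x5`.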
+
+typedef int pciio_space_t; /* PCI address space designation */
+
+#define PCIIO_SPACE_NONE (0)
+#define PCIIO_SPACE_ROM (1)
+#define PCIIO_SPACE_IO (2)
+/* PCIIO_SPACE_ (3) */
+#define PCIIO_SPACE_MEM (4)
+#define PCIIO_SPACE_MEM32 (5)
+#define PCIIO_SPACE_MEM64 (6)
+#define PCIIO_SPACE_CFG (7)
+#define PCIIO_SPACE_WIN0 (8)
+#define PCIIO_SPACE_WIN(n) (PCIIO_SPACE_WIN0+(n)) /* 8..13 */
+/* PCIIO_SPACE_ (14) */
+#define PCIIO_SPACE_BAD (15)
+
+#if 1 /* does anyone really use these? */
+#define PCIIO_SPACE_USER0 (20)
+#define PCIIO_SPACE_USER(n) (PCIIO_SPACE_USER0+(n)) /* 20 .. ? */
+#endif
+
+/*
+ * PCI_NOWHERE is the error value returned in
+ * place of a PCI address when there is no
+ * corresponding address.
+ */
+#define PCI_NOWHERE (0)
+
+/*
+ * Acceptable flag bits for pciio service calls
+ *
+ * PCIIO_FIXED: require that mappings be established
+ * using fixed sharable resources; address
+ * translation results will be permanently
+ * available. (PIOMAP_FIXED and DMAMAP_FIXED are
+ * the same numeric value and are acceptable).
+ * PCIIO_NOSLEEP: if any part of the operation would
+ * sleep waiting for resources, return an error
+ * instead. (PIOMAP_NOSLEEP and DMAMAP_NOSLEEP are
+ * the same numeric value and are acceptable).
+ *
+ * PCIIO_DMA_CMD: configure this stream as a
+ * generic "command" stream. Generally this
+ * means turn off prefetchers and write
+ * gatherers, and whatever else might be
+ * necessary to make command ring DMAs
+ * work as expected.
+ * PCIIO_DMA_DATA: configure this stream as a
+ * generic "data" stream. Generally, this
+ * means turning on prefetchers and write
+ * gatherers, and anything else that might
+ * increase the DMA throughput (short of
+ * using "high priority" or "real time"
+ * resources that may lower overall system
+ * performance).
+ * PCIIO_DMA_A64: this device is capable of
+ * using 64-bit DMA addresses. Unless this
+ * flag is specified, it is assumed that
+ * the DMA address must be in the low 4G
+ * of PCI space.
+ * PCIIO_PREFETCH: if there are prefetchers
+ * available, they can be turned on.
+ * PCIIO_NOPREFETCH: any prefetchers along
+ * the dma path should be turned off.
+ * PCIIO_WRITE_GATHER: if there are write gatherers
+ * available, they can be turned on.
+ * PCIIO_NOWRITE_GATHER: any write gatherers along
+ * the dma path should be turned off.
+ *
+ * PCIIO_BYTE_STREAM: the DMA stream represents a group
+ * of ordered bytes. Arrange all byte swapping
+ * hardware so that the bytes land in the correct
+ * order. This is a common setting for data
+ * channels, but is NOT implied by PCIIO_DMA_DATA.
+ * PCIIO_WORD_VALUES: the DMA stream is used to
+ * communicate quantities stored in multiple bytes,
+ * and the device doing the DMA is little-endian;
+ * arrange any swapping hardware so that
+ * 32-bit-wide values are maintained. This is a
+ * common setting for command rings that contain
+ * DMA addresses and counts, but is NOT implied by
+ * PCIIO_DMA_CMD. CPU Accesses to 16-bit fields
+ * must have their address xor-ed with 2, and
+ * accesses to individual bytes must have their
+ * addresses xor-ed with 3 relative to what the
+ * device expects.
+ *
+ * NOTE: any "provider specific" flags that
+ * conflict with the generic flags will
+ * override the generic flags, locally
+ * at that provider.
+ *
+ * Also, note that PCI-generic flags (PCIIO_) are
+ * in bits 0-14. The upper bits, 15-31, are reserved
+ * for PCI implementation-specific flags.
+ */
+
+#define PCIIO_FIXED DMAMAP_FIXED
+#define PCIIO_NOSLEEP DMAMAP_NOSLEEP
+
+#define PCIIO_DMA_CMD 0x0010
+#define PCIIO_DMA_DATA 0x0020
+#define PCIIO_DMA_A64 0x0040
+
+#define PCIIO_WRITE_GATHER 0x0100
+#define PCIIO_NOWRITE_GATHER 0x0200
+#define PCIIO_PREFETCH 0x0400
+#define PCIIO_NOPREFETCH 0x0800
+
+/* Requesting an endianness setting that the
+ * underlying hardware cannot support
+ * WILL result in a failure to allocate
+ * dmamaps or complete a dmatrans.
+ */
+#define PCIIO_BYTE_STREAM 0x1000 /* set BYTE SWAP for "byte stream" */
+#define PCIIO_WORD_VALUES 0x2000 /* set BYTE SWAP for "word values" */
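The sub-word xor rule described above for PCIIO_WORD_VALUES mode can be sketched as follows (the helper names are illustrative only, not part of this interface):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative helpers (not part of this header): when a DMA stream is
 * configured for PCIIO_WORD_VALUES, CPU addresses for sub-word accesses
 * must be adjusted as the comment above describes. */
static inline uintptr_t word_values_addr16(uintptr_t addr)
{
	return addr ^ 2;	/* 16-bit fields: xor the address with 2 */
}

static inline uintptr_t word_values_addr8(uintptr_t addr)
{
	return addr ^ 3;	/* single bytes: xor the address with 3 */
}
```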
+
+/*
+ * Interface to deal with PCI endianness.
+ * The driver calls pciio_endian_set once, supplying the actual endianness of
+ * the device and the desired endianness. On SGI systems, only use LITTLE if
+ * dealing with a driver that does software swizzling. Most of the time,
+ * it's preferable to request BIG. The return value indicates the endianness
+ * that is actually achieved. On systems that support hardware swizzling,
+ * the achieved endianness will be the desired endianness. On systems without
+ * swizzle hardware, the achieved endianness will be the device's endianness.
+ */
+typedef enum pciio_endian_e {
+ PCIDMA_ENDIAN_BIG,
+ PCIDMA_ENDIAN_LITTLE
+} pciio_endian_t;
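The contract described above can be modeled with a small mock (`endian_set_sketch` is illustrative only; the real entry point is pciio_endian_set, declared below): with swizzle hardware the desired endianness is achieved, without it the device's own endianness is what the driver gets back.

```c
#include <assert.h>

typedef enum pciio_endian_e {
	PCIDMA_ENDIAN_BIG,
	PCIDMA_ENDIAN_LITTLE
} pciio_endian_t;

/* Mock of the documented behavior, for illustration only: the return
 * value is the endianness actually achieved. */
static pciio_endian_t
endian_set_sketch(int has_swizzle_hw,
		  pciio_endian_t device_end,
		  pciio_endian_t desired_end)
{
	return has_swizzle_hw ? desired_end : device_end;
}
```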
+
+/*
+ * Generic PCI bus information
+ */
+typedef enum pciio_asic_type_e {
+ PCIIO_ASIC_TYPE_UNKNOWN,
+ PCIIO_ASIC_TYPE_MACE,
+ PCIIO_ASIC_TYPE_BRIDGE,
+ PCIIO_ASIC_TYPE_XBRIDGE,
+ PCIIO_ASIC_TYPE_PIC,
+} pciio_asic_type_t;
+
+typedef enum pciio_bus_type_e {
+ PCIIO_BUS_TYPE_UNKNOWN,
+ PCIIO_BUS_TYPE_PCI,
+ PCIIO_BUS_TYPE_PCIX
+} pciio_bus_type_t;
+
+typedef enum pciio_bus_speed_e {
+ PCIIO_BUS_SPEED_UNKNOWN,
+ PCIIO_BUS_SPEED_33,
+ PCIIO_BUS_SPEED_66,
+ PCIIO_BUS_SPEED_100,
+ PCIIO_BUS_SPEED_133
+} pciio_bus_speed_t;
+
+/*
+ * Interface to set PCI arbitration priority for devices that require
+ * realtime characteristics. pciio_priority_set is used to switch a
+ * device between the PCI high-priority arbitration ring and the low
+ * priority arbitration ring.
+ *
+ * (Note: this is strictly for the PCI arbitration priority. It has
+ * no direct relationship to GBR.)
+ */
+typedef enum pciio_priority_e {
+ PCI_PRIO_LOW,
+ PCI_PRIO_HIGH
+} pciio_priority_t;
+
+/*
+ * handles of various sorts
+ */
+typedef struct pciio_piomap_s *pciio_piomap_t;
+typedef struct pciio_dmamap_s *pciio_dmamap_t;
+typedef struct pciio_intr_s *pciio_intr_t;
+typedef struct pciio_info_s *pciio_info_t;
+typedef struct pciio_piospace_s *pciio_piospace_t;
+typedef struct pciio_win_info_s *pciio_win_info_t;
+typedef struct pciio_win_map_s *pciio_win_map_t;
+typedef struct pciio_win_alloc_s *pciio_win_alloc_t;
+typedef struct pciio_bus_map_s *pciio_bus_map_t;
+typedef struct pciio_businfo_s *pciio_businfo_t;
+
+
+/* PIO MANAGEMENT */
+
+/*
+ * A NOTE ON PCI PIO ADDRESSES
+ *
+ * PCI supports three different address spaces: CFG
+ * space, MEM space and I/O space. Further, each
+ * card always accepts CFG accesses at an address
+ * based on which slot it is attached to, but can
+ * decode up to six address ranges.
+ *
+ * Assignment of the base address registers for all
+ * PCI devices is handled centrally; most commonly,
+ * device drivers will want to talk to offsets
+ * within one or another of the address ranges. In
+ * order to do this, which of these "address
+ * spaces" the PIO is directed into must be encoded
+ * in the flag word.
+ *
+ * We reserve the right to defer allocation of PCI
+ * address space for a device window until the
+ * driver makes a piomap_alloc or piotrans_addr
+ * request.
+ *
+ * If a device driver mucks with its device's base
+ * registers through a PIO mapping to CFG space,
+ * results of further PIO through the corresponding
+ * window are UNDEFINED.
+ *
+ * Windows are named by the index in the base
+ * address register set for the device of the
+ * desired register; IN THE CASE OF 64 BIT base
+ * registers, the index should be to the word of
+ * the register that contains the mapping type
+ * bits; since the PCI CFG space is natively
+ * organized little-endian fashion, this is the
+ * first of the two words.
+ *
+ * AT THE MOMENT, any required corrections for
+ * endianness are the responsibility of the device
+ * driver; not all platforms support hardware
+ * control of byteswapping. We anticipate
+ * providing flag bits to the PIO and DMA
+ * management interfaces to request different
+ * configurations of byteswapping hardware.
+ *
+ * PIO Accesses to CFG space via the "Bridge" ASIC
+ * used in IP30 platforms preserve the native byte
+ * significance within the 32-bit word; byte
+ * addresses for single byte accesses need to be
+ * XORed with 3, and addresses for 16-bit accesses
+ * need to be XORed with 2.
+ *
+ * The IOC3 used on IP30, and other SGI PCI devices
+ * as well, require use of 32-bit accesses to their
+ * configuration space registers. Any potential PCI
+ * bus providers need to be aware of this requirement.
+ */
+
+#define PCIIO_PIOMAP_CFG (0x1)
+#define PCIIO_PIOMAP_MEM (0x2)
+#define PCIIO_PIOMAP_IO (0x4)
+#define PCIIO_PIOMAP_WIN(n) (0x8+(n))
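A quick check of the space encoding above: CFG, MEM, and I/O space get dedicated flag values, and window n (named by its base address register index, per the note above) encodes as 0x8 + n:

```c
#include <assert.h>

/* Space encoding reproduced from the header for demonstration. */
#define PCIIO_PIOMAP_CFG	(0x1)
#define PCIIO_PIOMAP_MEM	(0x2)
#define PCIIO_PIOMAP_IO		(0x4)
#define PCIIO_PIOMAP_WIN(n)	(0x8+(n))
```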
+
+typedef pciio_piomap_t
+pciio_piomap_alloc_f (vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* which address space */
+ iopaddr_t pcipio_addr, /* starting address */
+ size_t byte_count,
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned int flags); /* defined in sys/pio.h */
+
+typedef void
+pciio_piomap_free_f (pciio_piomap_t pciio_piomap);
+
+typedef caddr_t
+pciio_piomap_addr_f (pciio_piomap_t pciio_piomap, /* mapping resources */
+ iopaddr_t pciio_addr, /* map for this pcipio address */
+ size_t byte_count); /* map this many bytes */
+
+typedef void
+pciio_piomap_done_f (pciio_piomap_t pciio_piomap);
+
+typedef caddr_t
+pciio_piotrans_addr_f (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* which address space */
+ iopaddr_t pciio_addr, /* starting address */
+ size_t byte_count, /* map this many bytes */
+ unsigned int flags);
+
+typedef caddr_t
+pciio_pio_addr_f (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_space_t space, /* which address space */
+ iopaddr_t pciio_addr, /* starting address */
+ size_t byte_count, /* map this many bytes */
+ pciio_piomap_t *mapp, /* in case a piomap was needed */
+ unsigned int flags);
+
+typedef iopaddr_t
+pciio_piospace_alloc_f (vertex_hdl_t dev, /* PIO space for this device */
+ device_desc_t dev_desc, /* Device descriptor */
+ pciio_space_t space, /* which address space */
+ size_t byte_count, /* Number of bytes of space */
+ size_t alignment); /* Alignment of allocation */
+
+typedef void
+pciio_piospace_free_f (vertex_hdl_t dev, /* Device freeing space */
+ pciio_space_t space, /* Which space is freed */
+ iopaddr_t pci_addr, /* Address being freed */
+ size_t size); /* Size freed */
+
+/* DMA MANAGEMENT */
+
+typedef pciio_dmamap_t
+pciio_dmamap_alloc_f (vertex_hdl_t dev, /* set up mappings for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned int flags); /* defined in dma.h */
+
+typedef void
+pciio_dmamap_free_f (pciio_dmamap_t dmamap);
+
+typedef iopaddr_t
+pciio_dmamap_addr_f (pciio_dmamap_t dmamap, /* use these mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count); /* map this many bytes */
+
+typedef void
+pciio_dmamap_done_f (pciio_dmamap_t dmamap);
+
+typedef iopaddr_t
+pciio_dmatrans_addr_f (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned int flags); /* defined in dma.h */
+
+typedef void
+pciio_dmamap_drain_f (pciio_dmamap_t map);
+
+typedef void
+pciio_dmaaddr_drain_f (vertex_hdl_t vhdl,
+ paddr_t addr,
+ size_t bytes);
+
+
+/* INTERRUPT MANAGEMENT */
+
+typedef pciio_intr_t
+pciio_intr_alloc_f (vertex_hdl_t dev, /* which PCI device */
+ device_desc_t dev_desc, /* device descriptor */
+ pciio_intr_line_t lines, /* which line(s) will be used */
+ vertex_hdl_t owner_dev); /* owner of this intr */
+
+typedef void
+pciio_intr_free_f (pciio_intr_t intr_hdl);
+
+typedef int
+pciio_intr_connect_f (pciio_intr_t intr_hdl, intr_func_t intr_func, intr_arg_t intr_arg); /* pciio intr resource handle */
+
+typedef void
+pciio_intr_disconnect_f (pciio_intr_t intr_hdl);
+
+typedef vertex_hdl_t
+pciio_intr_cpu_get_f (pciio_intr_t intr_hdl); /* pciio intr resource handle */
+
+/* CONFIGURATION MANAGEMENT */
+
+typedef void
+pciio_provider_startup_f (vertex_hdl_t pciio_provider);
+
+typedef void
+pciio_provider_shutdown_f (vertex_hdl_t pciio_provider);
+
+typedef int
+pciio_reset_f (vertex_hdl_t conn); /* pci connection point */
+
+typedef pciio_endian_t /* actual endianness */
+pciio_endian_set_f (vertex_hdl_t dev, /* specify endianness for this device */
+ pciio_endian_t device_end, /* endianness of device */
+ pciio_endian_t desired_end); /* desired endianness */
+
+typedef uint64_t
+pciio_config_get_f (vertex_hdl_t conn, /* pci connection point */
+ unsigned int reg, /* register byte offset */
+ unsigned int size); /* width in bytes (1..4) */
+
+typedef void
+pciio_config_set_f (vertex_hdl_t conn, /* pci connection point */
+ unsigned int reg, /* register byte offset */
+ unsigned int size, /* width in bytes (1..4) */
+ uint64_t value); /* value to store */
+
+typedef pciio_slot_t
+pciio_error_extract_f (vertex_hdl_t vhdl,
+ pciio_space_t *spacep,
+ iopaddr_t *addrp);
+
+typedef void
+pciio_driver_reg_callback_f (vertex_hdl_t conn,
+ int key1,
+ int key2,
+ int error);
+
+typedef void
+pciio_driver_unreg_callback_f (vertex_hdl_t conn, /* pci connection point */
+ int key1,
+ int key2,
+ int error);
+
+typedef int
+pciio_device_unregister_f (vertex_hdl_t conn);
+
+
+/*
+ * Adapters that provide a PCI interface adhere to this software interface.
+ */
+typedef struct pciio_provider_s {
+ /* ASIC PROVIDER ID */
+ pciio_asic_type_t provider_asic;
+
+ /* PIO MANAGEMENT */
+ pciio_piomap_alloc_f *piomap_alloc;
+ pciio_piomap_free_f *piomap_free;
+ pciio_piomap_addr_f *piomap_addr;
+ pciio_piomap_done_f *piomap_done;
+ pciio_piotrans_addr_f *piotrans_addr;
+ pciio_piospace_alloc_f *piospace_alloc;
+ pciio_piospace_free_f *piospace_free;
+
+ /* DMA MANAGEMENT */
+ pciio_dmamap_alloc_f *dmamap_alloc;
+ pciio_dmamap_free_f *dmamap_free;
+ pciio_dmamap_addr_f *dmamap_addr;
+ pciio_dmamap_done_f *dmamap_done;
+ pciio_dmatrans_addr_f *dmatrans_addr;
+ pciio_dmamap_drain_f *dmamap_drain;
+ pciio_dmaaddr_drain_f *dmaaddr_drain;
+
+ /* INTERRUPT MANAGEMENT */
+ pciio_intr_alloc_f *intr_alloc;
+ pciio_intr_free_f *intr_free;
+ pciio_intr_connect_f *intr_connect;
+ pciio_intr_disconnect_f *intr_disconnect;
+ pciio_intr_cpu_get_f *intr_cpu_get;
+
+ /* CONFIGURATION MANAGEMENT */
+ pciio_provider_startup_f *provider_startup;
+ pciio_provider_shutdown_f *provider_shutdown;
+ pciio_reset_f *reset;
+ pciio_endian_set_f *endian_set;
+ pciio_config_get_f *config_get;
+ pciio_config_set_f *config_set;
+
+ /* Error handling interface */
+ pciio_error_extract_f *error_extract;
+
+ /* Callback support */
+ pciio_driver_reg_callback_f *driver_reg_callback;
+ pciio_driver_unreg_callback_f *driver_unreg_callback;
+ pciio_device_unregister_f *device_unregister;
+} pciio_provider_t;
+
+/* PCI devices use these standard PCI provider interfaces */
+extern pciio_piomap_alloc_f pciio_piomap_alloc;
+extern pciio_piomap_free_f pciio_piomap_free;
+extern pciio_piomap_addr_f pciio_piomap_addr;
+extern pciio_piomap_done_f pciio_piomap_done;
+extern pciio_piotrans_addr_f pciio_piotrans_addr;
+extern pciio_pio_addr_f pciio_pio_addr;
+extern pciio_piospace_alloc_f pciio_piospace_alloc;
+extern pciio_piospace_free_f pciio_piospace_free;
+extern pciio_dmamap_alloc_f pciio_dmamap_alloc;
+extern pciio_dmamap_free_f pciio_dmamap_free;
+extern pciio_dmamap_addr_f pciio_dmamap_addr;
+extern pciio_dmamap_done_f pciio_dmamap_done;
+extern pciio_dmatrans_addr_f pciio_dmatrans_addr;
+extern pciio_dmamap_drain_f pciio_dmamap_drain;
+extern pciio_dmaaddr_drain_f pciio_dmaaddr_drain;
+extern pciio_intr_alloc_f pciio_intr_alloc;
+extern pciio_intr_free_f pciio_intr_free;
+extern pciio_intr_connect_f pciio_intr_connect;
+extern pciio_intr_disconnect_f pciio_intr_disconnect;
+extern pciio_intr_cpu_get_f pciio_intr_cpu_get;
+extern pciio_provider_startup_f pciio_provider_startup;
+extern pciio_provider_shutdown_f pciio_provider_shutdown;
+extern pciio_reset_f pciio_reset;
+extern pciio_endian_set_f pciio_endian_set;
+extern pciio_config_get_f pciio_config_get;
+extern pciio_config_set_f pciio_config_set;
+
+/* Widgetdev in the IOERROR structure is encoded as follows.
+ * +---------------------------+
+ * | slot (7:3) | function(2:0)|
+ * +---------------------------+
+ * Following are the convenience interfaces to form
+ * a widgetdev or to break it into its constituents.
+ */
+
+#define PCIIO_WIDGETDEV_SLOT_SHFT 3
+#define PCIIO_WIDGETDEV_SLOT_MASK 0x1f
+#define PCIIO_WIDGETDEV_FUNC_MASK 0x7
+
+#define pciio_widgetdev_create(slot,func) \
+ (((slot) << PCIIO_WIDGETDEV_SLOT_SHFT) + (func))
+
+#define pciio_widgetdev_slot_get(wdev) \
+ (((wdev) >> PCIIO_WIDGETDEV_SLOT_SHFT) & PCIIO_WIDGETDEV_SLOT_MASK)
+
+#define pciio_widgetdev_func_get(wdev) \
+ ((wdev) & PCIIO_WIDGETDEV_FUNC_MASK)
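A round trip through the encoding above (slot in bits 7:3, function in bits 2:0), with the macros reproduced for demonstration:

```c
#include <assert.h>

/* widgetdev encoding reproduced from the header. */
#define PCIIO_WIDGETDEV_SLOT_SHFT	3
#define PCIIO_WIDGETDEV_SLOT_MASK	0x1f
#define PCIIO_WIDGETDEV_FUNC_MASK	0x7

#define pciio_widgetdev_create(slot,func) \
	(((slot) << PCIIO_WIDGETDEV_SLOT_SHFT) + (func))

#define pciio_widgetdev_slot_get(wdev) \
	(((wdev) >> PCIIO_WIDGETDEV_SLOT_SHFT) & PCIIO_WIDGETDEV_SLOT_MASK)

#define pciio_widgetdev_func_get(wdev) \
	((wdev) & PCIIO_WIDGETDEV_FUNC_MASK)
```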
+
+
+/* Generic PCI card initialization interface
+ */
+
+extern int
+pciio_driver_register (pciio_vendor_id_t vendor_id, /* card's vendor number */
+ pciio_device_id_t device_id, /* card's device number */
+ char *driver_prefix, /* driver prefix */
+ unsigned int flags);
+
+extern void
+pciio_error_register (vertex_hdl_t pconn, /* which slot */
+ error_handler_f *efunc, /* function to call */
+ error_handler_arg_t einfo); /* first parameter */
+
+extern void pciio_driver_unregister(char *driver_prefix);
+
+typedef void pciio_iter_f(vertex_hdl_t pconn); /* a connect point */
+
+/* Interfaces used by PCI Bus Providers to talk to
+ * the Generic PCI layer.
+ */
+extern vertex_hdl_t
+pciio_device_register (vertex_hdl_t connectpt, /* vertex at center of bus */
+ vertex_hdl_t master, /* card's master ASIC (pci provider) */
+ pciio_slot_t slot, /* card's slot (0..?) */
+ pciio_function_t func, /* card's func (0..?) */
+ pciio_vendor_id_t vendor, /* card's vendor number */
+ pciio_device_id_t device); /* card's device number */
+
+extern void
+pciio_device_unregister(vertex_hdl_t connectpt);
+
+extern pciio_info_t
+pciio_device_info_new (pciio_info_t pciio_info, /* preallocated info struct */
+ vertex_hdl_t master, /* card's master ASIC (pci provider) */
+ pciio_slot_t slot, /* card's slot (0..?) */
+ pciio_function_t func, /* card's func (0..?) */
+ pciio_vendor_id_t vendor, /* card's vendor number */
+ pciio_device_id_t device); /* card's device number */
+
+extern void
+pciio_device_info_free(pciio_info_t pciio_info);
+
+extern vertex_hdl_t
+pciio_device_info_register(
+ vertex_hdl_t connectpt, /* vertex at center of bus */
+ pciio_info_t pciio_info); /* details about conn point */
+
+extern void
+pciio_device_info_unregister(
+ vertex_hdl_t connectpt, /* vertex at center of bus */
+ pciio_info_t pciio_info); /* details about conn point */
+
+
+extern int
+pciio_device_attach(
+ vertex_hdl_t pcicard, /* vertex created by pciio_device_register */
+ int drv_flags);
+extern int
+pciio_device_detach(
+ vertex_hdl_t pcicard, /* vertex created by pciio_device_register */
+ int drv_flags);
+
+
+/* create and initialize empty window mapping resource */
+extern pciio_win_map_t
+pciio_device_win_map_new(pciio_win_map_t win_map, /* preallocated win map structure */
+ size_t region_size, /* size of region to be tracked */
+ size_t page_size); /* allocation page size */
+
+/* destroy window mapping resource freeing up ancillary resources */
+extern void
+pciio_device_win_map_free(pciio_win_map_t win_map); /* preallocated win map structure */
+
+/* populate window mapping with free range of addresses */
+extern void
+pciio_device_win_populate(pciio_win_map_t win_map, /* win map */
+ iopaddr_t ioaddr, /* base address of free range */
+ size_t size); /* size of free range */
+
+/* allocate window from mapping resource */
+extern iopaddr_t
+pciio_device_win_alloc(struct resource * res,
+ pciio_win_alloc_t win_alloc, /* opaque allocation cookie */
+ size_t start, /* start unit, or 0 */
+ size_t size, /* size of allocation */
+ size_t align); /* alignment of allocation */
+
+/* free previously allocated window */
+extern void
+pciio_device_win_free(pciio_win_alloc_t win_alloc); /* opaque allocation cookie */
+
+
+/*
+ * Generic PCI interface, for use with all PCI providers
+ * and all PCI devices.
+ */
+
+/* Generic PCI interrupt interfaces */
+extern vertex_hdl_t pciio_intr_dev_get(pciio_intr_t pciio_intr);
+extern vertex_hdl_t pciio_intr_cpu_get(pciio_intr_t pciio_intr);
+
+/* Generic PCI pio interfaces */
+extern vertex_hdl_t pciio_pio_dev_get(pciio_piomap_t pciio_piomap);
+extern pciio_slot_t pciio_pio_slot_get(pciio_piomap_t pciio_piomap);
+extern pciio_space_t pciio_pio_space_get(pciio_piomap_t pciio_piomap);
+extern iopaddr_t pciio_pio_pciaddr_get(pciio_piomap_t pciio_piomap);
+extern ulong pciio_pio_mapsz_get(pciio_piomap_t pciio_piomap);
+extern caddr_t pciio_pio_kvaddr_get(pciio_piomap_t pciio_piomap);
+
+/* Generic PCI dma interfaces */
+extern vertex_hdl_t pciio_dma_dev_get(pciio_dmamap_t pciio_dmamap);
+
+/* Register/unregister PCI providers and get implementation handle */
+extern void pciio_provider_register(vertex_hdl_t provider, pciio_provider_t *pciio_fns);
+extern void pciio_provider_unregister(vertex_hdl_t provider);
+extern pciio_provider_t *pciio_provider_fns_get(vertex_hdl_t provider);
+
+/* Generic pci slot information access interface */
+extern pciio_info_t pciio_info_chk(vertex_hdl_t vhdl);
+extern pciio_info_t pciio_info_get(vertex_hdl_t vhdl);
+extern void pciio_info_set(vertex_hdl_t vhdl, pciio_info_t widget_info);
+extern vertex_hdl_t pciio_info_dev_get(pciio_info_t pciio_info);
+extern pciio_bus_t pciio_info_bus_get(pciio_info_t pciio_info);
+extern pciio_slot_t pciio_info_slot_get(pciio_info_t pciio_info);
+extern pciio_function_t pciio_info_function_get(pciio_info_t pciio_info);
+extern pciio_vendor_id_t pciio_info_vendor_id_get(pciio_info_t pciio_info);
+extern pciio_device_id_t pciio_info_device_id_get(pciio_info_t pciio_info);
+extern vertex_hdl_t pciio_info_master_get(pciio_info_t pciio_info);
+extern arbitrary_info_t pciio_info_mfast_get(pciio_info_t pciio_info);
+extern pciio_provider_t *pciio_info_pops_get(pciio_info_t pciio_info);
+extern error_handler_f *pciio_info_efunc_get(pciio_info_t);
+extern error_handler_arg_t *pciio_info_einfo_get(pciio_info_t);
+extern pciio_space_t pciio_info_bar_space_get(pciio_info_t, int);
+extern iopaddr_t pciio_info_bar_base_get(pciio_info_t, int);
+extern size_t pciio_info_bar_size_get(pciio_info_t, int);
+extern iopaddr_t pciio_info_rom_base_get(pciio_info_t);
+extern size_t pciio_info_rom_size_get(pciio_info_t);
+extern int pciio_info_type1_get(pciio_info_t);
+extern int pciio_error_handler(vertex_hdl_t, int, ioerror_mode_t, ioerror_t *);
+
+/**
+ * sn_pci_set_vchan - Set the requested Virtual Channel bits into the mapped DMA
+ * address.
+ * @pci_dev: pci device pointer
+ * @addr: mapped dma address
+ * @vchan: Virtual Channel to use 0 or 1.
+ *
+ * Set the Virtual Channel bit in the mapped dma address.
+ */
+
+static inline int
+sn_pci_set_vchan(struct pci_dev *pci_dev,
+ dma_addr_t *addr,
+ int vchan)
+{
+ if (vchan > 1) {
+ return -1;
+ }
+
+ if (!(*addr >> 32)) /* Using a mask here would be cleaner */
+ return 0; /* but this generates better code */
+
+ if (vchan == 1) {
+ /* Set Bit 57 */
+ *addr |= (1UL << 57);
+ } else {
+ /* Clear Bit 57 */
+ *addr &= ~(1UL << 57);
+ }
+
+ return 0;
+}
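A self-contained copy of the routine above (with dma_addr_t and struct pci_dev stubbed for illustration, and 1ULL used so the sketch is portable to 32-bit hosts) makes the three cases explicit: 32-bit addresses are left untouched, 64-bit addresses get bit 57 set or cleared, and vchan values above 1 are rejected:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;	/* stubbed for this sketch */
struct pci_dev;			/* opaque; the routine never dereferences it */

static inline int
sn_pci_set_vchan(struct pci_dev *pci_dev, dma_addr_t *addr, int vchan)
{
	if (vchan > 1)
		return -1;	/* only channels 0 and 1 exist */

	if (!(*addr >> 32))	/* 32-bit address: no channel bit to set */
		return 0;

	if (vchan == 1)
		*addr |= (1ULL << 57);	/* set bit 57 for virtual channel 1 */
	else
		*addr &= ~(1ULL << 57);	/* clear bit 57 for virtual channel 0 */

	return 0;
}
```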
+
+#endif /* C or C++ */
+
+
+/*
+ * Prototypes
+ */
+
+int snia_badaddr_val(volatile void *addr, int len, volatile void *ptr);
+nasid_t snia_get_console_nasid(void);
+nasid_t snia_get_master_baseio_nasid(void);
+#endif /* _ASM_IA64_SN_PCI_PCIIO_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIIO_PRIVATE_H
+#define _ASM_IA64_SN_PCI_PCIIO_PRIVATE_H
+
+#include <asm/sn/pci/pciio.h>
+#include <asm/sn/pci/pci_defs.h>
+
+/*
+ * pciio_private.h -- private definitions for pciio
+ * PCI drivers should NOT include this file.
+ */
+
+/*
+ * All PCI providers set up PIO using this information.
+ */
+struct pciio_piomap_s {
+ unsigned int pp_flags; /* PCIIO_PIOMAP flags */
+ vertex_hdl_t pp_dev; /* associated pci card */
+ pciio_slot_t pp_slot; /* which slot the card is in */
+ pciio_space_t pp_space; /* which address space */
+ iopaddr_t pp_pciaddr; /* starting offset of mapping */
+ size_t pp_mapsz; /* size of this mapping */
+ caddr_t pp_kvaddr; /* kernel virtual address to use */
+};
+
+/*
+ * All PCI providers set up DMA using this information.
+ */
+struct pciio_dmamap_s {
+ unsigned int pd_flags; /* PCIIO_DMAMAP flags */
+ vertex_hdl_t pd_dev; /* associated pci card */
+ pciio_slot_t pd_slot; /* which slot the card is in */
+};
+
+/*
+ * All PCI providers set up interrupts using this information.
+ */
+
+struct pciio_intr_s {
+ unsigned int pi_flags; /* PCIIO_INTR flags */
+ vertex_hdl_t pi_dev; /* associated pci card */
+ device_desc_t pi_dev_desc; /* override device descriptor */
+ pciio_intr_line_t pi_lines; /* which interrupt line(s) */
+ intr_func_t pi_func; /* handler function (when connected) */
+ intr_arg_t pi_arg; /* handler parameter (when connected) */
+ cpuid_t pi_mustruncpu; /* Where we must run. */
+ int pi_irq; /* IRQ assigned */
+ int pi_cpu; /* cpu assigned */
+};
+
+/* PCIIO_INTR (pi_flags) flags */
+#define PCIIO_INTR_CONNECTED 1 /* interrupt handler/thread has been connected */
+#define PCIIO_INTR_NOTHREAD 2 /* interrupt handler wants to be called at interrupt level */
+
+/*
+ * Generic PCI bus information
+ */
+struct pciio_businfo_s {
+ int bi_multi_master;/* Bus provider supports multiple */
+ /* dma masters behind a single slot. */
+ /* Needed to work around a thrashing */
+ /* issue in SGI Bridge ASIC and */
+ /* its derivatives. */
+ pciio_asic_type_t bi_asic_type; /* PCI ASIC type */
+ pciio_bus_type_t bi_bus_type; /* PCI bus type */
+ pciio_bus_speed_t bi_bus_speed; /* PCI bus speed */
+};
+
+/*
+ * Some PCI provider implementations keep track of PCI window Base Address
+ * Register (BAR) address range assignment via the rmalloc()/rmfree() arena
+ * management routines. These implementations use the following data
+ * structure for each allocation address space (e.g. memory, I/O, small
+ * window, etc.).
+ *
+ * The ``page size'' encodes the minimum allocation unit and must be a power
+ * of 2. The main use of this allocation ``page size'' is to control the
+ * number of free address ranges that the mapping allocation software will
+ * need to track. Smaller values will allow more efficient use of the address
+ * ranges but will result in much larger allocation map structures ... For
+ * instance, if we want to manage allocations for a 256MB address range,
+ * choosing a 1MB allocation page size will result in up to 1MB being wasted
+ * for allocation requests smaller than 1MB. The worst case allocation
+ * pattern for the allocation software to track would be a pattern of 1MB
+ * allocated, 1MB free. This results in the need to track up to 128 free
+ * ranges.
+ */
+struct pciio_win_map_s {
+ struct map *wm_map; /* window address map */
+ int wm_page_size; /* allocation ``page size'' */
+};
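The arithmetic in the comment above can be checked directly (the helper is illustrative, not part of this interface): the worst case for the allocator is an alternating allocated/free pattern, one page each, so half the pages become distinct free ranges.

```c
#include <assert.h>

/* Illustrative helper: number of free ranges the allocator must track
 * in the worst (alternating allocated/free) case. */
static unsigned long
worst_case_free_ranges(unsigned long region_size, unsigned long page_size)
{
	return (region_size / page_size) / 2;
}
```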
+
+/*
+ * Opaque structure used to keep track of window allocation information.
+ */
+struct pciio_win_alloc_s {
+ struct resource *wa_resource; /* window map allocation resource */
+ unsigned long wa_base; /* allocation starting page number */
+ size_t wa_pages; /* number of pages in allocation */
+};
+
+/*
+ * Each PCI Card has one of these.
+ */
+
+struct pciio_info_s {
+ char *c_fingerprint;
+ vertex_hdl_t c_vertex; /* back pointer to vertex */
+ vertex_hdl_t c_hostvertex;/* top most device in tree */
+ pciio_bus_t c_bus; /* which bus the card is in */
+ pciio_slot_t c_slot; /* which slot the card is in */
+ pciio_function_t c_func; /* which func (on multi-func cards) */
+ pciio_vendor_id_t c_vendor; /* PCI card "vendor" code */
+ pciio_device_id_t c_device; /* PCI card "device" code */
+ vertex_hdl_t c_master; /* PCI bus provider */
+ arbitrary_info_t c_mfast; /* cached fastinfo from c_master */
+ pciio_provider_t *c_pops; /* cached provider from c_master */
+ error_handler_f *c_efunc; /* error handling function */
+ error_handler_arg_t c_einfo; /* first parameter for efunc */
+
+ struct pciio_win_info_s { /* state of BASE regs */
+ pciio_space_t w_space;
+ char w_code; /* low 4 bits of MEM BAR */
+ /* low 2 bits of IO BAR */
+ iopaddr_t w_base;
+ size_t w_size;
+ int w_devio_index; /* DevIO[] register used to
+ access this window */
+ struct pciio_win_alloc_s w_win_alloc; /* window allocation cookie */
+ } c_window[PCI_CFG_BASE_ADDRS + 1];
+#define c_rwindow c_window[PCI_CFG_BASE_ADDRS] /* EXPANSION ROM window */
+#define c_rbase c_rwindow.w_base /* EXPANSION ROM base addr */
+#define c_rsize c_rwindow.w_size /* EXPANSION ROM size (bytes) */
+ pciio_piospace_t c_piospace; /* additional I/O spaces allocated */
+ int c_type1; /* use type1 addressing */
+};
+
+extern char pciio_info_fingerprint[];
+#endif /* _ASM_IA64_SN_PCI_PCIIO_PRIVATE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PIC_H
+#define _ASM_IA64_SN_PCI_PIC_H
+
+/*
+ * PIC AS DEVICE ZERO
+ * ------------------
+ *
+ * PIC handles PCI/X busses. PCI/X requires that the 'bridge' (i.e. PIC)
+ * be designated as 'device 0'. That is a departure from earlier SGI
+ * PCI bridges. Because of that we use config space 1 to access the
+ * config space of the first actual PCI device on the bus.
+ * Here's what the PIC manual says:
+ *
+ * The current PCI-X bus specification now defines that the parent
+ * host's bus bridge (PIC for example) must be device 0 on bus 0. PIC
+ * reduced the total number of devices from 8 to 4 and removed the
+ * device registers and windows, now only supporting devices 0,1,2, and
+ * 3. PIC did leave all 8 configuration space windows. The reason was
+ * there was nothing to gain by removing them. Herein lies the problem.
+ * The device numbering we do using 0 through 3 is unrelated to the device
+ * numbering which PCI-X requires in configuration space. In the past we
+ * correlated config space and our device space 0 <-> 0, 1 <-> 1, etc.
+ * PCI-X requires we start at 1, not 0, and currently the PX brick
+ * does associate our:
+ *
+ * device 0 with configuration space window 1,
+ * device 1 with configuration space window 2,
+ * device 2 with configuration space window 3,
+ * device 3 with configuration space window 4.
+ *
+ * The net effect is that all config space accesses are off-by-one
+ * relative to other per-slot accesses on the PIC.
+ * Here is a table that shows some of that:
+ *
+ * Internal Slot#
+ * |
+ * | 0 1 2 3
+ * ----------|---------------------------------------
+ * config | 0x21000 0x22000 0x23000 0x24000
+ * |
+ * even rrb | 0[0] n/a 1[0] n/a [] == implied even/odd
+ * |
+ * odd rrb | n/a 0[1] n/a 1[1]
+ * |
+ * int dev | 00 01 10 11
+ * |
+ * ext slot# | 1 2 3 4
+ * ----------|---------------------------------------
+ */
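The off-by-one relationship in the table can be sketched as follows (the helper names are illustrative): internal device n is reached through configuration space window n + 1, and per the table the config windows start at offset 0x21000 and are 0x1000 apart.

```c
#include <assert.h>

/* Illustrative helpers for the mapping in the table above. */
static unsigned int pic_cfg_window(unsigned int internal_dev)
{
	return internal_dev + 1;	/* ext slot# row: 1, 2, 3, 4 */
}

static unsigned long pic_cfg_offset(unsigned int internal_dev)
{
	return 0x21000UL + internal_dev * 0x1000UL;	/* config row */
}
```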
+
+
+#ifdef __KERNEL__
+#include <linux/types.h>
+#include <asm/sn/xtalk/xwidget.h> /* generic widget header */
+#else
+#include <xtalk/xwidget.h>
+#endif
+
+#include <asm/sn/pci/pciio.h>
+
+
+/*
+ * bus provider function table
+ *
+ * Normally, this table is only handed off explicitly
+ * during provider initialization, and the PCI generic
+ * layer will stash a pointer to it in the vertex; however,
+ * exporting it explicitly enables a performance hack in
+ * the generic PCI provider where if we know at compile
+ * time that the only possible PCI provider is a
+ * pcibr, we can go directly to this ops table.
+ */
+
+extern pciio_provider_t pci_pic_provider;
+
+
+/*
+ * misc defines
+ *
+ */
+
+#define PIC_WIDGET_PART_NUM_BUS0 0xd102
+#define PIC_WIDGET_PART_NUM_BUS1 0xd112
+#define PIC_WIDGET_MFGR_NUM 0x24
+#define PIC_WIDGET_REV_A 0x1
+#define PIC_WIDGET_REV_B 0x2
+#define PIC_WIDGET_REV_C 0x3
+
+#define PIC_XTALK_ADDR_MASK 0x0000FFFFFFFFFFFF
+#define PIC_INTERNAL_ATES 1024
+
+
+#define IS_PIC_PART_REV_A(rev) \
+ ((rev == (PIC_WIDGET_PART_NUM_BUS0 << 4 | PIC_WIDGET_REV_A)) || \
+ (rev == (PIC_WIDGET_PART_NUM_BUS1 << 4 | PIC_WIDGET_REV_A)))
+#define IS_PIC_PART_REV_B(rev) \
+ ((rev == (PIC_WIDGET_PART_NUM_BUS0 << 4 | PIC_WIDGET_REV_B)) || \
+ (rev == (PIC_WIDGET_PART_NUM_BUS1 << 4 | PIC_WIDGET_REV_B)))
+#define IS_PIC_PART_REV_C(rev) \
+ ((rev == (PIC_WIDGET_PART_NUM_BUS0 << 4 | PIC_WIDGET_REV_C)) || \
+ (rev == (PIC_WIDGET_PART_NUM_BUS1 << 4 | PIC_WIDGET_REV_C)))
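The rev argument these macros expect is the widget part number shifted left four bits with the 4-bit revision in the low nibble, so for example rev A of the bus-0 part encodes as 0xd1021 (macros reproduced for demonstration):

```c
#include <assert.h>

/* Part/rev encoding reproduced from the header. */
#define PIC_WIDGET_PART_NUM_BUS0	0xd102
#define PIC_WIDGET_PART_NUM_BUS1	0xd112
#define PIC_WIDGET_REV_A		0x1

#define IS_PIC_PART_REV_A(rev) \
	((rev == (PIC_WIDGET_PART_NUM_BUS0 << 4 | PIC_WIDGET_REV_A)) || \
	 (rev == (PIC_WIDGET_PART_NUM_BUS1 << 4 | PIC_WIDGET_REV_A)))
```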
+
+
+/*
+ * misc typedefs
+ *
+ */
+typedef uint64_t picreg_t;
+typedef uint64_t picate_t;
+
+/*
+ * PIC Bridge MMR defines
+ */
+
+/*
+ * PIC STATUS register offset 0x00000008
+ */
+
+#define PIC_STAT_PCIX_ACTIVE_SHFT 33
+
+/*
+ * PIC CONTROL register offset 0x00000020
+ */
+
+#define PIC_CTRL_PCI_SPEED_SHFT 4
+#define PIC_CTRL_PCI_SPEED (0x3 << PIC_CTRL_PCI_SPEED_SHFT)
+#define PIC_CTRL_PAGE_SIZE_SHFT 21
+#define PIC_CTRL_PAGE_SIZE (0x1 << PIC_CTRL_PAGE_SIZE_SHFT)
+
+
+/*
+ * PIC Intr Destination Addr offset 0x00000038
+ */
+
+#define PIC_INTR_DEST_ADDR 0x0000FFFFFFFFFFFF
+#define PIC_INTR_DEST_TID_SHFT 48
+#define PIC_INTR_DEST_TID (0xFull << PIC_INTR_DEST_TID_SHFT)
+
+/*
+ * PIC PCI Response Buffer offset 0x00000068
+ */
+#define PIC_RSP_BUF_ADDR 0x0000FFFFFFFFFFFF
+#define PIC_RSP_BUF_NUM_SHFT 48
+#define PIC_RSP_BUF_NUM (0xFull << PIC_RSP_BUF_NUM_SHFT)
+#define PIC_RSP_BUF_DEV_NUM_SHFT 52
+#define PIC_RSP_BUF_DEV_NUM (0x3ull << PIC_RSP_BUF_DEV_NUM_SHFT)
+
+/*
+ * PIC PCI DIRECT MAP register offset 0x00000080
+ */
+#define PIC_DIRMAP_DIROFF_SHFT 0
+#define PIC_DIRMAP_DIROFF (0x1FFFF << PIC_DIRMAP_DIROFF_SHFT)
+#define PIC_DIRMAP_ADD512_SHFT 17
+#define PIC_DIRMAP_ADD512 (0x1 << PIC_DIRMAP_ADD512_SHFT)
+#define PIC_DIRMAP_WID_SHFT 20
+#define PIC_DIRMAP_WID (0xF << PIC_DIRMAP_WID_SHFT)
+
+#define PIC_DIRMAP_OFF_ADDRSHFT 31
+
+/*
+ * Interrupt Status register offset 0x00000100
+ */
+#define PIC_ISR_PCIX_SPLIT_MSG_PE (0x1ull << 45)
+#define PIC_ISR_PCIX_SPLIT_EMSG (0x1ull << 44)
+#define PIC_ISR_PCIX_SPLIT_TO (0x1ull << 43)
+#define PIC_ISR_PCIX_UNEX_COMP (0x1ull << 42)
+#define PIC_ISR_INT_RAM_PERR (0x1ull << 41)
+#define PIC_ISR_PCIX_ARB_ERR (0x1ull << 40)
+#define PIC_ISR_PCIX_REQ_TOUT (0x1ull << 39)
+#define PIC_ISR_PCIX_TABORT (0x1ull << 38)
+#define PIC_ISR_PCIX_PERR (0x1ull << 37)
+#define PIC_ISR_PCIX_SERR (0x1ull << 36)
+#define PIC_ISR_PCIX_MRETRY (0x1ull << 35)
+#define PIC_ISR_PCIX_MTOUT (0x1ull << 34)
+#define PIC_ISR_PCIX_DA_PARITY (0x1ull << 33)
+#define PIC_ISR_PCIX_AD_PARITY (0x1ull << 32)
+#define PIC_ISR_PMU_PAGE_FAULT (0x1ull << 30)
+#define PIC_ISR_UNEXP_RESP (0x1ull << 29)
+#define PIC_ISR_BAD_XRESP_PKT (0x1ull << 28)
+#define PIC_ISR_BAD_XREQ_PKT (0x1ull << 27)
+#define PIC_ISR_RESP_XTLK_ERR (0x1ull << 26)
+#define PIC_ISR_REQ_XTLK_ERR (0x1ull << 25)
+#define PIC_ISR_INVLD_ADDR (0x1ull << 24)
+#define PIC_ISR_UNSUPPORTED_XOP (0x1ull << 23)
+#define PIC_ISR_XREQ_FIFO_OFLOW (0x1ull << 22)
+#define PIC_ISR_LLP_REC_SNERR (0x1ull << 21)
+#define PIC_ISR_LLP_REC_CBERR (0x1ull << 20)
+#define PIC_ISR_LLP_RCTY (0x1ull << 19)
+#define PIC_ISR_LLP_TX_RETRY (0x1ull << 18)
+#define PIC_ISR_LLP_TCTY (0x1ull << 17)
+#define PIC_ISR_PCI_ABORT (0x1ull << 15)
+#define PIC_ISR_PCI_PARITY (0x1ull << 14)
+#define PIC_ISR_PCI_SERR (0x1ull << 13)
+#define PIC_ISR_PCI_PERR (0x1ull << 12)
+#define PIC_ISR_PCI_MST_TIMEOUT (0x1ull << 11)
+#define PIC_ISR_PCI_RETRY_CNT (0x1ull << 10)
+#define PIC_ISR_XREAD_REQ_TIMEOUT (0x1ull << 9)
+#define PIC_ISR_INT_MSK (0xffull << 0)
+#define PIC_ISR_INT(x) (0x1ull << (x))
+
+#define PIC_ISR_LINK_ERROR \
+ (PIC_ISR_LLP_REC_SNERR|PIC_ISR_LLP_REC_CBERR| \
+ PIC_ISR_LLP_RCTY|PIC_ISR_LLP_TX_RETRY| \
+ PIC_ISR_LLP_TCTY)
+
+#define PIC_ISR_PCIBUS_PIOERR \
+ (PIC_ISR_PCI_MST_TIMEOUT|PIC_ISR_PCI_ABORT| \
+ PIC_ISR_PCIX_MTOUT|PIC_ISR_PCIX_TABORT)
+
+#define PIC_ISR_PCIBUS_ERROR \
+ (PIC_ISR_PCIBUS_PIOERR|PIC_ISR_PCI_PERR| \
+ PIC_ISR_PCI_SERR|PIC_ISR_PCI_RETRY_CNT| \
+ PIC_ISR_PCI_PARITY|PIC_ISR_PCIX_PERR| \
+ PIC_ISR_PCIX_SERR|PIC_ISR_PCIX_MRETRY| \
+ PIC_ISR_PCIX_AD_PARITY|PIC_ISR_PCIX_DA_PARITY| \
+ PIC_ISR_PCIX_REQ_TOUT|PIC_ISR_PCIX_UNEX_COMP| \
+ PIC_ISR_PCIX_SPLIT_TO|PIC_ISR_PCIX_SPLIT_EMSG| \
+ PIC_ISR_PCIX_SPLIT_MSG_PE)
+
+#define PIC_ISR_XTALK_ERROR \
+ (PIC_ISR_XREAD_REQ_TIMEOUT|PIC_ISR_XREQ_FIFO_OFLOW| \
+ PIC_ISR_UNSUPPORTED_XOP|PIC_ISR_INVLD_ADDR| \
+ PIC_ISR_REQ_XTLK_ERR|PIC_ISR_RESP_XTLK_ERR| \
+ PIC_ISR_BAD_XREQ_PKT|PIC_ISR_BAD_XRESP_PKT| \
+ PIC_ISR_UNEXP_RESP)
+
+#define PIC_ISR_ERRORS \
+ (PIC_ISR_LINK_ERROR|PIC_ISR_PCIBUS_ERROR| \
+ PIC_ISR_XTALK_ERROR| \
+ PIC_ISR_PMU_PAGE_FAULT|PIC_ISR_INT_RAM_PERR)
+
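As a quick illustration of how the interrupt-status masks above compose, here is a small self-contained sketch that restates a subset of the bit definitions (values copied from this header) and classifies a raw `p_int_status` value. The helper names are hypothetical; the real driver checks these bits inline.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative subset of the PIC interrupt-status bits; the values
 * mirror the header above but are restated so this sketch is
 * self-contained. */
#define PIC_ISR_PCI_SERR   (0x1ull << 13)
#define PIC_ISR_PCI_PERR   (0x1ull << 12)
#define PIC_ISR_INT_MSK    (0xffull << 0)
#define PIC_ISR_INT(x)     (0x1ull << (x))

/* Returns 1 if either of the (example) PCI bus-error bits is set. */
static int pic_has_pci_bus_error(uint64_t isr)
{
	return (isr & (PIC_ISR_PCI_SERR | PIC_ISR_PCI_PERR)) != 0;
}

/* Extracts the low byte carrying the per-device interrupt bits. */
static unsigned pic_device_ints(uint64_t isr)
{
	return (unsigned)(isr & PIC_ISR_INT_MSK);
}
```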
+/*
+ * PIC RESET INTR register offset 0x00000110
+ */
+
+#define PIC_IRR_ALL_CLR 0xffffffffffffffff
+
+/*
+ * PIC PCI Host Intr Addr offset 0x00000130 - 0x00000168
+ */
+#define PIC_HOST_INTR_ADDR 0x0000FFFFFFFFFFFF
+#define PIC_HOST_INTR_FLD_SHFT 48
+#define PIC_HOST_INTR_FLD (0xFFull << PIC_HOST_INTR_FLD_SHFT)
+
+
+/*
+ * PIC MMR structure mapping
+ */
+
+/* NOTE: PIC WAR. PV#854697. PIC does not allow writes just to [31:0]
+ * of a 64-bit register. When writing PIC registers, always write the
+ * entire 64 bits.
+ */
+
+typedef volatile struct pic_s {
+
+ /* 0x000000-0x00FFFF -- Local Registers */
+
+ /* 0x000000-0x000057 -- Standard Widget Configuration */
+ picreg_t p_wid_id; /* 0x000000 */
+ picreg_t p_wid_stat; /* 0x000008 */
+ picreg_t p_wid_err_upper; /* 0x000010 */
+ picreg_t p_wid_err_lower; /* 0x000018 */
+ #define p_wid_err p_wid_err_lower
+ picreg_t p_wid_control; /* 0x000020 */
+ picreg_t p_wid_req_timeout; /* 0x000028 */
+ picreg_t p_wid_int_upper; /* 0x000030 */
+ picreg_t p_wid_int_lower; /* 0x000038 */
+ #define p_wid_int p_wid_int_lower
+ picreg_t p_wid_err_cmdword; /* 0x000040 */
+ picreg_t p_wid_llp; /* 0x000048 */
+ picreg_t p_wid_tflush; /* 0x000050 */
+
+ /* 0x000058-0x00007F -- Bridge-specific Widget Configuration */
+ picreg_t p_wid_aux_err; /* 0x000058 */
+ picreg_t p_wid_resp_upper; /* 0x000060 */
+ picreg_t p_wid_resp_lower; /* 0x000068 */
+ #define p_wid_resp p_wid_resp_lower
+ picreg_t p_wid_tst_pin_ctrl; /* 0x000070 */
+ picreg_t p_wid_addr_lkerr; /* 0x000078 */
+
+ /* 0x000080-0x00008F -- PMU & MAP */
+ picreg_t p_dir_map; /* 0x000080 */
+ picreg_t _pad_000088; /* 0x000088 */
+
+ /* 0x000090-0x00009F -- SSRAM */
+ picreg_t p_map_fault; /* 0x000090 */
+ picreg_t _pad_000098; /* 0x000098 */
+
+ /* 0x0000A0-0x0000AF -- Arbitration */
+ picreg_t p_arb; /* 0x0000A0 */
+ picreg_t _pad_0000A8; /* 0x0000A8 */
+
+ /* 0x0000B0-0x0000BF -- Number In A Can or ATE Parity Error */
+ picreg_t p_ate_parity_err; /* 0x0000B0 */
+ picreg_t _pad_0000B8; /* 0x0000B8 */
+
+ /* 0x0000C0-0x0000FF -- PCI/GIO */
+ picreg_t p_bus_timeout; /* 0x0000C0 */
+ picreg_t p_pci_cfg; /* 0x0000C8 */
+ picreg_t p_pci_err_upper; /* 0x0000D0 */
+ picreg_t p_pci_err_lower; /* 0x0000D8 */
+ #define p_pci_err p_pci_err_lower
+ picreg_t _pad_0000E0[4]; /* 0x0000{E0..F8} */
+
+ /* 0x000100-0x0001FF -- Interrupt */
+ picreg_t p_int_status; /* 0x000100 */
+ picreg_t p_int_enable; /* 0x000108 */
+ picreg_t p_int_rst_stat; /* 0x000110 */
+ picreg_t p_int_mode; /* 0x000118 */
+ picreg_t p_int_device; /* 0x000120 */
+ picreg_t p_int_host_err; /* 0x000128 */
+ picreg_t p_int_addr[8]; /* 0x0001{30,,,68} */
+ picreg_t p_err_int_view; /* 0x000170 */
+ picreg_t p_mult_int; /* 0x000178 */
+ picreg_t p_force_always[8]; /* 0x0001{80,,,B8} */
+ picreg_t p_force_pin[8]; /* 0x0001{C0,,,F8} */
+
+ /* 0x000200-0x000298 -- Device */
+ picreg_t p_device[4]; /* 0x0002{00,,,18} */
+ picreg_t _pad_000220[4]; /* 0x0002{20,,,38} */
+ picreg_t p_wr_req_buf[4]; /* 0x0002{40,,,58} */
+ picreg_t _pad_000260[4]; /* 0x0002{60,,,78} */
+ picreg_t p_rrb_map[2]; /* 0x0002{80,,,88} */
+ #define p_even_resp p_rrb_map[0] /* 0x000280 */
+ #define p_odd_resp p_rrb_map[1] /* 0x000288 */
+ picreg_t p_resp_status; /* 0x000290 */
+ picreg_t p_resp_clear; /* 0x000298 */
+
+ picreg_t _pad_0002A0[12]; /* 0x0002{A0..F8} */
+
+ /* 0x000300-0x0003F8 -- Buffer Address Match Registers */
+ struct {
+ picreg_t upper; /* 0x0003{00,,,F0} */
+ picreg_t lower; /* 0x0003{08,,,F8} */
+ } p_buf_addr_match[16];
+
+ /* 0x000400-0x0005FF -- Performance Monitor Registers (even only) */
+ struct {
+ picreg_t flush_w_touch; /* 0x000{400,,,5C0} */
+ picreg_t flush_wo_touch; /* 0x000{408,,,5C8} */
+ picreg_t inflight; /* 0x000{410,,,5D0} */
+ picreg_t prefetch; /* 0x000{418,,,5D8} */
+ picreg_t total_pci_retry; /* 0x000{420,,,5E0} */
+ picreg_t max_pci_retry; /* 0x000{428,,,5E8} */
+ picreg_t max_latency; /* 0x000{430,,,5F0} */
+ picreg_t clear_all; /* 0x000{438,,,5F8} */
+ } p_buf_count[8];
+
+
+ /* 0x000600-0x0009FF -- PCI/X registers */
+ picreg_t p_pcix_bus_err_addr; /* 0x000600 */
+ picreg_t p_pcix_bus_err_attr; /* 0x000608 */
+ picreg_t p_pcix_bus_err_data; /* 0x000610 */
+ picreg_t p_pcix_pio_split_addr; /* 0x000618 */
+ picreg_t p_pcix_pio_split_attr; /* 0x000620 */
+ picreg_t p_pcix_dma_req_err_attr; /* 0x000628 */
+ picreg_t p_pcix_dma_req_err_addr; /* 0x000630 */
+ picreg_t p_pcix_timeout; /* 0x000638 */
+
+ picreg_t _pad_000640[120]; /* 0x000{640,,,9F8} */
+
+ /* 0x000A00-0x000BFF -- PCI/X Read&Write Buffer */
+ struct {
+ picreg_t p_buf_addr; /* 0x000{A00,,,AF0} */
+ picreg_t p_buf_attr; /* 0x000{A08,,,AF8} */
+ } p_pcix_read_buf_64[16];
+
+ struct {
+ picreg_t p_buf_addr; /* 0x000{B00,,,BE0} */
+ picreg_t p_buf_attr; /* 0x000{B08,,,BE8} */
+ picreg_t p_buf_valid; /* 0x000{B10,,,BF0} */
+ picreg_t __pad1; /* 0x000{B18,,,BF8} */
+ } p_pcix_write_buf_64[8];
+
+ /* End of Local Registers -- Start of Address Map space */
+
+ char _pad_000c00[0x010000 - 0x000c00];
+
+ /* 0x010000-0x011fff -- Internal ATE RAM (Auto Parity Generation) */
+ picate_t p_int_ate_ram[1024]; /* 0x010000-0x011fff */
+
+ /* 0x012000-0x013fff -- Internal ATE RAM (Manual Parity Generation) */
+ picate_t p_int_ate_ram_mp[1024]; /* 0x012000-0x013fff */
+
+ char _pad_014000[0x18000 - 0x014000];
+
+ /* 0x18000-0x197F8 -- PIC Write Request Ram */
+ picreg_t p_wr_req_lower[256]; /* 0x18000 - 0x187F8 */
+ picreg_t p_wr_req_upper[256]; /* 0x18800 - 0x18FF8 */
+ picreg_t p_wr_req_parity[256]; /* 0x19000 - 0x197F8 */
+
+ char _pad_019800[0x20000 - 0x019800];
+
+ /* 0x020000-0x027FFF -- PCI Device Configuration Spaces */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x02{0000,,,7FFF} */
+ uint16_t s[0x1000 / 2]; /* 0x02{0000,,,7FFF} */
+ uint32_t l[0x1000 / 4]; /* 0x02{0000,,,7FFF} */
+ uint64_t d[0x1000 / 8]; /* 0x02{0000,,,7FFF} */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } p_type0_cfg_dev[8]; /* 0x02{0000,,,7FFF} */
+
+ /* 0x028000-0x028FFF -- PCI Type 1 Configuration Space */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x028000-0x029000 */
+ uint16_t s[0x1000 / 2]; /* 0x028000-0x029000 */
+ uint32_t l[0x1000 / 4]; /* 0x028000-0x029000 */
+ uint64_t d[0x1000 / 8]; /* 0x028000-0x029000 */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } p_type1_cfg; /* 0x028000-0x029000 */
+
+ char _pad_029000[0x030000-0x029000];
+
+ /* 0x030000-0x030007 -- PCI Interrupt Acknowledge Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } p_pci_iack; /* 0x030000-0x030007 */
+
+ char _pad_030007[0x040000-0x030008];
+
+ /* 0x040000-0x040007 -- PCIX Special Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } p_pcix_cycle; /* 0x040000-0x040007 */
+} pic_t;
+
+#endif /* _ASM_IA64_SN_PCI_PIC_H */
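The PV854697 note above states that PIC does not allow writes to just bits [31:0] of a 64-bit register, so every field update is a full 64-bit read-modify-write. A minimal user-space sketch of that pattern, with a plain variable standing in for a memory-mapped `picreg_t` (an assumption for testability):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the memory-mapped PIC register type. */
typedef volatile uint64_t picreg_t;

/* PV854697 workaround sketch: update the bits selected by mask while
 * always reading and writing the entire 64-bit register. */
static void pic_update_field(picreg_t *reg, uint64_t mask, uint64_t val)
{
	uint64_t v = *reg;              /* read all 64 bits */
	v = (v & ~mask) | (val & mask); /* merge in the new field */
	*reg = v;                       /* write all 64 bits back */
}
```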
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PIO_H
+#define _ASM_IA64_SN_PIO_H
+
+#include <asm/sn/types.h>
+
+/*
+ * pioaddr_t - The kernel virtual address that a PIO can be done upon.
+ * Should probably be (volatile void *), but EVEREST mostly did
+ * PIO to longs; just cast for other sizes.
+ */
+
+typedef volatile unsigned long* pioaddr_t;
+
+/*
+ * iopaddr_t - the physical io space relative address (e.g. VME A16S 0x0800).
+ * iospace_t - specifies the io address space to be mapped/accessed.
+ * piomap_t - the handle returned by pio_alloc() and used with all the pio
+ * access functions.
+ */
+
+
+typedef struct piomap {
+ unsigned int pio_bus;
+ unsigned int pio_adap;
+ int pio_flag;
+ int pio_reg;
+ char pio_name[7]; /* to identify the mapped device */
+ struct piomap *pio_next; /* dlist to link active piomap's */
+ struct piomap *pio_prev; /* for debug and error reporting */
+ iopaddr_t pio_iopmask; /* valid iop address bit mask */
+ iobush_t pio_bushandle; /* bus-level handle */
+} piomap_t;
+
+#define pio_type pio_iospace.ios_type
+#define pio_iopaddr pio_iospace.ios_iopaddr
+#define pio_size pio_iospace.ios_size
+#define pio_vaddr pio_iospace.ios_vaddr
+
+/* Macro to get/set PIO error function */
+#define pio_seterrf(p,f) (p)->pio_errfunc = (f)
+#define pio_geterrf(p) (p)->pio_errfunc
+
+
+/*
+ * piomap_t type defines
+ */
+
+#define PIOMAP_NTYPES 7
+
+#define PIOMAP_A16N VME_A16NP
+#define PIOMAP_A16S VME_A16S
+#define PIOMAP_A24N VME_A24NP
+#define PIOMAP_A24S VME_A24S
+#define PIOMAP_A32N VME_A32NP
+#define PIOMAP_A32S VME_A32S
+#define PIOMAP_A64 6
+
+#define PIOMAP_EISA_IO 0
+#define PIOMAP_EISA_MEM 1
+
+#define PIOMAP_PCI_IO 0
+#define PIOMAP_PCI_MEM 1
+#define PIOMAP_PCI_CFG 2
+#define PIOMAP_PCI_ID 3
+
+/* IBUS piomap types */
+#define PIOMAP_FCI 0
+
+/* dang gio piomap types */
+
+#define PIOMAP_GIO32 0
+#define PIOMAP_GIO64 1
+
+#define ET_MEM 0
+#define ET_IO 1
+#define LAN_RAM 2
+#define LAN_IO 3
+
+#define PIOREG_NULL (-1)
+
+/* standard flags values for pio_map routines,
+ * including {xtalk,pciio}_piomap calls.
+ * NOTE: try to keep these in step with DMAMAP flags.
+ */
+#define PIOMAP_UNFIXED 0x0
+#define PIOMAP_FIXED 0x1
+#define PIOMAP_NOSLEEP 0x2
+#define PIOMAP_INPLACE 0x4
+
+#define PIOMAP_FLAGS 0x7
+
+#endif /* _ASM_IA64_SN_PIO_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PRIO_H
+#define _ASM_IA64_SN_PRIO_H
+
+#include <linux/types.h>
+
+/*
+ * Priority I/O function prototypes and macro definitions
+ */
+
+typedef long long bandwidth_t;
+
+/* These should be the same as FREAD/FWRITE */
+#define PRIO_READ_ALLOCATE 0x1
+#define PRIO_WRITE_ALLOCATE 0x2
+#define PRIO_READWRITE_ALLOCATE (PRIO_READ_ALLOCATE | PRIO_WRITE_ALLOCATE)
+
+extern int prioSetBandwidth (int /* fd */,
+ int /* alloc_type */,
+ bandwidth_t /* bytes_per_sec */,
+ pid_t * /* pid */);
+extern int prioGetBandwidth (int /* fd */,
+ bandwidth_t * /* read_bw */,
+ bandwidth_t * /* write_bw */);
+extern int prioLock (pid_t *);
+extern int prioUnlock (void);
+
+/* Error returns */
+#define PRIO_SUCCESS 0
+#define PRIO_FAIL (-1)
+
+#endif /* _ASM_IA64_SN_PRIO_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#ifndef _ASM_IA64_SN_SGI_H
+#define _ASM_IA64_SN_SGI_H
+
+#include <linux/config.h>
+
+#include <asm/sn/types.h>
+#include <asm/sn/hwgfs.h>
+
+typedef hwgfs_handle_t vertex_hdl_t;
+
+/* Nice general name length that lots of people like to use */
+#ifndef MAXDEVNAME
+#define MAXDEVNAME 256
+#endif
+
+
+/*
+ * Possible return values from graph routines.
+ */
+typedef enum graph_error_e {
+ GRAPH_SUCCESS, /* 0 */
+ GRAPH_DUP, /* 1 */
+ GRAPH_NOT_FOUND, /* 2 */
+ GRAPH_BAD_PARAM, /* 3 */
+ GRAPH_HIT_LIMIT, /* 4 */
+ GRAPH_CANNOT_ALLOC, /* 5 */
+ GRAPH_ILLEGAL_REQUEST, /* 6 */
+ GRAPH_IN_USE /* 7 */
+} graph_error_t;
+
+#define CNODEID_NONE ((cnodeid_t)-1)
+#define CPU_NONE (-1)
+#define GRAPH_VERTEX_NONE ((vertex_hdl_t)-1)
+
+/*
+ * Defines for individual WARs. Each is a bitmask of applicable
+ * part revision numbers. (1 << 1) == rev A, (1 << 2) == rev B,
+ * (3 << 1) == (rev A or rev B), etc
+ */
+#define PV854697 (~0) /* PIC: write 64bit regs as 64bits. permanent */
+#define PV854827 (~0UL) /* PIC: fake widget 0xf presence bit. permanent */
+#define PV855271 (1 << 1) /* PIC: use virt chan iff 64-bit device. */
+#define PV878674 (~0) /* PIC: Don't allow 64-bit PIOs. permanent */
+#define PV855272 (1 << 1) /* PIC: runaway interrupt WAR */
+#define PV856155 (1 << 1) /* PIC: arbitration WAR */
+#define PV856864 (1 << 1) /* PIC: lower timeout to free TNUMs quicker */
+#define PV856866 (1 << 1) /* PIC: avoid rrb's 0/1/8/9. */
+#define PV862253 (1 << 1) /* PIC: don't enable write req RAM parity checking */
+#define PV867308 (3 << 1) /* PIC: make LLP error interrupts FATAL for PIC */
+
+/*
+ * No code is complete without an Assertion macro
+ */
+
+#if defined(DISABLE_ASSERT)
+#define ASSERT(expr)
+#define ASSERT_ALWAYS(expr)
+#else
+#define ASSERT(expr) do { \
+ if(!(expr)) { \
+ printk( "Assertion [%s] failed! %s:%s(line=%d)\n",\
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ panic("Assertion panic\n"); \
+ } } while(0)
+
+#define ASSERT_ALWAYS(expr) do {\
+ if(!(expr)) { \
+ printk( "Assertion [%s] failed! %s:%s(line=%d)\n",\
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ panic("Assertion always panic\n"); \
+ } } while(0)
+#endif /* DISABLE_ASSERT */
+
+#endif /* _ASM_IA64_SN_SGI_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_SLOTNUM_H
+#define _ASM_IA64_SN_SLOTNUM_H
+
+
+typedef unsigned char slotid_t;
+
+#include <asm/sn/sn2/slotnum.h>
+
+#endif /* _ASM_IA64_SN_SLOTNUM_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SN2_ADDRS_H
+#define _ASM_IA64_SN_SN2_ADDRS_H
+
+/* McKinley Address Format:
+ *
+ * 4 4 3 3 3 3
+ * 9 8 8 7 6 5 0
+ * +-+---------+----+--------------+
+ * |0| Node ID | AS | Node Offset |
+ * +-+---------+----+--------------+
+ *
+ * Node ID: If bit 38 = 1, is ICE, else is SHUB
+ * AS: Address Space Identifier. Used only if bit 38 = 0.
+ * b'00: Local Resources and MMR space
+ * bit 35
+ * 0: Local resources space
+ * node id:
+ * 0: IA64/NT compatibility space
+ * 2: Local MMR Space
+ * 4: Local memory, regardless of local node id
+ * 1: Global MMR space
+ * b'01: GET space.
+ * b'10: AMO space.
+ * b'11: Cacheable memory space.
+ *
+ * NodeOffset: byte offset
+ */
+
+#ifndef __ASSEMBLY__
+typedef union ia64_sn2_pa {
+ struct {
+ unsigned long off : 36;
+ unsigned long as : 2;
+ unsigned long nasid: 11;
+ unsigned long fill : 15;
+ } f;
+ unsigned long l;
+ void *p;
+} ia64_sn2_pa_t;
+#endif
+
+#define TO_PHYS_MASK 0x0001ffcfffffffff /* Note - clear AS bits */
+
+
+/* Regions determined by AS */
+#define LOCAL_MMR_SPACE 0xc000008000000000 /* Local MMR space */
+#define LOCAL_PHYS_MMR_SPACE 0x8000008000000000 /* Local PhysicalMMR space */
+#define LOCAL_MEM_SPACE 0xc000010000000000 /* Local Memory space */
+#define GLOBAL_MMR_SPACE 0xc000000800000000 /* Global MMR space */
+#define GLOBAL_PHYS_MMR_SPACE 0x0000000800000000 /* Global Physical MMR space */
+#define GET_SPACE 0xe000001000000000 /* GET space */
+#define AMO_SPACE 0xc000002000000000 /* AMO space */
+#define CACHEABLE_MEM_SPACE 0xe000003000000000 /* Cacheable memory space */
+#define UNCACHED 0xc000000000000000 /* UnCacheable memory space */
+#define UNCACHED_PHYS 0x8000000000000000 /* UnCacheable physical memory space */
+
+#define PHYS_MEM_SPACE 0x0000003000000000 /* physical memory space */
+
+/* SN2 address macros */
+#define NID_SHFT 38
+#define LOCAL_MMR_ADDR(a) (UNCACHED | LOCAL_MMR_SPACE | (a))
+#define LOCAL_MMR_PHYS_ADDR(a) (UNCACHED_PHYS | LOCAL_PHYS_MMR_SPACE | (a))
+#define LOCAL_MEM_ADDR(a) (LOCAL_MEM_SPACE | (a))
+#define REMOTE_ADDR(n,a) ((((unsigned long)(n))<<NID_SHFT) | (a))
+#define GLOBAL_MMR_ADDR(n,a) (UNCACHED | GLOBAL_MMR_SPACE | REMOTE_ADDR(n,a))
+#define GLOBAL_MMR_PHYS_ADDR(n,a) (UNCACHED_PHYS | GLOBAL_PHYS_MMR_SPACE | REMOTE_ADDR(n,a))
+#define GET_ADDR(n,a) (GET_SPACE | REMOTE_ADDR(n,a))
+#define AMO_ADDR(n,a) (UNCACHED | AMO_SPACE | REMOTE_ADDR(n,a))
+#define GLOBAL_MEM_ADDR(n,a) (CACHEABLE_MEM_SPACE | REMOTE_ADDR(n,a))
+
+/* non-II mmr's start at top of big window space (4G) */
+#define BWIN_TOP 0x0000000100000000
+
+/*
+ * general address defines - for code common to SN0/SN1/SN2
+ */
+#define CAC_BASE CACHEABLE_MEM_SPACE /* cacheable memory space */
+#define IO_BASE (UNCACHED | GLOBAL_MMR_SPACE) /* lower 4G maps II's XIO space */
+#define AMO_BASE (UNCACHED | AMO_SPACE) /* fetch & op space */
+#define MSPEC_BASE AMO_BASE /* fetch & op space */
+#define UNCAC_BASE (UNCACHED | CACHEABLE_MEM_SPACE) /* uncached global memory */
+#define GET_BASE GET_SPACE /* momentarily coherent remote mem. */
+#define CALIAS_BASE LOCAL_CACHEABLE_BASE /* cached node-local memory */
+#define UALIAS_BASE (UNCACHED | LOCAL_CACHEABLE_BASE) /* uncached node-local memory */
+
+#define TO_PHYS(x) ( ((x) & TO_PHYS_MASK))
+#define TO_CAC(x) (CAC_BASE | ((x) & TO_PHYS_MASK))
+#define TO_UNCAC(x) (UNCAC_BASE | ((x) & TO_PHYS_MASK))
+#define TO_MSPEC(x) (MSPEC_BASE | ((x) & TO_PHYS_MASK))
+#define TO_GET(x) (GET_BASE | ((x) & TO_PHYS_MASK))
+#define TO_CALIAS(x) (CALIAS_BASE | TO_NODE_ADDRSPACE(x))
+#define TO_UALIAS(x) (UALIAS_BASE | TO_NODE_ADDRSPACE(x))
+#define NODE_SIZE_BITS 36 /* node offset : bits <35:0> */
+#define BWIN_SIZE_BITS 29 /* big window size: 512M */
+#define NASID_BITS 11 /* bits <48:38> */
+#define NASID_BITMASK (0x7ffULL)
+#define NASID_SHFT NID_SHFT
+#define NASID_META_BITS 0 /* ???? */
+#define NASID_LOCAL_BITS 7 /* same router as SN1 */
+
+#define NODE_ADDRSPACE_SIZE (1UL << NODE_SIZE_BITS)
+#define NASID_MASK ((uint64_t) NASID_BITMASK << NASID_SHFT)
+#define NASID_GET(_pa) (int) (((uint64_t) (_pa) >> \
+ NASID_SHFT) & NASID_BITMASK)
+#define PHYS_TO_DMA(x) ( ((x & NASID_MASK) >> 2) | \
+ (x & (NODE_ADDRSPACE_SIZE - 1)) )
+
+#define CHANGE_NASID(n,x) ({ia64_sn2_pa_t _v; _v.l = (long) (x); _v.f.nasid = n; _v.p;})
+
+/*
+ * Determine if a physical address should be referenced as cached or uncached.
+ * For now, assume all memory is cached and everything else is noncached.
+ * (Later, we may need to special-case areas of memory to be referenced uncached.)
+ */
+#define IS_CACHED_ADDRESS(x) (((x) & PHYS_MEM_SPACE) == PHYS_MEM_SPACE)
+
+
+#ifndef __ASSEMBLY__
+#define NODE_SWIN_BASE(nasid, widget) \
+ ((widget == 0) ? NODE_BWIN_BASE((nasid), SWIN0_BIGWIN) \
+ : RAW_NODE_SWIN_BASE(nasid, widget))
+#else
+#define NODE_SWIN_BASE(nasid, widget) \
+ (NODE_IO_BASE(nasid) + ((uint64_t) (widget) << SWIN_SIZE_BITS))
+#define LOCAL_SWIN_BASE(widget) \
+ (UNCACHED | LOCAL_MMR_SPACE | (((uint64_t) (widget) << SWIN_SIZE_BITS)))
+#endif /* __ASSEMBLY__ */
+
+/*
+ * The following definitions pertain to the IO special address
+ * space. They define the location of the big and little windows
+ * of any given node.
+ */
+
+#define BWIN_INDEX_BITS 3
+#define BWIN_SIZE (1UL << BWIN_SIZE_BITS)
+#define BWIN_SIZEMASK (BWIN_SIZE - 1)
+#define BWIN_WIDGET_MASK 0x7
+#define NODE_BWIN_BASE0(nasid) (NODE_IO_BASE(nasid) + BWIN_SIZE)
+#define NODE_BWIN_BASE(nasid, bigwin) (NODE_BWIN_BASE0(nasid) + \
+ ((uint64_t) (bigwin) << BWIN_SIZE_BITS))
+
+#define BWIN_WIDGETADDR(addr) ((addr) & BWIN_SIZEMASK)
+#define BWIN_WINDOWNUM(addr) (((addr) >> BWIN_SIZE_BITS) & BWIN_WIDGET_MASK)
+
+/*
+ * Verify if addr belongs to large window address of node with "nasid"
+ *
+ *
+ * NOTE: "addr" is expected to be XKPHYS address, and NOT physical
+ * address
+ *
+ *
+ */
+
+#define NODE_BWIN_ADDR(nasid, addr) \
+ (((addr) >= NODE_BWIN_BASE0(nasid)) && \
+ ((addr) < (NODE_BWIN_BASE(nasid, HUB_NUM_BIG_WINDOW) + \
+ BWIN_SIZE)))
+
+#endif /* _ASM_IA64_SN_SN2_ADDRS_H */
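The node-addressing macros above embed the NASID in bits <48:38> of the address. A self-contained restatement (constants copied from this header) showing that `NASID_GET` inverts `REMOTE_ADDR`:

```c
#include <assert.h>
#include <stdint.h>

/* Restated from the header above: the NASID occupies bits <48:38>. */
#define NID_SHFT      38
#define NASID_BITMASK 0x7ffULL

/* Compose a remote address from a nasid and a node offset. */
#define REMOTE_ADDR(n, a) ((((uint64_t)(n)) << NID_SHFT) | (a))

/* Extract the nasid back out of a physical address. */
#define NASID_GET(pa) ((int)(((uint64_t)(pa) >> NID_SHFT) & NASID_BITMASK))
```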
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_SN2_ARCH_H
+#define _ASM_IA64_SN_SN2_ARCH_H
+
+#define CPUS_PER_NODE 4 /* CPUs on a single hub */
+#define CPUS_PER_SUBNODE 4 /* CPUs on a single hub PI */
+
+
+/*
+ * This is the maximum number of NASIDS that can be present in a system.
+ * (Highest NASID plus one.)
+ */
+#define MAX_NASIDS 2048
+
+
+/*
+ * This is the maximum number of nodes that can be part of a kernel.
+ * Effectively, it's the maximum number of compact node ids (cnodeid_t).
+ * This is not necessarily the same as MAX_NASIDS.
+ */
+#define MAX_COMPACT_NODES 2048
+
+/*
+ * MAX_REGIONS refers to the maximum number of hardware partitioned regions.
+ */
+#define MAX_REGIONS 64
+#define MAX_NONPREMIUM_REGIONS 16
+#define MAX_PREMIUM_REGIONS MAX_REGIONS
+
+
+/*
+ * MAX_PARITIONS refers to the maximum number of logically defined
+ * partitions the system can support.
+ */
+#define MAX_PARTITIONS MAX_REGIONS
+
+
+#define NASID_MASK_BYTES ((MAX_NASIDS + 7) / 8)
+#define CNASID_MASK_BYTES (NASID_MASK_BYTES / 2)
+
+
+/*
+ * 1 FSB per SHUB, with up to 4 cpus per FSB.
+ */
+#define NUM_SUBNODES 1
+#define SUBNODE_SHFT 0
+#define SUBNODE_MASK (0x0 << SUBNODE_SHFT)
+#define LOCALCPU_SHFT 0
+#define LOCALCPU_MASK (0x3 << LOCALCPU_SHFT)
+#define SUBNODE(slice) (((slice) & SUBNODE_MASK) >> SUBNODE_SHFT)
+#define LOCALCPU(slice) (((slice) & LOCALCPU_MASK) >> LOCALCPU_SHFT)
+#define TO_SLICE(subn, local) (((subn) << SUBNODE_SHFT) | \
+ ((local) << LOCALCPU_SHFT))
+
+#endif /* _ASM_IA64_SN_SN2_ARCH_H */
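Since SN2 has a single subnode per SHUB, the slice encoding above reduces to the local CPU number in the low two bits. A minimal sketch (macros restated from this header) of the round trip:

```c
#include <assert.h>

/* Restated from the header above: one FSB per SHUB, up to 4 cpus. */
#define SUBNODE_SHFT  0
#define LOCALCPU_SHFT 0
#define LOCALCPU_MASK (0x3 << LOCALCPU_SHFT)

#define LOCALCPU(slice) (((slice) & LOCALCPU_MASK) >> LOCALCPU_SHFT)
#define TO_SLICE(subn, local) (((subn) << SUBNODE_SHFT) | \
			       ((local) << LOCALCPU_SHFT))
```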
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SN2_GEO_H
+#define _ASM_IA64_SN_SN2_GEO_H
+
+/* Headers required by declarations in this file */
+
+#include <asm/sn/slotnum.h>
+
+
+/* The geoid_t implementation below is based loosely on the pcfg_t
+ implementation in sys/SN/promcfg.h. */
+
+/* Type declarations */
+
+/* Size of a geoid_t structure (must be before decl. of geoid_u) */
+#define GEOID_SIZE 8 /* Would 16 be better? The size can
+ be different on different platforms. */
+
+#define MAX_SLABS 0xe /* slabs per module */
+
+typedef unsigned char geo_type_t;
+
+/* Fields common to all substructures */
+typedef struct geo_any_s {
+ moduleid_t module; /* The module (box) this h/w lives in */
+ geo_type_t type; /* What type of h/w is named by this geoid_t */
+ slabid_t slab; /* The logical assembly within the module */
+} geo_any_t;
+
+/* Additional fields for particular types of hardware */
+typedef struct geo_node_s {
+ geo_any_t any; /* No additional fields needed */
+} geo_node_t;
+
+typedef struct geo_rtr_s {
+ geo_any_t any; /* No additional fields needed */
+} geo_rtr_t;
+
+typedef struct geo_iocntl_s {
+ geo_any_t any; /* No additional fields needed */
+} geo_iocntl_t;
+
+typedef struct geo_pcicard_s {
+ geo_iocntl_t any;
+ char bus; /* Bus/widget number */
+ slotid_t slot; /* PCI slot number */
+} geo_pcicard_t;
+
+/* Subcomponents of a node */
+typedef struct geo_cpu_s {
+ geo_node_t node;
+ char slice; /* Which CPU on the node */
+} geo_cpu_t;
+
+typedef struct geo_mem_s {
+ geo_node_t node;
+ char membus; /* The memory bus on the node */
+ char memslot; /* The memory slot on the bus */
+} geo_mem_t;
+
+
+typedef union geoid_u {
+ geo_any_t any;
+ geo_node_t node;
+ geo_iocntl_t iocntl;
+ geo_pcicard_t pcicard;
+ geo_rtr_t rtr;
+ geo_cpu_t cpu;
+ geo_mem_t mem;
+ char padsize[GEOID_SIZE];
+} geoid_t;
+
+
+/* Preprocessor macros */
+
+#define GEO_MAX_LEN 48 /* max. formatted length, plus some pad:
+ module/001c07/slab/5/node/memory/2/slot/4 */
+
+/* Values for geo_type_t */
+#define GEO_TYPE_INVALID 0
+#define GEO_TYPE_MODULE 1
+#define GEO_TYPE_NODE 2
+#define GEO_TYPE_RTR 3
+#define GEO_TYPE_IOCNTL 4
+#define GEO_TYPE_IOCARD 5
+#define GEO_TYPE_CPU 6
+#define GEO_TYPE_MEM 7
+#define GEO_TYPE_MAX (GEO_TYPE_MEM+1)
+
+/* Parameter for hwcfg_format_geoid_compt() */
+#define GEO_COMPT_MODULE 1
+#define GEO_COMPT_SLAB 2
+#define GEO_COMPT_IOBUS 3
+#define GEO_COMPT_IOSLOT 4
+#define GEO_COMPT_CPU 5
+#define GEO_COMPT_MEMBUS 6
+#define GEO_COMPT_MEMSLOT 7
+
+#define GEO_INVALID_STR "<invalid>"
+
+#endif /* _ASM_IA64_SN_SN2_GEO_H */
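The `geoid_t` union above relies on `padsize[GEOID_SIZE]` to pin the structure to a fixed 8-byte footprint while the `any` header stays readable through every variant. A compilable sketch of that layout, with the opaque `moduleid_t` and `slabid_t` types stood in by small integers (an assumption; the real types come from other SN headers):

```c
#include <assert.h>

/* Stand-ins for types defined elsewhere in the SN headers. */
typedef unsigned short moduleid_t;
typedef unsigned char  slabid_t;
typedef unsigned char  geo_type_t;

#define GEOID_SIZE   8
#define GEO_TYPE_CPU 6

/* Common header, mirrored from geo_any_s above. */
typedef struct { moduleid_t module; geo_type_t type; slabid_t slab; } geo_any_t;
typedef struct { geo_any_t any; } geo_node_t;
typedef struct { geo_node_t node; char slice; } geo_cpu_t;

/* The pad member forces a fixed size regardless of variant. */
typedef union {
	geo_any_t any;
	geo_cpu_t cpu;
	char      padsize[GEOID_SIZE];
} geoid_t;
```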
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_SN2_INTR_H
+#define _ASM_IA64_SN_SN2_INTR_H
+
+#define SGI_UART_VECTOR (0xe9)
+#define SGI_SHUB_ERROR_VECTOR (0xea)
+
+// These two IRQs are used by partitioning.
+#define SGI_XPC_ACTIVATE (0x30)
+#define SGI_II_ERROR (0x31)
+#define SGI_XBOW_ERROR (0x32)
+#define SGI_PCIBR_ERROR (0x33)
+#define SGI_ACPI_SCI_INT (0x34)
+#define SGI_XPC_NOTIFY (0xe7)
+
+#define IA64_SN2_FIRST_DEVICE_VECTOR (0x37)
+#define IA64_SN2_LAST_DEVICE_VECTOR (0xe6)
+
+#define SN2_IRQ_RESERVED (0x1)
+#define SN2_IRQ_CONNECTED (0x2)
+#define SN2_IRQ_SHARED (0x4)
+
+#define SN2_IRQ_PER_HUB (2048)
+
+#endif /* _ASM_IA64_SN_SN2_INTR_H */
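Ordinary device interrupts must fall inside the window bounded by the first/last device vectors defined above, with the fixed vectors (UART, SHUB error, XPC notify) kept outside it. A small sketch of a range check against those constants (helper name hypothetical):

```c
#include <assert.h>

/* Restated from the header above. */
#define IA64_SN2_FIRST_DEVICE_VECTOR 0x37
#define IA64_SN2_LAST_DEVICE_VECTOR  0xe6

/* Returns 1 if the vector lies in the window reserved for ordinary
 * device interrupts, 0 otherwise. */
static int sn2_is_device_vector(int v)
{
	return v >= IA64_SN2_FIRST_DEVICE_VECTOR &&
	       v <= IA64_SN2_LAST_DEVICE_VECTOR;
}
```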
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_SN_SN2_IO_H
+#define _ASM_SN_SN2_IO_H
+#include <linux/compiler.h>
+#include <asm/intrinsics.h>
+
+extern void * sn_io_addr(unsigned long port) __attribute_const__; /* Forward declaration */
+extern void sn_mmiob(void); /* Forward declaration */
+
+#define __sn_mf_a() ia64_mfa()
+
+extern void sn_dma_flush(unsigned long);
+
+#define __sn_inb ___sn_inb
+#define __sn_inw ___sn_inw
+#define __sn_inl ___sn_inl
+#define __sn_outb ___sn_outb
+#define __sn_outw ___sn_outw
+#define __sn_outl ___sn_outl
+#define __sn_readb ___sn_readb
+#define __sn_readw ___sn_readw
+#define __sn_readl ___sn_readl
+#define __sn_readq ___sn_readq
+#define __sn_readb_relaxed ___sn_readb_relaxed
+#define __sn_readw_relaxed ___sn_readw_relaxed
+#define __sn_readl_relaxed ___sn_readl_relaxed
+#define __sn_readq_relaxed ___sn_readq_relaxed
+
+/*
+ * The following routines are SN platform specific, called when
+ * a reference is made to the inX/outX set of macros. The SN
+ * platform inX macros ensure that posted DMA writes on the
+ * Bridge are flushed.
+ *
+ * The routines should be self-explanatory.
+ */
+
+static inline unsigned int
+___sn_inb (unsigned long port)
+{
+ volatile unsigned char *addr;
+ unsigned char ret = -1;
+
+ if ((addr = sn_io_addr(port))) {
+ ret = *addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ }
+ return ret;
+}
+
+static inline unsigned int
+___sn_inw (unsigned long port)
+{
+ volatile unsigned short *addr;
+ unsigned short ret = -1;
+
+ if ((addr = sn_io_addr(port))) {
+ ret = *addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ }
+ return ret;
+}
+
+static inline unsigned int
+___sn_inl (unsigned long port)
+{
+ volatile unsigned int *addr;
+ unsigned int ret = -1;
+
+ if ((addr = sn_io_addr(port))) {
+ ret = *addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ }
+ return ret;
+}
+
+static inline void
+___sn_outb (unsigned char val, unsigned long port)
+{
+ volatile unsigned char *addr;
+
+ if ((addr = sn_io_addr(port))) {
+ *addr = val;
+ sn_mmiob();
+ }
+}
+
+static inline void
+___sn_outw (unsigned short val, unsigned long port)
+{
+ volatile unsigned short *addr;
+
+ if ((addr = sn_io_addr(port))) {
+ *addr = val;
+ sn_mmiob();
+ }
+}
+
+static inline void
+___sn_outl (unsigned int val, unsigned long port)
+{
+ volatile unsigned int *addr;
+
+ if ((addr = sn_io_addr(port))) {
+ *addr = val;
+ sn_mmiob();
+ }
+}
+
+/*
+ * The following routines are SN platform specific, called when
+ * a reference is made to the readX/writeX set of macros. The SN
+ * platform readX macros ensure that posted DMA writes on the
+ * Bridge are flushed.
+ *
+ * The routines should be self-explanatory.
+ */
+
+static inline unsigned char
+___sn_readb (void *addr)
+{
+ unsigned char val;
+
+ val = *(volatile unsigned char *)addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ return val;
+}
+
+static inline unsigned short
+___sn_readw (void *addr)
+{
+ unsigned short val;
+
+ val = *(volatile unsigned short *)addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ return val;
+}
+
+static inline unsigned int
+___sn_readl (void *addr)
+{
+ unsigned int val;
+
+ val = *(volatile unsigned int *) addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ return val;
+}
+
+static inline unsigned long
+___sn_readq (void *addr)
+{
+ unsigned long val;
+
+ val = *(volatile unsigned long *) addr;
+ __sn_mf_a();
+ sn_dma_flush((unsigned long)addr);
+ return val;
+}
+
+/*
+ * For generic and SN2 kernels, we have a set of fast-access
+ * PIO macros. These macros are provided on the SN platform
+ * because the normal inX and readX macros perform the
+ * additional task of flushing posted DMA requests on the Bridge.
+ *
+ * These routines should be self-explanatory.
+ */
+
+static inline unsigned int
+sn_inb_fast (unsigned long port)
+{
+ volatile unsigned char *addr = (unsigned char *)port;
+ unsigned char ret;
+
+ ret = *addr;
+ __sn_mf_a();
+ return ret;
+}
+
+static inline unsigned int
+sn_inw_fast (unsigned long port)
+{
+ volatile unsigned short *addr = (unsigned short *)port;
+ unsigned short ret;
+
+ ret = *addr;
+ __sn_mf_a();
+ return ret;
+}
+
+static inline unsigned int
+sn_inl_fast (unsigned long port)
+{
+ volatile unsigned int *addr = (unsigned int *)port;
+ unsigned int ret;
+
+ ret = *addr;
+ __sn_mf_a();
+ return ret;
+}
+
+static inline unsigned char
+___sn_readb_relaxed (void *addr)
+{
+ return *(volatile unsigned char *)addr;
+}
+
+static inline unsigned short
+___sn_readw_relaxed (void *addr)
+{
+ return *(volatile unsigned short *)addr;
+}
+
+static inline unsigned int
+___sn_readl_relaxed (void *addr)
+{
+ return *(volatile unsigned int *) addr;
+}
+
+static inline unsigned long
+___sn_readq_relaxed (void *addr)
+{
+ return *(volatile unsigned long *) addr;
+}
+
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#ifndef _ASM_IA64_SN_SN2_SHUB_H
+#define _ASM_IA64_SN_SN2_SHUB_H
+
+/*
+ * Junk Bus Address Space
+ * The junk bus is used to access the PROM, LEDs, and UARTs. It's
+ * accessed through the local block MMR space. The data path is
+ * 16 bits wide. This space requires address bits 31-27 to be set, and
+ * is further divided by address bits 26:15.
+ * The LED addresses are write-only. To read the LEDs, you need to use
+ * SH_JUNK_BUS_LED0-3, defined in shub_mmr.h
+ *
+ */
+#define SH_REAL_JUNK_BUS_LED0 0x7fed00000
+#define SH_REAL_JUNK_BUS_LED1 0x7fed10000
+#define SH_REAL_JUNK_BUS_LED2 0x7fed20000
+#define SH_REAL_JUNK_BUS_LED3 0x7fed30000
+#define SH_JUNK_BUS_UART0 0x7fed40000
+#define SH_JUNK_BUS_UART1 0x7fed40008
+#define SH_JUNK_BUS_UART2 0x7fed40010
+#define SH_JUNK_BUS_UART3 0x7fed40018
+#define SH_JUNK_BUS_UART4 0x7fed40020
+#define SH_JUNK_BUS_UART5 0x7fed40028
+#define SH_JUNK_BUS_UART6 0x7fed40030
+#define SH_JUNK_BUS_UART7 0x7fed40038
+
+#endif /* _ASM_IA64_SN_SN2_SHUB_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001, 2002-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#ifndef _ASM_IA64_SN_SN2_SHUB_MD_H
+#define _ASM_IA64_SN_SN2_SHUB_MD_H
+
+/* SN2 supports a mostly-flat address space with 4 CPU-visible, evenly spaced,
+ contiguous regions, or "software banks". On SN2, software bank n begins at
+   addresses n * 16GB, 0 <= n < 4. Each bank has a 16GB address space. If
+   a bank's dimms do not fill this space there will be holes between the
+   banks. Even aside from these holes, not all of the memory within a
+   bank is addressable: the top 1/32 of each bank is directory memory
+   space and is accessible only through BIST.
+
+   Physically, a SN2 node board contains 2 daughter cards with 8 dimm
+   sockets each: a total of 16 dimm sockets arranged as 4 "DIMM banks"
+   of 4 dimms each. The data is striped across the 4 memory busses, so
+   all dimms within a dimm bank must have identical capacity. Memory is
+   increased or decreased in sets of 4. Each dimm bank has 2 dimms on
+   each side.
+
+ Physical Dimm Bank layout.
+ DTR Card0
+ ------------
+ Dimm Bank 3 | MemYL3 | CS 3
+ | MemXL3 |
+ |----------|
+ Dimm Bank 2 | MemYL2 | CS 2
+ | MemXL2 |
+ |----------|
+ Dimm Bank 1 | MemYL1 | CS 1
+ | MemXL1 |
+ |----------|
+ Dimm Bank 0 | MemYL0 | CS 0
+ | MemXL0 |
+ ------------
+ | |
+ BUS BUS
+ XL YL
+ | |
+ ------------
+ | SHUB |
+ | MD |
+ ------------
+ | |
+ BUS BUS
+ XR YR
+ | |
+ ------------
+ Dimm Bank 0 | MemXR0 | CS 0
+ | MemYR0 |
+ |----------|
+ Dimm Bank 1 | MemXR1 | CS 1
+ | MemYR1 |
+ |----------|
+ Dimm Bank 2 | MemXR2 | CS 2
+ | MemYR2 |
+ |----------|
+ Dimm Bank 3 | MemXR3 | CS 3
+ | MemYR3 |
+ ------------
+ DTR Card1
+
+   The dimms can be 1- or 2-sided. The size and number of sides are defined
+   separately for each dimm bank in the sh_[x,y,jnr]_dimm_cfg MMR register.
+
+   Normally, software bank 0 maps directly to physical dimm bank 0. The
+   software banks can be mapped to different physical dimm banks via the
+   DIMM[0-3]_CS field in SH_[x,y,jnr]_DIMM_CFG for each dimm slot.
+
+ All the PROM's data structures (promlog variables, klconfig, etc.)
+ track memory by the physical dimm bank number. The kernel usually
+ tracks memory by the software bank number.
+
+ */
+
+
+/* Preprocessor macros */
+#define MD_MEM_BANKS 4
+#define MD_PHYS_BANKS_PER_DIMM 2 /* dimms may be 2 sided. */
+#define MD_NUM_PHYS_BANKS (MD_MEM_BANKS * MD_PHYS_BANKS_PER_DIMM)
+#define MD_DIMMS_IN_SLOT 4 /* 4 dimms in each dimm bank. aka slot */
+
+/* Address bits 35,34 control dimm bank access. */
+#define MD_BANK_SHFT 34
+#define MD_BANK_MASK (UINT64_CAST 0x3 << MD_BANK_SHFT )
+#define MD_BANK_GET(addr) (((addr) & MD_BANK_MASK) >> MD_BANK_SHFT)
+#define MD_BANK_SIZE (UINT64_CAST 0x1 << MD_BANK_SHFT ) /* 16 gb */
+#define MD_BANK_OFFSET(_b) (UINT64_CAST (_b) << MD_BANK_SHFT)
+
+/* Address bit 12 selects the side of the dimm if 2-bank dimms are present. */
+#define MD_PHYS_BANK_SEL_SHFT 12
+#define MD_PHYS_BANK_SEL_MASK (UINT64_CAST 0x1 << MD_PHYS_BANK_SEL_SHFT)
+
+/* Address bit 7 determines if data resides on the X or Y memory system.
+ * If addr bit 7 is set, the data resides on the Y memory system and
+ * the corresponding directory entry resides on the X.
+ */
+#define MD_X_OR_Y_SEL_SHFT 7
+#define MD_X_OR_Y_SEL_MASK (1 << MD_X_OR_Y_SEL_SHFT)
+
+/* Address bit 8 determines which directory entry of the pair the address
+ * corresponds to. If addr bit 8 is set, DirB corresponds to the memory
+ * address.
+ */
+#define MD_DIRA_OR_DIRB_SEL_SHFT 8
+#define MD_DIRA_OR_DIRB_SEL_MASK (1 << MD_DIRA_OR_DIRB_SEL_SHFT)
+
+/* Address bit 11 determines if the corresponding directory entry resides
+ * on the Left or Right memory bus. If addr bit 11 is set, the
+ * corresponding directory entry resides on the Right memory bus.
+ */
+#define MD_L_OR_R_SEL_SHFT 11
+#define MD_L_OR_R_SEL_MASK (1 << MD_L_OR_R_SEL_SHFT)
+
+/* DRAM sizes. */
+#define MD_SZ_64_Mb 0x0
+#define MD_SZ_128_Mb 0x1
+#define MD_SZ_256_Mb 0x2
+#define MD_SZ_512_Mb 0x3
+#define MD_SZ_1024_Mb 0x4
+#define MD_SZ_2048_Mb 0x5
+#define MD_SZ_UNUSED 0x7
+
+#define MD_DIMM_SIZE_BYTES(_size, _2bk) ( \
+ ( (_size) == 7 ? 0 : ( 0x4000000L << (_size)) << (_2bk)))
+
+#define MD_DIMM_SIZE_MBYTES(_size, _2bk) ( \
+ ( (_size) == 7 ? 0 : ( 0x40L << (_size) ) << (_2bk)))
+
+/* The top 1/32 of each bank is directory memory, and not accessible
+ * via normal reads and writes */
+#define MD_DIMM_USER_SIZE(_size) ((_size) * 31 / 32)
+
+/* Minimum size of a populated bank is 64M (62M usable) */
+#define MIN_BANK_SIZE MD_DIMM_USER_SIZE((64 * 0x100000))
+#define MIN_BANK_STRING "62"
+
+
+/* Possible values for the FREQ field in the sh_[x,y,jnr]_dimm_cfg regs */
+#define MD_DIMM_100_CL2_0 0x0
+#define MD_DIMM_133_CL2_0 0x1
+#define MD_DIMM_133_CL2_5 0x2
+#define MD_DIMM_160_CL2_0 0x3
+#define MD_DIMM_160_CL2_5 0x4
+#define MD_DIMM_160_CL3_0 0x5
+#define MD_DIMM_200_CL2_0 0x6
+#define MD_DIMM_200_CL2_5 0x7
+#define MD_DIMM_200_CL3_0 0x8
+
+/* DIMM_CFG fields */
+#define MD_DIMM_SHFT(_dimm) ((_dimm) << 3)
+#define MD_DIMM_SIZE_MASK(_dimm) \
+ (SH_JNR_DIMM_CFG_DIMM0_SIZE_MASK << \
+ (MD_DIMM_SHFT(_dimm)))
+
+#define MD_DIMM_2BK_MASK(_dimm) \
+ (SH_JNR_DIMM_CFG_DIMM0_2BK_MASK << \
+ MD_DIMM_SHFT(_dimm))
+
+#define MD_DIMM_REV_MASK(_dimm) \
+ (SH_JNR_DIMM_CFG_DIMM0_REV_MASK << \
+ MD_DIMM_SHFT(_dimm))
+
+#define MD_DIMM_CS_MASK(_dimm) \
+ (SH_JNR_DIMM_CFG_DIMM0_CS_MASK << \
+ MD_DIMM_SHFT(_dimm))
+
+#define MD_DIMM_SIZE(_dimm, _cfg) \
+ (((_cfg) & MD_DIMM_SIZE_MASK(_dimm)) \
+ >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_SIZE_SHFT))
+
+#define MD_DIMM_TWO_SIDED(_dimm,_cfg) \
+ ( ((_cfg) & MD_DIMM_2BK_MASK(_dimm)) \
+ >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_2BK_SHFT))
+
+#define MD_DIMM_REVERSED(_dimm,_cfg) \
+ (((_cfg) & MD_DIMM_REV_MASK(_dimm)) \
+ >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_REV_SHFT))
+
+#define MD_DIMM_CS(_dimm,_cfg) \
+ (((_cfg) & MD_DIMM_CS_MASK(_dimm)) \
+ >> (MD_DIMM_SHFT(_dimm)+SH_JNR_DIMM_CFG_DIMM0_CS_SHFT))
+
+
+
+/* Macros to set MMRs that must be set identically to others. */
+#define MD_SET_DIMM_CFG(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_X_DIMM_CFG,_value); \
+ REMOTE_HUB_S(_n, SH_Y_DIMM_CFG, _value); \
+ REMOTE_HUB_S(_n, SH_JNR_DIMM_CFG, _value);}
+
+#define MD_SET_DQCT_CFG(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_X_DQCT_CFG,_value); \
+ REMOTE_HUB_S(_n, SH_Y_DQCT_CFG,_value); }
+
+#define MD_SET_CFG(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_X_CFG,_value); \
+ REMOTE_HUB_S(_n, SH_Y_CFG,_value);}
+
+#define MD_SET_REFRESH_CONTROL(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_X_REFRESH_CONTROL, _value); \
+ REMOTE_HUB_S(_n, SH_Y_REFRESH_CONTROL, _value);}
+
+#define MD_SET_DQ_MMR_DIR_COFIG(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_CONFIG, _value); \
+ REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_CONFIG, _value);}
+
+#define MD_SET_PIOWD_DIR_ENTRYS(_n, _value) { \
+ REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY, _value);\
+ REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY, _value);}
+
+/*
+ * There are 12 Node Presence MMRs, 4 in each primary DQ and 4 in the
+ * LB. The data in the left and right DQ MMRs and the LB must match.
+ */
+#define MD_SET_PRESENT_VEC(_n, _vec, _value) { \
+ REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_PRESVEC0+((_vec)*0x10),\
+ _value); \
+ REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_PRESVEC0+((_vec)*0x10),\
+ _value); \
+ REMOTE_HUB_S(_n, SH_SHUBS_PRESENT0+((_vec)*0x80), _value);}
+/*
+ * There are 16 Privilege Vector MMRs, 8 in each primary DQ. The data
+ * in the corresponding left and right DQ MMRs must match. Each MMR
+ * pair is used for a single partition.
+ */
+#define MD_SET_PRI_VEC(_n, _vec, _value) { \
+ REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_PRIVEC0+((_vec)*0x10),\
+ _value); \
+ REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_PRIVEC0+((_vec)*0x10),\
+ _value);}
+/*
+ * There are 16 Local/Remote MMRs, 8 in each primary DQ. The data in
+ * the corresponding left and right DQ MMRs must match. Each MMR pair
+ * is used for a single partition.
+ */
+#define MD_SET_LOC_VEC(_n, _vec, _value) { \
+ REMOTE_HUB_S(_n, SH_MD_DQLP_MMR_DIR_LOCVEC0+((_vec)*0x10),\
+ _value); \
+ REMOTE_HUB_S(_n, SH_MD_DQRP_MMR_DIR_LOCVEC0+((_vec)*0x10),\
+ _value);}
+
+/* Memory BIST CMDS */
+#define MD_DIMM_INIT_MODE_SET 0x0
+#define MD_DIMM_INIT_REFRESH 0x1
+#define MD_DIMM_INIT_PRECHARGE 0x2
+#define MD_DIMM_INIT_BURST_TERM 0x6
+#define MD_DIMM_INIT_NOP 0x7
+#define MD_DIMM_BIST_READ 0x10
+#define MD_FILL_DIR 0x20
+#define MD_FILL_DATA 0x30
+#define MD_FILL_DIR_ACCESS 0x40
+#define MD_READ_DIR_PAIR 0x50
+#define MD_READ_DIR_TAG 0x60
+
+/* SH_MMRBIST_CTL macros */
+#define MD_BIST_FAIL(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \
+ SH_MMRBIST_CTL_FAIL_MASK)
+
+#define MD_BIST_IN_PROGRESS(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \
+ SH_MMRBIST_CTL_IN_PROGRESS_MASK)
+
+#define MD_BIST_MEM_IDLE(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_CTL) & \
+ SH_MMRBIST_CTL_MEM_IDLE_MASK)
+
+/* SH_MMRBIST_ERR macros */
+#define MD_BIST_MISCOMPARE(_n) (REMOTE_HUB_L(_n, SH_MMRBIST_ERR) & \
+ SH_MMRBIST_ERR_DETECTED_MASK)
+
+#endif /* _ASM_IA64_SN_SN2_SHUB_MD_H */
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+
+#ifndef _ASM_IA64_SN_SN2_SHUB_MMR_H
+#define _ASM_IA64_SN_SN2_SHUB_MMR_H
+
+/* ==================================================================== */
+/* Register "SH_FSB_BINIT_CONTROL" */
+/* FSB BINIT# Control */
+/* ==================================================================== */
+
+#define SH_FSB_BINIT_CONTROL 0x0000000120010000
+#define SH_FSB_BINIT_CONTROL_MASK 0x0000000000000001
+#define SH_FSB_BINIT_CONTROL_INIT 0x0000000000000000
+
+/* SH_FSB_BINIT_CONTROL_BINIT */
+/* Description: Assert the FSB's BINIT# Signal */
+#define SH_FSB_BINIT_CONTROL_BINIT_SHFT 0
+#define SH_FSB_BINIT_CONTROL_BINIT_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_FSB_RESET_CONTROL" */
+/* FSB Reset Control */
+/* ==================================================================== */
+
+#define SH_FSB_RESET_CONTROL 0x0000000120010080
+#define SH_FSB_RESET_CONTROL_MASK 0x0000000000000001
+#define SH_FSB_RESET_CONTROL_INIT 0x0000000000000000
+
+/* SH_FSB_RESET_CONTROL_RESET */
+/* Description: Assert the FSB's RESET# Signal */
+#define SH_FSB_RESET_CONTROL_RESET_SHFT 0
+#define SH_FSB_RESET_CONTROL_RESET_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_FSB_SYSTEM_AGENT_CONFIG" */
+/* FSB System Agent Configuration */
+/* ==================================================================== */
+
+#define SH_FSB_SYSTEM_AGENT_CONFIG 0x0000000120010100
+#define SH_FSB_SYSTEM_AGENT_CONFIG_MASK 0x00003fff0187fff9
+#define SH_FSB_SYSTEM_AGENT_CONFIG_INIT 0x0000000000000000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN */
+/* Description: RCNT/SCNT Assertion Enabled */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN_SHFT 0
+#define SH_FSB_SYSTEM_AGENT_CONFIG_RCNT_SCNT_EN_MASK 0x0000000000000001
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN */
+/* Description: BERR Assertion Enabled for Bus Errors */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN_SHFT 3
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_ASSERT_EN_MASK 0x0000000000000008
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN */
+/* Description: BERR Sampling Enabled */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN_SHFT 4
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BERR_SAMPLING_EN_MASK 0x0000000000000010
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN */
+/* Description: BINIT Assertion Enabled */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN_SHFT 5
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_ASSERT_EN_MASK 0x0000000000000020
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN */
+/* Description: stutter FSB request assertion */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN_SHFT 6
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BNR_THROTTLING_EN_MASK 0x0000000000000040
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN */
+/* Description: use short duration hang timeout */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN_SHFT 7
+#define SH_FSB_SYSTEM_AGENT_CONFIG_SHORT_HANG_EN_MASK 0x0000000000000080
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA */
+/* Description: Interrupt Acknowledge Response Data */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA_SHFT 8
+#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_RSP_DATA_MASK 0x000000000000ff00
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP */
+/* Description: IO Transaction Response */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP_SHFT 16
+#define SH_FSB_SYSTEM_AGENT_CONFIG_IO_TRANS_RSP_MASK 0x0000000000010000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP */
+/* Description: External Task Priority Register (xTPR) Transaction */
+/* Response */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP_SHFT 17
+#define SH_FSB_SYSTEM_AGENT_CONFIG_XTPR_TRANS_RSP_MASK 0x0000000000020000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP */
+/* Description: Interrupt Acknowledge Transaction Response */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP_SHFT 18
+#define SH_FSB_SYSTEM_AGENT_CONFIG_INTA_TRANS_RSP_MASK 0x0000000000040000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_TDOT */
+/* Description: Throttle Data-bus Ownership Transitions */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_TDOT_SHFT 23
+#define SH_FSB_SYSTEM_AGENT_CONFIG_TDOT_MASK 0x0000000000800000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN */
+/* Description: serialize processor transactions */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN_SHFT 24
+#define SH_FSB_SYSTEM_AGENT_CONFIG_SERIALIZE_FSB_EN_MASK 0x0000000001000000
+
+/* SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES */
+/* Description: FSB error binit enables */
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES_SHFT 32
+#define SH_FSB_SYSTEM_AGENT_CONFIG_BINIT_EVENT_ENABLES_MASK 0x00003fff00000000
+
+/* ==================================================================== */
+/* Register "SH_FSB_VGA_REMAP" */
+/* FSB VGA Address Space Remap */
+/* ==================================================================== */
+
+#define SH_FSB_VGA_REMAP 0x0000000120010180
+#define SH_FSB_VGA_REMAP_MASK 0x4001fffffffe0000
+#define SH_FSB_VGA_REMAP_INIT 0x0000000000000000
+
+/* SH_FSB_VGA_REMAP_OFFSET */
+/* Description: VGA Remap Node Offset */
+#define SH_FSB_VGA_REMAP_OFFSET_SHFT 17
+#define SH_FSB_VGA_REMAP_OFFSET_MASK 0x0000000ffffe0000
+
+/* SH_FSB_VGA_REMAP_ASID */
+/* Description: VGA Remap Address Space ID */
+#define SH_FSB_VGA_REMAP_ASID_SHFT 36
+#define SH_FSB_VGA_REMAP_ASID_MASK 0x0000003000000000
+
+/* SH_FSB_VGA_REMAP_NID */
+/* Description: VGA Remap Node ID */
+#define SH_FSB_VGA_REMAP_NID_SHFT 38
+#define SH_FSB_VGA_REMAP_NID_MASK 0x0001ffc000000000
+
+/* SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED */
+/* Description: VGA Remapping Enabled */
+#define SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED_SHFT 62
+#define SH_FSB_VGA_REMAP_VGA_REMAPPING_ENABLED_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_FSB_RESET_STATUS" */
+/* FSB Reset Status */
+/* ==================================================================== */
+
+#define SH_FSB_RESET_STATUS 0x0000000120020000
+#define SH_FSB_RESET_STATUS_MASK 0x0000000000000001
+#define SH_FSB_RESET_STATUS_INIT 0x0000000000000000
+
+/* SH_FSB_RESET_STATUS_RESET_IN_PROGRESS */
+/* Description: Reset in Progress */
+#define SH_FSB_RESET_STATUS_RESET_IN_PROGRESS_SHFT 0
+#define SH_FSB_RESET_STATUS_RESET_IN_PROGRESS_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_FSB_SYMMETRIC_AGENT_STATUS" */
+/* FSB Symmetric Agent Status */
+/* ==================================================================== */
+
+#define SH_FSB_SYMMETRIC_AGENT_STATUS 0x0000000120020080
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_MASK 0x0000000000000007
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_INIT 0x0000000000000000
+
+/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE */
+/* Description: CPU 0 Active. */
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE_SHFT 0
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_0_ACTIVE_MASK 0x0000000000000001
+
+/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE */
+/* Description: CPU 1 Active. */
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE_SHFT 1
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPU_1_ACTIVE_MASK 0x0000000000000002
+
+/* SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY */
+/* Description: The Processors are Ready */
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY_SHFT 2
+#define SH_FSB_SYMMETRIC_AGENT_STATUS_CPUS_READY_MASK 0x0000000000000004
+
+/* ==================================================================== */
+/* Register "SH_GFX_CREDIT_COUNT_0" */
+/* Graphics-write Credit Count for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_CREDIT_COUNT_0 0x0000000120030000
+#define SH_GFX_CREDIT_COUNT_0_MASK 0x80000000000fffff
+#define SH_GFX_CREDIT_COUNT_0_INIT 0x000000000000003f
+
+/* SH_GFX_CREDIT_COUNT_0_COUNT */
+/* Description: Credit Count */
+#define SH_GFX_CREDIT_COUNT_0_COUNT_SHFT 0
+#define SH_GFX_CREDIT_COUNT_0_COUNT_MASK 0x00000000000fffff
+
+/* SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE */
+/* Description: Reset GFX state */
+#define SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE_SHFT 63
+#define SH_GFX_CREDIT_COUNT_0_RESET_GFX_STATE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_GFX_CREDIT_COUNT_1" */
+/* Graphics-write Credit Count for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_CREDIT_COUNT_1 0x0000000120030080
+#define SH_GFX_CREDIT_COUNT_1_MASK 0x80000000000fffff
+#define SH_GFX_CREDIT_COUNT_1_INIT 0x000000000000003f
+
+/* SH_GFX_CREDIT_COUNT_1_COUNT */
+/* Description: Credit Count */
+#define SH_GFX_CREDIT_COUNT_1_COUNT_SHFT 0
+#define SH_GFX_CREDIT_COUNT_1_COUNT_MASK 0x00000000000fffff
+
+/* SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE */
+/* Description: Reset GFX state */
+#define SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE_SHFT 63
+#define SH_GFX_CREDIT_COUNT_1_RESET_GFX_STATE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_GFX_MODE_CNTRL_0" */
+/*          Graphics credit mode and message ordering for CPU 0         */
+/* ==================================================================== */
+
+#define SH_GFX_MODE_CNTRL_0 0x0000000120030100
+#define SH_GFX_MODE_CNTRL_0_MASK 0x0000000000000007
+#define SH_GFX_MODE_CNTRL_0_INIT 0x0000000000000003
+
+/* SH_GFX_MODE_CNTRL_0_DWORD_CREDITS */
+/* Description: GFX credits are tracked by D-words */
+#define SH_GFX_MODE_CNTRL_0_DWORD_CREDITS_SHFT 0
+#define SH_GFX_MODE_CNTRL_0_DWORD_CREDITS_MASK 0x0000000000000001
+
+/* SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS */
+/* Description: GFX credits are tracked by D-words and messages */
+#define SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS_SHFT 1
+#define SH_GFX_MODE_CNTRL_0_MIXED_MODE_CREDITS_MASK 0x0000000000000002
+
+/* SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING */
+/* Description: GFX message routing order */
+#define SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING_SHFT 2
+#define SH_GFX_MODE_CNTRL_0_RELAXED_ORDERING_MASK 0x0000000000000004
+
+/* ==================================================================== */
+/* Register "SH_GFX_MODE_CNTRL_1" */
+/*          Graphics credit mode and message ordering for CPU 1         */
+/* ==================================================================== */
+
+#define SH_GFX_MODE_CNTRL_1 0x0000000120030180
+#define SH_GFX_MODE_CNTRL_1_MASK 0x0000000000000007
+#define SH_GFX_MODE_CNTRL_1_INIT 0x0000000000000003
+
+/* SH_GFX_MODE_CNTRL_1_DWORD_CREDITS */
+/* Description: GFX credits are tracked by D-words */
+#define SH_GFX_MODE_CNTRL_1_DWORD_CREDITS_SHFT 0
+#define SH_GFX_MODE_CNTRL_1_DWORD_CREDITS_MASK 0x0000000000000001
+
+/* SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS */
+/* Description: GFX credits are tracked by D-words and messages */
+#define SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS_SHFT 1
+#define SH_GFX_MODE_CNTRL_1_MIXED_MODE_CREDITS_MASK 0x0000000000000002
+
+/* SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING */
+/* Description: GFX message routing order */
+#define SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING_SHFT 2
+#define SH_GFX_MODE_CNTRL_1_RELAXED_ORDERING_MASK 0x0000000000000004
+
+/* ==================================================================== */
+/* Register "SH_GFX_SKID_CREDIT_COUNT_0" */
+/* Graphics-write Skid Credit Count for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_SKID_CREDIT_COUNT_0 0x0000000120030200
+#define SH_GFX_SKID_CREDIT_COUNT_0_MASK 0x00000000000fffff
+#define SH_GFX_SKID_CREDIT_COUNT_0_INIT 0x0000000000000030
+
+/* SH_GFX_SKID_CREDIT_COUNT_0_SKID */
+/* Description: Skid Credit Count */
+#define SH_GFX_SKID_CREDIT_COUNT_0_SKID_SHFT 0
+#define SH_GFX_SKID_CREDIT_COUNT_0_SKID_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_SKID_CREDIT_COUNT_1" */
+/* Graphics-write Skid Credit Count for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_SKID_CREDIT_COUNT_1 0x0000000120030280
+#define SH_GFX_SKID_CREDIT_COUNT_1_MASK 0x00000000000fffff
+#define SH_GFX_SKID_CREDIT_COUNT_1_INIT 0x0000000000000030
+
+/* SH_GFX_SKID_CREDIT_COUNT_1_SKID */
+/* Description: Skid Credit Count */
+#define SH_GFX_SKID_CREDIT_COUNT_1_SKID_SHFT 0
+#define SH_GFX_SKID_CREDIT_COUNT_1_SKID_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_LIMIT_0" */
+/* Graphics-write Stall Limit for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_STALL_LIMIT_0 0x0000000120030300
+#define SH_GFX_STALL_LIMIT_0_MASK 0x0000000003ffffff
+#define SH_GFX_STALL_LIMIT_0_INIT 0x0000000000010000
+
+/* SH_GFX_STALL_LIMIT_0_LIMIT */
+/* Description: Graphics Stall Limit for CPU 0 */
+#define SH_GFX_STALL_LIMIT_0_LIMIT_SHFT 0
+#define SH_GFX_STALL_LIMIT_0_LIMIT_MASK 0x0000000003ffffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_LIMIT_1" */
+/* Graphics-write Stall Limit for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_STALL_LIMIT_1 0x0000000120030380
+#define SH_GFX_STALL_LIMIT_1_MASK 0x0000000003ffffff
+#define SH_GFX_STALL_LIMIT_1_INIT 0x0000000000010000
+
+/* SH_GFX_STALL_LIMIT_1_LIMIT */
+/* Description: Graphics Stall Limit for CPU 1 */
+#define SH_GFX_STALL_LIMIT_1_LIMIT_SHFT 0
+#define SH_GFX_STALL_LIMIT_1_LIMIT_MASK 0x0000000003ffffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_TIMER_0" */
+/* Graphics-write Stall Timer for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_STALL_TIMER_0 0x0000000120030400
+#define SH_GFX_STALL_TIMER_0_MASK 0x0000000003ffffff
+#define SH_GFX_STALL_TIMER_0_INIT 0x0000000000000000
+
+/* SH_GFX_STALL_TIMER_0_TIMER_VALUE */
+/* Description: Timer Value */
+#define SH_GFX_STALL_TIMER_0_TIMER_VALUE_SHFT 0
+#define SH_GFX_STALL_TIMER_0_TIMER_VALUE_MASK 0x0000000003ffffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_TIMER_1" */
+/* Graphics-write Stall Timer for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_STALL_TIMER_1 0x0000000120030480
+#define SH_GFX_STALL_TIMER_1_MASK 0x0000000003ffffff
+#define SH_GFX_STALL_TIMER_1_INIT 0x0000000000000000
+
+/* SH_GFX_STALL_TIMER_1_TIMER_VALUE */
+/* Description: Timer Value */
+#define SH_GFX_STALL_TIMER_1_TIMER_VALUE_SHFT 0
+#define SH_GFX_STALL_TIMER_1_TIMER_VALUE_MASK 0x0000000003ffffff
+
+/* ==================================================================== */
+/* Register "SH_GFX_WINDOW_0" */
+/* Graphics-write Window for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_WINDOW_0 0x0000000120030500
+#define SH_GFX_WINDOW_0_MASK 0x8000000fff000000
+#define SH_GFX_WINDOW_0_INIT 0x0000000000000000
+
+/* SH_GFX_WINDOW_0_BASE_ADDR */
+/* Description: Base Address for CPU 0's 16 MB Graphics Window */
+#define SH_GFX_WINDOW_0_BASE_ADDR_SHFT 24
+#define SH_GFX_WINDOW_0_BASE_ADDR_MASK 0x0000000fff000000
+
+/* SH_GFX_WINDOW_0_GFX_WINDOW_EN */
+/* Description: Graphics Window Enabled */
+#define SH_GFX_WINDOW_0_GFX_WINDOW_EN_SHFT 63
+#define SH_GFX_WINDOW_0_GFX_WINDOW_EN_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_GFX_WINDOW_1" */
+/* Graphics-write Window for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_WINDOW_1 0x0000000120030580
+#define SH_GFX_WINDOW_1_MASK 0x8000000fff000000
+#define SH_GFX_WINDOW_1_INIT 0x0000000000000000
+
+/* SH_GFX_WINDOW_1_BASE_ADDR */
+/* Description: Base Address for CPU 1's 16 MB Graphics Window */
+#define SH_GFX_WINDOW_1_BASE_ADDR_SHFT 24
+#define SH_GFX_WINDOW_1_BASE_ADDR_MASK 0x0000000fff000000
+
+/* SH_GFX_WINDOW_1_GFX_WINDOW_EN */
+/* Description: Graphics Window Enabled */
+#define SH_GFX_WINDOW_1_GFX_WINDOW_EN_SHFT 63
+#define SH_GFX_WINDOW_1_GFX_WINDOW_EN_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_0" */
+/* Graphics-write Interrupt Limit for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_0 0x0000000120030600
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_MASK 0x00000000000000ff
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INIT 0x0000000000000040
+
+/* SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT */
+/* Description: GFX Interrupt Timer Limit */
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT_SHFT 0
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_0_INTERRUPT_TIMER_LIMIT_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_1" */
+/* Graphics-write Interrupt Limit for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_1 0x0000000120030680
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_MASK 0x00000000000000ff
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INIT 0x0000000000000040
+
+/* SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT */
+/* Description: GFX Interrupt Timer Limit */
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT_SHFT 0
+#define SH_GFX_INTERRUPT_TIMER_LIMIT_1_INTERRUPT_TIMER_LIMIT_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_GFX_WRITE_STATUS_0" */
+/* Graphics Write Status for CPU 0 */
+/* ==================================================================== */
+
+#define SH_GFX_WRITE_STATUS_0 0x0000000120040000
+#define SH_GFX_WRITE_STATUS_0_MASK 0x8000000000000001
+#define SH_GFX_WRITE_STATUS_0_INIT 0x0000000000000000
+
+/* SH_GFX_WRITE_STATUS_0_BUSY */
+/* Description: Busy */
+#define SH_GFX_WRITE_STATUS_0_BUSY_SHFT 0
+#define SH_GFX_WRITE_STATUS_0_BUSY_MASK 0x0000000000000001
+
+/* SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL */
+/* Description: Re-enable GFX stall logic for this processor */
+#define SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL_SHFT 63
+#define SH_GFX_WRITE_STATUS_0_RE_ENABLE_GFX_STALL_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_GFX_WRITE_STATUS_1" */
+/* Graphics Write Status for CPU 1 */
+/* ==================================================================== */
+
+#define SH_GFX_WRITE_STATUS_1 0x0000000120040080
+#define SH_GFX_WRITE_STATUS_1_MASK 0x8000000000000001
+#define SH_GFX_WRITE_STATUS_1_INIT 0x0000000000000000
+
+/* SH_GFX_WRITE_STATUS_1_BUSY */
+/* Description: Busy */
+#define SH_GFX_WRITE_STATUS_1_BUSY_SHFT 0
+#define SH_GFX_WRITE_STATUS_1_BUSY_MASK 0x0000000000000001
+
+/* SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL */
+/* Description: Re-enable GFX stall logic for this processor */
+#define SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL_SHFT 63
+#define SH_GFX_WRITE_STATUS_1_RE_ENABLE_GFX_STALL_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_II_INT0" */
+/* SHub II Interrupt 0 Registers */
+/* ==================================================================== */
+
+#define SH_II_INT0 0x0000000110000000
+#define SH_II_INT0_MASK 0x00000000000001ff
+#define SH_II_INT0_INIT 0x0000000000000000
+
+/* SH_II_INT0_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_II_INT0_IDX_SHFT 0
+#define SH_II_INT0_IDX_MASK 0x00000000000000ff
+
+/* SH_II_INT0_SEND */
+/* Description: Send Interrupt Message to PI. This generates a pulse. */
+#define SH_II_INT0_SEND_SHFT 8
+#define SH_II_INT0_SEND_MASK 0x0000000000000100
+
+/* ==================================================================== */
+/* Register "SH_II_INT0_CONFIG" */
+/* SHub II Interrupt 0 Config Registers */
+/* ==================================================================== */
+
+#define SH_II_INT0_CONFIG 0x0000000110000080
+#define SH_II_INT0_CONFIG_MASK 0x0003ffffffefffff
+#define SH_II_INT0_CONFIG_INIT 0x0000000000000000
+
+/* SH_II_INT0_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_II_INT0_CONFIG_TYPE_SHFT 0
+#define SH_II_INT0_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_II_INT0_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_II_INT0_CONFIG_AGT_SHFT 3
+#define SH_II_INT0_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_II_INT0_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_II_INT0_CONFIG_PID_SHFT 4
+#define SH_II_INT0_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_II_INT0_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_II_INT0_CONFIG_BASE_SHFT 21
+#define SH_II_INT0_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* ==================================================================== */
+/* Register "SH_II_INT0_ENABLE" */
+/* SHub II Interrupt 0 Enable Registers */
+/* ==================================================================== */
+
+#define SH_II_INT0_ENABLE 0x0000000110000200
+#define SH_II_INT0_ENABLE_MASK 0x0000000000000001
+#define SH_II_INT0_ENABLE_INIT 0x0000000000000000
+
+/* SH_II_INT0_ENABLE_II_ENABLE */
+/* Description: Enable II Interrupt */
+#define SH_II_INT0_ENABLE_II_ENABLE_SHFT 0
+#define SH_II_INT0_ENABLE_II_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_II_INT1" */
+/* SHub II Interrupt 1 Registers */
+/* ==================================================================== */
+
+#define SH_II_INT1 0x0000000110000100
+#define SH_II_INT1_MASK 0x00000000000001ff
+#define SH_II_INT1_INIT 0x0000000000000000
+
+/* SH_II_INT1_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_II_INT1_IDX_SHFT 0
+#define SH_II_INT1_IDX_MASK 0x00000000000000ff
+
+/* SH_II_INT1_SEND */
+/* Description: Send Interrupt Message to PI; this generates a pulse */
+#define SH_II_INT1_SEND_SHFT 8
+#define SH_II_INT1_SEND_MASK 0x0000000000000100
+
+/* ==================================================================== */
+/* Register "SH_II_INT1_CONFIG" */
+/* SHub II Interrupt 1 Config Registers */
+/* ==================================================================== */
+
+#define SH_II_INT1_CONFIG 0x0000000110000180
+#define SH_II_INT1_CONFIG_MASK 0x0003ffffffefffff
+#define SH_II_INT1_CONFIG_INIT 0x0000000000000000
+
+/* SH_II_INT1_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_II_INT1_CONFIG_TYPE_SHFT 0
+#define SH_II_INT1_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_II_INT1_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_II_INT1_CONFIG_AGT_SHFT 3
+#define SH_II_INT1_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_II_INT1_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_II_INT1_CONFIG_PID_SHFT 4
+#define SH_II_INT1_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_II_INT1_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_II_INT1_CONFIG_BASE_SHFT 21
+#define SH_II_INT1_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* ==================================================================== */
+/* Register "SH_II_INT1_ENABLE" */
+/* SHub II Interrupt 1 Enable Registers */
+/* ==================================================================== */
+
+#define SH_II_INT1_ENABLE 0x0000000110000280
+#define SH_II_INT1_ENABLE_MASK 0x0000000000000001
+#define SH_II_INT1_ENABLE_INIT 0x0000000000000000
+
+/* SH_II_INT1_ENABLE_II_ENABLE */
+/* Description: Enable II Interrupt 1 */
+#define SH_II_INT1_ENABLE_II_ENABLE_SHFT 0
+#define SH_II_INT1_ENABLE_II_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_INT_NODE_ID_CONFIG" */
+/* SHub Interrupt Node ID Configuration */
+/* ==================================================================== */
+
+#define SH_INT_NODE_ID_CONFIG 0x0000000110000300
+#define SH_INT_NODE_ID_CONFIG_MASK 0x0000000000000fff
+#define SH_INT_NODE_ID_CONFIG_INIT 0x0000000000000000
+
+/* SH_INT_NODE_ID_CONFIG_NODE_ID */
+/* Description: Node ID for interrupt messages */
+#define SH_INT_NODE_ID_CONFIG_NODE_ID_SHFT 0
+#define SH_INT_NODE_ID_CONFIG_NODE_ID_MASK 0x00000000000007ff
+
+/* SH_INT_NODE_ID_CONFIG_ID_SEL */
+/* Description: Select node id for interrupt messages */
+#define SH_INT_NODE_ID_CONFIG_ID_SEL_SHFT 11
+#define SH_INT_NODE_ID_CONFIG_ID_SEL_MASK 0x0000000000000800
+
+/* ==================================================================== */
+/* Register "SH_IPI_INT" */
+/* SHub Inter-Processor Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_IPI_INT 0x0000000110000380
+#define SH_IPI_INT_MASK 0x8ff3ffffffefffff
+#define SH_IPI_INT_INIT 0x0000000000000000
+
+/* SH_IPI_INT_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_IPI_INT_TYPE_SHFT 0
+#define SH_IPI_INT_TYPE_MASK 0x0000000000000007
+
+/* SH_IPI_INT_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_IPI_INT_AGT_SHFT 3
+#define SH_IPI_INT_AGT_MASK 0x0000000000000008
+
+/* SH_IPI_INT_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_IPI_INT_PID_SHFT 4
+#define SH_IPI_INT_PID_MASK 0x00000000000ffff0
+
+/* SH_IPI_INT_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_IPI_INT_BASE_SHFT 21
+#define SH_IPI_INT_BASE_MASK 0x0003ffffffe00000
+
+/* SH_IPI_INT_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_IPI_INT_IDX_SHFT 52
+#define SH_IPI_INT_IDX_MASK 0x0ff0000000000000
+
+/* SH_IPI_INT_SEND */
+/* Description: Send Interrupt Message to PI; this generates a pulse */
+#define SH_IPI_INT_SEND_SHFT 63
+#define SH_IPI_INT_SEND_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_IPI_INT_ENABLE" */
+/* SHub Inter-Processor Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_IPI_INT_ENABLE 0x0000000110000400
+#define SH_IPI_INT_ENABLE_MASK 0x0000000000000001
+#define SH_IPI_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_IPI_INT_ENABLE_PIO_ENABLE */
+/* Description: Enable PIO Interrupt */
+#define SH_IPI_INT_ENABLE_PIO_ENABLE_SHFT 0
+#define SH_IPI_INT_ENABLE_PIO_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT0_CONFIG" */
+/* SHub Local Interrupt 0 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT0_CONFIG 0x0000000110000480
+#define SH_LOCAL_INT0_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT0_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT0_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT0_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT0_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT0_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT0_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT0_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT0_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT0_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT0_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT0_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT0_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT0_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT0_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT0_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT0_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT0_ENABLE" */
+/* SHub Local Interrupt 0 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT0_ENABLE 0x0000000110000500
+#define SH_LOCAL_INT0_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT0_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT0_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT0_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT0_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT0_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT0_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT0_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT0_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT0_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT0_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT0_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT0_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT0_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT0_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT0_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT0_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT0_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT0_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT0_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT0_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT0_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT0_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT0_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT0_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT0_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT0_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT0_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT0_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT0_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT0_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT0_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT0_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT0_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT0_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT0_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT0_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT0_ENABLE_STOP_CLOCK */
+/* Description: Stop Clock Interrupt */
+#define SH_LOCAL_INT0_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT0_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT1_CONFIG" */
+/* SHub Local Interrupt 1 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT1_CONFIG 0x0000000110000580
+#define SH_LOCAL_INT1_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT1_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT1_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT1_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT1_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT1_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT1_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT1_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT1_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT1_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT1_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT1_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT1_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT1_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT1_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT1_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT1_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT1_ENABLE" */
+/* SHub Local Interrupt 1 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT1_ENABLE 0x0000000110000600
+#define SH_LOCAL_INT1_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT1_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT1_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT1_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT1_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT1_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT1_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT1_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT1_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT1_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT1_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT1_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT1_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT1_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT1_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT1_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT1_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT1_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT1_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT1_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT1_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT1_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT1_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT1_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT1_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT1_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT1_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT1_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT1_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT1_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT1_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT1_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT1_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT1_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT1_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT1_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT1_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT1_ENABLE_STOP_CLOCK */
+/* Description: Stop Clock Interrupt */
+#define SH_LOCAL_INT1_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT1_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT2_CONFIG" */
+/* SHub Local Interrupt 2 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT2_CONFIG 0x0000000110000680
+#define SH_LOCAL_INT2_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT2_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT2_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT2_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT2_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT2_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT2_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT2_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT2_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT2_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT2_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT2_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT2_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT2_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT2_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT2_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT2_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT2_ENABLE" */
+/* SHub Local Interrupt 2 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT2_ENABLE 0x0000000110000700
+#define SH_LOCAL_INT2_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT2_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT2_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT2_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT2_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT2_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT2_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT2_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT2_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT2_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT2_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT2_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT2_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT2_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT2_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT2_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT2_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT2_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT2_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT2_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT2_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT2_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT2_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT2_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT2_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT2_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT2_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT2_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT2_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT2_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT2_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT2_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT2_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT2_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT2_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT2_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT2_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT2_ENABLE_STOP_CLOCK */
+/* Description: Stop Clock Interrupt */
+#define SH_LOCAL_INT2_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT2_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT3_CONFIG" */
+/* SHub Local Interrupt 3 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT3_CONFIG 0x0000000110000780
+#define SH_LOCAL_INT3_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT3_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT3_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT3_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT3_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT3_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT3_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT3_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT3_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT3_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT3_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT3_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT3_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT3_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT3_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT3_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT3_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT3_ENABLE" */
+/* SHub Local Interrupt 3 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT3_ENABLE 0x0000000110000800
+#define SH_LOCAL_INT3_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT3_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT3_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT3_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT3_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT3_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT3_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT3_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT3_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT3_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT3_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT3_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT3_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT3_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT3_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT3_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT3_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT3_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT3_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT3_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT3_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT3_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT3_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT3_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT3_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT3_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT3_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT3_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT3_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT3_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT3_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT3_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT3_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT3_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT3_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT3_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT3_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT3_ENABLE_STOP_CLOCK */
+/* Description: Stop Clock Interrupt */
+#define SH_LOCAL_INT3_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT3_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT4_CONFIG" */
+/* SHub Local Interrupt 4 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT4_CONFIG 0x0000000110000880
+#define SH_LOCAL_INT4_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT4_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT4_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT4_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT4_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT4_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT4_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT4_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT4_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT4_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT4_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT4_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT4_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT4_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT4_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT4_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT4_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT4_ENABLE" */
+/* SHub Local Interrupt 4 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT4_ENABLE 0x0000000110000900
+#define SH_LOCAL_INT4_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT4_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT4_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT4_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT4_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT4_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT4_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT4_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT4_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT4_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT4_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT4_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT4_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT4_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT4_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT4_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT4_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT4_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT4_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT4_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT4_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT4_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT4_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT4_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT4_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT4_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT4_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT4_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT4_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT4_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT4_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT4_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT4_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT4_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT4_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT4_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT4_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT4_ENABLE_STOP_CLOCK */
+/* Description: Stop Clock Interrupt */
+#define SH_LOCAL_INT4_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT4_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT5_CONFIG" */
+/* SHub Local Interrupt 5 Registers */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT5_CONFIG 0x0000000110000980
+#define SH_LOCAL_INT5_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_LOCAL_INT5_CONFIG_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT5_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_LOCAL_INT5_CONFIG_TYPE_SHFT 0
+#define SH_LOCAL_INT5_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_LOCAL_INT5_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_LOCAL_INT5_CONFIG_AGT_SHFT 3
+#define SH_LOCAL_INT5_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT5_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_LOCAL_INT5_CONFIG_PID_SHFT 4
+#define SH_LOCAL_INT5_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_LOCAL_INT5_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_LOCAL_INT5_CONFIG_BASE_SHFT 21
+#define SH_LOCAL_INT5_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_LOCAL_INT5_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_LOCAL_INT5_CONFIG_IDX_SHFT 52
+#define SH_LOCAL_INT5_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT5_ENABLE" */
+/* SHub Local Interrupt 5 Enable */
+/* ==================================================================== */
+
+#define SH_LOCAL_INT5_ENABLE 0x0000000110000a00
+#define SH_LOCAL_INT5_ENABLE_MASK 0x000000000000f7ff
+#define SH_LOCAL_INT5_ENABLE_INIT 0x0000000000000000
+
+/* SH_LOCAL_INT5_ENABLE_PI_HW_INT */
+/* Description: Enable PI Hardware interrupt */
+#define SH_LOCAL_INT5_ENABLE_PI_HW_INT_SHFT 0
+#define SH_LOCAL_INT5_ENABLE_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_LOCAL_INT5_ENABLE_MD_HW_INT */
+/* Description: Enable MD Hardware interrupt */
+#define SH_LOCAL_INT5_ENABLE_MD_HW_INT_SHFT 1
+#define SH_LOCAL_INT5_ENABLE_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_LOCAL_INT5_ENABLE_XN_HW_INT */
+/* Description: Enable XN Hardware interrupt */
+#define SH_LOCAL_INT5_ENABLE_XN_HW_INT_SHFT 2
+#define SH_LOCAL_INT5_ENABLE_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_LOCAL_INT5_ENABLE_LB_HW_INT */
+/* Description: Enable LB Hardware interrupt */
+#define SH_LOCAL_INT5_ENABLE_LB_HW_INT_SHFT 3
+#define SH_LOCAL_INT5_ENABLE_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_LOCAL_INT5_ENABLE_II_HW_INT */
+/* Description: Enable II wrapper Hardware interrupt */
+#define SH_LOCAL_INT5_ENABLE_II_HW_INT_SHFT 4
+#define SH_LOCAL_INT5_ENABLE_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_LOCAL_INT5_ENABLE_PI_CE_INT */
+/* Description: Enable PI Correctable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_PI_CE_INT_SHFT 5
+#define SH_LOCAL_INT5_ENABLE_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_LOCAL_INT5_ENABLE_MD_CE_INT */
+/* Description: Enable MD Correctable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_MD_CE_INT_SHFT 6
+#define SH_LOCAL_INT5_ENABLE_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_LOCAL_INT5_ENABLE_XN_CE_INT */
+/* Description: Enable XN Correctable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_XN_CE_INT_SHFT 7
+#define SH_LOCAL_INT5_ENABLE_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_LOCAL_INT5_ENABLE_PI_UCE_INT */
+/* Description: Enable PI Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_PI_UCE_INT_SHFT 8
+#define SH_LOCAL_INT5_ENABLE_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_LOCAL_INT5_ENABLE_MD_UCE_INT */
+/* Description: Enable MD Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_MD_UCE_INT_SHFT 9
+#define SH_LOCAL_INT5_ENABLE_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_LOCAL_INT5_ENABLE_XN_UCE_INT */
+/* Description: Enable XN Uncorrectable Error Interrupt */
+#define SH_LOCAL_INT5_ENABLE_XN_UCE_INT_SHFT 10
+#define SH_LOCAL_INT5_ENABLE_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT */
+/* Description: Enable System Shutdown Interrupt */
+#define SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT_SHFT 12
+#define SH_LOCAL_INT5_ENABLE_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000001000
+
+/* SH_LOCAL_INT5_ENABLE_UART_INT */
+/* Description: Enable Junk Bus UART Interrupt */
+#define SH_LOCAL_INT5_ENABLE_UART_INT_SHFT 13
+#define SH_LOCAL_INT5_ENABLE_UART_INT_MASK 0x0000000000002000
+
+/* SH_LOCAL_INT5_ENABLE_L1_NMI_INT */
+/* Description: Enable L1 Controller NMI Interrupt */
+#define SH_LOCAL_INT5_ENABLE_L1_NMI_INT_SHFT 14
+#define SH_LOCAL_INT5_ENABLE_L1_NMI_INT_MASK 0x0000000000004000
+
+/* SH_LOCAL_INT5_ENABLE_STOP_CLOCK */
+/* Description: Enable Stop Clock Interrupt */
+#define SH_LOCAL_INT5_ENABLE_STOP_CLOCK_SHFT 15
+#define SH_LOCAL_INT5_ENABLE_STOP_CLOCK_MASK 0x0000000000008000
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_INT_CONFIG" */
+/* SHub Processor 0 Error Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC0_ERR_INT_CONFIG 0x0000000110000a80
+#define SH_PROC0_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC0_ERR_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC0_ERR_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC0_ERR_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC0_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC0_ERR_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC0_ERR_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC0_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC0_ERR_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC0_ERR_INT_CONFIG_PID_SHFT 4
+#define SH_PROC0_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC0_ERR_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC0_ERR_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC0_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC0_ERR_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC0_ERR_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC0_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_INT_CONFIG" */
+/* SHub Processor 1 Error Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC1_ERR_INT_CONFIG 0x0000000110000b00
+#define SH_PROC1_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC1_ERR_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC1_ERR_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC1_ERR_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC1_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC1_ERR_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC1_ERR_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC1_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC1_ERR_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC1_ERR_INT_CONFIG_PID_SHFT 4
+#define SH_PROC1_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC1_ERR_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC1_ERR_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC1_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC1_ERR_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC1_ERR_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC1_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_INT_CONFIG" */
+/* SHub Processor 2 Error Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC2_ERR_INT_CONFIG 0x0000000110000b80
+#define SH_PROC2_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC2_ERR_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC2_ERR_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC2_ERR_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC2_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC2_ERR_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC2_ERR_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC2_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC2_ERR_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC2_ERR_INT_CONFIG_PID_SHFT 4
+#define SH_PROC2_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC2_ERR_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC2_ERR_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC2_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC2_ERR_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC2_ERR_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC2_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_INT_CONFIG" */
+/* SHub Processor 3 Error Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC3_ERR_INT_CONFIG 0x0000000110000c00
+#define SH_PROC3_ERR_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC3_ERR_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC3_ERR_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC3_ERR_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC3_ERR_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC3_ERR_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC3_ERR_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC3_ERR_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC3_ERR_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC3_ERR_INT_CONFIG_PID_SHFT 4
+#define SH_PROC3_ERR_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC3_ERR_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC3_ERR_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC3_ERR_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC3_ERR_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC3_ERR_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC3_ERR_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_INT_CONFIG" */
+/* SHub Processor 0 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC0_ADV_INT_CONFIG 0x0000000110000c80
+#define SH_PROC0_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC0_ADV_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC0_ADV_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC0_ADV_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC0_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC0_ADV_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC0_ADV_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC0_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC0_ADV_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC0_ADV_INT_CONFIG_PID_SHFT 4
+#define SH_PROC0_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC0_ADV_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC0_ADV_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC0_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC0_ADV_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC0_ADV_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC0_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_INT_CONFIG" */
+/* SHub Processor 1 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC1_ADV_INT_CONFIG 0x0000000110000d00
+#define SH_PROC1_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC1_ADV_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC1_ADV_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC1_ADV_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC1_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC1_ADV_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC1_ADV_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC1_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC1_ADV_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC1_ADV_INT_CONFIG_PID_SHFT 4
+#define SH_PROC1_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC1_ADV_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC1_ADV_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC1_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC1_ADV_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC1_ADV_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC1_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_INT_CONFIG" */
+/* SHub Processor 2 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC2_ADV_INT_CONFIG 0x0000000110000d80
+#define SH_PROC2_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC2_ADV_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC2_ADV_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC2_ADV_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC2_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC2_ADV_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC2_ADV_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC2_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC2_ADV_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC2_ADV_INT_CONFIG_PID_SHFT 4
+#define SH_PROC2_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC2_ADV_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC2_ADV_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC2_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC2_ADV_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC2_ADV_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC2_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_INT_CONFIG" */
+/* SHub Processor 3 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+#define SH_PROC3_ADV_INT_CONFIG 0x0000000110000e00
+#define SH_PROC3_ADV_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROC3_ADV_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROC3_ADV_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROC3_ADV_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROC3_ADV_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROC3_ADV_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROC3_ADV_INT_CONFIG_AGT_SHFT 3
+#define SH_PROC3_ADV_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROC3_ADV_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROC3_ADV_INT_CONFIG_PID_SHFT 4
+#define SH_PROC3_ADV_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROC3_ADV_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROC3_ADV_INT_CONFIG_BASE_SHFT 21
+#define SH_PROC3_ADV_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROC3_ADV_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROC3_ADV_INT_CONFIG_IDX_SHFT 52
+#define SH_PROC3_ADV_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_INT_ENABLE" */
+/* SHub Processor 0 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC0_ERR_INT_ENABLE 0x0000000110000e80
+#define SH_PROC0_ERR_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC0_ERR_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE */
+/* Description: Enable Processor 0 Error Interrupt */
+#define SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE_SHFT 0
+#define SH_PROC0_ERR_INT_ENABLE_PROC0_ERR_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_INT_ENABLE" */
+/* SHub Processor 1 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC1_ERR_INT_ENABLE 0x0000000110000f00
+#define SH_PROC1_ERR_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC1_ERR_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE */
+/* Description: Enable Processor 1 Error Interrupt */
+#define SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE_SHFT 0
+#define SH_PROC1_ERR_INT_ENABLE_PROC1_ERR_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_INT_ENABLE" */
+/* SHub Processor 2 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC2_ERR_INT_ENABLE 0x0000000110000f80
+#define SH_PROC2_ERR_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC2_ERR_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE */
+/* Description: Enable Processor 2 Error Interrupt */
+#define SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE_SHFT 0
+#define SH_PROC2_ERR_INT_ENABLE_PROC2_ERR_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_INT_ENABLE" */
+/* SHub Processor 3 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC3_ERR_INT_ENABLE 0x0000000110001000
+#define SH_PROC3_ERR_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC3_ERR_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE */
+/* Description: Enable Processor 3 Error Interrupt */
+#define SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE_SHFT 0
+#define SH_PROC3_ERR_INT_ENABLE_PROC3_ERR_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_INT_ENABLE" */
+/* SHub Processor 0 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC0_ADV_INT_ENABLE 0x0000000110001080
+#define SH_PROC0_ADV_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC0_ADV_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE */
+/* Description: Enable Processor 0 Advisory Interrupt */
+#define SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE_SHFT 0
+#define SH_PROC0_ADV_INT_ENABLE_PROC0_ADV_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_INT_ENABLE" */
+/* SHub Processor 1 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC1_ADV_INT_ENABLE 0x0000000110001100
+#define SH_PROC1_ADV_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC1_ADV_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE */
+/* Description: Enable Processor 1 Advisory Interrupt */
+#define SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE_SHFT 0
+#define SH_PROC1_ADV_INT_ENABLE_PROC1_ADV_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_INT_ENABLE" */
+/* SHub Processor 2 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC2_ADV_INT_ENABLE 0x0000000110001180
+#define SH_PROC2_ADV_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC2_ADV_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE */
+/* Description: Enable Processor 2 Advisory Interrupt */
+#define SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE_SHFT 0
+#define SH_PROC2_ADV_INT_ENABLE_PROC2_ADV_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_INT_ENABLE" */
+/* SHub Processor 3 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROC3_ADV_INT_ENABLE 0x0000000110001200
+#define SH_PROC3_ADV_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROC3_ADV_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE */
+/* Description: Enable Processor 3 Advisory Interrupt */
+#define SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE_SHFT 0
+#define SH_PROC3_ADV_INT_ENABLE_PROC3_ADV_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_INT_CONFIG" */
+/* SHub Profile Interrupt Configuration Registers */
+/* ==================================================================== */
+
+#define SH_PROFILE_INT_CONFIG 0x0000000110001280
+#define SH_PROFILE_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_PROFILE_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_PROFILE_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_PROFILE_INT_CONFIG_TYPE_SHFT 0
+#define SH_PROFILE_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_PROFILE_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_PROFILE_INT_CONFIG_AGT_SHFT 3
+#define SH_PROFILE_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_PROFILE_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_PROFILE_INT_CONFIG_PID_SHFT 4
+#define SH_PROFILE_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_PROFILE_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_PROFILE_INT_CONFIG_BASE_SHFT 21
+#define SH_PROFILE_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_PROFILE_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_PROFILE_INT_CONFIG_IDX_SHFT 52
+#define SH_PROFILE_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_INT_ENABLE" */
+/* SHub Profile Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_PROFILE_INT_ENABLE 0x0000000110001300
+#define SH_PROFILE_INT_ENABLE_MASK 0x0000000000000001
+#define SH_PROFILE_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_PROFILE_INT_ENABLE_PROFILE_ENABLE */
+/* Description: Enable Profile Interrupt */
+#define SH_PROFILE_INT_ENABLE_PROFILE_ENABLE_SHFT 0
+#define SH_PROFILE_INT_ENABLE_PROFILE_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC0_INT_CONFIG" */
+/* SHub RTC 0 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC0_INT_CONFIG 0x0000000110001380
+#define SH_RTC0_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC0_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC0_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC0_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC0_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC0_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC0_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC0_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC0_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC0_INT_CONFIG_PID_SHFT 4
+#define SH_RTC0_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC0_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC0_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC0_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC0_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC0_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC0_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC0_INT_ENABLE" */
+/* SHub RTC 0 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC0_INT_ENABLE 0x0000000110001400
+#define SH_RTC0_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC0_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC0_INT_ENABLE_RTC0_ENABLE */
+/* Description: Enable RTC 0 Interrupt */
+#define SH_RTC0_INT_ENABLE_RTC0_ENABLE_SHFT 0
+#define SH_RTC0_INT_ENABLE_RTC0_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_CONFIG" */
+/* SHub RTC 1 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC1_INT_CONFIG 0x0000000110001480
+#define SH_RTC1_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC1_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC1_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC1_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC1_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC1_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC1_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC1_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC1_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC1_INT_CONFIG_PID_SHFT 4
+#define SH_RTC1_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC1_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC1_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC1_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC1_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC1_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC1_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_ENABLE" */
+/* SHub RTC 1 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC1_INT_ENABLE 0x0000000110001500
+#define SH_RTC1_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC1_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC1_INT_ENABLE_RTC1_ENABLE */
+/* Description: Enable RTC 1 Interrupt */
+#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_SHFT 0
+#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_CONFIG" */
+/* SHub RTC 2 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC2_INT_CONFIG 0x0000000110001580
+#define SH_RTC2_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC2_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC2_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC2_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC2_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC2_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC2_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC2_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC2_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC2_INT_CONFIG_PID_SHFT 4
+#define SH_RTC2_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC2_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC2_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC2_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC2_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC2_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC2_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_ENABLE" */
+/* SHub RTC 2 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC2_INT_ENABLE 0x0000000110001600
+#define SH_RTC2_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC2_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC2_INT_ENABLE_RTC2_ENABLE */
+/* Description: Enable RTC 2 Interrupt */
+#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_SHFT 0
+#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_CONFIG" */
+/* SHub RTC 3 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC3_INT_CONFIG 0x0000000110001680
+#define SH_RTC3_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC3_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC3_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC3_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC3_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC3_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC3_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC3_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC3_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC3_INT_CONFIG_PID_SHFT 4
+#define SH_RTC3_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC3_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC3_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC3_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC3_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC3_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC3_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_ENABLE" */
+/* SHub RTC 3 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC3_INT_ENABLE 0x0000000110001700
+#define SH_RTC3_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC3_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC3_INT_ENABLE_RTC3_ENABLE */
+/* Description: Enable RTC 3 Interrupt */
+#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_SHFT 0
+#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OCCURRED" */
+/* SHub Interrupt Event Occurred */
+/* ==================================================================== */
+
+#define SH_EVENT_OCCURRED 0x0000000110010000
+#define SH_EVENT_OCCURRED_MASK 0x000000007fffffff
+#define SH_EVENT_OCCURRED_INIT 0x0000000000000000
+
+/* SH_EVENT_OCCURRED_PI_HW_INT */
+/* Description: Pending PI Hardware interrupt */
+#define SH_EVENT_OCCURRED_PI_HW_INT_SHFT 0
+#define SH_EVENT_OCCURRED_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_EVENT_OCCURRED_MD_HW_INT */
+/* Description: Pending MD Hardware interrupt */
+#define SH_EVENT_OCCURRED_MD_HW_INT_SHFT 1
+#define SH_EVENT_OCCURRED_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_EVENT_OCCURRED_XN_HW_INT */
+/* Description: Pending XN Hardware interrupt */
+#define SH_EVENT_OCCURRED_XN_HW_INT_SHFT 2
+#define SH_EVENT_OCCURRED_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_EVENT_OCCURRED_LB_HW_INT */
+/* Description: Pending LB Hardware interrupt */
+#define SH_EVENT_OCCURRED_LB_HW_INT_SHFT 3
+#define SH_EVENT_OCCURRED_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_EVENT_OCCURRED_II_HW_INT */
+/* Description: Pending II wrapper Hardware interrupt */
+#define SH_EVENT_OCCURRED_II_HW_INT_SHFT 4
+#define SH_EVENT_OCCURRED_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_EVENT_OCCURRED_PI_CE_INT */
+/* Description: Pending PI Correctable Error Interrupt */
+#define SH_EVENT_OCCURRED_PI_CE_INT_SHFT 5
+#define SH_EVENT_OCCURRED_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_EVENT_OCCURRED_MD_CE_INT */
+/* Description: Pending MD Correctable Error Interrupt */
+#define SH_EVENT_OCCURRED_MD_CE_INT_SHFT 6
+#define SH_EVENT_OCCURRED_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_EVENT_OCCURRED_XN_CE_INT */
+/* Description: Pending XN Correctable Error Interrupt */
+#define SH_EVENT_OCCURRED_XN_CE_INT_SHFT 7
+#define SH_EVENT_OCCURRED_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_EVENT_OCCURRED_PI_UCE_INT */
+/* Description: Pending PI Uncorrectable Error Interrupt */
+#define SH_EVENT_OCCURRED_PI_UCE_INT_SHFT 8
+#define SH_EVENT_OCCURRED_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_EVENT_OCCURRED_MD_UCE_INT */
+/* Description: Pending MD Uncorrectable Error Interrupt */
+#define SH_EVENT_OCCURRED_MD_UCE_INT_SHFT 9
+#define SH_EVENT_OCCURRED_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_EVENT_OCCURRED_XN_UCE_INT */
+/* Description: Pending XN Uncorrectable Error Interrupt */
+#define SH_EVENT_OCCURRED_XN_UCE_INT_SHFT 10
+#define SH_EVENT_OCCURRED_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_EVENT_OCCURRED_PROC0_ADV_INT */
+/* Description: Pending Processor 0 Advisory Interrupt */
+#define SH_EVENT_OCCURRED_PROC0_ADV_INT_SHFT 11
+#define SH_EVENT_OCCURRED_PROC0_ADV_INT_MASK 0x0000000000000800
+
+/* SH_EVENT_OCCURRED_PROC1_ADV_INT */
+/* Description: Pending Processor 1 Advisory Interrupt */
+#define SH_EVENT_OCCURRED_PROC1_ADV_INT_SHFT 12
+#define SH_EVENT_OCCURRED_PROC1_ADV_INT_MASK 0x0000000000001000
+
+/* SH_EVENT_OCCURRED_PROC2_ADV_INT */
+/* Description: Pending Processor 2 Advisory Interrupt */
+#define SH_EVENT_OCCURRED_PROC2_ADV_INT_SHFT 13
+#define SH_EVENT_OCCURRED_PROC2_ADV_INT_MASK 0x0000000000002000
+
+/* SH_EVENT_OCCURRED_PROC3_ADV_INT */
+/* Description: Pending Processor 3 Advisory Interrupt */
+#define SH_EVENT_OCCURRED_PROC3_ADV_INT_SHFT 14
+#define SH_EVENT_OCCURRED_PROC3_ADV_INT_MASK 0x0000000000004000
+
+/* SH_EVENT_OCCURRED_PROC0_ERR_INT */
+/* Description: Pending Processor 0 Error Interrupt */
+#define SH_EVENT_OCCURRED_PROC0_ERR_INT_SHFT 15
+#define SH_EVENT_OCCURRED_PROC0_ERR_INT_MASK 0x0000000000008000
+
+/* SH_EVENT_OCCURRED_PROC1_ERR_INT */
+/* Description: Pending Processor 1 Error Interrupt */
+#define SH_EVENT_OCCURRED_PROC1_ERR_INT_SHFT 16
+#define SH_EVENT_OCCURRED_PROC1_ERR_INT_MASK 0x0000000000010000
+
+/* SH_EVENT_OCCURRED_PROC2_ERR_INT */
+/* Description: Pending Processor 2 Error Interrupt */
+#define SH_EVENT_OCCURRED_PROC2_ERR_INT_SHFT 17
+#define SH_EVENT_OCCURRED_PROC2_ERR_INT_MASK 0x0000000000020000
+
+/* SH_EVENT_OCCURRED_PROC3_ERR_INT */
+/* Description: Pending Processor 3 Error Interrupt */
+#define SH_EVENT_OCCURRED_PROC3_ERR_INT_SHFT 18
+#define SH_EVENT_OCCURRED_PROC3_ERR_INT_MASK 0x0000000000040000
+
+/* SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT */
+/* Description: Pending System Shutdown Interrupt */
+#define SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT_SHFT 19
+#define SH_EVENT_OCCURRED_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000080000
+
+/* SH_EVENT_OCCURRED_UART_INT */
+/* Description: Pending Junk Bus UART Interrupt */
+#define SH_EVENT_OCCURRED_UART_INT_SHFT 20
+#define SH_EVENT_OCCURRED_UART_INT_MASK 0x0000000000100000
+
+/* SH_EVENT_OCCURRED_L1_NMI_INT */
+/* Description: Pending L1 Controller NMI Interrupt */
+#define SH_EVENT_OCCURRED_L1_NMI_INT_SHFT 21
+#define SH_EVENT_OCCURRED_L1_NMI_INT_MASK 0x0000000000200000
+
+/* SH_EVENT_OCCURRED_STOP_CLOCK */
+/* Description: Pending Stop Clock Interrupt */
+#define SH_EVENT_OCCURRED_STOP_CLOCK_SHFT 22
+#define SH_EVENT_OCCURRED_STOP_CLOCK_MASK 0x0000000000400000
+
+/* SH_EVENT_OCCURRED_RTC0_INT */
+/* Description: Pending RTC 0 Interrupt */
+#define SH_EVENT_OCCURRED_RTC0_INT_SHFT 23
+#define SH_EVENT_OCCURRED_RTC0_INT_MASK 0x0000000000800000
+
+/* SH_EVENT_OCCURRED_RTC1_INT */
+/* Description: Pending RTC 1 Interrupt */
+#define SH_EVENT_OCCURRED_RTC1_INT_SHFT 24
+#define SH_EVENT_OCCURRED_RTC1_INT_MASK 0x0000000001000000
+
+/* SH_EVENT_OCCURRED_RTC2_INT */
+/* Description: Pending RTC 2 Interrupt */
+#define SH_EVENT_OCCURRED_RTC2_INT_SHFT 25
+#define SH_EVENT_OCCURRED_RTC2_INT_MASK 0x0000000002000000
+
+/* SH_EVENT_OCCURRED_RTC3_INT */
+/* Description: Pending RTC 3 Interrupt */
+#define SH_EVENT_OCCURRED_RTC3_INT_SHFT 26
+#define SH_EVENT_OCCURRED_RTC3_INT_MASK 0x0000000004000000
+
+/* SH_EVENT_OCCURRED_PROFILE_INT */
+/* Description: Pending Profile Interrupt */
+#define SH_EVENT_OCCURRED_PROFILE_INT_SHFT 27
+#define SH_EVENT_OCCURRED_PROFILE_INT_MASK 0x0000000008000000
+
+/* SH_EVENT_OCCURRED_IPI_INT */
+/* Description: Pending IPI Interrupt */
+#define SH_EVENT_OCCURRED_IPI_INT_SHFT 28
+#define SH_EVENT_OCCURRED_IPI_INT_MASK 0x0000000010000000
+
+/* SH_EVENT_OCCURRED_II_INT0 */
+/* Description: Pending II 0 Interrupt */
+#define SH_EVENT_OCCURRED_II_INT0_SHFT 29
+#define SH_EVENT_OCCURRED_II_INT0_MASK 0x0000000020000000
+
+/* SH_EVENT_OCCURRED_II_INT1 */
+/* Description: Pending II 1 Interrupt */
+#define SH_EVENT_OCCURRED_II_INT1_SHFT 30
+#define SH_EVENT_OCCURRED_II_INT1_MASK 0x0000000040000000
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OCCURRED_ALIAS" */
+/* SHub Interrupt Event Occurred Alias */
+/* ==================================================================== */
+
+#define SH_EVENT_OCCURRED_ALIAS 0x0000000110010008
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OVERFLOW" */
+/* SHub Interrupt Event Occurred Overflow */
+/* ==================================================================== */
+
+#define SH_EVENT_OVERFLOW 0x0000000110010080
+#define SH_EVENT_OVERFLOW_MASK 0x000000000fffffff
+#define SH_EVENT_OVERFLOW_INIT 0x0000000000000000
+
+/* SH_EVENT_OVERFLOW_PI_HW_INT */
+/* Description: Pending PI Hardware interrupt */
+#define SH_EVENT_OVERFLOW_PI_HW_INT_SHFT 0
+#define SH_EVENT_OVERFLOW_PI_HW_INT_MASK 0x0000000000000001
+
+/* SH_EVENT_OVERFLOW_MD_HW_INT */
+/* Description: Pending MD Hardware interrupt */
+#define SH_EVENT_OVERFLOW_MD_HW_INT_SHFT 1
+#define SH_EVENT_OVERFLOW_MD_HW_INT_MASK 0x0000000000000002
+
+/* SH_EVENT_OVERFLOW_XN_HW_INT */
+/* Description: Pending XN Hardware interrupt */
+#define SH_EVENT_OVERFLOW_XN_HW_INT_SHFT 2
+#define SH_EVENT_OVERFLOW_XN_HW_INT_MASK 0x0000000000000004
+
+/* SH_EVENT_OVERFLOW_LB_HW_INT */
+/* Description: Pending LB Hardware interrupt */
+#define SH_EVENT_OVERFLOW_LB_HW_INT_SHFT 3
+#define SH_EVENT_OVERFLOW_LB_HW_INT_MASK 0x0000000000000008
+
+/* SH_EVENT_OVERFLOW_II_HW_INT */
+/* Description: Pending II wrapper Hardware interrupt */
+#define SH_EVENT_OVERFLOW_II_HW_INT_SHFT 4
+#define SH_EVENT_OVERFLOW_II_HW_INT_MASK 0x0000000000000010
+
+/* SH_EVENT_OVERFLOW_PI_CE_INT */
+/* Description: Pending PI Correctable Error Interrupt */
+#define SH_EVENT_OVERFLOW_PI_CE_INT_SHFT 5
+#define SH_EVENT_OVERFLOW_PI_CE_INT_MASK 0x0000000000000020
+
+/* SH_EVENT_OVERFLOW_MD_CE_INT */
+/* Description: Pending MD Correctable Error Interrupt */
+#define SH_EVENT_OVERFLOW_MD_CE_INT_SHFT 6
+#define SH_EVENT_OVERFLOW_MD_CE_INT_MASK 0x0000000000000040
+
+/* SH_EVENT_OVERFLOW_XN_CE_INT */
+/* Description: Pending XN Correctable Error Interrupt */
+#define SH_EVENT_OVERFLOW_XN_CE_INT_SHFT 7
+#define SH_EVENT_OVERFLOW_XN_CE_INT_MASK 0x0000000000000080
+
+/* SH_EVENT_OVERFLOW_PI_UCE_INT */
+/* Description: Pending PI Uncorrectable Error Interrupt */
+#define SH_EVENT_OVERFLOW_PI_UCE_INT_SHFT 8
+#define SH_EVENT_OVERFLOW_PI_UCE_INT_MASK 0x0000000000000100
+
+/* SH_EVENT_OVERFLOW_MD_UCE_INT */
+/* Description: Pending MD Uncorrectable Error Interrupt */
+#define SH_EVENT_OVERFLOW_MD_UCE_INT_SHFT 9
+#define SH_EVENT_OVERFLOW_MD_UCE_INT_MASK 0x0000000000000200
+
+/* SH_EVENT_OVERFLOW_XN_UCE_INT */
+/* Description: Pending XN Uncorrectable Error Interrupt */
+#define SH_EVENT_OVERFLOW_XN_UCE_INT_SHFT 10
+#define SH_EVENT_OVERFLOW_XN_UCE_INT_MASK 0x0000000000000400
+
+/* SH_EVENT_OVERFLOW_PROC0_ADV_INT */
+/* Description: Pending Processor 0 Advisory Interrupt */
+#define SH_EVENT_OVERFLOW_PROC0_ADV_INT_SHFT 11
+#define SH_EVENT_OVERFLOW_PROC0_ADV_INT_MASK 0x0000000000000800
+
+/* SH_EVENT_OVERFLOW_PROC1_ADV_INT */
+/* Description: Pending Processor 1 Advisory Interrupt */
+#define SH_EVENT_OVERFLOW_PROC1_ADV_INT_SHFT 12
+#define SH_EVENT_OVERFLOW_PROC1_ADV_INT_MASK 0x0000000000001000
+
+/* SH_EVENT_OVERFLOW_PROC2_ADV_INT */
+/* Description: Pending Processor 2 Advisory Interrupt */
+#define SH_EVENT_OVERFLOW_PROC2_ADV_INT_SHFT 13
+#define SH_EVENT_OVERFLOW_PROC2_ADV_INT_MASK 0x0000000000002000
+
+/* SH_EVENT_OVERFLOW_PROC3_ADV_INT */
+/* Description: Pending Processor 3 Advisory Interrupt */
+#define SH_EVENT_OVERFLOW_PROC3_ADV_INT_SHFT 14
+#define SH_EVENT_OVERFLOW_PROC3_ADV_INT_MASK 0x0000000000004000
+
+/* SH_EVENT_OVERFLOW_PROC0_ERR_INT */
+/* Description: Pending Processor 0 Error Interrupt */
+#define SH_EVENT_OVERFLOW_PROC0_ERR_INT_SHFT 15
+#define SH_EVENT_OVERFLOW_PROC0_ERR_INT_MASK 0x0000000000008000
+
+/* SH_EVENT_OVERFLOW_PROC1_ERR_INT */
+/* Description: Pending Processor 1 Error Interrupt */
+#define SH_EVENT_OVERFLOW_PROC1_ERR_INT_SHFT 16
+#define SH_EVENT_OVERFLOW_PROC1_ERR_INT_MASK 0x0000000000010000
+
+/* SH_EVENT_OVERFLOW_PROC2_ERR_INT */
+/* Description: Pending Processor 2 Error Interrupt */
+#define SH_EVENT_OVERFLOW_PROC2_ERR_INT_SHFT 17
+#define SH_EVENT_OVERFLOW_PROC2_ERR_INT_MASK 0x0000000000020000
+
+/* SH_EVENT_OVERFLOW_PROC3_ERR_INT */
+/* Description: Pending Processor 3 Error Interrupt */
+#define SH_EVENT_OVERFLOW_PROC3_ERR_INT_SHFT 18
+#define SH_EVENT_OVERFLOW_PROC3_ERR_INT_MASK 0x0000000000040000
+
+/* SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT */
+/* Description: Pending System Shutdown Interrupt */
+#define SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT_SHFT 19
+#define SH_EVENT_OVERFLOW_SYSTEM_SHUTDOWN_INT_MASK 0x0000000000080000
+
+/* SH_EVENT_OVERFLOW_UART_INT */
+/* Description: Pending Junk Bus UART Interrupt */
+#define SH_EVENT_OVERFLOW_UART_INT_SHFT 20
+#define SH_EVENT_OVERFLOW_UART_INT_MASK 0x0000000000100000
+
+/* SH_EVENT_OVERFLOW_L1_NMI_INT */
+/* Description: Pending L1 Controller NMI Interrupt */
+#define SH_EVENT_OVERFLOW_L1_NMI_INT_SHFT 21
+#define SH_EVENT_OVERFLOW_L1_NMI_INT_MASK 0x0000000000200000
+
+/* SH_EVENT_OVERFLOW_STOP_CLOCK */
+/* Description: Pending Stop Clock Interrupt */
+#define SH_EVENT_OVERFLOW_STOP_CLOCK_SHFT 22
+#define SH_EVENT_OVERFLOW_STOP_CLOCK_MASK 0x0000000000400000
+
+/* SH_EVENT_OVERFLOW_RTC0_INT */
+/* Description: Pending RTC 0 Interrupt */
+#define SH_EVENT_OVERFLOW_RTC0_INT_SHFT 23
+#define SH_EVENT_OVERFLOW_RTC0_INT_MASK 0x0000000000800000
+
+/* SH_EVENT_OVERFLOW_RTC1_INT */
+/* Description: Pending RTC 1 Interrupt */
+#define SH_EVENT_OVERFLOW_RTC1_INT_SHFT 24
+#define SH_EVENT_OVERFLOW_RTC1_INT_MASK 0x0000000001000000
+
+/* SH_EVENT_OVERFLOW_RTC2_INT */
+/* Description: Pending RTC 2 Interrupt */
+#define SH_EVENT_OVERFLOW_RTC2_INT_SHFT 25
+#define SH_EVENT_OVERFLOW_RTC2_INT_MASK 0x0000000002000000
+
+/* SH_EVENT_OVERFLOW_RTC3_INT */
+/* Description: Pending RTC 3 Interrupt */
+#define SH_EVENT_OVERFLOW_RTC3_INT_SHFT 26
+#define SH_EVENT_OVERFLOW_RTC3_INT_MASK 0x0000000004000000
+
+/* SH_EVENT_OVERFLOW_PROFILE_INT */
+/* Description: Pending Profile Interrupt */
+#define SH_EVENT_OVERFLOW_PROFILE_INT_SHFT 27
+#define SH_EVENT_OVERFLOW_PROFILE_INT_MASK 0x0000000008000000
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OVERFLOW_ALIAS" */
+/* SHub Interrupt Event Occurred Overflow Alias */
+/* ==================================================================== */
+
+#define SH_EVENT_OVERFLOW_ALIAS 0x0000000110010088
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_TIME" */
+/* Junk Bus Timing */
+/* ==================================================================== */
+
+#define SH_JUNK_BUS_TIME 0x0000000110020000
+#define SH_JUNK_BUS_TIME_MASK 0x00000000ffffffff
+#define SH_JUNK_BUS_TIME_INIT 0x0000000040404040
+
+/* SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD */
+/* Description: Fprom_Setup_Hold */
+#define SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD_SHFT 0
+#define SH_JUNK_BUS_TIME_FPROM_SETUP_HOLD_MASK 0x00000000000000ff
+
+/* SH_JUNK_BUS_TIME_FPROM_ENABLE */
+/* Description: Fprom_Enable */
+#define SH_JUNK_BUS_TIME_FPROM_ENABLE_SHFT 8
+#define SH_JUNK_BUS_TIME_FPROM_ENABLE_MASK 0x000000000000ff00
+
+/* SH_JUNK_BUS_TIME_UART_SETUP_HOLD */
+/* Description: Uart_Setup_Hold */
+#define SH_JUNK_BUS_TIME_UART_SETUP_HOLD_SHFT 16
+#define SH_JUNK_BUS_TIME_UART_SETUP_HOLD_MASK 0x0000000000ff0000
+
+/* SH_JUNK_BUS_TIME_UART_ENABLE */
+/* Description: Uart_Enable */
+#define SH_JUNK_BUS_TIME_UART_ENABLE_SHFT 24
+#define SH_JUNK_BUS_TIME_UART_ENABLE_MASK 0x00000000ff000000
+
+/* ==================================================================== */
+/* Register "SH_JUNK_LATCH_TIME" */
+/* Junk Bus Latch Timing */
+/* ==================================================================== */
+
+#define SH_JUNK_LATCH_TIME 0x0000000110020080
+#define SH_JUNK_LATCH_TIME_MASK 0x0000000000000007
+#define SH_JUNK_LATCH_TIME_INIT 0x0000000000000002
+
+/* SH_JUNK_LATCH_TIME_SETUP_HOLD */
+/* Description: Setup and Hold Time */
+#define SH_JUNK_LATCH_TIME_SETUP_HOLD_SHFT 0
+#define SH_JUNK_LATCH_TIME_SETUP_HOLD_MASK 0x0000000000000007
+
+/* ==================================================================== */
+/* Register "SH_JUNK_NACK_RESET" */
+/* Junk Bus Nack Counter Reset */
+/* ==================================================================== */
+
+#define SH_JUNK_NACK_RESET 0x0000000110020100
+#define SH_JUNK_NACK_RESET_MASK 0x0000000000000001
+#define SH_JUNK_NACK_RESET_INIT 0x0000000000000000
+
+/* SH_JUNK_NACK_RESET_PULSE */
+/* Description: Junk bus nack counter reset */
+#define SH_JUNK_NACK_RESET_PULSE_SHFT 0
+#define SH_JUNK_NACK_RESET_PULSE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED0" */
+/* Junk Bus LED0 */
+/* ==================================================================== */
+
+#define SH_JUNK_BUS_LED0 0x0000000110030000
+#define SH_JUNK_BUS_LED0_MASK 0x00000000000000ff
+#define SH_JUNK_BUS_LED0_INIT 0x0000000000000000
+
+/* SH_JUNK_BUS_LED0_LED0_DATA */
+/* Description: LED0_data */
+#define SH_JUNK_BUS_LED0_LED0_DATA_SHFT 0
+#define SH_JUNK_BUS_LED0_LED0_DATA_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED1" */
+/* Junk Bus LED1 */
+/* ==================================================================== */
+
+#define SH_JUNK_BUS_LED1 0x0000000110030080
+#define SH_JUNK_BUS_LED1_MASK 0x00000000000000ff
+#define SH_JUNK_BUS_LED1_INIT 0x0000000000000000
+
+/* SH_JUNK_BUS_LED1_LED1_DATA */
+/* Description: LED1_data */
+#define SH_JUNK_BUS_LED1_LED1_DATA_SHFT 0
+#define SH_JUNK_BUS_LED1_LED1_DATA_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED2" */
+/* Junk Bus LED2 */
+/* ==================================================================== */
+
+#define SH_JUNK_BUS_LED2 0x0000000110030100
+#define SH_JUNK_BUS_LED2_MASK 0x00000000000000ff
+#define SH_JUNK_BUS_LED2_INIT 0x0000000000000000
+
+/* SH_JUNK_BUS_LED2_LED2_DATA */
+/* Description: LED2_data */
+#define SH_JUNK_BUS_LED2_LED2_DATA_SHFT 0
+#define SH_JUNK_BUS_LED2_LED2_DATA_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED3" */
+/* Junk Bus LED3 */
+/* ==================================================================== */
+
+#define SH_JUNK_BUS_LED3 0x0000000110030180
+#define SH_JUNK_BUS_LED3_MASK 0x00000000000000ff
+#define SH_JUNK_BUS_LED3_INIT 0x0000000000000000
+
+/* SH_JUNK_BUS_LED3_LED3_DATA */
+/* Description: LED3_data */
+#define SH_JUNK_BUS_LED3_LED3_DATA_SHFT 0
+#define SH_JUNK_BUS_LED3_LED3_DATA_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_JUNK_ERROR_STATUS" */
+/* Junk Bus Error Status */
+/* ==================================================================== */
+
+#define SH_JUNK_ERROR_STATUS 0x0000000110030200
+#define SH_JUNK_ERROR_STATUS_MASK 0x1fff7fffffffffff
+#define SH_JUNK_ERROR_STATUS_INIT 0x0000000000000000
+
+/* SH_JUNK_ERROR_STATUS_ADDRESS */
+/* Description: Failing junk bus address */
+#define SH_JUNK_ERROR_STATUS_ADDRESS_SHFT 0
+#define SH_JUNK_ERROR_STATUS_ADDRESS_MASK 0x00007fffffffffff
+
+/* SH_JUNK_ERROR_STATUS_CMD */
+/* Description: Junk bus command */
+#define SH_JUNK_ERROR_STATUS_CMD_SHFT 48
+#define SH_JUNK_ERROR_STATUS_CMD_MASK 0x00ff000000000000
+
+/* SH_JUNK_ERROR_STATUS_MODE */
+/* Description: Mode */
+#define SH_JUNK_ERROR_STATUS_MODE_SHFT 56
+#define SH_JUNK_ERROR_STATUS_MODE_MASK 0x0100000000000000
+
+/* SH_JUNK_ERROR_STATUS_STATUS */
+/* Description: Status */
+#define SH_JUNK_ERROR_STATUS_STATUS_SHFT 57
+#define SH_JUNK_ERROR_STATUS_STATUS_MASK 0x1e00000000000000
+
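+/*
+ * Usage note (illustrative, not part of the generated register map):
+ * every field in this header is described by a _SHFT/_MASK pair, so a
+ * field value is recovered by masking the raw 64-bit register contents
+ * and shifting the result down. For example, to pull the failing
+ * command out of a SH_JUNK_ERROR_STATUS value 'err' (variable name
+ * hypothetical):
+ *
+ *	unsigned long cmd = (err & SH_JUNK_ERROR_STATUS_CMD_MASK)
+ *				>> SH_JUNK_ERROR_STATUS_CMD_SHFT;
+ *
+ * The inverse (composing a register value from field values) is
+ * (value << _SHFT) & _MASK for each field, OR'ed together.
+ */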
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_STAT" */
+/* This register describes the LLP status. */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_STAT 0x0000000150000000
+#define SH_NI0_LLP_STAT_MASK 0x000000000000000f
+#define SH_NI0_LLP_STAT_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_STAT_LINK_RESET_STATE */
+/* Description: Status of LLP link. */
+#define SH_NI0_LLP_STAT_LINK_RESET_STATE_SHFT 0
+#define SH_NI0_LLP_STAT_LINK_RESET_STATE_MASK 0x000000000000000f
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_RESET" */
+/* Writing issues a reset to the network interface */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_RESET 0x0000000150000008
+#define SH_NI0_LLP_RESET_MASK 0x0000000000000003
+#define SH_NI0_LLP_RESET_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_RESET_LINK */
+/* Description: Send Link Reset. Generates a pulse. */
+#define SH_NI0_LLP_RESET_LINK_SHFT 0
+#define SH_NI0_LLP_RESET_LINK_MASK 0x0000000000000001
+
+/* SH_NI0_LLP_RESET_WARM */
+/* Description: Send Warm Reset. Generates a pulse. */
+#define SH_NI0_LLP_RESET_WARM_SHFT 1
+#define SH_NI0_LLP_RESET_WARM_MASK 0x0000000000000002
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_RESET_EN" */
+/* Controls LLP warm reset propagation */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_RESET_EN 0x0000000150000010
+#define SH_NI0_LLP_RESET_EN_MASK 0x0000000000000001
+#define SH_NI0_LLP_RESET_EN_INIT 0x0000000000000001
+
+/* SH_NI0_LLP_RESET_EN_OK */
+/* Description: Allow LLP warm reset to reset SHUB */
+#define SH_NI0_LLP_RESET_EN_OK_SHFT 0
+#define SH_NI0_LLP_RESET_EN_OK_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CHAN_MODE" */
+/* Sets the signaling mode of LLP and channel */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_CHAN_MODE 0x0000000150000018
+#define SH_NI0_LLP_CHAN_MODE_MASK 0x000000000000001f
+#define SH_NI0_LLP_CHAN_MODE_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_CHAN_MODE_BITMODE32 */
+/* Description: Enables 32-bit (plus sideband) channel phits */
+#define SH_NI0_LLP_CHAN_MODE_BITMODE32_SHFT 0
+#define SH_NI0_LLP_CHAN_MODE_BITMODE32_MASK 0x0000000000000001
+
+/* SH_NI0_LLP_CHAN_MODE_AC_ENCODE */
+/* Description: Enables nearly dc-free encoding for AC-coupling */
+#define SH_NI0_LLP_CHAN_MODE_AC_ENCODE_SHFT 1
+#define SH_NI0_LLP_CHAN_MODE_AC_ENCODE_MASK 0x0000000000000002
+
+/* SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING */
+/* Description: Enables automatic tuning of channel skew. */
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING_SHFT 2
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_TUNING_MASK 0x0000000000000004
+
+/* SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD */
+/* Description: Enables remote fine tune updates */
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_SHFT 3
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_MASK 0x0000000000000008
+
+/* SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD */
+/* Description: Enables quadrature clock in the pfssd */
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD_SHFT 4
+#define SH_NI0_LLP_CHAN_MODE_ENABLE_CLKQUAD_MASK 0x0000000000000010
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CONFIG" */
+/* Sets the configuration of LLP and channel */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_CONFIG 0x0000000150000020
+#define SH_NI0_LLP_CONFIG_MASK 0x0000003fffffffff
+#define SH_NI0_LLP_CONFIG_INIT 0x00000007fc6ffd00
+
+/* SH_NI0_LLP_CONFIG_MAXBURST */
+#define SH_NI0_LLP_CONFIG_MAXBURST_SHFT 0
+#define SH_NI0_LLP_CONFIG_MAXBURST_MASK 0x00000000000003ff
+
+/* SH_NI0_LLP_CONFIG_MAXRETRY */
+#define SH_NI0_LLP_CONFIG_MAXRETRY_SHFT 10
+#define SH_NI0_LLP_CONFIG_MAXRETRY_MASK 0x00000000000ffc00
+
+/* SH_NI0_LLP_CONFIG_NULLTIMEOUT */
+#define SH_NI0_LLP_CONFIG_NULLTIMEOUT_SHFT 20
+#define SH_NI0_LLP_CONFIG_NULLTIMEOUT_MASK 0x0000000003f00000
+
+/* SH_NI0_LLP_CONFIG_FTU_TIME */
+#define SH_NI0_LLP_CONFIG_FTU_TIME_SHFT 26
+#define SH_NI0_LLP_CONFIG_FTU_TIME_MASK 0x0000003ffc000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_TEST_CTL" */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_TEST_CTL 0x0000000150000028
+#define SH_NI0_LLP_TEST_CTL_MASK 0x7ff3f3ffffffffff
+#define SH_NI0_LLP_TEST_CTL_INIT 0x000000000a5fffff
+
+/* SH_NI0_LLP_TEST_CTL_PATTERN */
+/* Description: Send channel data pattern */
+#define SH_NI0_LLP_TEST_CTL_PATTERN_SHFT 0
+#define SH_NI0_LLP_TEST_CTL_PATTERN_MASK 0x000000ffffffffff
+
+/* SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE */
+/* Description: Enables continuous send of data */
+#define SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE_SHFT 40
+#define SH_NI0_LLP_TEST_CTL_SEND_TEST_MODE_MASK 0x0000030000000000
+
+/* SH_NI0_LLP_TEST_CTL_WIRE_SEL */
+#define SH_NI0_LLP_TEST_CTL_WIRE_SEL_SHFT 44
+#define SH_NI0_LLP_TEST_CTL_WIRE_SEL_MASK 0x0003f00000000000
+
+/* SH_NI0_LLP_TEST_CTL_LFSR_MODE */
+#define SH_NI0_LLP_TEST_CTL_LFSR_MODE_SHFT 52
+#define SH_NI0_LLP_TEST_CTL_LFSR_MODE_MASK 0x0030000000000000
+
+/* SH_NI0_LLP_TEST_CTL_NOISE_MODE */
+#define SH_NI0_LLP_TEST_CTL_NOISE_MODE_SHFT 54
+#define SH_NI0_LLP_TEST_CTL_NOISE_MODE_MASK 0x00c0000000000000
+
+/* SH_NI0_LLP_TEST_CTL_ARMCAPTURE */
+/* Description: Enable Capture of Next MicroPacket */
+#define SH_NI0_LLP_TEST_CTL_ARMCAPTURE_SHFT 56
+#define SH_NI0_LLP_TEST_CTL_ARMCAPTURE_MASK 0x0100000000000000
+
+/* SH_NI0_LLP_TEST_CTL_CAPTURECBONLY */
+/* Description: Only capture a micropacket with a Check Byte error */
+#define SH_NI0_LLP_TEST_CTL_CAPTURECBONLY_SHFT 57
+#define SH_NI0_LLP_TEST_CTL_CAPTURECBONLY_MASK 0x0200000000000000
+
+/* SH_NI0_LLP_TEST_CTL_SENDCBERROR */
+/* Description: Sends a single error */
+#define SH_NI0_LLP_TEST_CTL_SENDCBERROR_SHFT 58
+#define SH_NI0_LLP_TEST_CTL_SENDCBERROR_MASK 0x0400000000000000
+
+/* SH_NI0_LLP_TEST_CTL_SENDSNERROR */
+/* Description: Sends a single sequence number error */
+#define SH_NI0_LLP_TEST_CTL_SENDSNERROR_SHFT 59
+#define SH_NI0_LLP_TEST_CTL_SENDSNERROR_MASK 0x0800000000000000
+
+/* SH_NI0_LLP_TEST_CTL_FAKESNERROR */
+/* Description: Causes receiver to pretend it saw a sn error */
+#define SH_NI0_LLP_TEST_CTL_FAKESNERROR_SHFT 60
+#define SH_NI0_LLP_TEST_CTL_FAKESNERROR_MASK 0x1000000000000000
+
+/* SH_NI0_LLP_TEST_CTL_CAPTURED */
+/* Description: Indicates a Valid Micropacket was captured */
+#define SH_NI0_LLP_TEST_CTL_CAPTURED_SHFT 61
+#define SH_NI0_LLP_TEST_CTL_CAPTURED_MASK 0x2000000000000000
+
+/* SH_NI0_LLP_TEST_CTL_CBERROR */
+/* Description: Indicates a Micropacket with a CB error was captured */
+#define SH_NI0_LLP_TEST_CTL_CBERROR_SHFT 62
+#define SH_NI0_LLP_TEST_CTL_CBERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_WD1" */
+/* low order 64-bit captured word */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_CAPT_WD1 0x0000000150000030
+#define SH_NI0_LLP_CAPT_WD1_MASK 0xffffffffffffffff
+#define SH_NI0_LLP_CAPT_WD1_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_CAPT_WD1_DATA */
+/* Description: low order 64-bit captured word */
+#define SH_NI0_LLP_CAPT_WD1_DATA_SHFT 0
+#define SH_NI0_LLP_CAPT_WD1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_WD2" */
+/* high order 64-bit captured word */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_CAPT_WD2 0x0000000150000038
+#define SH_NI0_LLP_CAPT_WD2_MASK 0xffffffffffffffff
+#define SH_NI0_LLP_CAPT_WD2_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_CAPT_WD2_DATA */
+/* Description: high order 64-bit captured word */
+#define SH_NI0_LLP_CAPT_WD2_DATA_SHFT 0
+#define SH_NI0_LLP_CAPT_WD2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_SBCB" */
+/* captured sideband, sequence, and CRC */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_CAPT_SBCB 0x0000000150000040
+#define SH_NI0_LLP_CAPT_SBCB_MASK 0x0000001fffffffff
+#define SH_NI0_LLP_CAPT_SBCB_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN */
+/* Description: sideband and sequence */
+#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_SHFT 0
+#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_MASK 0x000000000000ffff
+
+/* SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC */
+/* Description: CRC */
+#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC_SHFT 16
+#define SH_NI0_LLP_CAPT_SBCB_CAPTUREDRCVCRC_MASK 0x00000000ffff0000
+
+/* SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS */
+/* Description: All CB errors have been sent */
+#define SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS_SHFT 32
+#define SH_NI0_LLP_CAPT_SBCB_SENTALLCBERRORS_MASK 0x0000000100000000
+
+/* SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS */
+/* Description: All SN errors have been sent */
+#define SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS_SHFT 33
+#define SH_NI0_LLP_CAPT_SBCB_SENTALLSNERRORS_MASK 0x0000000200000000
+
+/* SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS */
+/* Description: All faked SN errors have been sent */
+#define SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS_SHFT 34
+#define SH_NI0_LLP_CAPT_SBCB_FAKEDALLSNERRORS_MASK 0x0000000400000000
+
+/* SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW */
+/* Description: wire charge counter overflowed, valid if llp_mode */
+/* enabled */
+#define SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW_SHFT 35
+#define SH_NI0_LLP_CAPT_SBCB_CHARGEOVERFLOW_MASK 0x0000000800000000
+
+/* SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW */
+/* Description: wire charge counter underflowed, valid if llp_mode */
+/* enabled */
+#define SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW_SHFT 36
+#define SH_NI0_LLP_CAPT_SBCB_CHARGEUNDERFLOW_MASK 0x0000001000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_ERR" */
+/* ==================================================================== */
+
+#define SH_NI0_LLP_ERR 0x0000000150000048
+#define SH_NI0_LLP_ERR_MASK 0x001fffffffffffff
+#define SH_NI0_LLP_ERR_INIT 0x0000000000000000
+
+/* SH_NI0_LLP_ERR_RX_SN_ERR_COUNT */
+/* Description: Counts the sequence number errors received */
+#define SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_SHFT 0
+#define SH_NI0_LLP_ERR_RX_SN_ERR_COUNT_MASK 0x00000000000000ff
+
+/* SH_NI0_LLP_ERR_RX_CB_ERR_COUNT */
+/* Description: Counts the check byte errors received */
+#define SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_SHFT 8
+#define SH_NI0_LLP_ERR_RX_CB_ERR_COUNT_MASK 0x000000000000ff00
+
+/* SH_NI0_LLP_ERR_RETRY_COUNT */
+/* Description: Counts the retries */
+#define SH_NI0_LLP_ERR_RETRY_COUNT_SHFT 16
+#define SH_NI0_LLP_ERR_RETRY_COUNT_MASK 0x0000000000ff0000
+
+/* SH_NI0_LLP_ERR_RETRY_TIMEOUT */
+/* Description: Indicates a retry timeout has occurred */
+#define SH_NI0_LLP_ERR_RETRY_TIMEOUT_SHFT 24
+#define SH_NI0_LLP_ERR_RETRY_TIMEOUT_MASK 0x0000000001000000
+
+/* SH_NI0_LLP_ERR_RCV_LINK_RESET */
+/* Description: Indicates a link reset has been received */
+#define SH_NI0_LLP_ERR_RCV_LINK_RESET_SHFT 25
+#define SH_NI0_LLP_ERR_RCV_LINK_RESET_MASK 0x0000000002000000
+
+/* SH_NI0_LLP_ERR_SQUASH */
+/* Description: Indicates a micropacket was squashed */
+#define SH_NI0_LLP_ERR_SQUASH_SHFT 26
+#define SH_NI0_LLP_ERR_SQUASH_MASK 0x0000000004000000
+
+/* SH_NI0_LLP_ERR_POWER_NOT_OK */
+/* Description: Detects and traps a loss of power_OK */
+#define SH_NI0_LLP_ERR_POWER_NOT_OK_SHFT 27
+#define SH_NI0_LLP_ERR_POWER_NOT_OK_MASK 0x0000000008000000
+
+/* SH_NI0_LLP_ERR_WIRE_CNT */
+/* Description: counts the errors detected on a single wire test */
+#define SH_NI0_LLP_ERR_WIRE_CNT_SHFT 28
+#define SH_NI0_LLP_ERR_WIRE_CNT_MASK 0x000ffffff0000000
+
+/* SH_NI0_LLP_ERR_WIRE_OVERFLOW */
+/* Description: wire_error_cnt has overflowed */
+#define SH_NI0_LLP_ERR_WIRE_OVERFLOW_SHFT 52
+#define SH_NI0_LLP_ERR_WIRE_OVERFLOW_MASK 0x0010000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_STAT" */
+/* This register describes the LLP status. */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_STAT 0x0000000150002000
+#define SH_NI1_LLP_STAT_MASK 0x000000000000000f
+#define SH_NI1_LLP_STAT_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_STAT_LINK_RESET_STATE */
+/* Description: Status of LLP link. */
+#define SH_NI1_LLP_STAT_LINK_RESET_STATE_SHFT 0
+#define SH_NI1_LLP_STAT_LINK_RESET_STATE_MASK 0x000000000000000f
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_RESET" */
+/* Writing issues a reset to the network interface */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_RESET 0x0000000150002008
+#define SH_NI1_LLP_RESET_MASK 0x0000000000000003
+#define SH_NI1_LLP_RESET_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_RESET_LINK */
+/* Description: Send Link Reset. Generates a pulse. */
+#define SH_NI1_LLP_RESET_LINK_SHFT 0
+#define SH_NI1_LLP_RESET_LINK_MASK 0x0000000000000001
+
+/* SH_NI1_LLP_RESET_WARM */
+/* Description: Send Warm Reset. Generates a pulse. */
+#define SH_NI1_LLP_RESET_WARM_SHFT 1
+#define SH_NI1_LLP_RESET_WARM_MASK 0x0000000000000002
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_RESET_EN" */
+/* Controls LLP warm reset propagation */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_RESET_EN 0x0000000150002010
+#define SH_NI1_LLP_RESET_EN_MASK 0x0000000000000001
+#define SH_NI1_LLP_RESET_EN_INIT 0x0000000000000001
+
+/* SH_NI1_LLP_RESET_EN_OK */
+/* Description: Allow LLP warm reset to reset SHUB */
+#define SH_NI1_LLP_RESET_EN_OK_SHFT 0
+#define SH_NI1_LLP_RESET_EN_OK_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CHAN_MODE" */
+/* Sets the signaling mode of LLP and channel */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_CHAN_MODE 0x0000000150002018
+#define SH_NI1_LLP_CHAN_MODE_MASK 0x000000000000001f
+#define SH_NI1_LLP_CHAN_MODE_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_CHAN_MODE_BITMODE32 */
+/* Description: Enables 32-bit (plus sideband) channel phits */
+#define SH_NI1_LLP_CHAN_MODE_BITMODE32_SHFT 0
+#define SH_NI1_LLP_CHAN_MODE_BITMODE32_MASK 0x0000000000000001
+
+/* SH_NI1_LLP_CHAN_MODE_AC_ENCODE */
+/* Description: Enables nearly dc-free encoding for AC-coupling */
+#define SH_NI1_LLP_CHAN_MODE_AC_ENCODE_SHFT 1
+#define SH_NI1_LLP_CHAN_MODE_AC_ENCODE_MASK 0x0000000000000002
+
+/* SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING */
+/* Description: Enables automatic tuning of channel skew. */
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING_SHFT 2
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_TUNING_MASK 0x0000000000000004
+
+/* SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD */
+/* Description: Enables remote fine tune updates */
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_SHFT 3
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_RMT_FT_UPD_MASK 0x0000000000000008
+
+/* SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD */
+/* Description: Enables quadrature clock in the pfssd */
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD_SHFT 4
+#define SH_NI1_LLP_CHAN_MODE_ENABLE_CLKQUAD_MASK 0x0000000000000010
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CONFIG" */
+/* Sets the configuration of LLP and channel */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_CONFIG 0x0000000150002020
+#define SH_NI1_LLP_CONFIG_MASK 0x0000003fffffffff
+#define SH_NI1_LLP_CONFIG_INIT 0x00000007fc6ffd00
+
+/* SH_NI1_LLP_CONFIG_MAXBURST */
+#define SH_NI1_LLP_CONFIG_MAXBURST_SHFT 0
+#define SH_NI1_LLP_CONFIG_MAXBURST_MASK 0x00000000000003ff
+
+/* SH_NI1_LLP_CONFIG_MAXRETRY */
+#define SH_NI1_LLP_CONFIG_MAXRETRY_SHFT 10
+#define SH_NI1_LLP_CONFIG_MAXRETRY_MASK 0x00000000000ffc00
+
+/* SH_NI1_LLP_CONFIG_NULLTIMEOUT */
+#define SH_NI1_LLP_CONFIG_NULLTIMEOUT_SHFT 20
+#define SH_NI1_LLP_CONFIG_NULLTIMEOUT_MASK 0x0000000003f00000
+
+/* SH_NI1_LLP_CONFIG_FTU_TIME */
+#define SH_NI1_LLP_CONFIG_FTU_TIME_SHFT 26
+#define SH_NI1_LLP_CONFIG_FTU_TIME_MASK 0x0000003ffc000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_TEST_CTL" */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_TEST_CTL 0x0000000150002028
+#define SH_NI1_LLP_TEST_CTL_MASK 0x7ff3f3ffffffffff
+#define SH_NI1_LLP_TEST_CTL_INIT 0x000000000a5fffff
+
+/* SH_NI1_LLP_TEST_CTL_PATTERN */
+/* Description: Send channel data pattern */
+#define SH_NI1_LLP_TEST_CTL_PATTERN_SHFT 0
+#define SH_NI1_LLP_TEST_CTL_PATTERN_MASK 0x000000ffffffffff
+
+/* SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE */
+/* Description: Enables continuous send of data */
+#define SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE_SHFT 40
+#define SH_NI1_LLP_TEST_CTL_SEND_TEST_MODE_MASK 0x0000030000000000
+
+/* SH_NI1_LLP_TEST_CTL_WIRE_SEL */
+#define SH_NI1_LLP_TEST_CTL_WIRE_SEL_SHFT 44
+#define SH_NI1_LLP_TEST_CTL_WIRE_SEL_MASK 0x0003f00000000000
+
+/* SH_NI1_LLP_TEST_CTL_LFSR_MODE */
+#define SH_NI1_LLP_TEST_CTL_LFSR_MODE_SHFT 52
+#define SH_NI1_LLP_TEST_CTL_LFSR_MODE_MASK 0x0030000000000000
+
+/* SH_NI1_LLP_TEST_CTL_NOISE_MODE */
+#define SH_NI1_LLP_TEST_CTL_NOISE_MODE_SHFT 54
+#define SH_NI1_LLP_TEST_CTL_NOISE_MODE_MASK 0x00c0000000000000
+
+/* SH_NI1_LLP_TEST_CTL_ARMCAPTURE */
+/* Description: Enable Capture of Next MicroPacket */
+#define SH_NI1_LLP_TEST_CTL_ARMCAPTURE_SHFT 56
+#define SH_NI1_LLP_TEST_CTL_ARMCAPTURE_MASK 0x0100000000000000
+
+/* SH_NI1_LLP_TEST_CTL_CAPTURECBONLY */
+/* Description: Only capture a micropacket with a Check Byte error */
+#define SH_NI1_LLP_TEST_CTL_CAPTURECBONLY_SHFT 57
+#define SH_NI1_LLP_TEST_CTL_CAPTURECBONLY_MASK 0x0200000000000000
+
+/* SH_NI1_LLP_TEST_CTL_SENDCBERROR */
+/* Description: Sends a single check byte error */
+#define SH_NI1_LLP_TEST_CTL_SENDCBERROR_SHFT 58
+#define SH_NI1_LLP_TEST_CTL_SENDCBERROR_MASK 0x0400000000000000
+
+/* SH_NI1_LLP_TEST_CTL_SENDSNERROR */
+/* Description: Sends a single sequence number error */
+#define SH_NI1_LLP_TEST_CTL_SENDSNERROR_SHFT 59
+#define SH_NI1_LLP_TEST_CTL_SENDSNERROR_MASK 0x0800000000000000
+
+/* SH_NI1_LLP_TEST_CTL_FAKESNERROR */
+/* Description: Causes receiver to pretend it saw a sn error */
+#define SH_NI1_LLP_TEST_CTL_FAKESNERROR_SHFT 60
+#define SH_NI1_LLP_TEST_CTL_FAKESNERROR_MASK 0x1000000000000000
+
+/* SH_NI1_LLP_TEST_CTL_CAPTURED */
+/* Description: Indicates a Valid Micropacket was captured */
+#define SH_NI1_LLP_TEST_CTL_CAPTURED_SHFT 61
+#define SH_NI1_LLP_TEST_CTL_CAPTURED_MASK 0x2000000000000000
+
+/* SH_NI1_LLP_TEST_CTL_CBERROR */
+/* Description: Indicates a Micropacket with a CB error was captured */
+#define SH_NI1_LLP_TEST_CTL_CBERROR_SHFT 62
+#define SH_NI1_LLP_TEST_CTL_CBERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_WD1" */
+/* low order 64-bit captured word */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_CAPT_WD1 0x0000000150002030
+#define SH_NI1_LLP_CAPT_WD1_MASK 0xffffffffffffffff
+#define SH_NI1_LLP_CAPT_WD1_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_CAPT_WD1_DATA */
+/* Description: low order 64-bit captured word */
+#define SH_NI1_LLP_CAPT_WD1_DATA_SHFT 0
+#define SH_NI1_LLP_CAPT_WD1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_WD2" */
+/* high order 64-bit captured word */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_CAPT_WD2 0x0000000150002038
+#define SH_NI1_LLP_CAPT_WD2_MASK 0xffffffffffffffff
+#define SH_NI1_LLP_CAPT_WD2_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_CAPT_WD2_DATA */
+/* Description: high order 64-bit captured word */
+#define SH_NI1_LLP_CAPT_WD2_DATA_SHFT 0
+#define SH_NI1_LLP_CAPT_WD2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_SBCB" */
+/* captured sideband, sequence, and CRC */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_CAPT_SBCB 0x0000000150002040
+#define SH_NI1_LLP_CAPT_SBCB_MASK 0x0000001fffffffff
+#define SH_NI1_LLP_CAPT_SBCB_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN */
+/* Description: sideband and sequence */
+#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_SHFT 0
+#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVSBSN_MASK 0x000000000000ffff
+
+/* SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC */
+/* Description: CRC */
+#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC_SHFT 16
+#define SH_NI1_LLP_CAPT_SBCB_CAPTUREDRCVCRC_MASK 0x00000000ffff0000
+
+/* SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS */
+/* Description: All CB errors have been sent */
+#define SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS_SHFT 32
+#define SH_NI1_LLP_CAPT_SBCB_SENTALLCBERRORS_MASK 0x0000000100000000
+
+/* SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS */
+/* Description: All SN errors have been sent */
+#define SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS_SHFT 33
+#define SH_NI1_LLP_CAPT_SBCB_SENTALLSNERRORS_MASK 0x0000000200000000
+
+/* SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS */
+/* Description: All faked SN errors have been sent */
+#define SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS_SHFT 34
+#define SH_NI1_LLP_CAPT_SBCB_FAKEDALLSNERRORS_MASK 0x0000000400000000
+
+/* SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW */
+/* Description: wire charge counter overflowed, valid if llp_mode */
+/* enabled */
+#define SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW_SHFT 35
+#define SH_NI1_LLP_CAPT_SBCB_CHARGEOVERFLOW_MASK 0x0000000800000000
+
+/* SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW */
+/* Description: wire charge counter underflowed, valid if llp_mode */
+/* enabled */
+#define SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW_SHFT 36
+#define SH_NI1_LLP_CAPT_SBCB_CHARGEUNDERFLOW_MASK 0x0000001000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_ERR" */
+/* ==================================================================== */
+
+#define SH_NI1_LLP_ERR 0x0000000150002048
+#define SH_NI1_LLP_ERR_MASK 0x001fffffffffffff
+#define SH_NI1_LLP_ERR_INIT 0x0000000000000000
+
+/* SH_NI1_LLP_ERR_RX_SN_ERR_COUNT */
+/* Description: Counts the sequence number errors received */
+#define SH_NI1_LLP_ERR_RX_SN_ERR_COUNT_SHFT 0
+#define SH_NI1_LLP_ERR_RX_SN_ERR_COUNT_MASK 0x00000000000000ff
+
+/* SH_NI1_LLP_ERR_RX_CB_ERR_COUNT */
+/* Description: Counts the check byte errors received */
+#define SH_NI1_LLP_ERR_RX_CB_ERR_COUNT_SHFT 8
+#define SH_NI1_LLP_ERR_RX_CB_ERR_COUNT_MASK 0x000000000000ff00
+
+/* SH_NI1_LLP_ERR_RETRY_COUNT */
+/* Description: Counts the retries */
+#define SH_NI1_LLP_ERR_RETRY_COUNT_SHFT 16
+#define SH_NI1_LLP_ERR_RETRY_COUNT_MASK 0x0000000000ff0000
+
+/* SH_NI1_LLP_ERR_RETRY_TIMEOUT */
+/* Description: Indicates a retry timeout has occurred */
+#define SH_NI1_LLP_ERR_RETRY_TIMEOUT_SHFT 24
+#define SH_NI1_LLP_ERR_RETRY_TIMEOUT_MASK 0x0000000001000000
+
+/* SH_NI1_LLP_ERR_RCV_LINK_RESET */
+/* Description: Indicates a link reset has been received */
+#define SH_NI1_LLP_ERR_RCV_LINK_RESET_SHFT 25
+#define SH_NI1_LLP_ERR_RCV_LINK_RESET_MASK 0x0000000002000000
+
+/* SH_NI1_LLP_ERR_SQUASH */
+/* Description: Indicates a micropacket was squashed */
+#define SH_NI1_LLP_ERR_SQUASH_SHFT 26
+#define SH_NI1_LLP_ERR_SQUASH_MASK 0x0000000004000000
+
+/* SH_NI1_LLP_ERR_POWER_NOT_OK */
+/* Description: Detects and traps a loss of power_OK */
+#define SH_NI1_LLP_ERR_POWER_NOT_OK_SHFT 27
+#define SH_NI1_LLP_ERR_POWER_NOT_OK_MASK 0x0000000008000000
+
+/* SH_NI1_LLP_ERR_WIRE_CNT */
+/* Description: counts the errors detected on a single wire test */
+#define SH_NI1_LLP_ERR_WIRE_CNT_SHFT 28
+#define SH_NI1_LLP_ERR_WIRE_CNT_MASK 0x000ffffff0000000
+
+/* SH_NI1_LLP_ERR_WIRE_OVERFLOW */
+/* Description: wire_error_cnt has overflowed */
+#define SH_NI1_LLP_ERR_WIRE_OVERFLOW_SHFT 52
+#define SH_NI1_LLP_ERR_WIRE_OVERFLOW_MASK 0x0010000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_TO_FIFO02_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW 0x0000000150001010
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNNI0_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_TO_FIFO13_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW 0x0000000150001020
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNNI0_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_DEBIT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LLP_DEBIT_FLOW 0x0000000150001030
+#define SH_XNNI0_LLP_DEBIT_FLOW_MASK 0x1f1f1f1f1f1f1f1f
+#define SH_XNNI0_LLP_DEBIT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_SHFT 0
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_MASK 0x000000000000001f
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_SHFT 8
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_MASK 0x0000000000001f00
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN */
+/* Description: vc1 debit dynamic value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_SHFT 16
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_MASK 0x00000000001f0000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP */
+/* Description: vc1 debit captured value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_SHFT 24
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_MASK 0x000000001f000000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_SHFT 32
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_MASK 0x0000001f00000000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_SHFT 40
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_MASK 0x00001f0000000000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN */
+/* Description: vc3 debit dynamic value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_SHFT 48
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_MASK 0x001f000000000000
+
+/* SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP */
+/* Description: vc3 debit captured value */
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_SHFT 56
+#define SH_XNNI0_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_MASK 0x1f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_0_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LINK_0_FLOW 0x0000000150001040
+#define SH_XNNI0_LINK_0_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI0_LINK_0_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on vc0 from debit cntr */
+#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 Limit Test */
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST_SHFT 8
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN */
+/* Description: Dynamic vc0 credit value */
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN_SHFT 16
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP */
+/* Description: Captured vc0 credit */
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP_SHFT 24
+#define SH_XNNI0_LINK_0_FLOW_CREDIT_VC0_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_1_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LINK_1_FLOW 0x0000000150001050
+#define SH_XNNI0_LINK_1_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI0_LINK_1_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD */
+/* Description: vc1 withhold */
+#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0
+#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED */
+/* Description: Force Credit on vc1 from debit cntr */
+#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST */
+/* Description: vc1 Limit Test */
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST_SHFT 8
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN */
+/* Description: Dynamic vc1 credit value */
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN_SHFT 16
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP */
+/* Description: Captured vc1 credit */
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP_SHFT 24
+#define SH_XNNI0_LINK_1_FLOW_CREDIT_VC1_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_2_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LINK_2_FLOW 0x0000000150001060
+#define SH_XNNI0_LINK_2_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI0_LINK_2_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0
+#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on vc2 from debit cntr */
+#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 Limit Test */
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST_SHFT 8
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN */
+/* Description: Dynamic vc2 credit value */
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN_SHFT 16
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP */
+/* Description: Captured vc2 credit */
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP_SHFT 24
+#define SH_XNNI0_LINK_2_FLOW_CREDIT_VC2_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_3_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_LINK_3_FLOW 0x0000000150001070
+#define SH_XNNI0_LINK_3_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI0_LINK_3_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD */
+/* Description: vc3 withhold */
+#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0
+#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED */
+/* Description: Force Credit on vc3 from debit cntr */
+#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7
+#define SH_XNNI0_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST */
+/* Description: vc3 Limit Test */
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST_SHFT 8
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN */
+/* Description: Dynamic vc3 credit value */
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN_SHFT 16
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP */
+/* Description: Captured vc3 credit */
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP_SHFT 24
+#define SH_XNNI0_LINK_3_FLOW_CREDIT_VC3_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_TO_FIFO02_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW 0x0000000150003010
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNNI1_LLP_TO_FIFO02_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_TO_FIFO13_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW 0x0000000150003020
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNNI1_LLP_TO_FIFO13_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_DEBIT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LLP_DEBIT_FLOW 0x0000000150003030
+#define SH_XNNI1_LLP_DEBIT_FLOW_MASK 0x1f1f1f1f1f1f1f1f
+#define SH_XNNI1_LLP_DEBIT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_SHFT 0
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_DYN_MASK 0x000000000000001f
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_SHFT 8
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC0_CAP_MASK 0x0000000000001f00
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN */
+/* Description: vc1 debit dynamic value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_SHFT 16
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_DYN_MASK 0x00000000001f0000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP */
+/* Description: vc1 debit captured value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_SHFT 24
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC1_CAP_MASK 0x000000001f000000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_SHFT 32
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_DYN_MASK 0x0000001f00000000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_SHFT 40
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC2_CAP_MASK 0x00001f0000000000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN */
+/* Description: vc3 debit dynamic value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_SHFT 48
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_DYN_MASK 0x001f000000000000
+
+/* SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP */
+/* Description: vc3 debit captured value */
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_SHFT 56
+#define SH_XNNI1_LLP_DEBIT_FLOW_DEBIT_VC3_CAP_MASK 0x1f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_0_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LINK_0_FLOW 0x0000000150003040
+#define SH_XNNI1_LINK_0_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI1_LINK_0_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on vc0 from debit cntr */
+#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LINK_0_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 Limit Test */
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST_SHFT 8
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN */
+/* Description: Dynamic vc0 credit value */
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN_SHFT 16
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP */
+/* Description: Captured vc0 credit */
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP_SHFT 24
+#define SH_XNNI1_LINK_0_FLOW_CREDIT_VC0_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_1_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LINK_1_FLOW 0x0000000150003050
+#define SH_XNNI1_LINK_1_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI1_LINK_1_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD */
+/* Description: vc1 withhold */
+#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0
+#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED */
+/* Description: Force Credit on vc1 from debit cntr */
+#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LINK_1_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST */
+/* Description: vc1 Limit Test */
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST_SHFT 8
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN */
+/* Description: Dynamic vc1 credit value */
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN_SHFT 16
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP */
+/* Description: Captured vc1 credit */
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP_SHFT 24
+#define SH_XNNI1_LINK_1_FLOW_CREDIT_VC1_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_2_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LINK_2_FLOW 0x0000000150003060
+#define SH_XNNI1_LINK_2_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI1_LINK_2_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0
+#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on vc2 from debit cntr */
+#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LINK_2_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 Limit Test */
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST_SHFT 8
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN */
+/* Description: Dynamic vc2 credit value */
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN_SHFT 16
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP */
+/* Description: Captured vc2 credit */
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP_SHFT 24
+#define SH_XNNI1_LINK_2_FLOW_CREDIT_VC2_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_3_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_LINK_3_FLOW 0x0000000150003070
+#define SH_XNNI1_LINK_3_FLOW_MASK 0x000000007f7f7fbf
+#define SH_XNNI1_LINK_3_FLOW_INIT 0x0000000000001800
+
+/* SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD */
+/* Description: vc3 withhold */
+#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0
+#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED */
+/* Description: Force Credit on vc3 from debit cntr */
+#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7
+#define SH_XNNI1_LINK_3_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST */
+/* Description: vc3 Limit Test */
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST_SHFT 8
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_TEST_MASK 0x0000000000007f00
+
+/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN */
+/* Description: Dynamic vc3 credit value */
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN_SHFT 16
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_DYN_MASK 0x00000000007f0000
+
+/* SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP */
+/* Description: Captured vc3 credit */
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP_SHFT 24
+#define SH_XNNI1_LINK_3_FLOW_CREDIT_VC3_CAP_MASK 0x000000007f000000
+
+/* ==================================================================== */
+/* Register "SH_IILB_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+#define SH_IILB_LOCAL_TABLE 0x0000000150020000
+#define SH_IILB_LOCAL_TABLE_MASK 0x800000000000003f
+#define SH_IILB_LOCAL_TABLE_MEMDEPTH 128
+#define SH_IILB_LOCAL_TABLE_INIT 0x0000000000000000
+
+/* SH_IILB_LOCAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_IILB_LOCAL_TABLE_DIR0_SHFT 0
+#define SH_IILB_LOCAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_IILB_LOCAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_IILB_LOCAL_TABLE_V0_SHFT 4
+#define SH_IILB_LOCAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_IILB_LOCAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_IILB_LOCAL_TABLE_NI_SEL0_SHFT 5
+#define SH_IILB_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_IILB_LOCAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_IILB_LOCAL_TABLE_VALID_SHFT 63
+#define SH_IILB_LOCAL_TABLE_VALID_MASK 0x8000000000000000
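
The `_SHFT`/`_MASK` pairs above follow the usual convention for software field access: mask the field out of the 64-bit register value, then shift it down (and the reverse for composing a value). A minimal sketch; `sh_field_get`/`sh_field_set` are illustrative helpers, not part of this header, and the constants are repeated locally so the example is self-contained:

```c
#include <stdint.h>

/* Generic accessors for the _SHFT/_MASK convention used throughout
 * this header.  Illustrative helpers only, not part of the header. */
static inline uint64_t sh_field_get(uint64_t reg, uint64_t mask, unsigned shft)
{
	/* Isolate the field bits, then right-justify them. */
	return (reg & mask) >> shft;
}

static inline uint64_t sh_field_set(uint64_t reg, uint64_t mask, unsigned shft,
				    uint64_t val)
{
	/* Clear the field, then merge in the new (masked) value. */
	return (reg & ~mask) | ((val << shft) & mask);
}

/* Local copies of the SH_IILB_LOCAL_TABLE field layout for the example. */
#define EX_DIR0_SHFT  0
#define EX_DIR0_MASK  0x000000000000000fULL
#define EX_VALID_SHFT 63
#define EX_VALID_MASK 0x8000000000000000ULL
```

With these helpers, a raw table entry such as `0x8000000000000005` decodes to DIR0 = 5 with the VALID bit set.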
+
+/* ==================================================================== */
+/* Register "SH_IILB_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+#define SH_IILB_GLOBAL_TABLE 0x0000000150020400
+#define SH_IILB_GLOBAL_TABLE_MASK 0x800000000000003f
+#define SH_IILB_GLOBAL_TABLE_MEMDEPTH 16
+#define SH_IILB_GLOBAL_TABLE_INIT 0x0000000000000000
+
+/* SH_IILB_GLOBAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_IILB_GLOBAL_TABLE_DIR0_SHFT 0
+#define SH_IILB_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_IILB_GLOBAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_IILB_GLOBAL_TABLE_V0_SHFT 4
+#define SH_IILB_GLOBAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_IILB_GLOBAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_IILB_GLOBAL_TABLE_NI_SEL0_SHFT 5
+#define SH_IILB_GLOBAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_IILB_GLOBAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_IILB_GLOBAL_TABLE_VALID_SHFT 63
+#define SH_IILB_GLOBAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_IILB_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+#define SH_IILB_OVER_RIDE_TABLE 0x0000000150020480
+#define SH_IILB_OVER_RIDE_TABLE_MASK 0x800000000000003f
+#define SH_IILB_OVER_RIDE_TABLE_INIT 0x8000000000000000
+
+/* SH_IILB_OVER_RIDE_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_IILB_OVER_RIDE_TABLE_DIR0_SHFT 0
+#define SH_IILB_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_IILB_OVER_RIDE_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_IILB_OVER_RIDE_TABLE_V0_SHFT 4
+#define SH_IILB_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_IILB_OVER_RIDE_TABLE_NI_SEL0 */
+/* Description: ni select */
+#define SH_IILB_OVER_RIDE_TABLE_NI_SEL0_SHFT 5
+#define SH_IILB_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_IILB_OVER_RIDE_TABLE_ENABLE */
+/* Description: Indicates that this entry is enabled */
+#define SH_IILB_OVER_RIDE_TABLE_ENABLE_SHFT 63
+#define SH_IILB_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_IILB_RSP_PLANE_HINT" */
+/* If enabled, invert incoming response only plane hint bit before lo */
+/* (i.e. before the table lookup) */
+/* ==================================================================== */
+
+#define SH_IILB_RSP_PLANE_HINT 0x0000000150020488
+#define SH_IILB_RSP_PLANE_HINT_MASK 0x0000000000000000
+#define SH_IILB_RSP_PLANE_HINT_INIT 0x0000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+#define SH_PI_LOCAL_TABLE 0x0000000150021000
+#define SH_PI_LOCAL_TABLE_MASK 0x8000000000003f3f
+#define SH_PI_LOCAL_TABLE_MEMDEPTH 128
+#define SH_PI_LOCAL_TABLE_INIT 0x0000000000000000
+
+/* SH_PI_LOCAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_PI_LOCAL_TABLE_DIR0_SHFT 0
+#define SH_PI_LOCAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_PI_LOCAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_LOCAL_TABLE_V0_SHFT 4
+#define SH_PI_LOCAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_PI_LOCAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_PI_LOCAL_TABLE_NI_SEL0_SHFT 5
+#define SH_PI_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_PI_LOCAL_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_PI_LOCAL_TABLE_DIR1_SHFT 8
+#define SH_PI_LOCAL_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_PI_LOCAL_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_LOCAL_TABLE_V1_SHFT 12
+#define SH_PI_LOCAL_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_PI_LOCAL_TABLE_NI_SEL1 */
+/* Description: ni select for plane-hint 1 */
+#define SH_PI_LOCAL_TABLE_NI_SEL1_SHFT 13
+#define SH_PI_LOCAL_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_PI_LOCAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_PI_LOCAL_TABLE_VALID_SHFT 63
+#define SH_PI_LOCAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+#define SH_PI_GLOBAL_TABLE 0x0000000150021400
+#define SH_PI_GLOBAL_TABLE_MASK 0x8000000000003f3f
+#define SH_PI_GLOBAL_TABLE_MEMDEPTH 16
+#define SH_PI_GLOBAL_TABLE_INIT 0x0000000000000000
+
+/* SH_PI_GLOBAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_PI_GLOBAL_TABLE_DIR0_SHFT 0
+#define SH_PI_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_PI_GLOBAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_GLOBAL_TABLE_V0_SHFT 4
+#define SH_PI_GLOBAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_PI_GLOBAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_PI_GLOBAL_TABLE_NI_SEL0_SHFT 5
+#define SH_PI_GLOBAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_PI_GLOBAL_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_PI_GLOBAL_TABLE_DIR1_SHFT 8
+#define SH_PI_GLOBAL_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_PI_GLOBAL_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_GLOBAL_TABLE_V1_SHFT 12
+#define SH_PI_GLOBAL_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_PI_GLOBAL_TABLE_NI_SEL1 */
+/* Description: ni select for plane-hint 1 */
+#define SH_PI_GLOBAL_TABLE_NI_SEL1_SHFT 13
+#define SH_PI_GLOBAL_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_PI_GLOBAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_PI_GLOBAL_TABLE_VALID_SHFT 63
+#define SH_PI_GLOBAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+#define SH_PI_OVER_RIDE_TABLE 0x0000000150021480
+#define SH_PI_OVER_RIDE_TABLE_MASK 0x8000000000003f3f
+#define SH_PI_OVER_RIDE_TABLE_INIT 0x8000000000002000
+
+/* SH_PI_OVER_RIDE_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_PI_OVER_RIDE_TABLE_DIR0_SHFT 0
+#define SH_PI_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_PI_OVER_RIDE_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_OVER_RIDE_TABLE_V0_SHFT 4
+#define SH_PI_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_PI_OVER_RIDE_TABLE_NI_SEL0 */
+/* Description: ni select */
+#define SH_PI_OVER_RIDE_TABLE_NI_SEL0_SHFT 5
+#define SH_PI_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_PI_OVER_RIDE_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_PI_OVER_RIDE_TABLE_DIR1_SHFT 8
+#define SH_PI_OVER_RIDE_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_PI_OVER_RIDE_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_PI_OVER_RIDE_TABLE_V1_SHFT 12
+#define SH_PI_OVER_RIDE_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_PI_OVER_RIDE_TABLE_NI_SEL1 */
+/* Description: ni select */
+#define SH_PI_OVER_RIDE_TABLE_NI_SEL1_SHFT 13
+#define SH_PI_OVER_RIDE_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_PI_OVER_RIDE_TABLE_ENABLE */
+/* Description: Indicates that this entry is enabled */
+#define SH_PI_OVER_RIDE_TABLE_ENABLE_SHFT 63
+#define SH_PI_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_RSP_PLANE_HINT" */
+/* If enabled, invert incoming response only plane hint bit before lo */
+/* (i.e. before the table lookup) */
+/* ==================================================================== */
+
+#define SH_PI_RSP_PLANE_HINT 0x0000000150021488
+#define SH_PI_RSP_PLANE_HINT_MASK 0x0000000000000001
+#define SH_PI_RSP_PLANE_HINT_INIT 0x0000000000000000
+
+/* SH_PI_RSP_PLANE_HINT_INVERT */
+/* Description: Invert Response Plane Hint */
+#define SH_PI_RSP_PLANE_HINT_INVERT_SHFT 0
+#define SH_PI_RSP_PLANE_HINT_INVERT_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_NI0_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+#define SH_NI0_LOCAL_TABLE 0x0000000150022000
+#define SH_NI0_LOCAL_TABLE_MASK 0x800000000000001f
+#define SH_NI0_LOCAL_TABLE_MEMDEPTH 128
+#define SH_NI0_LOCAL_TABLE_INIT 0x0000000000000000
+
+/* SH_NI0_LOCAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI0_LOCAL_TABLE_DIR0_SHFT 0
+#define SH_NI0_LOCAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI0_LOCAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI0_LOCAL_TABLE_V0_SHFT 4
+#define SH_NI0_LOCAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI0_LOCAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_NI0_LOCAL_TABLE_VALID_SHFT 63
+#define SH_NI0_LOCAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+#define SH_NI0_GLOBAL_TABLE 0x0000000150022400
+#define SH_NI0_GLOBAL_TABLE_MASK 0x800000000000001f
+#define SH_NI0_GLOBAL_TABLE_MEMDEPTH 16
+#define SH_NI0_GLOBAL_TABLE_INIT 0x0000000000000000
+
+/* SH_NI0_GLOBAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI0_GLOBAL_TABLE_DIR0_SHFT 0
+#define SH_NI0_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI0_GLOBAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI0_GLOBAL_TABLE_V0_SHFT 4
+#define SH_NI0_GLOBAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI0_GLOBAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_NI0_GLOBAL_TABLE_VALID_SHFT 63
+#define SH_NI0_GLOBAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+#define SH_NI0_OVER_RIDE_TABLE 0x0000000150022480
+#define SH_NI0_OVER_RIDE_TABLE_MASK 0x800000000000001f
+#define SH_NI0_OVER_RIDE_TABLE_INIT 0x8000000000000000
+
+/* SH_NI0_OVER_RIDE_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI0_OVER_RIDE_TABLE_DIR0_SHFT 0
+#define SH_NI0_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI0_OVER_RIDE_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI0_OVER_RIDE_TABLE_V0_SHFT 4
+#define SH_NI0_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI0_OVER_RIDE_TABLE_ENABLE */
+/* Description: Indicates that this entry is enabled */
+#define SH_NI0_OVER_RIDE_TABLE_ENABLE_SHFT 63
+#define SH_NI0_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_RSP_PLANE_HINT" */
+/* If enabled, invert incoming response only plane hint bit before lo */
+/* (i.e. before the table lookup) */
+/* ==================================================================== */
+
+#define SH_NI0_RSP_PLANE_HINT 0x0000000150022488
+#define SH_NI0_RSP_PLANE_HINT_MASK 0x0000000000000000
+#define SH_NI0_RSP_PLANE_HINT_INIT 0x0000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+#define SH_NI1_LOCAL_TABLE 0x0000000150023000
+#define SH_NI1_LOCAL_TABLE_MASK 0x800000000000001f
+#define SH_NI1_LOCAL_TABLE_MEMDEPTH 128
+#define SH_NI1_LOCAL_TABLE_INIT 0x0000000000000000
+
+/* SH_NI1_LOCAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI1_LOCAL_TABLE_DIR0_SHFT 0
+#define SH_NI1_LOCAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI1_LOCAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI1_LOCAL_TABLE_V0_SHFT 4
+#define SH_NI1_LOCAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI1_LOCAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_NI1_LOCAL_TABLE_VALID_SHFT 63
+#define SH_NI1_LOCAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+#define SH_NI1_GLOBAL_TABLE 0x0000000150023400
+#define SH_NI1_GLOBAL_TABLE_MASK 0x800000000000001f
+#define SH_NI1_GLOBAL_TABLE_MEMDEPTH 16
+#define SH_NI1_GLOBAL_TABLE_INIT 0x0000000000000000
+
+/* SH_NI1_GLOBAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI1_GLOBAL_TABLE_DIR0_SHFT 0
+#define SH_NI1_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI1_GLOBAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI1_GLOBAL_TABLE_V0_SHFT 4
+#define SH_NI1_GLOBAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI1_GLOBAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_NI1_GLOBAL_TABLE_VALID_SHFT 63
+#define SH_NI1_GLOBAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+#define SH_NI1_OVER_RIDE_TABLE 0x0000000150023480
+#define SH_NI1_OVER_RIDE_TABLE_MASK 0x800000000000001f
+#define SH_NI1_OVER_RIDE_TABLE_INIT 0x8000000000000000
+
+/* SH_NI1_OVER_RIDE_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_NI1_OVER_RIDE_TABLE_DIR0_SHFT 0
+#define SH_NI1_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_NI1_OVER_RIDE_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_NI1_OVER_RIDE_TABLE_V0_SHFT 4
+#define SH_NI1_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_NI1_OVER_RIDE_TABLE_ENABLE */
+/* Description: Indicates that this entry is enabled */
+#define SH_NI1_OVER_RIDE_TABLE_ENABLE_SHFT 63
+#define SH_NI1_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_RSP_PLANE_HINT" */
+/* If enabled, invert incoming response only plane hint bit before lo */
+/* (i.e. before the table lookup) */
+/* ==================================================================== */
+
+#define SH_NI1_RSP_PLANE_HINT 0x0000000150023488
+#define SH_NI1_RSP_PLANE_HINT_MASK 0x0000000000000000
+#define SH_NI1_RSP_PLANE_HINT_INIT 0x0000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+#define SH_MD_LOCAL_TABLE 0x0000000150024000
+#define SH_MD_LOCAL_TABLE_MASK 0x8000000000003f3f
+#define SH_MD_LOCAL_TABLE_MEMDEPTH 128
+#define SH_MD_LOCAL_TABLE_INIT 0x0000000000000000
+
+/* SH_MD_LOCAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_MD_LOCAL_TABLE_DIR0_SHFT 0
+#define SH_MD_LOCAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_MD_LOCAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_LOCAL_TABLE_V0_SHFT 4
+#define SH_MD_LOCAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_MD_LOCAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_MD_LOCAL_TABLE_NI_SEL0_SHFT 5
+#define SH_MD_LOCAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_MD_LOCAL_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_MD_LOCAL_TABLE_DIR1_SHFT 8
+#define SH_MD_LOCAL_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_MD_LOCAL_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_LOCAL_TABLE_V1_SHFT 12
+#define SH_MD_LOCAL_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_MD_LOCAL_TABLE_NI_SEL1 */
+/* Description: ni select for plane-hint 1 */
+#define SH_MD_LOCAL_TABLE_NI_SEL1_SHFT 13
+#define SH_MD_LOCAL_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_MD_LOCAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_MD_LOCAL_TABLE_VALID_SHFT 63
+#define SH_MD_LOCAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+#define SH_MD_GLOBAL_TABLE 0x0000000150024400
+#define SH_MD_GLOBAL_TABLE_MASK 0x8000000000003f3f
+#define SH_MD_GLOBAL_TABLE_MEMDEPTH 16
+#define SH_MD_GLOBAL_TABLE_INIT 0x0000000000000000
+
+/* SH_MD_GLOBAL_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_MD_GLOBAL_TABLE_DIR0_SHFT 0
+#define SH_MD_GLOBAL_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_MD_GLOBAL_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_GLOBAL_TABLE_V0_SHFT 4
+#define SH_MD_GLOBAL_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_MD_GLOBAL_TABLE_NI_SEL0 */
+/* Description: ni select for requests */
+#define SH_MD_GLOBAL_TABLE_NI_SEL0_SHFT 5
+#define SH_MD_GLOBAL_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_MD_GLOBAL_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_MD_GLOBAL_TABLE_DIR1_SHFT 8
+#define SH_MD_GLOBAL_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_MD_GLOBAL_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_GLOBAL_TABLE_V1_SHFT 12
+#define SH_MD_GLOBAL_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_MD_GLOBAL_TABLE_NI_SEL1 */
+/* Description: ni select for plane-hint 1 */
+#define SH_MD_GLOBAL_TABLE_NI_SEL1_SHFT 13
+#define SH_MD_GLOBAL_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_MD_GLOBAL_TABLE_VALID */
+/* Description: Indicates that this entry is valid */
+#define SH_MD_GLOBAL_TABLE_VALID_SHFT 63
+#define SH_MD_GLOBAL_TABLE_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+#define SH_MD_OVER_RIDE_TABLE 0x0000000150024480
+#define SH_MD_OVER_RIDE_TABLE_MASK 0x8000000000003f3f
+#define SH_MD_OVER_RIDE_TABLE_INIT 0x8000000000002000
+
+/* SH_MD_OVER_RIDE_TABLE_DIR0 */
+/* Description: Direction field for next chip */
+#define SH_MD_OVER_RIDE_TABLE_DIR0_SHFT 0
+#define SH_MD_OVER_RIDE_TABLE_DIR0_MASK 0x000000000000000f
+
+/* SH_MD_OVER_RIDE_TABLE_V0 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_OVER_RIDE_TABLE_V0_SHFT 4
+#define SH_MD_OVER_RIDE_TABLE_V0_MASK 0x0000000000000010
+
+/* SH_MD_OVER_RIDE_TABLE_NI_SEL0 */
+/* Description: ni select */
+#define SH_MD_OVER_RIDE_TABLE_NI_SEL0_SHFT 5
+#define SH_MD_OVER_RIDE_TABLE_NI_SEL0_MASK 0x0000000000000020
+
+/* SH_MD_OVER_RIDE_TABLE_DIR1 */
+/* Description: Direction field for next chip */
+#define SH_MD_OVER_RIDE_TABLE_DIR1_SHFT 8
+#define SH_MD_OVER_RIDE_TABLE_DIR1_MASK 0x0000000000000f00
+
+/* SH_MD_OVER_RIDE_TABLE_V1 */
+/* Description: Low bit of virtual channel for next chip */
+#define SH_MD_OVER_RIDE_TABLE_V1_SHFT 12
+#define SH_MD_OVER_RIDE_TABLE_V1_MASK 0x0000000000001000
+
+/* SH_MD_OVER_RIDE_TABLE_NI_SEL1 */
+/* Description: ni select */
+#define SH_MD_OVER_RIDE_TABLE_NI_SEL1_SHFT 13
+#define SH_MD_OVER_RIDE_TABLE_NI_SEL1_MASK 0x0000000000002000
+
+/* SH_MD_OVER_RIDE_TABLE_ENABLE */
+/* Description: Indicates that this entry is enabled */
+#define SH_MD_OVER_RIDE_TABLE_ENABLE_SHFT 63
+#define SH_MD_OVER_RIDE_TABLE_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_RSP_PLANE_HINT" */
+/* If enabled, invert incoming response only plane hint bit before lo */
+/* (i.e. before the table lookup) */
+/* ==================================================================== */
+
+#define SH_MD_RSP_PLANE_HINT 0x0000000150024488
+#define SH_MD_RSP_PLANE_HINT_MASK 0x0000000000000001
+#define SH_MD_RSP_PLANE_HINT_INIT 0x0000000000000000
+
+/* SH_MD_RSP_PLANE_HINT_INVERT */
+/* Description: Invert Response Plane Hint */
+#define SH_MD_RSP_PLANE_HINT_INVERT_SHFT 0
+#define SH_MD_RSP_PLANE_HINT_INVERT_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_LB_LIQ_CTL" */
+/* Local Block LIQ Control */
+/* ==================================================================== */
+
+#define SH_LB_LIQ_CTL 0x0000000110040000
+#define SH_LB_LIQ_CTL_MASK 0x0000000000070f1f
+#define SH_LB_LIQ_CTL_INIT 0x0000000000000000
+
+/* SH_LB_LIQ_CTL_LIQ_REQ_CTL */
+/* Description: LIQ Request Control */
+#define SH_LB_LIQ_CTL_LIQ_REQ_CTL_SHFT 0
+#define SH_LB_LIQ_CTL_LIQ_REQ_CTL_MASK 0x000000000000001f
+
+/* SH_LB_LIQ_CTL_LIQ_RPL_CTL */
+/* Description: LIQ Reply Control */
+#define SH_LB_LIQ_CTL_LIQ_RPL_CTL_SHFT 8
+#define SH_LB_LIQ_CTL_LIQ_RPL_CTL_MASK 0x0000000000000f00
+
+/* SH_LB_LIQ_CTL_FORCE_RQ_CREDIT */
+/* Description: Force request credit */
+#define SH_LB_LIQ_CTL_FORCE_RQ_CREDIT_SHFT 16
+#define SH_LB_LIQ_CTL_FORCE_RQ_CREDIT_MASK 0x0000000000010000
+
+/* SH_LB_LIQ_CTL_FORCE_RP_CREDIT */
+/* Description: Force reply credit */
+#define SH_LB_LIQ_CTL_FORCE_RP_CREDIT_SHFT 17
+#define SH_LB_LIQ_CTL_FORCE_RP_CREDIT_MASK 0x0000000000020000
+
+/* SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT */
+/* Description: Force linvv credit */
+#define SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT_SHFT 18
+#define SH_LB_LIQ_CTL_FORCE_LINVV_CREDIT_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_LB_LOQ_CTL" */
+/* Local Block LOQ Control */
+/* ==================================================================== */
+
+#define SH_LB_LOQ_CTL 0x0000000110040080
+#define SH_LB_LOQ_CTL_MASK 0x0000000000000003
+#define SH_LB_LOQ_CTL_INIT 0x0000000000000000
+
+/* SH_LB_LOQ_CTL_LOQ_REQ_CTL */
+/* Description: LOQ Request Control */
+#define SH_LB_LOQ_CTL_LOQ_REQ_CTL_SHFT 0
+#define SH_LB_LOQ_CTL_LOQ_REQ_CTL_MASK 0x0000000000000001
+
+/* SH_LB_LOQ_CTL_LOQ_RPL_CTL */
+/* Description: LOQ Reply Control */
+#define SH_LB_LOQ_CTL_LOQ_RPL_CTL_SHFT 1
+#define SH_LB_LOQ_CTL_LOQ_RPL_CTL_MASK 0x0000000000000002
+
+/* ==================================================================== */
+/* Register "SH_LB_MAX_REP_CREDIT_CNT" */
+/* Maximum number of reply credits from XN */
+/* ==================================================================== */
+
+#define SH_LB_MAX_REP_CREDIT_CNT 0x0000000110040100
+#define SH_LB_MAX_REP_CREDIT_CNT_MASK 0x000000000000001f
+#define SH_LB_MAX_REP_CREDIT_CNT_INIT 0x000000000000001f
+
+/* SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT */
+/* Description: Max reply credits */
+#define SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT_SHFT 0
+#define SH_LB_MAX_REP_CREDIT_CNT_MAX_CNT_MASK 0x000000000000001f
+
+/* ==================================================================== */
+/* Register "SH_LB_MAX_REQ_CREDIT_CNT" */
+/* Maximum number of request credits from XN */
+/* ==================================================================== */
+
+#define SH_LB_MAX_REQ_CREDIT_CNT 0x0000000110040180
+#define SH_LB_MAX_REQ_CREDIT_CNT_MASK 0x000000000000001f
+#define SH_LB_MAX_REQ_CREDIT_CNT_INIT 0x000000000000001f
+
+/* SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT */
+/* Description: Max request credits */
+#define SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT_SHFT 0
+#define SH_LB_MAX_REQ_CREDIT_CNT_MAX_CNT_MASK 0x000000000000001f
+
+/* ==================================================================== */
+/* Register "SH_PIO_TIME_OUT" */
+/* Local Block PIO time out value */
+/* ==================================================================== */
+
+#define SH_PIO_TIME_OUT 0x0000000110040200
+#define SH_PIO_TIME_OUT_MASK 0x000000000000ffff
+#define SH_PIO_TIME_OUT_INIT 0x0000000000000400
+
+/* SH_PIO_TIME_OUT_VALUE */
+/* Description: PIO time out value */
+#define SH_PIO_TIME_OUT_VALUE_SHFT 0
+#define SH_PIO_TIME_OUT_VALUE_MASK 0x000000000000ffff
+
+/* ==================================================================== */
+/* Register "SH_PIO_NACK_RESET" */
+/* Local Block PIO Reset for nack counters */
+/* ==================================================================== */
+
+#define SH_PIO_NACK_RESET 0x0000000110040280
+#define SH_PIO_NACK_RESET_MASK 0x0000000000000001
+#define SH_PIO_NACK_RESET_INIT 0x0000000000000000
+
+/* SH_PIO_NACK_RESET_PULSE */
+/* Description: PIO nack counter reset */
+#define SH_PIO_NACK_RESET_PULSE_SHFT 0
+#define SH_PIO_NACK_RESET_PULSE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_CONVEYOR_BELT_TIME_OUT" */
+/* Local Block conveyor belt time out value */
+/* ==================================================================== */
+
+#define SH_CONVEYOR_BELT_TIME_OUT 0x0000000110040300
+#define SH_CONVEYOR_BELT_TIME_OUT_MASK 0x0000000000000fff
+#define SH_CONVEYOR_BELT_TIME_OUT_INIT 0x0000000000000000
+
+/* SH_CONVEYOR_BELT_TIME_OUT_VALUE */
+/* Description: Conveyor belt time out value */
+#define SH_CONVEYOR_BELT_TIME_OUT_VALUE_SHFT 0
+#define SH_CONVEYOR_BELT_TIME_OUT_VALUE_MASK 0x0000000000000fff
+
+/* ==================================================================== */
+/* Register "SH_LB_CREDIT_STATUS" */
+/* Credit Counter Status Register */
+/* ==================================================================== */
+
+#define SH_LB_CREDIT_STATUS 0x0000000110050000
+#define SH_LB_CREDIT_STATUS_MASK 0x000000000ffff3df
+#define SH_LB_CREDIT_STATUS_INIT 0x0000000000000000
+
+/* SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT */
+/* Description: LIQ request queue credit counter */
+#define SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_SHFT 0
+#define SH_LB_CREDIT_STATUS_LIQ_RQ_CREDIT_MASK 0x000000000000001f
+
+/* SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT */
+/* Description: LIQ reply queue credit counter */
+#define SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT_SHFT 6
+#define SH_LB_CREDIT_STATUS_LIQ_RP_CREDIT_MASK 0x00000000000003c0
+
+/* SH_LB_CREDIT_STATUS_LINVV_CREDIT */
+/* Description: LINVV credit counter */
+#define SH_LB_CREDIT_STATUS_LINVV_CREDIT_SHFT 12
+#define SH_LB_CREDIT_STATUS_LINVV_CREDIT_MASK 0x000000000003f000
+
+/* SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT */
+/* Description: LOQ request queue credit counter */
+#define SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT_SHFT 18
+#define SH_LB_CREDIT_STATUS_LOQ_RQ_CREDIT_MASK 0x00000000007c0000
+
+/* SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT */
+/* Description: LOQ reply queue credit counter */
+#define SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT_SHFT 23
+#define SH_LB_CREDIT_STATUS_LOQ_RP_CREDIT_MASK 0x000000000f800000
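
A single read of SH_LB_CREDIT_STATUS returns all five credit counters packed into one 64-bit value; note the counters are not contiguous (LIQ_RQ ends at bit 4 but LIQ_RP starts at bit 6). A decoding sketch; `decode_credit_status` is a hypothetical helper, the actual register read (via a platform MMIO accessor) is assumed, and three of the field constants are repeated locally:

```c
#include <stdint.h>

/* Local copies of three SH_LB_CREDIT_STATUS field layouts. */
#define EX_LIQ_RQ_SHFT 0
#define EX_LIQ_RQ_MASK 0x000000000000001fULL
#define EX_LIQ_RP_SHFT 6
#define EX_LIQ_RP_MASK 0x00000000000003c0ULL
#define EX_LOQ_RQ_SHFT 18
#define EX_LOQ_RQ_MASK 0x00000000007c0000ULL

/* Unpack the credit counters from one raw register value.  The value
 * would normally come from an MMIO read of SH_LB_CREDIT_STATUS. */
static void decode_credit_status(uint64_t v, unsigned *liq_rq,
				 unsigned *liq_rp, unsigned *loq_rq)
{
	*liq_rq = (unsigned)((v & EX_LIQ_RQ_MASK) >> EX_LIQ_RQ_SHFT);
	*liq_rp = (unsigned)((v & EX_LIQ_RP_MASK) >> EX_LIQ_RP_SHFT);
	*loq_rq = (unsigned)((v & EX_LOQ_RQ_MASK) >> EX_LOQ_RQ_SHFT);
}
```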
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_LOCAL_SEL" */
+/* LB Debug Port Select */
+/* ==================================================================== */
+
+#define SH_LB_DEBUG_LOCAL_SEL 0x0000000110050080
+#define SH_LB_DEBUG_LOCAL_SEL_MASK 0xf777777777777777
+#define SH_LB_DEBUG_LOCAL_SEL_INIT 0x0000000000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_LB_DEBUG_LOCAL_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE_SHFT 63
+#define SH_LB_DEBUG_LOCAL_SEL_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_PERF_SEL" */
+/* LB Debug Port Performance Select */
+/* ==================================================================== */
+
+#define SH_LB_DEBUG_PERF_SEL 0x0000000110050100
+#define SH_LB_DEBUG_PERF_SEL_MASK 0x7777777777777777
+#define SH_LB_DEBUG_PERF_SEL_INIT 0x0000000000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_LB_DEBUG_PERF_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_TRIG_SEL" */
+/* LB Debug Trigger Select */
+/* ==================================================================== */
+
+#define SH_LB_DEBUG_TRIG_SEL 0x0000000110050180
+#define SH_LB_DEBUG_TRIG_SEL_MASK 0x7777777777777777
+#define SH_LB_DEBUG_TRIG_SEL_INIT 0x0000000000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL */
+/* Description: Trigger 0 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL_SHFT 0
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL */
+/* Description: Trigger 0 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL */
+/* Description: Trigger 1 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL_SHFT 8
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL */
+/* Description: Trigger 1 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL */
+/* Description: Trigger 2 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL_SHFT 16
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL */
+/* Description: Trigger 2 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL */
+/* Description: Trigger 3 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL_SHFT 24
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL */
+/* Description: Trigger 3 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL */
+/* Description: Trigger 4 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL_SHFT 32
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL */
+/* Description: Trigger 4 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL */
+/* Description: Trigger 5 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL_SHFT 40
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL */
+/* Description: Trigger 5 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL */
+/* Description: Trigger 6 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL_SHFT 48
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL */
+/* Description: Trigger 6 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL */
+/* Description: Trigger 7 Chiplet select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL_SHFT 56
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL */
+/* Description: Trigger 7 Nibble select */
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_LB_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_1" */
+/* LB Error capture information: HDR1 */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_DETAIL_1 0x0000000110050200
+#define SH_LB_ERROR_DETAIL_1_MASK 0x8003073fff3fffff
+#define SH_LB_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_DETAIL_1_COMMAND */
+/* Description: COMMAND */
+#define SH_LB_ERROR_DETAIL_1_COMMAND_SHFT 0
+#define SH_LB_ERROR_DETAIL_1_COMMAND_MASK 0x00000000000000ff
+
+/* SH_LB_ERROR_DETAIL_1_SUPPL */
+/* Description: SUPPLEMENTAL */
+#define SH_LB_ERROR_DETAIL_1_SUPPL_SHFT 8
+#define SH_LB_ERROR_DETAIL_1_SUPPL_MASK 0x00000000003fff00
+
+/* SH_LB_ERROR_DETAIL_1_SOURCE */
+/* Description: SOURCE */
+#define SH_LB_ERROR_DETAIL_1_SOURCE_SHFT 24
+#define SH_LB_ERROR_DETAIL_1_SOURCE_MASK 0x0000003fff000000
+
+/* SH_LB_ERROR_DETAIL_1_DEST */
+/* Description: DEST */
+#define SH_LB_ERROR_DETAIL_1_DEST_SHFT 40
+#define SH_LB_ERROR_DETAIL_1_DEST_MASK 0x0000070000000000
+
+/* SH_LB_ERROR_DETAIL_1_HDR_ERR */
+/* Description: HDR_ERR */
+#define SH_LB_ERROR_DETAIL_1_HDR_ERR_SHFT 48
+#define SH_LB_ERROR_DETAIL_1_HDR_ERR_MASK 0x0001000000000000
+
+/* SH_LB_ERROR_DETAIL_1_DATA_ERR */
+/* Description: DATA_ERR */
+#define SH_LB_ERROR_DETAIL_1_DATA_ERR_SHFT 49
+#define SH_LB_ERROR_DETAIL_1_DATA_ERR_MASK 0x0002000000000000
+
+/* SH_LB_ERROR_DETAIL_1_VALID */
+/* Description: VALID */
+#define SH_LB_ERROR_DETAIL_1_VALID_SHFT 63
+#define SH_LB_ERROR_DETAIL_1_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_2" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_DETAIL_2 0x0000000110050280
+#define SH_LB_ERROR_DETAIL_2_MASK 0x00007fffffffffff
+#define SH_LB_ERROR_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_DETAIL_2_ADDRESS */
+/* Description: ADDRESS */
+#define SH_LB_ERROR_DETAIL_2_ADDRESS_SHFT 0
+#define SH_LB_ERROR_DETAIL_2_ADDRESS_MASK 0x00007fffffffffff
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_3" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_DETAIL_3 0x0000000110050300
+#define SH_LB_ERROR_DETAIL_3_MASK 0xffffffffffffffff
+#define SH_LB_ERROR_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_DETAIL_3_DATA */
+/* Description: DATA */
+#define SH_LB_ERROR_DETAIL_3_DATA_SHFT 0
+#define SH_LB_ERROR_DETAIL_3_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_4" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_DETAIL_4 0x0000000110050380
+#define SH_LB_ERROR_DETAIL_4_MASK 0xffffffffffffffff
+#define SH_LB_ERROR_DETAIL_4_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_DETAIL_4_ROUTE */
+/* Description: ROUTE */
+#define SH_LB_ERROR_DETAIL_4_ROUTE_SHFT 0
+#define SH_LB_ERROR_DETAIL_4_ROUTE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_5" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_DETAIL_5 0x0000000110050400
+#define SH_LB_ERROR_DETAIL_5_MASK 0x000000000000007f
+#define SH_LB_ERROR_DETAIL_5_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_DETAIL_5_READ_RETRY */
+/* Description: Read retry error */
+#define SH_LB_ERROR_DETAIL_5_READ_RETRY_SHFT 0
+#define SH_LB_ERROR_DETAIL_5_READ_RETRY_MASK 0x0000000000000001
+
+/* SH_LB_ERROR_DETAIL_5_PTC1_WRITE */
+/* Description: PTC1 write error */
+#define SH_LB_ERROR_DETAIL_5_PTC1_WRITE_SHFT 1
+#define SH_LB_ERROR_DETAIL_5_PTC1_WRITE_MASK 0x0000000000000002
+
+/* SH_LB_ERROR_DETAIL_5_WRITE_RETRY */
+/* Description: Write retry error */
+#define SH_LB_ERROR_DETAIL_5_WRITE_RETRY_SHFT 2
+#define SH_LB_ERROR_DETAIL_5_WRITE_RETRY_MASK 0x0000000000000004
+
+/* SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW */
+/* Description: Nack A counter overflow error */
+#define SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW_SHFT 3
+#define SH_LB_ERROR_DETAIL_5_COUNT_A_OVERFLOW_MASK 0x0000000000000008
+
+/* SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW */
+/* Description: Nack B counter overflow error */
+#define SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW_SHFT 4
+#define SH_LB_ERROR_DETAIL_5_COUNT_B_OVERFLOW_MASK 0x0000000000000010
+
+/* SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT */
+/* Description: Nack A counter timeout error */
+#define SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT_SHFT 5
+#define SH_LB_ERROR_DETAIL_5_NACK_A_TIMEOUT_MASK 0x0000000000000020
+
+/* SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT */
+/* Description: Nack B counter timeout error */
+#define SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT_SHFT 6
+#define SH_LB_ERROR_DETAIL_5_NACK_B_TIMEOUT_MASK 0x0000000000000040
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_MASK" */
+/* LB Error Mask */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_MASK 0x0000000110050480
+#define SH_LB_ERROR_MASK_MASK 0x00000000007fffff
+#define SH_LB_ERROR_MASK_INIT 0x00000000007fffff
+
+/* SH_LB_ERROR_MASK_RQ_BAD_CMD */
+/* Description: RQ_BAD_CMD */
+#define SH_LB_ERROR_MASK_RQ_BAD_CMD_SHFT 0
+#define SH_LB_ERROR_MASK_RQ_BAD_CMD_MASK 0x0000000000000001
+
+/* SH_LB_ERROR_MASK_RP_BAD_CMD */
+/* Description: RP_BAD_CMD */
+#define SH_LB_ERROR_MASK_RP_BAD_CMD_SHFT 1
+#define SH_LB_ERROR_MASK_RP_BAD_CMD_MASK 0x0000000000000002
+
+/* SH_LB_ERROR_MASK_RQ_SHORT */
+/* Description: RQ_SHORT */
+#define SH_LB_ERROR_MASK_RQ_SHORT_SHFT 2
+#define SH_LB_ERROR_MASK_RQ_SHORT_MASK 0x0000000000000004
+
+/* SH_LB_ERROR_MASK_RP_SHORT */
+/* Description: RP_SHORT */
+#define SH_LB_ERROR_MASK_RP_SHORT_SHFT 3
+#define SH_LB_ERROR_MASK_RP_SHORT_MASK 0x0000000000000008
+
+/* SH_LB_ERROR_MASK_RQ_LONG */
+/* Description: RQ_LONG */
+#define SH_LB_ERROR_MASK_RQ_LONG_SHFT 4
+#define SH_LB_ERROR_MASK_RQ_LONG_MASK 0x0000000000000010
+
+/* SH_LB_ERROR_MASK_RP_LONG */
+/* Description: RP_LONG */
+#define SH_LB_ERROR_MASK_RP_LONG_SHFT 5
+#define SH_LB_ERROR_MASK_RP_LONG_MASK 0x0000000000000020
+
+/* SH_LB_ERROR_MASK_RQ_BAD_DATA */
+/* Description: RQ_BAD_DATA */
+#define SH_LB_ERROR_MASK_RQ_BAD_DATA_SHFT 6
+#define SH_LB_ERROR_MASK_RQ_BAD_DATA_MASK 0x0000000000000040
+
+/* SH_LB_ERROR_MASK_RP_BAD_DATA */
+/* Description: RP_BAD_DATA */
+#define SH_LB_ERROR_MASK_RP_BAD_DATA_SHFT 7
+#define SH_LB_ERROR_MASK_RP_BAD_DATA_MASK 0x0000000000000080
+
+/* SH_LB_ERROR_MASK_RQ_BAD_ADDR */
+/* Description: RQ_BAD_ADDR */
+#define SH_LB_ERROR_MASK_RQ_BAD_ADDR_SHFT 8
+#define SH_LB_ERROR_MASK_RQ_BAD_ADDR_MASK 0x0000000000000100
+
+/* SH_LB_ERROR_MASK_RQ_TIME_OUT */
+/* Description: RQ_TIME_OUT */
+#define SH_LB_ERROR_MASK_RQ_TIME_OUT_SHFT 9
+#define SH_LB_ERROR_MASK_RQ_TIME_OUT_MASK 0x0000000000000200
+
+/* SH_LB_ERROR_MASK_LINVV_OVERFLOW */
+/* Description: LINVV_OVERFLOW */
+#define SH_LB_ERROR_MASK_LINVV_OVERFLOW_SHFT 10
+#define SH_LB_ERROR_MASK_LINVV_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_LB_ERROR_MASK_UNEXPECTED_LINV */
+/* Description: UNEXPECTED_LINV */
+#define SH_LB_ERROR_MASK_UNEXPECTED_LINV_SHFT 11
+#define SH_LB_ERROR_MASK_UNEXPECTED_LINV_MASK 0x0000000000000800
+
+/* SH_LB_ERROR_MASK_PTC_1_TIMEOUT */
+/* Description: PTC_1 Time out */
+#define SH_LB_ERROR_MASK_PTC_1_TIMEOUT_SHFT 12
+#define SH_LB_ERROR_MASK_PTC_1_TIMEOUT_MASK 0x0000000000001000
+
+/* SH_LB_ERROR_MASK_JUNK_BUS_ERR */
+/* Description: Junk Bus error */
+#define SH_LB_ERROR_MASK_JUNK_BUS_ERR_SHFT 13
+#define SH_LB_ERROR_MASK_JUNK_BUS_ERR_MASK 0x0000000000002000
+
+/* SH_LB_ERROR_MASK_PIO_CB_ERR */
+/* Description: PIO Conveyor Belt operation error */
+#define SH_LB_ERROR_MASK_PIO_CB_ERR_SHFT 14
+#define SH_LB_ERROR_MASK_PIO_CB_ERR_MASK 0x0000000000004000
+
+/* SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR */
+/* Description: Vector request Route data was invalid */
+#define SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR_SHFT 15
+#define SH_LB_ERROR_MASK_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000
+
+/* SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR */
+/* Description: Vector reply Route data was invalid */
+#define SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR_SHFT 16
+#define SH_LB_ERROR_MASK_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000
+
+/* SH_LB_ERROR_MASK_GCLK_DROP */
+/* Description: Gclk drop error */
+#define SH_LB_ERROR_MASK_GCLK_DROP_SHFT 17
+#define SH_LB_ERROR_MASK_GCLK_DROP_MASK 0x0000000000020000
+
+/* SH_LB_ERROR_MASK_RQ_FIFO_ERROR */
+/* Description: Request queue FIFO error */
+#define SH_LB_ERROR_MASK_RQ_FIFO_ERROR_SHFT 18
+#define SH_LB_ERROR_MASK_RQ_FIFO_ERROR_MASK 0x0000000000040000
+
+/* SH_LB_ERROR_MASK_RP_FIFO_ERROR */
+/* Description: Reply queue FIFO error */
+#define SH_LB_ERROR_MASK_RP_FIFO_ERROR_SHFT 19
+#define SH_LB_ERROR_MASK_RP_FIFO_ERROR_MASK 0x0000000000080000
+
+/* SH_LB_ERROR_MASK_UNEXP_VALID */
+/* Description: Unexpected valid error */
+#define SH_LB_ERROR_MASK_UNEXP_VALID_SHFT 20
+#define SH_LB_ERROR_MASK_UNEXP_VALID_MASK 0x0000000000100000
+
+/* SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW */
+/* Description: Request queue credit overflow */
+#define SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW_SHFT 21
+#define SH_LB_ERROR_MASK_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000
+
+/* SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW */
+/* Description: Reply queue credit overflow */
+#define SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW_SHFT 22
+#define SH_LB_ERROR_MASK_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_OVERFLOW" */
+/* LB Error Overflow */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_OVERFLOW 0x0000000110050500
+#define SH_LB_ERROR_OVERFLOW_MASK 0x00000000007fffff
+#define SH_LB_ERROR_OVERFLOW_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL */
+/* Description: RQ_BAD_CMD_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL_SHFT 0
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_CMD_OVRFL_MASK 0x0000000000000001
+
+/* SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL */
+/* Description: RP_BAD_CMD_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL_SHFT 1
+#define SH_LB_ERROR_OVERFLOW_RP_BAD_CMD_OVRFL_MASK 0x0000000000000002
+
+/* SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL */
+/* Description: RQ_SHORT_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL_SHFT 2
+#define SH_LB_ERROR_OVERFLOW_RQ_SHORT_OVRFL_MASK 0x0000000000000004
+
+/* SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL */
+/* Description: RP_SHORT_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL_SHFT 3
+#define SH_LB_ERROR_OVERFLOW_RP_SHORT_OVRFL_MASK 0x0000000000000008
+
+/* SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL */
+/* Description: RQ_LONG_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL_SHFT 4
+#define SH_LB_ERROR_OVERFLOW_RQ_LONG_OVRFL_MASK 0x0000000000000010
+
+/* SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL */
+/* Description: RP_LONG_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL_SHFT 5
+#define SH_LB_ERROR_OVERFLOW_RP_LONG_OVRFL_MASK 0x0000000000000020
+
+/* SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL */
+/* Description: RQ_BAD_DATA_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL_SHFT 6
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_DATA_OVRFL_MASK 0x0000000000000040
+
+/* SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL */
+/* Description: RP_BAD_DATA_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL_SHFT 7
+#define SH_LB_ERROR_OVERFLOW_RP_BAD_DATA_OVRFL_MASK 0x0000000000000080
+
+/* SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL */
+/* Description: RQ_BAD_ADDR_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL_SHFT 8
+#define SH_LB_ERROR_OVERFLOW_RQ_BAD_ADDR_OVRFL_MASK 0x0000000000000100
+
+/* SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL */
+/* Description: RQ_TIME_OUT_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL_SHFT 9
+#define SH_LB_ERROR_OVERFLOW_RQ_TIME_OUT_OVRFL_MASK 0x0000000000000200
+
+/* SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL */
+/* Description: LINVV_OVERFLOW_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL_SHFT 10
+#define SH_LB_ERROR_OVERFLOW_LINVV_OVERFLOW_OVRFL_MASK 0x0000000000000400
+
+/* SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL */
+/* Description: UNEXPECTED_LINV_OVRFL */
+#define SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL_SHFT 11
+#define SH_LB_ERROR_OVERFLOW_UNEXPECTED_LINV_OVRFL_MASK 0x0000000000000800
+
+/* SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL */
+/* Description: PTC_1 Time out overflow */
+#define SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL_SHFT 12
+#define SH_LB_ERROR_OVERFLOW_PTC_1_TIMEOUT_OVRFL_MASK 0x0000000000001000
+
+/* SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL */
+/* Description: Junk Bus error overflow */
+#define SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL_SHFT 13
+#define SH_LB_ERROR_OVERFLOW_JUNK_BUS_ERR_OVRFL_MASK 0x0000000000002000
+
+/* SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL */
+/* Description: PIO Conveyor Belt operation error overflow */
+#define SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL_SHFT 14
+#define SH_LB_ERROR_OVERFLOW_PIO_CB_ERR_OVRFL_MASK 0x0000000000004000
+
+/* SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL */
+/* Description: Vector request Route data was invalid overflow */
+#define SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL_SHFT 15
+#define SH_LB_ERROR_OVERFLOW_VECTOR_RQ_ROUTE_ERROR_OVRFL_MASK 0x0000000000008000
+
+/* SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL */
+/* Description: Vector reply Route data was invalid overflow */
+#define SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL_SHFT 16
+#define SH_LB_ERROR_OVERFLOW_VECTOR_RP_ROUTE_ERROR_OVRFL_MASK 0x0000000000010000
+
+/* SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL */
+/* Description: Gclk drop error overflow */
+#define SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL_SHFT 17
+#define SH_LB_ERROR_OVERFLOW_GCLK_DROP_OVRFL_MASK 0x0000000000020000
+
+/* SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL */
+/* Description: Request queue FIFO error overflow */
+#define SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL_SHFT 18
+#define SH_LB_ERROR_OVERFLOW_RQ_FIFO_ERROR_OVRFL_MASK 0x0000000000040000
+
+/* SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL */
+/* Description: Reply queue FIFO error overflow */
+#define SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL_SHFT 19
+#define SH_LB_ERROR_OVERFLOW_RP_FIFO_ERROR_OVRFL_MASK 0x0000000000080000
+
+/* SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL */
+/* Description: Unexpected valid error overflow */
+#define SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL_SHFT 20
+#define SH_LB_ERROR_OVERFLOW_UNEXP_VALID_OVRFL_MASK 0x0000000000100000
+
+/* SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL */
+/* Description: Request queue credit overflow */
+#define SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL_SHFT 21
+#define SH_LB_ERROR_OVERFLOW_RQ_CREDIT_OVERFLOW_OVRFL_MASK 0x0000000000200000
+
+/* SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL */
+/* Description: Reply queue credit overflow */
+#define SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL_SHFT 22
+#define SH_LB_ERROR_OVERFLOW_RP_CREDIT_OVERFLOW_OVRFL_MASK 0x0000000000400000
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_OVERFLOW_ALIAS" */
+/* LB Error Overflow */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_OVERFLOW_ALIAS 0x0000000110050508
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_SUMMARY" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_SUMMARY 0x0000000110050580
+#define SH_LB_ERROR_SUMMARY_MASK 0x00000000007fffff
+#define SH_LB_ERROR_SUMMARY_INIT 0x0000000000000000
+
+/* SH_LB_ERROR_SUMMARY_RQ_BAD_CMD */
+/* Description: RQ_BAD_CMD */
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_CMD_SHFT 0
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_CMD_MASK 0x0000000000000001
+
+/* SH_LB_ERROR_SUMMARY_RP_BAD_CMD */
+/* Description: RP_BAD_CMD */
+#define SH_LB_ERROR_SUMMARY_RP_BAD_CMD_SHFT 1
+#define SH_LB_ERROR_SUMMARY_RP_BAD_CMD_MASK 0x0000000000000002
+
+/* SH_LB_ERROR_SUMMARY_RQ_SHORT */
+/* Description: RQ_SHORT */
+#define SH_LB_ERROR_SUMMARY_RQ_SHORT_SHFT 2
+#define SH_LB_ERROR_SUMMARY_RQ_SHORT_MASK 0x0000000000000004
+
+/* SH_LB_ERROR_SUMMARY_RP_SHORT */
+/* Description: RP_SHORT */
+#define SH_LB_ERROR_SUMMARY_RP_SHORT_SHFT 3
+#define SH_LB_ERROR_SUMMARY_RP_SHORT_MASK 0x0000000000000008
+
+/* SH_LB_ERROR_SUMMARY_RQ_LONG */
+/* Description: RQ_LONG */
+#define SH_LB_ERROR_SUMMARY_RQ_LONG_SHFT 4
+#define SH_LB_ERROR_SUMMARY_RQ_LONG_MASK 0x0000000000000010
+
+/* SH_LB_ERROR_SUMMARY_RP_LONG */
+/* Description: RP_LONG */
+#define SH_LB_ERROR_SUMMARY_RP_LONG_SHFT 5
+#define SH_LB_ERROR_SUMMARY_RP_LONG_MASK 0x0000000000000020
+
+/* SH_LB_ERROR_SUMMARY_RQ_BAD_DATA */
+/* Description: RQ_BAD_DATA */
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_DATA_SHFT 6
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_DATA_MASK 0x0000000000000040
+
+/* SH_LB_ERROR_SUMMARY_RP_BAD_DATA */
+/* Description: RP_BAD_DATA */
+#define SH_LB_ERROR_SUMMARY_RP_BAD_DATA_SHFT 7
+#define SH_LB_ERROR_SUMMARY_RP_BAD_DATA_MASK 0x0000000000000080
+
+/* SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR */
+/* Description: RQ_BAD_ADDR */
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR_SHFT 8
+#define SH_LB_ERROR_SUMMARY_RQ_BAD_ADDR_MASK 0x0000000000000100
+
+/* SH_LB_ERROR_SUMMARY_RQ_TIME_OUT */
+/* Description: RQ_TIME_OUT */
+#define SH_LB_ERROR_SUMMARY_RQ_TIME_OUT_SHFT 9
+#define SH_LB_ERROR_SUMMARY_RQ_TIME_OUT_MASK 0x0000000000000200
+
+/* SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW */
+/* Description: LINVV_OVERFLOW */
+#define SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW_SHFT 10
+#define SH_LB_ERROR_SUMMARY_LINVV_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV */
+/* Description: UNEXPECTED_LINV */
+#define SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV_SHFT 11
+#define SH_LB_ERROR_SUMMARY_UNEXPECTED_LINV_MASK 0x0000000000000800
+
+/* SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT */
+/* Description: PTC_1 Time out */
+#define SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT_SHFT 12
+#define SH_LB_ERROR_SUMMARY_PTC_1_TIMEOUT_MASK 0x0000000000001000
+
+/* SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR */
+/* Description: Junk Bus error */
+#define SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR_SHFT 13
+#define SH_LB_ERROR_SUMMARY_JUNK_BUS_ERR_MASK 0x0000000000002000
+
+/* SH_LB_ERROR_SUMMARY_PIO_CB_ERR */
+/* Description: PIO Conveyor Belt operation error */
+#define SH_LB_ERROR_SUMMARY_PIO_CB_ERR_SHFT 14
+#define SH_LB_ERROR_SUMMARY_PIO_CB_ERR_MASK 0x0000000000004000
+
+/* SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR */
+/* Description: Vector request Route data was invalid */
+#define SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR_SHFT 15
+#define SH_LB_ERROR_SUMMARY_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000
+
+/* SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR */
+/* Description: Vector reply Route data was invalid */
+#define SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR_SHFT 16
+#define SH_LB_ERROR_SUMMARY_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000
+
+/* SH_LB_ERROR_SUMMARY_GCLK_DROP */
+/* Description: Gclk drop error */
+#define SH_LB_ERROR_SUMMARY_GCLK_DROP_SHFT 17
+#define SH_LB_ERROR_SUMMARY_GCLK_DROP_MASK 0x0000000000020000
+
+/* SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR */
+/* Description: Request queue FIFO error */
+#define SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR_SHFT 18
+#define SH_LB_ERROR_SUMMARY_RQ_FIFO_ERROR_MASK 0x0000000000040000
+
+/* SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR */
+/* Description: Reply queue FIFO error */
+#define SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR_SHFT 19
+#define SH_LB_ERROR_SUMMARY_RP_FIFO_ERROR_MASK 0x0000000000080000
+
+/* SH_LB_ERROR_SUMMARY_UNEXP_VALID */
+/* Description: Unexpected valid error */
+#define SH_LB_ERROR_SUMMARY_UNEXP_VALID_SHFT 20
+#define SH_LB_ERROR_SUMMARY_UNEXP_VALID_MASK 0x0000000000100000
+
+/* SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW */
+/* Description: Request queue credit overflow */
+#define SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW_SHFT 21
+#define SH_LB_ERROR_SUMMARY_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000
+
+/* SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW */
+/* Description: Reply queue credit overflow */
+#define SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW_SHFT 22
+#define SH_LB_ERROR_SUMMARY_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_SUMMARY_ALIAS" */
+/* LB Error Bits Alias */
+/* ==================================================================== */
+
+#define SH_LB_ERROR_SUMMARY_ALIAS 0x0000000110050588
+
+/* ==================================================================== */
+/* Register "SH_LB_FIRST_ERROR" */
+/* LB First Error */
+/* ==================================================================== */
+
+#define SH_LB_FIRST_ERROR 0x0000000110050600
+#define SH_LB_FIRST_ERROR_MASK 0x00000000007fffff
+#define SH_LB_FIRST_ERROR_INIT 0x0000000000000000
+
+/* SH_LB_FIRST_ERROR_RQ_BAD_CMD */
+/* Description: RQ_BAD_CMD */
+#define SH_LB_FIRST_ERROR_RQ_BAD_CMD_SHFT 0
+#define SH_LB_FIRST_ERROR_RQ_BAD_CMD_MASK 0x0000000000000001
+
+/* SH_LB_FIRST_ERROR_RP_BAD_CMD */
+/* Description: RP_BAD_CMD */
+#define SH_LB_FIRST_ERROR_RP_BAD_CMD_SHFT 1
+#define SH_LB_FIRST_ERROR_RP_BAD_CMD_MASK 0x0000000000000002
+
+/* SH_LB_FIRST_ERROR_RQ_SHORT */
+/* Description: RQ_SHORT */
+#define SH_LB_FIRST_ERROR_RQ_SHORT_SHFT 2
+#define SH_LB_FIRST_ERROR_RQ_SHORT_MASK 0x0000000000000004
+
+/* SH_LB_FIRST_ERROR_RP_SHORT */
+/* Description: RP_SHORT */
+#define SH_LB_FIRST_ERROR_RP_SHORT_SHFT 3
+#define SH_LB_FIRST_ERROR_RP_SHORT_MASK 0x0000000000000008
+
+/* SH_LB_FIRST_ERROR_RQ_LONG */
+/* Description: RQ_LONG */
+#define SH_LB_FIRST_ERROR_RQ_LONG_SHFT 4
+#define SH_LB_FIRST_ERROR_RQ_LONG_MASK 0x0000000000000010
+
+/* SH_LB_FIRST_ERROR_RP_LONG */
+/* Description: RP_LONG */
+#define SH_LB_FIRST_ERROR_RP_LONG_SHFT 5
+#define SH_LB_FIRST_ERROR_RP_LONG_MASK 0x0000000000000020
+
+/* SH_LB_FIRST_ERROR_RQ_BAD_DATA */
+/* Description: RQ_BAD_DATA */
+#define SH_LB_FIRST_ERROR_RQ_BAD_DATA_SHFT 6
+#define SH_LB_FIRST_ERROR_RQ_BAD_DATA_MASK 0x0000000000000040
+
+/* SH_LB_FIRST_ERROR_RP_BAD_DATA */
+/* Description: RP_BAD_DATA */
+#define SH_LB_FIRST_ERROR_RP_BAD_DATA_SHFT 7
+#define SH_LB_FIRST_ERROR_RP_BAD_DATA_MASK 0x0000000000000080
+
+/* SH_LB_FIRST_ERROR_RQ_BAD_ADDR */
+/* Description: RQ_BAD_ADDR */
+#define SH_LB_FIRST_ERROR_RQ_BAD_ADDR_SHFT 8
+#define SH_LB_FIRST_ERROR_RQ_BAD_ADDR_MASK 0x0000000000000100
+
+/* SH_LB_FIRST_ERROR_RQ_TIME_OUT */
+/* Description: RQ_TIME_OUT */
+#define SH_LB_FIRST_ERROR_RQ_TIME_OUT_SHFT 9
+#define SH_LB_FIRST_ERROR_RQ_TIME_OUT_MASK 0x0000000000000200
+
+/* SH_LB_FIRST_ERROR_LINVV_OVERFLOW */
+/* Description: LINVV_OVERFLOW */
+#define SH_LB_FIRST_ERROR_LINVV_OVERFLOW_SHFT 10
+#define SH_LB_FIRST_ERROR_LINVV_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_LB_FIRST_ERROR_UNEXPECTED_LINV */
+/* Description: UNEXPECTED_LINV */
+#define SH_LB_FIRST_ERROR_UNEXPECTED_LINV_SHFT 11
+#define SH_LB_FIRST_ERROR_UNEXPECTED_LINV_MASK 0x0000000000000800
+
+/* SH_LB_FIRST_ERROR_PTC_1_TIMEOUT */
+/* Description: PTC_1 Time out */
+#define SH_LB_FIRST_ERROR_PTC_1_TIMEOUT_SHFT 12
+#define SH_LB_FIRST_ERROR_PTC_1_TIMEOUT_MASK 0x0000000000001000
+
+/* SH_LB_FIRST_ERROR_JUNK_BUS_ERR */
+/* Description: Junk Bus error */
+#define SH_LB_FIRST_ERROR_JUNK_BUS_ERR_SHFT 13
+#define SH_LB_FIRST_ERROR_JUNK_BUS_ERR_MASK 0x0000000000002000
+
+/* SH_LB_FIRST_ERROR_PIO_CB_ERR */
+/* Description: PIO Conveyor Belt operation error */
+#define SH_LB_FIRST_ERROR_PIO_CB_ERR_SHFT 14
+#define SH_LB_FIRST_ERROR_PIO_CB_ERR_MASK 0x0000000000004000
+
+/* SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR */
+/* Description: Vector request Route data was invalid */
+#define SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR_SHFT 15
+#define SH_LB_FIRST_ERROR_VECTOR_RQ_ROUTE_ERROR_MASK 0x0000000000008000
+
+/* SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR */
+/* Description: Vector reply Route data was invalid */
+#define SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR_SHFT 16
+#define SH_LB_FIRST_ERROR_VECTOR_RP_ROUTE_ERROR_MASK 0x0000000000010000
+
+/* SH_LB_FIRST_ERROR_GCLK_DROP */
+/* Description: Gclk drop error */
+#define SH_LB_FIRST_ERROR_GCLK_DROP_SHFT 17
+#define SH_LB_FIRST_ERROR_GCLK_DROP_MASK 0x0000000000020000
+
+/* SH_LB_FIRST_ERROR_RQ_FIFO_ERROR */
+/* Description: Request queue FIFO error */
+#define SH_LB_FIRST_ERROR_RQ_FIFO_ERROR_SHFT 18
+#define SH_LB_FIRST_ERROR_RQ_FIFO_ERROR_MASK 0x0000000000040000
+
+/* SH_LB_FIRST_ERROR_RP_FIFO_ERROR */
+/* Description: Reply queue FIFO error */
+#define SH_LB_FIRST_ERROR_RP_FIFO_ERROR_SHFT 19
+#define SH_LB_FIRST_ERROR_RP_FIFO_ERROR_MASK 0x0000000000080000
+
+/* SH_LB_FIRST_ERROR_UNEXP_VALID */
+/* Description: Unexpected valid error */
+#define SH_LB_FIRST_ERROR_UNEXP_VALID_SHFT 20
+#define SH_LB_FIRST_ERROR_UNEXP_VALID_MASK 0x0000000000100000
+
+/* SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW */
+/* Description: Request queue credit overflow */
+#define SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW_SHFT 21
+#define SH_LB_FIRST_ERROR_RQ_CREDIT_OVERFLOW_MASK 0x0000000000200000
+
+/* SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW */
+/* Description: Reply queue credit overflow */
+#define SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW_SHFT 22
+#define SH_LB_FIRST_ERROR_RP_CREDIT_OVERFLOW_MASK 0x0000000000400000
+
+/* ==================================================================== */
+/* Register "SH_LB_LAST_CREDIT" */
+/* Credit counter status register */
+/* ==================================================================== */
+
+#define SH_LB_LAST_CREDIT 0x0000000110050680
+#define SH_LB_LAST_CREDIT_MASK 0x000000000ffff3df
+#define SH_LB_LAST_CREDIT_INIT 0x0000000000000000
+
+/* SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT */
+/* Description: LIQ request queue credit counter */
+#define SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT_SHFT 0
+#define SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT_MASK 0x000000000000001f
+
+/* SH_LB_LAST_CREDIT_LIQ_RP_CREDIT */
+/* Description: LIQ reply queue credit counter */
+#define SH_LB_LAST_CREDIT_LIQ_RP_CREDIT_SHFT 6
+#define SH_LB_LAST_CREDIT_LIQ_RP_CREDIT_MASK 0x00000000000003c0
+
+/* SH_LB_LAST_CREDIT_LINVV_CREDIT */
+/* Description: LINVV credit counter */
+#define SH_LB_LAST_CREDIT_LINVV_CREDIT_SHFT 12
+#define SH_LB_LAST_CREDIT_LINVV_CREDIT_MASK 0x000000000003f000
+
+/* SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT */
+/* Description: LOQ request queue credit counter */
+#define SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT_SHFT 18
+#define SH_LB_LAST_CREDIT_LOQ_RQ_CREDIT_MASK 0x00000000007c0000
+
+/* SH_LB_LAST_CREDIT_LOQ_RP_CREDIT */
+/* Description: LOQ reply queue credit counter */
+#define SH_LB_LAST_CREDIT_LOQ_RP_CREDIT_SHFT 23
+#define SH_LB_LAST_CREDIT_LOQ_RP_CREDIT_MASK 0x000000000f800000
+
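+/*
+ * Usage sketch (illustrative only, not part of the hardware spec): each
+ * field above is described by a _SHFT/_MASK pair, and a field value is
+ * extracted from a raw 64-bit register read by masking first, then
+ * shifting down to bit 0. SH_FIELD_GET is a hypothetical helper, not
+ * defined elsewhere in this header:
+ *
+ *	#define SH_FIELD_GET(val, name) \
+ *		(((val) & name##_MASK) >> name##_SHFT)
+ *
+ *	uint64_t last_credit;	// raw read of SH_LB_LAST_CREDIT
+ *	uint64_t liq_rq = SH_FIELD_GET(last_credit,
+ *				       SH_LB_LAST_CREDIT_LIQ_RQ_CREDIT);
+ */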
+/* ==================================================================== */
+/* Register "SH_LB_NACK_STATUS" */
+/* Nack Counter Status Register */
+/* ==================================================================== */
+
+#define SH_LB_NACK_STATUS 0x0000000110050700
+#define SH_LB_NACK_STATUS_MASK 0x3fffffff0fff0fff
+#define SH_LB_NACK_STATUS_INIT 0x0000000000000000
+
+/* SH_LB_NACK_STATUS_PIO_NACK_A */
+/* Description: PIO nackA counter */
+#define SH_LB_NACK_STATUS_PIO_NACK_A_SHFT 0
+#define SH_LB_NACK_STATUS_PIO_NACK_A_MASK 0x0000000000000fff
+
+/* SH_LB_NACK_STATUS_PIO_NACK_B */
+/* Description: PIO nackB counter */
+#define SH_LB_NACK_STATUS_PIO_NACK_B_SHFT 16
+#define SH_LB_NACK_STATUS_PIO_NACK_B_MASK 0x000000000fff0000
+
+/* SH_LB_NACK_STATUS_JUNK_NACK */
+/* Description: Junk bus nack counter */
+#define SH_LB_NACK_STATUS_JUNK_NACK_SHFT 32
+#define SH_LB_NACK_STATUS_JUNK_NACK_MASK 0x0000ffff00000000
+
+/* SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT */
+/* Description: Conveyor belt time out counter */
+#define SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT_SHFT 48
+#define SH_LB_NACK_STATUS_CB_TIMEOUT_COUNT_MASK 0x0fff000000000000
+
+/* SH_LB_NACK_STATUS_CB_STATE */
+/* Description: Conveyor belt state */
+#define SH_LB_NACK_STATUS_CB_STATE_SHFT 60
+#define SH_LB_NACK_STATUS_CB_STATE_MASK 0x3000000000000000
+
+/* ==================================================================== */
+/* Register "SH_LB_TRIGGER_COMPARE" */
+/* LB Test-point Trigger Compare */
+/* ==================================================================== */
+
+#define SH_LB_TRIGGER_COMPARE 0x0000000110050780
+#define SH_LB_TRIGGER_COMPARE_MASK 0x00000000ffffffff
+#define SH_LB_TRIGGER_COMPARE_INIT 0x0000000000000000
+
+/* SH_LB_TRIGGER_COMPARE_MASK */
+/* Description: Mask to select Debug bits for trigger generation */
+#define SH_LB_TRIGGER_COMPARE_MASK_SHFT 0
+#define SH_LB_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_LB_TRIGGER_DATA" */
+/* LB Test-point Trigger Compare Data */
+/* ==================================================================== */
+
+#define SH_LB_TRIGGER_DATA 0x0000000110050800
+#define SH_LB_TRIGGER_DATA_MASK 0x00000000ffffffff
+#define SH_LB_TRIGGER_DATA_INIT 0x00000000ffffffff
+
+/* SH_LB_TRIGGER_DATA_COMPARE_PATTERN */
+/* Description: debug bit pattern for trigger generation */
+#define SH_LB_TRIGGER_DATA_COMPARE_PATTERN_SHFT 0
+#define SH_LB_TRIGGER_DATA_COMPARE_PATTERN_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_AEC_CONFIG" */
+/* PI Adaptive Error Correction Configuration */
+/* ==================================================================== */
+
+#define SH_PI_AEC_CONFIG 0x0000000120050000
+#define SH_PI_AEC_CONFIG_MASK 0x0000000000000007
+#define SH_PI_AEC_CONFIG_INIT 0x0000000000000000
+
+/* SH_PI_AEC_CONFIG_MODE */
+/* Description: AEC Operation Mode */
+#define SH_PI_AEC_CONFIG_MODE_SHFT 0
+#define SH_PI_AEC_CONFIG_MODE_MASK 0x0000000000000007
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_ERROR_MASK" */
+/* PI AFI Error Mask */
+/* ==================================================================== */
+
+#define SH_PI_AFI_ERROR_MASK 0x0000000120050080
+#define SH_PI_AFI_ERROR_MASK_MASK 0x00000007ffe00000
+#define SH_PI_AFI_ERROR_MASK_INIT 0x00000007ffe00000
+
+/* SH_PI_AFI_ERROR_MASK_HUNG_BUS */
+/* Description: FSB is hung */
+#define SH_PI_AFI_ERROR_MASK_HUNG_BUS_SHFT 21
+#define SH_PI_AFI_ERROR_MASK_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_AFI_ERROR_MASK_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_AFI_ERROR_MASK_RSP_PARITY_SHFT 22
+#define SH_PI_AFI_ERROR_MASK_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN */
+/* Description: Overrun error detected on IOQ */
+#define SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN_SHFT 23
+#define SH_PI_AFI_ERROR_MASK_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_AFI_ERROR_MASK_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_AFI_ERROR_MASK_REQ_FORMAT_SHFT 24
+#define SH_PI_AFI_ERROR_MASK_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_AFI_ERROR_MASK_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_AFI_ERROR_MASK_ADDR_ACCESS_SHFT 25
+#define SH_PI_AFI_ERROR_MASK_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_AFI_ERROR_MASK_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_AFI_ERROR_MASK_REQ_PARITY_SHFT 26
+#define SH_PI_AFI_ERROR_MASK_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_AFI_ERROR_MASK_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_AFI_ERROR_MASK_ADDR_PARITY_SHFT 27
+#define SH_PI_AFI_ERROR_MASK_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE */
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE_SHFT 30
+#define SH_PI_AFI_ERROR_MASK_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_AFI_ERROR_MASK_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_AFI_ERROR_MASK_LIVELOCK_SHFT 31
+#define SH_PI_AFI_ERROR_MASK_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_AFI_ERROR_MASK_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_AFI_ERROR_MASK_BAD_SNOOP_SHFT 32
+#define SH_PI_AFI_ERROR_MASK_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS_SHFT 33
+#define SH_PI_AFI_ERROR_MASK_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_AFI_ERROR_MASK_MSG_LEN */
+/* Description: Runt or Obese message received from SIC */
+#define SH_PI_AFI_ERROR_MASK_MSG_LEN_SHFT 34
+#define SH_PI_AFI_ERROR_MASK_MSG_LEN_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_COMPARE" */
+/* PI AFI Test Point Compare */
+/* ==================================================================== */
+
+#define SH_PI_AFI_TEST_POINT_COMPARE 0x0000000120050100
+#define SH_PI_AFI_TEST_POINT_COMPARE_MASK 0xffffffffffffffff
+#define SH_PI_AFI_TEST_POINT_COMPARE_INIT 0xffffffff00000000
+
+/* SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK */
+/* Description: Mask to select Debug bits for trigger generation */
+#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0
+#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN */
+/* Description: debug bit pattern for trigger generation */
+#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32
+#define SH_PI_AFI_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_SELECT" */
+/* PI AFI Test Point Select */
+/* ==================================================================== */
+
+#define SH_PI_AFI_TEST_POINT_SELECT 0x0000000120050180
+#define SH_PI_AFI_TEST_POINT_SELECT_MASK 0xff7f7f7f7f7f7f7f
+#define SH_PI_AFI_TEST_POINT_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x000000000000000f
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000f00
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x00000000000f0000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x000000000f000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000f00000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x00000f0000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x000f000000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7: Word Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0f00000000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7: Nibble Select */
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_PI_AFI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE */
+/* Description: Trigger Enabled */
+#define SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63
+#define SH_PI_AFI_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_TRIGGER_SELECT" */
+/* PI AFI Test Point Trigger Select */
+/* ==================================================================== */
+
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT 0x0000000120050200
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_MASK 0x7f7f7f7f7f7f7f7f
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x000000000000000f
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000f00
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x00000000000f0000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x000000000f000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000f00000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x00000f0000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x000f000000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0f00000000000000
+
+/* SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_PI_AFI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_AUTO_REPLY_ENABLE" */
+/* PI Auto Reply Enable */
+/* ==================================================================== */
+
+#define SH_PI_AUTO_REPLY_ENABLE 0x0000000120050280
+#define SH_PI_AUTO_REPLY_ENABLE_MASK 0x0000000000000001
+#define SH_PI_AUTO_REPLY_ENABLE_INIT 0x0000000000000000
+
+/* SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE */
+/* Description: Auto Reply Enabled */
+#define SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE_SHFT 0
+#define SH_PI_AUTO_REPLY_ENABLE_AUTO_REPLY_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_CONTROL" */
+/* CRB CAM MMR Access Control */
+/* ==================================================================== */
+
+#define SH_PI_CAM_CONTROL 0x0000000120050300
+#define SH_PI_CAM_CONTROL_MASK 0x800000000000037f
+#define SH_PI_CAM_CONTROL_INIT 0x0000000000000000
+
+/* SH_PI_CAM_CONTROL_CAM_INDX */
+/* Description: CRB CAM Index to perform read/write on. */
+#define SH_PI_CAM_CONTROL_CAM_INDX_SHFT 0
+#define SH_PI_CAM_CONTROL_CAM_INDX_MASK 0x000000000000007f
+
+/* SH_PI_CAM_CONTROL_CAM_WRITE */
+/* Description: Is CRB CAM MMR function a write. */
+#define SH_PI_CAM_CONTROL_CAM_WRITE_SHFT 8
+#define SH_PI_CAM_CONTROL_CAM_WRITE_MASK 0x0000000000000100
+
+/* SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR */
+/* Description: Clear RRB read transfer pending. */
+#define SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR_SHFT 9
+#define SH_PI_CAM_CONTROL_RRB_RD_XFER_CLEAR_MASK 0x0000000000000200
+
+/* SH_PI_CAM_CONTROL_START */
+/* Description: Start CRB CAM read/write operation */
+#define SH_PI_CAM_CONTROL_START_SHFT 63
+#define SH_PI_CAM_CONTROL_START_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_COMPARE" */
+/* PI CRBC Test Point Compare */
+/* ==================================================================== */
+
+#define SH_PI_CRBC_TEST_POINT_COMPARE 0x0000000120050380
+#define SH_PI_CRBC_TEST_POINT_COMPARE_MASK 0xffffffffffffffff
+#define SH_PI_CRBC_TEST_POINT_COMPARE_INIT 0xffffffff00000000
+
+/* SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK */
+/* Description: Mask to select Debug bits for trigger generation */
+#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0
+#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN */
+/* Description: debug bit pattern for trigger generation */
+#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32
+#define SH_PI_CRBC_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_SELECT" */
+/* PI CRBC Test Point Select */
+/* ==================================================================== */
+
+#define SH_PI_CRBC_TEST_POINT_SELECT 0x0000000120050400
+#define SH_PI_CRBC_TEST_POINT_SELECT_MASK 0xf777777777777777
+#define SH_PI_CRBC_TEST_POINT_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_PI_CRBC_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63
+#define SH_PI_CRBC_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT" */
+/* PI CRBC Test Point Trigger Select */
+/* ==================================================================== */
+
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT 0x0000000120050480
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_ERROR_MASK" */
+/* PI CRBP Error Mask */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_ERROR_MASK 0x0000000120050500
+#define SH_PI_CRBP_ERROR_MASK_MASK 0x00000000001fffff
+#define SH_PI_CRBP_ERROR_MASK_INIT 0x00000000001fffff
+
+/* SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR */
+/* Description: Mask detection of internal protocol table misses */
+#define SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_CRBP_ERROR_MASK_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR */
+/* Description: Mask graphic reply error detection */
+#define SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR_SHFT 1
+#define SH_PI_CRBP_ERROR_MASK_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR */
+/* Description: Mask detection of external protocol table misses */
+#define SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR_SHFT 2
+#define SH_PI_CRBP_ERROR_MASK_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR */
+/* Description: Mask memory error reply message detection */
+#define SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR_SHFT 3
+#define SH_PI_CRBP_ERROR_MASK_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR */
+/* Description: Mask PIO reply error message detection */
+#define SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR_SHFT 4
+#define SH_PI_CRBP_ERROR_MASK_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR */
+/* Description: Mask memory time-out detection */
+#define SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR_SHFT 5
+#define SH_PI_CRBP_ERROR_MASK_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR */
+/* Description: Mask PIO time-out detection */
+#define SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR_SHFT 6
+#define SH_PI_CRBP_ERROR_MASK_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE */
+/* Description: Mask un-correctable ECC error detection */
+#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE */
+/* Description: Mask correctable ECC error detection */
+#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE_SHFT 8
+#define SH_PI_CRBP_ERROR_MASK_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR */
+/* Description: Mask detection of color errors */
+#define SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_CRBP_ERROR_MASK_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW */
+/* Description: Mask MD Request input buffer overflow error */
+#define SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_CRBP_ERROR_MASK_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW */
+/* Description: Mask MD Reply input buffer overflow error */
+#define SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_CRBP_ERROR_MASK_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW */
+/* Description: Mask XN Request input buffer overflow error */
+#define SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_CRBP_ERROR_MASK_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW */
+/* Description: Mask XN Reply input buffer overflow error */
+#define SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_CRBP_ERROR_MASK_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_CRBP_ERROR_MASK_NACK_OFLOW */
+/* Description: Mask NACK overflow error */
+#define SH_PI_CRBP_ERROR_MASK_NACK_OFLOW_SHFT 14
+#define SH_PI_CRBP_ERROR_MASK_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_CRBP_ERROR_MASK_GFX_INT_0 */
+/* Description: Mask GFX transfer interrupt for CPU 0 */
+#define SH_PI_CRBP_ERROR_MASK_GFX_INT_0_SHFT 15
+#define SH_PI_CRBP_ERROR_MASK_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_CRBP_ERROR_MASK_GFX_INT_1 */
+/* Description: Mask GFX transfer interrupt for CPU 1 */
+#define SH_PI_CRBP_ERROR_MASK_GFX_INT_1_SHFT 16
+#define SH_PI_CRBP_ERROR_MASK_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW */
+/* Description: Mask MD Request Credit Overflow Error */
+#define SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_CRBP_ERROR_MASK_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW */
+/* Description: Mask MD Reply Credit Overflow Error */
+#define SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_CRBP_ERROR_MASK_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW */
+/* Description: Mask XN Request Credit Overflow Error */
+#define SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_CRBP_ERROR_MASK_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW */
+/* Description: Mask XN Reply Credit Overflow Error */
+#define SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_CRBP_ERROR_MASK_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FSB_PIPE_COMPARE" */
+/* CRBP FSB Pipe Compare */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_FSB_PIPE_COMPARE 0x0000000120050580
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_MASK 0x001fffffffffffff
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS */
+/* Description: Address A or B to compare against */
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS_SHFT 0
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_ADDRESS_MASK 0x00007fffffffffff
+
+/* SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ */
+/* Description: REQa or REQb value to compare against */
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ_SHFT 47
+#define SH_PI_CRBP_FSB_PIPE_COMPARE_COMPARE_REQ_MASK 0x001f800000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FSB_PIPE_MASK" */
+/* CRBP Compare Mask */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_FSB_PIPE_MASK 0x0000000120050600
+#define SH_PI_CRBP_FSB_PIPE_MASK_MASK 0x001fffffffffffff
+#define SH_PI_CRBP_FSB_PIPE_MASK_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK */
+/* Description: Address A or B mask values */
+#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK_SHFT 0
+#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_ADDRESS_MASK_MASK 0x00007fffffffffff
+
+/* SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK */
+/* Description: REQa or REQb mask values */
+#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK_SHFT 47
+#define SH_PI_CRBP_FSB_PIPE_MASK_COMPARE_REQ_MASK_MASK 0x001f800000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_COMPARE" */
+/* PI CRBP Test Point Compare */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_TEST_POINT_COMPARE 0x0000000120050680
+#define SH_PI_CRBP_TEST_POINT_COMPARE_MASK 0xffffffffffffffff
+#define SH_PI_CRBP_TEST_POINT_COMPARE_INIT 0xffffffff00000000
+
+/* SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK */
+/* Description: Mask to select Debug bits for trigger generation */
+#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0
+#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN */
+/* Description: debug bit pattern for trigger generation */
+#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32
+#define SH_PI_CRBP_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_SELECT" */
+/* PI CRBP Test Point Select */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_TEST_POINT_SELECT 0x0000000120050700
+#define SH_PI_CRBP_TEST_POINT_SELECT_MASK 0xf777777777777777
+#define SH_PI_CRBP_TEST_POINT_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_PI_CRBP_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63
+#define SH_PI_CRBP_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT" */
+/* PI CRBP Test Point Trigger Select */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT 0x0000000120050780
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_0" */
+/* CRBP XB Pipe Compare */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0 0x0000000120050800
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_MASK 0x007fffffffffffff
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS */
+/* Description: Address to compare against */
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS_SHFT 0
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_ADDRESS_MASK 0x00007fffffffffff
+
+/* SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND */
+/* Description: SN2NET Command to compare against */
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND_SHFT 47
+#define SH_PI_CRBP_XB_PIPE_COMPARE_0_COMPARE_COMMAND_MASK 0x007f800000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_1" */
+/* CRBP XB Pipe Compare */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1 0x0000000120050880
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_MASK 0x000001ff3fff3fff
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE */
+/* Description: Source to compare against */
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE_SHFT 0
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SOURCE_MASK 0x0000000000003fff
+
+/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL */
+/* Description: Supplemental to compare against */
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL_SHFT 16
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_SUPPLEMENTAL_MASK 0x000000003fff0000
+
+/* SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO */
+/* Description: Echo to compare against */
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO_SHFT 32
+#define SH_PI_CRBP_XB_PIPE_COMPARE_1_COMPARE_ECHO_MASK 0x000001ff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_MASK_0" */
+/* CRBP Compare Mask Register 1 */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_XB_PIPE_MASK_0 0x0000000120050900
+#define SH_PI_CRBP_XB_PIPE_MASK_0_MASK 0x007fffffffffffff
+#define SH_PI_CRBP_XB_PIPE_MASK_0_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK */
+/* Description: Address to compare against */
+#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK_SHFT 0
+#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_ADDRESS_MASK_MASK 0x00007fffffffffff
+
+/* SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK */
+/* Description: SN2NET Command to compare against */
+#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK_SHFT 47
+#define SH_PI_CRBP_XB_PIPE_MASK_0_COMPARE_COMMAND_MASK_MASK 0x007f800000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_MASK_1" */
+/* CRBP XB Pipe Compare Mask Register 1 */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_XB_PIPE_MASK_1 0x0000000120050980
+#define SH_PI_CRBP_XB_PIPE_MASK_1_MASK 0x000001ff3fff3fff
+#define SH_PI_CRBP_XB_PIPE_MASK_1_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK */
+/* Description: Source to compare against */
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK_SHFT 0
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SOURCE_MASK_MASK 0x0000000000003fff
+
+/* SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK */
+/* Description: Supplemental to compare against */
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK_SHFT 16
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_SUPPLEMENTAL_MASK_MASK 0x000000003fff0000
+
+/* SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK */
+/* Description: Echo to compare against */
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK_SHFT 32
+#define SH_PI_CRBP_XB_PIPE_MASK_1_COMPARE_ECHO_MASK_MASK 0x000001ff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_DPC_QUEUE_CONFIG" */
+/* DPC Queue Configuration */
+/* ==================================================================== */
+
+#define SH_PI_DPC_QUEUE_CONFIG 0x0000000120050a00
+#define SH_PI_DPC_QUEUE_CONFIG_MASK 0x000000001f1f1f1f
+#define SH_PI_DPC_QUEUE_CONFIG_INIT 0x000000000c010c01
+
+/* SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL */
+/* Description: DXB WTL Command Queue Almost Empty Level */
+#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL_SHFT 0
+#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AE_LEVEL_MASK 0x000000000000001f
+
+/* SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH */
+/* Description: DXB WTL Command Queue Almost Full Threshold */
+#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH_SHFT 8
+#define SH_PI_DPC_QUEUE_CONFIG_DWCQ_AF_THRESH_MASK 0x0000000000001f00
+
+/* SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL */
+/* Description: FSB WTL Command Queue Almost Empty Level */
+#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL_SHFT 16
+#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AE_LEVEL_MASK 0x00000000001f0000
+
+/* SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH */
+/* Description: FSB WTL Command Queue Almost Full Threshold */
+#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH_SHFT 24
+#define SH_PI_DPC_QUEUE_CONFIG_FWCQ_AF_THRESH_MASK 0x000000001f000000
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_MASK" */
+/* PI Error Mask */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_MASK 0x0000000120050a80
+#define SH_PI_ERROR_MASK_MASK 0x00000007ffffffff
+#define SH_PI_ERROR_MASK_INIT 0x00000007ffffffff
+
+/* SH_PI_ERROR_MASK_FSB_PROTO_ERR */
+/* Description: Mask detection of internal protocol table misses */
+#define SH_PI_ERROR_MASK_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_ERROR_MASK_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_ERROR_MASK_GFX_RP_ERR */
+/* Description: Mask graphic reply error message detection */
+#define SH_PI_ERROR_MASK_GFX_RP_ERR_SHFT 1
+#define SH_PI_ERROR_MASK_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_ERROR_MASK_XB_PROTO_ERR */
+/* Description: Mask detection of external protocol table misses */
+#define SH_PI_ERROR_MASK_XB_PROTO_ERR_SHFT 2
+#define SH_PI_ERROR_MASK_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_ERROR_MASK_MEM_RP_ERR */
+/* Description: Mask memory reply error detection */
+#define SH_PI_ERROR_MASK_MEM_RP_ERR_SHFT 3
+#define SH_PI_ERROR_MASK_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_ERROR_MASK_PIO_RP_ERR */
+/* Description: Mask PIO reply error detection */
+#define SH_PI_ERROR_MASK_PIO_RP_ERR_SHFT 4
+#define SH_PI_ERROR_MASK_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_ERROR_MASK_MEM_TO_ERR */
+/* Description: Mask CRB time-out errors */
+#define SH_PI_ERROR_MASK_MEM_TO_ERR_SHFT 5
+#define SH_PI_ERROR_MASK_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_ERROR_MASK_PIO_TO_ERR */
+/* Description: Mask PIO time-out errors */
+#define SH_PI_ERROR_MASK_PIO_TO_ERR_SHFT 6
+#define SH_PI_ERROR_MASK_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_ERROR_MASK_FSB_SHUB_UCE */
+/* Description: Mask un-correctable ECC error detection */
+#define SH_PI_ERROR_MASK_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_ERROR_MASK_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_ERROR_MASK_FSB_SHUB_CE */
+/* Description: Mask correctable ECC error detection */
+#define SH_PI_ERROR_MASK_FSB_SHUB_CE_SHFT 8
+#define SH_PI_ERROR_MASK_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_ERROR_MASK_MSG_COLOR_ERR */
+/* Description: Mask message color error detection */
+#define SH_PI_ERROR_MASK_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_ERROR_MASK_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW */
+/* Description: Mask MD Request input buffer overflow error */
+#define SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_ERROR_MASK_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_ERROR_MASK_MD_RP_Q_OFLOW */
+/* Description: Mask MD Reply input buffer overflow error */
+#define SH_PI_ERROR_MASK_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_ERROR_MASK_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW */
+/* Description: Mask XN Request input buffer overflow error */
+#define SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_ERROR_MASK_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_ERROR_MASK_XN_RP_Q_OFLOW */
+/* Description: Mask XN Reply input buffer overflow error */
+#define SH_PI_ERROR_MASK_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_ERROR_MASK_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_ERROR_MASK_NACK_OFLOW */
+/* Description: Mask NACK overflow error */
+#define SH_PI_ERROR_MASK_NACK_OFLOW_SHFT 14
+#define SH_PI_ERROR_MASK_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_ERROR_MASK_GFX_INT_0 */
+/* Description: Mask GFX transfer interrupt for CPU 0 */
+#define SH_PI_ERROR_MASK_GFX_INT_0_SHFT 15
+#define SH_PI_ERROR_MASK_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_ERROR_MASK_GFX_INT_1 */
+/* Description: Mask GFX transfer interrupt for CPU 1 */
+#define SH_PI_ERROR_MASK_GFX_INT_1_SHFT 16
+#define SH_PI_ERROR_MASK_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW */
+/* Description: Mask MD Request Credit Overflow Error */
+#define SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_ERROR_MASK_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW */
+/* Description: Mask MD Reply Credit Overflow Error */
+#define SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_ERROR_MASK_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW */
+/* Description: Mask XN Request Credit Overflow Error */
+#define SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_ERROR_MASK_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW */
+/* Description: Mask XN Reply Credit Overflow Error */
+#define SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_ERROR_MASK_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* SH_PI_ERROR_MASK_HUNG_BUS */
+/* Description: Mask FSB hung error */
+#define SH_PI_ERROR_MASK_HUNG_BUS_SHFT 21
+#define SH_PI_ERROR_MASK_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_ERROR_MASK_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_ERROR_MASK_RSP_PARITY_SHFT 22
+#define SH_PI_ERROR_MASK_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_ERROR_MASK_IOQ_OVERRUN */
+/* Description: Overrun error detected on IOQ */
+#define SH_PI_ERROR_MASK_IOQ_OVERRUN_SHFT 23
+#define SH_PI_ERROR_MASK_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_ERROR_MASK_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_ERROR_MASK_REQ_FORMAT_SHFT 24
+#define SH_PI_ERROR_MASK_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_ERROR_MASK_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_ERROR_MASK_ADDR_ACCESS_SHFT 25
+#define SH_PI_ERROR_MASK_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_ERROR_MASK_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_ERROR_MASK_REQ_PARITY_SHFT 26
+#define SH_PI_ERROR_MASK_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_ERROR_MASK_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_ERROR_MASK_ADDR_PARITY_SHFT 27
+#define SH_PI_ERROR_MASK_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_ERROR_MASK_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE */
+#define SH_PI_ERROR_MASK_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_ERROR_MASK_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_ERROR_MASK_SHUB_FSB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_ERROR_MASK_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_ERROR_MASK_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_ERROR_MASK_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_ERROR_MASK_SHUB_FSB_CE_SHFT 30
+#define SH_PI_ERROR_MASK_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_ERROR_MASK_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_ERROR_MASK_LIVELOCK_SHFT 31
+#define SH_PI_ERROR_MASK_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_ERROR_MASK_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_ERROR_MASK_BAD_SNOOP_SHFT 32
+#define SH_PI_ERROR_MASK_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_ERROR_MASK_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_ERROR_MASK_FSB_TBL_MISS_SHFT 33
+#define SH_PI_ERROR_MASK_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_ERROR_MASK_MSG_LENGTH */
+/* Description: Message length error on received message from SIC */
+#define SH_PI_ERROR_MASK_MSG_LENGTH_SHFT 34
+#define SH_PI_ERROR_MASK_MSG_LENGTH_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_EXPRESS_REPLY_CONFIG" */
+/* PI Express Reply Configuration */
+/* ==================================================================== */
+
+#define SH_PI_EXPRESS_REPLY_CONFIG 0x0000000120050b00
+#define SH_PI_EXPRESS_REPLY_CONFIG_MASK 0x0000000000000007
+#define SH_PI_EXPRESS_REPLY_CONFIG_INIT 0x0000000000000001
+
+/* SH_PI_EXPRESS_REPLY_CONFIG_MODE */
+/* Description: Express Reply Mode */
+#define SH_PI_EXPRESS_REPLY_CONFIG_MODE_SHFT 0
+#define SH_PI_EXPRESS_REPLY_CONFIG_MODE_MASK 0x0000000000000007
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_COMPARE_VALUE" */
+/* FSB Compare Value */
+/* ==================================================================== */
+
+#define SH_PI_FSB_COMPARE_VALUE 0x0000000120050c00
+#define SH_PI_FSB_COMPARE_VALUE_MASK 0xffffffffffffffff
+#define SH_PI_FSB_COMPARE_VALUE_INIT 0x0000000000000000
+
+/* SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE */
+/* Description: Compare value */
+#define SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE_SHFT 0
+#define SH_PI_FSB_COMPARE_VALUE_COMPARE_VALUE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_COMPARE_MASK" */
+/* FSB Compare Mask */
+/* ==================================================================== */
+
+#define SH_PI_FSB_COMPARE_MASK 0x0000000120050b80
+#define SH_PI_FSB_COMPARE_MASK_MASK 0xffffffffffffffff
+#define SH_PI_FSB_COMPARE_MASK_INIT 0x0000000000000000
+
+/* SH_PI_FSB_COMPARE_MASK_MASK_VALUE */
+/* Description: Mask value */
+#define SH_PI_FSB_COMPARE_MASK_MASK_VALUE_SHFT 0
+#define SH_PI_FSB_COMPARE_MASK_MASK_VALUE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_ERROR_INJECTION" */
+/* Inject an Error onto the FSB */
+/* ==================================================================== */
+
+#define SH_PI_FSB_ERROR_INJECTION 0x0000000120050c80
+#define SH_PI_FSB_ERROR_INJECTION_MASK 0x000000070fff03ff
+#define SH_PI_FSB_ERROR_INJECTION_INIT 0x0000000000000000
+
+/* SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB */
+/* Description: Inject a RP# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB_SHFT 0
+#define SH_PI_FSB_ERROR_INJECTION_RP_PE_TO_FSB_MASK 0x0000000000000001
+
+/* SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB */
+/* Description: Inject an AP[0]# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB_SHFT 1
+#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_TO_FSB_MASK 0x0000000000000002
+
+/* SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB */
+/* Description: Inject an AP[1]# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB_SHFT 2
+#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_TO_FSB_MASK 0x0000000000000004
+
+/* SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB */
+/* Description: Inject a RSP# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB_SHFT 3
+#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_TO_FSB_MASK 0x0000000000000008
+
+/* SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB */
+/* Description: Inject a Correctable Error in Doubleword 0 onto the */
+/* FSB */
+#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB_SHFT 4
+#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_TO_FSB_MASK 0x0000000000000010
+
+/* SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 0 onto */
+/* the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB_SHFT 5
+#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_TO_FSB_MASK 0x0000000000000020
+
+/* SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB */
+/* Description: Inject a Correctable Error in Doubleword 1 onto the */
+/* FSB */
+#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB_SHFT 6
+#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_TO_FSB_MASK 0x0000000000000040
+
+/* SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 1 onto */
+/* the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB_SHFT 7
+#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_TO_FSB_MASK 0x0000000000000080
+
+/* SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB */
+/* Description: Inject an IP[0]# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB_SHFT 8
+#define SH_PI_FSB_ERROR_INJECTION_IP0_PE_TO_FSB_MASK 0x0000000000000100
+
+/* SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB */
+/* Description: Inject an IP[1]# Parity Error onto the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB_SHFT 9
+#define SH_PI_FSB_ERROR_INJECTION_IP1_PE_TO_FSB_MASK 0x0000000000000200
+
+/* SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB */
+/* Description: Inject a RP# Parity Error When Sampling the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB_SHFT 16
+#define SH_PI_FSB_ERROR_INJECTION_RP_PE_FROM_FSB_MASK 0x0000000000010000
+
+/* SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB */
+/* Description: Inject an AP[0]# Parity Error When Sampling the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB_SHFT 17
+#define SH_PI_FSB_ERROR_INJECTION_AP0_PE_FROM_FSB_MASK 0x0000000000020000
+
+/* SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB */
+/* Description: Inject an AP[1]# Parity Error When Sampling the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB_SHFT 18
+#define SH_PI_FSB_ERROR_INJECTION_AP1_PE_FROM_FSB_MASK 0x0000000000040000
+
+/* SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB */
+/* Description: Inject a RSP# Parity Error When Sampling the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB_SHFT 19
+#define SH_PI_FSB_ERROR_INJECTION_RSP_PE_FROM_FSB_MASK 0x0000000000080000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB */
+/* Description: Inject a Correctable Error in Doubleword 0 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB_SHFT 20
+#define SH_PI_FSB_ERROR_INJECTION_DW0_CE_FROM_FSB_MASK 0x0000000000100000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 0 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB_SHFT 21
+#define SH_PI_FSB_ERROR_INJECTION_DW0_UCE_FROM_FSB_MASK 0x0000000000200000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB */
+/* Description: Inject a Correctable Error in Doubleword 1 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB_SHFT 22
+#define SH_PI_FSB_ERROR_INJECTION_DW1_CE_FROM_FSB_MASK 0x0000000000400000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 1 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB_SHFT 23
+#define SH_PI_FSB_ERROR_INJECTION_DW1_UCE_FROM_FSB_MASK 0x0000000000800000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB */
+/* Description: Inject a Correctable Error in Doubleword 2 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB_SHFT 24
+#define SH_PI_FSB_ERROR_INJECTION_DW2_CE_FROM_FSB_MASK 0x0000000001000000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 2 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB_SHFT 25
+#define SH_PI_FSB_ERROR_INJECTION_DW2_UCE_FROM_FSB_MASK 0x0000000002000000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB */
+/* Description: Inject a Correctable Error in Doubleword 3 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB_SHFT 26
+#define SH_PI_FSB_ERROR_INJECTION_DW3_CE_FROM_FSB_MASK 0x0000000004000000
+
+/* SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB */
+/* Description: Inject an Uncorrectable Error in Doubleword 3 of SIC */
+/* Data Packet 0 */
+#define SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB_SHFT 27
+#define SH_PI_FSB_ERROR_INJECTION_DW3_UCE_FROM_FSB_MASK 0x0000000008000000
+
+/* SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN */
+/* Description: Inject an IOQ overrun error on the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN_SHFT 32
+#define SH_PI_FSB_ERROR_INJECTION_IOQ_OVERRUN_MASK 0x0000000100000000
+
+/* SH_PI_FSB_ERROR_INJECTION_LIVELOCK */
+/* Description: Inject a livelock Error on the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_LIVELOCK_SHFT 33
+#define SH_PI_FSB_ERROR_INJECTION_LIVELOCK_MASK 0x0000000200000000
+
+/* SH_PI_FSB_ERROR_INJECTION_BUS_HANG */
+/* Description: Inject a bus hang on the FSB */
+#define SH_PI_FSB_ERROR_INJECTION_BUS_HANG_SHFT 34
+#define SH_PI_FSB_ERROR_INJECTION_BUS_HANG_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REPLY_VC_CONFIG" */
+/* MD-to-PI Reply Virtual Channel Configuration */
+/* ==================================================================== */
+
+#define SH_PI_MD2PI_REPLY_VC_CONFIG 0x0000000120050d00
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_MASK 0xc000000000003fff
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_INIT 0x000000000000088c
+
+/* SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH */
+/* Description: Depth of header Buffer */
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH_SHFT 0
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f
+
+/* SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH */
+/* Description: Number of data buffers Available */
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH_SHFT 4
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0
+
+/* SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS */
+/* Description: Maximum credits from sender */
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS_SHFT 8
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00
+
+/* SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT */
+/* Description: Send an extra credit to sender */
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT_SHFT 62
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000
+
+/* SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS */
+/* Description: Capture credit and status information */
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63
+#define SH_PI_MD2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REQUEST_VC_CONFIG" */
+/* MD-to-PI Request Virtual Channel Configuration */
+/* ==================================================================== */
+
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG 0x0000000120050d80
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MASK 0xc000000000003fff
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_INIT 0x000000000000088c
+
+/* SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH */
+/* Description: Depth of header Buffer */
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH_SHFT 0
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f
+
+/* SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH */
+/* Description: Number of data buffers Available */
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH_SHFT 4
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0
+
+/* SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS */
+/* Description: Maximum credits from sender */
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS_SHFT 8
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00
+
+/* SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT */
+/* Description: Send an extra credit to sender */
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_SHFT 62
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000
+
+/* SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS */
+/* Description: Capture credit and status information */
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63
+#define SH_PI_MD2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_QUEUE_ERROR_INJECTION" */
+/* PI Queue Error Injection */
+/* ==================================================================== */
+
+#define SH_PI_QUEUE_ERROR_INJECTION 0x0000000120050e00
+#define SH_PI_QUEUE_ERROR_INJECTION_MASK 0x00000000000000ff
+#define SH_PI_QUEUE_ERROR_INJECTION_INIT 0x0000000000000000
+
+/* SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q */
+#define SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q_SHFT 0
+#define SH_PI_QUEUE_ERROR_INJECTION_DAT_DFR_Q_MASK 0x0000000000000001
+
+/* SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q */
+#define SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q_SHFT 1
+#define SH_PI_QUEUE_ERROR_INJECTION_DXB_WTL_CMND_Q_MASK 0x0000000000000002
+
+/* SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q */
+#define SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q_SHFT 2
+#define SH_PI_QUEUE_ERROR_INJECTION_FSB_WTL_CMND_Q_MASK 0x0000000000000004
+
+/* SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR */
+#define SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR_SHFT 3
+#define SH_PI_QUEUE_ERROR_INJECTION_MDPI_RPY_BFR_MASK 0x0000000000000008
+
+/* SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR */
+#define SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR_SHFT 4
+#define SH_PI_QUEUE_ERROR_INJECTION_PTC_INTR_MASK 0x0000000000000010
+
+/* SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q */
+#define SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q_SHFT 5
+#define SH_PI_QUEUE_ERROR_INJECTION_RXL_KILL_Q_MASK 0x0000000000000020
+
+/* SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q */
+#define SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q_SHFT 6
+#define SH_PI_QUEUE_ERROR_INJECTION_RXL_RDY_Q_MASK 0x0000000000000040
+
+/* SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR */
+#define SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR_SHFT 7
+#define SH_PI_QUEUE_ERROR_INJECTION_XNPI_RPY_BFR_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_COMPARE" */
+/* PI Test Point Compare */
+/* ==================================================================== */
+
+#define SH_PI_TEST_POINT_COMPARE 0x0000000120050e80
+#define SH_PI_TEST_POINT_COMPARE_MASK 0xffffffffffffffff
+#define SH_PI_TEST_POINT_COMPARE_INIT 0xffffffff00000000
+
+/* SH_PI_TEST_POINT_COMPARE_COMPARE_MASK */
+/* Description: Mask to select test point data for trigger generation */
+#define SH_PI_TEST_POINT_COMPARE_COMPARE_MASK_SHFT 0
+#define SH_PI_TEST_POINT_COMPARE_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN */
+/* Description: Pattern of test point data to cause trigger */
+#define SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN_SHFT 32
+#define SH_PI_TEST_POINT_COMPARE_COMPARE_PATTERN_MASK 0xffffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_SELECT" */
+/* PI Test Point Select */
+/* ==================================================================== */
+
+#define SH_PI_TEST_POINT_SELECT 0x0000000120050f00
+#define SH_PI_TEST_POINT_SELECT_MASK 0xf777777777777777
+#define SH_PI_TEST_POINT_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble 0 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 0
+#define SH_PI_TEST_POINT_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 0 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_PI_TEST_POINT_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble 1 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 8
+#define SH_PI_TEST_POINT_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 1 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_PI_TEST_POINT_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble 2 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 16
+#define SH_PI_TEST_POINT_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 2 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_PI_TEST_POINT_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble 3 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 24
+#define SH_PI_TEST_POINT_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 3 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_PI_TEST_POINT_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble 4 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 32
+#define SH_PI_TEST_POINT_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 4 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_PI_TEST_POINT_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble 5 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 40
+#define SH_PI_TEST_POINT_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 5 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_PI_TEST_POINT_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble 6 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 48
+#define SH_PI_TEST_POINT_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 6 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_PI_TEST_POINT_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble 7 data is from Chiplet X */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 56
+#define SH_PI_TEST_POINT_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble X is routed to Nibble 7 */
+#define SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_PI_TEST_POINT_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE_SHFT 63
+#define SH_PI_TEST_POINT_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_TRIGGER_SELECT" */
+/* PI Test Point Trigger Select */
+/* ==================================================================== */
+
+#define SH_PI_TEST_POINT_TRIGGER_SELECT 0x0000000120050f80
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_MASK 0x7777777777777777
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_INIT 0x0000000000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL */
+/* Description: Nibble 0 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_SHFT 0
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_CHIPLET_SEL_MASK 0x0000000000000007
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL */
+/* Description: Nibble 1 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_SHFT 8
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_CHIPLET_SEL_MASK 0x0000000000000700
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL */
+/* Description: Nibble 2 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_SHFT 16
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_CHIPLET_SEL_MASK 0x0000000000070000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL */
+/* Description: Nibble 3 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_SHFT 24
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_CHIPLET_SEL_MASK 0x0000000007000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL */
+/* Description: Nibble 4 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_SHFT 32
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_CHIPLET_SEL_MASK 0x0000000700000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL */
+/* Description: Nibble 5 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_SHFT 40
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_CHIPLET_SEL_MASK 0x0000070000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL */
+/* Description: Nibble 6 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_SHFT 48
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_CHIPLET_SEL_MASK 0x0007000000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL */
+/* Description: Nibble 7 Chiplet select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_SHFT 56
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_CHIPLET_SEL_MASK 0x0700000000000000
+
+/* SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_PI_TEST_POINT_TRIGGER_SELECT_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REPLY_VC_CONFIG" */
+/* XN-to-PI Reply Virtual Channel Configuration */
+/* ==================================================================== */
+
+#define SH_PI_XN2PI_REPLY_VC_CONFIG 0x0000000120051000
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_MASK 0xc000000000003fff
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_INIT 0x000000000000068c
+
+/* SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH */
+/* Description: Depth of header Buffer */
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH_SHFT 0
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f
+
+/* SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH */
+/* Description: Number of data buffers Available */
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH_SHFT 4
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0
+
+/* SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS */
+/* Description: Maximum credits from sender */
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS_SHFT 8
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00
+
+/* SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT */
+/* Description: Send an extra credit to sender */
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT_SHFT 62
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000
+
+/* SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS */
+/* Description: Capture credit and status information */
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63
+#define SH_PI_XN2PI_REPLY_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REQUEST_VC_CONFIG" */
+/* XN-to-PI Request Virtual Channel Configuration */
+/* ==================================================================== */
+
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG 0x0000000120051080
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MASK 0xc000000000003fff
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_INIT 0x000000000000068c
+
+/* SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH */
+/* Description: Depth of header Buffer */
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH_SHFT 0
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_HDR_DEPTH_MASK 0x000000000000000f
+
+/* SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH */
+/* Description: Number of data buffers Available */
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH_SHFT 4
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_DATA_DEPTH_MASK 0x00000000000000f0
+
+/* SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS */
+/* Description: Maximum credits from sender */
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS_SHFT 8
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_MAX_CREDITS_MASK 0x0000000000003f00
+
+/* SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT */
+/* Description: Send an extra credit to sender */
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_SHFT 62
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_FORCE_CREDIT_MASK 0x4000000000000000
+
+/* SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS */
+/* Description: Capture credit and status information */
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_SHFT 63
+#define SH_PI_XN2PI_REQUEST_VC_CONFIG_CAPTURE_CREDIT_STATUS_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_AEC_STATUS" */
+/* PI Adaptive Error Correction Status */
+/* ==================================================================== */
+
+#define SH_PI_AEC_STATUS 0x0000000120060000
+#define SH_PI_AEC_STATUS_MASK 0x0000000000000007
+#define SH_PI_AEC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_AEC_STATUS_STATE */
+/* Description: AEC State */
+#define SH_PI_AEC_STATUS_STATE_SHFT 0
+#define SH_PI_AEC_STATUS_STATE_MASK 0x0000000000000007
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_FIRST_ERROR" */
+/* PI AFI First Error */
+/* ==================================================================== */
+
+#define SH_PI_AFI_FIRST_ERROR 0x0000000120060080
+#define SH_PI_AFI_FIRST_ERROR_MASK 0x00000007ffe00180
+#define SH_PI_AFI_FIRST_ERROR_INIT 0x0000000000000000
+
+/* SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE_SHFT 8
+#define SH_PI_AFI_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_AFI_FIRST_ERROR_HUNG_BUS */
+/* Description: FSB is hung */
+#define SH_PI_AFI_FIRST_ERROR_HUNG_BUS_SHFT 21
+#define SH_PI_AFI_FIRST_ERROR_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_AFI_FIRST_ERROR_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_AFI_FIRST_ERROR_RSP_PARITY_SHFT 22
+#define SH_PI_AFI_FIRST_ERROR_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN */
+/* Description: Overrun error detected on IOQ */
+#define SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN_SHFT 23
+#define SH_PI_AFI_FIRST_ERROR_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_AFI_FIRST_ERROR_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_AFI_FIRST_ERROR_REQ_FORMAT_SHFT 24
+#define SH_PI_AFI_FIRST_ERROR_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS_SHFT 25
+#define SH_PI_AFI_FIRST_ERROR_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_AFI_FIRST_ERROR_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_AFI_FIRST_ERROR_REQ_PARITY_SHFT 26
+#define SH_PI_AFI_FIRST_ERROR_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_AFI_FIRST_ERROR_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_AFI_FIRST_ERROR_ADDR_PARITY_SHFT 27
+#define SH_PI_AFI_FIRST_ERROR_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE */
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE_SHFT 30
+#define SH_PI_AFI_FIRST_ERROR_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_AFI_FIRST_ERROR_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_AFI_FIRST_ERROR_LIVELOCK_SHFT 31
+#define SH_PI_AFI_FIRST_ERROR_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_AFI_FIRST_ERROR_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_AFI_FIRST_ERROR_BAD_SNOOP_SHFT 32
+#define SH_PI_AFI_FIRST_ERROR_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS_SHFT 33
+#define SH_PI_AFI_FIRST_ERROR_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_AFI_FIRST_ERROR_MSG_LEN */
+/* Description: Runt or Obese message received from SIC */
+#define SH_PI_AFI_FIRST_ERROR_MSG_LEN_SHFT 34
+#define SH_PI_AFI_FIRST_ERROR_MSG_LEN_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_ADDRESS_READ_DATA" */
+/* CRB CAM MMR Address Read Data */
+/* ==================================================================== */
+
+#define SH_PI_CAM_ADDRESS_READ_DATA 0x0000000120060100
+#define SH_PI_CAM_ADDRESS_READ_DATA_MASK 0x8000ffffffffffff
+#define SH_PI_CAM_ADDRESS_READ_DATA_INIT 0x0000000000000000
+
+/* SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR */
+/* Description: CRB CAM Address Read Data. */
+#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_SHFT 0
+#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_MASK 0x0000ffffffffffff
+
+/* SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL */
+/* Description: CRB CAM Address Read Data Valid. */
+#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL_SHFT 63
+#define SH_PI_CAM_ADDRESS_READ_DATA_CAM_ADDR_VAL_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_LPRA_READ_DATA" */
+/* CRB CAM MMR LPRA Read Data */
+/* ==================================================================== */
+
+#define SH_PI_CAM_LPRA_READ_DATA 0x0000000120060180
+#define SH_PI_CAM_LPRA_READ_DATA_MASK 0xffffffffffffffff
+#define SH_PI_CAM_LPRA_READ_DATA_INIT 0x0000000000000000
+
+/* SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA */
+/* Description: CRB CAM LPRA read data. */
+#define SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA_SHFT 0
+#define SH_PI_CAM_LPRA_READ_DATA_CAM_LPRA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_STATE_READ_DATA" */
+/* CRB CAM MMR State Read Data */
+/* ==================================================================== */
+
+#define SH_PI_CAM_STATE_READ_DATA 0x0000000120060200
+#define SH_PI_CAM_STATE_READ_DATA_MASK 0x8003ffff0000003f
+#define SH_PI_CAM_STATE_READ_DATA_INIT 0x0000000000000000
+
+/* SH_PI_CAM_STATE_READ_DATA_CAM_STATE */
+/* Description: CRB CAM State read data. */
+#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_SHFT 0
+#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_MASK 0x000000000000000f
+
+/* SH_PI_CAM_STATE_READ_DATA_CAM_TO */
+/* Description: CRB CAM Time-out Status. */
+#define SH_PI_CAM_STATE_READ_DATA_CAM_TO_SHFT 4
+#define SH_PI_CAM_STATE_READ_DATA_CAM_TO_MASK 0x0000000000000010
+
+/* SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND */
+/* Description: CRB CAM State Read Pending. */
+#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND_SHFT 5
+#define SH_PI_CAM_STATE_READ_DATA_CAM_STATE_RD_PEND_MASK 0x0000000000000020
+
+/* SH_PI_CAM_STATE_READ_DATA_CAM_LPRA */
+/* Description: CRB LPRA Overflow Data. */
+#define SH_PI_CAM_STATE_READ_DATA_CAM_LPRA_SHFT 32
+#define SH_PI_CAM_STATE_READ_DATA_CAM_LPRA_MASK 0x0003ffff00000000
+
+/* SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL */
+/* Description: CRB CAM MMR read data is valid. */
+#define SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL_SHFT 63
+#define SH_PI_CAM_STATE_READ_DATA_CAM_RD_DATA_VAL_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_1" */
+/* PI Corrected Error Detail */
+/* ==================================================================== */
+
+#define SH_PI_CORRECTED_DETAIL_1 0x0000000120060280
+#define SH_PI_CORRECTED_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_PI_CORRECTED_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_PI_CORRECTED_DETAIL_1_ADDRESS */
+/* Description: Address of Message that logged Correctable Error */
+#define SH_PI_CORRECTED_DETAIL_1_ADDRESS_SHFT 0
+#define SH_PI_CORRECTED_DETAIL_1_ADDRESS_MASK 0x0000ffffffffffff
+
+/* SH_PI_CORRECTED_DETAIL_1_SYNDROME */
+/* Description: Syndrome for double word data with Correctable Error */
+#define SH_PI_CORRECTED_DETAIL_1_SYNDROME_SHFT 48
+#define SH_PI_CORRECTED_DETAIL_1_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_PI_CORRECTED_DETAIL_1_DEP */
+/* Description: DEP code for Double word in error */
+#define SH_PI_CORRECTED_DETAIL_1_DEP_SHFT 56
+#define SH_PI_CORRECTED_DETAIL_1_DEP_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_2" */
+/* PI Corrected Error Detail 2 */
+/* ==================================================================== */
+
+#define SH_PI_CORRECTED_DETAIL_2 0x0000000120060300
+#define SH_PI_CORRECTED_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_PI_CORRECTED_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_PI_CORRECTED_DETAIL_2_DATA */
+/* Description: Double word data in error */
+#define SH_PI_CORRECTED_DETAIL_2_DATA_SHFT 0
+#define SH_PI_CORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_3" */
+/* PI Corrected Error Detail 3 */
+/* ==================================================================== */
+
+#define SH_PI_CORRECTED_DETAIL_3 0x0000000120060380
+#define SH_PI_CORRECTED_DETAIL_3_MASK 0xffffffffffffffff
+#define SH_PI_CORRECTED_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_PI_CORRECTED_DETAIL_3_ADDRESS */
+/* Description: Address of Message that logged Correctable Error */
+#define SH_PI_CORRECTED_DETAIL_3_ADDRESS_SHFT 0
+#define SH_PI_CORRECTED_DETAIL_3_ADDRESS_MASK 0x0000ffffffffffff
+
+/* SH_PI_CORRECTED_DETAIL_3_SYNDROME */
+/* Description: Syndrome for double word data with Correctable Error */
+#define SH_PI_CORRECTED_DETAIL_3_SYNDROME_SHFT 48
+#define SH_PI_CORRECTED_DETAIL_3_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_PI_CORRECTED_DETAIL_3_DEP */
+/* Description: DEP code for Double word in error */
+#define SH_PI_CORRECTED_DETAIL_3_DEP_SHFT 56
+#define SH_PI_CORRECTED_DETAIL_3_DEP_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_4" */
+/* PI Corrected Error Detail 4 */
+/* ==================================================================== */
+
+#define SH_PI_CORRECTED_DETAIL_4 0x0000000120060400
+#define SH_PI_CORRECTED_DETAIL_4_MASK 0xffffffffffffffff
+#define SH_PI_CORRECTED_DETAIL_4_INIT 0x0000000000000000
+
+/* SH_PI_CORRECTED_DETAIL_4_DATA */
+/* Description: Double word data in error */
+#define SH_PI_CORRECTED_DETAIL_4_DATA_SHFT 0
+#define SH_PI_CORRECTED_DETAIL_4_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FIRST_ERROR" */
+/* PI CRBP First Error */
+/* ==================================================================== */
+
+#define SH_PI_CRBP_FIRST_ERROR 0x0000000120060480
+#define SH_PI_CRBP_FIRST_ERROR_MASK 0x00000000001fffff
+#define SH_PI_CRBP_FIRST_ERROR_INIT 0x0000000000000000
+
+/* SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR */
+/* Description: CRB's FSB pipe detected protocol table miss */
+#define SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_CRBP_FIRST_ERROR_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR */
+/* Description: CRB's XB pipe received a GFX error reply */
+#define SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR_SHFT 1
+#define SH_PI_CRBP_FIRST_ERROR_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR */
+/* Description: CRB's XB pipe detected protocol table miss */
+#define SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR_SHFT 2
+#define SH_PI_CRBP_FIRST_ERROR_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR */
+/* Description: CRB's XB pipe received a memory error reply message */
+#define SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR_SHFT 3
+#define SH_PI_CRBP_FIRST_ERROR_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR */
+/* Description: CRB's XB pipe received a PIO error reply message */
+#define SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR_SHFT 4
+#define SH_PI_CRBP_FIRST_ERROR_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR */
+/* Description: CRB's XB pipe detected a CRB time-out */
+#define SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR_SHFT 5
+#define SH_PI_CRBP_FIRST_ERROR_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR */
+/* Description: CRB's XB pipe detected a PIO time-out */
+#define SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR_SHFT 6
+#define SH_PI_CRBP_FIRST_ERROR_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE_SHFT 8
+#define SH_PI_CRBP_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR */
+/* Description: Message color was wrong */
+#define SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_CRBP_FIRST_ERROR_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW */
+/* Description: MD Request input buffer overflow error */
+#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW */
+/* Description: MD Reply input buffer overflow error */
+#define SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_CRBP_FIRST_ERROR_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW */
+/* Description: XN Request input buffer overflow error */
+#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW */
+/* Description: XN Reply input buffer overflow error */
+#define SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_CRBP_FIRST_ERROR_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW */
+/* Description: NACK overflow error */
+#define SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW_SHFT 14
+#define SH_PI_CRBP_FIRST_ERROR_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_CRBP_FIRST_ERROR_GFX_INT_0 */
+/* Description: GFX transfer interrupt for CPU 0 */
+#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_0_SHFT 15
+#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_CRBP_FIRST_ERROR_GFX_INT_1 */
+/* Description: GFX transfer interrupt for CPU 1 */
+#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_1_SHFT 16
+#define SH_PI_CRBP_FIRST_ERROR_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW */
+/* Description: MD Request Credit Overflow Error */
+#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_CRBP_FIRST_ERROR_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW */
+/* Description: MD Reply Credit Overflow Error */
+#define SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_CRBP_FIRST_ERROR_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW */
+/* Description: XN Request Credit Overflow Error */
+#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_CRBP_FIRST_ERROR_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW */
+/* Description: XN Reply Credit Overflow Error */
+#define SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_CRBP_FIRST_ERROR_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_DETAIL_1" */
+/* PI Error Detail 1 */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_DETAIL_1 0x0000000120060500
+#define SH_PI_ERROR_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_PI_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_PI_ERROR_DETAIL_1_STATUS */
+/* Description: Error Detail 1 */
+#define SH_PI_ERROR_DETAIL_1_STATUS_SHFT 0
+#define SH_PI_ERROR_DETAIL_1_STATUS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_DETAIL_2" */
+/* PI Error Detail 2 */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_DETAIL_2 0x0000000120060580
+#define SH_PI_ERROR_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_PI_ERROR_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_PI_ERROR_DETAIL_2_STATUS */
+/* Description: Error Status */
+#define SH_PI_ERROR_DETAIL_2_STATUS_SHFT 0
+#define SH_PI_ERROR_DETAIL_2_STATUS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_OVERFLOW" */
+/* PI Error Overflow */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_OVERFLOW 0x0000000120060600
+#define SH_PI_ERROR_OVERFLOW_MASK 0x00000007ffffffff
+#define SH_PI_ERROR_OVERFLOW_INIT 0x0000000000000000
+
+/* SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR */
+/* Description: CRB's FSB pipe detected protocol table miss */
+#define SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_ERROR_OVERFLOW_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_ERROR_OVERFLOW_GFX_RP_ERR */
+/* Description: CRB's XB pipe received another GFX reply error message */
+#define SH_PI_ERROR_OVERFLOW_GFX_RP_ERR_SHFT 1
+#define SH_PI_ERROR_OVERFLOW_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR */
+/* Description: CRB's XB pipe detected another protocol table miss */
+#define SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR_SHFT 2
+#define SH_PI_ERROR_OVERFLOW_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_ERROR_OVERFLOW_MEM_RP_ERR */
+/* Description: CRB's XB pipe received another memory reply error message */
+#define SH_PI_ERROR_OVERFLOW_MEM_RP_ERR_SHFT 3
+#define SH_PI_ERROR_OVERFLOW_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_ERROR_OVERFLOW_PIO_RP_ERR */
+/* Description: CRB's XB pipe received another PIO reply error message */
+#define SH_PI_ERROR_OVERFLOW_PIO_RP_ERR_SHFT 4
+#define SH_PI_ERROR_OVERFLOW_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_ERROR_OVERFLOW_MEM_TO_ERR */
+/* Description: CRB's XB pipe detected a CRB time-out */
+#define SH_PI_ERROR_OVERFLOW_MEM_TO_ERR_SHFT 5
+#define SH_PI_ERROR_OVERFLOW_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_ERROR_OVERFLOW_PIO_TO_ERR */
+/* Description: CRB's XB pipe detected a PIO time-out */
+#define SH_PI_ERROR_OVERFLOW_PIO_TO_ERR_SHFT 6
+#define SH_PI_ERROR_OVERFLOW_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE_SHFT 8
+#define SH_PI_ERROR_OVERFLOW_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR */
+/* Description: Message color was not correct */
+#define SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_ERROR_OVERFLOW_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW */
+/* Description: MD Request input buffer overflow error */
+#define SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_ERROR_OVERFLOW_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW */
+/* Description: MD Reply input buffer overflow error */
+#define SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_ERROR_OVERFLOW_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW */
+/* Description: XN Request input buffer overflow error */
+#define SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_ERROR_OVERFLOW_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW */
+/* Description: XN Reply input buffer overflow error */
+#define SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_ERROR_OVERFLOW_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_ERROR_OVERFLOW_NACK_OFLOW */
+/* Description: NACK overflow error */
+#define SH_PI_ERROR_OVERFLOW_NACK_OFLOW_SHFT 14
+#define SH_PI_ERROR_OVERFLOW_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_ERROR_OVERFLOW_GFX_INT_0 */
+/* Description: GFX transfer interrupt for CPU 0 */
+#define SH_PI_ERROR_OVERFLOW_GFX_INT_0_SHFT 15
+#define SH_PI_ERROR_OVERFLOW_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_ERROR_OVERFLOW_GFX_INT_1 */
+/* Description: GFX transfer interrupt for CPU 1 */
+#define SH_PI_ERROR_OVERFLOW_GFX_INT_1_SHFT 16
+#define SH_PI_ERROR_OVERFLOW_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW */
+/* Description: MD Request Credit Overflow Error */
+#define SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_ERROR_OVERFLOW_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW */
+/* Description: MD Reply Credit Overflow Error */
+#define SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_ERROR_OVERFLOW_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW */
+/* Description: XN Request Credit Overflow Error */
+#define SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_ERROR_OVERFLOW_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW */
+/* Description: XN Reply Credit Overflow Error */
+#define SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_ERROR_OVERFLOW_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* SH_PI_ERROR_OVERFLOW_HUNG_BUS */
+/* Description: FSB is hung */
+#define SH_PI_ERROR_OVERFLOW_HUNG_BUS_SHFT 21
+#define SH_PI_ERROR_OVERFLOW_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_ERROR_OVERFLOW_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_ERROR_OVERFLOW_RSP_PARITY_SHFT 22
+#define SH_PI_ERROR_OVERFLOW_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN */
+/* Description: Over run error detected on IOQ */
+#define SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN_SHFT 23
+#define SH_PI_ERROR_OVERFLOW_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_ERROR_OVERFLOW_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_ERROR_OVERFLOW_REQ_FORMAT_SHFT 24
+#define SH_PI_ERROR_OVERFLOW_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_ERROR_OVERFLOW_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_ERROR_OVERFLOW_ADDR_ACCESS_SHFT 25
+#define SH_PI_ERROR_OVERFLOW_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_ERROR_OVERFLOW_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_ERROR_OVERFLOW_REQ_PARITY_SHFT 26
+#define SH_PI_ERROR_OVERFLOW_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_ERROR_OVERFLOW_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_ERROR_OVERFLOW_ADDR_PARITY_SHFT 27
+#define SH_PI_ERROR_OVERFLOW_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE error */
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE_SHFT 30
+#define SH_PI_ERROR_OVERFLOW_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_ERROR_OVERFLOW_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_ERROR_OVERFLOW_LIVELOCK_SHFT 31
+#define SH_PI_ERROR_OVERFLOW_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_ERROR_OVERFLOW_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_ERROR_OVERFLOW_BAD_SNOOP_SHFT 32
+#define SH_PI_ERROR_OVERFLOW_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS_SHFT 33
+#define SH_PI_ERROR_OVERFLOW_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_ERROR_OVERFLOW_MSG_LENGTH */
+/* Description: Message length error on received message from SIC */
+#define SH_PI_ERROR_OVERFLOW_MSG_LENGTH_SHFT 34
+#define SH_PI_ERROR_OVERFLOW_MSG_LENGTH_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_OVERFLOW_ALIAS" */
+/* PI Error Overflow Alias */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_OVERFLOW_ALIAS 0x0000000120060608
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_SUMMARY" */
+/* PI Error Summary */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_SUMMARY 0x0000000120060680
+#define SH_PI_ERROR_SUMMARY_MASK 0x00000007ffffffff
+#define SH_PI_ERROR_SUMMARY_INIT 0x0000000000000000
+
+/* SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR */
+/* Description: CRB's FSB pipe detected protocol table miss */
+#define SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_ERROR_SUMMARY_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_ERROR_SUMMARY_GFX_RP_ERR */
+/* Description: Graphics reply error message received */
+#define SH_PI_ERROR_SUMMARY_GFX_RP_ERR_SHFT 1
+#define SH_PI_ERROR_SUMMARY_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_ERROR_SUMMARY_XB_PROTO_ERR */
+/* Description: CRB's XB pipe detected protocol table miss */
+#define SH_PI_ERROR_SUMMARY_XB_PROTO_ERR_SHFT 2
+#define SH_PI_ERROR_SUMMARY_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_ERROR_SUMMARY_MEM_RP_ERR */
+/* Description: Memory reply error message received */
+#define SH_PI_ERROR_SUMMARY_MEM_RP_ERR_SHFT 3
+#define SH_PI_ERROR_SUMMARY_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_ERROR_SUMMARY_PIO_RP_ERR */
+/* Description: PIO error reply message received */
+#define SH_PI_ERROR_SUMMARY_PIO_RP_ERR_SHFT 4
+#define SH_PI_ERROR_SUMMARY_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_ERROR_SUMMARY_MEM_TO_ERR */
+/* Description: CRB's XB pipe detected a CRB time-out */
+#define SH_PI_ERROR_SUMMARY_MEM_TO_ERR_SHFT 5
+#define SH_PI_ERROR_SUMMARY_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_ERROR_SUMMARY_PIO_TO_ERR */
+/* Description: CRB's XB pipe detected a PIO time-out */
+#define SH_PI_ERROR_SUMMARY_PIO_TO_ERR_SHFT 6
+#define SH_PI_ERROR_SUMMARY_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_ERROR_SUMMARY_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_ERROR_SUMMARY_FSB_SHUB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_ERROR_SUMMARY_FSB_SHUB_CE_SHFT 8
+#define SH_PI_ERROR_SUMMARY_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR */
+/* Description: Message color was wrong */
+#define SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_ERROR_SUMMARY_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW */
+/* Description: MD Request input buffer overflow error */
+#define SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_ERROR_SUMMARY_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW */
+/* Description: MD Reply input buffer overflow error */
+#define SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_ERROR_SUMMARY_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW */
+/* Description: XN Request input buffer overflow error */
+#define SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_ERROR_SUMMARY_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW */
+/* Description: XN Reply input buffer overflow error */
+#define SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_ERROR_SUMMARY_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_ERROR_SUMMARY_NACK_OFLOW */
+/* Description: NACK overflow error */
+#define SH_PI_ERROR_SUMMARY_NACK_OFLOW_SHFT 14
+#define SH_PI_ERROR_SUMMARY_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_ERROR_SUMMARY_GFX_INT_0 */
+/* Description: GFX transfer interrupt for CPU 0 */
+#define SH_PI_ERROR_SUMMARY_GFX_INT_0_SHFT 15
+#define SH_PI_ERROR_SUMMARY_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_ERROR_SUMMARY_GFX_INT_1 */
+/* Description: GFX transfer interrupt for CPU 1 */
+#define SH_PI_ERROR_SUMMARY_GFX_INT_1_SHFT 16
+#define SH_PI_ERROR_SUMMARY_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW */
+/* Description: MD Request Credit Overflow Error */
+#define SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_ERROR_SUMMARY_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW */
+/* Description: MD Reply Credit Overflow Error */
+#define SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_ERROR_SUMMARY_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW */
+/* Description: XN Request Credit Overflow Error */
+#define SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_ERROR_SUMMARY_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW */
+/* Description: XN Reply Credit Overflow Error */
+#define SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_ERROR_SUMMARY_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* SH_PI_ERROR_SUMMARY_HUNG_BUS */
+/* Description: FSB is hung */
+#define SH_PI_ERROR_SUMMARY_HUNG_BUS_SHFT 21
+#define SH_PI_ERROR_SUMMARY_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_ERROR_SUMMARY_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_ERROR_SUMMARY_RSP_PARITY_SHFT 22
+#define SH_PI_ERROR_SUMMARY_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_ERROR_SUMMARY_IOQ_OVERRUN */
+/* Description: Over run error detected on IOQ */
+#define SH_PI_ERROR_SUMMARY_IOQ_OVERRUN_SHFT 23
+#define SH_PI_ERROR_SUMMARY_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_ERROR_SUMMARY_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_ERROR_SUMMARY_REQ_FORMAT_SHFT 24
+#define SH_PI_ERROR_SUMMARY_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_ERROR_SUMMARY_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_ERROR_SUMMARY_ADDR_ACCESS_SHFT 25
+#define SH_PI_ERROR_SUMMARY_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_ERROR_SUMMARY_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_ERROR_SUMMARY_REQ_PARITY_SHFT 26
+#define SH_PI_ERROR_SUMMARY_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_ERROR_SUMMARY_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_ERROR_SUMMARY_ADDR_PARITY_SHFT 27
+#define SH_PI_ERROR_SUMMARY_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE error */
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_ERROR_SUMMARY_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_CE_SHFT 30
+#define SH_PI_ERROR_SUMMARY_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_ERROR_SUMMARY_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_ERROR_SUMMARY_LIVELOCK_SHFT 31
+#define SH_PI_ERROR_SUMMARY_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_ERROR_SUMMARY_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_ERROR_SUMMARY_BAD_SNOOP_SHFT 32
+#define SH_PI_ERROR_SUMMARY_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_ERROR_SUMMARY_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_ERROR_SUMMARY_FSB_TBL_MISS_SHFT 33
+#define SH_PI_ERROR_SUMMARY_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_ERROR_SUMMARY_MSG_LENGTH */
+/* Description: Message length error on received message from SIC */
+#define SH_PI_ERROR_SUMMARY_MSG_LENGTH_SHFT 34
+#define SH_PI_ERROR_SUMMARY_MSG_LENGTH_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_SUMMARY_ALIAS" */
+/* PI Error Summary Alias */
+/* ==================================================================== */
+
+#define SH_PI_ERROR_SUMMARY_ALIAS 0x0000000120060688
+
+/* ==================================================================== */
+/* Register "SH_PI_EXPRESS_REPLY_STATUS" */
+/* PI Express Reply Status */
+/* ==================================================================== */
+
+#define SH_PI_EXPRESS_REPLY_STATUS 0x0000000120060700
+#define SH_PI_EXPRESS_REPLY_STATUS_MASK 0x0000000000000007
+#define SH_PI_EXPRESS_REPLY_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_EXPRESS_REPLY_STATUS_STATE */
+/* Description: Express Reply State */
+#define SH_PI_EXPRESS_REPLY_STATUS_STATE_SHFT 0
+#define SH_PI_EXPRESS_REPLY_STATUS_STATE_MASK 0x0000000000000007
+
+/* ==================================================================== */
+/* Register "SH_PI_FIRST_ERROR" */
+/* PI First Error */
+/* ==================================================================== */
+
+#define SH_PI_FIRST_ERROR 0x0000000120060780
+#define SH_PI_FIRST_ERROR_MASK 0x00000007ffffffff
+#define SH_PI_FIRST_ERROR_INIT 0x0000000000000000
+
+/* SH_PI_FIRST_ERROR_FSB_PROTO_ERR */
+/* Description: CRB's FSB pipe detected protocol table miss */
+#define SH_PI_FIRST_ERROR_FSB_PROTO_ERR_SHFT 0
+#define SH_PI_FIRST_ERROR_FSB_PROTO_ERR_MASK 0x0000000000000001
+
+/* SH_PI_FIRST_ERROR_GFX_RP_ERR */
+/* Description: Graphics error reply message received */
+#define SH_PI_FIRST_ERROR_GFX_RP_ERR_SHFT 1
+#define SH_PI_FIRST_ERROR_GFX_RP_ERR_MASK 0x0000000000000002
+
+/* SH_PI_FIRST_ERROR_XB_PROTO_ERR */
+/* Description: CRB's XB pipe detected protocol table miss */
+#define SH_PI_FIRST_ERROR_XB_PROTO_ERR_SHFT 2
+#define SH_PI_FIRST_ERROR_XB_PROTO_ERR_MASK 0x0000000000000004
+
+/* SH_PI_FIRST_ERROR_MEM_RP_ERR */
+/* Description: Memory reply error message received */
+#define SH_PI_FIRST_ERROR_MEM_RP_ERR_SHFT 3
+#define SH_PI_FIRST_ERROR_MEM_RP_ERR_MASK 0x0000000000000008
+
+/* SH_PI_FIRST_ERROR_PIO_RP_ERR */
+/* Description: PIO reply error message received */
+#define SH_PI_FIRST_ERROR_PIO_RP_ERR_SHFT 4
+#define SH_PI_FIRST_ERROR_PIO_RP_ERR_MASK 0x0000000000000010
+
+/* SH_PI_FIRST_ERROR_MEM_TO_ERR */
+/* Description: CRB's XB pipe detected a CRB time-out */
+#define SH_PI_FIRST_ERROR_MEM_TO_ERR_SHFT 5
+#define SH_PI_FIRST_ERROR_MEM_TO_ERR_MASK 0x0000000000000020
+
+/* SH_PI_FIRST_ERROR_PIO_TO_ERR */
+/* Description: CRB's XB pipe detected a PIO time-out */
+#define SH_PI_FIRST_ERROR_PIO_TO_ERR_SHFT 6
+#define SH_PI_FIRST_ERROR_PIO_TO_ERR_MASK 0x0000000000000040
+
+/* SH_PI_FIRST_ERROR_FSB_SHUB_UCE */
+/* Description: An un-correctable ECC error was detected */
+#define SH_PI_FIRST_ERROR_FSB_SHUB_UCE_SHFT 7
+#define SH_PI_FIRST_ERROR_FSB_SHUB_UCE_MASK 0x0000000000000080
+
+/* SH_PI_FIRST_ERROR_FSB_SHUB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_FIRST_ERROR_FSB_SHUB_CE_SHFT 8
+#define SH_PI_FIRST_ERROR_FSB_SHUB_CE_MASK 0x0000000000000100
+
+/* SH_PI_FIRST_ERROR_MSG_COLOR_ERR */
+/* Description: Message color was wrong */
+#define SH_PI_FIRST_ERROR_MSG_COLOR_ERR_SHFT 9
+#define SH_PI_FIRST_ERROR_MSG_COLOR_ERR_MASK 0x0000000000000200
+
+/* SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW */
+/* Description: MD Request input buffer overflow error */
+#define SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW_SHFT 10
+#define SH_PI_FIRST_ERROR_MD_RQ_Q_OFLOW_MASK 0x0000000000000400
+
+/* SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW */
+/* Description: MD Reply input buffer overflow error */
+#define SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW_SHFT 11
+#define SH_PI_FIRST_ERROR_MD_RP_Q_OFLOW_MASK 0x0000000000000800
+
+/* SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW */
+/* Description: XN Request input buffer overflow error */
+#define SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW_SHFT 12
+#define SH_PI_FIRST_ERROR_XN_RQ_Q_OFLOW_MASK 0x0000000000001000
+
+/* SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW */
+/* Description: XN Reply input buffer overflow error */
+#define SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW_SHFT 13
+#define SH_PI_FIRST_ERROR_XN_RP_Q_OFLOW_MASK 0x0000000000002000
+
+/* SH_PI_FIRST_ERROR_NACK_OFLOW */
+/* Description: NACK overflow error */
+#define SH_PI_FIRST_ERROR_NACK_OFLOW_SHFT 14
+#define SH_PI_FIRST_ERROR_NACK_OFLOW_MASK 0x0000000000004000
+
+/* SH_PI_FIRST_ERROR_GFX_INT_0 */
+/* Description: GFX transfer interrupt for CPU 0 */
+#define SH_PI_FIRST_ERROR_GFX_INT_0_SHFT 15
+#define SH_PI_FIRST_ERROR_GFX_INT_0_MASK 0x0000000000008000
+
+/* SH_PI_FIRST_ERROR_GFX_INT_1 */
+/* Description: GFX transfer interrupt for CPU 1 */
+#define SH_PI_FIRST_ERROR_GFX_INT_1_SHFT 16
+#define SH_PI_FIRST_ERROR_GFX_INT_1_MASK 0x0000000000010000
+
+/* SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW */
+/* Description: MD Request Credit Overflow Error */
+#define SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW_SHFT 17
+#define SH_PI_FIRST_ERROR_MD_RQ_CRD_OFLOW_MASK 0x0000000000020000
+
+/* SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW */
+/* Description: MD Reply Credit Overflow Error */
+#define SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW_SHFT 18
+#define SH_PI_FIRST_ERROR_MD_RP_CRD_OFLOW_MASK 0x0000000000040000
+
+/* SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW */
+/* Description: XN Request Credit Overflow Error */
+#define SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW_SHFT 19
+#define SH_PI_FIRST_ERROR_XN_RQ_CRD_OFLOW_MASK 0x0000000000080000
+
+/* SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW */
+/* Description: XN Reply Credit Overflow Error */
+#define SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW_SHFT 20
+#define SH_PI_FIRST_ERROR_XN_RP_CRD_OFLOW_MASK 0x0000000000100000
+
+/* SH_PI_FIRST_ERROR_HUNG_BUS */
+/* Description: FSB is hung */
+#define SH_PI_FIRST_ERROR_HUNG_BUS_SHFT 21
+#define SH_PI_FIRST_ERROR_HUNG_BUS_MASK 0x0000000000200000
+
+/* SH_PI_FIRST_ERROR_RSP_PARITY */
+/* Description: Parity error detected during response phase */
+#define SH_PI_FIRST_ERROR_RSP_PARITY_SHFT 22
+#define SH_PI_FIRST_ERROR_RSP_PARITY_MASK 0x0000000000400000
+
+/* SH_PI_FIRST_ERROR_IOQ_OVERRUN */
+/* Description: Overrun error detected on IOQ */
+#define SH_PI_FIRST_ERROR_IOQ_OVERRUN_SHFT 23
+#define SH_PI_FIRST_ERROR_IOQ_OVERRUN_MASK 0x0000000000800000
+
+/* SH_PI_FIRST_ERROR_REQ_FORMAT */
+/* Description: FSB request format not supported */
+#define SH_PI_FIRST_ERROR_REQ_FORMAT_SHFT 24
+#define SH_PI_FIRST_ERROR_REQ_FORMAT_MASK 0x0000000001000000
+
+/* SH_PI_FIRST_ERROR_ADDR_ACCESS */
+/* Description: Access to Address is not supported */
+#define SH_PI_FIRST_ERROR_ADDR_ACCESS_SHFT 25
+#define SH_PI_FIRST_ERROR_ADDR_ACCESS_MASK 0x0000000002000000
+
+/* SH_PI_FIRST_ERROR_REQ_PARITY */
+/* Description: Parity error detected during request phase */
+#define SH_PI_FIRST_ERROR_REQ_PARITY_SHFT 26
+#define SH_PI_FIRST_ERROR_REQ_PARITY_MASK 0x0000000004000000
+
+/* SH_PI_FIRST_ERROR_ADDR_PARITY */
+/* Description: Parity error detected on address */
+#define SH_PI_FIRST_ERROR_ADDR_PARITY_SHFT 27
+#define SH_PI_FIRST_ERROR_ADDR_PARITY_MASK 0x0000000008000000
+
+/* SH_PI_FIRST_ERROR_SHUB_FSB_DQE */
+/* Description: SHUB_FSB_DQE */
+#define SH_PI_FIRST_ERROR_SHUB_FSB_DQE_SHFT 28
+#define SH_PI_FIRST_ERROR_SHUB_FSB_DQE_MASK 0x0000000010000000
+
+/* SH_PI_FIRST_ERROR_SHUB_FSB_UCE */
+/* Description: An uncorrectable ECC error was detected */
+#define SH_PI_FIRST_ERROR_SHUB_FSB_UCE_SHFT 29
+#define SH_PI_FIRST_ERROR_SHUB_FSB_UCE_MASK 0x0000000020000000
+
+/* SH_PI_FIRST_ERROR_SHUB_FSB_CE */
+/* Description: A correctable ECC error was detected */
+#define SH_PI_FIRST_ERROR_SHUB_FSB_CE_SHFT 30
+#define SH_PI_FIRST_ERROR_SHUB_FSB_CE_MASK 0x0000000040000000
+
+/* SH_PI_FIRST_ERROR_LIVELOCK */
+/* Description: AFI livelock error was detected */
+#define SH_PI_FIRST_ERROR_LIVELOCK_SHFT 31
+#define SH_PI_FIRST_ERROR_LIVELOCK_MASK 0x0000000080000000
+
+/* SH_PI_FIRST_ERROR_BAD_SNOOP */
+/* Description: AFI bad snoop error was detected */
+#define SH_PI_FIRST_ERROR_BAD_SNOOP_SHFT 32
+#define SH_PI_FIRST_ERROR_BAD_SNOOP_MASK 0x0000000100000000
+
+/* SH_PI_FIRST_ERROR_FSB_TBL_MISS */
+/* Description: AFI FSB request table miss error was detected */
+#define SH_PI_FIRST_ERROR_FSB_TBL_MISS_SHFT 33
+#define SH_PI_FIRST_ERROR_FSB_TBL_MISS_MASK 0x0000000200000000
+
+/* SH_PI_FIRST_ERROR_MSG_LENGTH */
+/* Description: Message length error on received message from SIC */
+#define SH_PI_FIRST_ERROR_MSG_LENGTH_SHFT 34
+#define SH_PI_FIRST_ERROR_MSG_LENGTH_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_PI_FIRST_ERROR_ALIAS" */
+/* PI First Error Alias */
+/* ==================================================================== */
+
+#define SH_PI_FIRST_ERROR_ALIAS 0x0000000120060788
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2MD_REPLY_VC_STATUS" */
+/* PI-to-MD Reply Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_PI2MD_REPLY_VC_STATUS 0x0000000120060900
+#define SH_PI_PI2MD_REPLY_VC_STATUS_MASK 0x000000000000003f
+#define SH_PI_PI2MD_REPLY_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT */
+/* Description: Status of output credits */
+#define SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0
+#define SH_PI_PI2MD_REPLY_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2MD_REQUEST_VC_STATUS" */
+/* PI-to-MD Request Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_PI2MD_REQUEST_VC_STATUS 0x0000000120060980
+#define SH_PI_PI2MD_REQUEST_VC_STATUS_MASK 0x000000000000003f
+#define SH_PI_PI2MD_REQUEST_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT */
+/* Description: Status of output credits */
+#define SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0
+#define SH_PI_PI2MD_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2XN_REPLY_VC_STATUS" */
+/* PI-to-XN Reply Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_PI2XN_REPLY_VC_STATUS 0x0000000120060a00
+#define SH_PI_PI2XN_REPLY_VC_STATUS_MASK 0x000000000000003f
+#define SH_PI_PI2XN_REPLY_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT */
+/* Description: Status of output credits */
+#define SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0
+#define SH_PI_PI2XN_REPLY_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2XN_REQUEST_VC_STATUS" */
+/* PI-to-XN Request Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_PI2XN_REQUEST_VC_STATUS 0x0000000120060a80
+#define SH_PI_PI2XN_REQUEST_VC_STATUS_MASK 0x000000000000003f
+#define SH_PI_PI2XN_REQUEST_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT */
+/* Description: Status of output credits */
+#define SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_SHFT 0
+#define SH_PI_PI2XN_REQUEST_VC_STATUS_OUTPUT_CRD_STAT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_1" */
+/* PI Uncorrected Error Detail 1 */
+/* ==================================================================== */
+
+#define SH_PI_UNCORRECTED_DETAIL_1 0x0000000120060b00
+#define SH_PI_UNCORRECTED_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_PI_UNCORRECTED_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_1_ADDRESS */
+/* Description: Address of Message that logged Uncorrectable Error */
+#define SH_PI_UNCORRECTED_DETAIL_1_ADDRESS_SHFT 0
+#define SH_PI_UNCORRECTED_DETAIL_1_ADDRESS_MASK 0x0000ffffffffffff
+
+/* SH_PI_UNCORRECTED_DETAIL_1_SYNDROME */
+/* Description: Syndrome for double word data with Uncorrectable Error */
+#define SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_SHFT 48
+#define SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_1_DEP */
+/* Description: DEP for Double word in error */
+#define SH_PI_UNCORRECTED_DETAIL_1_DEP_SHFT 56
+#define SH_PI_UNCORRECTED_DETAIL_1_DEP_MASK 0xff00000000000000
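+
+/*
+ * Usage sketch (illustrative only; not part of the original register
+ * definitions): each field above is recovered from a raw register value
+ * by masking with its _MASK and shifting right by its _SHFT, e.g. the
+ * ECC syndrome from SH_PI_UNCORRECTED_DETAIL_1:
+ *
+ *	syndrome = (reg & SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_MASK)
+ *			>> SH_PI_UNCORRECTED_DETAIL_1_SYNDROME_SHFT;
+ */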
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_2" */
+/* PI Uncorrected Error Detail 2 */
+/* ==================================================================== */
+
+#define SH_PI_UNCORRECTED_DETAIL_2 0x0000000120060b80
+#define SH_PI_UNCORRECTED_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_PI_UNCORRECTED_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_2_DATA */
+/* Description: Double word data in error */
+#define SH_PI_UNCORRECTED_DETAIL_2_DATA_SHFT 0
+#define SH_PI_UNCORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_3" */
+/* PI Uncorrected Error Detail 3 */
+/* ==================================================================== */
+
+#define SH_PI_UNCORRECTED_DETAIL_3 0x0000000120060c00
+#define SH_PI_UNCORRECTED_DETAIL_3_MASK 0xffffffffffffffff
+#define SH_PI_UNCORRECTED_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_3_ADDRESS */
+/* Description: Address of Message that logged Uncorrectable Error */
+#define SH_PI_UNCORRECTED_DETAIL_3_ADDRESS_SHFT 0
+#define SH_PI_UNCORRECTED_DETAIL_3_ADDRESS_MASK 0x0000ffffffffffff
+
+/* SH_PI_UNCORRECTED_DETAIL_3_SYNDROME */
+/* Description: Syndrome for double word data with Uncorrectable Error */
+#define SH_PI_UNCORRECTED_DETAIL_3_SYNDROME_SHFT 48
+#define SH_PI_UNCORRECTED_DETAIL_3_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_3_DEP */
+/* Description: DEP for Double word in error */
+#define SH_PI_UNCORRECTED_DETAIL_3_DEP_SHFT 56
+#define SH_PI_UNCORRECTED_DETAIL_3_DEP_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_4" */
+/* PI Uncorrected Error Detail 4 */
+/* ==================================================================== */
+
+#define SH_PI_UNCORRECTED_DETAIL_4 0x0000000120060c80
+#define SH_PI_UNCORRECTED_DETAIL_4_MASK 0xffffffffffffffff
+#define SH_PI_UNCORRECTED_DETAIL_4_INIT 0x0000000000000000
+
+/* SH_PI_UNCORRECTED_DETAIL_4_DATA */
+/* Description: Double word data in error */
+#define SH_PI_UNCORRECTED_DETAIL_4_DATA_SHFT 0
+#define SH_PI_UNCORRECTED_DETAIL_4_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REPLY_VC_STATUS" */
+/* MD-to-PI Reply Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_MD2PI_REPLY_VC_STATUS 0x0000000120060800
+#define SH_PI_MD2PI_REPLY_VC_STATUS_MASK 0x0000000000000fff
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT */
+/* Description: Status of input header credits */
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f
+
+/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT */
+/* Description: Status of data credits */
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0
+
+/* SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT */
+/* Description: Status of MD Reply Input Queue */
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8
+#define SH_PI_MD2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REQUEST_VC_STATUS" */
+/* MD-to-PI Request Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_MD2PI_REQUEST_VC_STATUS 0x0000000120060880
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_MASK 0x0000000000000fff
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT */
+/* Description: Status of input header credits */
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f
+
+/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT */
+/* Description: Status of input data credits */
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0
+
+/* SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT */
+/* Description: Status of MD Request Input Queue */
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8
+#define SH_PI_MD2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REPLY_VC_STATUS" */
+/* XN-to-PI Reply Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_XN2PI_REPLY_VC_STATUS 0x0000000120060d00
+#define SH_PI_XN2PI_REPLY_VC_STATUS_MASK 0x0000000000000fff
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT */
+/* Description: Status of input header credits */
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f
+
+/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT */
+/* Description: Status of input data credits */
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0
+
+/* SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT */
+/* Description: Status of XN Reply Input Queue */
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8
+#define SH_PI_XN2PI_REPLY_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REQUEST_VC_STATUS" */
+/* XN-to-PI Request Virtual Channel Status */
+/* ==================================================================== */
+
+#define SH_PI_XN2PI_REQUEST_VC_STATUS 0x0000000120060d80
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_MASK 0x0000000000000fff
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INIT 0x0000000000000000
+
+/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT */
+/* Description: Status of input header credits */
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_SHFT 0
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_HDR_CRD_STAT_MASK 0x000000000000000f
+
+/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT */
+/* Description: Status of input data credits */
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_SHFT 4
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_DAT_CRD_STAT_MASK 0x00000000000000f0
+
+/* SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT */
+/* Description: Status of XN Request Input Queue */
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_SHFT 8
+#define SH_PI_XN2PI_REQUEST_VC_STATUS_INPUT_QUEUE_STAT_MASK 0x0000000000000f00
+
+/* ==================================================================== */
+/* Register "SH_XNPI_SIC_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNPI_SIC_FLOW 0x0000000150030000
+#define SH_XNPI_SIC_FLOW_MASK 0x9f1f1f1f1f1f9f9f
+#define SH_XNPI_SIC_FLOW_INIT 0x0000080000080000
+
+/* SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNPI_SIC_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000001f
+
+/* SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNPI_SIC_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNPI_SIC_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000001f00
+
+/* SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNPI_SIC_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST_SHFT 16
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_TEST_MASK 0x00000000001f0000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_DYN_MASK 0x000000001f000000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNPI_SIC_FLOW_CREDIT_VC0_CAP_MASK 0x0000001f00000000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST_SHFT 40
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_TEST_MASK 0x00001f0000000000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_DYN_MASK 0x001f000000000000
+
+/* SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNPI_SIC_FLOW_CREDIT_VC2_CAP_MASK 0x1f00000000000000
+
+/* SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT */
+#define SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT_SHFT 63
+#define SH_XNPI_SIC_FLOW_DISABLE_BYPASS_OUT_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_NI0_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNPI_TO_NI0_PORT_FLOW 0x0000000150030010
+#define SH_XNPI_TO_NI0_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNPI_TO_NI0_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNPI_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNPI_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_NI1_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNPI_TO_NI1_PORT_FLOW 0x0000000150030020
+#define SH_XNPI_TO_NI1_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNPI_TO_NI1_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNPI_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNPI_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_IILB_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNPI_TO_IILB_PORT_FLOW 0x0000000150030030
+#define SH_XNPI_TO_IILB_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNPI_TO_IILB_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNPI_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNPI_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_NI0_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO 0x0000000150030040
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNPI_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_NI1_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO 0x0000000150030050
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNPI_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_IILB_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO 0x0000000150030060
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNPI_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_SIC_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNMD_SIC_FLOW 0x0000000150030100
+#define SH_XNMD_SIC_FLOW_MASK 0x9f1f1f1f1f1f9f9f
+#define SH_XNMD_SIC_FLOW_INIT 0x0000090000090000
+
+/* SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNMD_SIC_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000001f
+
+/* SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNMD_SIC_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNMD_SIC_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000001f00
+
+/* SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNMD_SIC_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST_SHFT 16
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_TEST_MASK 0x00000000001f0000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_DYN_MASK 0x000000001f000000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNMD_SIC_FLOW_CREDIT_VC0_CAP_MASK 0x0000001f00000000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST_SHFT 40
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_TEST_MASK 0x00001f0000000000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_DYN_MASK 0x001f000000000000
+
+/* SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNMD_SIC_FLOW_CREDIT_VC2_CAP_MASK 0x1f00000000000000
+
+/* SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT */
+#define SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT_SHFT 63
+#define SH_XNMD_SIC_FLOW_DISABLE_BYPASS_OUT_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_NI0_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNMD_TO_NI0_PORT_FLOW 0x0000000150030110
+#define SH_XNMD_TO_NI0_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNMD_TO_NI0_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNMD_TO_NI0_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNMD_TO_NI0_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_NI1_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNMD_TO_NI1_PORT_FLOW 0x0000000150030120
+#define SH_XNMD_TO_NI1_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNMD_TO_NI1_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNMD_TO_NI1_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNMD_TO_NI1_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_IILB_PORT_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNMD_TO_IILB_PORT_FLOW 0x0000000150030130
+#define SH_XNMD_TO_IILB_PORT_FLOW_MASK 0x3f3f003f3f00bfbf
+#define SH_XNMD_TO_IILB_PORT_FLOW_INIT 0x0000000000000000
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNMD_TO_IILB_PORT_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_DYN_MASK 0x000000003f000000
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC0_CAP_MASK 0x0000003f00000000
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_DYN_MASK 0x003f000000000000
+
+/* SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNMD_TO_IILB_PORT_FLOW_CREDIT_VC2_CAP_MASK 0x3f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_NI0_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO 0x0000000150030140
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNMD_FR_NI0_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_NI1_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO 0x0000000150030150
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNMD_FR_NI1_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_IILB_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO 0x0000000150030160
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_MASK 0x00001f1f3f3f3f3f
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_INIT 0x00000c0c00000000
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN */
+/* Description: vc0 fifo entry dynamic value */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_SHFT 0
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_DYN_MASK 0x000000000000003f
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP */
+/* Description: vc0 fifo entry captured value */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_SHFT 8
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_CAP_MASK 0x0000000000003f00
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN */
+/* Description: vc2 fifo entry dynamic value */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_SHFT 16
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_DYN_MASK 0x00000000003f0000
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP */
+/* Description: vc2 fifo entry captured value */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_SHFT 24
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_CAP_MASK 0x000000003f000000
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST */
+/* Description: vc0 test credits limit */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_SHFT 32
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC0_TEST_MASK 0x0000001f00000000
+
+/* SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST */
+/* Description: vc2 test credits limit */
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_SHFT 40
+#define SH_XNMD_FR_IILB_PORT_FLOW_FIFO_ENTRY_VC2_TEST_MASK 0x00001f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNII_INTRA_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNII_INTRA_FLOW 0x0000000150030200
+#define SH_XNII_INTRA_FLOW_MASK 0x7f7f7f7f7f7fbfbf
+#define SH_XNII_INTRA_FLOW_INIT 0x00003f00003f0000
+
+/* SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNII_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNII_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNII_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNII_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 16
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x00000000007f0000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNII_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 40
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x00007f0000000000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNII_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNLB_INTRA_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNLB_INTRA_FLOW 0x0000000150030210
+#define SH_XNLB_INTRA_FLOW_MASK 0xff7f7f7f7f7fbfbf
+#define SH_XNLB_INTRA_FLOW_INIT 0x0000080000100000
+
+/* SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNLB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 16
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x00000000007f0000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 24
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 32
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 40
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x00007f0000000000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 48
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 56
+#define SH_XNLB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN */
+#define SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN_SHFT 63
+#define SH_XNLB_INTRA_FLOW_DISABLE_BYPASS_IN_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT 0x0000000150030220
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT 0x0000000150030230
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT 0x0000000150030240
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030250
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030260
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT 0x0000000150030270
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT 0x0000000150030280
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030290
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT 0x00000001500302a0
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT 0x00000001500302b0
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030300
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT 0x0000000150030310
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030320
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT 0x0000000150030330
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030340
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT 0x0000000150030350
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_0_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_0_INTRANI_FLOW 0x0000000150030360
+#define SH_XNNI0_0_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI0_0_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI0_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_1_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_1_INTRANI_FLOW 0x0000000150030370
+#define SH_XNNI0_1_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI0_1_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD */
+/* Description: vc1 withhold */
+#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0
+#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED */
+/* Description: Force Credit on VC1 from debit cntr */
+#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7
+#define SH_XNNI0_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_2_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_2_INTRANI_FLOW 0x0000000150030380
+#define SH_XNNI0_2_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI0_2_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0
+#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7
+#define SH_XNNI0_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_3_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_3_INTRANI_FLOW 0x0000000150030390
+#define SH_XNNI0_3_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI0_3_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD */
+/* Description: vc3 withhold */
+#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0
+#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED */
+/* Description: Force Credit on VC3 from debit cntr */
+#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7
+#define SH_XNNI0_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_VCSWITCH_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_VCSWITCH_FLOW 0x00000001500303a0
+#define SH_XNNI0_VCSWITCH_FLOW_MASK 0x0000000701010101
+#define SH_XNNI0_VCSWITCH_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_SHFT 0
+#define SH_XNNI0_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_MASK 0x0000000000000001
+
+/* SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_SHFT 8
+#define SH_XNNI0_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_MASK 0x0000000000000100
+
+/* SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_SHFT 16
+#define SH_XNNI0_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_MASK 0x0000000000010000
+
+/* SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_SHFT 24
+#define SH_XNNI0_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_MASK 0x0000000001000000
+
+/* SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN */
+#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_SHFT 32
+#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_MASK 0x0000000100000000
+
+/* SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT */
+#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_SHFT 33
+#define SH_XNNI0_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_MASK 0x0000000200000000
+
+/* SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES */
+#define SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES_SHFT 34
+#define SH_XNNI0_VCSWITCH_FLOW_ASYNC_FIFOES_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TIMER_REG" */
+/* ==================================================================== */
+
+#define SH_XNNI0_TIMER_REG 0x00000001500303b0
+#define SH_XNNI0_TIMER_REG_MASK 0x0000000100ffffff
+#define SH_XNNI0_TIMER_REG_INIT 0x0000000000ffffff
+
+/* SH_XNNI0_TIMER_REG_TIMEOUT_REG */
+/* Description: Master Timeout Counter */
+#define SH_XNNI0_TIMER_REG_TIMEOUT_REG_SHFT 0
+#define SH_XNNI0_TIMER_REG_TIMEOUT_REG_MASK 0x0000000000ffffff
+
+/* SH_XNNI0_TIMER_REG_LINKCLEANUP_REG */
+/* Description: Link Clean Up */
+#define SH_XNNI0_TIMER_REG_LINKCLEANUP_REG_SHFT 32
+#define SH_XNNI0_TIMER_REG_LINKCLEANUP_REG_MASK 0x0000000100000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FIFO02_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_FIFO02_FLOW 0x00000001500303c0
+#define SH_XNNI0_FIFO02_FLOW_MASK 0x00000f0f0f0f0f0f
+#define SH_XNNI0_FIFO02_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT_SHFT 0
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN_SHFT 8
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP */
+/* Description: captured counter value */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP_SHFT 16
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC0_CAP_MASK 0x00000000000f0000
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT_SHFT 24
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_LIMIT_MASK 0x000000000f000000
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN_SHFT 32
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_DYN_MASK 0x0000000f00000000
+
+/* SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP */
+/* Description: captured counter value */
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP_SHFT 40
+#define SH_XNNI0_FIFO02_FLOW_COUNT_VC2_CAP_MASK 0x00000f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FIFO13_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_FIFO13_FLOW 0x00000001500303d0
+#define SH_XNNI0_FIFO13_FLOW_MASK 0x00000f0f0f0f0f0f
+#define SH_XNNI0_FIFO13_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT_SHFT 0
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN_SHFT 8
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP */
+/* Description: captured counter value */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP_SHFT 16
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC1_CAP_MASK 0x00000000000f0000
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT_SHFT 24
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_LIMIT_MASK 0x000000000f000000
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN_SHFT 32
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_DYN_MASK 0x0000000f00000000
+
+/* SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP */
+/* Description: captured counter value */
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP_SHFT 40
+#define SH_XNNI0_FIFO13_FLOW_COUNT_VC3_CAP_MASK 0x00000f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_NI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_NI_FLOW 0x00000001500303e0
+#define SH_XNNI0_NI_FLOW_MASK 0xff0fff0fff0fff0f
+#define SH_XNNI0_NI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_NI_FLOW_VC0_LIMIT */
+/* Description: vc0 limit reg, zero disables functionality */
+#define SH_XNNI0_NI_FLOW_VC0_LIMIT_SHFT 0
+#define SH_XNNI0_NI_FLOW_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI0_NI_FLOW_VC0_DYN */
+/* Description: vc0 counter dynamic value */
+#define SH_XNNI0_NI_FLOW_VC0_DYN_SHFT 8
+#define SH_XNNI0_NI_FLOW_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI0_NI_FLOW_VC0_CAP */
+/* Description: vc0 counter captured value */
+#define SH_XNNI0_NI_FLOW_VC0_CAP_SHFT 12
+#define SH_XNNI0_NI_FLOW_VC0_CAP_MASK 0x000000000000f000
+
+/* SH_XNNI0_NI_FLOW_VC1_LIMIT */
+/* Description: vc1 limit reg, zero disables functionality */
+#define SH_XNNI0_NI_FLOW_VC1_LIMIT_SHFT 16
+#define SH_XNNI0_NI_FLOW_VC1_LIMIT_MASK 0x00000000000f0000
+
+/* SH_XNNI0_NI_FLOW_VC1_DYN */
+/* Description: vc1 counter dynamic value */
+#define SH_XNNI0_NI_FLOW_VC1_DYN_SHFT 24
+#define SH_XNNI0_NI_FLOW_VC1_DYN_MASK 0x000000000f000000
+
+/* SH_XNNI0_NI_FLOW_VC1_CAP */
+/* Description: vc1 counter captured value */
+#define SH_XNNI0_NI_FLOW_VC1_CAP_SHFT 28
+#define SH_XNNI0_NI_FLOW_VC1_CAP_MASK 0x00000000f0000000
+
+/* SH_XNNI0_NI_FLOW_VC2_LIMIT */
+/* Description: vc2 limit reg, zero disables functionality */
+#define SH_XNNI0_NI_FLOW_VC2_LIMIT_SHFT 32
+#define SH_XNNI0_NI_FLOW_VC2_LIMIT_MASK 0x0000000f00000000
+
+/* SH_XNNI0_NI_FLOW_VC2_DYN */
+/* Description: vc2 counter dynamic value */
+#define SH_XNNI0_NI_FLOW_VC2_DYN_SHFT 40
+#define SH_XNNI0_NI_FLOW_VC2_DYN_MASK 0x00000f0000000000
+
+/* SH_XNNI0_NI_FLOW_VC2_CAP */
+/* Description: vc2 counter captured value */
+#define SH_XNNI0_NI_FLOW_VC2_CAP_SHFT 44
+#define SH_XNNI0_NI_FLOW_VC2_CAP_MASK 0x0000f00000000000
+
+/* SH_XNNI0_NI_FLOW_VC3_LIMIT */
+/* Description: vc3 limit reg, zero disables functionality */
+#define SH_XNNI0_NI_FLOW_VC3_LIMIT_SHFT 48
+#define SH_XNNI0_NI_FLOW_VC3_LIMIT_MASK 0x000f000000000000
+
+/* SH_XNNI0_NI_FLOW_VC3_DYN */
+/* Description: vc3 counter dynamic value */
+#define SH_XNNI0_NI_FLOW_VC3_DYN_SHFT 56
+#define SH_XNNI0_NI_FLOW_VC3_DYN_MASK 0x0f00000000000000
+
+/* SH_XNNI0_NI_FLOW_VC3_CAP */
+/* Description: vc3 counter captured value */
+#define SH_XNNI0_NI_FLOW_VC3_CAP_SHFT 60
+#define SH_XNNI0_NI_FLOW_VC3_CAP_MASK 0xf000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_DEAD_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI0_DEAD_FLOW 0x00000001500303f0
+#define SH_XNNI0_DEAD_FLOW_MASK 0xff0fff0fff0fff0f
+#define SH_XNNI0_DEAD_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI0_DEAD_FLOW_VC0_LIMIT */
+/* Description: vc0 limit reg, zero disables functionality */
+#define SH_XNNI0_DEAD_FLOW_VC0_LIMIT_SHFT 0
+#define SH_XNNI0_DEAD_FLOW_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI0_DEAD_FLOW_VC0_DYN */
+/* Description: vc0 counter dynamic value */
+#define SH_XNNI0_DEAD_FLOW_VC0_DYN_SHFT 8
+#define SH_XNNI0_DEAD_FLOW_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI0_DEAD_FLOW_VC0_CAP */
+/* Description: vc0 counter captured value */
+#define SH_XNNI0_DEAD_FLOW_VC0_CAP_SHFT 12
+#define SH_XNNI0_DEAD_FLOW_VC0_CAP_MASK 0x000000000000f000
+
+/* SH_XNNI0_DEAD_FLOW_VC1_LIMIT */
+/* Description: vc1 limit reg, zero disables functionality */
+#define SH_XNNI0_DEAD_FLOW_VC1_LIMIT_SHFT 16
+#define SH_XNNI0_DEAD_FLOW_VC1_LIMIT_MASK 0x00000000000f0000
+
+/* SH_XNNI0_DEAD_FLOW_VC1_DYN */
+/* Description: vc1 counter dynamic value */
+#define SH_XNNI0_DEAD_FLOW_VC1_DYN_SHFT 24
+#define SH_XNNI0_DEAD_FLOW_VC1_DYN_MASK 0x000000000f000000
+
+/* SH_XNNI0_DEAD_FLOW_VC1_CAP */
+/* Description: vc1 counter captured value */
+#define SH_XNNI0_DEAD_FLOW_VC1_CAP_SHFT 28
+#define SH_XNNI0_DEAD_FLOW_VC1_CAP_MASK 0x00000000f0000000
+
+/* SH_XNNI0_DEAD_FLOW_VC2_LIMIT */
+/* Description: vc2 limit reg, zero disables functionality */
+#define SH_XNNI0_DEAD_FLOW_VC2_LIMIT_SHFT 32
+#define SH_XNNI0_DEAD_FLOW_VC2_LIMIT_MASK 0x0000000f00000000
+
+/* SH_XNNI0_DEAD_FLOW_VC2_DYN */
+/* Description: vc2 counter dynamic value */
+#define SH_XNNI0_DEAD_FLOW_VC2_DYN_SHFT 40
+#define SH_XNNI0_DEAD_FLOW_VC2_DYN_MASK 0x00000f0000000000
+
+/* SH_XNNI0_DEAD_FLOW_VC2_CAP */
+/* Description: vc2 counter captured value */
+#define SH_XNNI0_DEAD_FLOW_VC2_CAP_SHFT 44
+#define SH_XNNI0_DEAD_FLOW_VC2_CAP_MASK 0x0000f00000000000
+
+/* SH_XNNI0_DEAD_FLOW_VC3_LIMIT */
+/* Description: vc3 limit reg, zero disables functionality */
+#define SH_XNNI0_DEAD_FLOW_VC3_LIMIT_SHFT 48
+#define SH_XNNI0_DEAD_FLOW_VC3_LIMIT_MASK 0x000f000000000000
+
+/* SH_XNNI0_DEAD_FLOW_VC3_DYN */
+/* Description: vc3 counter dynamic value */
+#define SH_XNNI0_DEAD_FLOW_VC3_DYN_SHFT 56
+#define SH_XNNI0_DEAD_FLOW_VC3_DYN_MASK 0x0f00000000000000
+
+/* SH_XNNI0_DEAD_FLOW_VC3_CAP */
+/* Description: vc3 counter captured value */
+#define SH_XNNI0_DEAD_FLOW_VC3_CAP_SHFT 60
+#define SH_XNNI0_DEAD_FLOW_VC3_CAP_MASK 0xf000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_INJECT_AGE" */
+/* ==================================================================== */
+
+#define SH_XNNI0_INJECT_AGE 0x0000000150030400
+#define SH_XNNI0_INJECT_AGE_MASK 0x000000000000ffff
+#define SH_XNNI0_INJECT_AGE_INIT 0x0000000000000000
+
+/* SH_XNNI0_INJECT_AGE_REQUEST_INJECT */
+/* Description: Value of AGE field for outgoing requests */
+#define SH_XNNI0_INJECT_AGE_REQUEST_INJECT_SHFT 0
+#define SH_XNNI0_INJECT_AGE_REQUEST_INJECT_MASK 0x00000000000000ff
+
+/* SH_XNNI0_INJECT_AGE_REPLY_INJECT */
+/* Description: Value of AGE field for outgoing replies */
+#define SH_XNNI0_INJECT_AGE_REPLY_INJECT_SHFT 8
+#define SH_XNNI0_INJECT_AGE_REPLY_INJECT_MASK 0x000000000000ff00
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT 0x0000000150030500
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT 0x0000000150030510
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT 0x0000000150030520
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_MASK 0x7f7f007f7f00bfbf
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_INIT 0x0000000000000000
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_SHFT 8
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x0000000000003f00
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 15
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000008000
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN */
+/* Description: vc0 debit dynamic value */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_SHFT 24
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_DYN_MASK 0x000000007f000000
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP */
+/* Description: vc0 debit captured value */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_SHFT 32
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC0_CAP_MASK 0x0000007f00000000
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN */
+/* Description: vc2 debit dynamic value */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_SHFT 48
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_DYN_MASK 0x007f000000000000
+
+/* SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP */
+/* Description: vc2 debit captured value */
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_SHFT 56
+#define SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT_VC2_CAP_MASK 0x7f00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT 0x0000000150030530
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT 0x0000000150030540
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT 0x0000000150030550
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_MASK 0x00007f7f7f7f7f7f
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_INIT 0x000000000c00000c
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST */
+/* Description: vc0 credit_test */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_SHFT 0
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_TEST_MASK 0x000000000000007f
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN */
+/* Description: vc0 credit dynamic value */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_SHFT 8
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_DYN_MASK 0x0000000000007f00
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP */
+/* Description: vc0 credit captured value */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_SHFT 16
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC0_CAP_MASK 0x00000000007f0000
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST */
+/* Description: vc2 credit_test */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_SHFT 24
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_TEST_MASK 0x000000007f000000
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN */
+/* Description: vc2 credit dynamic value */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_SHFT 32
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_DYN_MASK 0x0000007f00000000
+
+/* SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP */
+/* Description: vc2 credit captured value */
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_SHFT 40
+#define SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT_VC2_CAP_MASK 0x00007f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_0_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_0_INTRANI_FLOW 0x0000000150030560
+#define SH_XNNI1_0_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI1_0_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD */
+/* Description: vc0 withhold */
+#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_SHFT 0
+#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED */
+/* Description: Force Credit on VC0 from debit cntr */
+#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_SHFT 7
+#define SH_XNNI1_0_INTRANI_FLOW_DEBIT_VC0_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_1_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_1_INTRANI_FLOW 0x0000000150030570
+#define SH_XNNI1_1_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI1_1_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD */
+/* Description: vc1 withhold */
+#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_SHFT 0
+#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED */
+/* Description: Force Credit on VC1 from debit cntr */
+#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_SHFT 7
+#define SH_XNNI1_1_INTRANI_FLOW_DEBIT_VC1_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_2_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_2_INTRANI_FLOW 0x0000000150030580
+#define SH_XNNI1_2_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI1_2_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD */
+/* Description: vc2 withhold */
+#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_SHFT 0
+#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED */
+/* Description: Force Credit on VC2 from debit cntr */
+#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_SHFT 7
+#define SH_XNNI1_2_INTRANI_FLOW_DEBIT_VC2_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_3_INTRANI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_3_INTRANI_FLOW 0x0000000150030590
+#define SH_XNNI1_3_INTRANI_FLOW_MASK 0x00000000000000bf
+#define SH_XNNI1_3_INTRANI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD */
+/* Description: vc3 withhold */
+#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_SHFT 0
+#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_WITHHOLD_MASK 0x000000000000003f
+
+/* SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED */
+/* Description: Force Credit on VC3 from debit cntr */
+#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_SHFT 7
+#define SH_XNNI1_3_INTRANI_FLOW_DEBIT_VC3_FORCE_CRED_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_VCSWITCH_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_VCSWITCH_FLOW 0x00000001500305a0
+#define SH_XNNI1_VCSWITCH_FLOW_MASK 0x0000000701010101
+#define SH_XNNI1_VCSWITCH_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_SHFT 0
+#define SH_XNNI1_VCSWITCH_FLOW_NI_VCFIFO_DATELINE_SWITCH_MASK 0x0000000000000001
+
+/* SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_SHFT 8
+#define SH_XNNI1_VCSWITCH_FLOW_PI_VCFIFO_SWITCH_MASK 0x0000000000000100
+
+/* SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_SHFT 16
+#define SH_XNNI1_VCSWITCH_FLOW_MD_VCFIFO_SWITCH_MASK 0x0000000000010000
+
+/* SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH */
+/* Description: Swap VC0/2 with VC1/3 */
+#define SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_SHFT 24
+#define SH_XNNI1_VCSWITCH_FLOW_IILB_VCFIFO_SWITCH_MASK 0x0000000001000000
+
+/* SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN */
+#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_SHFT 32
+#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_IN_MASK 0x0000000100000000
+
+/* SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT */
+#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_SHFT 33
+#define SH_XNNI1_VCSWITCH_FLOW_DISABLE_SYNC_BYPASS_OUT_MASK 0x0000000200000000
+
+/* SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES */
+#define SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES_SHFT 34
+#define SH_XNNI1_VCSWITCH_FLOW_ASYNC_FIFOES_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TIMER_REG" */
+/* ==================================================================== */
+
+#define SH_XNNI1_TIMER_REG 0x00000001500305b0
+#define SH_XNNI1_TIMER_REG_MASK 0x0000000100ffffff
+#define SH_XNNI1_TIMER_REG_INIT 0x0000000000ffffff
+
+/* SH_XNNI1_TIMER_REG_TIMEOUT_REG */
+/* Description: Master Timeout Counter */
+#define SH_XNNI1_TIMER_REG_TIMEOUT_REG_SHFT 0
+#define SH_XNNI1_TIMER_REG_TIMEOUT_REG_MASK 0x0000000000ffffff
+
+/* SH_XNNI1_TIMER_REG_LINKCLEANUP_REG */
+/* Description: Link Clean Up */
+#define SH_XNNI1_TIMER_REG_LINKCLEANUP_REG_SHFT 32
+#define SH_XNNI1_TIMER_REG_LINKCLEANUP_REG_MASK 0x0000000100000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FIFO02_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_FIFO02_FLOW 0x00000001500305c0
+#define SH_XNNI1_FIFO02_FLOW_MASK 0x00000f0f0f0f0f0f
+#define SH_XNNI1_FIFO02_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT_SHFT 0
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN */
+/* Description: dynamic counter value */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN_SHFT 8
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP */
+/* Description: captured counter value */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP_SHFT 16
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC0_CAP_MASK 0x00000000000f0000
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT_SHFT 24
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_LIMIT_MASK 0x000000000f000000
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN_SHFT 32
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_DYN_MASK 0x0000000f00000000
+
+/* SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP */
+/* Description: captured counter value */
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP_SHFT 40
+#define SH_XNNI1_FIFO02_FLOW_COUNT_VC2_CAP_MASK 0x00000f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FIFO13_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_FIFO13_FLOW 0x00000001500305d0
+#define SH_XNNI1_FIFO13_FLOW_MASK 0x00000f0f0f0f0f0f
+#define SH_XNNI1_FIFO13_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT_SHFT 0
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN */
+/* Description: dynamic counter value */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN_SHFT 8
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP */
+/* Description: captured counter value */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP_SHFT 16
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC1_CAP_MASK 0x00000000000f0000
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT */
+/* Description: limit reg, zero disables functionality */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT_SHFT 24
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_LIMIT_MASK 0x000000000f000000
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN */
+/* Description: counter dynamic value */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN_SHFT 32
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_DYN_MASK 0x0000000f00000000
+
+/* SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP */
+/* Description: captured counter value */
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP_SHFT 40
+#define SH_XNNI1_FIFO13_FLOW_COUNT_VC3_CAP_MASK 0x00000f0000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_NI_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_NI_FLOW 0x00000001500305e0
+#define SH_XNNI1_NI_FLOW_MASK 0xff0fff0fff0fff0f
+#define SH_XNNI1_NI_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_NI_FLOW_VC0_LIMIT */
+/* Description: vc0 limit reg, zero disables functionality */
+#define SH_XNNI1_NI_FLOW_VC0_LIMIT_SHFT 0
+#define SH_XNNI1_NI_FLOW_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI1_NI_FLOW_VC0_DYN */
+/* Description: vc0 counter dynamic value */
+#define SH_XNNI1_NI_FLOW_VC0_DYN_SHFT 8
+#define SH_XNNI1_NI_FLOW_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI1_NI_FLOW_VC0_CAP */
+/* Description: vc0 counter captured value */
+#define SH_XNNI1_NI_FLOW_VC0_CAP_SHFT 12
+#define SH_XNNI1_NI_FLOW_VC0_CAP_MASK 0x000000000000f000
+
+/* SH_XNNI1_NI_FLOW_VC1_LIMIT */
+/* Description: vc1 limit reg, zero disables functionality */
+#define SH_XNNI1_NI_FLOW_VC1_LIMIT_SHFT 16
+#define SH_XNNI1_NI_FLOW_VC1_LIMIT_MASK 0x00000000000f0000
+
+/* SH_XNNI1_NI_FLOW_VC1_DYN */
+/* Description: vc1 counter dynamic value */
+#define SH_XNNI1_NI_FLOW_VC1_DYN_SHFT 24
+#define SH_XNNI1_NI_FLOW_VC1_DYN_MASK 0x000000000f000000
+
+/* SH_XNNI1_NI_FLOW_VC1_CAP */
+/* Description: vc1 counter captured value */
+#define SH_XNNI1_NI_FLOW_VC1_CAP_SHFT 28
+#define SH_XNNI1_NI_FLOW_VC1_CAP_MASK 0x00000000f0000000
+
+/* SH_XNNI1_NI_FLOW_VC2_LIMIT */
+/* Description: vc2 limit reg, zero disables functionality */
+#define SH_XNNI1_NI_FLOW_VC2_LIMIT_SHFT 32
+#define SH_XNNI1_NI_FLOW_VC2_LIMIT_MASK 0x0000000f00000000
+
+/* SH_XNNI1_NI_FLOW_VC2_DYN */
+/* Description: vc2 counter dynamic value */
+#define SH_XNNI1_NI_FLOW_VC2_DYN_SHFT 40
+#define SH_XNNI1_NI_FLOW_VC2_DYN_MASK 0x00000f0000000000
+
+/* SH_XNNI1_NI_FLOW_VC2_CAP */
+/* Description: vc2 counter captured value */
+#define SH_XNNI1_NI_FLOW_VC2_CAP_SHFT 44
+#define SH_XNNI1_NI_FLOW_VC2_CAP_MASK 0x0000f00000000000
+
+/* SH_XNNI1_NI_FLOW_VC3_LIMIT */
+/* Description: vc3 limit reg, zero disables functionality */
+#define SH_XNNI1_NI_FLOW_VC3_LIMIT_SHFT 48
+#define SH_XNNI1_NI_FLOW_VC3_LIMIT_MASK 0x000f000000000000
+
+/* SH_XNNI1_NI_FLOW_VC3_DYN */
+/* Description: vc3 counter dynamic value */
+#define SH_XNNI1_NI_FLOW_VC3_DYN_SHFT 56
+#define SH_XNNI1_NI_FLOW_VC3_DYN_MASK 0x0f00000000000000
+
+/* SH_XNNI1_NI_FLOW_VC3_CAP */
+/* Description: vc3 counter captured value */
+#define SH_XNNI1_NI_FLOW_VC3_CAP_SHFT 60
+#define SH_XNNI1_NI_FLOW_VC3_CAP_MASK 0xf000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_DEAD_FLOW" */
+/* ==================================================================== */
+
+#define SH_XNNI1_DEAD_FLOW 0x00000001500305f0
+#define SH_XNNI1_DEAD_FLOW_MASK 0xff0fff0fff0fff0f
+#define SH_XNNI1_DEAD_FLOW_INIT 0x0000000000000000
+
+/* SH_XNNI1_DEAD_FLOW_VC0_LIMIT */
+/* Description: vc0 limit reg, zero disables functionality */
+#define SH_XNNI1_DEAD_FLOW_VC0_LIMIT_SHFT 0
+#define SH_XNNI1_DEAD_FLOW_VC0_LIMIT_MASK 0x000000000000000f
+
+/* SH_XNNI1_DEAD_FLOW_VC0_DYN */
+/* Description: vc0 counter dynamic value */
+#define SH_XNNI1_DEAD_FLOW_VC0_DYN_SHFT 8
+#define SH_XNNI1_DEAD_FLOW_VC0_DYN_MASK 0x0000000000000f00
+
+/* SH_XNNI1_DEAD_FLOW_VC0_CAP */
+/* Description: vc0 counter captured value */
+#define SH_XNNI1_DEAD_FLOW_VC0_CAP_SHFT 12
+#define SH_XNNI1_DEAD_FLOW_VC0_CAP_MASK 0x000000000000f000
+
+/* SH_XNNI1_DEAD_FLOW_VC1_LIMIT */
+/* Description: vc1 limit reg, zero disables functionality */
+#define SH_XNNI1_DEAD_FLOW_VC1_LIMIT_SHFT 16
+#define SH_XNNI1_DEAD_FLOW_VC1_LIMIT_MASK 0x00000000000f0000
+
+/* SH_XNNI1_DEAD_FLOW_VC1_DYN */
+/* Description: vc1 counter dynamic value */
+#define SH_XNNI1_DEAD_FLOW_VC1_DYN_SHFT 24
+#define SH_XNNI1_DEAD_FLOW_VC1_DYN_MASK 0x000000000f000000
+
+/* SH_XNNI1_DEAD_FLOW_VC1_CAP */
+/* Description: vc1 counter captured value */
+#define SH_XNNI1_DEAD_FLOW_VC1_CAP_SHFT 28
+#define SH_XNNI1_DEAD_FLOW_VC1_CAP_MASK 0x00000000f0000000
+
+/* SH_XNNI1_DEAD_FLOW_VC2_LIMIT */
+/* Description: vc2 limit reg, zero disables functionality */
+#define SH_XNNI1_DEAD_FLOW_VC2_LIMIT_SHFT 32
+#define SH_XNNI1_DEAD_FLOW_VC2_LIMIT_MASK 0x0000000f00000000
+
+/* SH_XNNI1_DEAD_FLOW_VC2_DYN */
+/* Description: vc2 counter dynamic value */
+#define SH_XNNI1_DEAD_FLOW_VC2_DYN_SHFT 40
+#define SH_XNNI1_DEAD_FLOW_VC2_DYN_MASK 0x00000f0000000000
+
+/* SH_XNNI1_DEAD_FLOW_VC2_CAP */
+/* Description: vc2 counter captured value */
+#define SH_XNNI1_DEAD_FLOW_VC2_CAP_SHFT 44
+#define SH_XNNI1_DEAD_FLOW_VC2_CAP_MASK 0x0000f00000000000
+
+/* SH_XNNI1_DEAD_FLOW_VC3_LIMIT */
+/* Description: vc3 limit reg, zero disables functionality */
+#define SH_XNNI1_DEAD_FLOW_VC3_LIMIT_SHFT 48
+#define SH_XNNI1_DEAD_FLOW_VC3_LIMIT_MASK 0x000f000000000000
+
+/* SH_XNNI1_DEAD_FLOW_VC3_DYN */
+/* Description: vc3 counter dynamic value */
+#define SH_XNNI1_DEAD_FLOW_VC3_DYN_SHFT 56
+#define SH_XNNI1_DEAD_FLOW_VC3_DYN_MASK 0x0f00000000000000
+
+/* SH_XNNI1_DEAD_FLOW_VC3_CAP */
+/* Description: vc3 counter captured value */
+#define SH_XNNI1_DEAD_FLOW_VC3_CAP_SHFT 60
+#define SH_XNNI1_DEAD_FLOW_VC3_CAP_MASK 0xf000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_INJECT_AGE" */
+/* ==================================================================== */
+
+#define SH_XNNI1_INJECT_AGE 0x0000000150030600
+#define SH_XNNI1_INJECT_AGE_MASK 0x000000000000ffff
+#define SH_XNNI1_INJECT_AGE_INIT 0x0000000000000000
+
+/* SH_XNNI1_INJECT_AGE_REQUEST_INJECT */
+/* Description: Value of AGE field for outgoing requests */
+#define SH_XNNI1_INJECT_AGE_REQUEST_INJECT_SHFT 0
+#define SH_XNNI1_INJECT_AGE_REQUEST_INJECT_MASK 0x00000000000000ff
+
+/* SH_XNNI1_INJECT_AGE_REPLY_INJECT */
+/* Description: Value of AGE field for outgoing replies */
+#define SH_XNNI1_INJECT_AGE_REPLY_INJECT_SHFT 8
+#define SH_XNNI1_INJECT_AGE_REPLY_INJECT_MASK 0x000000000000ff00
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_SEL" */
+/* XN Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_DEBUG_SEL 0x0000000150031000
+#define SH_XN_DEBUG_SEL_MASK 0xf777777777777777
+#define SH_XN_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL */
+/* Description: Nibble 0 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL_SHFT 0
+#define SH_XN_DEBUG_SEL_NIBBLE0_RLM_SEL_MASK 0x0000000000000007
+
+/* SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL */
+/* Description: Nibble 1 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL_SHFT 8
+#define SH_XN_DEBUG_SEL_NIBBLE1_RLM_SEL_MASK 0x0000000000000700
+
+/* SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL */
+/* Description: Nibble 2 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL_SHFT 16
+#define SH_XN_DEBUG_SEL_NIBBLE2_RLM_SEL_MASK 0x0000000000070000
+
+/* SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL */
+/* Description: Nibble 3 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL_SHFT 24
+#define SH_XN_DEBUG_SEL_NIBBLE3_RLM_SEL_MASK 0x0000000007000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL */
+/* Description: Nibble 4 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL_SHFT 32
+#define SH_XN_DEBUG_SEL_NIBBLE4_RLM_SEL_MASK 0x0000000700000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL */
+/* Description: Nibble 5 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL_SHFT 40
+#define SH_XN_DEBUG_SEL_NIBBLE5_RLM_SEL_MASK 0x0000070000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL */
+/* Description: Nibble 6 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL_SHFT 48
+#define SH_XN_DEBUG_SEL_NIBBLE6_RLM_SEL_MASK 0x0007000000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL */
+/* Description: Nibble 7 RLM select */
+#define SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL_SHFT 56
+#define SH_XN_DEBUG_SEL_NIBBLE7_RLM_SEL_MASK 0x0700000000000000
+
+/* SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* SH_XN_DEBUG_SEL_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_XN_DEBUG_SEL_TRIGGER_ENABLE_SHFT 63
+#define SH_XN_DEBUG_SEL_TRIGGER_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_TRIG_SEL" */
+/* XN Debug trigger Select */
+/* ==================================================================== */
+
+#define SH_XN_DEBUG_TRIG_SEL 0x0000000150031020
+#define SH_XN_DEBUG_TRIG_SEL_MASK 0x7777777777777777
+#define SH_XN_DEBUG_TRIG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL */
+/* Description: Trigger 0 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL_SHFT 0
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_RLM_SEL_MASK 0x0000000000000007
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL */
+/* Description: Trigger 0 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_SHFT 4
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL */
+/* Description: Trigger 1 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL_SHFT 8
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_RLM_SEL_MASK 0x0000000000000700
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL */
+/* Description: Trigger 1 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_SHFT 12
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL */
+/* Description: Trigger 2 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL_SHFT 16
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_RLM_SEL_MASK 0x0000000000070000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL */
+/* Description: Trigger 2 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_SHFT 20
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL */
+/* Description: Trigger 3 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL_SHFT 24
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_RLM_SEL_MASK 0x0000000007000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL */
+/* Description: Trigger 3 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_SHFT 28
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL */
+/* Description: Trigger 4 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL_SHFT 32
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_RLM_SEL_MASK 0x0000000700000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL */
+/* Description: Trigger 4 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_SHFT 36
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL */
+/* Description: Trigger 5 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL_SHFT 40
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_RLM_SEL_MASK 0x0000070000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL */
+/* Description: Trigger 5 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_SHFT 44
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL */
+/* Description: Trigger 6 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL_SHFT 48
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_RLM_SEL_MASK 0x0007000000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL */
+/* Description: Trigger 6 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_SHFT 52
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL */
+/* Description: Trigger 7 RLM select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL_SHFT 56
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_RLM_SEL_MASK 0x0700000000000000
+
+/* SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL */
+/* Description: Trigger 7 Nibble select */
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_SHFT 60
+#define SH_XN_DEBUG_TRIG_SEL_TRIGGER7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_TRIGGER_COMPARE" */
+/* XN Debug Compare */
+/* ==================================================================== */
+
+#define SH_XN_TRIGGER_COMPARE 0x0000000150031040
+#define SH_XN_TRIGGER_COMPARE_MASK 0x00000000ffffffff
+#define SH_XN_TRIGGER_COMPARE_INIT 0x0000000000000000
+
+/* SH_XN_TRIGGER_COMPARE_MASK */
+/* Description: Mask to select Debug bits for trigger generation */
+#define SH_XN_TRIGGER_COMPARE_MASK_SHFT 0
+#define SH_XN_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_TRIGGER_DATA" */
+/* XN Debug Compare Data */
+/* ==================================================================== */
+
+#define SH_XN_TRIGGER_DATA 0x0000000150031050
+#define SH_XN_TRIGGER_DATA_MASK 0x00000000ffffffff
+#define SH_XN_TRIGGER_DATA_INIT 0x00000000ffffffff
+
+/* SH_XN_TRIGGER_DATA_COMPARE_PATTERN */
+/* Description: debug bit pattern for trigger generation */
+#define SH_XN_TRIGGER_DATA_COMPARE_PATTERN_SHFT 0
+#define SH_XN_TRIGGER_DATA_COMPARE_PATTERN_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_DEBUG_SEL" */
+/* XN IILB Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_IILB_DEBUG_SEL 0x0000000150031060
+#define SH_XN_IILB_DEBUG_SEL_MASK 0x7777777777777777
+#define SH_XN_IILB_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_IILB_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_DEBUG_SEL" */
+/* XN PI Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_PI_DEBUG_SEL 0x00000001500310a0
+#define SH_XN_PI_DEBUG_SEL_MASK 0x7777777777777777
+#define SH_XN_PI_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_XN_PI_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_PI_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_XN_PI_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_PI_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_XN_PI_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_PI_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_XN_PI_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_PI_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_XN_PI_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_PI_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_XN_PI_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_PI_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_XN_PI_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_PI_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_XN_PI_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_PI_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_DEBUG_SEL" */
+/* XN MD Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_MD_DEBUG_SEL 0x0000000150031080
+#define SH_XN_MD_DEBUG_SEL_MASK 0x7777777777777777
+#define SH_XN_MD_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_XN_MD_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_MD_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_XN_MD_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_MD_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_XN_MD_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_MD_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_XN_MD_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_MD_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_XN_MD_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_MD_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_XN_MD_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_MD_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_XN_MD_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_MD_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_XN_MD_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_MD_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_DEBUG_SEL" */
+/* XN NI0 Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_NI0_DEBUG_SEL 0x00000001500310c0
+#define SH_XN_NI0_DEBUG_SEL_MASK 0x7777777777777777
+#define SH_XN_NI0_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_NI0_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_DEBUG_SEL" */
+/* XN NI1 Debug Port Select */
+/* ==================================================================== */
+
+#define SH_XN_NI1_DEBUG_SEL 0x00000001500310e0
+#define SH_XN_NI1_DEBUG_SEL_MASK 0x7777777777777777
+#define SH_XN_NI1_DEBUG_SEL_INIT 0x0000000000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_XN_NI1_DEBUG_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_EXP_DATA0" */
+/* IILB compare LB input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_LB_CMP_EXP_DATA0 0x0000000150031100
+#define SH_XN_IILB_LB_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_LB_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_LB_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_LB_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_LB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_EXP_DATA1" */
+/* IILB compare LB input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_LB_CMP_EXP_DATA1 0x0000000150031110
+#define SH_XN_IILB_LB_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_LB_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_LB_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_LB_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_LB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_ENABLE0" */
+/* IILB compare LB input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_LB_CMP_ENABLE0 0x0000000150031120
+#define SH_XN_IILB_LB_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_LB_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_LB_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_LB_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_LB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_ENABLE1" */
+/* IILB compare LB input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_LB_CMP_ENABLE1 0x0000000150031130
+#define SH_XN_IILB_LB_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_LB_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_LB_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_LB_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_LB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_EXP_DATA0" */
+/* IILB compare II input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_II_CMP_EXP_DATA0 0x0000000150031140
+#define SH_XN_IILB_II_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_II_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_II_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_II_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_II_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_EXP_DATA1" */
+/* IILB compare II input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_II_CMP_EXP_DATA1 0x0000000150031150
+#define SH_XN_IILB_II_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_II_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_II_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_II_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_II_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_ENABLE0" */
+/* IILB compare II input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_II_CMP_ENABLE0 0x0000000150031160
+#define SH_XN_IILB_II_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_II_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_II_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_II_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_II_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_ENABLE1" */
+/* IILB compare II input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_II_CMP_ENABLE1 0x0000000150031170
+#define SH_XN_IILB_II_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_II_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_II_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_II_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_II_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_EXP_DATA0" */
+/* IILB compare MD input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_MD_CMP_EXP_DATA0 0x0000000150031180
+#define SH_XN_IILB_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_MD_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_MD_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_MD_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_EXP_DATA1" */
+/* IILB compare MD input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_MD_CMP_EXP_DATA1 0x0000000150031190
+#define SH_XN_IILB_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_MD_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_MD_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_MD_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_ENABLE0" */
+/* IILB compare MD input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_MD_CMP_ENABLE0 0x00000001500311a0
+#define SH_XN_IILB_MD_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_MD_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_MD_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_MD_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_ENABLE1" */
+/* IILB compare MD input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_MD_CMP_ENABLE1 0x00000001500311b0
+#define SH_XN_IILB_MD_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_MD_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_MD_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_MD_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_EXP_DATA0" */
+/* IILB compare PI input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_PI_CMP_EXP_DATA0 0x00000001500311c0
+#define SH_XN_IILB_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_PI_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_PI_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_PI_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_EXP_DATA1" */
+/* IILB compare PI input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_PI_CMP_EXP_DATA1 0x00000001500311d0
+#define SH_XN_IILB_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_PI_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_PI_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_PI_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_ENABLE0" */
+/* IILB compare PI input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_PI_CMP_ENABLE0 0x00000001500311e0
+#define SH_XN_IILB_PI_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_PI_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_PI_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_PI_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_ENABLE1" */
+/* IILB compare PI input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_PI_CMP_ENABLE1 0x00000001500311f0
+#define SH_XN_IILB_PI_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_PI_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_PI_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_PI_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA0" */
+/* IILB compare NI0 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI0_CMP_EXP_DATA0 0x0000000150031200
+#define SH_XN_IILB_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA1" */
+/* IILB compare NI0 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI0_CMP_EXP_DATA1 0x0000000150031210
+#define SH_XN_IILB_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_ENABLE0" */
+/* IILB compare NI0 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI0_CMP_ENABLE0 0x0000000150031220
+#define SH_XN_IILB_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI0_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_ENABLE1" */
+/* IILB compare NI0 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI0_CMP_ENABLE1 0x0000000150031230
+#define SH_XN_IILB_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI0_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA0" */
+/* IILB compare NI1 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI1_CMP_EXP_DATA0 0x0000000150031240
+#define SH_XN_IILB_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_IILB_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA1" */
+/* IILB compare NI1 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI1_CMP_EXP_DATA1 0x0000000150031250
+#define SH_XN_IILB_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_IILB_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_ENABLE0" */
+/* IILB compare NI1 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI1_CMP_ENABLE0 0x0000000150031260
+#define SH_XN_IILB_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI1_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_IILB_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_ENABLE1" */
+/* IILB compare NI1 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_IILB_NI1_CMP_ENABLE1 0x0000000150031270
+#define SH_XN_IILB_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_IILB_NI1_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_IILB_NI1_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_EXP_DATA0" */
+/* MD compare IILB input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_IILB_CMP_EXP_DATA0 0x0000000150031500
+#define SH_XN_MD_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_MD_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_MD_IILB_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_MD_IILB_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_MD_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_EXP_DATA1" */
+/* MD compare IILB input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_IILB_CMP_EXP_DATA1 0x0000000150031510
+#define SH_XN_MD_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_MD_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_MD_IILB_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_MD_IILB_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_MD_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_ENABLE0" */
+/* MD compare IILB input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_IILB_CMP_ENABLE0 0x0000000150031520
+#define SH_XN_MD_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_MD_IILB_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_MD_IILB_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_MD_IILB_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_MD_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_ENABLE1" */
+/* MD compare IILB input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_IILB_CMP_ENABLE1 0x0000000150031530
+#define SH_XN_MD_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_MD_IILB_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_MD_IILB_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_MD_IILB_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_MD_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_EXP_DATA0" */
+/* MD compare NI0 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI0_CMP_EXP_DATA0 0x0000000150031540
+#define SH_XN_MD_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI0_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_MD_NI0_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_MD_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_EXP_DATA1" */
+/* MD compare NI0 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI0_CMP_EXP_DATA1 0x0000000150031550
+#define SH_XN_MD_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI0_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_MD_NI0_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_MD_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_ENABLE0" */
+/* MD compare NI0 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI0_CMP_ENABLE0 0x0000000150031560
+#define SH_XN_MD_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI0_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI0_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_MD_NI0_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_MD_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_ENABLE1" */
+/* MD compare NI0 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI0_CMP_ENABLE1 0x0000000150031570
+#define SH_XN_MD_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI0_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI0_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_MD_NI0_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_MD_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_EXP_DATA0" */
+/* MD compare NI1 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI1_CMP_EXP_DATA0 0x0000000150031580
+#define SH_XN_MD_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI1_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_MD_NI1_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_MD_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_EXP_DATA1" */
+/* MD compare NI1 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI1_CMP_EXP_DATA1 0x0000000150031590
+#define SH_XN_MD_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI1_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_MD_NI1_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_MD_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_ENABLE0" */
+/* MD compare NI1 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI1_CMP_ENABLE0 0x00000001500315a0
+#define SH_XN_MD_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI1_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI1_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_MD_NI1_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_MD_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_ENABLE1" */
+/* MD compare NI1 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_NI1_CMP_ENABLE1 0x00000001500315b0
+#define SH_XN_MD_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_MD_NI1_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_MD_NI1_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_MD_NI1_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_MD_NI1_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_EXP_HDR0" */
+/* MD compare SIC input expected header0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_EXP_HDR0 0x00000001500315c0
+#define SH_XN_MD_SIC_CMP_EXP_HDR0_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_EXP_HDR0_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_EXP_HDR0_DATA */
+/* Description: Expected header 0 */
+#define SH_XN_MD_SIC_CMP_EXP_HDR0_DATA_SHFT 0
+#define SH_XN_MD_SIC_CMP_EXP_HDR0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_EXP_HDR1" */
+/* MD compare SIC input expected header1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_EXP_HDR1 0x00000001500315d0
+#define SH_XN_MD_SIC_CMP_EXP_HDR1_MASK 0x000003ffffffffff
+#define SH_XN_MD_SIC_CMP_EXP_HDR1_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_EXP_HDR1_DATA */
+/* Description: Expected header 1 */
+#define SH_XN_MD_SIC_CMP_EXP_HDR1_DATA_SHFT 0
+#define SH_XN_MD_SIC_CMP_EXP_HDR1_DATA_MASK 0x000003ffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE0" */
+/* MD compare SIC header enable0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE0 0x00000001500315e0
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE1" */
+/* MD compare SIC header enable1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE1 0x00000001500315f0
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_MASK 0x000003ffffffffff
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_MD_SIC_CMP_HDR_ENABLE1_ENABLE_MASK 0x000003ffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA0" */
+/* MD compare SIC data0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA0 0x0000000150031600
+#define SH_XN_MD_SIC_CMP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA0_DATA0 */
+/* Description: Data0 */
+#define SH_XN_MD_SIC_CMP_DATA0_DATA0_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA0_DATA0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA1" */
+/* MD compare SIC data1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA1 0x0000000150031610
+#define SH_XN_MD_SIC_CMP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA1_DATA1 */
+/* Description: Data1 */
+#define SH_XN_MD_SIC_CMP_DATA1_DATA1_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA1_DATA1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA2" */
+/* MD compare SIC data2 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA2 0x0000000150031620
+#define SH_XN_MD_SIC_CMP_DATA2_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA2_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA2_DATA2 */
+/* Description: Data2 */
+#define SH_XN_MD_SIC_CMP_DATA2_DATA2_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA2_DATA2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA3" */
+/* MD compare SIC data3 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA3 0x0000000150031630
+#define SH_XN_MD_SIC_CMP_DATA3_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA3_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA3_DATA3 */
+/* Description: Data3 */
+#define SH_XN_MD_SIC_CMP_DATA3_DATA3_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA3_DATA3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE0" */
+/* MD enable compare SIC data0 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE0 0x0000000150031640
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0 */
+/* Description: Enable for data0 */
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE1" */
+/* MD enable compare SIC data1 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE1 0x0000000150031650
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1 */
+/* Description: Enable for data1 */
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE2" */
+/* MD enable compare SIC data2 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE2 0x0000000150031660
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2 */
+/* Description: Enable for data2 */
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE3" */
+/* MD enable compare SIC data3 */
+/* ==================================================================== */
+
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE3 0x0000000150031670
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_MASK 0xffffffffffffffff
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_INIT 0x0000000000000000
+
+/* SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3 */
+/* Description: Enable for data3 */
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_SHFT 0
+#define SH_XN_MD_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_EXP_DATA0" */
+/* PI compare IILB input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_IILB_CMP_EXP_DATA0 0x0000000150031300
+#define SH_XN_PI_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_PI_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_PI_IILB_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_PI_IILB_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_PI_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_EXP_DATA1" */
+/* PI compare IILB input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_IILB_CMP_EXP_DATA1 0x0000000150031310
+#define SH_XN_PI_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_PI_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_PI_IILB_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_PI_IILB_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_PI_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_ENABLE0" */
+/* PI compare IILB input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_IILB_CMP_ENABLE0 0x0000000150031320
+#define SH_XN_PI_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_PI_IILB_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_PI_IILB_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_PI_IILB_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_PI_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_ENABLE1" */
+/* PI compare IILB input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_IILB_CMP_ENABLE1 0x0000000150031330
+#define SH_XN_PI_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_PI_IILB_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_PI_IILB_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_PI_IILB_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_PI_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_EXP_DATA0" */
+/* PI compare NI0 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI0_CMP_EXP_DATA0 0x0000000150031340
+#define SH_XN_PI_NI0_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI0_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI0_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_PI_NI0_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_PI_NI0_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_EXP_DATA1" */
+/* PI compare NI0 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI0_CMP_EXP_DATA1 0x0000000150031350
+#define SH_XN_PI_NI0_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI0_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI0_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_PI_NI0_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_PI_NI0_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_ENABLE0" */
+/* PI compare NI0 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI0_CMP_ENABLE0 0x0000000150031360
+#define SH_XN_PI_NI0_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI0_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI0_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_PI_NI0_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_PI_NI0_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_ENABLE1" */
+/* PI compare NI0 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI0_CMP_ENABLE1 0x0000000150031370
+#define SH_XN_PI_NI0_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI0_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI0_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_PI_NI0_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_PI_NI0_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_EXP_DATA0" */
+/* PI compare NI1 input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI1_CMP_EXP_DATA0 0x0000000150031380
+#define SH_XN_PI_NI1_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI1_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI1_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_PI_NI1_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_PI_NI1_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_EXP_DATA1" */
+/* PI compare NI1 input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI1_CMP_EXP_DATA1 0x0000000150031390
+#define SH_XN_PI_NI1_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI1_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI1_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_PI_NI1_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_PI_NI1_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_ENABLE0" */
+/* PI compare NI1 input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI1_CMP_ENABLE0 0x00000001500313a0
+#define SH_XN_PI_NI1_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI1_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI1_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_PI_NI1_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_PI_NI1_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_ENABLE1" */
+/* PI compare NI1 input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_NI1_CMP_ENABLE1 0x00000001500313b0
+#define SH_XN_PI_NI1_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_PI_NI1_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_PI_NI1_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_PI_NI1_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_PI_NI1_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_EXP_HDR0" */
+/* PI compare SIC input expected header0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_EXP_HDR0 0x00000001500313c0
+#define SH_XN_PI_SIC_CMP_EXP_HDR0_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_EXP_HDR0_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_EXP_HDR0_DATA */
+/* Description: Expected header 0 */
+#define SH_XN_PI_SIC_CMP_EXP_HDR0_DATA_SHFT 0
+#define SH_XN_PI_SIC_CMP_EXP_HDR0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_EXP_HDR1" */
+/* PI compare SIC input expected header1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_EXP_HDR1 0x00000001500313d0
+#define SH_XN_PI_SIC_CMP_EXP_HDR1_MASK 0x000003ffffffffff
+#define SH_XN_PI_SIC_CMP_EXP_HDR1_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_EXP_HDR1_DATA */
+/* Description: Expected header 1 */
+#define SH_XN_PI_SIC_CMP_EXP_HDR1_DATA_SHFT 0
+#define SH_XN_PI_SIC_CMP_EXP_HDR1_DATA_MASK 0x000003ffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE0" */
+/* PI compare SIC header enable0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE0 0x00000001500313e0
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE1" */
+/* PI compare SIC header enable1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE1 0x00000001500313f0
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_MASK 0x000003ffffffffff
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_PI_SIC_CMP_HDR_ENABLE1_ENABLE_MASK 0x000003ffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA0" */
+/* PI compare SIC data0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA0 0x0000000150031400
+#define SH_XN_PI_SIC_CMP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA0_DATA0 */
+/* Description: Data0 */
+#define SH_XN_PI_SIC_CMP_DATA0_DATA0_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA0_DATA0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA1" */
+/* PI compare SIC data1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA1 0x0000000150031410
+#define SH_XN_PI_SIC_CMP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA1_DATA1 */
+/* Description: Data1 */
+#define SH_XN_PI_SIC_CMP_DATA1_DATA1_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA1_DATA1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA2" */
+/* PI compare SIC data2 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA2 0x0000000150031420
+#define SH_XN_PI_SIC_CMP_DATA2_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA2_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA2_DATA2 */
+/* Description: Data2 */
+#define SH_XN_PI_SIC_CMP_DATA2_DATA2_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA2_DATA2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA3" */
+/* PI compare SIC data3 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA3 0x0000000150031430
+#define SH_XN_PI_SIC_CMP_DATA3_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA3_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA3_DATA3 */
+/* Description: Data3 */
+#define SH_XN_PI_SIC_CMP_DATA3_DATA3_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA3_DATA3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE0" */
+/* PI enable compare SIC data0 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE0 0x0000000150031440
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0 */
+/* Description: Data0 */
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE0_DATA_ENABLE0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE1" */
+/* PI enable compare SIC data1 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE1 0x0000000150031450
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1 */
+/* Description: Data1 */
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE1_DATA_ENABLE1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE2" */
+/* PI enable compare SIC data2 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE2 0x0000000150031460
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2 */
+/* Description: Data2 */
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE2_DATA_ENABLE2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE3" */
+/* PI enable compare SIC data3 */
+/* ==================================================================== */
+
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE3 0x0000000150031470
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_MASK 0xffffffffffffffff
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_INIT 0x0000000000000000
+
+/* SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3 */
+/* Description: Data3 */
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_SHFT 0
+#define SH_XN_PI_SIC_CMP_DATA_ENABLE3_DATA_ENABLE3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA0" */
+/* NI0 compare IILB input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_IILB_CMP_EXP_DATA0 0x0000000150031700
+#define SH_XN_NI0_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI0_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA1" */
+/* NI0 compare IILB input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_IILB_CMP_EXP_DATA1 0x0000000150031710
+#define SH_XN_NI0_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI0_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_ENABLE0" */
+/* NI0 compare IILB input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_IILB_CMP_ENABLE0 0x0000000150031720
+#define SH_XN_NI0_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_IILB_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI0_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_ENABLE1" */
+/* NI0 compare IILB input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_IILB_CMP_ENABLE1 0x0000000150031730
+#define SH_XN_NI0_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_IILB_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI0_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_EXP_DATA0" */
+/* NI0 compare PI input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_PI_CMP_EXP_DATA0 0x0000000150031740
+#define SH_XN_NI0_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_PI_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_PI_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI0_PI_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI0_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_EXP_DATA1" */
+/* NI0 compare PI input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_PI_CMP_EXP_DATA1 0x0000000150031750
+#define SH_XN_NI0_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_PI_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_PI_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI0_PI_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI0_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_ENABLE0" */
+/* NI0 compare PI input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_PI_CMP_ENABLE0 0x0000000150031760
+#define SH_XN_NI0_PI_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_PI_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_PI_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI0_PI_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI0_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_ENABLE1" */
+/* NI0 compare PI input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_PI_CMP_ENABLE1 0x0000000150031770
+#define SH_XN_NI0_PI_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_PI_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_PI_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI0_PI_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI0_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_EXP_DATA0" */
+/* NI0 compare MD input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_MD_CMP_EXP_DATA0 0x0000000150031780
+#define SH_XN_NI0_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_MD_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_MD_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI0_MD_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI0_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_EXP_DATA1" */
+/* NI0 compare MD input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_MD_CMP_EXP_DATA1 0x0000000150031790
+#define SH_XN_NI0_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_MD_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_MD_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI0_MD_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI0_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_ENABLE0" */
+/* NI0 compare MD input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_MD_CMP_ENABLE0 0x00000001500317a0
+#define SH_XN_NI0_MD_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_MD_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_MD_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI0_MD_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI0_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_ENABLE1" */
+/* NI0 compare MD input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_MD_CMP_ENABLE1 0x00000001500317b0
+#define SH_XN_NI0_MD_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_MD_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_MD_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI0_MD_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI0_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_EXP_DATA0" */
+/* NI0 compare NI input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_NI_CMP_EXP_DATA0 0x00000001500317c0
+#define SH_XN_NI0_NI_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_NI_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_NI_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI0_NI_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI0_NI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_EXP_DATA1" */
+/* NI0 compare NI input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_NI_CMP_EXP_DATA1 0x00000001500317d0
+#define SH_XN_NI0_NI_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_NI_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_NI_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI0_NI_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI0_NI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_ENABLE0" */
+/* NI0 compare NI input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_NI_CMP_ENABLE0 0x00000001500317e0
+#define SH_XN_NI0_NI_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_NI_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_NI_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI0_NI_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI0_NI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_ENABLE1" */
+/* NI0 compare NI input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_NI_CMP_ENABLE1 0x00000001500317f0
+#define SH_XN_NI0_NI_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_NI_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_NI_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI0_NI_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI0_NI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA0" */
+/* NI0 compare LLP input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_LLP_CMP_EXP_DATA0 0x0000000150031800
+#define SH_XN_NI0_LLP_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_LLP_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI0_LLP_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA1" */
+/* NI0 compare LLP input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_LLP_CMP_EXP_DATA1 0x0000000150031810
+#define SH_XN_NI0_LLP_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_LLP_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI0_LLP_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_ENABLE0" */
+/* NI0 compare LLP input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_LLP_CMP_ENABLE0 0x0000000150031820
+#define SH_XN_NI0_LLP_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI0_LLP_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI0_LLP_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_ENABLE1" */
+/* NI0 compare LLP input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI0_LLP_CMP_ENABLE1 0x0000000150031830
+#define SH_XN_NI0_LLP_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI0_LLP_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI0_LLP_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA0" */
+/* NI1 compare IILB input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_IILB_CMP_EXP_DATA0 0x0000000150031900
+#define SH_XN_NI1_IILB_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_IILB_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI1_IILB_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA1" */
+/* NI1 compare IILB input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_IILB_CMP_EXP_DATA1 0x0000000150031910
+#define SH_XN_NI1_IILB_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_IILB_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI1_IILB_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_ENABLE0" */
+/* NI1 compare IILB input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_IILB_CMP_ENABLE0 0x0000000150031920
+#define SH_XN_NI1_IILB_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_IILB_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI1_IILB_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_ENABLE1" */
+/* NI1 compare IILB input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_IILB_CMP_ENABLE1 0x0000000150031930
+#define SH_XN_NI1_IILB_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_IILB_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI1_IILB_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_EXP_DATA0" */
+/* NI1 compare PI input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_PI_CMP_EXP_DATA0 0x0000000150031940
+#define SH_XN_NI1_PI_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_PI_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_PI_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI1_PI_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI1_PI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_EXP_DATA1" */
+/* NI1 compare PI input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_PI_CMP_EXP_DATA1 0x0000000150031950
+#define SH_XN_NI1_PI_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_PI_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_PI_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI1_PI_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI1_PI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_ENABLE0" */
+/* NI1 compare PI input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_PI_CMP_ENABLE0 0x0000000150031960
+#define SH_XN_NI1_PI_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_PI_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_PI_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI1_PI_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI1_PI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_ENABLE1" */
+/* NI1 compare PI input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_PI_CMP_ENABLE1 0x0000000150031970
+#define SH_XN_NI1_PI_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_PI_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_PI_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI1_PI_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI1_PI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_EXP_DATA0" */
+/* NI1 compare MD input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_MD_CMP_EXP_DATA0 0x0000000150031980
+#define SH_XN_NI1_MD_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_MD_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_MD_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI1_MD_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI1_MD_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_EXP_DATA1" */
+/* NI1 compare MD input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_MD_CMP_EXP_DATA1 0x0000000150031990
+#define SH_XN_NI1_MD_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_MD_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_MD_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI1_MD_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI1_MD_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_ENABLE0" */
+/* NI1 compare MD input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_MD_CMP_ENABLE0 0x00000001500319a0
+#define SH_XN_NI1_MD_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_MD_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_MD_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI1_MD_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI1_MD_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_ENABLE1" */
+/* NI1 compare MD input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_MD_CMP_ENABLE1 0x00000001500319b0
+#define SH_XN_NI1_MD_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_MD_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_MD_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI1_MD_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI1_MD_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_EXP_DATA0" */
+/* NI1 compare NI input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_NI_CMP_EXP_DATA0 0x00000001500319c0
+#define SH_XN_NI1_NI_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_NI_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_NI_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI1_NI_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI1_NI_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_EXP_DATA1" */
+/* NI1 compare NI input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_NI_CMP_EXP_DATA1 0x00000001500319d0
+#define SH_XN_NI1_NI_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_NI_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_NI_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI1_NI_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI1_NI_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_ENABLE0" */
+/* NI1 compare NI input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_NI_CMP_ENABLE0 0x00000001500319e0
+#define SH_XN_NI1_NI_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_NI_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_NI_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI1_NI_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI1_NI_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_ENABLE1" */
+/* NI1 compare NI input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_NI_CMP_ENABLE1 0x00000001500319f0
+#define SH_XN_NI1_NI_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_NI_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_NI_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI1_NI_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI1_NI_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA0" */
+/* NI1 compare LLP input expected data0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_LLP_CMP_EXP_DATA0 0x0000000150031a00
+#define SH_XN_NI1_LLP_CMP_EXP_DATA0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_LLP_CMP_EXP_DATA0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA */
+/* Description: Expected data 0 */
+#define SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA_SHFT 0
+#define SH_XN_NI1_LLP_CMP_EXP_DATA0_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA1" */
+/* NI1 compare LLP input expected data1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_LLP_CMP_EXP_DATA1 0x0000000150031a10
+#define SH_XN_NI1_LLP_CMP_EXP_DATA1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_LLP_CMP_EXP_DATA1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA */
+/* Description: Expected data 1 */
+#define SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA_SHFT 0
+#define SH_XN_NI1_LLP_CMP_EXP_DATA1_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_ENABLE0" */
+/* NI1 compare LLP input enable0 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_LLP_CMP_ENABLE0 0x0000000150031a20
+#define SH_XN_NI1_LLP_CMP_ENABLE0_MASK 0xffffffffffffffff
+#define SH_XN_NI1_LLP_CMP_ENABLE0_INIT 0x0000000000000000
+
+/* SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE */
+/* Description: Enable0 */
+#define SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE_SHFT 0
+#define SH_XN_NI1_LLP_CMP_ENABLE0_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_ENABLE1" */
+/* NI1 compare LLP input enable1 */
+/* ==================================================================== */
+
+#define SH_XN_NI1_LLP_CMP_ENABLE1 0x0000000150031a30
+#define SH_XN_NI1_LLP_CMP_ENABLE1_MASK 0xffffffffffffffff
+#define SH_XN_NI1_LLP_CMP_ENABLE1_INIT 0x0000000000000000
+
+/* SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE */
+/* Description: Enable1 */
+#define SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE_SHFT 0
+#define SH_XN_NI1_LLP_CMP_ENABLE1_ENABLE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC_INJ_REG" */
+/* ==================================================================== */
+
+#define SH_XNPI_ECC_INJ_REG 0x0000000150032000
+#define SH_XNPI_ECC_INJ_REG_MASK 0xf0fff0fff0fff0ff
+#define SH_XNPI_ECC_INJ_REG_INIT 0x0000000000000000
+
+/* SH_XNPI_ECC_INJ_REG_BYTE0 */
+/* Description: Replacement Checkbyte */
+#define SH_XNPI_ECC_INJ_REG_BYTE0_SHFT 0
+#define SH_XNPI_ECC_INJ_REG_BYTE0_MASK 0x00000000000000ff
+
+/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT0 */
+/* Description: 1 shot mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT0_SHFT 12
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT0_MASK 0x0000000000001000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CONT0 */
+/* Description: toggle mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT0_SHFT 13
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT0_MASK 0x0000000000002000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0 */
+/* Description: Replace Checkbyte One Shot */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0_SHFT 14
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT0_MASK 0x0000000000004000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0_SHFT 15
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT0_MASK 0x0000000000008000
+
+/* SH_XNPI_ECC_INJ_REG_BYTE1 */
+/* Description: Replacement Checkbyte */
+#define SH_XNPI_ECC_INJ_REG_BYTE1_SHFT 16
+#define SH_XNPI_ECC_INJ_REG_BYTE1_MASK 0x0000000000ff0000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT1 */
+/* Description: 1 shot mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT1_SHFT 28
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT1_MASK 0x0000000010000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CONT1 */
+/* Description: toggle mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT1_SHFT 29
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT1_MASK 0x0000000020000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1 */
+/* Description: Replace Checkbyte One Shot */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1_SHFT 30
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT1_MASK 0x0000000040000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1_SHFT 31
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT1_MASK 0x0000000080000000
+
+/* SH_XNPI_ECC_INJ_REG_BYTE2 */
+/* Description: Replacement Checkbyte */
+#define SH_XNPI_ECC_INJ_REG_BYTE2_SHFT 32
+#define SH_XNPI_ECC_INJ_REG_BYTE2_MASK 0x000000ff00000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT2 */
+/* Description: 1 shot mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT2_SHFT 44
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT2_MASK 0x0000100000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CONT2 */
+/* Description: toggle mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT2_SHFT 45
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT2_MASK 0x0000200000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2 */
+/* Description: Replace Checkbyte One Shot */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2_SHFT 46
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT2_MASK 0x0000400000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2_SHFT 47
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT2_MASK 0x0000800000000000
+
+/* SH_XNPI_ECC_INJ_REG_BYTE3 */
+/* Description: Replacement Checkbyte */
+#define SH_XNPI_ECC_INJ_REG_BYTE3_SHFT 48
+#define SH_XNPI_ECC_INJ_REG_BYTE3_MASK 0x00ff000000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_1SHOT3 */
+/* Description: 1 shot mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT3_SHFT 60
+#define SH_XNPI_ECC_INJ_REG_DATA_1SHOT3_MASK 0x1000000000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CONT3 */
+/* Description: toggle mask data */
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT3_SHFT 61
+#define SH_XNPI_ECC_INJ_REG_DATA_CONT3_MASK 0x2000000000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3 */
+/* Description: Replace Checkbyte One Shot */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3_SHFT 62
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_1SHOT3_MASK 0x4000000000000000
+
+/* SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3_SHFT 63
+#define SH_XNPI_ECC_INJ_REG_DATA_CB_CONT3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC0_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNPI_ECC0_INJ_MASK_REG 0x0000000150032008
+#define SH_XNPI_ECC0_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNPI_ECC0_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0 */
+/* Description: Replacement Data */
+#define SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0_SHFT 0
+#define SH_XNPI_ECC0_INJ_MASK_REG_MASK_ECC0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC1_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNPI_ECC1_INJ_MASK_REG 0x0000000150032010
+#define SH_XNPI_ECC1_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNPI_ECC1_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1 */
+/* Description: Replacement Data */
+#define SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1_SHFT 0
+#define SH_XNPI_ECC1_INJ_MASK_REG_MASK_ECC1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC2_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNPI_ECC2_INJ_MASK_REG 0x0000000150032018
+#define SH_XNPI_ECC2_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNPI_ECC2_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2 */
+/* Description: Replacement Data */
+#define SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2_SHFT 0
+#define SH_XNPI_ECC2_INJ_MASK_REG_MASK_ECC2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC3_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNPI_ECC3_INJ_MASK_REG 0x0000000150032020
+#define SH_XNPI_ECC3_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNPI_ECC3_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3 */
+/* Description: Replacement Data */
+#define SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3_SHFT 0
+#define SH_XNPI_ECC3_INJ_MASK_REG_MASK_ECC3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC_INJ_REG" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC_INJ_REG 0x0000000150032030
+#define SH_XNMD_ECC_INJ_REG_MASK 0xf0fff0fff0fff0ff
+#define SH_XNMD_ECC_INJ_REG_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC_INJ_REG_BYTE0 */
+/* Description: Replacement Checkbyte */
+#define SH_XNMD_ECC_INJ_REG_BYTE0_SHFT 0
+#define SH_XNMD_ECC_INJ_REG_BYTE0_MASK 0x00000000000000ff
+
+/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT0 */
+/* Description: 1 shot mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT0_SHFT 12
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT0_MASK 0x0000000000001000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CONT0 */
+/* Description: toggle mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT0_SHFT 13
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT0_MASK 0x0000000000002000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0 */
+/* Description: Replace Checkbyte One-Shot */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0_SHFT 14
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT0_MASK 0x0000000000004000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0_SHFT 15
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT0_MASK 0x0000000000008000
+
+/* SH_XNMD_ECC_INJ_REG_BYTE1 */
+/* Description: Replacement Checkbyte */
+#define SH_XNMD_ECC_INJ_REG_BYTE1_SHFT 16
+#define SH_XNMD_ECC_INJ_REG_BYTE1_MASK 0x0000000000ff0000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT1 */
+/* Description: 1 shot mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT1_SHFT 28
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT1_MASK 0x0000000010000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CONT1 */
+/* Description: toggle mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT1_SHFT 29
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT1_MASK 0x0000000020000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1 */
+/* Description: Replace Checkbyte One-Shot */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1_SHFT 30
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT1_MASK 0x0000000040000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1_SHFT 31
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT1_MASK 0x0000000080000000
+
+/* SH_XNMD_ECC_INJ_REG_BYTE2 */
+/* Description: Replacement Checkbyte */
+#define SH_XNMD_ECC_INJ_REG_BYTE2_SHFT 32
+#define SH_XNMD_ECC_INJ_REG_BYTE2_MASK 0x000000ff00000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT2 */
+/* Description: 1 shot mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT2_SHFT 44
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT2_MASK 0x0000100000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CONT2 */
+/* Description: toggle mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT2_SHFT 45
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT2_MASK 0x0000200000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2 */
+/* Description: Replace Checkbyte One-Shot */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2_SHFT 46
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT2_MASK 0x0000400000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2_SHFT 47
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT2_MASK 0x0000800000000000
+
+/* SH_XNMD_ECC_INJ_REG_BYTE3 */
+/* Description: Replacement Checkbyte */
+#define SH_XNMD_ECC_INJ_REG_BYTE3_SHFT 48
+#define SH_XNMD_ECC_INJ_REG_BYTE3_MASK 0x00ff000000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_1SHOT3 */
+/* Description: 1 shot mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT3_SHFT 60
+#define SH_XNMD_ECC_INJ_REG_DATA_1SHOT3_MASK 0x1000000000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CONT3 */
+/* Description: toggle mask data */
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT3_SHFT 61
+#define SH_XNMD_ECC_INJ_REG_DATA_CONT3_MASK 0x2000000000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3 */
+/* Description: Replace Checkbyte One-Shot */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3_SHFT 62
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_1SHOT3_MASK 0x4000000000000000
+
+/* SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3 */
+/* Description: Replace Checkbyte Continuous */
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3_SHFT 63
+#define SH_XNMD_ECC_INJ_REG_DATA_CB_CONT3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC0_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC0_INJ_MASK_REG 0x0000000150032038
+#define SH_XNMD_ECC0_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNMD_ECC0_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0 */
+/* Description: Replacement Data */
+#define SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0_SHFT 0
+#define SH_XNMD_ECC0_INJ_MASK_REG_MASK_ECC0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC1_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC1_INJ_MASK_REG 0x0000000150032040
+#define SH_XNMD_ECC1_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNMD_ECC1_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1 */
+/* Description: Replacement Data */
+#define SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1_SHFT 0
+#define SH_XNMD_ECC1_INJ_MASK_REG_MASK_ECC1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC2_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC2_INJ_MASK_REG 0x0000000150032048
+#define SH_XNMD_ECC2_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNMD_ECC2_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2 */
+/* Description: Replacement Data */
+#define SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2_SHFT 0
+#define SH_XNMD_ECC2_INJ_MASK_REG_MASK_ECC2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC3_INJ_MASK_REG" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC3_INJ_MASK_REG 0x0000000150032050
+#define SH_XNMD_ECC3_INJ_MASK_REG_MASK 0xffffffffffffffff
+#define SH_XNMD_ECC3_INJ_MASK_REG_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3 */
+/* Description: Replacement Data */
+#define SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3_SHFT 0
+#define SH_XNMD_ECC3_INJ_MASK_REG_MASK_ECC3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC_ERR_REPORT" */
+/* ==================================================================== */
+
+#define SH_XNMD_ECC_ERR_REPORT 0x0000000150032058
+#define SH_XNMD_ECC_ERR_REPORT_MASK 0x0001000100010001
+#define SH_XNMD_ECC_ERR_REPORT_INIT 0x0000000000000000
+
+/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0 */
+/* Description: Disable Error Correction */
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0_SHFT 0
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE0_MASK 0x0000000000000001
+
+/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1 */
+/* Description: Disable Error Correction */
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1_SHFT 16
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE1_MASK 0x0000000000010000
+
+/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2 */
+/* Description: Disable Error Correction */
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2_SHFT 32
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE2_MASK 0x0000000100000000
+
+/* SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3 */
+/* Description: Disable Error Correction */
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3_SHFT 48
+#define SH_XNMD_ECC_ERR_REPORT_ECC_DISABLE3_MASK 0x0001000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_1" */
+/* ni0 Error Summary Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_SUMMARY_1 0x0000000150040500
+#define SH_NI0_ERROR_SUMMARY_1_MASK 0xffffffffffffffff
+#define SH_NI0_ERROR_SUMMARY_1_INIT 0xffffffffffffffff
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI0_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_1_ALIAS" */
+/* ni0 Error Summary Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_SUMMARY_1_ALIAS 0x0000000150040508
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_2" */
+/* ni0 Error Summary Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_SUMMARY_2 0x0000000150040510
+#define SH_NI0_ERROR_SUMMARY_2_MASK 0x7fffffff003fffff
+#define SH_NI0_ERROR_SUMMARY_2_INIT 0x7fffffff003fffff
+
+/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI0_ERROR_SUMMARY_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI0_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI0_ERROR_SUMMARY_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI0_ERROR_SUMMARY_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI0_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_2_ALIAS" */
+/* ni0 Error Summary Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_SUMMARY_2_ALIAS 0x0000000150040518
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_1" */
+/* ni0 Error Overflow Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_OVERFLOW_1 0x0000000150040520
+#define SH_NI0_ERROR_OVERFLOW_1_MASK 0xffffffffffffffff
+#define SH_NI0_ERROR_OVERFLOW_1_INIT 0xffffffffffffffff
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI0_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_1_ALIAS" */
+/* ni0 Error Overflow Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_OVERFLOW_1_ALIAS 0x0000000150040528
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_2" */
+/* ni0 Error Overflow Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_OVERFLOW_2 0x0000000150040530
+#define SH_NI0_ERROR_OVERFLOW_2_MASK 0x7fffffff003fffff
+#define SH_NI0_ERROR_OVERFLOW_2_INIT 0x7fffffff003fffff
+
+/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI0_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI0_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI0_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI0_ERROR_OVERFLOW_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI0_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_2_ALIAS" */
+/* ni0 Error Overflow Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_OVERFLOW_2_ALIAS 0x0000000150040538
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_MASK_1" */
+/* ni0 Error Mask Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_MASK_1 0x0000000150040540
+#define SH_NI0_ERROR_MASK_1_MASK 0xffffffffffffffff
+#define SH_NI0_ERROR_MASK_1_INIT 0xffffffffffffffff
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI0_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_MASK_2" */
+/* ni0 Error Mask Bits */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_MASK_2 0x0000000150040550
+#define SH_NI0_ERROR_MASK_2_MASK 0x7fffffff003fffff
+#define SH_NI0_ERROR_MASK_2_INIT 0x7fffffff003fffff
+
+/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI0_ERROR_MASK_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI0_ERROR_MASK_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI0_ERROR_MASK_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI0_ERROR_MASK_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI0_ERROR_MASK_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI0_ERROR_MASK_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI0_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_FIRST_ERROR_1" */
+/* ni0 First Error Bits */
+/* ==================================================================== */
+
+#define SH_NI0_FIRST_ERROR_1 0x0000000150040560
+#define SH_NI0_FIRST_ERROR_1_MASK 0xffffffffffffffff
+#define SH_NI0_FIRST_ERROR_1_INIT 0xffffffffffffffff
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI0_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_FIRST_ERROR_2" */
+/* ni0 First Error Bits */
+/* ==================================================================== */
+
+#define SH_NI0_FIRST_ERROR_2 0x0000000150040570
+#define SH_NI0_FIRST_ERROR_2_MASK 0x7fffffff003fffff
+#define SH_NI0_FIRST_ERROR_2_INIT 0x7fffffff003fffff
+
+/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI0_FIRST_ERROR_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI0_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI0_FIRST_ERROR_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI0_FIRST_ERROR_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI0_FIRST_ERROR_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI0_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_1" */
+/* ni0 Chiplet no match header bits 63:0 */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_DETAIL_1 0x0000000150040580
+#define SH_NI0_ERROR_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_NI0_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_NI0_ERROR_DETAIL_1_HEADER */
+/* Description: Header bits 63:0 */
+#define SH_NI0_ERROR_DETAIL_1_HEADER_SHFT 0
+#define SH_NI0_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_2" */
+/* ni0 Chiplet no match header bits 127:64 */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_DETAIL_2 0x0000000150040590
+#define SH_NI0_ERROR_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_NI0_ERROR_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_NI0_ERROR_DETAIL_2_HEADER */
+/* Description: Header bits 127:64 */
+#define SH_NI0_ERROR_DETAIL_2_HEADER_SHFT 0
+#define SH_NI0_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_1" */
+/* ni1 Error Summary Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_SUMMARY_1 0x0000000150040600
+#define SH_NI1_ERROR_SUMMARY_1_MASK 0xffffffffffffffff
+#define SH_NI1_ERROR_SUMMARY_1_INIT 0xffffffffffffffff
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
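The `_SHFT`/`_MASK` pairs above follow one convention throughout this header: for a single-bit field, `MASK` is `1 << SHFT`, and for a wide field such as the `ERROR_DETAIL` headers, `MASK` is the field's bit range shifted into place. As an illustrative sketch only (the `sh_field()` helper and the sample value are hypothetical, not part of the generated header), a raw 64-bit register value read from one of these registers can be decoded like this:

```c
#include <assert.h>

/* Hypothetical helper: extract a field from a raw 64-bit register
 * value using a _MASK/_SHFT pair from this header.  Masking first
 * and then shifting right aligns the field at bit 0. */
static inline unsigned long long sh_field(unsigned long long reg,
					  unsigned long long mask,
					  unsigned int shft)
{
	return (reg & mask) >> shft;
}

/* Two single-bit fields from SH_NI1_ERROR_SUMMARY_1, repeated here
 * so the sketch is self-contained. */
#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
#define SH_NI1_ERROR_SUMMARY_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001ULL
#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
#define SH_NI1_ERROR_SUMMARY_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000ULL
```

For example, with a made-up summary value `0x8000000000000001ULL` (bits 0 and 63 set), `sh_field()` returns 1 for both the `OVERFLOW_FIFO02_DEBIT0` and `TAIL_TIMEOUT_NI_VC3` fields; on real hardware the value would come from an MMIO read of `SH_NI1_ERROR_SUMMARY_1`.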
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_1_ALIAS" */
+/* ni1 Error Summary Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_SUMMARY_1_ALIAS 0x0000000150040608
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_2" */
+/* ni1 Error Summary Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_SUMMARY_2 0x0000000150040610
+#define SH_NI1_ERROR_SUMMARY_2_MASK 0x7fffffff003fffff
+#define SH_NI1_ERROR_SUMMARY_2_INIT 0x7fffffff003fffff
+
+/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI1_ERROR_SUMMARY_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_SUMMARY_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI1_ERROR_SUMMARY_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI1_ERROR_SUMMARY_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI1_ERROR_SUMMARY_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI1_ERROR_SUMMARY_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_2_ALIAS" */
+/* ni1 Error Summary Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_SUMMARY_2_ALIAS 0x0000000150040618
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_1" */
+/* ni1 Error Overflow Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_OVERFLOW_1 0x0000000150040620
+#define SH_NI1_ERROR_OVERFLOW_1_MASK 0xffffffffffffffff
+#define SH_NI1_ERROR_OVERFLOW_1_INIT 0xffffffffffffffff
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_OVERFLOW_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_1_ALIAS" */
+/* ni1 Error Overflow Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_OVERFLOW_1_ALIAS 0x0000000150040628
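+/*
+ * The _SHFT/_MASK pairs above follow the usual single-bit register
+ * convention: each _MASK is (1 << _SHFT). A minimal sketch of how a
+ * driver might test and extract one of these error bits (the sample
+ * register value here is hypothetical, not from real hardware):
+ *
+ *	#include <stdio.h>
+ *	#include <stdint.h>
+ *
+ *	#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+ *	#define SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000ULL
+ *
+ *	int main(void)
+ *	{
+ *		uint64_t overflow = 0x1000000000000001ULL;	// hypothetical value
+ *
+ *		// Test a single latched error bit with its _MASK ...
+ *		if (overflow & SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_MASK)
+ *			printf("NI vc0 tail timeout overflow latched\n");
+ *
+ *		// ... or extract it as a 0/1 value with the _SHFT.
+ *		unsigned int bit = (unsigned int)
+ *			((overflow >> SH_NI1_ERROR_OVERFLOW_1_TAIL_TIMEOUT_NI_VC0_SHFT) & 1);
+ *		printf("bit = %u\n", bit);
+ *		return 0;
+ *	}
+ */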
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_2" */
+/* ni1 Error Overflow Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_OVERFLOW_2 0x0000000150040630
+#define SH_NI1_ERROR_OVERFLOW_2_MASK 0x7fffffff003fffff
+#define SH_NI1_ERROR_OVERFLOW_2_INIT 0x7fffffff003fffff
+
+/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI1_ERROR_OVERFLOW_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_OVERFLOW_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI1_ERROR_OVERFLOW_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI1_ERROR_OVERFLOW_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI1_ERROR_OVERFLOW_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI1_ERROR_OVERFLOW_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_2_ALIAS" */
+/* ni1 Error Overflow Bits Alias */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_OVERFLOW_2_ALIAS 0x0000000150040638
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_MASK_1" */
+/* ni1 Error Mask Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_MASK_1 0x0000000150040640
+#define SH_NI1_ERROR_MASK_1_MASK 0xffffffffffffffff
+#define SH_NI1_ERROR_MASK_1_INIT 0xffffffffffffffff
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_MASK_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_MASK_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_MASK_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_MASK_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo02 vc0 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo02 vc2 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo13 vc1 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo13 vc3 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI1_ERROR_MASK_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_MASK_2" */
+/* ni1 Error Mask Bits */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_MASK_2 0x0000000150040650
+#define SH_NI1_ERROR_MASK_2_MASK 0x7fffffff003fffff
+#define SH_NI1_ERROR_MASK_2_INIT 0x7fffffff003fffff
+
+/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI1_ERROR_MASK_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_ERROR_MASK_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0 */
+/* Description: llp deadlock vc0 */
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1 */
+/* Description: llp deadlock vc1 */
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2 */
+/* Description: llp deadlock vc2 */
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3 */
+/* Description: llp deadlock vc3 */
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI1_ERROR_MASK_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI1_ERROR_MASK_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI1_ERROR_MASK_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI1_ERROR_MASK_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI1_ERROR_MASK_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI1_ERROR_MASK_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_FIRST_ERROR_1" */
+/* ni1 First Error Bits */
+/* ==================================================================== */
+
+#define SH_NI1_FIRST_ERROR_1 0x0000000150040660
+#define SH_NI1_FIRST_ERROR_1_MASK 0xffffffffffffffff
+#define SH_NI1_FIRST_ERROR_1_INIT 0xffffffffffffffff
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0 */
+/* Description: Fifo 02 debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_SHFT 0
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT0_MASK 0x0000000000000001
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2 */
+/* Description: Fifo 02 debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_SHFT 1
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_DEBIT2_MASK 0x0000000000000002
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0 */
+/* Description: Fifo 13 debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_SHFT 2
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT0_MASK 0x0000000000000004
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2 */
+/* Description: Fifo 13 debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_SHFT 3
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_DEBIT2_MASK 0x0000000000000008
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit overflow 0 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit overflow 1 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit overflow 2 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit overflow 0 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit overflow 1 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit overflow 2 */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0 */
+/* Description: PI Fifo debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_SHFT 22
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT0_MASK 0x0000000000400000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2 */
+/* Description: PI Fifo debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_SHFT 23
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_DEBIT2_MASK 0x0000000000800000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0 */
+/* Description: IILB Fifo debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_SHFT 24
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT0_MASK 0x0000000001000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2 */
+/* Description: IILB Fifo debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_SHFT 25
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_DEBIT2_MASK 0x0000000002000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0 */
+/* Description: MD Fifo debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_SHFT 26
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT0_MASK 0x0000000004000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2 */
+/* Description: MD Fifo debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_SHFT 27
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_DEBIT2_MASK 0x0000000008000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0 */
+/* Description: NI Fifo debit0 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_SHFT 28
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT0_MASK 0x0000000010000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1 */
+/* Description: NI Fifo debit1 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_SHFT 29
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT1_MASK 0x0000000020000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2 */
+/* Description: NI Fifo debit2 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_SHFT 30
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT2_MASK 0x0000000040000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3 */
+/* Description: NI Fifo debit3 overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_SHFT 31
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_DEBIT3_MASK 0x0000000080000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit overflow */
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_FIRST_ERROR_1_OVERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0 */
+/* Description: Fifo 02 vc0 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_SHFT 56
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2 */
+/* Description: Fifo 02 vc2 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_SHFT 57
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO02_VC2_MASK 0x0200000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1 */
+/* Description: Fifo 13 vc1 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_SHFT 58
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC1_MASK 0x0400000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3 */
+/* Description: Fifo 13 vc3 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_SHFT 59
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_FIFO13_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0 */
+/* Description: NI vc0 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_SHFT 60
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC0_MASK 0x1000000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1 */
+/* Description: NI vc1 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_SHFT 61
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC1_MASK 0x2000000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2 */
+/* Description: NI vc2 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_SHFT 62
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC2_MASK 0x4000000000000000
+
+/* SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3 */
+/* Description: NI vc3 tail timeout */
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_SHFT 63
+#define SH_NI1_FIRST_ERROR_1_TAIL_TIMEOUT_NI_VC3_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_FIRST_ERROR_2" */
+/* ni1 First Error Bits */
+/* ==================================================================== */
+
+#define SH_NI1_FIRST_ERROR_2 0x0000000150040670
+#define SH_NI1_FIRST_ERROR_2_MASK 0x7fffffff003fffff
+#define SH_NI1_FIRST_ERROR_2_INIT 0x7fffffff003fffff
+
+/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI */
+/* Description: Illegal VC NI */
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI_SHFT 0
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCNI_MASK 0x0000000000000001
+
+/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI */
+/* Description: Illegal VC PI */
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI_SHFT 1
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCPI_MASK 0x0000000000000002
+
+/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD */
+/* Description: Illegal VC MD */
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD_SHFT 2
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCMD_MASK 0x0000000000000004
+
+/* SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB */
+/* Description: Illegal VC IILB */
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB_SHFT 3
+#define SH_NI1_FIRST_ERROR_2_ILLEGAL_VCIILB_MASK 0x0000000000000008
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP */
+/* Description: Fifo 02 vc0 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_SHFT 4
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_POP_MASK 0x0000000000000010
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP */
+/* Description: Fifo 02 vc2 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_SHFT 5
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_POP_MASK 0x0000000000000020
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP */
+/* Description: Fifo 13 vc1 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_SHFT 6
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_POP_MASK 0x0000000000000040
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP */
+/* Description: Fifo 13 vc3 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_SHFT 7
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_POP_MASK 0x0000000000000080
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH */
+/* Description: Fifo 02 vc0 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_SHFT 8
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_PUSH_MASK 0x0000000000000100
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH */
+/* Description: Fifo 02 vc2 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_SHFT 9
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_PUSH_MASK 0x0000000000000200
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH */
+/* Description: Fifo 13 vc1 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_SHFT 10
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC1_PUSH_MASK 0x0000000000000400
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH */
+/* Description: Fifo 13 vc3 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_SHFT 11
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC3_PUSH_MASK 0x0000000000000800
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT */
+/* Description: Fifo 02 vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_SHFT 12
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT */
+/* Description: Fifo 02 vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_SHFT 13
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO02_VC2_CREDIT_MASK 0x0000000000002000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT */
+/* Description: Fifo 13 vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_SHFT 14
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC0_CREDIT_MASK 0x0000000000004000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT */
+/* Description: Fifo 13 vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_SHFT 15
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_FIFO13_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT */
+/* Description: VC0 credit underflow 0 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_SHFT 16
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC0_CREDIT_MASK 0x0000000000010000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT */
+/* Description: VC0 credit underflow 1 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_SHFT 17
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC0_CREDIT_MASK 0x0000000000020000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT */
+/* Description: VC0 credit underflow 2 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_SHFT 18
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC0_CREDIT_MASK 0x0000000000040000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT */
+/* Description: VC2 credit underflow 0 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_SHFT 19
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW0_VC2_CREDIT_MASK 0x0000000000080000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT */
+/* Description: VC2 credit underflow 1 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_SHFT 20
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW1_VC2_CREDIT_MASK 0x0000000000100000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT */
+/* Description: VC2 credit underflow 2 */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_SHFT 21
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW2_VC2_CREDIT_MASK 0x0000000000200000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP */
+/* Description: PI Fifo vc0 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_SHFT 32
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_POP_MASK 0x0000000100000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP */
+/* Description: PI Fifo vc2 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_SHFT 33
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_POP_MASK 0x0000000200000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP */
+/* Description: IILB Fifo vc0 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_SHFT 34
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_POP_MASK 0x0000000400000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP */
+/* Description: IILB Fifo vc2 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_SHFT 35
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_POP_MASK 0x0000000800000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP */
+/* Description: MD Fifo vc0 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_SHFT 36
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_POP_MASK 0x0000001000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP */
+/* Description: MD Fifo vc2 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_SHFT 37
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_POP_MASK 0x0000002000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP */
+/* Description: NI Fifo vc0 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_SHFT 38
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_POP_MASK 0x0000004000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP */
+/* Description: NI Fifo vc2 pop underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_SHFT 39
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_POP_MASK 0x0000008000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH */
+/* Description: PI Fifo vc0 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_SHFT 40
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_PUSH_MASK 0x0000010000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH */
+/* Description: PI Fifo vc2 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_SHFT 41
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_PUSH_MASK 0x0000020000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH */
+/* Description: IILB Fifo vc0 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_SHFT 42
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_PUSH_MASK 0x0000040000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH */
+/* Description: IILB Fifo vc2 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_SHFT 43
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_PUSH_MASK 0x0000080000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH */
+/* Description: MD Fifo vc0 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_SHFT 44
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_PUSH_MASK 0x0000100000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH */
+/* Description: MD Fifo vc2 push underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_SHFT 45
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_PUSH_MASK 0x0000200000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT */
+/* Description: PI Fifo vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_SHFT 46
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT */
+/* Description: PI Fifo vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_SHFT 47
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_PI_FIFO_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT */
+/* Description: IILB Fifo vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_SHFT 48
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC0_CREDIT_MASK 0x0001000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT */
+/* Description: IILB Fifo vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_SHFT 49
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_IILB_FIFO_VC2_CREDIT_MASK 0x0002000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT */
+/* Description: MD Fifo vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_SHFT 50
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC0_CREDIT_MASK 0x0004000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT */
+/* Description: MD Fifo vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_SHFT 51
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_MD_FIFO_VC2_CREDIT_MASK 0x0008000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT */
+/* Description: NI Fifo vc0 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_SHFT 52
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC0_CREDIT_MASK 0x0010000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT */
+/* Description: NI Fifo vc1 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_SHFT 53
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC1_CREDIT_MASK 0x0020000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT */
+/* Description: NI Fifo vc2 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_SHFT 54
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC2_CREDIT_MASK 0x0040000000000000
+
+/* SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT */
+/* Description: NI Fifo vc3 credit underflow */
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_SHFT 55
+#define SH_NI1_FIRST_ERROR_2_UNDERFLOW_NI_FIFO_VC3_CREDIT_MASK 0x0080000000000000
+
+/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0 */
+/* Description: LLP deadlock vc0 */
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0_SHFT 56
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC0_MASK 0x0100000000000000
+
+/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1 */
+/* Description: LLP deadlock vc1 */
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1_SHFT 57
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC1_MASK 0x0200000000000000
+
+/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2 */
+/* Description: LLP deadlock vc2 */
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2_SHFT 58
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC2_MASK 0x0400000000000000
+
+/* SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3 */
+/* Description: LLP deadlock vc3 */
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3_SHFT 59
+#define SH_NI1_FIRST_ERROR_2_LLP_DEADLOCK_VC3_MASK 0x0800000000000000
+
+/* SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH */
+/* Description: Chiplet no match */
+#define SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH_SHFT 60
+#define SH_NI1_FIRST_ERROR_2_CHIPLET_NOMATCH_MASK 0x1000000000000000
+
+/* SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR_SHFT 61
+#define SH_NI1_FIRST_ERROR_2_LUT_READ_ERROR_MASK 0x2000000000000000
+
+/* SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR */
+/* Description: Retry Timeout Error */
+#define SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_SHFT 62
+#define SH_NI1_FIRST_ERROR_2_RETRY_TIMEOUT_ERROR_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_1" */
+/* ni1 Chiplet no match header bits 63:0 */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_DETAIL_1 0x0000000150040680
+#define SH_NI1_ERROR_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_NI1_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_NI1_ERROR_DETAIL_1_HEADER */
+/* Description: Header bits 63:0 */
+#define SH_NI1_ERROR_DETAIL_1_HEADER_SHFT 0
+#define SH_NI1_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_2" */
+/* ni1 Chiplet no match header bits 127:64 */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_DETAIL_2 0x0000000150040690
+#define SH_NI1_ERROR_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_NI1_ERROR_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_NI1_ERROR_DETAIL_2_HEADER */
+/* Description: Header bits 127:64 */
+#define SH_NI1_ERROR_DETAIL_2_HEADER_SHFT 0
+#define SH_NI1_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_1" */
+/* Corrected error details */
+/* ==================================================================== */
+
+#define SH_XN_CORRECTED_DETAIL_1 0x0000000150040070
+#define SH_XN_CORRECTED_DETAIL_1_MASK 0x0fff0fff0fff0fff
+#define SH_XN_CORRECTED_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME */
+/* Description: ECC0 Syndrome */
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_SHFT 0
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_MASK 0x00000000000000ff
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC0_WC */
+/* Description: ECC0 Word Count */
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_WC_SHFT 8
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_WC_MASK 0x0000000000000300
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC0_VC */
+/* Description: ECC0 Virtual Channel */
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_VC_SHFT 10
+#define SH_XN_CORRECTED_DETAIL_1_ECC0_VC_MASK 0x0000000000000c00
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME */
+/* Description: ECC1 Syndrome */
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME_SHFT 16
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_SYNDROME_MASK 0x0000000000ff0000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC1_WC */
+/* Description: ECC1 Word Count */
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_WC_SHFT 24
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_WC_MASK 0x0000000003000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC1_VC */
+/* Description: ECC1 Virtual Channel */
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_VC_SHFT 26
+#define SH_XN_CORRECTED_DETAIL_1_ECC1_VC_MASK 0x000000000c000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME */
+/* Description: ECC2 Syndrome */
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME_SHFT 32
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_SYNDROME_MASK 0x000000ff00000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC2_WC */
+/* Description: ECC2 Word Count */
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_WC_SHFT 40
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_WC_MASK 0x0000030000000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC2_VC */
+/* Description: ECC2 Virtual Channel */
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_VC_SHFT 42
+#define SH_XN_CORRECTED_DETAIL_1_ECC2_VC_MASK 0x00000c0000000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME */
+/* Description: ECC3 Syndrome */
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME_SHFT 48
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC3_WC */
+/* Description: ECC3 Word Count */
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_WC_SHFT 56
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_WC_MASK 0x0300000000000000
+
+/* SH_XN_CORRECTED_DETAIL_1_ECC3_VC */
+/* Description: ECC3 Virtual Channel */
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_VC_SHFT 58
+#define SH_XN_CORRECTED_DETAIL_1_ECC3_VC_MASK 0x0c00000000000000
+
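+/*
+ * Usage sketch (illustrative only, not part of the hardware definition):
+ * every field above is extracted from a raw register value by masking
+ * and shifting with its paired _MASK/_SHFT macros. For example, to pull
+ * the ECC0 syndrome out of SH_XN_CORRECTED_DETAIL_1 (the read helper is
+ * hypothetical; use whatever MMIO accessor the platform provides):
+ *
+ *	u64 detail = read_shub_reg(SH_XN_CORRECTED_DETAIL_1);
+ *	u64 syn = (detail & SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_MASK)
+ *		  >> SH_XN_CORRECTED_DETAIL_1_ECC0_SYNDROME_SHFT;
+ */
+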
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_2" */
+/* Corrected error data */
+/* ==================================================================== */
+
+#define SH_XN_CORRECTED_DETAIL_2 0x0000000150040080
+#define SH_XN_CORRECTED_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_XN_CORRECTED_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_XN_CORRECTED_DETAIL_2_DATA */
+/* Description: ECC data */
+#define SH_XN_CORRECTED_DETAIL_2_DATA_SHFT 0
+#define SH_XN_CORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_3" */
+/* Corrected error header0 */
+/* ==================================================================== */
+
+#define SH_XN_CORRECTED_DETAIL_3 0x0000000150040090
+#define SH_XN_CORRECTED_DETAIL_3_MASK 0xffffffffffffffff
+#define SH_XN_CORRECTED_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_XN_CORRECTED_DETAIL_3_HEADER0 */
+/* Description: ECC header0 (bits 63 - 0) */
+#define SH_XN_CORRECTED_DETAIL_3_HEADER0_SHFT 0
+#define SH_XN_CORRECTED_DETAIL_3_HEADER0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_4" */
+/* Corrected error header1 */
+/* ==================================================================== */
+
+#define SH_XN_CORRECTED_DETAIL_4 0x00000001500400a0
+#define SH_XN_CORRECTED_DETAIL_4_MASK 0xc00003ffffffffff
+#define SH_XN_CORRECTED_DETAIL_4_INIT 0x0000000000000000
+
+/* SH_XN_CORRECTED_DETAIL_4_HEADER1 */
+/* Description: ECC header1 (bits 104 - 64) */
+#define SH_XN_CORRECTED_DETAIL_4_HEADER1_SHFT 0
+#define SH_XN_CORRECTED_DETAIL_4_HEADER1_MASK 0x000003ffffffffff
+
+/* SH_XN_CORRECTED_DETAIL_4_ERR_GROUP */
+/* Description: Error group */
+#define SH_XN_CORRECTED_DETAIL_4_ERR_GROUP_SHFT 62
+#define SH_XN_CORRECTED_DETAIL_4_ERR_GROUP_MASK 0xc000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_1" */
+/* Uncorrected error details */
+/* ==================================================================== */
+
+#define SH_XN_UNCORRECTED_DETAIL_1 0x00000001500400b0
+#define SH_XN_UNCORRECTED_DETAIL_1_MASK 0x0fff0fff0fff0fff
+#define SH_XN_UNCORRECTED_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME */
+/* Description: ECC0 Syndrome */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME_SHFT 0
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_SYNDROME_MASK 0x00000000000000ff
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC */
+/* Description: ECC0 Word Count */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC_SHFT 8
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_WC_MASK 0x0000000000000300
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC */
+/* Description: ECC0 Virtual Channel */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC_SHFT 10
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC0_VC_MASK 0x0000000000000c00
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME */
+/* Description: ECC1 Syndrome */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME_SHFT 16
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_SYNDROME_MASK 0x0000000000ff0000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC */
+/* Description: ECC1 Word Count */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC_SHFT 24
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_WC_MASK 0x0000000003000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC */
+/* Description: ECC1 Virtual Channel */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC_SHFT 26
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC1_VC_MASK 0x000000000c000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME */
+/* Description: ECC2 Syndrome */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME_SHFT 32
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_SYNDROME_MASK 0x000000ff00000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC */
+/* Description: ECC2 Word Count */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC_SHFT 40
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_WC_MASK 0x0000030000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC */
+/* Description: ECC2 Virtual Channel */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC_SHFT 42
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC2_VC_MASK 0x00000c0000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME */
+/* Description: ECC3 Syndrome */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME_SHFT 48
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_SYNDROME_MASK 0x00ff000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC */
+/* Description: ECC3 Word Count */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC_SHFT 56
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_WC_MASK 0x0300000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC */
+/* Description: ECC3 Virtual Channel */
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC_SHFT 58
+#define SH_XN_UNCORRECTED_DETAIL_1_ECC3_VC_MASK 0x0c00000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_2" */
+/* Uncorrected error data */
+/* ==================================================================== */
+
+#define SH_XN_UNCORRECTED_DETAIL_2 0x00000001500400c0
+#define SH_XN_UNCORRECTED_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_XN_UNCORRECTED_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_2_DATA */
+/* Description: ECC data */
+#define SH_XN_UNCORRECTED_DETAIL_2_DATA_SHFT 0
+#define SH_XN_UNCORRECTED_DETAIL_2_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_3" */
+/* Uncorrected error header0 */
+/* ==================================================================== */
+
+#define SH_XN_UNCORRECTED_DETAIL_3 0x00000001500400d0
+#define SH_XN_UNCORRECTED_DETAIL_3_MASK 0xffffffffffffffff
+#define SH_XN_UNCORRECTED_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_3_HEADER0 */
+/* Description: ECC header0 (bits 63 - 0) */
+#define SH_XN_UNCORRECTED_DETAIL_3_HEADER0_SHFT 0
+#define SH_XN_UNCORRECTED_DETAIL_3_HEADER0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_4" */
+/* Uncorrected error header1 */
+/* ==================================================================== */
+
+#define SH_XN_UNCORRECTED_DETAIL_4 0x00000001500400e0
+#define SH_XN_UNCORRECTED_DETAIL_4_MASK 0xc00003ffffffffff
+#define SH_XN_UNCORRECTED_DETAIL_4_INIT 0x0000000000000000
+
+/* SH_XN_UNCORRECTED_DETAIL_4_HEADER1 */
+/* Description: ECC header1 (bits 105 - 64) */
+#define SH_XN_UNCORRECTED_DETAIL_4_HEADER1_SHFT 0
+#define SH_XN_UNCORRECTED_DETAIL_4_HEADER1_MASK 0x000003ffffffffff
+
+/* SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP */
+/* Description: Error group */
+#define SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP_SHFT 62
+#define SH_XN_UNCORRECTED_DETAIL_4_ERR_GROUP_MASK 0xc000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_DETAIL_1" */
+/* Look Up Table Address (md) */
+/* ==================================================================== */
+
+#define SH_XNMD_ERROR_DETAIL_1 0x00000001500400f0
+#define SH_XNMD_ERROR_DETAIL_1_MASK 0x00000000000007ff
+#define SH_XNMD_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_XNMD_ERROR_DETAIL_1_LUT_ADDR */
+/* Description: Look Up Table Read Address */
+#define SH_XNMD_ERROR_DETAIL_1_LUT_ADDR_SHFT 0
+#define SH_XNMD_ERROR_DETAIL_1_LUT_ADDR_MASK 0x00000000000007ff
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_DETAIL_1" */
+/* Look Up Table Address (pi) */
+/* ==================================================================== */
+
+#define SH_XNPI_ERROR_DETAIL_1 0x0000000150040100
+#define SH_XNPI_ERROR_DETAIL_1_MASK 0x00000000000007ff
+#define SH_XNPI_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_XNPI_ERROR_DETAIL_1_LUT_ADDR */
+/* Description: Look Up Table Read Address */
+#define SH_XNPI_ERROR_DETAIL_1_LUT_ADDR_SHFT 0
+#define SH_XNPI_ERROR_DETAIL_1_LUT_ADDR_MASK 0x00000000000007ff
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_1" */
+/* Chiplet NoMatch header [63:0] */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_DETAIL_1 0x0000000150040110
+#define SH_XNIILB_ERROR_DETAIL_1_MASK 0xffffffffffffffff
+#define SH_XNIILB_ERROR_DETAIL_1_INIT 0x0000000000000000
+
+/* SH_XNIILB_ERROR_DETAIL_1_HEADER */
+/* Description: header bits [63:0] */
+#define SH_XNIILB_ERROR_DETAIL_1_HEADER_SHFT 0
+#define SH_XNIILB_ERROR_DETAIL_1_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_2" */
+/* Chiplet NoMatch header [127:64] */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_DETAIL_2 0x0000000150040120
+#define SH_XNIILB_ERROR_DETAIL_2_MASK 0xffffffffffffffff
+#define SH_XNIILB_ERROR_DETAIL_2_INIT 0x0000000000000000
+
+/* SH_XNIILB_ERROR_DETAIL_2_HEADER */
+/* Description: header bits [127:64] */
+#define SH_XNIILB_ERROR_DETAIL_2_HEADER_SHFT 0
+#define SH_XNIILB_ERROR_DETAIL_2_HEADER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_3" */
+/* Look Up Table Address (iilb) */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_DETAIL_3 0x0000000150040130
+#define SH_XNIILB_ERROR_DETAIL_3_MASK 0x00000000000007ff
+#define SH_XNIILB_ERROR_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR */
+/* Description: Look Up Table Read Address */
+#define SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR_SHFT 0
+#define SH_XNIILB_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_3" */
+/* Look Up Table Address (ni0) */
+/* ==================================================================== */
+
+#define SH_NI0_ERROR_DETAIL_3 0x0000000150040140
+#define SH_NI0_ERROR_DETAIL_3_MASK 0x00000000000007ff
+#define SH_NI0_ERROR_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_NI0_ERROR_DETAIL_3_LUT_ADDR */
+/* Description: Look Up Table Read Address */
+#define SH_NI0_ERROR_DETAIL_3_LUT_ADDR_SHFT 0
+#define SH_NI0_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_3" */
+/* Look Up Table Address (ni1) */
+/* ==================================================================== */
+
+#define SH_NI1_ERROR_DETAIL_3 0x0000000150040150
+#define SH_NI1_ERROR_DETAIL_3_MASK 0x00000000000007ff
+#define SH_NI1_ERROR_DETAIL_3_INIT 0x0000000000000000
+
+/* SH_NI1_ERROR_DETAIL_3_LUT_ADDR */
+/* Description: Look Up Table Read Address */
+#define SH_NI1_ERROR_DETAIL_3_LUT_ADDR_SHFT 0
+#define SH_NI1_ERROR_DETAIL_3_LUT_ADDR_MASK 0x00000000000007ff
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+#define SH_XN_ERROR_SUMMARY 0x0000000150040000
+#define SH_XN_ERROR_SUMMARY_MASK 0x0000003fffffffff
+#define SH_XN_ERROR_SUMMARY_INIT 0x0000003fffffffff
+
+/* SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW */
+/* Description: NI0 pop overflow */
+#define SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW_SHFT 0
+#define SH_XN_ERROR_SUMMARY_NI0_POP_OVERFLOW_MASK 0x0000000000000001
+
+/* SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW */
+/* Description: NI0 push overflow */
+#define SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW_SHFT 1
+#define SH_XN_ERROR_SUMMARY_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002
+
+/* SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW */
+/* Description: NI0 credit overflow */
+#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW_SHFT 2
+#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004
+
+/* SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW */
+/* Description: NI0 debit overflow */
+#define SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW_SHFT 3
+#define SH_XN_ERROR_SUMMARY_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008
+
+/* SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW */
+/* Description: NI0 pop underflow */
+#define SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW_SHFT 4
+#define SH_XN_ERROR_SUMMARY_NI0_POP_UNDERFLOW_MASK 0x0000000000000010
+
+/* SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW */
+/* Description: NI0 push underflow */
+#define SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW_SHFT 5
+#define SH_XN_ERROR_SUMMARY_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020
+
+/* SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW */
+/* Description: NI0 credit underflow */
+#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW_SHFT 6
+#define SH_XN_ERROR_SUMMARY_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040
+
+/* SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR */
+/* Description: NI0 llp error */
+#define SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR_SHFT 7
+#define SH_XN_ERROR_SUMMARY_NI0_LLP_ERROR_MASK 0x0000000000000080
+
+/* SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR */
+/* Description: NI0 Pipe in/out errors */
+#define SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR_SHFT 8
+#define SH_XN_ERROR_SUMMARY_NI0_PIPE_ERROR_MASK 0x0000000000000100
+
+/* SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW */
+/* Description: NI1 pop overflow */
+#define SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW_SHFT 9
+#define SH_XN_ERROR_SUMMARY_NI1_POP_OVERFLOW_MASK 0x0000000000000200
+
+/* SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW */
+/* Description: NI1 push overflow */
+#define SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW_SHFT 10
+#define SH_XN_ERROR_SUMMARY_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW */
+/* Description: NI1 credit overflow */
+#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW_SHFT 11
+#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800
+
+/* SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW */
+/* Description: NI1 debit overflow */
+#define SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW_SHFT 12
+#define SH_XN_ERROR_SUMMARY_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000
+
+/* SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW */
+/* Description: NI1 pop underflow */
+#define SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW_SHFT 13
+#define SH_XN_ERROR_SUMMARY_NI1_POP_UNDERFLOW_MASK 0x0000000000002000
+
+/* SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW */
+/* Description: NI1 push underflow */
+#define SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW_SHFT 14
+#define SH_XN_ERROR_SUMMARY_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000
+
+/* SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW */
+/* Description: NI1 credit underflow */
+#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW_SHFT 15
+#define SH_XN_ERROR_SUMMARY_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000
+
+/* SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR */
+/* Description: NI1 llp error */
+#define SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR_SHFT 16
+#define SH_XN_ERROR_SUMMARY_NI1_LLP_ERROR_MASK 0x0000000000010000
+
+/* SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR */
+/* Description: NI1 pipe in/out error */
+#define SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR_SHFT 17
+#define SH_XN_ERROR_SUMMARY_NI1_PIPE_ERROR_MASK 0x0000000000020000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW */
+/* Description: XNMD credit overflow */
+#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW_SHFT 18
+#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW */
+/* Description: XNMD debit overflow */
+#define SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW_SHFT 19
+#define SH_XN_ERROR_SUMMARY_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW */
+/* Description: XNMD data buffer overflow */
+#define SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW_SHFT 20
+#define SH_XN_ERROR_SUMMARY_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW */
+/* Description: XNMD credit underflow */
+#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW_SHFT 21
+#define SH_XN_ERROR_SUMMARY_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR */
+/* Description: XNMD single bit error */
+#define SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR_SHFT 22
+#define SH_XN_ERROR_SUMMARY_XNMD_SBE_ERROR_MASK 0x0000000000400000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR */
+/* Description: XNMD uncorrectable error */
+#define SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR_SHFT 23
+#define SH_XN_ERROR_SUMMARY_XNMD_UCE_ERROR_MASK 0x0000000000800000
+
+/* SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR */
+/* Description: XNMD look up table error */
+#define SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR_SHFT 24
+#define SH_XN_ERROR_SUMMARY_XNMD_LUT_ERROR_MASK 0x0000000001000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW */
+/* Description: XNPI credit overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW_SHFT 25
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW */
+/* Description: XNPI debit overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW_SHFT 26
+#define SH_XN_ERROR_SUMMARY_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW */
+/* Description: XNPI data buffer overflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW_SHFT 27
+#define SH_XN_ERROR_SUMMARY_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW */
+/* Description: XNPI credit underflow */
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW_SHFT 28
+#define SH_XN_ERROR_SUMMARY_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR */
+/* Description: XNPI single bit error */
+#define SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR_SHFT 29
+#define SH_XN_ERROR_SUMMARY_XNPI_SBE_ERROR_MASK 0x0000000020000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR */
+/* Description: XNPI uncorrectable error */
+#define SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR_SHFT 30
+#define SH_XN_ERROR_SUMMARY_XNPI_UCE_ERROR_MASK 0x0000000040000000
+
+/* SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR */
+/* Description: XNPI look up table error */
+#define SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR_SHFT 31
+#define SH_XN_ERROR_SUMMARY_XNPI_LUT_ERROR_MASK 0x0000000080000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW */
+/* Description: IILB debit overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW_SHFT 32
+#define SH_XN_ERROR_SUMMARY_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW */
+/* Description: IILB credit overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW_SHFT 33
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW */
+/* Description: IILB fifo overflow */
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW_SHFT 34
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW */
+/* Description: IILB credit underflow */
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW_SHFT 35
+#define SH_XN_ERROR_SUMMARY_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW */
+/* Description: IILB fifo underflow */
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW_SHFT 36
+#define SH_XN_ERROR_SUMMARY_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000
+
+/* SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT */
+/* Description: IILB chiplet nomatch or lut read error */
+#define SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT_SHFT 37
+#define SH_XN_ERROR_SUMMARY_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_ERRORS_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XN_ERRORS_ALIAS 0x0000000150040008
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+#define SH_XN_ERROR_OVERFLOW 0x0000000150040020
+#define SH_XN_ERROR_OVERFLOW_MASK 0x0000003fffffffff
+#define SH_XN_ERROR_OVERFLOW_INIT 0x0000003fffffffff
+
+/* SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW */
+/* Description: NI0 pop overflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW_SHFT 0
+#define SH_XN_ERROR_OVERFLOW_NI0_POP_OVERFLOW_MASK 0x0000000000000001
+
+/* SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW */
+/* Description: NI0 push overflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW_SHFT 1
+#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002
+
+/* SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW */
+/* Description: NI0 credit overflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW_SHFT 2
+#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004
+
+/* SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW */
+/* Description: NI0 debit overflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW_SHFT 3
+#define SH_XN_ERROR_OVERFLOW_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008
+
+/* SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW */
+/* Description: NI0 pop underflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW_SHFT 4
+#define SH_XN_ERROR_OVERFLOW_NI0_POP_UNDERFLOW_MASK 0x0000000000000010
+
+/* SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW */
+/* Description: NI0 push underflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW_SHFT 5
+#define SH_XN_ERROR_OVERFLOW_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020
+
+/* SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW */
+/* Description: NI0 credit underflow */
+#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW_SHFT 6
+#define SH_XN_ERROR_OVERFLOW_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040
+
+/* SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR */
+/* Description: NI0 llp error */
+#define SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR_SHFT 7
+#define SH_XN_ERROR_OVERFLOW_NI0_LLP_ERROR_MASK 0x0000000000000080
+
+/* SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR */
+/* Description: NI0 Pipe in/out errors */
+#define SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR_SHFT 8
+#define SH_XN_ERROR_OVERFLOW_NI0_PIPE_ERROR_MASK 0x0000000000000100
+
+/* SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW */
+/* Description: NI1 pop overflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW_SHFT 9
+#define SH_XN_ERROR_OVERFLOW_NI1_POP_OVERFLOW_MASK 0x0000000000000200
+
+/* SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW */
+/* Description: NI1 push overflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW_SHFT 10
+#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW */
+/* Description: NI1 credit overflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW_SHFT 11
+#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800
+
+/* SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW */
+/* Description: NI1 debit overflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW_SHFT 12
+#define SH_XN_ERROR_OVERFLOW_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000
+
+/* SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW */
+/* Description: NI1 pop underflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW_SHFT 13
+#define SH_XN_ERROR_OVERFLOW_NI1_POP_UNDERFLOW_MASK 0x0000000000002000
+
+/* SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW */
+/* Description: NI1 push underflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW_SHFT 14
+#define SH_XN_ERROR_OVERFLOW_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000
+
+/* SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW */
+/* Description: NI1 credit underflow */
+#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW_SHFT 15
+#define SH_XN_ERROR_OVERFLOW_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000
+
+/* SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR */
+/* Description: NI1 llp error */
+#define SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR_SHFT 16
+#define SH_XN_ERROR_OVERFLOW_NI1_LLP_ERROR_MASK 0x0000000000010000
+
+/* SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR */
+/* Description: NI1 pipe in/out error */
+#define SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR_SHFT 17
+#define SH_XN_ERROR_OVERFLOW_NI1_PIPE_ERROR_MASK 0x0000000000020000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW */
+/* Description: XNMD credit overflow */
+#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW_SHFT 18
+#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW */
+/* Description: XNMD debit overflow */
+#define SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW_SHFT 19
+#define SH_XN_ERROR_OVERFLOW_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW */
+/* Description: XNMD data buffer overflow */
+#define SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW_SHFT 20
+#define SH_XN_ERROR_OVERFLOW_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW */
+/* Description: XNMD credit underflow */
+#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW_SHFT 21
+#define SH_XN_ERROR_OVERFLOW_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR */
+/* Description: XNMD single bit error */
+#define SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR_SHFT 22
+#define SH_XN_ERROR_OVERFLOW_XNMD_SBE_ERROR_MASK 0x0000000000400000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR */
+/* Description: XNMD uncorrectable error */
+#define SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR_SHFT 23
+#define SH_XN_ERROR_OVERFLOW_XNMD_UCE_ERROR_MASK 0x0000000000800000
+
+/* SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR */
+/* Description: XNMD look up table error */
+#define SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR_SHFT 24
+#define SH_XN_ERROR_OVERFLOW_XNMD_LUT_ERROR_MASK 0x0000000001000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW */
+/* Description: XNPI credit overflow */
+#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW_SHFT 25
+#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW */
+/* Description: XNPI debit overflow */
+#define SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW_SHFT 26
+#define SH_XN_ERROR_OVERFLOW_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW */
+/* Description: XNPI data buffer overflow */
+#define SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW_SHFT 27
+#define SH_XN_ERROR_OVERFLOW_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW */
+/* Description: XNPI credit underflow */
+#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW_SHFT 28
+#define SH_XN_ERROR_OVERFLOW_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR */
+/* Description: XNPI single bit error */
+#define SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR_SHFT 29
+#define SH_XN_ERROR_OVERFLOW_XNPI_SBE_ERROR_MASK 0x0000000020000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR */
+/* Description: XNPI uncorrectable error */
+#define SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR_SHFT 30
+#define SH_XN_ERROR_OVERFLOW_XNPI_UCE_ERROR_MASK 0x0000000040000000
+
+/* SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR */
+/* Description: XNPI look up table error */
+#define SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR_SHFT 31
+#define SH_XN_ERROR_OVERFLOW_XNPI_LUT_ERROR_MASK 0x0000000080000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW */
+/* Description: IILB debit overflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW_SHFT 32
+#define SH_XN_ERROR_OVERFLOW_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW */
+/* Description: IILB credit overflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW_SHFT 33
+#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW */
+/* Description: IILB fifo overflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW_SHFT 34
+#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW */
+/* Description: IILB credit underflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW_SHFT 35
+#define SH_XN_ERROR_OVERFLOW_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW */
+/* Description: IILB fifo underflow */
+#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW_SHFT 36
+#define SH_XN_ERROR_OVERFLOW_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000
+
+/* SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT */
+/* Description: IILB chiplet nomatch or lut read error */
+#define SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT_SHFT 37
+#define SH_XN_ERROR_OVERFLOW_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_OVERFLOW_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XN_ERROR_OVERFLOW_ALIAS 0x0000000150040028
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_MASK" */
+/* ==================================================================== */
+
+#define SH_XN_ERROR_MASK 0x0000000150040040
+#define SH_XN_ERROR_MASK_MASK 0x0000003fffffffff
+#define SH_XN_ERROR_MASK_INIT 0x0000003fffffffff
+
+/* SH_XN_ERROR_MASK_NI0_POP_OVERFLOW */
+/* Description: NI0 pop overflow */
+#define SH_XN_ERROR_MASK_NI0_POP_OVERFLOW_SHFT 0
+#define SH_XN_ERROR_MASK_NI0_POP_OVERFLOW_MASK 0x0000000000000001
+
+/* SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW */
+/* Description: NI0 push overflow */
+#define SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW_SHFT 1
+#define SH_XN_ERROR_MASK_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002
+
+/* SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW */
+/* Description: NI0 credit overflow */
+#define SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW_SHFT 2
+#define SH_XN_ERROR_MASK_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004
+
+/* SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW */
+/* Description: NI0 debit overflow */
+#define SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW_SHFT 3
+#define SH_XN_ERROR_MASK_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008
+
+/* SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW */
+/* Description: NI0 pop underflow */
+#define SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW_SHFT 4
+#define SH_XN_ERROR_MASK_NI0_POP_UNDERFLOW_MASK 0x0000000000000010
+
+/* SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW */
+/* Description: NI0 push underflow */
+#define SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW_SHFT 5
+#define SH_XN_ERROR_MASK_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020
+
+/* SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW */
+/* Description: NI0 credit underflow */
+#define SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW_SHFT 6
+#define SH_XN_ERROR_MASK_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040
+
+/* SH_XN_ERROR_MASK_NI0_LLP_ERROR */
+/* Description: NI0 llp error */
+#define SH_XN_ERROR_MASK_NI0_LLP_ERROR_SHFT 7
+#define SH_XN_ERROR_MASK_NI0_LLP_ERROR_MASK 0x0000000000000080
+
+/* SH_XN_ERROR_MASK_NI0_PIPE_ERROR */
+/* Description: NI0 Pipe in/out errors */
+#define SH_XN_ERROR_MASK_NI0_PIPE_ERROR_SHFT 8
+#define SH_XN_ERROR_MASK_NI0_PIPE_ERROR_MASK 0x0000000000000100
+
+/* SH_XN_ERROR_MASK_NI1_POP_OVERFLOW */
+/* Description: NI1 pop overflow */
+#define SH_XN_ERROR_MASK_NI1_POP_OVERFLOW_SHFT 9
+#define SH_XN_ERROR_MASK_NI1_POP_OVERFLOW_MASK 0x0000000000000200
+
+/* SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW */
+/* Description: NI1 push overflow */
+#define SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW_SHFT 10
+#define SH_XN_ERROR_MASK_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW */
+/* Description: NI1 credit overflow */
+#define SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW_SHFT 11
+#define SH_XN_ERROR_MASK_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800
+
+/* SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW */
+/* Description: NI1 debit overflow */
+#define SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW_SHFT 12
+#define SH_XN_ERROR_MASK_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000
+
+/* SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW */
+/* Description: NI1 pop underflow */
+#define SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW_SHFT 13
+#define SH_XN_ERROR_MASK_NI1_POP_UNDERFLOW_MASK 0x0000000000002000
+
+/* SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW */
+/* Description: NI1 push underflow */
+#define SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW_SHFT 14
+#define SH_XN_ERROR_MASK_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000
+
+/* SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW */
+/* Description: NI1 credit underflow */
+#define SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW_SHFT 15
+#define SH_XN_ERROR_MASK_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000
+
+/* SH_XN_ERROR_MASK_NI1_LLP_ERROR */
+/* Description: NI1 llp error */
+#define SH_XN_ERROR_MASK_NI1_LLP_ERROR_SHFT 16
+#define SH_XN_ERROR_MASK_NI1_LLP_ERROR_MASK 0x0000000000010000
+
+/* SH_XN_ERROR_MASK_NI1_PIPE_ERROR */
+/* Description: NI1 pipe in/out error */
+#define SH_XN_ERROR_MASK_NI1_PIPE_ERROR_SHFT 17
+#define SH_XN_ERROR_MASK_NI1_PIPE_ERROR_MASK 0x0000000000020000
+
+/* SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW */
+/* Description: XNMD credit overflow */
+#define SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW_SHFT 18
+#define SH_XN_ERROR_MASK_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000
+
+/* SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW */
+/* Description: XNMD debit overflow */
+#define SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW_SHFT 19
+#define SH_XN_ERROR_MASK_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000
+
+/* SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW */
+/* Description: XNMD data buffer overflow */
+#define SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW_SHFT 20
+#define SH_XN_ERROR_MASK_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000
+
+/* SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW */
+/* Description: XNMD credit underflow */
+#define SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW_SHFT 21
+#define SH_XN_ERROR_MASK_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000
+
+/* SH_XN_ERROR_MASK_XNMD_SBE_ERROR */
+/* Description: XNMD single bit error */
+#define SH_XN_ERROR_MASK_XNMD_SBE_ERROR_SHFT 22
+#define SH_XN_ERROR_MASK_XNMD_SBE_ERROR_MASK 0x0000000000400000
+
+/* SH_XN_ERROR_MASK_XNMD_UCE_ERROR */
+/* Description: XNMD uncorrectable error */
+#define SH_XN_ERROR_MASK_XNMD_UCE_ERROR_SHFT 23
+#define SH_XN_ERROR_MASK_XNMD_UCE_ERROR_MASK 0x0000000000800000
+
+/* SH_XN_ERROR_MASK_XNMD_LUT_ERROR */
+/* Description: XNMD look up table error */
+#define SH_XN_ERROR_MASK_XNMD_LUT_ERROR_SHFT 24
+#define SH_XN_ERROR_MASK_XNMD_LUT_ERROR_MASK 0x0000000001000000
+
+/* SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW */
+/* Description: XNPI credit overflow */
+#define SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW_SHFT 25
+#define SH_XN_ERROR_MASK_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000
+
+/* SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW */
+/* Description: XNPI debit overflow */
+#define SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW_SHFT 26
+#define SH_XN_ERROR_MASK_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000
+
+/* SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW */
+/* Description: XNPI data buffer overflow */
+#define SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW_SHFT 27
+#define SH_XN_ERROR_MASK_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000
+
+/* SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW */
+/* Description: XNPI credit underflow */
+#define SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW_SHFT 28
+#define SH_XN_ERROR_MASK_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000
+
+/* SH_XN_ERROR_MASK_XNPI_SBE_ERROR */
+/* Description: XNPI single bit error */
+#define SH_XN_ERROR_MASK_XNPI_SBE_ERROR_SHFT 29
+#define SH_XN_ERROR_MASK_XNPI_SBE_ERROR_MASK 0x0000000020000000
+
+/* SH_XN_ERROR_MASK_XNPI_UCE_ERROR */
+/* Description: XNPI uncorrectable error */
+#define SH_XN_ERROR_MASK_XNPI_UCE_ERROR_SHFT 30
+#define SH_XN_ERROR_MASK_XNPI_UCE_ERROR_MASK 0x0000000040000000
+
+/* SH_XN_ERROR_MASK_XNPI_LUT_ERROR */
+/* Description: XNPI look up table error */
+#define SH_XN_ERROR_MASK_XNPI_LUT_ERROR_SHFT 31
+#define SH_XN_ERROR_MASK_XNPI_LUT_ERROR_MASK 0x0000000080000000
+
+/* SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW */
+/* Description: IILB debit overflow */
+#define SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW_SHFT 32
+#define SH_XN_ERROR_MASK_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000
+
+/* SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW */
+/* Description: IILB credit overflow */
+#define SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW_SHFT 33
+#define SH_XN_ERROR_MASK_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000
+
+/* SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW */
+/* Description: IILB fifo overflow */
+#define SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW_SHFT 34
+#define SH_XN_ERROR_MASK_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000
+
+/* SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW */
+/* Description: IILB credit underflow */
+#define SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW_SHFT 35
+#define SH_XN_ERROR_MASK_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000
+
+/* SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW */
+/* Description: IILB fifo underflow */
+#define SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW_SHFT 36
+#define SH_XN_ERROR_MASK_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000
+
+/* SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT */
+/* Description: IILB chiplet nomatch or lut read error */
+#define SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT_SHFT 37
+#define SH_XN_ERROR_MASK_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_FIRST_ERROR" */
+/* ==================================================================== */
+
+#define SH_XN_FIRST_ERROR 0x0000000150040060
+#define SH_XN_FIRST_ERROR_MASK 0x0000003fffffffff
+#define SH_XN_FIRST_ERROR_INIT 0x0000003fffffffff
+
+/* SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW */
+/* Description: NI0 pop overflow */
+#define SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW_SHFT 0
+#define SH_XN_FIRST_ERROR_NI0_POP_OVERFLOW_MASK 0x0000000000000001
+
+/* SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW */
+/* Description: NI0 push overflow */
+#define SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW_SHFT 1
+#define SH_XN_FIRST_ERROR_NI0_PUSH_OVERFLOW_MASK 0x0000000000000002
+
+/* SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW */
+/* Description: NI0 credit overflow */
+#define SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW_SHFT 2
+#define SH_XN_FIRST_ERROR_NI0_CREDIT_OVERFLOW_MASK 0x0000000000000004
+
+/* SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW */
+/* Description: NI0 debit overflow */
+#define SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW_SHFT 3
+#define SH_XN_FIRST_ERROR_NI0_DEBIT_OVERFLOW_MASK 0x0000000000000008
+
+/* SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW */
+/* Description: NI0 pop underflow */
+#define SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW_SHFT 4
+#define SH_XN_FIRST_ERROR_NI0_POP_UNDERFLOW_MASK 0x0000000000000010
+
+/* SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW */
+/* Description: NI0 push underflow */
+#define SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW_SHFT 5
+#define SH_XN_FIRST_ERROR_NI0_PUSH_UNDERFLOW_MASK 0x0000000000000020
+
+/* SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW */
+/* Description: NI0 credit underflow */
+#define SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW_SHFT 6
+#define SH_XN_FIRST_ERROR_NI0_CREDIT_UNDERFLOW_MASK 0x0000000000000040
+
+/* SH_XN_FIRST_ERROR_NI0_LLP_ERROR */
+/* Description: NI0 llp error */
+#define SH_XN_FIRST_ERROR_NI0_LLP_ERROR_SHFT 7
+#define SH_XN_FIRST_ERROR_NI0_LLP_ERROR_MASK 0x0000000000000080
+
+/* SH_XN_FIRST_ERROR_NI0_PIPE_ERROR */
+/* Description: NI0 Pipe in/out errors */
+#define SH_XN_FIRST_ERROR_NI0_PIPE_ERROR_SHFT 8
+#define SH_XN_FIRST_ERROR_NI0_PIPE_ERROR_MASK 0x0000000000000100
+
+/* SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW */
+/* Description: NI1 pop overflow */
+#define SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW_SHFT 9
+#define SH_XN_FIRST_ERROR_NI1_POP_OVERFLOW_MASK 0x0000000000000200
+
+/* SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW */
+/* Description: NI1 push overflow */
+#define SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW_SHFT 10
+#define SH_XN_FIRST_ERROR_NI1_PUSH_OVERFLOW_MASK 0x0000000000000400
+
+/* SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW */
+/* Description: NI1 credit overflow */
+#define SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW_SHFT 11
+#define SH_XN_FIRST_ERROR_NI1_CREDIT_OVERFLOW_MASK 0x0000000000000800
+
+/* SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW */
+/* Description: NI1 debit overflow */
+#define SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW_SHFT 12
+#define SH_XN_FIRST_ERROR_NI1_DEBIT_OVERFLOW_MASK 0x0000000000001000
+
+/* SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW */
+/* Description: NI1 pop underflow */
+#define SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW_SHFT 13
+#define SH_XN_FIRST_ERROR_NI1_POP_UNDERFLOW_MASK 0x0000000000002000
+
+/* SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW */
+/* Description: NI1 push underflow */
+#define SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW_SHFT 14
+#define SH_XN_FIRST_ERROR_NI1_PUSH_UNDERFLOW_MASK 0x0000000000004000
+
+/* SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW */
+/* Description: NI1 credit underflow */
+#define SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW_SHFT 15
+#define SH_XN_FIRST_ERROR_NI1_CREDIT_UNDERFLOW_MASK 0x0000000000008000
+
+/* SH_XN_FIRST_ERROR_NI1_LLP_ERROR */
+/* Description: NI1 llp error */
+#define SH_XN_FIRST_ERROR_NI1_LLP_ERROR_SHFT 16
+#define SH_XN_FIRST_ERROR_NI1_LLP_ERROR_MASK 0x0000000000010000
+
+/* SH_XN_FIRST_ERROR_NI1_PIPE_ERROR */
+/* Description: NI1 pipe in/out error */
+#define SH_XN_FIRST_ERROR_NI1_PIPE_ERROR_SHFT 17
+#define SH_XN_FIRST_ERROR_NI1_PIPE_ERROR_MASK 0x0000000000020000
+
+/* SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW */
+/* Description: XNMD credit overflow */
+#define SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW_SHFT 18
+#define SH_XN_FIRST_ERROR_XNMD_CREDIT_OVERFLOW_MASK 0x0000000000040000
+
+/* SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW */
+/* Description: XNMD debit overflow */
+#define SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW_SHFT 19
+#define SH_XN_FIRST_ERROR_XNMD_DEBIT_OVERFLOW_MASK 0x0000000000080000
+
+/* SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW */
+/* Description: XNMD data buffer overflow */
+#define SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW_SHFT 20
+#define SH_XN_FIRST_ERROR_XNMD_DATA_BUFF_OVERFLOW_MASK 0x0000000000100000
+
+/* SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW */
+/* Description: XNMD credit underflow */
+#define SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW_SHFT 21
+#define SH_XN_FIRST_ERROR_XNMD_CREDIT_UNDERFLOW_MASK 0x0000000000200000
+
+/* SH_XN_FIRST_ERROR_XNMD_SBE_ERROR */
+/* Description: XNMD single bit error */
+#define SH_XN_FIRST_ERROR_XNMD_SBE_ERROR_SHFT 22
+#define SH_XN_FIRST_ERROR_XNMD_SBE_ERROR_MASK 0x0000000000400000
+
+/* SH_XN_FIRST_ERROR_XNMD_UCE_ERROR */
+/* Description: XNMD uncorrectable error */
+#define SH_XN_FIRST_ERROR_XNMD_UCE_ERROR_SHFT 23
+#define SH_XN_FIRST_ERROR_XNMD_UCE_ERROR_MASK 0x0000000000800000
+
+/* SH_XN_FIRST_ERROR_XNMD_LUT_ERROR */
+/* Description: XNMD look up table error */
+#define SH_XN_FIRST_ERROR_XNMD_LUT_ERROR_SHFT 24
+#define SH_XN_FIRST_ERROR_XNMD_LUT_ERROR_MASK 0x0000000001000000
+
+/* SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW */
+/* Description: XNPI credit overflow */
+#define SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW_SHFT 25
+#define SH_XN_FIRST_ERROR_XNPI_CREDIT_OVERFLOW_MASK 0x0000000002000000
+
+/* SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW */
+/* Description: XNPI debit overflow */
+#define SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW_SHFT 26
+#define SH_XN_FIRST_ERROR_XNPI_DEBIT_OVERFLOW_MASK 0x0000000004000000
+
+/* SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW */
+/* Description: XNPI data buffer overflow */
+#define SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW_SHFT 27
+#define SH_XN_FIRST_ERROR_XNPI_DATA_BUFF_OVERFLOW_MASK 0x0000000008000000
+
+/* SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW */
+/* Description: XNPI credit underflow */
+#define SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW_SHFT 28
+#define SH_XN_FIRST_ERROR_XNPI_CREDIT_UNDERFLOW_MASK 0x0000000010000000
+
+/* SH_XN_FIRST_ERROR_XNPI_SBE_ERROR */
+/* Description: XNPI single bit error */
+#define SH_XN_FIRST_ERROR_XNPI_SBE_ERROR_SHFT 29
+#define SH_XN_FIRST_ERROR_XNPI_SBE_ERROR_MASK 0x0000000020000000
+
+/* SH_XN_FIRST_ERROR_XNPI_UCE_ERROR */
+/* Description: XNPI uncorrectable error */
+#define SH_XN_FIRST_ERROR_XNPI_UCE_ERROR_SHFT 30
+#define SH_XN_FIRST_ERROR_XNPI_UCE_ERROR_MASK 0x0000000040000000
+
+/* SH_XN_FIRST_ERROR_XNPI_LUT_ERROR */
+/* Description: XNPI look up table error */
+#define SH_XN_FIRST_ERROR_XNPI_LUT_ERROR_SHFT 31
+#define SH_XN_FIRST_ERROR_XNPI_LUT_ERROR_MASK 0x0000000080000000
+
+/* SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW */
+/* Description: IILB debit overflow */
+#define SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW_SHFT 32
+#define SH_XN_FIRST_ERROR_IILB_DEBIT_OVERFLOW_MASK 0x0000000100000000
+
+/* SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW */
+/* Description: IILB credit overflow */
+#define SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW_SHFT 33
+#define SH_XN_FIRST_ERROR_IILB_CREDIT_OVERFLOW_MASK 0x0000000200000000
+
+/* SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW */
+/* Description: IILB fifo overflow */
+#define SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW_SHFT 34
+#define SH_XN_FIRST_ERROR_IILB_FIFO_OVERFLOW_MASK 0x0000000400000000
+
+/* SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW */
+/* Description: IILB credit underflow */
+#define SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW_SHFT 35
+#define SH_XN_FIRST_ERROR_IILB_CREDIT_UNDERFLOW_MASK 0x0000000800000000
+
+/* SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW */
+/* Description: IILB fifo underflow */
+#define SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW_SHFT 36
+#define SH_XN_FIRST_ERROR_IILB_FIFO_UNDERFLOW_MASK 0x0000001000000000
+
+/* SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT */
+/* Description: IILB chiplet nomatch or lut read error */
+#define SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT_SHFT 37
+#define SH_XN_FIRST_ERROR_IILB_CHIPLET_OR_LUT_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_SUMMARY 0x0000000150040200
+#define SH_XNIILB_ERROR_SUMMARY_MASK 0xffffffffffffffff
+#define SH_XNIILB_ERROR_SUMMARY_INIT 0xffffffffffffffff
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0 */
+/* Description: II debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0_SHFT 0
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2 */
+/* Description: II debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2_SHFT 1
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0 */
+/* Description: LB debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0_SHFT 2
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2 */
+/* Description: LB debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2_SHFT 3
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0 */
+/* Description: II VC0 fifo overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0_SHFT 4
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC0_MASK 0x0000000000000010
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2 */
+/* Description: II VC2 fifo overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2_SHFT 5
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_II_VC2_MASK 0x0000000000000020
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0 */
+/* Description: II VC0 fifo underflow */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0_SHFT 6
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC0_MASK 0x0000000000000040
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2 */
+/* Description: II VC2 fifo underflow */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2_SHFT 7
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_II_VC2_MASK 0x0000000000000080
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0_SHFT 8
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC0_MASK 0x0000000000000100
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2_SHFT 9
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_LB_VC2_MASK 0x0000000000000200
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo underflow */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0_SHFT 10
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC0_MASK 0x0000000000000400
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo underflow */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2_SHFT 11
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_LB_VC2_MASK 0x0000000000000800
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0 */
+/* Description: PI Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0_SHFT 32
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2 */
+/* Description: PI Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2_SHFT 33
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 34
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 35
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0 */
+/* Description: MD Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0_SHFT 36
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2 */
+/* Description: MD Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2_SHFT 37
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 38
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 39
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 40
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 41
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51
+#define SH_XNIILB_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61
+#define SH_XNIILB_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH_SHFT 62
+#define SH_XNIILB_ERROR_SUMMARY_CHIPLET_NOMATCH_MASK 0x4000000000000000
+
+/* SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 63
+#define SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x8000000000000000
+
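+/*
+ * Illustrative sketch (not part of the original register set): each
+ * field above is a (_SHFT, _MASK) pair, so a given error condition in a
+ * 64-bit register value is normally tested with the _MASK define and
+ * normalized to bit 0 with the _SHFT define, e.g.:
+ *
+ *	if (summary & SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_MASK)
+ *		lut_err = (summary >>
+ *			   SH_XNIILB_ERROR_SUMMARY_LUT_READ_ERROR_SHFT) & 1;
+ */
+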
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERRORS_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERRORS_ALIAS 0x0000000150040208
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_OVERFLOW 0x0000000150040220
+#define SH_XNIILB_ERROR_OVERFLOW_MASK 0xffffffffffffffff
+#define SH_XNIILB_ERROR_OVERFLOW_INIT 0xffffffffffffffff
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0 */
+/* Description: II debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0_SHFT 0
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2 */
+/* Description: II debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2_SHFT 1
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0 */
+/* Description: LB debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0_SHFT 2
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2 */
+/* Description: LB debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2_SHFT 3
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0 */
+/* Description: II VC0 fifo overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0_SHFT 4
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC0_MASK 0x0000000000000010
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2 */
+/* Description: II VC2 fifo overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2_SHFT 5
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_II_VC2_MASK 0x0000000000000020
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0 */
+/* Description: II VC0 fifo underflow */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0_SHFT 6
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC0_MASK 0x0000000000000040
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2 */
+/* Description: II VC2 fifo underflow */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2_SHFT 7
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_II_VC2_MASK 0x0000000000000080
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0_SHFT 8
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC0_MASK 0x0000000000000100
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2_SHFT 9
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_LB_VC2_MASK 0x0000000000000200
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo underflow */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0_SHFT 10
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC0_MASK 0x0000000000000400
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo underflow */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2_SHFT 11
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_LB_VC2_MASK 0x0000000000000800
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0 */
+/* Description: PI Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0_SHFT 32
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2 */
+/* Description: PI Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2_SHFT 33
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 34
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 35
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0 */
+/* Description: MD Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0_SHFT 36
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2 */
+/* Description: MD Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2_SHFT 37
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 38
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 39
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 40
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 41
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51
+#define SH_XNIILB_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61
+#define SH_XNIILB_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH_SHFT 62
+#define SH_XNIILB_ERROR_OVERFLOW_CHIPLET_NOMATCH_MASK 0x4000000000000000
+
+/* SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 63
+#define SH_XNIILB_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_OVERFLOW_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_OVERFLOW_ALIAS 0x0000000150040228
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_MASK" */
+/* ==================================================================== */
+
+#define SH_XNIILB_ERROR_MASK 0x0000000150040240
+#define SH_XNIILB_ERROR_MASK_MASK 0xffffffffffffffff
+#define SH_XNIILB_ERROR_MASK_INIT 0xffffffffffffffff
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0 */
+/* Description: II debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0_SHFT 0
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2 */
+/* Description: II debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2_SHFT 1
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0 */
+/* Description: LB debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0_SHFT 2
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2 */
+/* Description: LB debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2_SHFT 3
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0 */
+/* Description: II VC0 fifo overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0_SHFT 4
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC0_MASK 0x0000000000000010
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2 */
+/* Description: II VC2 fifo overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2_SHFT 5
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_II_VC2_MASK 0x0000000000000020
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0 */
+/* Description: II VC0 fifo underflow */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0_SHFT 6
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC0_MASK 0x0000000000000040
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2 */
+/* Description: II VC2 fifo underflow */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2_SHFT 7
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_II_VC2_MASK 0x0000000000000080
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0_SHFT 8
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC0_MASK 0x0000000000000100
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2_SHFT 9
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_LB_VC2_MASK 0x0000000000000200
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo underflow */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0_SHFT 10
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC0_MASK 0x0000000000000400
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo underflow */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2_SHFT 11
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_LB_VC2_MASK 0x0000000000000800
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit overflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit underflow Pipe In */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0 */
+/* Description: PI Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0_SHFT 32
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2 */
+/* Description: PI Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2_SHFT 33
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 34
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 35
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0 */
+/* Description: MD Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0_SHFT 36
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2 */
+/* Description: MD Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2_SHFT 37
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 38
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 39
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Fifo Debit0 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 40
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Fifo Debit2 overflow */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 41
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000
+
+/* SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51
+#define SH_XNIILB_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000
+
+/* SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61
+#define SH_XNIILB_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000
+
+/* SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH_SHFT 62
+#define SH_XNIILB_ERROR_MASK_CHIPLET_NOMATCH_MASK 0x4000000000000000
+
+/* SH_XNIILB_ERROR_MASK_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNIILB_ERROR_MASK_LUT_READ_ERROR_SHFT 63
+#define SH_XNIILB_ERROR_MASK_LUT_READ_ERROR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FIRST_ERROR" */
+/* ==================================================================== */
+
+#define SH_XNIILB_FIRST_ERROR 0x0000000150040260
+#define SH_XNIILB_FIRST_ERROR_MASK 0xffffffffffffffff
+#define SH_XNIILB_FIRST_ERROR_INIT 0xffffffffffffffff
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0 */
+/* Description: II debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0_SHFT 0
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT0_MASK 0x0000000000000001
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2 */
+/* Description: II debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2_SHFT 1
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_DEBIT2_MASK 0x0000000000000002
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0 */
+/* Description: LB debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0_SHFT 2
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT0_MASK 0x0000000000000004
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2 */
+/* Description: LB debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2_SHFT 3
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_DEBIT2_MASK 0x0000000000000008
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0 */
+/* Description: II VC0 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0_SHFT 4
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC0_MASK 0x0000000000000010
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2 */
+/* Description: II VC2 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2_SHFT 5
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_II_VC2_MASK 0x0000000000000020
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0 */
+/* Description: II VC0 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0_SHFT 6
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC0_MASK 0x0000000000000040
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2 */
+/* Description: II VC2 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2_SHFT 7
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_II_VC2_MASK 0x0000000000000080
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0_SHFT 8
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC0_MASK 0x0000000000000100
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2_SHFT 9
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_LB_VC2_MASK 0x0000000000000200
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0 */
+/* Description: LB VC0 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0_SHFT 10
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC0_MASK 0x0000000000000400
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2 */
+/* Description: LB VC2 fifo underflow */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2_SHFT 11
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_LB_VC2_MASK 0x0000000000000800
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN_SHFT 12
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000001000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN_SHFT 13
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000002000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN_SHFT 14
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000000004000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN_SHFT 15
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000000008000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN_SHFT 16
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000000010000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN_SHFT 17
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000000020000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN_SHFT 18
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000000040000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN_SHFT 19
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000000080000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN_SHFT 20
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000000100000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit overflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN_SHFT 21
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000000200000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN */
+/* Description: PI VC0 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN_SHFT 22
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_IN_MASK 0x0000000000400000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN */
+/* Description: IILB VC0 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN_SHFT 23
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_IN_MASK 0x0000000000800000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN */
+/* Description: MD VC0 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN_SHFT 24
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_IN_MASK 0x0000000001000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN */
+/* Description: NI0 VC0 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN_SHFT 25
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_IN_MASK 0x0000000002000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN */
+/* Description: NI1 VC0 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN_SHFT 26
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_IN_MASK 0x0000000004000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN */
+/* Description: PI VC2 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN_SHFT 27
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_IN_MASK 0x0000000008000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN */
+/* Description: IILB VC2 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN_SHFT 28
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_IN_MASK 0x0000000010000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN */
+/* Description: MD VC2 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN_SHFT 29
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_IN_MASK 0x0000000020000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN */
+/* Description: NI0 VC2 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN_SHFT 30
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_IN_MASK 0x0000000040000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN */
+/* Description: NI1 VC2 credit underflow Pipe In */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN_SHFT 31
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_IN_MASK 0x0000000080000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0 */
+/* Description: PI Fifo Debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0_SHFT 32
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT0_MASK 0x0000000100000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2 */
+/* Description: PI Fifo Debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2_SHFT 33
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_DEBIT2_MASK 0x0000000200000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Fifo Debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 34
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000400000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Fifo Debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 35
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000000800000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0 */
+/* Description: MD Fifo Debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0_SHFT 36
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT0_MASK 0x0000001000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2 */
+/* Description: MD Fifo Debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2_SHFT 37
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_DEBIT2_MASK 0x0000002000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Fifo Debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 38
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000004000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Fifo Debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 39
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000008000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Fifo Debit0 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 40
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000010000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Fifo Debit2 overflow */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 41
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000020000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT_SHFT 42
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0000040000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT_SHFT 43
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0000080000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT_SHFT 44
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0000100000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT_SHFT 45
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0000200000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT_SHFT 46
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0000400000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT_SHFT 47
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0000800000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT_SHFT 48
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0001000000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT_SHFT 49
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0002000000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT_SHFT 50
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x0004000000000000
+
+/* SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit overflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT_SHFT 51
+#define SH_XNIILB_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x0008000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT */
+/* Description: PI VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT_SHFT 52
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC0_CREDIT_OUT_MASK 0x0010000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT */
+/* Description: PI VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT_SHFT 53
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_PI_VC2_CREDIT_OUT_MASK 0x0020000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT */
+/* Description: MD VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT_SHFT 54
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC0_CREDIT_OUT_MASK 0x0040000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT */
+/* Description: MD VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT_SHFT 55
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_MD_VC2_CREDIT_OUT_MASK 0x0080000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT */
+/* Description: IILB VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT_SHFT 56
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_OUT_MASK 0x0100000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT */
+/* Description: IILB VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT_SHFT 57
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_OUT_MASK 0x0200000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT */
+/* Description: NI0 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT_SHFT 58
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_OUT_MASK 0x0400000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT */
+/* Description: NI0 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT_SHFT 59
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_OUT_MASK 0x0800000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT */
+/* Description: NI1 VC0 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT_SHFT 60
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_OUT_MASK 0x1000000000000000
+
+/* SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT */
+/* Description: NI1 VC2 Credit underflow Pipe Out */
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT_SHFT 61
+#define SH_XNIILB_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_OUT_MASK 0x2000000000000000
+
+/* SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH */
+/* Description: chiplet nomatch */
+#define SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH_SHFT 62
+#define SH_XNIILB_FIRST_ERROR_CHIPLET_NOMATCH_MASK 0x4000000000000000
+
+/* SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR_SHFT 63
+#define SH_XNIILB_FIRST_ERROR_LUT_READ_ERROR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+#define SH_XNPI_ERROR_SUMMARY 0x0000000150040300
+#define SH_XNPI_ERROR_SUMMARY_MASK 0x0003ffffffffffff
+#define SH_XNPI_ERROR_SUMMARY_INIT 0x0003ffffffffffff
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 18
+#define SH_XNPI_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNPI_ERROR_SUMMARY_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0_SHFT 23
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1_SHFT 24
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2_SHFT 25
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3_SHFT 26
+#define SH_XNPI_ERROR_SUMMARY_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNPI_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel Fifo Overflow */
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNPI_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERRORS_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNPI_ERRORS_ALIAS 0x0000000150040308
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+#define SH_XNPI_ERROR_OVERFLOW 0x0000000150040320
+#define SH_XNPI_ERROR_OVERFLOW_MASK 0x0003ffffffffffff
+#define SH_XNPI_ERROR_OVERFLOW_INIT 0x0003ffffffffffff
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 18
+#define SH_XNPI_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNPI_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0_SHFT 23
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1_SHFT 24
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2_SHFT 25
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3_SHFT 26
+#define SH_XNPI_ERROR_OVERFLOW_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNPI_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel Fifo Overflow */
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNPI_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_OVERFLOW_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNPI_ERROR_OVERFLOW_ALIAS 0x0000000150040328
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_MASK" */
+/* ==================================================================== */
+
+#define SH_XNPI_ERROR_MASK 0x0000000150040340
+#define SH_XNPI_ERROR_MASK_MASK 0x0003ffffffffffff
+#define SH_XNPI_ERROR_MASK_INIT 0x0003ffffffffffff
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNPI_ERROR_MASK_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNPI_ERROR_MASK_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNPI_ERROR_MASK_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNPI_ERROR_MASK_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNPI_ERROR_MASK_LUT_READ_ERROR_SHFT 18
+#define SH_XNPI_ERROR_MASK_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNPI_ERROR_MASK_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNPI_ERROR_MASK_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR0_SHFT 23
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNPI_ERROR_MASK_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR1_SHFT 24
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNPI_ERROR_MASK_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR2_SHFT 25
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNPI_ERROR_MASK_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR3_SHFT 26
+#define SH_XNPI_ERROR_MASK_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNPI_ERROR_MASK_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNPI_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNPI_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNPI_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel Fifo Overflow */
+#define SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNPI_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FIRST_ERROR" */
+/* ==================================================================== */
+
+#define SH_XNPI_FIRST_ERROR 0x0000000150040360
+#define SH_XNPI_FIRST_ERROR_MASK 0x0003ffffffffffff
+#define SH_XNPI_FIRST_ERROR_INIT 0x0003ffffffffffff
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNPI_FIRST_ERROR_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNPI_FIRST_ERROR_LUT_READ_ERROR_SHFT 18
+#define SH_XNPI_FIRST_ERROR_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNPI_FIRST_ERROR_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR0_SHFT 23
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR1_SHFT 24
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR2_SHFT 25
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNPI_FIRST_ERROR_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR3_SHFT 26
+#define SH_XNPI_FIRST_ERROR_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNPI_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel Fifo Overflow */
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNPI_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+#define SH_XNMD_ERROR_SUMMARY 0x0000000150040400
+#define SH_XNMD_ERROR_SUMMARY_MASK 0x0003ffffffffffff
+#define SH_XNMD_ERROR_SUMMARY_INIT 0x0003ffffffffffff
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR_SHFT 18
+#define SH_XNMD_ERROR_SUMMARY_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNMD_ERROR_SUMMARY_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0_SHFT 23
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1_SHFT 24
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2_SHFT 25
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3_SHFT 26
+#define SH_XNMD_ERROR_SUMMARY_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNMD_ERROR_SUMMARY_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel FIFO Overflow */
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNMD_ERROR_SUMMARY_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERRORS_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNMD_ERRORS_ALIAS 0x0000000150040408
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+#define SH_XNMD_ERROR_OVERFLOW 0x0000000150040420
+#define SH_XNMD_ERROR_OVERFLOW_MASK 0x0003ffffffffffff
+#define SH_XNMD_ERROR_OVERFLOW_INIT 0x0003ffffffffffff
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR_SHFT 18
+#define SH_XNMD_ERROR_OVERFLOW_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNMD_ERROR_OVERFLOW_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0_SHFT 23
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1_SHFT 24
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2_SHFT 25
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3_SHFT 26
+#define SH_XNMD_ERROR_OVERFLOW_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNMD_ERROR_OVERFLOW_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel FIFO Overflow */
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNMD_ERROR_OVERFLOW_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_OVERFLOW_ALIAS" */
+/* ==================================================================== */
+
+#define SH_XNMD_ERROR_OVERFLOW_ALIAS 0x0000000150040428
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_MASK" */
+/* ==================================================================== */
+
+#define SH_XNMD_ERROR_MASK 0x0000000150040440
+#define SH_XNMD_ERROR_MASK_MASK 0x0003ffffffffffff
+#define SH_XNMD_ERROR_MASK_INIT 0x0003ffffffffffff
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNMD_ERROR_MASK_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNMD_ERROR_MASK_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNMD_ERROR_MASK_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNMD_ERROR_MASK_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNMD_ERROR_MASK_LUT_READ_ERROR_SHFT 18
+#define SH_XNMD_ERROR_MASK_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNMD_ERROR_MASK_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNMD_ERROR_MASK_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR0_SHFT 23
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNMD_ERROR_MASK_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR1_SHFT 24
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNMD_ERROR_MASK_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR2_SHFT 25
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNMD_ERROR_MASK_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR3_SHFT 26
+#define SH_XNMD_ERROR_MASK_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNMD_ERROR_MASK_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNMD_ERROR_MASK_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNMD_ERROR_MASK_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNMD_ERROR_MASK_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel FIFO Overflow */
+#define SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNMD_ERROR_MASK_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FIRST_ERROR" */
+/* ==================================================================== */
+
+#define SH_XNMD_FIRST_ERROR 0x0000000150040460
+#define SH_XNMD_FIRST_ERROR_MASK 0x0003ffffffffffff
+#define SH_XNMD_FIRST_ERROR_INIT 0x0003ffffffffffff
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_SHFT 0
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_MASK 0x0000000000000001
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0 */
+/* Description: NI0 VC0 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_SHFT 1
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_MASK 0x0000000000000002
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_SHFT 2
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_MASK 0x0000000000000004
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2 */
+/* Description: NI0 VC2 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_SHFT 3
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_MASK 0x0000000000000008
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_SHFT 4
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_MASK 0x0000000000000010
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0 */
+/* Description: NI1 VC0 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_SHFT 5
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_MASK 0x0000000000000020
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_SHFT 6
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_MASK 0x0000000000000040
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2 */
+/* Description: NI1 VC2 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_SHFT 7
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_MASK 0x0000000000000080
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_SHFT 8
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_MASK 0x0000000000000100
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0 */
+/* Description: IILB VC0 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_SHFT 9
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_MASK 0x0000000000000200
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_SHFT 10
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_MASK 0x0000000000000400
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2 */
+/* Description: IILB VC2 fifo overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_SHFT 11
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_MASK 0x0000000000000800
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_SHFT 12
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC0_CREDIT_MASK 0x0000000000001000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT */
+/* Description: VC0 Credit overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT_SHFT 13
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC0_CREDIT_MASK 0x0000000000002000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_SHFT 14
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_VC2_CREDIT_MASK 0x0000000000004000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT */
+/* Description: VC2 Credit overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT_SHFT 15
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_VC2_CREDIT_MASK 0x0000000000008000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0 */
+/* Description: VC0 Data Buffer overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_SHFT 16
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC0_MASK 0x0000000000010000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2 */
+/* Description: VC2 Data Buffer overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_SHFT 17
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_DATABUFF_VC2_MASK 0x0000000000020000
+
+/* SH_XNMD_FIRST_ERROR_LUT_READ_ERROR */
+/* Description: LUT Read Error */
+#define SH_XNMD_FIRST_ERROR_LUT_READ_ERROR_SHFT 18
+#define SH_XNMD_FIRST_ERROR_LUT_READ_ERROR_MASK 0x0000000000040000
+
+/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0 */
+/* Description: Single Bit Error in Bits 63:0 */
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0_SHFT 19
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR0_MASK 0x0000000000080000
+
+/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1 */
+/* Description: Single Bit Error in Bits 127:64 */
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1_SHFT 20
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR1_MASK 0x0000000000100000
+
+/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2 */
+/* Description: Single Bit Error in Bits 191:128 */
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2_SHFT 21
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR2_MASK 0x0000000000200000
+
+/* SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3 */
+/* Description: Single Bit Error in Bits 255:192 */
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3_SHFT 22
+#define SH_XNMD_FIRST_ERROR_SINGLE_BIT_ERROR3_MASK 0x0000000000400000
+
+/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR0 */
+/* Description: Uncorrectable Error in Bits 63:0 */
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR0_SHFT 23
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR0_MASK 0x0000000000800000
+
+/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR1 */
+/* Description: Uncorrectable Error in Bits 127:64 */
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR1_SHFT 24
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR1_MASK 0x0000000001000000
+
+/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR2 */
+/* Description: Uncorrectable Error in Bits 191:128 */
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR2_SHFT 25
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR2_MASK 0x0000000002000000
+
+/* SH_XNMD_FIRST_ERROR_UNCOR_ERROR3 */
+/* Description: Uncorrectable Error in Bits 255:192 */
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR3_SHFT 26
+#define SH_XNMD_FIRST_ERROR_UNCOR_ERROR3_MASK 0x0000000004000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_SHFT 27
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR0_MASK 0x0000000008000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0 */
+/* Description: SIC Counter 0 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0_SHFT 28
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR0_MASK 0x0000000010000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_SHFT 29
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_SIC_CNTR2_MASK 0x0000000020000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2 */
+/* Description: SIC Counter 2 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2_SHFT 30
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_SIC_CNTR2_MASK 0x0000000040000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0 */
+/* Description: NI0 Debit 0 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_SHFT 31
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT0_MASK 0x0000000080000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2 */
+/* Description: NI0 Debit 2 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_SHFT 32
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_DEBIT2_MASK 0x0000000100000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0 */
+/* Description: NI1 Debit 0 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_SHFT 33
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT0_MASK 0x0000000200000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2 */
+/* Description: NI1 Debit 2 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_SHFT 34
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_DEBIT2_MASK 0x0000000400000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0 */
+/* Description: IILB Debit 0 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_SHFT 35
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT0_MASK 0x0000000800000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2 */
+/* Description: IILB Debit 2 Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_SHFT 36
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_DEBIT2_MASK 0x0000001000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_SHFT 37
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC0_CREDIT_MASK 0x0000002000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT */
+/* Description: NI0 VC0 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_SHFT 38
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_CREDIT_MASK 0x0000004000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_SHFT 39
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI0_VC2_CREDIT_MASK 0x0000008000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT */
+/* Description: NI0 VC2 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_SHFT 40
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC2_CREDIT_MASK 0x0000010000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_SHFT 41
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC0_CREDIT_MASK 0x0000020000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT */
+/* Description: NI1 VC0 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_SHFT 42
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC0_CREDIT_MASK 0x0000040000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_SHFT 43
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_NI1_VC2_CREDIT_MASK 0x0000080000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT */
+/* Description: NI1 VC2 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_SHFT 44
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI1_VC2_CREDIT_MASK 0x0000100000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_SHFT 45
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC0_CREDIT_MASK 0x0000200000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT */
+/* Description: IILB VC0 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_SHFT 46
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC0_CREDIT_MASK 0x0000400000000000
+
+/* SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Underflow */
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_SHFT 47
+#define SH_XNMD_FIRST_ERROR_UNDERFLOW_IILB_VC2_CREDIT_MASK 0x0000800000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT */
+/* Description: IILB VC2 Credit Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_SHFT 48
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_IILB_VC2_CREDIT_MASK 0x0001000000000000
+
+/* SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO */
+/* Description: Header Cancel Fifo Overflow */
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_SHFT 49
+#define SH_XNMD_FIRST_ERROR_OVERFLOW_HEADER_CANCEL_FIFO_MASK 0x0002000000000000
+
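Each error condition above comes as a _SHFT/_MASK pair. As a hedged illustration (a standalone sketch, not kernel code), the pair can be used to test which condition the first-error register latched; the two macros used are reproduced so the sketch compiles on its own:

```c
#include <stdint.h>

/* Reproduced from the definitions above so this sketch is self-contained. */
#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_SHFT 1
#define SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0_MASK 0x0000000000000002ULL

/* Extract a named field: mask off its bits, then shift them down to bit 0. */
#define SH_FIELD_GET(val, name) (((val) & name##_MASK) >> name##_SHFT)

/* Returns non-zero iff the latched first error was an NI0 VC0 fifo overflow.
 * (Helper name is invented for illustration.) */
static inline int first_error_is_ni0_vc0_overflow(uint64_t first_error)
{
	return SH_FIELD_GET(first_error, SH_XNMD_FIRST_ERROR_OVERFLOW_NI0_VC0) != 0;
}
```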
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_ENABLE0" */
+/* Automatic Maintenance Reply Enable 0 */
+/* ==================================================================== */
+
+#define SH_AUTO_REPLY_ENABLE0 0x0000000110061000
+#define SH_AUTO_REPLY_ENABLE0_MASK 0xffffffffffffffff
+#define SH_AUTO_REPLY_ENABLE0_INIT 0x0000000000000000
+
+/* SH_AUTO_REPLY_ENABLE0_ENABLE0 */
+/* Description: Enable 0 */
+#define SH_AUTO_REPLY_ENABLE0_ENABLE0_SHFT 0
+#define SH_AUTO_REPLY_ENABLE0_ENABLE0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_ENABLE1" */
+/* Automatic Maintenance Reply Enable 1 */
+/* ==================================================================== */
+
+#define SH_AUTO_REPLY_ENABLE1 0x0000000110061080
+#define SH_AUTO_REPLY_ENABLE1_MASK 0xffffffffffffffff
+#define SH_AUTO_REPLY_ENABLE1_INIT 0x0000000000000000
+
+/* SH_AUTO_REPLY_ENABLE1_ENABLE1 */
+/* Description: Enable 1 */
+#define SH_AUTO_REPLY_ENABLE1_ENABLE1_SHFT 0
+#define SH_AUTO_REPLY_ENABLE1_ENABLE1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_HEADER0" */
+/* Automatic Maintenance Reply Header 0 */
+/* ==================================================================== */
+
+#define SH_AUTO_REPLY_HEADER0 0x0000000110061100
+#define SH_AUTO_REPLY_HEADER0_MASK 0xffffffffffffffff
+#define SH_AUTO_REPLY_HEADER0_INIT 0x0000000000000000
+
+/* SH_AUTO_REPLY_HEADER0_HEADER0 */
+/* Description: Header 0 */
+#define SH_AUTO_REPLY_HEADER0_HEADER0_SHFT 0
+#define SH_AUTO_REPLY_HEADER0_HEADER0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_HEADER1" */
+/* Automatic Maintenance Reply Header 1 */
+/* ==================================================================== */
+
+#define SH_AUTO_REPLY_HEADER1 0x0000000110061180
+#define SH_AUTO_REPLY_HEADER1_MASK 0xffffffffffffffff
+#define SH_AUTO_REPLY_HEADER1_INIT 0x0000000000000000
+
+/* SH_AUTO_REPLY_HEADER1_HEADER1 */
+/* Description: Header 1 */
+#define SH_AUTO_REPLY_HEADER1_HEADER1_SHFT 0
+#define SH_AUTO_REPLY_HEADER1_HEADER1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_ENABLE_RP_AUTO_REPLY" */
+/* Enable Automatic Maintenance Reply From Reply Queue */
+/* ==================================================================== */
+
+#define SH_ENABLE_RP_AUTO_REPLY 0x0000000110061200
+#define SH_ENABLE_RP_AUTO_REPLY_MASK 0x0000000000000001
+#define SH_ENABLE_RP_AUTO_REPLY_INIT 0x0000000000000000
+
+/* SH_ENABLE_RP_AUTO_REPLY_ENABLE */
+/* Description: Enable Reply Auto Reply */
+#define SH_ENABLE_RP_AUTO_REPLY_ENABLE_SHFT 0
+#define SH_ENABLE_RP_AUTO_REPLY_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_ENABLE_RQ_AUTO_REPLY" */
+/* Enable Automatic Maintenance Reply From Request Queue */
+/* ==================================================================== */
+
+#define SH_ENABLE_RQ_AUTO_REPLY 0x0000000110061280
+#define SH_ENABLE_RQ_AUTO_REPLY_MASK 0x0000000000000001
+#define SH_ENABLE_RQ_AUTO_REPLY_INIT 0x0000000000000000
+
+/* SH_ENABLE_RQ_AUTO_REPLY_ENABLE */
+/* Description: Enable Request Auto Reply */
+#define SH_ENABLE_RQ_AUTO_REPLY_ENABLE_SHFT 0
+#define SH_ENABLE_RQ_AUTO_REPLY_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_REDIRECT_INVAL" */
+/* Redirect invalidate to LB instead of PI */
+/* ==================================================================== */
+
+#define SH_REDIRECT_INVAL 0x0000000110061300
+#define SH_REDIRECT_INVAL_MASK 0x0000000000000001
+#define SH_REDIRECT_INVAL_INIT 0x0000000000000000
+
+/* SH_REDIRECT_INVAL_REDIRECT */
+/* Description: Redirect invalidates to LB instead of PI */
+#define SH_REDIRECT_INVAL_REDIRECT_SHFT 0
+#define SH_REDIRECT_INVAL_REDIRECT_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_CNTRL" */
+/* Diagnostic Message Control Register */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_CNTRL 0x0000000110062000
+#define SH_DIAG_MSG_CNTRL_MASK 0xc000000000003fff
+#define SH_DIAG_MSG_CNTRL_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_CNTRL_MSG_LENGTH */
+/* Description: Message data payload length, 0 - 63 */
+#define SH_DIAG_MSG_CNTRL_MSG_LENGTH_SHFT 0
+#define SH_DIAG_MSG_CNTRL_MSG_LENGTH_MASK 0x000000000000003f
+
+/* SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT */
+/* Description: Point in the message at which the error bit is activated */

+#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT_SHFT 6
+#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_POINT_MASK 0x0000000000000fc0
+
+/* SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE */
+/* Description: Enable ERROR_INJECT_POINT field */
+#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE_SHFT 12
+#define SH_DIAG_MSG_CNTRL_ERROR_INJECT_ENABLE_MASK 0x0000000000001000
+
+/* SH_DIAG_MSG_CNTRL_PORT */
+/* Description: 0 = request port, 1 = reply port */
+#define SH_DIAG_MSG_CNTRL_PORT_SHFT 13
+#define SH_DIAG_MSG_CNTRL_PORT_MASK 0x0000000000002000
+
+/* SH_DIAG_MSG_CNTRL_START */
+/* Description: Start */
+#define SH_DIAG_MSG_CNTRL_START_SHFT 62
+#define SH_DIAG_MSG_CNTRL_START_MASK 0x4000000000000000
+
+/* SH_DIAG_MSG_CNTRL_BUSY */
+/* Description: Busy */
+#define SH_DIAG_MSG_CNTRL_BUSY_SHFT 63
+#define SH_DIAG_MSG_CNTRL_BUSY_MASK 0x8000000000000000
+
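Assuming the fields above are composed into one write before the hardware polls BUSY, a control word might be built as follows (a sketch only; the _SHFT values are reproduced from above and the helper name is invented):

```c
#include <stdint.h>

/* Shift values reproduced from the SH_DIAG_MSG_CNTRL definitions above. */
#define SH_DIAG_MSG_CNTRL_MSG_LENGTH_SHFT 0
#define SH_DIAG_MSG_CNTRL_PORT_SHFT       13
#define SH_DIAG_MSG_CNTRL_START_SHFT      62

/* Compose a control value: payload length (0 - 63), port select
 * (0 = request port, 1 = reply port), with START set to launch the
 * diagnostic message. */
static inline uint64_t diag_msg_cntrl(uint64_t msg_length, uint64_t reply_port)
{
	return (msg_length << SH_DIAG_MSG_CNTRL_MSG_LENGTH_SHFT)
	     | (reply_port << SH_DIAG_MSG_CNTRL_PORT_SHFT)
	     | (1ULL << SH_DIAG_MSG_CNTRL_START_SHFT);
}
```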
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA0L 0x0000000110062080
+#define SH_DIAG_MSG_DATA0L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA0L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA0L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA0L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA0L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA0U 0x0000000110062100
+#define SH_DIAG_MSG_DATA0U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA0U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA0U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA0U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA0U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA1L 0x0000000110062180
+#define SH_DIAG_MSG_DATA1L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA1L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA1L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA1L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA1L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA1U 0x0000000110062200
+#define SH_DIAG_MSG_DATA1U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA1U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA1U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA1U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA1U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA2L 0x0000000110062280
+#define SH_DIAG_MSG_DATA2L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA2L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA2L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA2L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA2L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA2U 0x0000000110062300
+#define SH_DIAG_MSG_DATA2U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA2U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA2U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA2U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA2U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA3L 0x0000000110062380
+#define SH_DIAG_MSG_DATA3L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA3L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA3L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA3L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA3L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA3U 0x0000000110062400
+#define SH_DIAG_MSG_DATA3U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA3U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA3U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA3U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA3U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA4L 0x0000000110062480
+#define SH_DIAG_MSG_DATA4L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA4L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA4L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA4L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA4L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA4U 0x0000000110062500
+#define SH_DIAG_MSG_DATA4U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA4U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA4U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA4U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA4U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA5L 0x0000000110062580
+#define SH_DIAG_MSG_DATA5L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA5L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA5L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA5L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA5L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA5U 0x0000000110062600
+#define SH_DIAG_MSG_DATA5U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA5U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA5U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA5U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA5U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA6L 0x0000000110062680
+#define SH_DIAG_MSG_DATA6L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA6L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA6L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA6L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA6L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA6U 0x0000000110062700
+#define SH_DIAG_MSG_DATA6U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA6U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA6U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA6U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA6U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA7L 0x0000000110062780
+#define SH_DIAG_MSG_DATA7L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA7L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA7L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA7L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA7L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA7U 0x0000000110062800
+#define SH_DIAG_MSG_DATA7U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA7U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA7U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA7U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA7U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA8L 0x0000000110062880
+#define SH_DIAG_MSG_DATA8L_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA8L_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA8L_DATA_LOWER */
+/* Description: Lower 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA8L_DATA_LOWER_SHFT 0
+#define SH_DIAG_MSG_DATA8L_DATA_LOWER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_DATA8U 0x0000000110062900
+#define SH_DIAG_MSG_DATA8U_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_DATA8U_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_DATA8U_DATA_UPPER */
+/* Description: Upper 64 bits of Diagnostic Message Data */
+#define SH_DIAG_MSG_DATA8U_DATA_UPPER_SHFT 0
+#define SH_DIAG_MSG_DATA8U_DATA_UPPER_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR0" */
+/* Diagnostic Data, lower 64 bits of header */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_HDR0 0x0000000110062980
+#define SH_DIAG_MSG_HDR0_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_HDR0_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_HDR0_HEADER0 */
+/* Description: Lower 64 bits of Diagnostic Message Header */
+#define SH_DIAG_MSG_HDR0_HEADER0_SHFT 0
+#define SH_DIAG_MSG_HDR0_HEADER0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR1" */
+/* Diagnostic Data, upper 64 bits of header */
+/* ==================================================================== */
+
+#define SH_DIAG_MSG_HDR1 0x0000000110062a00
+#define SH_DIAG_MSG_HDR1_MASK 0xffffffffffffffff
+#define SH_DIAG_MSG_HDR1_INIT 0x0000000000000000
+
+/* SH_DIAG_MSG_HDR1_HEADER1 */
+/* Description: Upper 64 bits of Diagnostic Message Header */
+#define SH_DIAG_MSG_HDR1_HEADER1_SHFT 0
+#define SH_DIAG_MSG_HDR1_HEADER1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_DEBUG_SELECT" */
+/* SHub Debug Port Select */
+/* ==================================================================== */
+
+#define SH_DEBUG_SELECT 0x0000000110063000
+#define SH_DEBUG_SELECT_MASK 0x8fffffffffffffff
+#define SH_DEBUG_SELECT_INIT 0x0000e38e38e38e38
+
+/* SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble0_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL_SHFT 0
+#define SH_DEBUG_SELECT_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000007
+
+/* SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL */
+/* Description: Nibble0_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL_SHFT 3
+#define SH_DEBUG_SELECT_NIBBLE0_CHIPLET_SEL_MASK 0x0000000000000038
+
+/* SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble1_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL_SHFT 6
+#define SH_DEBUG_SELECT_NIBBLE1_NIBBLE_SEL_MASK 0x00000000000001c0
+
+/* SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL */
+/* Description: Nibble1_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL_SHFT 9
+#define SH_DEBUG_SELECT_NIBBLE1_CHIPLET_SEL_MASK 0x0000000000000e00
+
+/* SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble2_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL_SHFT 12
+#define SH_DEBUG_SELECT_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL */
+/* Description: Nibble2_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL_SHFT 15
+#define SH_DEBUG_SELECT_NIBBLE2_CHIPLET_SEL_MASK 0x0000000000038000
+
+/* SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble3_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL_SHFT 18
+#define SH_DEBUG_SELECT_NIBBLE3_NIBBLE_SEL_MASK 0x00000000001c0000
+
+/* SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL */
+/* Description: Nibble3_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL_SHFT 21
+#define SH_DEBUG_SELECT_NIBBLE3_CHIPLET_SEL_MASK 0x0000000000e00000
+
+/* SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble4_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL_SHFT 24
+#define SH_DEBUG_SELECT_NIBBLE4_NIBBLE_SEL_MASK 0x0000000007000000
+
+/* SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL */
+/* Description: Nibble4_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL_SHFT 27
+#define SH_DEBUG_SELECT_NIBBLE4_CHIPLET_SEL_MASK 0x0000000038000000
+
+/* SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble5_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL_SHFT 30
+#define SH_DEBUG_SELECT_NIBBLE5_NIBBLE_SEL_MASK 0x00000001c0000000
+
+/* SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL */
+/* Description: Nibble5_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL_SHFT 33
+#define SH_DEBUG_SELECT_NIBBLE5_CHIPLET_SEL_MASK 0x0000000e00000000
+
+/* SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble6_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL_SHFT 36
+#define SH_DEBUG_SELECT_NIBBLE6_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL */
+/* Description: Nibble6_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL_SHFT 39
+#define SH_DEBUG_SELECT_NIBBLE6_CHIPLET_SEL_MASK 0x0000038000000000
+
+/* SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble7_nibble_select */
+#define SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL_SHFT 42
+#define SH_DEBUG_SELECT_NIBBLE7_NIBBLE_SEL_MASK 0x00001c0000000000
+
+/* SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL */
+/* Description: Nibble7_chiplet_select */
+#define SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL_SHFT 45
+#define SH_DEBUG_SELECT_NIBBLE7_CHIPLET_SEL_MASK 0x0000e00000000000
+
+/* SH_DEBUG_SELECT_DEBUG_II_SEL */
+/* Description: Select bits to II port */
+#define SH_DEBUG_SELECT_DEBUG_II_SEL_SHFT 48
+#define SH_DEBUG_SELECT_DEBUG_II_SEL_MASK 0x0007000000000000
+
+/* SH_DEBUG_SELECT_SEL_II */
+/* Description: Select II to debug port */
+#define SH_DEBUG_SELECT_SEL_II_SHFT 51
+#define SH_DEBUG_SELECT_SEL_II_MASK 0x0ff8000000000000
+
+/* SH_DEBUG_SELECT_TRIGGER_ENABLE */
+/* Description: Enable trigger on bit 32 of Analyzer data */
+#define SH_DEBUG_SELECT_TRIGGER_ENABLE_SHFT 63
+#define SH_DEBUG_SELECT_TRIGGER_ENABLE_MASK 0x8000000000000000
+
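The NIBBLEn fields above repeat every 6 bits: a 3-bit nibble select followed by a 3-bit chiplet select per debug nibble. Under that assumption, a generic packer can be sketched (an illustration with an invented function name, not the kernel's interface):

```c
#include <stdint.h>

/* Sketch: SH_DEBUG_SELECT packs eight (nibble, chiplet) select pairs,
 * 3 bits each, so each debug nibble occupies bits [6*idx+5 : 6*idx]:
 * nibble select in the low 3 bits, chiplet select in the high 3 bits,
 * matching the _SHFT values defined above (0/3, 6/9, 12/15, ...). */
static inline uint64_t debug_select_nibble(unsigned idx,      /* 0..7 */
					   uint64_t chiplet,  /* 0..7 */
					   uint64_t nibble)   /* 0..7 */
{
	return ((chiplet << 3) | nibble) << (6 * idx);
}
```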
+/* ==================================================================== */
+/* Register "SH_TRIGGER_COMPARE_MASK" */
+/* SHub Trigger Compare Mask */
+/* ==================================================================== */
+
+#define SH_TRIGGER_COMPARE_MASK 0x0000000110063080
+#define SH_TRIGGER_COMPARE_MASK_MASK 0x00000000ffffffff
+#define SH_TRIGGER_COMPARE_MASK_INIT 0x0000000000000000
+
+/* SH_TRIGGER_COMPARE_MASK_MASK */
+/* Description: SHub Trigger Compare Mask */
+#define SH_TRIGGER_COMPARE_MASK_MASK_SHFT 0
+#define SH_TRIGGER_COMPARE_MASK_MASK_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_TRIGGER_COMPARE_PATTERN" */
+/* SHub Trigger Compare Pattern */
+/* ==================================================================== */
+
+#define SH_TRIGGER_COMPARE_PATTERN 0x0000000110063100
+#define SH_TRIGGER_COMPARE_PATTERN_MASK 0x00000000ffffffff
+#define SH_TRIGGER_COMPARE_PATTERN_INIT 0x0000000000000000
+
+/* SH_TRIGGER_COMPARE_PATTERN_DATA */
+/* Description: SHub Trigger Compare Pattern */
+#define SH_TRIGGER_COMPARE_PATTERN_DATA_SHFT 0
+#define SH_TRIGGER_COMPARE_PATTERN_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_TRIGGER_SEL" */
+/* Trigger select for SHUB debug port */
+/* ==================================================================== */
+
+#define SH_TRIGGER_SEL 0x0000000110063180
+#define SH_TRIGGER_SEL_MASK 0x7777777777777777
+#define SH_TRIGGER_SEL_INIT 0x0000000000000000
+
+/* SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL */
+/* Description: Nibble 0 input select */
+#define SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL_SHFT 0
+#define SH_TRIGGER_SEL_NIBBLE0_INPUT_SEL_MASK 0x0000000000000007
+
+/* SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL */
+/* Description: Nibble 0 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL_SHFT 4
+#define SH_TRIGGER_SEL_NIBBLE0_NIBBLE_SEL_MASK 0x0000000000000070
+
+/* SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL */
+/* Description: Nibble 1 input select */
+#define SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL_SHFT 8
+#define SH_TRIGGER_SEL_NIBBLE1_INPUT_SEL_MASK 0x0000000000000700
+
+/* SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL */
+/* Description: Nibble 1 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL_SHFT 12
+#define SH_TRIGGER_SEL_NIBBLE1_NIBBLE_SEL_MASK 0x0000000000007000
+
+/* SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL */
+/* Description: Nibble 2 input select */
+#define SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL_SHFT 16
+#define SH_TRIGGER_SEL_NIBBLE2_INPUT_SEL_MASK 0x0000000000070000
+
+/* SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL */
+/* Description: Nibble 2 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL_SHFT 20
+#define SH_TRIGGER_SEL_NIBBLE2_NIBBLE_SEL_MASK 0x0000000000700000
+
+/* SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL */
+/* Description: Nibble 3 input select */
+#define SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL_SHFT 24
+#define SH_TRIGGER_SEL_NIBBLE3_INPUT_SEL_MASK 0x0000000007000000
+
+/* SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL */
+/* Description: Nibble 3 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL_SHFT 28
+#define SH_TRIGGER_SEL_NIBBLE3_NIBBLE_SEL_MASK 0x0000000070000000
+
+/* SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL */
+/* Description: Nibble 4 input select */
+#define SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL_SHFT 32
+#define SH_TRIGGER_SEL_NIBBLE4_INPUT_SEL_MASK 0x0000000700000000
+
+/* SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL */
+/* Description: Nibble 4 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL_SHFT 36
+#define SH_TRIGGER_SEL_NIBBLE4_NIBBLE_SEL_MASK 0x0000007000000000
+
+/* SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL */
+/* Description: Nibble 5 input select */
+#define SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL_SHFT 40
+#define SH_TRIGGER_SEL_NIBBLE5_INPUT_SEL_MASK 0x0000070000000000
+
+/* SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL */
+/* Description: Nibble 5 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL_SHFT 44
+#define SH_TRIGGER_SEL_NIBBLE5_NIBBLE_SEL_MASK 0x0000700000000000
+
+/* SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL */
+/* Description: Nibble 6 input select */
+#define SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL_SHFT 48
+#define SH_TRIGGER_SEL_NIBBLE6_INPUT_SEL_MASK 0x0007000000000000
+
+/* SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL */
+/* Description: Nibble 6 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL_SHFT 52
+#define SH_TRIGGER_SEL_NIBBLE6_NIBBLE_SEL_MASK 0x0070000000000000
+
+/* SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL */
+/* Description: Nibble 7 input select */
+#define SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL_SHFT 56
+#define SH_TRIGGER_SEL_NIBBLE7_INPUT_SEL_MASK 0x0700000000000000
+
+/* SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL */
+/* Description: Nibble 7 Nibble select */
+#define SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL_SHFT 60
+#define SH_TRIGGER_SEL_NIBBLE7_NIBBLE_SEL_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_STOP_CLK_CONTROL" */
+/* Stop Clock Control */
+/* ==================================================================== */
+
+#define SH_STOP_CLK_CONTROL 0x0000000110064000
+#define SH_STOP_CLK_CONTROL_MASK 0x00000000000000ff
+#define SH_STOP_CLK_CONTROL_INIT 0x00000000000000e0
+
+/* SH_STOP_CLK_CONTROL_STIMULUS */
+/* Description: Counter stimulus */
+#define SH_STOP_CLK_CONTROL_STIMULUS_SHFT 0
+#define SH_STOP_CLK_CONTROL_STIMULUS_MASK 0x000000000000001f
+
+/* SH_STOP_CLK_CONTROL_EVENT */
+/* Description: Counter event select (0-greater than, 1-equal) */
+#define SH_STOP_CLK_CONTROL_EVENT_SHFT 5
+#define SH_STOP_CLK_CONTROL_EVENT_MASK 0x0000000000000020
+
+/* SH_STOP_CLK_CONTROL_POLARITY */
+/* Description: Counter polarity select (0-negative edge, 1-positive edge) */
+#define SH_STOP_CLK_CONTROL_POLARITY_SHFT 6
+#define SH_STOP_CLK_CONTROL_POLARITY_MASK 0x0000000000000040
+
+/* SH_STOP_CLK_CONTROL_MODE */
+/* Description: Counter mode select (0-internal, 1-external) */
+#define SH_STOP_CLK_CONTROL_MODE_SHFT 7
+#define SH_STOP_CLK_CONTROL_MODE_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_STOP_CLK_DELAY_PHASE" */
+/* Stop Clock Delay Phase */
+/* ==================================================================== */
+
+#define SH_STOP_CLK_DELAY_PHASE 0x0000000110064080
+#define SH_STOP_CLK_DELAY_PHASE_MASK 0x00000000000000ff
+#define SH_STOP_CLK_DELAY_PHASE_INIT 0x0000000000000000
+
+/* SH_STOP_CLK_DELAY_PHASE_DELAY */
+/* Description: Delay phase */
+#define SH_STOP_CLK_DELAY_PHASE_DELAY_SHFT 0
+#define SH_STOP_CLK_DELAY_PHASE_DELAY_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_TSF_ARM_MASK" */
+/* Trigger sequencing facility arm mask */
+/* ==================================================================== */
+
+#define SH_TSF_ARM_MASK 0x0000000110065000
+#define SH_TSF_ARM_MASK_MASK 0xffffffffffffffff
+#define SH_TSF_ARM_MASK_INIT 0x0000000000000000
+
+/* SH_TSF_ARM_MASK_MASK */
+/* Description: Trigger sequencing facility arm mask */
+#define SH_TSF_ARM_MASK_MASK_SHFT 0
+#define SH_TSF_ARM_MASK_MASK_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_TSF_COUNTER_PRESETS" */
+/* Trigger sequencing facility counter presets */
+/* ==================================================================== */
+
+#define SH_TSF_COUNTER_PRESETS 0x0000000110065080
+#define SH_TSF_COUNTER_PRESETS_MASK 0xffffffffffffffff
+#define SH_TSF_COUNTER_PRESETS_INIT 0x0000000000000000
+
+/* SH_TSF_COUNTER_PRESETS_COUNT_32 */
+/* Description: Trigger sequencing facility counter 32 */
+#define SH_TSF_COUNTER_PRESETS_COUNT_32_SHFT 0
+#define SH_TSF_COUNTER_PRESETS_COUNT_32_MASK 0x00000000ffffffff
+
+/* SH_TSF_COUNTER_PRESETS_COUNT_16 */
+/* Description: Trigger sequencing facility counter 16 */
+#define SH_TSF_COUNTER_PRESETS_COUNT_16_SHFT 32
+#define SH_TSF_COUNTER_PRESETS_COUNT_16_MASK 0x0000ffff00000000
+
+/* SH_TSF_COUNTER_PRESETS_COUNT_8B */
+/* Description: Trigger sequencing facility counter 8b */
+#define SH_TSF_COUNTER_PRESETS_COUNT_8B_SHFT 48
+#define SH_TSF_COUNTER_PRESETS_COUNT_8B_MASK 0x00ff000000000000
+
+/* SH_TSF_COUNTER_PRESETS_COUNT_8A */
+/* Description: Trigger sequencing facility counter 8a */
+#define SH_TSF_COUNTER_PRESETS_COUNT_8A_SHFT 56
+#define SH_TSF_COUNTER_PRESETS_COUNT_8A_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_TSF_DECREMENT_CTL" */
+/* Trigger sequencing facility counter decrement control */
+/* ==================================================================== */
+
+#define SH_TSF_DECREMENT_CTL 0x0000000110065100
+#define SH_TSF_DECREMENT_CTL_MASK 0x000000000000ffff
+#define SH_TSF_DECREMENT_CTL_INIT 0x0000000000000000
+
+/* SH_TSF_DECREMENT_CTL_CTL */
+/* Description: Trigger sequencing facility counter decrement control */
+#define SH_TSF_DECREMENT_CTL_CTL_SHFT 0
+#define SH_TSF_DECREMENT_CTL_CTL_MASK 0x000000000000ffff
+
+/* ==================================================================== */
+/* Register "SH_TSF_DIAG_MSG_CTL" */
+/* Trigger sequencing facility diagnostic message control */
+/* ==================================================================== */
+
+#define SH_TSF_DIAG_MSG_CTL 0x0000000110065180
+#define SH_TSF_DIAG_MSG_CTL_MASK 0x00000000000000ff
+#define SH_TSF_DIAG_MSG_CTL_INIT 0x0000000000000000
+
+/* SH_TSF_DIAG_MSG_CTL_ENABLE */
+/* Description: Trigger sequencing facility diagnostic message control */
+#define SH_TSF_DIAG_MSG_CTL_ENABLE_SHFT 0
+#define SH_TSF_DIAG_MSG_CTL_ENABLE_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_TSF_DISARM_MASK" */
+/* Trigger sequencing facility disarm mask */
+/* ==================================================================== */
+
+#define SH_TSF_DISARM_MASK 0x0000000110065200
+#define SH_TSF_DISARM_MASK_MASK 0xffffffffffffffff
+#define SH_TSF_DISARM_MASK_INIT 0x0000000000000000
+
+/* SH_TSF_DISARM_MASK_MASK */
+/* Description: Trigger sequencing facility disarm mask */
+#define SH_TSF_DISARM_MASK_MASK_SHFT 0
+#define SH_TSF_DISARM_MASK_MASK_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_TSF_ENABLE_CTL" */
+/* Trigger sequencing facility counter enable control */
+/* ==================================================================== */
+
+#define SH_TSF_ENABLE_CTL 0x0000000110065280
+#define SH_TSF_ENABLE_CTL_MASK 0x000000000000ffff
+#define SH_TSF_ENABLE_CTL_INIT 0x0000000000000000
+
+/* SH_TSF_ENABLE_CTL_CTL */
+/* Description: Trigger sequencing facility counter enable control */
+#define SH_TSF_ENABLE_CTL_CTL_SHFT 0
+#define SH_TSF_ENABLE_CTL_CTL_MASK 0x000000000000ffff
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_ARM" */
+/* Trigger sequencing facility software arm */
+/* ==================================================================== */
+
+#define SH_TSF_SOFTWARE_ARM 0x0000000110065300
+#define SH_TSF_SOFTWARE_ARM_MASK 0x00000000000000ff
+#define SH_TSF_SOFTWARE_ARM_INIT 0x0000000000000000
+
+/* SH_TSF_SOFTWARE_ARM_BIT0 */
+/* Description: Trigger sequencing facility software arm bit 0 */
+#define SH_TSF_SOFTWARE_ARM_BIT0_SHFT 0
+#define SH_TSF_SOFTWARE_ARM_BIT0_MASK 0x0000000000000001
+
+/* SH_TSF_SOFTWARE_ARM_BIT1 */
+/* Description: Trigger sequencing facility software arm bit 1 */
+#define SH_TSF_SOFTWARE_ARM_BIT1_SHFT 1
+#define SH_TSF_SOFTWARE_ARM_BIT1_MASK 0x0000000000000002
+
+/* SH_TSF_SOFTWARE_ARM_BIT2 */
+/* Description: Trigger sequencing facility software arm bit 2 */
+#define SH_TSF_SOFTWARE_ARM_BIT2_SHFT 2
+#define SH_TSF_SOFTWARE_ARM_BIT2_MASK 0x0000000000000004
+
+/* SH_TSF_SOFTWARE_ARM_BIT3 */
+/* Description: Trigger sequencing facility software arm bit 3 */
+#define SH_TSF_SOFTWARE_ARM_BIT3_SHFT 3
+#define SH_TSF_SOFTWARE_ARM_BIT3_MASK 0x0000000000000008
+
+/* SH_TSF_SOFTWARE_ARM_BIT4 */
+/* Description: Trigger sequencing facility software arm bit 4 */
+#define SH_TSF_SOFTWARE_ARM_BIT4_SHFT 4
+#define SH_TSF_SOFTWARE_ARM_BIT4_MASK 0x0000000000000010
+
+/* SH_TSF_SOFTWARE_ARM_BIT5 */
+/* Description: Trigger sequencing facility software arm bit 5 */
+#define SH_TSF_SOFTWARE_ARM_BIT5_SHFT 5
+#define SH_TSF_SOFTWARE_ARM_BIT5_MASK 0x0000000000000020
+
+/* SH_TSF_SOFTWARE_ARM_BIT6 */
+/* Description: Trigger sequencing facility software arm bit 6 */
+#define SH_TSF_SOFTWARE_ARM_BIT6_SHFT 6
+#define SH_TSF_SOFTWARE_ARM_BIT6_MASK 0x0000000000000040
+
+/* SH_TSF_SOFTWARE_ARM_BIT7 */
+/* Description: Trigger sequencing facility software arm bit 7 */
+#define SH_TSF_SOFTWARE_ARM_BIT7_SHFT 7
+#define SH_TSF_SOFTWARE_ARM_BIT7_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_DISARM" */
+/* Trigger sequencing facility software disarm */
+/* ==================================================================== */
+
+#define SH_TSF_SOFTWARE_DISARM 0x0000000110065380
+#define SH_TSF_SOFTWARE_DISARM_MASK 0x00000000000000ff
+#define SH_TSF_SOFTWARE_DISARM_INIT 0x0000000000000000
+
+/* SH_TSF_SOFTWARE_DISARM_BIT0 */
+/* Description: Trigger sequencing facility software disarm bit 0 */
+#define SH_TSF_SOFTWARE_DISARM_BIT0_SHFT 0
+#define SH_TSF_SOFTWARE_DISARM_BIT0_MASK 0x0000000000000001
+
+/* SH_TSF_SOFTWARE_DISARM_BIT1 */
+/* Description: Trigger sequencing facility software disarm bit 1 */
+#define SH_TSF_SOFTWARE_DISARM_BIT1_SHFT 1
+#define SH_TSF_SOFTWARE_DISARM_BIT1_MASK 0x0000000000000002
+
+/* SH_TSF_SOFTWARE_DISARM_BIT2 */
+/* Description: Trigger sequencing facility software disarm bit 2 */
+#define SH_TSF_SOFTWARE_DISARM_BIT2_SHFT 2
+#define SH_TSF_SOFTWARE_DISARM_BIT2_MASK 0x0000000000000004
+
+/* SH_TSF_SOFTWARE_DISARM_BIT3 */
+/* Description: Trigger sequencing facility software disarm bit 3 */
+#define SH_TSF_SOFTWARE_DISARM_BIT3_SHFT 3
+#define SH_TSF_SOFTWARE_DISARM_BIT3_MASK 0x0000000000000008
+
+/* SH_TSF_SOFTWARE_DISARM_BIT4 */
+/* Description: Trigger sequencing facility software disarm bit 4 */
+#define SH_TSF_SOFTWARE_DISARM_BIT4_SHFT 4
+#define SH_TSF_SOFTWARE_DISARM_BIT4_MASK 0x0000000000000010
+
+/* SH_TSF_SOFTWARE_DISARM_BIT5 */
+/* Description: Trigger sequencing facility software disarm bit 5 */
+#define SH_TSF_SOFTWARE_DISARM_BIT5_SHFT 5
+#define SH_TSF_SOFTWARE_DISARM_BIT5_MASK 0x0000000000000020
+
+/* SH_TSF_SOFTWARE_DISARM_BIT6 */
+/* Description: Trigger sequencing facility software disarm bit 6 */
+#define SH_TSF_SOFTWARE_DISARM_BIT6_SHFT 6
+#define SH_TSF_SOFTWARE_DISARM_BIT6_MASK 0x0000000000000040
+
+/* SH_TSF_SOFTWARE_DISARM_BIT7 */
+/* Description: Trigger sequencing facility software disarm bit 7 */
+#define SH_TSF_SOFTWARE_DISARM_BIT7_SHFT 7
+#define SH_TSF_SOFTWARE_DISARM_BIT7_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_TRIGGERED" */
+/* Trigger sequencing facility software triggered */
+/* ==================================================================== */
+
+#define SH_TSF_SOFTWARE_TRIGGERED 0x0000000110065400
+#define SH_TSF_SOFTWARE_TRIGGERED_MASK 0x00000000000000ff
+#define SH_TSF_SOFTWARE_TRIGGERED_INIT 0x0000000000000000
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT0 */
+/* Description: Trigger sequencing facility software triggered bit 0 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT0_SHFT 0
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT0_MASK 0x0000000000000001
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT1 */
+/* Description: Trigger sequencing facility software triggered bit 1 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT1_SHFT 1
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT1_MASK 0x0000000000000002
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT2 */
+/* Description: Trigger sequencing facility software triggered bit 2 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT2_SHFT 2
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT2_MASK 0x0000000000000004
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT3 */
+/* Description: Trigger sequencing facility software triggered bit 3 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT3_SHFT 3
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT3_MASK 0x0000000000000008
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT4 */
+/* Description: Trigger sequencing facility software triggered bit 4 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT4_SHFT 4
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT4_MASK 0x0000000000000010
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT5 */
+/* Description: Trigger sequencing facility software triggered bit 5 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT5_SHFT 5
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT5_MASK 0x0000000000000020
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT6 */
+/* Description: Trigger sequencing facility software triggered bit 6 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT6_SHFT 6
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT6_MASK 0x0000000000000040
+
+/* SH_TSF_SOFTWARE_TRIGGERED_BIT7 */
+/* Description: Trigger sequencing facility software triggered bit 7 */
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT7_SHFT 7
+#define SH_TSF_SOFTWARE_TRIGGERED_BIT7_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_TSF_TRIGGER_MASK" */
+/* Trigger sequencing facility trigger mask */
+/* ==================================================================== */
+
+#define SH_TSF_TRIGGER_MASK 0x0000000110065480
+#define SH_TSF_TRIGGER_MASK_MASK 0xffffffffffffffff
+#define SH_TSF_TRIGGER_MASK_INIT 0x0000000000000000
+
+/* SH_TSF_TRIGGER_MASK_MASK */
+/* Description: Trigger sequencing facility trigger mask */
+#define SH_TSF_TRIGGER_MASK_MASK_SHFT 0
+#define SH_TSF_TRIGGER_MASK_MASK_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_VEC_DATA" */
+/* Vector Write Request Message Data */
+/* ==================================================================== */
+
+#define SH_VEC_DATA 0x0000000110066000
+#define SH_VEC_DATA_MASK 0xffffffffffffffff
+#define SH_VEC_DATA_INIT 0x0000000000000000
+
+/* SH_VEC_DATA_DATA */
+/* Description: Data */
+#define SH_VEC_DATA_DATA_SHFT 0
+#define SH_VEC_DATA_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_VEC_PARMS" */
+/* Vector Message Parameters Register */
+/* ==================================================================== */
+
+#define SH_VEC_PARMS 0x0000000110066080
+#define SH_VEC_PARMS_MASK 0xc0003ffffffffffb
+#define SH_VEC_PARMS_INIT 0x0000000000000000
+
+/* SH_VEC_PARMS_TYPE */
+/* Description: Vector Request Message Type */
+#define SH_VEC_PARMS_TYPE_SHFT 0
+#define SH_VEC_PARMS_TYPE_MASK 0x0000000000000001
+
+/* SH_VEC_PARMS_NI_PORT */
+/* Description: Network Interface Port Select */
+#define SH_VEC_PARMS_NI_PORT_SHFT 1
+#define SH_VEC_PARMS_NI_PORT_MASK 0x0000000000000002
+
+/* SH_VEC_PARMS_ADDRESS */
+/* Description: Address[37:6] */
+#define SH_VEC_PARMS_ADDRESS_SHFT 3
+#define SH_VEC_PARMS_ADDRESS_MASK 0x00000007fffffff8
+
+/* SH_VEC_PARMS_PIO_ID */
+/* Description: PIO ID */
+#define SH_VEC_PARMS_PIO_ID_SHFT 35
+#define SH_VEC_PARMS_PIO_ID_MASK 0x00003ff800000000
+
+/* SH_VEC_PARMS_START */
+/* Description: Start */
+#define SH_VEC_PARMS_START_SHFT 62
+#define SH_VEC_PARMS_START_MASK 0x4000000000000000
+
+/* SH_VEC_PARMS_BUSY */
+/* Description: Busy */
+#define SH_VEC_PARMS_BUSY_SHFT 63
+#define SH_VEC_PARMS_BUSY_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_VEC_ROUTE" */
+/* Vector Request Message Route */
+/* ==================================================================== */
+
+#define SH_VEC_ROUTE 0x0000000110066100
+#define SH_VEC_ROUTE_MASK 0xffffffffffffffff
+#define SH_VEC_ROUTE_INIT 0x0000000000000000
+
+/* SH_VEC_ROUTE_ROUTE */
+/* Description: Route */
+#define SH_VEC_ROUTE_ROUTE_SHFT 0
+#define SH_VEC_ROUTE_ROUTE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_CPU_PERM" */
+/* CPU MMR Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_CPU_PERM 0x0000000110060000
+#define SH_CPU_PERM_MASK 0xffffffffffffffff
+#define SH_CPU_PERM_INIT 0xffffffffffffffff
+
+/* SH_CPU_PERM_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_CPU_PERM_ACCESS_BITS_SHFT 0
+#define SH_CPU_PERM_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_CPU_PERM_OVR" */
+/* CPU MMR Access Permission Override */
+/* ==================================================================== */
+
+#define SH_CPU_PERM_OVR 0x0000000110060080
+#define SH_CPU_PERM_OVR_MASK 0xffffffffffffffff
+#define SH_CPU_PERM_OVR_INIT 0x0000000000000000
+
+/* SH_CPU_PERM_OVR_OVERRIDE */
+/* Description: Override */
+#define SH_CPU_PERM_OVR_OVERRIDE_SHFT 0
+#define SH_CPU_PERM_OVR_OVERRIDE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_EXT_IO_PERM" */
+/* External IO MMR Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_EXT_IO_PERM 0x0000000110060100
+#define SH_EXT_IO_PERM_MASK 0xffffffffffffffff
+#define SH_EXT_IO_PERM_INIT 0x0000000000000000
+
+/* SH_EXT_IO_PERM_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_EXT_IO_PERM_ACCESS_BITS_SHFT 0
+#define SH_EXT_IO_PERM_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_EXT_IOI_ACCESS" */
+/* External IO Interrupt Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_EXT_IOI_ACCESS 0x0000000110060180
+#define SH_EXT_IOI_ACCESS_MASK 0xffffffffffffffff
+#define SH_EXT_IOI_ACCESS_INIT 0xffffffffffffffff
+
+/* SH_EXT_IOI_ACCESS_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_EXT_IOI_ACCESS_ACCESS_BITS_SHFT 0
+#define SH_EXT_IOI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_GC_FIL_CTRL" */
+/* SHub Global Clock Filter Control */
+/* ==================================================================== */
+
+#define SH_GC_FIL_CTRL 0x0000000110060200
+#define SH_GC_FIL_CTRL_MASK 0x03ff3ff3ff1fff1f
+#define SH_GC_FIL_CTRL_INIT 0x0000000000000000
+
+/* SH_GC_FIL_CTRL_OFFSET */
+/* Description: Offset */
+#define SH_GC_FIL_CTRL_OFFSET_SHFT 0
+#define SH_GC_FIL_CTRL_OFFSET_MASK 0x000000000000001f
+
+/* SH_GC_FIL_CTRL_MASK_COUNTER */
+/* Description: Mask Counter */
+#define SH_GC_FIL_CTRL_MASK_COUNTER_SHFT 8
+#define SH_GC_FIL_CTRL_MASK_COUNTER_MASK 0x00000000000fff00
+
+/* SH_GC_FIL_CTRL_MASK_ENABLE */
+/* Description: Mask Enable */
+#define SH_GC_FIL_CTRL_MASK_ENABLE_SHFT 20
+#define SH_GC_FIL_CTRL_MASK_ENABLE_MASK 0x0000000000100000
+
+/* SH_GC_FIL_CTRL_DROPOUT_COUNTER */
+/* Description: Dropout Counter */
+#define SH_GC_FIL_CTRL_DROPOUT_COUNTER_SHFT 24
+#define SH_GC_FIL_CTRL_DROPOUT_COUNTER_MASK 0x00000003ff000000
+
+/* SH_GC_FIL_CTRL_DROPOUT_THRESH */
+/* Description: Dropout threshold */
+#define SH_GC_FIL_CTRL_DROPOUT_THRESH_SHFT 36
+#define SH_GC_FIL_CTRL_DROPOUT_THRESH_MASK 0x00003ff000000000
+
+/* SH_GC_FIL_CTRL_ERROR_COUNTER */
+/* Description: Error counter */
+#define SH_GC_FIL_CTRL_ERROR_COUNTER_SHFT 48
+#define SH_GC_FIL_CTRL_ERROR_COUNTER_MASK 0x03ff000000000000
+
+/* ==================================================================== */
+/* Register "SH_GC_SRC_CTRL" */
+/* SHub Global Clock Control */
+/* ==================================================================== */
+
+#define SH_GC_SRC_CTRL 0x0000000110060280
+#define SH_GC_SRC_CTRL_MASK 0x0000000313ff3ff1
+#define SH_GC_SRC_CTRL_INIT 0x0000000100000000
+
+/* SH_GC_SRC_CTRL_ENABLE_COUNTER */
+/* Description: Enable Counter */
+#define SH_GC_SRC_CTRL_ENABLE_COUNTER_SHFT 0
+#define SH_GC_SRC_CTRL_ENABLE_COUNTER_MASK 0x0000000000000001
+
+/* SH_GC_SRC_CTRL_MAX_COUNT */
+/* Description: Max Count */
+#define SH_GC_SRC_CTRL_MAX_COUNT_SHFT 4
+#define SH_GC_SRC_CTRL_MAX_COUNT_MASK 0x0000000000003ff0
+
+/* SH_GC_SRC_CTRL_COUNTER */
+/* Description: Counter */
+#define SH_GC_SRC_CTRL_COUNTER_SHFT 16
+#define SH_GC_SRC_CTRL_COUNTER_MASK 0x0000000003ff0000
+
+/* SH_GC_SRC_CTRL_TOGGLE_BIT */
+/* Description: Toggle bit */
+#define SH_GC_SRC_CTRL_TOGGLE_BIT_SHFT 28
+#define SH_GC_SRC_CTRL_TOGGLE_BIT_MASK 0x0000000010000000
+
+/* SH_GC_SRC_CTRL_SOURCE_SEL */
+/* Description: Source select (0=ext., 1=int., 2=SHUB) */
+#define SH_GC_SRC_CTRL_SOURCE_SEL_SHFT 32
+#define SH_GC_SRC_CTRL_SOURCE_SEL_MASK 0x0000000300000000
+
+/* ==================================================================== */
+/* Register "SH_HARD_RESET" */
+/* SHub Hard Reset */
+/* ==================================================================== */
+
+#define SH_HARD_RESET 0x0000000110060300
+#define SH_HARD_RESET_MASK 0x0000000000000001
+#define SH_HARD_RESET_INIT 0x0000000000000000
+
+/* SH_HARD_RESET_HARD_RESET */
+/* Description: Hard Reset */
+#define SH_HARD_RESET_HARD_RESET_SHFT 0
+#define SH_HARD_RESET_HARD_RESET_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_IO_PERM" */
+/* II MMR Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_IO_PERM 0x0000000110060380
+#define SH_IO_PERM_MASK 0xffffffffffffffff
+#define SH_IO_PERM_INIT 0x0000000000000000
+
+/* SH_IO_PERM_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_IO_PERM_ACCESS_BITS_SHFT 0
+#define SH_IO_PERM_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_IOI_ACCESS" */
+/* II Interrupt Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_IOI_ACCESS 0x0000000110060400
+#define SH_IOI_ACCESS_MASK 0xffffffffffffffff
+#define SH_IOI_ACCESS_INIT 0xffffffffffffffff
+
+/* SH_IOI_ACCESS_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_IOI_ACCESS_ACCESS_BITS_SHFT 0
+#define SH_IOI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_IPI_ACCESS" */
+/* CPU interrupt Access Permission Bits */
+/* ==================================================================== */
+
+#define SH_IPI_ACCESS 0x0000000110060480
+#define SH_IPI_ACCESS_MASK 0xffffffffffffffff
+#define SH_IPI_ACCESS_INIT 0xffffffffffffffff
+
+/* SH_IPI_ACCESS_ACCESS_BITS */
+/* Description: Access Bits */
+#define SH_IPI_ACCESS_ACCESS_BITS_SHFT 0
+#define SH_IPI_ACCESS_ACCESS_BITS_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_JTAG_CONFIG" */
+/* SHub JTAG configuration */
+/* ==================================================================== */
+
+#define SH_JTAG_CONFIG 0x0000000110060500
+#define SH_JTAG_CONFIG_MASK 0x00ffffffffffffff
+#define SH_JTAG_CONFIG_INIT 0x0000000000000000
+
+/* SH_JTAG_CONFIG_MD_CLK_SEL */
+/* Description: Select divide freq of DRAMCLK */
+#define SH_JTAG_CONFIG_MD_CLK_SEL_SHFT 0
+#define SH_JTAG_CONFIG_MD_CLK_SEL_MASK 0x0000000000000003
+
+/* SH_JTAG_CONFIG_NI_CLK_SEL */
+/* Description: Selects clock source for NICLK domain */
+#define SH_JTAG_CONFIG_NI_CLK_SEL_SHFT 2
+#define SH_JTAG_CONFIG_NI_CLK_SEL_MASK 0x0000000000000004
+
+/* SH_JTAG_CONFIG_II_CLK_SEL */
+/* Description: Selects clock source for IOCLK domain */
+#define SH_JTAG_CONFIG_II_CLK_SEL_SHFT 3
+#define SH_JTAG_CONFIG_II_CLK_SEL_MASK 0x0000000000000018
+
+/* SH_JTAG_CONFIG_WRT90_TARGET */
+/* Description: wrt90_target */
+#define SH_JTAG_CONFIG_WRT90_TARGET_SHFT 5
+#define SH_JTAG_CONFIG_WRT90_TARGET_MASK 0x000000000007ffe0
+
+/* SH_JTAG_CONFIG_WRT90_OVERRIDER */
+/* Description: wrt90_overrideR */
+#define SH_JTAG_CONFIG_WRT90_OVERRIDER_SHFT 19
+#define SH_JTAG_CONFIG_WRT90_OVERRIDER_MASK 0x0000000000080000
+
+/* SH_JTAG_CONFIG_WRT90_OVERRIDE */
+/* Description: wrt90_override */
+#define SH_JTAG_CONFIG_WRT90_OVERRIDE_SHFT 20
+#define SH_JTAG_CONFIG_WRT90_OVERRIDE_MASK 0x0000000000100000
+
+/* SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY */
+/* Description: jtag_mci_reset_delay */
+#define SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY_SHFT 21
+#define SH_JTAG_CONFIG_JTAG_MCI_RESET_DELAY_MASK 0x0000000001e00000
+
+/* SH_JTAG_CONFIG_JTAG_MCI_TARGET */
+/* Description: jtag_mci_target */
+#define SH_JTAG_CONFIG_JTAG_MCI_TARGET_SHFT 25
+#define SH_JTAG_CONFIG_JTAG_MCI_TARGET_MASK 0x0000007ffe000000
+
+/* SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE */
+/* Description: jtag_mci_override */
+#define SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE_SHFT 39
+#define SH_JTAG_CONFIG_JTAG_MCI_OVERRIDE_MASK 0x0000008000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH */
+/* Description: 0=depth 8, 1=depth 1 */
+#define SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH_SHFT 40
+#define SH_JTAG_CONFIG_FSB_CONFIG_IOQ_DEPTH_MASK 0x0000010000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT */
+/* Description: Enable sampling of BINIT */
+#define SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT_SHFT 41
+#define SH_JTAG_CONFIG_FSB_CONFIG_SAMPLE_BINIT_MASK 0x0000020000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING */
+#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING_SHFT 42
+#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BUS_PARKING_MASK 0x0000040000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO */
+#define SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO_SHFT 43
+#define SH_JTAG_CONFIG_FSB_CONFIG_CLOCK_RATIO_MASK 0x0000f80000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE */
+/* Description: Output tristate control */
+#define SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE_SHFT 48
+#define SH_JTAG_CONFIG_FSB_CONFIG_OUTPUT_TRISTATE_MASK 0x000f000000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST */
+/* Description: Enables BIST */
+#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST_SHFT 52
+#define SH_JTAG_CONFIG_FSB_CONFIG_ENABLE_BIST_MASK 0x0010000000000000
+
+/* SH_JTAG_CONFIG_FSB_CONFIG_AUX */
+/* Description: Enables BIST */
+#define SH_JTAG_CONFIG_FSB_CONFIG_AUX_SHFT 53
+#define SH_JTAG_CONFIG_FSB_CONFIG_AUX_MASK 0x0060000000000000
+
+/* SH_JTAG_CONFIG_GTL_CONFIG_RE */
+/* Description: Reference Enable selection for GTL io */
+#define SH_JTAG_CONFIG_GTL_CONFIG_RE_SHFT 55
+#define SH_JTAG_CONFIG_GTL_CONFIG_RE_MASK 0x0080000000000000
+
+/* ==================================================================== */
+/* Register "SH_SHUB_ID" */
+/* SHub ID Number */
+/* ==================================================================== */
+
+#define SH_SHUB_ID 0x0000000110060580
+#define SH_SHUB_ID_MASK 0x011f37ffffffffff
+#define SH_SHUB_ID_INIT 0x0010300000000000
+
+/* SH_SHUB_ID_FORCE1 */
+/* Description: Must be 1 */
+#define SH_SHUB_ID_FORCE1_SHFT 0
+#define SH_SHUB_ID_FORCE1_MASK 0x0000000000000001
+
+/* SH_SHUB_ID_MANUFACTURER */
+/* Description: Manufacturer */
+#define SH_SHUB_ID_MANUFACTURER_SHFT 1
+#define SH_SHUB_ID_MANUFACTURER_MASK 0x0000000000000ffe
+
+/* SH_SHUB_ID_PART_NUMBER */
+/* Description: Part Number */
+#define SH_SHUB_ID_PART_NUMBER_SHFT 12
+#define SH_SHUB_ID_PART_NUMBER_MASK 0x000000000ffff000
+
+/* SH_SHUB_ID_REVISION */
+/* Description: Revision */
+#define SH_SHUB_ID_REVISION_SHFT 28
+#define SH_SHUB_ID_REVISION_MASK 0x00000000f0000000
+
+/* SH_SHUB_ID_NODE_ID */
+/* Description: Node Identification */
+#define SH_SHUB_ID_NODE_ID_SHFT 32
+#define SH_SHUB_ID_NODE_ID_MASK 0x000007ff00000000
+
+/* SH_SHUB_ID_SHARING_MODE */
+/* Description: Sharing mode (Coherency Domain Size) */
+#define SH_SHUB_ID_SHARING_MODE_SHFT 44
+#define SH_SHUB_ID_SHARING_MODE_MASK 0x0000300000000000
+
+/* SH_SHUB_ID_NODES_PER_BIT */
+/* Description: Nodes per bit definition for MMR access */
+#define SH_SHUB_ID_NODES_PER_BIT_SHFT 48
+#define SH_SHUB_ID_NODES_PER_BIT_MASK 0x001f000000000000
+
+/* SH_SHUB_ID_NI_PORT */
+/* Description: NI port of vector reference, 0 = NI0, 1 = NI1 */
+#define SH_SHUB_ID_NI_PORT_SHFT 56
+#define SH_SHUB_ID_NI_PORT_MASK 0x0100000000000000
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT0" */
+/* Shubs 0 - 63 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+#define SH_SHUBS_PRESENT0 0x0000000110060600
+#define SH_SHUBS_PRESENT0_MASK 0xffffffffffffffff
+#define SH_SHUBS_PRESENT0_INIT 0xffffffffffffffff
+
+/* SH_SHUBS_PRESENT0_SHUBS_PRESENT0 */
+/* Description: Shubs 0 - 63 Present configuration */
+#define SH_SHUBS_PRESENT0_SHUBS_PRESENT0_SHFT 0
+#define SH_SHUBS_PRESENT0_SHUBS_PRESENT0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT1" */
+/* Shubs 64 - 127 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+#define SH_SHUBS_PRESENT1 0x0000000110060680
+#define SH_SHUBS_PRESENT1_MASK 0xffffffffffffffff
+#define SH_SHUBS_PRESENT1_INIT 0xffffffffffffffff
+
+/* SH_SHUBS_PRESENT1_SHUBS_PRESENT1 */
+/* Description: Shubs 64 - 127 Present configuration */
+#define SH_SHUBS_PRESENT1_SHUBS_PRESENT1_SHFT 0
+#define SH_SHUBS_PRESENT1_SHUBS_PRESENT1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT2" */
+/* Shubs 128 - 191 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+#define SH_SHUBS_PRESENT2 0x0000000110060700
+#define SH_SHUBS_PRESENT2_MASK 0xffffffffffffffff
+#define SH_SHUBS_PRESENT2_INIT 0xffffffffffffffff
+
+/* SH_SHUBS_PRESENT2_SHUBS_PRESENT2 */
+/* Description: Shubs 128 - 191 Present configuration */
+#define SH_SHUBS_PRESENT2_SHUBS_PRESENT2_SHFT 0
+#define SH_SHUBS_PRESENT2_SHUBS_PRESENT2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT3" */
+/* Shubs 192 - 255 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+#define SH_SHUBS_PRESENT3 0x0000000110060780
+#define SH_SHUBS_PRESENT3_MASK 0xffffffffffffffff
+#define SH_SHUBS_PRESENT3_INIT 0xffffffffffffffff
+
+/* SH_SHUBS_PRESENT3_SHUBS_PRESENT3 */
+/* Description: Shubs 192 - 255 Present configuration */
+#define SH_SHUBS_PRESENT3_SHUBS_PRESENT3_SHFT 0
+#define SH_SHUBS_PRESENT3_SHUBS_PRESENT3_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SOFT_RESET" */
+/* SHub Soft Reset */
+/* ==================================================================== */
+
+#define SH_SOFT_RESET 0x0000000110060800
+#define SH_SOFT_RESET_MASK 0x0000000000000001
+#define SH_SOFT_RESET_INIT 0x0000000000000000
+
+/* SH_SOFT_RESET_SOFT_RESET */
+/* Description: Soft Reset */
+#define SH_SOFT_RESET_SOFT_RESET_SHFT 0
+#define SH_SOFT_RESET_SOFT_RESET_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_FIRST_ERROR" */
+/* Shub Global First Error Flags */
+/* ==================================================================== */
+
+#define SH_FIRST_ERROR 0x0000000110071000
+#define SH_FIRST_ERROR_MASK 0x000000000007ffff
+#define SH_FIRST_ERROR_INIT 0x0000000000000000
+
+/* SH_FIRST_ERROR_FIRST_ERROR */
+/* Description: Chiplet with first error */
+#define SH_FIRST_ERROR_FIRST_ERROR_SHFT 0
+#define SH_FIRST_ERROR_FIRST_ERROR_MASK 0x000000000007ffff
+
+/* ==================================================================== */
+/* Register "SH_II_HW_TIME_STAMP" */
+/* II hardware error time stamp */
+/* ==================================================================== */
+
+#define SH_II_HW_TIME_STAMP 0x0000000110071080
+#define SH_II_HW_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_II_HW_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_II_HW_TIME_STAMP_TIME */
+/* Description: II hardware error time stamp */
+#define SH_II_HW_TIME_STAMP_TIME_SHFT 0
+#define SH_II_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_II_HW_TIME_STAMP_VALID */
+/* Description: II hardware error time stamp valid */
+#define SH_II_HW_TIME_STAMP_VALID_SHFT 63
+#define SH_II_HW_TIME_STAMP_VALID_MASK 0x8000000000000000
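Every `*_TIME_STAMP` register below uses this same layout: a 63-bit TIME field in bits 62:0 plus a VALID flag in bit 63. A sketch of a decoder for that layout (the `ii_timestamp_decode` function is hypothetical, used here only to illustrate the TIME/VALID split):

```c
#include <stdbool.h>
#include <stdint.h>

/* Constants reproduced from the SH_II_HW_TIME_STAMP field definitions. */
#define SH_II_HW_TIME_STAMP_TIME_MASK  0x7fffffffffffffffULL
#define SH_II_HW_TIME_STAMP_VALID_MASK 0x8000000000000000ULL

/* Hypothetical decoder: the time stamp is only meaningful once the
 * VALID bit (bit 63) is set; the low 63 bits carry the time value. */
static bool ii_timestamp_decode(uint64_t raw, uint64_t *time_out)
{
	if (!(raw & SH_II_HW_TIME_STAMP_VALID_MASK))
		return false;	/* no error has been captured yet */
	*time_out = raw & SH_II_HW_TIME_STAMP_TIME_MASK;
	return true;
}
```

The same decode applies unchanged to the LB, MD, PI, XN, and per-processor time stamp registers that follow.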
+
+/* ==================================================================== */
+/* Register "SH_LB_HW_TIME_STAMP" */
+/* LB hardware error time stamp */
+/* ==================================================================== */
+
+#define SH_LB_HW_TIME_STAMP 0x0000000110071100
+#define SH_LB_HW_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_LB_HW_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_LB_HW_TIME_STAMP_TIME */
+/* Description: LB hardware error time stamp */
+#define SH_LB_HW_TIME_STAMP_TIME_SHFT 0
+#define SH_LB_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_LB_HW_TIME_STAMP_VALID */
+/* Description: LB hardware error time stamp valid */
+#define SH_LB_HW_TIME_STAMP_VALID_SHFT 63
+#define SH_LB_HW_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_COR_TIME_STAMP" */
+/* MD correctable error time stamp */
+/* ==================================================================== */
+
+#define SH_MD_COR_TIME_STAMP 0x0000000110071180
+#define SH_MD_COR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_MD_COR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_MD_COR_TIME_STAMP_TIME */
+/* Description: MD correctable error time stamp */
+#define SH_MD_COR_TIME_STAMP_TIME_SHFT 0
+#define SH_MD_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_MD_COR_TIME_STAMP_VALID */
+/* Description: MD correctable error time stamp valid */
+#define SH_MD_COR_TIME_STAMP_VALID_SHFT 63
+#define SH_MD_COR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_HW_TIME_STAMP" */
+/* MD hardware error time stamp */
+/* ==================================================================== */
+
+#define SH_MD_HW_TIME_STAMP 0x0000000110071200
+#define SH_MD_HW_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_MD_HW_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_MD_HW_TIME_STAMP_TIME */
+/* Description: MD hardware error time stamp */
+#define SH_MD_HW_TIME_STAMP_TIME_SHFT 0
+#define SH_MD_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_MD_HW_TIME_STAMP_VALID */
+/* Description: MD hardware error time stamp valid */
+#define SH_MD_HW_TIME_STAMP_VALID_SHFT 63
+#define SH_MD_HW_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_UNCOR_TIME_STAMP" */
+/* MD uncorrectable error time stamp */
+/* ==================================================================== */
+
+#define SH_MD_UNCOR_TIME_STAMP 0x0000000110071280
+#define SH_MD_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_MD_UNCOR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_MD_UNCOR_TIME_STAMP_TIME */
+/* Description: MD uncorrectable error time stamp */
+#define SH_MD_UNCOR_TIME_STAMP_TIME_SHFT 0
+#define SH_MD_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_MD_UNCOR_TIME_STAMP_VALID */
+/* Description: MD uncorrectable error time stamp valid */
+#define SH_MD_UNCOR_TIME_STAMP_VALID_SHFT 63
+#define SH_MD_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_COR_TIME_STAMP" */
+/* PI correctable error time stamp */
+/* ==================================================================== */
+
+#define SH_PI_COR_TIME_STAMP 0x0000000110071300
+#define SH_PI_COR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PI_COR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PI_COR_TIME_STAMP_TIME */
+/* Description: PI correctable error time stamp */
+#define SH_PI_COR_TIME_STAMP_TIME_SHFT 0
+#define SH_PI_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PI_COR_TIME_STAMP_VALID */
+/* Description: PI correctable error time stamp valid */
+#define SH_PI_COR_TIME_STAMP_VALID_SHFT 63
+#define SH_PI_COR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_HW_TIME_STAMP" */
+/* PI hardware error time stamp */
+/* ==================================================================== */
+
+#define SH_PI_HW_TIME_STAMP 0x0000000110071380
+#define SH_PI_HW_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PI_HW_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PI_HW_TIME_STAMP_TIME */
+/* Description: PI hardware error time stamp */
+#define SH_PI_HW_TIME_STAMP_TIME_SHFT 0
+#define SH_PI_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PI_HW_TIME_STAMP_VALID */
+/* Description: PI hardware error time stamp valid */
+#define SH_PI_HW_TIME_STAMP_VALID_SHFT 63
+#define SH_PI_HW_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCOR_TIME_STAMP" */
+/* PI uncorrectable error time stamp */
+/* ==================================================================== */
+
+#define SH_PI_UNCOR_TIME_STAMP 0x0000000110071400
+#define SH_PI_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PI_UNCOR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PI_UNCOR_TIME_STAMP_TIME */
+/* Description: PI uncorrectable error time stamp */
+#define SH_PI_UNCOR_TIME_STAMP_TIME_SHFT 0
+#define SH_PI_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PI_UNCOR_TIME_STAMP_VALID */
+/* Description: PI uncorrectable error time stamp valid */
+#define SH_PI_UNCOR_TIME_STAMP_VALID_SHFT 63
+#define SH_PI_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_TIME_STAMP" */
+/* Proc 0 advisory time stamp */
+/* ==================================================================== */
+
+#define SH_PROC0_ADV_TIME_STAMP 0x0000000110071480
+#define SH_PROC0_ADV_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC0_ADV_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC0_ADV_TIME_STAMP_TIME */
+/* Description: Processor 0 advisory time stamp */
+#define SH_PROC0_ADV_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC0_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC0_ADV_TIME_STAMP_VALID */
+/* Description: Processor 0 advisory time stamp valid */
+#define SH_PROC0_ADV_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC0_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_TIME_STAMP" */
+/* Proc 0 error time stamp */
+/* ==================================================================== */
+
+#define SH_PROC0_ERR_TIME_STAMP 0x0000000110071500
+#define SH_PROC0_ERR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC0_ERR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC0_ERR_TIME_STAMP_TIME */
+/* Description: Processor 0 error time stamp */
+#define SH_PROC0_ERR_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC0_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC0_ERR_TIME_STAMP_VALID */
+/* Description: Processor 0 error time stamp valid */
+#define SH_PROC0_ERR_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC0_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_TIME_STAMP" */
+/* Proc 1 advisory time stamp */
+/* ==================================================================== */
+
+#define SH_PROC1_ADV_TIME_STAMP 0x0000000110071580
+#define SH_PROC1_ADV_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC1_ADV_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC1_ADV_TIME_STAMP_TIME */
+/* Description: Processor 1 advisory time stamp */
+#define SH_PROC1_ADV_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC1_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC1_ADV_TIME_STAMP_VALID */
+/* Description: Processor 1 advisory time stamp valid */
+#define SH_PROC1_ADV_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC1_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_TIME_STAMP" */
+/* Proc 1 error time stamp */
+/* ==================================================================== */
+
+#define SH_PROC1_ERR_TIME_STAMP 0x0000000110071600
+#define SH_PROC1_ERR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC1_ERR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC1_ERR_TIME_STAMP_TIME */
+/* Description: Processor 1 error time stamp */
+#define SH_PROC1_ERR_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC1_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC1_ERR_TIME_STAMP_VALID */
+/* Description: Processor 1 error time stamp valid */
+#define SH_PROC1_ERR_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC1_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_TIME_STAMP" */
+/* Proc 2 advisory time stamp */
+/* ==================================================================== */
+
+#define SH_PROC2_ADV_TIME_STAMP 0x0000000110071680
+#define SH_PROC2_ADV_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC2_ADV_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC2_ADV_TIME_STAMP_TIME */
+/* Description: Processor 2 advisory time stamp */
+#define SH_PROC2_ADV_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC2_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC2_ADV_TIME_STAMP_VALID */
+/* Description: Processor 2 advisory time stamp valid */
+#define SH_PROC2_ADV_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC2_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_TIME_STAMP" */
+/* Proc 2 error time stamp */
+/* ==================================================================== */
+
+#define SH_PROC2_ERR_TIME_STAMP 0x0000000110071700
+#define SH_PROC2_ERR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC2_ERR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC2_ERR_TIME_STAMP_TIME */
+/* Description: Processor 2 error time stamp */
+#define SH_PROC2_ERR_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC2_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC2_ERR_TIME_STAMP_VALID */
+/* Description: Processor 2 error time stamp valid */
+#define SH_PROC2_ERR_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC2_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_TIME_STAMP" */
+/* Proc 3 advisory time stamp */
+/* ==================================================================== */
+
+#define SH_PROC3_ADV_TIME_STAMP 0x0000000110071780
+#define SH_PROC3_ADV_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC3_ADV_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC3_ADV_TIME_STAMP_TIME */
+/* Description: Processor 3 advisory time stamp */
+#define SH_PROC3_ADV_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC3_ADV_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC3_ADV_TIME_STAMP_VALID */
+/* Description: Processor 3 advisory time stamp valid */
+#define SH_PROC3_ADV_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC3_ADV_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_TIME_STAMP" */
+/* Proc 3 error time stamp */
+/* ==================================================================== */
+
+#define SH_PROC3_ERR_TIME_STAMP 0x0000000110071800
+#define SH_PROC3_ERR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_PROC3_ERR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_PROC3_ERR_TIME_STAMP_TIME */
+/* Description: Processor 3 error time stamp */
+#define SH_PROC3_ERR_TIME_STAMP_TIME_SHFT 0
+#define SH_PROC3_ERR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_PROC3_ERR_TIME_STAMP_VALID */
+/* Description: Processor 3 error time stamp valid */
+#define SH_PROC3_ERR_TIME_STAMP_VALID_SHFT 63
+#define SH_PROC3_ERR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_COR_TIME_STAMP" */
+/* XN correctable error time stamp */
+/* ==================================================================== */
+
+#define SH_XN_COR_TIME_STAMP 0x0000000110071880
+#define SH_XN_COR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_XN_COR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_XN_COR_TIME_STAMP_TIME */
+/* Description: XN correctable error time stamp */
+#define SH_XN_COR_TIME_STAMP_TIME_SHFT 0
+#define SH_XN_COR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_XN_COR_TIME_STAMP_VALID */
+/* Description: XN correctable error time stamp valid */
+#define SH_XN_COR_TIME_STAMP_VALID_SHFT 63
+#define SH_XN_COR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_HW_TIME_STAMP" */
+/* XN hardware error time stamp */
+/* ==================================================================== */
+
+#define SH_XN_HW_TIME_STAMP 0x0000000110071900
+#define SH_XN_HW_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_XN_HW_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_XN_HW_TIME_STAMP_TIME */
+/* Description: XN hardware error time stamp */
+#define SH_XN_HW_TIME_STAMP_TIME_SHFT 0
+#define SH_XN_HW_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_XN_HW_TIME_STAMP_VALID */
+/* Description: XN hardware error time stamp valid */
+#define SH_XN_HW_TIME_STAMP_VALID_SHFT 63
+#define SH_XN_HW_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCOR_TIME_STAMP" */
+/* XN uncorrectable error time stamp */
+/* ==================================================================== */
+
+#define SH_XN_UNCOR_TIME_STAMP 0x0000000110071980
+#define SH_XN_UNCOR_TIME_STAMP_MASK 0xffffffffffffffff
+#define SH_XN_UNCOR_TIME_STAMP_INIT 0x0000000000000000
+
+/* SH_XN_UNCOR_TIME_STAMP_TIME */
+/* Description: XN uncorrectable error time stamp */
+#define SH_XN_UNCOR_TIME_STAMP_TIME_SHFT 0
+#define SH_XN_UNCOR_TIME_STAMP_TIME_MASK 0x7fffffffffffffff
+
+/* SH_XN_UNCOR_TIME_STAMP_VALID */
+/* Description: XN uncorrectable error time stamp valid */
+#define SH_XN_UNCOR_TIME_STAMP_VALID_SHFT 63
+#define SH_XN_UNCOR_TIME_STAMP_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_DEBUG_PORT" */
+/* SHub Debug Port */
+/* ==================================================================== */
+
+#define SH_DEBUG_PORT 0x0000000110072000
+#define SH_DEBUG_PORT_MASK 0x00000000ffffffff
+#define SH_DEBUG_PORT_INIT 0x0000000000000000
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE0 */
+/* Description: Debug port nibble 0 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE0_SHFT 0
+#define SH_DEBUG_PORT_DEBUG_NIBBLE0_MASK 0x000000000000000f
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE1 */
+/* Description: Debug port nibble 1 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE1_SHFT 4
+#define SH_DEBUG_PORT_DEBUG_NIBBLE1_MASK 0x00000000000000f0
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE2 */
+/* Description: Debug port nibble 2 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE2_SHFT 8
+#define SH_DEBUG_PORT_DEBUG_NIBBLE2_MASK 0x0000000000000f00
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE3 */
+/* Description: Debug port nibble 3 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE3_SHFT 12
+#define SH_DEBUG_PORT_DEBUG_NIBBLE3_MASK 0x000000000000f000
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE4 */
+/* Description: Debug port nibble 4 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE4_SHFT 16
+#define SH_DEBUG_PORT_DEBUG_NIBBLE4_MASK 0x00000000000f0000
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE5 */
+/* Description: Debug port nibble 5 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE5_SHFT 20
+#define SH_DEBUG_PORT_DEBUG_NIBBLE5_MASK 0x0000000000f00000
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE6 */
+/* Description: Debug port nibble 6 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE6_SHFT 24
+#define SH_DEBUG_PORT_DEBUG_NIBBLE6_MASK 0x000000000f000000
+
+/* SH_DEBUG_PORT_DEBUG_NIBBLE7 */
+/* Description: Debug port nibble 7 */
+#define SH_DEBUG_PORT_DEBUG_NIBBLE7_SHFT 28
+#define SH_DEBUG_PORT_DEBUG_NIBBLE7_MASK 0x00000000f0000000
+
+/* ==================================================================== */
+/* Register "SH_II_DEBUG_DATA" */
+/* II Debug Data */
+/* ==================================================================== */
+
+#define SH_II_DEBUG_DATA 0x0000000110072080
+#define SH_II_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_II_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_II_DEBUG_DATA_II_DATA */
+/* Description: II debug data */
+#define SH_II_DEBUG_DATA_II_DATA_SHFT 0
+#define SH_II_DEBUG_DATA_II_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_II_WRAP_DEBUG_DATA" */
+/* SHub II Wrapper Debug Data */
+/* ==================================================================== */
+
+#define SH_II_WRAP_DEBUG_DATA 0x0000000110072100
+#define SH_II_WRAP_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_II_WRAP_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA */
+/* Description: II wrapper debug data */
+#define SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA_SHFT 0
+#define SH_II_WRAP_DEBUG_DATA_II_WRAP_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_DATA" */
+/* SHub LB Debug Data */
+/* ==================================================================== */
+
+#define SH_LB_DEBUG_DATA 0x0000000110072180
+#define SH_LB_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_LB_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_LB_DEBUG_DATA_LB_DATA */
+/* Description: LB debug data */
+#define SH_LB_DEBUG_DATA_LB_DATA_SHFT 0
+#define SH_LB_DEBUG_DATA_LB_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DEBUG_DATA" */
+/* SHub MD Debug Data */
+/* ==================================================================== */
+
+#define SH_MD_DEBUG_DATA 0x0000000110072200
+#define SH_MD_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_MD_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_MD_DEBUG_DATA_MD_DATA */
+/* Description: MD debug data */
+#define SH_MD_DEBUG_DATA_MD_DATA_SHFT 0
+#define SH_MD_DEBUG_DATA_MD_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PI_DEBUG_DATA" */
+/* SHub PI Debug Data */
+/* ==================================================================== */
+
+#define SH_PI_DEBUG_DATA 0x0000000110072280
+#define SH_PI_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_PI_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_PI_DEBUG_DATA_PI_DATA */
+/* Description: PI Debug Data */
+#define SH_PI_DEBUG_DATA_PI_DATA_SHFT 0
+#define SH_PI_DEBUG_DATA_PI_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_DATA" */
+/* SHub XN Debug Data */
+/* ==================================================================== */
+
+#define SH_XN_DEBUG_DATA 0x0000000110072300
+#define SH_XN_DEBUG_DATA_MASK 0x00000000ffffffff
+#define SH_XN_DEBUG_DATA_INIT 0x0000000000000000
+
+/* SH_XN_DEBUG_DATA_XN_DATA */
+/* Description: XN debug data */
+#define SH_XN_DEBUG_DATA_XN_DATA_SHFT 0
+#define SH_XN_DEBUG_DATA_XN_DATA_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_TSF_ARMED_STATE" */
+/* Trigger sequencing facility arm state */
+/* ==================================================================== */
+
+#define SH_TSF_ARMED_STATE 0x0000000110073000
+#define SH_TSF_ARMED_STATE_MASK 0x00000000000000ff
+#define SH_TSF_ARMED_STATE_INIT 0x0000000000000000
+
+/* SH_TSF_ARMED_STATE_STATE */
+/* Description: Trigger sequencing facility armed state */
+#define SH_TSF_ARMED_STATE_STATE_SHFT 0
+#define SH_TSF_ARMED_STATE_STATE_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_TSF_COUNTER_VALUE" */
+/* Trigger sequencing facility counter value */
+/* ==================================================================== */
+
+#define SH_TSF_COUNTER_VALUE 0x0000000110073080
+#define SH_TSF_COUNTER_VALUE_MASK 0xffffffffffffffff
+#define SH_TSF_COUNTER_VALUE_INIT 0x0000000000000000
+
+/* SH_TSF_COUNTER_VALUE_COUNT_32 */
+/* Description: Trigger sequencing facility counter 32 */
+#define SH_TSF_COUNTER_VALUE_COUNT_32_SHFT 0
+#define SH_TSF_COUNTER_VALUE_COUNT_32_MASK 0x00000000ffffffff
+
+/* SH_TSF_COUNTER_VALUE_COUNT_16 */
+/* Description: Trigger sequencing facility counter 16 */
+#define SH_TSF_COUNTER_VALUE_COUNT_16_SHFT 32
+#define SH_TSF_COUNTER_VALUE_COUNT_16_MASK 0x0000ffff00000000
+
+/* SH_TSF_COUNTER_VALUE_COUNT_8B */
+/* Description: Trigger sequencing facility counter 8b */
+#define SH_TSF_COUNTER_VALUE_COUNT_8B_SHFT 48
+#define SH_TSF_COUNTER_VALUE_COUNT_8B_MASK 0x00ff000000000000
+
+/* SH_TSF_COUNTER_VALUE_COUNT_8A */
+/* Description: Trigger sequencing facility counter 8a */
+#define SH_TSF_COUNTER_VALUE_COUNT_8A_SHFT 56
+#define SH_TSF_COUNTER_VALUE_COUNT_8A_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_TSF_TRIGGERED_STATE" */
+/* Trigger sequencing facility triggered state */
+/* ==================================================================== */
+
+#define SH_TSF_TRIGGERED_STATE 0x0000000110073100
+#define SH_TSF_TRIGGERED_STATE_MASK 0x00000000000000ff
+#define SH_TSF_TRIGGERED_STATE_INIT 0x0000000000000000
+
+/* SH_TSF_TRIGGERED_STATE_STATE */
+/* Description: Trigger sequencing facility triggered state */
+#define SH_TSF_TRIGGERED_STATE_STATE_SHFT 0
+#define SH_TSF_TRIGGERED_STATE_STATE_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_VEC_RDDATA" */
+/* Vector Reply Message Data */
+/* ==================================================================== */
+
+#define SH_VEC_RDDATA 0x0000000110074000
+#define SH_VEC_RDDATA_MASK 0xffffffffffffffff
+#define SH_VEC_RDDATA_INIT 0x0000000000000000
+
+/* SH_VEC_RDDATA_DATA */
+/* Description: Data */
+#define SH_VEC_RDDATA_DATA_SHFT 0
+#define SH_VEC_RDDATA_DATA_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_VEC_RETURN" */
+/* Vector Reply Message Return Route */
+/* ==================================================================== */
+
+#define SH_VEC_RETURN 0x0000000110074080
+#define SH_VEC_RETURN_MASK 0xffffffffffffffff
+#define SH_VEC_RETURN_INIT 0x0000000000000000
+
+/* SH_VEC_RETURN_ROUTE */
+/* Description: Route */
+#define SH_VEC_RETURN_ROUTE_SHFT 0
+#define SH_VEC_RETURN_ROUTE_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_VEC_STATUS" */
+/* Vector Reply Message Status */
+/* ==================================================================== */
+
+#define SH_VEC_STATUS 0x0000000110074100
+#define SH_VEC_STATUS_MASK 0xcfffffffffffffff
+#define SH_VEC_STATUS_INIT 0x0000000000000000
+
+/* SH_VEC_STATUS_TYPE */
+/* Description: Type */
+#define SH_VEC_STATUS_TYPE_SHFT 0
+#define SH_VEC_STATUS_TYPE_MASK 0x0000000000000007
+
+/* SH_VEC_STATUS_ADDRESS */
+/* Description: Address */
+#define SH_VEC_STATUS_ADDRESS_SHFT 3
+#define SH_VEC_STATUS_ADDRESS_MASK 0x00000007fffffff8
+
+/* SH_VEC_STATUS_PIO_ID */
+/* Description: PIO ID */
+#define SH_VEC_STATUS_PIO_ID_SHFT 35
+#define SH_VEC_STATUS_PIO_ID_MASK 0x00003ff800000000
+
+/* SH_VEC_STATUS_SOURCE */
+/* Description: Source */
+#define SH_VEC_STATUS_SOURCE_SHFT 46
+#define SH_VEC_STATUS_SOURCE_MASK 0x0fffc00000000000
+
+/* SH_VEC_STATUS_OVERRUN */
+/* Description: Overrun */
+#define SH_VEC_STATUS_OVERRUN_SHFT 62
+#define SH_VEC_STATUS_OVERRUN_MASK 0x4000000000000000
+
+/* SH_VEC_STATUS_STATUS_VALID */
+/* Description: Status_Valid */
+#define SH_VEC_STATUS_STATUS_VALID_SHFT 63
+#define SH_VEC_STATUS_STATUS_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_VEC_STATUS_ALIAS" */
+/* Vector Reply Message Status Alias */
+/* ==================================================================== */
+
+#define SH_VEC_STATUS_ALIAS 0x0000000110074108
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT0_CONTROL" */
+/* Performance Counter 0 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT0_CONTROL 0x0000000110080000
+#define SH_PERFORMANCE_COUNT0_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT0_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS */
+/* Description: Counter 0 up stimulus */
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT */
+/* Description: Counter 0 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY */
+/* Description: Counter 0 up polarity select (1-negative edge, 0-positive edge) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE */
+/* Description: Counter 0 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT0_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS */
+/* Description: Counter 0 down stimulus */
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT */
+/* Description: Counter 0 down event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY */
+/* Description: Counter 0 down polarity select (1-negative edge, 0-positive edge) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE */
+/* Description: Counter 0 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT0_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE */
+/* Description: Counter 0 enable increment */
+#define SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT0_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE */
+/* Description: Counter 0 enable decrement */
+#define SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT0_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 0 enable peak detection */
+#define SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT0_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT1_CONTROL" */
+/* Performance Counter 1 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT1_CONTROL 0x0000000110090000
+#define SH_PERFORMANCE_COUNT1_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT1_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS */
+/* Description: Counter 1 up stimulus */
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT */
+/* Description: Counter 1 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY */
+/* Description: Counter 1 up polarity select (1-negative edge, 0-positive edge) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE */
+/* Description: Counter 1 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT1_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS */
+/* Description: Counter 1 down stimulus */
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT */
+/* Description: Counter 1 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY */
+/* Description: Counter 1 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE */
+/* Description: Counter 1 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT1_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE */
+/* Description: Counter 1 enable increment */
+#define SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT1_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE */
+/* Description: Counter 1 enable decrement */
+#define SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT1_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 1 enable peak detection */
+#define SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT1_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT2_CONTROL" */
+/* Performance Counter 2 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT2_CONTROL 0x00000001100a0000
+#define SH_PERFORMANCE_COUNT2_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT2_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS */
+/* Description: Counter 2 up stimulus */
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT */
+/* Description: Counter 2 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY */
+/* Description: Counter 2 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE */
+/* Description: Counter 2 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT2_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS */
+/* Description: Counter 2 down stimulus */
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT */
+/* Description: Counter 2 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY */
+/* Description: Counter 2 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE */
+/* Description: Counter 2 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT2_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE */
+/* Description: Counter 2 enable increment */
+#define SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT2_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE */
+/* Description: Counter 2 enable decrement */
+#define SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT2_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 2 enable peak detection */
+#define SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT2_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT3_CONTROL" */
+/* Performance Counter 3 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT3_CONTROL 0x00000001100b0000
+#define SH_PERFORMANCE_COUNT3_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT3_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS */
+/* Description: Counter 3 up stimulus */
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT */
+/* Description: Counter 3 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY */
+/* Description: Counter 3 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE */
+/* Description: Counter 3 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT3_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS */
+/* Description: Counter 3 down stimulus */
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT */
+/* Description: Counter 3 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY */
+/* Description: Counter 3 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE */
+/* Description: Counter 3 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT3_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE */
+/* Description: Counter 3 enable increment */
+#define SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT3_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE */
+/* Description: Counter 3 enable decrement */
+#define SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT3_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 3 enable peak detection */
+#define SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT3_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT4_CONTROL" */
+/* Performance Counter 4 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT4_CONTROL 0x00000001100c0000
+#define SH_PERFORMANCE_COUNT4_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT4_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS */
+/* Description: Counter 4 up stimulus */
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT */
+/* Description: Counter 4 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY */
+/* Description: Counter 4 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE */
+/* Description: Counter 4 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT4_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS */
+/* Description: Counter 4 down stimulus */
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT */
+/* Description: Counter 4 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY */
+/* Description: Counter 4 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE */
+/* Description: Counter 4 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT4_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE */
+/* Description: Counter 4 enable increment */
+#define SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT4_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE */
+/* Description: Counter 4 enable decrement */
+#define SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT4_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 4 enable peak detection */
+#define SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT4_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT5_CONTROL" */
+/* Performance Counter 5 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT5_CONTROL 0x00000001100d0000
+#define SH_PERFORMANCE_COUNT5_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT5_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS */
+/* Description: Counter 5 up stimulus */
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT */
+/* Description: Counter 5 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY */
+/* Description: Counter 5 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE */
+/* Description: Counter 5 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT5_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS */
+/* Description: Counter 5 down stimulus */
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT */
+/* Description: Counter 5 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY */
+/* Description: Counter 5 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE */
+/* Description: Counter 5 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT5_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE */
+/* Description: Counter 5 enable increment */
+#define SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT5_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE */
+/* Description: Counter 5 enable decrement */
+#define SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT5_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 5 enable peak detection */
+#define SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT5_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT6_CONTROL" */
+/* Performance Counter 6 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT6_CONTROL 0x00000001100e0000
+#define SH_PERFORMANCE_COUNT6_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT6_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS */
+/* Description: Counter 6 up stimulus */
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT */
+/* Description: Counter 6 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY */
+/* Description: Counter 6 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE */
+/* Description: Counter 6 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT6_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS */
+/* Description: Counter 6 down stimulus */
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT */
+/* Description: Counter 6 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY */
+/* Description: Counter 6 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE */
+/* Description: Counter 6 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT6_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE */
+/* Description: Counter 6 enable increment */
+#define SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT6_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE */
+/* Description: Counter 6 enable decrement */
+#define SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT6_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 6 enable peak detection */
+#define SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT6_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT7_CONTROL" */
+/* Performance Counter 7 Control */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNT7_CONTROL 0x00000001100f0000
+#define SH_PERFORMANCE_COUNT7_CONTROL_MASK 0x000000000007ffff
+#define SH_PERFORMANCE_COUNT7_CONTROL_INIT 0x000000000000b8b8
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS */
+/* Description: Counter 7 up stimulus */
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS_SHFT 0
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT */
+/* Description: Counter 7 up event select (1-greater than, 0-equal) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT_SHFT 5
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_EVENT_MASK 0x0000000000000020
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY */
+/* Description: Counter 7 up polarity select (1-negative edge, 0-po */
+/* sitive edge) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY_SHFT 6
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_POLARITY_MASK 0x0000000000000040
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE */
+/* Description: Counter 7 up mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE_SHFT 7
+#define SH_PERFORMANCE_COUNT7_CONTROL_UP_MODE_MASK 0x0000000000000080
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS */
+/* Description: Counter 7 down stimulus */
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS_SHFT 8
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_STIMULUS_MASK 0x0000000000001f00
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT */
+/* Description: Counter 7 down event select (1-greater than, 0-equa */
+/* l) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT_SHFT 13
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_EVENT_MASK 0x0000000000002000
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY */
+/* Description: Counter 7 down polarity select (1-negative edge, 0- */
+/* positive edge) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY_SHFT 14
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_POLARITY_MASK 0x0000000000004000
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE */
+/* Description: Counter 7 down mode select (1-internal, 0-external) */
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE_SHFT 15
+#define SH_PERFORMANCE_COUNT7_CONTROL_DN_MODE_MASK 0x0000000000008000
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE */
+/* Description: Counter 7 enable increment */
+#define SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE_SHFT 16
+#define SH_PERFORMANCE_COUNT7_CONTROL_INC_ENABLE_MASK 0x0000000000010000
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE */
+/* Description: Counter 7 enable decrement */
+#define SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE_SHFT 17
+#define SH_PERFORMANCE_COUNT7_CONTROL_DEC_ENABLE_MASK 0x0000000000020000
+
+/* SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE */
+/* Description: Counter 7 enable peak detection */
+#define SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE_SHFT 18
+#define SH_PERFORMANCE_COUNT7_CONTROL_PEAK_DET_ENABLE_MASK 0x0000000000040000
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_DN_CONTROL" */
+/* Profile Counter Down Control */
+/* ==================================================================== */
+
+#define SH_PROFILE_DN_CONTROL 0x0000000110100000
+#define SH_PROFILE_DN_CONTROL_MASK 0x00000000000000ff
+#define SH_PROFILE_DN_CONTROL_INIT 0x00000000000000b8
+
+/* SH_PROFILE_DN_CONTROL_STIMULUS */
+/* Description: Counter stimulus */
+#define SH_PROFILE_DN_CONTROL_STIMULUS_SHFT 0
+#define SH_PROFILE_DN_CONTROL_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PROFILE_DN_CONTROL_EVENT */
+/* Description: Counter event select (1-greater than, 0-equal) */
+#define SH_PROFILE_DN_CONTROL_EVENT_SHFT 5
+#define SH_PROFILE_DN_CONTROL_EVENT_MASK 0x0000000000000020
+
+/* SH_PROFILE_DN_CONTROL_POLARITY */
+/* Description: Counter polarity select (1-negative edge, 0-positiv */
+/* e edge) */
+#define SH_PROFILE_DN_CONTROL_POLARITY_SHFT 6
+#define SH_PROFILE_DN_CONTROL_POLARITY_MASK 0x0000000000000040
+
+/* SH_PROFILE_DN_CONTROL_MODE */
+/* Description: Counter mode select (1-internal, 0-external) */
+#define SH_PROFILE_DN_CONTROL_MODE_SHFT 7
+#define SH_PROFILE_DN_CONTROL_MODE_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_PEAK_CONTROL" */
+/* Profile Counter Peak Control */
+/* ==================================================================== */
+
+#define SH_PROFILE_PEAK_CONTROL 0x0000000110100080
+#define SH_PROFILE_PEAK_CONTROL_MASK 0x0000000000000068
+#define SH_PROFILE_PEAK_CONTROL_INIT 0x0000000000000060
+
+/* SH_PROFILE_PEAK_CONTROL_STIMULUS */
+/* Description: Counter stimulus */
+#define SH_PROFILE_PEAK_CONTROL_STIMULUS_SHFT 3
+#define SH_PROFILE_PEAK_CONTROL_STIMULUS_MASK 0x0000000000000008
+
+/* SH_PROFILE_PEAK_CONTROL_EVENT */
+/* Description: Counter event select (0-greater than, 1-equal) */
+#define SH_PROFILE_PEAK_CONTROL_EVENT_SHFT 5
+#define SH_PROFILE_PEAK_CONTROL_EVENT_MASK 0x0000000000000020
+
+/* SH_PROFILE_PEAK_CONTROL_POLARITY */
+/* Description: Counter polarity select (0-negative edge, 1-positiv */
+/* e edge) */
+#define SH_PROFILE_PEAK_CONTROL_POLARITY_SHFT 6
+#define SH_PROFILE_PEAK_CONTROL_POLARITY_MASK 0x0000000000000040
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_RANGE" */
+/* Profile Counter Range */
+/* ==================================================================== */
+
+#define SH_PROFILE_RANGE 0x0000000110100100
+#define SH_PROFILE_RANGE_MASK 0xffffffffffffffff
+#define SH_PROFILE_RANGE_INIT 0x0000000000000000
+
+/* SH_PROFILE_RANGE_RANGE0 */
+/* Description: Profiling range 0 */
+#define SH_PROFILE_RANGE_RANGE0_SHFT 0
+#define SH_PROFILE_RANGE_RANGE0_MASK 0x00000000000000ff
+
+/* SH_PROFILE_RANGE_RANGE1 */
+/* Description: Profiling range 1 */
+#define SH_PROFILE_RANGE_RANGE1_SHFT 8
+#define SH_PROFILE_RANGE_RANGE1_MASK 0x000000000000ff00
+
+/* SH_PROFILE_RANGE_RANGE2 */
+/* Description: Profiling range 2 */
+#define SH_PROFILE_RANGE_RANGE2_SHFT 16
+#define SH_PROFILE_RANGE_RANGE2_MASK 0x0000000000ff0000
+
+/* SH_PROFILE_RANGE_RANGE3 */
+/* Description: Profiling range 3 */
+#define SH_PROFILE_RANGE_RANGE3_SHFT 24
+#define SH_PROFILE_RANGE_RANGE3_MASK 0x00000000ff000000
+
+/* SH_PROFILE_RANGE_RANGE4 */
+/* Description: Profiling range 4 */
+#define SH_PROFILE_RANGE_RANGE4_SHFT 32
+#define SH_PROFILE_RANGE_RANGE4_MASK 0x000000ff00000000
+
+/* SH_PROFILE_RANGE_RANGE5 */
+/* Description: Profiling range 5 */
+#define SH_PROFILE_RANGE_RANGE5_SHFT 40
+#define SH_PROFILE_RANGE_RANGE5_MASK 0x0000ff0000000000
+
+/* SH_PROFILE_RANGE_RANGE6 */
+/* Description: Profiling range 6 */
+#define SH_PROFILE_RANGE_RANGE6_SHFT 48
+#define SH_PROFILE_RANGE_RANGE6_MASK 0x00ff000000000000
+
+/* SH_PROFILE_RANGE_RANGE7 */
+/* Description: Profiling range 7 */
+#define SH_PROFILE_RANGE_RANGE7_SHFT 56
+#define SH_PROFILE_RANGE_RANGE7_MASK 0xff00000000000000
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_UP_CONTROL" */
+/* Profile Counter Up Control */
+/* ==================================================================== */
+
+#define SH_PROFILE_UP_CONTROL 0x0000000110100180
+#define SH_PROFILE_UP_CONTROL_MASK 0x00000000000000ff
+#define SH_PROFILE_UP_CONTROL_INIT 0x00000000000000b8
+
+/* SH_PROFILE_UP_CONTROL_STIMULUS */
+/* Description: Counter stimulus */
+#define SH_PROFILE_UP_CONTROL_STIMULUS_SHFT 0
+#define SH_PROFILE_UP_CONTROL_STIMULUS_MASK 0x000000000000001f
+
+/* SH_PROFILE_UP_CONTROL_EVENT */
+/* Description: Counter event select (1-greater than, 0-equal) */
+#define SH_PROFILE_UP_CONTROL_EVENT_SHFT 5
+#define SH_PROFILE_UP_CONTROL_EVENT_MASK 0x0000000000000020
+
+/* SH_PROFILE_UP_CONTROL_POLARITY */
+/* Description: Counter polarity select (1-negative edge, 0-positiv */
+/* e edge) */
+#define SH_PROFILE_UP_CONTROL_POLARITY_SHFT 6
+#define SH_PROFILE_UP_CONTROL_POLARITY_MASK 0x0000000000000040
+
+/* SH_PROFILE_UP_CONTROL_MODE */
+/* Description: Counter mode select (1-internal, 0-external) */
+#define SH_PROFILE_UP_CONTROL_MODE_SHFT 7
+#define SH_PROFILE_UP_CONTROL_MODE_MASK 0x0000000000000080
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER0" */
+/* Performance Counter 0 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER0 0x0000000110110000
+#define SH_PERFORMANCE_COUNTER0_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER0_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER0_COUNT */
+/* Description: Counter 0 */
+#define SH_PERFORMANCE_COUNTER0_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER0_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER1" */
+/* Performance Counter 1 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER1 0x0000000110120000
+#define SH_PERFORMANCE_COUNTER1_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER1_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER1_COUNT */
+/* Description: Counter 1 */
+#define SH_PERFORMANCE_COUNTER1_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER1_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER2" */
+/* Performance Counter 2 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER2 0x0000000110130000
+#define SH_PERFORMANCE_COUNTER2_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER2_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER2_COUNT */
+/* Description: Counter 2 */
+#define SH_PERFORMANCE_COUNTER2_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER2_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER3" */
+/* Performance Counter 3 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER3 0x0000000110140000
+#define SH_PERFORMANCE_COUNTER3_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER3_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER3_COUNT */
+/* Description: Counter 3 */
+#define SH_PERFORMANCE_COUNTER3_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER3_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER4" */
+/* Performance Counter 4 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER4 0x0000000110150000
+#define SH_PERFORMANCE_COUNTER4_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER4_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER4_COUNT */
+/* Description: Counter 4 */
+#define SH_PERFORMANCE_COUNTER4_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER4_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER5" */
+/* Performance Counter 5 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER5 0x0000000110160000
+#define SH_PERFORMANCE_COUNTER5_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER5_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER5_COUNT */
+/* Description: Counter 5 */
+#define SH_PERFORMANCE_COUNTER5_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER5_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER6" */
+/* Performance Counter 6 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER6 0x0000000110170000
+#define SH_PERFORMANCE_COUNTER6_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER6_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER6_COUNT */
+/* Description: Counter 6 */
+#define SH_PERFORMANCE_COUNTER6_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER6_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER7" */
+/* Performance Counter 7 */
+/* ==================================================================== */
+
+#define SH_PERFORMANCE_COUNTER7 0x0000000110180000
+#define SH_PERFORMANCE_COUNTER7_MASK 0x00000000ffffffff
+#define SH_PERFORMANCE_COUNTER7_INIT 0x0000000000000000
+
+/* SH_PERFORMANCE_COUNTER7_COUNT */
+/* Description: Counter 7 */
+#define SH_PERFORMANCE_COUNTER7_COUNT_SHFT 0
+#define SH_PERFORMANCE_COUNTER7_COUNT_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_COUNTER" */
+/* Profile Counter */
+/* ==================================================================== */
+
+#define SH_PROFILE_COUNTER 0x0000000110190000
+#define SH_PROFILE_COUNTER_MASK 0x00000000000000ff
+#define SH_PROFILE_COUNTER_INIT 0x0000000000000000
+
+/* SH_PROFILE_COUNTER_COUNTER */
+/* Description: Counter Value */
+#define SH_PROFILE_COUNTER_COUNTER_SHFT 0
+#define SH_PROFILE_COUNTER_COUNTER_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_PEAK" */
+/* Profile Peak Counter */
+/* ==================================================================== */
+
+#define SH_PROFILE_PEAK 0x0000000110190080
+#define SH_PROFILE_PEAK_MASK 0x00000000000000ff
+#define SH_PROFILE_PEAK_INIT 0x0000000000000000
+
+/* SH_PROFILE_PEAK_COUNTER */
+/* Description: Counter Value */
+#define SH_PROFILE_PEAK_COUNTER_SHFT 0
+#define SH_PROFILE_PEAK_COUNTER_MASK 0x00000000000000ff
+
+/* ==================================================================== */
+/* Register "SH_PTC_0" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+
+#define SH_PTC_0 0x00000001101a0000
+#define SH_PTC_0_MASK 0x80000000fffffffd
+#define SH_PTC_0_INIT 0x0000000000000000
+
+/* SH_PTC_0_A */
+/* Description: Type */
+#define SH_PTC_0_A_SHFT 0
+#define SH_PTC_0_A_MASK 0x0000000000000001
+
+/* SH_PTC_0_PS */
+/* Description: Page Size */
+#define SH_PTC_0_PS_SHFT 2
+#define SH_PTC_0_PS_MASK 0x00000000000000fc
+
+/* SH_PTC_0_RID */
+/* Description: Region ID */
+#define SH_PTC_0_RID_SHFT 8
+#define SH_PTC_0_RID_MASK 0x00000000ffffff00
+
+/* SH_PTC_0_START */
+/* Description: Start */
+#define SH_PTC_0_START_SHFT 63
+#define SH_PTC_0_START_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_1" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+
+#define SH_PTC_1 0x00000001101a0080
+#define SH_PTC_1_MASK 0x9ffffffffffff000
+#define SH_PTC_1_INIT 0x0000000000000000
+
+/* SH_PTC_1_VPN */
+/* Description: Virtual page number */
+#define SH_PTC_1_VPN_SHFT 12
+#define SH_PTC_1_VPN_MASK 0x1ffffffffffff000
+
+/* SH_PTC_1_START */
+/* Description: PTC_1 Start */
+#define SH_PTC_1_START_SHFT 63
+#define SH_PTC_1_START_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_PARMS" */
+/* PTC Time-out parameters */
+/* ==================================================================== */
+
+#define SH_PTC_PARMS 0x00000001101a0100
+#define SH_PTC_PARMS_MASK 0x0000000fffffffff
+#define SH_PTC_PARMS_INIT 0x00000007ffffffff
+
+/* SH_PTC_PARMS_PTC_TO_WRAP */
+/* Description: PTC time-out period */
+#define SH_PTC_PARMS_PTC_TO_WRAP_SHFT 0
+#define SH_PTC_PARMS_PTC_TO_WRAP_MASK 0x0000000000ffffff
+
+/* SH_PTC_PARMS_PTC_TO_VAL */
+/* Description: PTC time-out valid */
+#define SH_PTC_PARMS_PTC_TO_VAL_SHFT 24
+#define SH_PTC_PARMS_PTC_TO_VAL_MASK 0x0000000fff000000
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPA" */
+/* RTC Compare Value for Processor A */
+/* ==================================================================== */
+
+#define SH_INT_CMPA 0x00000001101b0000
+#define SH_INT_CMPA_MASK 0x007fffffffffffff
+#define SH_INT_CMPA_INIT 0x0000000000000000
+
+/* SH_INT_CMPA_REAL_TIME_CMPA */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPA_REAL_TIME_CMPA_SHFT 0
+#define SH_INT_CMPA_REAL_TIME_CMPA_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPB" */
+/* RTC Compare Value for Processor B */
+/* ==================================================================== */
+
+#define SH_INT_CMPB 0x00000001101b0080
+#define SH_INT_CMPB_MASK 0x007fffffffffffff
+#define SH_INT_CMPB_INIT 0x0000000000000000
+
+/* SH_INT_CMPB_REAL_TIME_CMPB */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPB_REAL_TIME_CMPB_SHFT 0
+#define SH_INT_CMPB_REAL_TIME_CMPB_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPC" */
+/* RTC Compare Value for Processor C */
+/* ==================================================================== */
+
+#define SH_INT_CMPC 0x00000001101b0100
+#define SH_INT_CMPC_MASK 0x007fffffffffffff
+#define SH_INT_CMPC_INIT 0x0000000000000000
+
+/* SH_INT_CMPC_REAL_TIME_CMPC */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPC_REAL_TIME_CMPC_SHFT 0
+#define SH_INT_CMPC_REAL_TIME_CMPC_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPD" */
+/* RTC Compare Value for Processor D */
+/* ==================================================================== */
+
+#define SH_INT_CMPD 0x00000001101b0180
+#define SH_INT_CMPD_MASK 0x007fffffffffffff
+#define SH_INT_CMPD_INIT 0x0000000000000000
+
+/* SH_INT_CMPD_REAL_TIME_CMPD */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPD_REAL_TIME_CMPD_SHFT 0
+#define SH_INT_CMPD_REAL_TIME_CMPD_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_PROF" */
+/* Profile Compare Registers */
+/* ==================================================================== */
+
+#define SH_INT_PROF 0x00000001101b0200
+#define SH_INT_PROF_MASK 0x00000000ffffffff
+#define SH_INT_PROF_INIT 0x0000000000000000
+
+/* SH_INT_PROF_PROFILE_COMPARE */
+/* Description: Profile Compare */
+#define SH_INT_PROF_PROFILE_COMPARE_SHFT 0
+#define SH_INT_PROF_PROFILE_COMPARE_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_RTC" */
+/* Real-time Clock */
+/* ==================================================================== */
+
+#define SH_RTC 0x00000001101c0000UL
+#define SH_RTC_MASK 0x007fffffffffffffUL
+#define SH_RTC_INIT 0x0000000000000000
+
+/* SH_RTC_REAL_TIME_CLOCK */
+/* Description: Real-time Clock */
+#define SH_RTC_REAL_TIME_CLOCK_SHFT 0
+#define SH_RTC_REAL_TIME_CLOCK_MASK 0x007fffffffffffffUL
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH0" */
+/* Scratch Register 0 */
+/* ==================================================================== */
+
+#define SH_SCRATCH0 0x00000001101d0000
+#define SH_SCRATCH0_MASK 0xffffffffffffffff
+#define SH_SCRATCH0_INIT 0x0000000000000000
+
+/* SH_SCRATCH0_SCRATCH0 */
+/* Description: Scratch register 0 */
+#define SH_SCRATCH0_SCRATCH0_SHFT 0
+#define SH_SCRATCH0_SCRATCH0_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH0_ALIAS" */
+/* Scratch Register 0 Alias Address */
+/* ==================================================================== */
+
+#define SH_SCRATCH0_ALIAS 0x00000001101d0008
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH1" */
+/* Scratch Register 1 */
+/* ==================================================================== */
+
+#define SH_SCRATCH1 0x00000001101d0080
+#define SH_SCRATCH1_MASK 0xffffffffffffffff
+#define SH_SCRATCH1_INIT 0x0000000000000000
+
+/* SH_SCRATCH1_SCRATCH1 */
+/* Description: Scratch register 1 */
+#define SH_SCRATCH1_SCRATCH1_SHFT 0
+#define SH_SCRATCH1_SCRATCH1_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH1_ALIAS" */
+/* Scratch Register 1 Alias Address */
+/* ==================================================================== */
+
+#define SH_SCRATCH1_ALIAS 0x00000001101d0088
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH2" */
+/* Scratch Register 2 */
+/* ==================================================================== */
+
+#define SH_SCRATCH2 0x00000001101d0100
+#define SH_SCRATCH2_MASK 0xffffffffffffffff
+#define SH_SCRATCH2_INIT 0x0000000000000000
+
+/* SH_SCRATCH2_SCRATCH2 */
+/* Description: Scratch register 2 */
+#define SH_SCRATCH2_SCRATCH2_SHFT 0
+#define SH_SCRATCH2_SCRATCH2_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH2_ALIAS" */
+/* Scratch Register 2 Alias Address */
+/* ==================================================================== */
+
+#define SH_SCRATCH2_ALIAS 0x00000001101d0108
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH3" */
+/* Scratch Register 3 */
+/* ==================================================================== */
+
+#define SH_SCRATCH3 0x00000001101d0180
+#define SH_SCRATCH3_MASK 0x0000000000000001
+#define SH_SCRATCH3_INIT 0x0000000000000000
+
+/* SH_SCRATCH3_SCRATCH3 */
+/* Description: Scratch register 3 */
+#define SH_SCRATCH3_SCRATCH3_SHFT 0
+#define SH_SCRATCH3_SCRATCH3_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH3_ALIAS" */
+/* Scratch Register 3 Alias Address */
+/* ==================================================================== */
+
+#define SH_SCRATCH3_ALIAS 0x00000001101d0188
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH4" */
+/* Scratch Register 4 */
+/* ==================================================================== */
+
+#define SH_SCRATCH4 0x00000001101d0200
+#define SH_SCRATCH4_MASK 0x0000000000000001
+#define SH_SCRATCH4_INIT 0x0000000000000000
+
+/* SH_SCRATCH4_SCRATCH4 */
+/* Description: Scratch register 4 */
+#define SH_SCRATCH4_SCRATCH4_SHFT 0
+#define SH_SCRATCH4_SCRATCH4_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH4_ALIAS" */
+/* Scratch Register 4 Alias Address */
+/* ==================================================================== */
+
+#define SH_SCRATCH4_ALIAS 0x00000001101d0208
+
+/* ==================================================================== */
+/* Register "SH_CRB_MESSAGE_CONTROL" */
+/* Coherent Request Buffer Message Control */
+/* ==================================================================== */
+
+#define SH_CRB_MESSAGE_CONTROL 0x0000000120000000
+#define SH_CRB_MESSAGE_CONTROL_MASK 0xffffffff00000fff
+#define SH_CRB_MESSAGE_CONTROL_INIT 0x0000000000000006
+
+/* SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE */
+/* Description: System Coherence Enabled */
+#define SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE_SHFT 0
+#define SH_CRB_MESSAGE_CONTROL_SYSTEM_COHERENCE_ENABLE_MASK 0x0000000000000001
+
+/* SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE */
+/* Description: Speculative Read Requests to Local Memory Enabled */
+#define SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE_SHFT 1
+#define SH_CRB_MESSAGE_CONTROL_LOCAL_SPECULATIVE_MESSAGE_ENABLE_MASK 0x0000000000000002
+
+/* SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE */
+/* Description: Speculative Read Requests to Remote Memory Enabled */
+#define SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE_SHFT 2
+#define SH_CRB_MESSAGE_CONTROL_REMOTE_SPECULATIVE_MESSAGE_ENABLE_MASK 0x0000000000000004
+
+/* SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR */
+/* Description: Define color of message */
+#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_SHFT 3
+#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_MASK 0x0000000000000008
+
+/* SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE */
+/* Description: Enable color message processing */
+#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE_SHFT 4
+#define SH_CRB_MESSAGE_CONTROL_MESSAGE_COLOR_ENABLE_MASK 0x0000000000000010
+
+/* SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */
+/* Description: Enable FSB RRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 5
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000020
+
+/* SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */
+/* Description: Enable FSB WRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 6
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000040
+
+/* SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE */
+/* Description: Enable FSB IRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_SHFT 7
+#define SH_CRB_MESSAGE_CONTROL_IRB_ATTRIBUTE_MISMATCH_FSB_ENABLE_MASK 0x0000000000000080
+
+/* SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE */
+/* Description: Enable XB RRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE_SHFT 8
+#define SH_CRB_MESSAGE_CONTROL_RRB_ATTRIBUTE_MISMATCH_XB_ENABLE_MASK 0x0000000000000100
+
+/* SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE */
+/* Description: Enable XB WRB Mismatch check */
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE_SHFT 9
+#define SH_CRB_MESSAGE_CONTROL_WRB_ATTRIBUTE_MISMATCH_XB_ENABLE_MASK 0x0000000000000200
+
+/* SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES */
+/* Description: Ignore residual write data */
+#define SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES_SHFT 10
+#define SH_CRB_MESSAGE_CONTROL_SUPPRESS_BOGUS_WRITES_MASK 0x0000000000000400
+
+/* SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION */
+/* Description: enable IVACK reply consolidation */
+#define SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION_SHFT 11
+#define SH_CRB_MESSAGE_CONTROL_ENABLE_IVACK_CONSOLIDATION_MASK 0x0000000000000800
+
+/* SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT */
+/* Description: IVACK stall counter */
+#define SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT_SHFT 32
+#define SH_CRB_MESSAGE_CONTROL_IVACK_STALL_COUNT_MASK 0x0000ffff00000000
+
+/* SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL */
+/* Description: IVACK throttling limit/timer control */
+#define SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL_SHFT 48
+#define SH_CRB_MESSAGE_CONTROL_IVACK_THROTTLE_CONTROL_MASK 0xffff000000000000
+
+/* ==================================================================== */
+/* Register "SH_CRB_NACK_LIMIT" */
+/* CRB Nack Limit */
+/* ==================================================================== */
+
+#define SH_CRB_NACK_LIMIT 0x0000000120000080
+#define SH_CRB_NACK_LIMIT_MASK 0x800000000000ffff
+#define SH_CRB_NACK_LIMIT_INIT 0x0000000000000000
+
+/* SH_CRB_NACK_LIMIT_LIMIT */
+/* Description: Nack Count Limit */
+#define SH_CRB_NACK_LIMIT_LIMIT_SHFT 0
+#define SH_CRB_NACK_LIMIT_LIMIT_MASK 0x0000000000000fff
+
+/* SH_CRB_NACK_LIMIT_PRI_FREQ */
+/* Description: Frequency at which priority count is incremented */
+#define SH_CRB_NACK_LIMIT_PRI_FREQ_SHFT 12
+#define SH_CRB_NACK_LIMIT_PRI_FREQ_MASK 0x000000000000f000
+
+/* SH_CRB_NACK_LIMIT_ENABLE */
+/* Description: Enable NACK limit detection */
+#define SH_CRB_NACK_LIMIT_ENABLE_SHFT 63
+#define SH_CRB_NACK_LIMIT_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_PRESCALE" */
+/* Coherent Request Buffer Timeout Prescale */
+/* ==================================================================== */
+
+#define SH_CRB_TIMEOUT_PRESCALE 0x0000000120000100
+#define SH_CRB_TIMEOUT_PRESCALE_MASK 0x00000000ffffffff
+#define SH_CRB_TIMEOUT_PRESCALE_INIT 0x0000000000000000
+
+/* SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR */
+/* Description: CRB Time-out Prescale Factor */
+#define SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR_SHFT 0
+#define SH_CRB_TIMEOUT_PRESCALE_SCALING_FACTOR_MASK 0x00000000ffffffff
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_SKID" */
+/* Coherent Request Buffer Timeout Skid Limit */
+/* ==================================================================== */
+
+#define SH_CRB_TIMEOUT_SKID 0x0000000120000180
+#define SH_CRB_TIMEOUT_SKID_MASK 0x800000000000003f
+#define SH_CRB_TIMEOUT_SKID_INIT 0x0000000000000007
+
+/* SH_CRB_TIMEOUT_SKID_SKID */
+/* Description: CRB Time-out Skid */
+#define SH_CRB_TIMEOUT_SKID_SKID_SHFT 0
+#define SH_CRB_TIMEOUT_SKID_SKID_MASK 0x000000000000003f
+
+/* SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT */
+/* Description: Reset Skid counter */
+#define SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT_SHFT 63
+#define SH_CRB_TIMEOUT_SKID_RESET_SKID_COUNT_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_0" */
+/* Memory Write Status for CPU 0 */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_0 0x0000000120070000
+#define SH_MEMORY_WRITE_STATUS_0_MASK 0x000000000000003f
+#define SH_MEMORY_WRITE_STATUS_0_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_1" */
+/* Memory Write Status for CPU 1 */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_1 0x0000000120070080
+#define SH_MEMORY_WRITE_STATUS_1_MASK 0x000000000000003f
+#define SH_MEMORY_WRITE_STATUS_1_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_1_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0" */
+/* PIO Write Status for CPU 0 */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_0 0x0000000120070200
+#define SH_PIO_WRITE_STATUS_0_MASK 0xbf03ffffffffffff
+#define SH_PIO_WRITE_STATUS_0_INIT 0x8000000000000000
+
+/* SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR */
+/* Description: More than one PIO write error occurred */
+#define SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR_SHFT 0
+#define SH_PIO_WRITE_STATUS_0_MULTI_WRITE_ERROR_MASK 0x0000000000000001
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK */
+/* Description: Deadlock response detected */
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_SHFT 1
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_MASK 0x0000000000000002
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_ERROR */
+/* Description: Error response detected */
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_SHFT 2
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_MASK 0x0000000000000004
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS */
+/* Description: Address associated with error response */
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS_SHFT 3
+#define SH_PIO_WRITE_STATUS_0_WRITE_ERROR_ADDRESS_MASK 0x0003fffffffffff8
+
+/* SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT */
+/* Description: Count of currently pending PIO writes */
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK 0x3f00000000000000
+
+/* SH_PIO_WRITE_STATUS_0_WRITES_OK */
+/* Description: No pending writes or errors */
+#define SH_PIO_WRITE_STATUS_0_WRITES_OK_SHFT 63
+#define SH_PIO_WRITE_STATUS_0_WRITES_OK_MASK 0x8000000000000000
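+
+/*
+ * Usage sketch (illustrative only, not part of the generated MMR
+ * definitions): each field is read by ANDing the register value with
+ * its _MASK and shifting right by its _SHFT, e.g. the pending PIO
+ * write count above:
+ *
+ *	count = (status & SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK)
+ *		>> SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT;
+ */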
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_1" */
+/* PIO Write Status for CPU 1 */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_1 0x0000000120070280
+#define SH_PIO_WRITE_STATUS_1_MASK 0xbf03ffffffffffff
+#define SH_PIO_WRITE_STATUS_1_INIT 0x8000000000000000
+
+/* SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR */
+/* Description: More than one PIO write error occurred */
+#define SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR_SHFT 0
+#define SH_PIO_WRITE_STATUS_1_MULTI_WRITE_ERROR_MASK 0x0000000000000001
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK */
+/* Description: Deadlock response detected */
+#define SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK_SHFT 1
+#define SH_PIO_WRITE_STATUS_1_WRITE_DEADLOCK_MASK 0x0000000000000002
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_ERROR */
+/* Description: Error response detected */
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_SHFT 2
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_MASK 0x0000000000000004
+
+/* SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS */
+/* Description: Address associated with error response */
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS_SHFT 3
+#define SH_PIO_WRITE_STATUS_1_WRITE_ERROR_ADDRESS_MASK 0x0003fffffffffff8
+
+/* SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT */
+/* Description: Count of currently pending PIO writes */
+#define SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_1_PENDING_WRITE_COUNT_MASK 0x3f00000000000000
+
+/* SH_PIO_WRITE_STATUS_1_WRITES_OK */
+/* Description: No pending writes or errors */
+#define SH_PIO_WRITE_STATUS_1_WRITES_OK_SHFT 63
+#define SH_PIO_WRITE_STATUS_1_WRITES_OK_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0_ALIAS" */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_0_ALIAS 0x0000000120070208
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_1_ALIAS" */
+/* ==================================================================== */
+
+#define SH_PIO_WRITE_STATUS_1_ALIAS 0x0000000120070288
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_0" */
+/* Memory Write Status for CPU 0. OS access only */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0 0x0000000120070400
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_MASK 0x800000000000003f
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR */
+/* Description: Clear pending write count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR_SHFT 63
+#define SH_MEMORY_WRITE_STATUS_NON_USER_0_CLEAR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_1" */
+/* Memory Write Status for CPU 1. OS access only */
+/* ==================================================================== */
+
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1 0x0000000120070480
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_MASK 0x800000000000003f
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_INIT 0x0000000000000000
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT */
+/* Description: Pending Write Count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT_SHFT 0
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_PENDING_WRITE_COUNT_MASK 0x000000000000003f
+
+/* SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR */
+/* Description: Clear pending write count */
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR_SHFT 63
+#define SH_MEMORY_WRITE_STATUS_NON_USER_1_CLEAR_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_ERR" */
+/* Error capture for bist read errors */
+/* ==================================================================== */
+
+#define SH_MMRBIST_ERR 0x0000000100000080
+#define SH_MMRBIST_ERR_MASK 0x00000071ffffffff
+#define SH_MMRBIST_ERR_INIT 0x0000000000000000
+
+/* SH_MMRBIST_ERR_ADDR */
+/* Description: dword address of bist error */
+#define SH_MMRBIST_ERR_ADDR_SHFT 0
+#define SH_MMRBIST_ERR_ADDR_MASK 0x00000001ffffffff
+
+/* SH_MMRBIST_ERR_DETECTED */
+/* Description: error detected flag */
+#define SH_MMRBIST_ERR_DETECTED_SHFT 36
+#define SH_MMRBIST_ERR_DETECTED_MASK 0x0000001000000000
+
+/* SH_MMRBIST_ERR_MULTIPLE_DETECTED */
+/* Description: multiple errors detected flag */
+#define SH_MMRBIST_ERR_MULTIPLE_DETECTED_SHFT 37
+#define SH_MMRBIST_ERR_MULTIPLE_DETECTED_MASK 0x0000002000000000
+
+/* SH_MMRBIST_ERR_CANCELLED */
+/* Description: mmr/bist was cancelled */
+#define SH_MMRBIST_ERR_CANCELLED_SHFT 38
+#define SH_MMRBIST_ERR_CANCELLED_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MISC_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+#define SH_MISC_ERR_HDR_LOWER 0x0000000100000088
+#define SH_MISC_ERR_HDR_LOWER_MASK 0x93fffffffffffff8
+#define SH_MISC_ERR_HDR_LOWER_INIT 0x0000000000000000
+
+/* SH_MISC_ERR_HDR_LOWER_ADDR */
+/* Description: upper bits of reference address */
+#define SH_MISC_ERR_HDR_LOWER_ADDR_SHFT 3
+#define SH_MISC_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_MISC_ERR_HDR_LOWER_CMD */
+/* Description: command of reference */
+#define SH_MISC_ERR_HDR_LOWER_CMD_SHFT 36
+#define SH_MISC_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000
+
+/* SH_MISC_ERR_HDR_LOWER_SRC */
+/* Description: source node of reference */
+#define SH_MISC_ERR_HDR_LOWER_SRC_SHFT 44
+#define SH_MISC_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000
+
+/* SH_MISC_ERR_HDR_LOWER_WRITE */
+/* Description: reference is a write */
+#define SH_MISC_ERR_HDR_LOWER_WRITE_SHFT 60
+#define SH_MISC_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000
+
+/* SH_MISC_ERR_HDR_LOWER_VALID */
+/* Description: set when capture occurs */
+#define SH_MISC_ERR_HDR_LOWER_VALID_SHFT 63
+#define SH_MISC_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MISC_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+#define SH_MISC_ERR_HDR_UPPER 0x0000000100000090
+#define SH_MISC_ERR_HDR_UPPER_MASK 0x000000001ff000ff
+#define SH_MISC_ERR_HDR_UPPER_INIT 0x0000000000000000
+
+/* SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL */
+/* Description: indicates a directory protocol error captured */
+#define SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL_SHFT 0
+#define SH_MISC_ERR_HDR_UPPER_DIR_PROTOCOL_MASK 0x0000000000000001
+
+/* SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD */
+/* Description: indicates an illegal command error captured */
+#define SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD_SHFT 1
+#define SH_MISC_ERR_HDR_UPPER_ILLEGAL_CMD_MASK 0x0000000000000002
+
+/* SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR */
+/* Description: indicates a non-existent memory error captured */
+#define SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR_SHFT 2
+#define SH_MISC_ERR_HDR_UPPER_NONEXIST_ADDR_MASK 0x0000000000000004
+
+/* SH_MISC_ERR_HDR_UPPER_RMW_UC */
+/* Description: indicates an uncorrectable store rmw */
+#define SH_MISC_ERR_HDR_UPPER_RMW_UC_SHFT 3
+#define SH_MISC_ERR_HDR_UPPER_RMW_UC_MASK 0x0000000000000008
+
+/* SH_MISC_ERR_HDR_UPPER_RMW_COR */
+/* Description: indicates a correctable store rmw */
+#define SH_MISC_ERR_HDR_UPPER_RMW_COR_SHFT 4
+#define SH_MISC_ERR_HDR_UPPER_RMW_COR_MASK 0x0000000000000010
+
+/* SH_MISC_ERR_HDR_UPPER_DIR_ACC */
+/* Description: indicates a data request to directory memory error */
+/* captured */
+#define SH_MISC_ERR_HDR_UPPER_DIR_ACC_SHFT 5
+#define SH_MISC_ERR_HDR_UPPER_DIR_ACC_MASK 0x0000000000000020
+
+/* SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE */
+/* Description: indicates a pkt size error from pi */
+#define SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE_SHFT 6
+#define SH_MISC_ERR_HDR_UPPER_PI_PKT_SIZE_MASK 0x0000000000000040
+
+/* SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE */
+/* Description: indicates a pkt size error from xn */
+#define SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE_SHFT 7
+#define SH_MISC_ERR_HDR_UPPER_XN_PKT_SIZE_MASK 0x0000000000000080
+
+/* SH_MISC_ERR_HDR_UPPER_ECHO */
+#define SH_MISC_ERR_HDR_UPPER_ECHO_SHFT 20
+#define SH_MISC_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000
+
+/* ==================================================================== */
+/* Register "SH_DIR_UC_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+#define SH_DIR_UC_ERR_HDR_LOWER 0x0000000100000098
+#define SH_DIR_UC_ERR_HDR_LOWER_MASK 0x93fffffffffffff8
+#define SH_DIR_UC_ERR_HDR_LOWER_INIT 0x0000000000000000
+
+/* SH_DIR_UC_ERR_HDR_LOWER_ADDR */
+/* Description: upper bits of reference address */
+#define SH_DIR_UC_ERR_HDR_LOWER_ADDR_SHFT 3
+#define SH_DIR_UC_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_DIR_UC_ERR_HDR_LOWER_CMD */
+/* Description: command of reference */
+#define SH_DIR_UC_ERR_HDR_LOWER_CMD_SHFT 36
+#define SH_DIR_UC_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000
+
+/* SH_DIR_UC_ERR_HDR_LOWER_SRC */
+/* Description: source node of reference */
+#define SH_DIR_UC_ERR_HDR_LOWER_SRC_SHFT 44
+#define SH_DIR_UC_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000
+
+/* SH_DIR_UC_ERR_HDR_LOWER_WRITE */
+/* Description: reference is a write */
+#define SH_DIR_UC_ERR_HDR_LOWER_WRITE_SHFT 60
+#define SH_DIR_UC_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000
+
+/* SH_DIR_UC_ERR_HDR_LOWER_VALID */
+/* Description: set when capture occurs */
+#define SH_DIR_UC_ERR_HDR_LOWER_VALID_SHFT 63
+#define SH_DIR_UC_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_DIR_UC_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+#define SH_DIR_UC_ERR_HDR_UPPER 0x00000001000000a0
+#define SH_DIR_UC_ERR_HDR_UPPER_MASK 0x000000001ff00008
+#define SH_DIR_UC_ERR_HDR_UPPER_INIT 0x0000000000000000
+
+/* SH_DIR_UC_ERR_HDR_UPPER_DIR_UC */
+/* Description: indicates uncorrectable directory error captured */
+#define SH_DIR_UC_ERR_HDR_UPPER_DIR_UC_SHFT 3
+#define SH_DIR_UC_ERR_HDR_UPPER_DIR_UC_MASK 0x0000000000000008
+
+/* SH_DIR_UC_ERR_HDR_UPPER_ECHO */
+#define SH_DIR_UC_ERR_HDR_UPPER_ECHO_SHFT 20
+#define SH_DIR_UC_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000
+
+/* ==================================================================== */
+/* Register "SH_DIR_COR_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+#define SH_DIR_COR_ERR_HDR_LOWER 0x00000001000000a8
+#define SH_DIR_COR_ERR_HDR_LOWER_MASK 0x93fffffffffffff8
+#define SH_DIR_COR_ERR_HDR_LOWER_INIT 0x0000000000000000
+
+/* SH_DIR_COR_ERR_HDR_LOWER_ADDR */
+/* Description: upper bits of reference address */
+#define SH_DIR_COR_ERR_HDR_LOWER_ADDR_SHFT 3
+#define SH_DIR_COR_ERR_HDR_LOWER_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_DIR_COR_ERR_HDR_LOWER_CMD */
+/* Description: command of reference */
+#define SH_DIR_COR_ERR_HDR_LOWER_CMD_SHFT 36
+#define SH_DIR_COR_ERR_HDR_LOWER_CMD_MASK 0x00000ff000000000
+
+/* SH_DIR_COR_ERR_HDR_LOWER_SRC */
+/* Description: source node of reference */
+#define SH_DIR_COR_ERR_HDR_LOWER_SRC_SHFT 44
+#define SH_DIR_COR_ERR_HDR_LOWER_SRC_MASK 0x03fff00000000000
+
+/* SH_DIR_COR_ERR_HDR_LOWER_WRITE */
+/* Description: reference is a write */
+#define SH_DIR_COR_ERR_HDR_LOWER_WRITE_SHFT 60
+#define SH_DIR_COR_ERR_HDR_LOWER_WRITE_MASK 0x1000000000000000
+
+/* SH_DIR_COR_ERR_HDR_LOWER_VALID */
+/* Description: set when capture occurs */
+#define SH_DIR_COR_ERR_HDR_LOWER_VALID_SHFT 63
+#define SH_DIR_COR_ERR_HDR_LOWER_VALID_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_DIR_COR_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+#define SH_DIR_COR_ERR_HDR_UPPER 0x00000001000000b0
+#define SH_DIR_COR_ERR_HDR_UPPER_MASK 0x000000001ff00100
+#define SH_DIR_COR_ERR_HDR_UPPER_INIT 0x0000000000000000
+
+/* SH_DIR_COR_ERR_HDR_UPPER_DIR_COR */
+/* Description: indicates correctable directory error captured */
+#define SH_DIR_COR_ERR_HDR_UPPER_DIR_COR_SHFT 8
+#define SH_DIR_COR_ERR_HDR_UPPER_DIR_COR_MASK 0x0000000000000100
+
+/* SH_DIR_COR_ERR_HDR_UPPER_ECHO */
+#define SH_DIR_COR_ERR_HDR_UPPER_ECHO_SHFT 20
+#define SH_DIR_COR_ERR_HDR_UPPER_ECHO_MASK 0x000000001ff00000
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_SUMMARY" */
+/* Memory error flags */
+/* ==================================================================== */
+
+#define SH_MEM_ERROR_SUMMARY 0x00000001000000b8
+#define SH_MEM_ERROR_SUMMARY_MASK 0x00000007f77777ff
+#define SH_MEM_ERROR_SUMMARY_INIT 0x0000000000000000
+
+/* SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD */
+/* Description: illegal command error */
+#define SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD_SHFT 0
+#define SH_MEM_ERROR_SUMMARY_ILLEGAL_CMD_MASK 0x0000000000000001
+
+/* SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR */
+/* Description: non-existent memory error */
+#define SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR_SHFT 1
+#define SH_MEM_ERROR_SUMMARY_NONEXIST_ADDR_MASK 0x0000000000000002
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR */
+/* Description: directory protocol error in dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR_SHFT 2
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_PERR_MASK 0x0000000000000004
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR */
+/* Description: directory protocol error in dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR_SHFT 3
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_PERR_MASK 0x0000000000000008
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC */
+/* Description: uncorrectable directory error in dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC_SHFT 4
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_UC_MASK 0x0000000000000010
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR */
+/* Description: correctable directory error in dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR_SHFT 5
+#define SH_MEM_ERROR_SUMMARY_DQLP_DIR_COR_MASK 0x0000000000000020
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC */
+/* Description: uncorrectable directory error in dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC_SHFT 6
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_UC_MASK 0x0000000000000040
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR */
+/* Description: correctable directory error in dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR_SHFT 7
+#define SH_MEM_ERROR_SUMMARY_DQRP_DIR_COR_MASK 0x0000000000000080
+
+/* SH_MEM_ERROR_SUMMARY_ACX_INT_HW */
+/* Description: hardware interrupt from acx */
+#define SH_MEM_ERROR_SUMMARY_ACX_INT_HW_SHFT 8
+#define SH_MEM_ERROR_SUMMARY_ACX_INT_HW_MASK 0x0000000000000100
+
+/* SH_MEM_ERROR_SUMMARY_ACY_INT_HW */
+/* Description: hardware interrupt from acy */
+#define SH_MEM_ERROR_SUMMARY_ACY_INT_HW_SHFT 9
+#define SH_MEM_ERROR_SUMMARY_ACY_INT_HW_MASK 0x0000000000000200
+
+/* SH_MEM_ERROR_SUMMARY_DIR_ACC */
+/* Description: directory memory access error */
+#define SH_MEM_ERROR_SUMMARY_DIR_ACC_SHFT 10
+#define SH_MEM_ERROR_SUMMARY_DIR_ACC_MASK 0x0000000000000400
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_INT_UC */
+/* Description: uncorrectable interrupt from dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_UC_SHFT 12
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_UC_MASK 0x0000000000001000
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_INT_COR */
+/* Description: correctable interrupt from dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_COR_SHFT 13
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_COR_MASK 0x0000000000002000
+
+/* SH_MEM_ERROR_SUMMARY_DQLP_INT_HW */
+/* Description: hardware interrupt from dqlp */
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_HW_SHFT 14
+#define SH_MEM_ERROR_SUMMARY_DQLP_INT_HW_MASK 0x0000000000004000
+
+/* SH_MEM_ERROR_SUMMARY_DQLS_INT_UC */
+/* Description: uncorrectable interrupt from dqls */
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_UC_SHFT 16
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_UC_MASK 0x0000000000010000
+
+/* SH_MEM_ERROR_SUMMARY_DQLS_INT_COR */
+/* Description: correctable interrupt from dqls */
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_COR_SHFT 17
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_COR_MASK 0x0000000000020000
+
+/* SH_MEM_ERROR_SUMMARY_DQLS_INT_HW */
+/* Description: hardware interrupt from dqls */
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_HW_SHFT 18
+#define SH_MEM_ERROR_SUMMARY_DQLS_INT_HW_MASK 0x0000000000040000
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_INT_UC */
+/* Description: uncorrectable interrupt from dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_UC_SHFT 20
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_UC_MASK 0x0000000000100000
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_INT_COR */
+/* Description: correctable interrupt from dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_COR_SHFT 21
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_COR_MASK 0x0000000000200000
+
+/* SH_MEM_ERROR_SUMMARY_DQRP_INT_HW */
+/* Description: hardware interrupt from dqrp */
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_HW_SHFT 22
+#define SH_MEM_ERROR_SUMMARY_DQRP_INT_HW_MASK 0x0000000000400000
+
+/* SH_MEM_ERROR_SUMMARY_DQRS_INT_UC */
+/* Description: uncorrectable interrupt from dqrs */
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_UC_SHFT 24
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_UC_MASK 0x0000000001000000
+
+/* SH_MEM_ERROR_SUMMARY_DQRS_INT_COR */
+/* Description: correctable interrupt from dqrs */
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_COR_SHFT 25
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_COR_MASK 0x0000000002000000
+
+/* SH_MEM_ERROR_SUMMARY_DQRS_INT_HW */
+/* Description: hardware interrupt from dqrs */
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_HW_SHFT 26
+#define SH_MEM_ERROR_SUMMARY_DQRS_INT_HW_MASK 0x0000000004000000
+
+/* SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW */
+/* Description: too many reply packets came from pi */
+#define SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW_SHFT 28
+#define SH_MEM_ERROR_SUMMARY_PI_REPLY_OVERFLOW_MASK 0x0000000010000000
+
+/* SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW */
+/* Description: too many reply packets came from xn */
+#define SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW_SHFT 29
+#define SH_MEM_ERROR_SUMMARY_XN_REPLY_OVERFLOW_MASK 0x0000000020000000
+
+/* SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW */
+/* Description: too many request packets came from pi */
+#define SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW_SHFT 30
+#define SH_MEM_ERROR_SUMMARY_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000
+
+/* SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW */
+/* Description: too many request packets came from xn */
+#define SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW_SHFT 31
+#define SH_MEM_ERROR_SUMMARY_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000
+
+/* SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT */
+/* Description: red-black scheme did not clean up soon enough */
+#define SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT_SHFT 32
+#define SH_MEM_ERROR_SUMMARY_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000
+
+/* SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE */
+/* Description: received data-bearing packet from pi with wrong size */
+#define SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE_SHFT 33
+#define SH_MEM_ERROR_SUMMARY_PI_PKT_SIZE_MASK 0x0000000200000000
+
+/* SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE */
+/* Description: received data-bearing packet from xn with wrong size */
+#define SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE_SHFT 34
+#define SH_MEM_ERROR_SUMMARY_XN_PKT_SIZE_MASK 0x0000000400000000
+
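+/*
+ * Usage sketch (illustrative only; the MMIO read helper and variable
+ * names below are hypothetical, not part of this header): a field is
+ * extracted from the 64-bit register value by masking with its _MASK
+ * define and shifting right by its _SHFT define, e.g.
+ *
+ *	u64 summary = readq((void *)SH_MEM_ERROR_SUMMARY);
+ *	int dir_acc = (summary & SH_MEM_ERROR_SUMMARY_DIR_ACC_MASK)
+ *		      >> SH_MEM_ERROR_SUMMARY_DIR_ACC_SHFT;
+ */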
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_SUMMARY_ALIAS" */
+/* Memory error flags clear alias */
+/* ==================================================================== */
+
+#define SH_MEM_ERROR_SUMMARY_ALIAS 0x00000001000000c0
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_OVERFLOW" */
+/* Memory error flags */
+/* ==================================================================== */
+
+#define SH_MEM_ERROR_OVERFLOW 0x00000001000000c8
+#define SH_MEM_ERROR_OVERFLOW_MASK 0x00000007f77777ff
+#define SH_MEM_ERROR_OVERFLOW_INIT 0x0000000000000000
+
+/* SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD */
+/* Description: illegal command error */
+#define SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD_SHFT 0
+#define SH_MEM_ERROR_OVERFLOW_ILLEGAL_CMD_MASK 0x0000000000000001
+
+/* SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR */
+/* Description: non-existent memory error */
+#define SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR_SHFT 1
+#define SH_MEM_ERROR_OVERFLOW_NONEXIST_ADDR_MASK 0x0000000000000002
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR */
+/* Description: directory protocol error in dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR_SHFT 2
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_PERR_MASK 0x0000000000000004
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR */
+/* Description: directory protocol error in dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR_SHFT 3
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_PERR_MASK 0x0000000000000008
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC */
+/* Description: uncorrectable directory error in dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC_SHFT 4
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_UC_MASK 0x0000000000000010
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR */
+/* Description: correctable directory error in dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR_SHFT 5
+#define SH_MEM_ERROR_OVERFLOW_DQLP_DIR_COR_MASK 0x0000000000000020
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC */
+/* Description: uncorrectable directory error in dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC_SHFT 6
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_UC_MASK 0x0000000000000040
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR */
+/* Description: correctable directory error in dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR_SHFT 7
+#define SH_MEM_ERROR_OVERFLOW_DQRP_DIR_COR_MASK 0x0000000000000080
+
+/* SH_MEM_ERROR_OVERFLOW_ACX_INT_HW */
+/* Description: hardware interrupt from acx */
+#define SH_MEM_ERROR_OVERFLOW_ACX_INT_HW_SHFT 8
+#define SH_MEM_ERROR_OVERFLOW_ACX_INT_HW_MASK 0x0000000000000100
+
+/* SH_MEM_ERROR_OVERFLOW_ACY_INT_HW */
+/* Description: hardware interrupt from acy */
+#define SH_MEM_ERROR_OVERFLOW_ACY_INT_HW_SHFT 9
+#define SH_MEM_ERROR_OVERFLOW_ACY_INT_HW_MASK 0x0000000000000200
+
+/* SH_MEM_ERROR_OVERFLOW_DIR_ACC */
+/* Description: directory memory access error */
+#define SH_MEM_ERROR_OVERFLOW_DIR_ACC_SHFT 10
+#define SH_MEM_ERROR_OVERFLOW_DIR_ACC_MASK 0x0000000000000400
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC */
+/* Description: uncorrectable interrupt from dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC_SHFT 12
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_UC_MASK 0x0000000000001000
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR */
+/* Description: correctable interrupt from dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR_SHFT 13
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_COR_MASK 0x0000000000002000
+
+/* SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW */
+/* Description: hardware interrupt from dqlp */
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW_SHFT 14
+#define SH_MEM_ERROR_OVERFLOW_DQLP_INT_HW_MASK 0x0000000000004000
+
+/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC */
+/* Description: uncorrectable interrupt from dqls */
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC_SHFT 16
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_UC_MASK 0x0000000000010000
+
+/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR */
+/* Description: correctable interrupt from dqls */
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR_SHFT 17
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_COR_MASK 0x0000000000020000
+
+/* SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW */
+/* Description: hardware interrupt from dqls */
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW_SHFT 18
+#define SH_MEM_ERROR_OVERFLOW_DQLS_INT_HW_MASK 0x0000000000040000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC */
+/* Description: uncorrectable interrupt from dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC_SHFT 20
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_UC_MASK 0x0000000000100000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR */
+/* Description: correctable interrupt from dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR_SHFT 21
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_COR_MASK 0x0000000000200000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW */
+/* Description: hardware interrupt from dqrp */
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW_SHFT 22
+#define SH_MEM_ERROR_OVERFLOW_DQRP_INT_HW_MASK 0x0000000000400000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC */
+/* Description: uncorrectable interrupt from dqrs */
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC_SHFT 24
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_UC_MASK 0x0000000001000000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR */
+/* Description: correctable interrupt from dqrs */
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR_SHFT 25
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_COR_MASK 0x0000000002000000
+
+/* SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW */
+/* Description: hardware interrupt from dqrs */
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW_SHFT 26
+#define SH_MEM_ERROR_OVERFLOW_DQRS_INT_HW_MASK 0x0000000004000000
+
+/* SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW */
+/* Description: too many reply packets came from pi */
+#define SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW_SHFT 28
+#define SH_MEM_ERROR_OVERFLOW_PI_REPLY_OVERFLOW_MASK 0x0000000010000000
+
+/* SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW */
+/* Description: too many reply packets came from xn */
+#define SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW_SHFT 29
+#define SH_MEM_ERROR_OVERFLOW_XN_REPLY_OVERFLOW_MASK 0x0000000020000000
+
+/* SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW */
+/* Description: too many request packets came from pi */
+#define SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW_SHFT 30
+#define SH_MEM_ERROR_OVERFLOW_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000
+
+/* SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW */
+/* Description: too many request packets came from xn */
+#define SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW_SHFT 31
+#define SH_MEM_ERROR_OVERFLOW_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000
+
+/* SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT */
+/* Description: red-black scheme did not clean up soon enough */
+#define SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT_SHFT 32
+#define SH_MEM_ERROR_OVERFLOW_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000
+
+/* SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE */
+/* Description: received data-bearing packet from pi with wrong size */
+#define SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE_SHFT 33
+#define SH_MEM_ERROR_OVERFLOW_PI_PKT_SIZE_MASK 0x0000000200000000
+
+/* SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE */
+/* Description: received data-bearing packet from xn with wrong size */
+#define SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE_SHFT 34
+#define SH_MEM_ERROR_OVERFLOW_XN_PKT_SIZE_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_OVERFLOW_ALIAS" */
+/* Memory error flags clear alias */
+/* ==================================================================== */
+
+#define SH_MEM_ERROR_OVERFLOW_ALIAS 0x00000001000000d0
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_MASK" */
+/* Memory error flags */
+/* ==================================================================== */
+
+#define SH_MEM_ERROR_MASK 0x00000001000000d8
+#define SH_MEM_ERROR_MASK_MASK 0x00000007f77777ff
+#define SH_MEM_ERROR_MASK_INIT 0x00000007f77773ff
+
+/* SH_MEM_ERROR_MASK_ILLEGAL_CMD */
+/* Description: illegal command error */
+#define SH_MEM_ERROR_MASK_ILLEGAL_CMD_SHFT 0
+#define SH_MEM_ERROR_MASK_ILLEGAL_CMD_MASK 0x0000000000000001
+
+/* SH_MEM_ERROR_MASK_NONEXIST_ADDR */
+/* Description: non-existent memory error */
+#define SH_MEM_ERROR_MASK_NONEXIST_ADDR_SHFT 1
+#define SH_MEM_ERROR_MASK_NONEXIST_ADDR_MASK 0x0000000000000002
+
+/* SH_MEM_ERROR_MASK_DQLP_DIR_PERR */
+/* Description: directory protocol error in dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_DIR_PERR_SHFT 2
+#define SH_MEM_ERROR_MASK_DQLP_DIR_PERR_MASK 0x0000000000000004
+
+/* SH_MEM_ERROR_MASK_DQRP_DIR_PERR */
+/* Description: directory protocol error in dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_DIR_PERR_SHFT 3
+#define SH_MEM_ERROR_MASK_DQRP_DIR_PERR_MASK 0x0000000000000008
+
+/* SH_MEM_ERROR_MASK_DQLP_DIR_UC */
+/* Description: uncorrectable directory error in dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_DIR_UC_SHFT 4
+#define SH_MEM_ERROR_MASK_DQLP_DIR_UC_MASK 0x0000000000000010
+
+/* SH_MEM_ERROR_MASK_DQLP_DIR_COR */
+/* Description: correctable directory error in dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_DIR_COR_SHFT 5
+#define SH_MEM_ERROR_MASK_DQLP_DIR_COR_MASK 0x0000000000000020
+
+/* SH_MEM_ERROR_MASK_DQRP_DIR_UC */
+/* Description: uncorrectable directory error in dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_DIR_UC_SHFT 6
+#define SH_MEM_ERROR_MASK_DQRP_DIR_UC_MASK 0x0000000000000040
+
+/* SH_MEM_ERROR_MASK_DQRP_DIR_COR */
+/* Description: correctable directory error in dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_DIR_COR_SHFT 7
+#define SH_MEM_ERROR_MASK_DQRP_DIR_COR_MASK 0x0000000000000080
+
+/* SH_MEM_ERROR_MASK_ACX_INT_HW */
+/* Description: hardware interrupt from acx */
+#define SH_MEM_ERROR_MASK_ACX_INT_HW_SHFT 8
+#define SH_MEM_ERROR_MASK_ACX_INT_HW_MASK 0x0000000000000100
+
+/* SH_MEM_ERROR_MASK_ACY_INT_HW */
+/* Description: hardware interrupt from acy */
+#define SH_MEM_ERROR_MASK_ACY_INT_HW_SHFT 9
+#define SH_MEM_ERROR_MASK_ACY_INT_HW_MASK 0x0000000000000200
+
+/* SH_MEM_ERROR_MASK_DIR_ACC */
+/* Description: directory memory access error */
+#define SH_MEM_ERROR_MASK_DIR_ACC_SHFT 10
+#define SH_MEM_ERROR_MASK_DIR_ACC_MASK 0x0000000000000400
+
+/* SH_MEM_ERROR_MASK_DQLP_INT_UC */
+/* Description: uncorrectable interrupt from dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_INT_UC_SHFT 12
+#define SH_MEM_ERROR_MASK_DQLP_INT_UC_MASK 0x0000000000001000
+
+/* SH_MEM_ERROR_MASK_DQLP_INT_COR */
+/* Description: correctable interrupt from dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_INT_COR_SHFT 13
+#define SH_MEM_ERROR_MASK_DQLP_INT_COR_MASK 0x0000000000002000
+
+/* SH_MEM_ERROR_MASK_DQLP_INT_HW */
+/* Description: hardware interrupt from dqlp */
+#define SH_MEM_ERROR_MASK_DQLP_INT_HW_SHFT 14
+#define SH_MEM_ERROR_MASK_DQLP_INT_HW_MASK 0x0000000000004000
+
+/* SH_MEM_ERROR_MASK_DQLS_INT_UC */
+/* Description: uncorrectable interrupt from dqls */
+#define SH_MEM_ERROR_MASK_DQLS_INT_UC_SHFT 16
+#define SH_MEM_ERROR_MASK_DQLS_INT_UC_MASK 0x0000000000010000
+
+/* SH_MEM_ERROR_MASK_DQLS_INT_COR */
+/* Description: correctable interrupt from dqls */
+#define SH_MEM_ERROR_MASK_DQLS_INT_COR_SHFT 17
+#define SH_MEM_ERROR_MASK_DQLS_INT_COR_MASK 0x0000000000020000
+
+/* SH_MEM_ERROR_MASK_DQLS_INT_HW */
+/* Description: hardware interrupt from dqls */
+#define SH_MEM_ERROR_MASK_DQLS_INT_HW_SHFT 18
+#define SH_MEM_ERROR_MASK_DQLS_INT_HW_MASK 0x0000000000040000
+
+/* SH_MEM_ERROR_MASK_DQRP_INT_UC */
+/* Description: uncorrectable interrupt from dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_INT_UC_SHFT 20
+#define SH_MEM_ERROR_MASK_DQRP_INT_UC_MASK 0x0000000000100000
+
+/* SH_MEM_ERROR_MASK_DQRP_INT_COR */
+/* Description: correctable interrupt from dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_INT_COR_SHFT 21
+#define SH_MEM_ERROR_MASK_DQRP_INT_COR_MASK 0x0000000000200000
+
+/* SH_MEM_ERROR_MASK_DQRP_INT_HW */
+/* Description: hardware interrupt from dqrp */
+#define SH_MEM_ERROR_MASK_DQRP_INT_HW_SHFT 22
+#define SH_MEM_ERROR_MASK_DQRP_INT_HW_MASK 0x0000000000400000
+
+/* SH_MEM_ERROR_MASK_DQRS_INT_UC */
+/* Description: uncorrectable interrupt from dqrs */
+#define SH_MEM_ERROR_MASK_DQRS_INT_UC_SHFT 24
+#define SH_MEM_ERROR_MASK_DQRS_INT_UC_MASK 0x0000000001000000
+
+/* SH_MEM_ERROR_MASK_DQRS_INT_COR */
+/* Description: correctable interrupt from dqrs */
+#define SH_MEM_ERROR_MASK_DQRS_INT_COR_SHFT 25
+#define SH_MEM_ERROR_MASK_DQRS_INT_COR_MASK 0x0000000002000000
+
+/* SH_MEM_ERROR_MASK_DQRS_INT_HW */
+/* Description: hardware interrupt from dqrs */
+#define SH_MEM_ERROR_MASK_DQRS_INT_HW_SHFT 26
+#define SH_MEM_ERROR_MASK_DQRS_INT_HW_MASK 0x0000000004000000
+
+/* SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW */
+/* Description: too many reply packets came from pi */
+#define SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW_SHFT 28
+#define SH_MEM_ERROR_MASK_PI_REPLY_OVERFLOW_MASK 0x0000000010000000
+
+/* SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW */
+/* Description: too many reply packets came from xn */
+#define SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW_SHFT 29
+#define SH_MEM_ERROR_MASK_XN_REPLY_OVERFLOW_MASK 0x0000000020000000
+
+/* SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW */
+/* Description: too many request packets came from pi */
+#define SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW_SHFT 30
+#define SH_MEM_ERROR_MASK_PI_REQUEST_OVERFLOW_MASK 0x0000000040000000
+
+/* SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW */
+/* Description: too many request packets came from xn */
+#define SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW_SHFT 31
+#define SH_MEM_ERROR_MASK_XN_REQUEST_OVERFLOW_MASK 0x0000000080000000
+
+/* SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT */
+/* Description: red-black scheme did not clean up soon enough */
+#define SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT_SHFT 32
+#define SH_MEM_ERROR_MASK_RED_BLACK_ERR_TIMEOUT_MASK 0x0000000100000000
+
+/* SH_MEM_ERROR_MASK_PI_PKT_SIZE */
+/* Description: received data-bearing packet from pi with wrong size */
+#define SH_MEM_ERROR_MASK_PI_PKT_SIZE_SHFT 33
+#define SH_MEM_ERROR_MASK_PI_PKT_SIZE_MASK 0x0000000200000000
+
+/* SH_MEM_ERROR_MASK_XN_PKT_SIZE */
+/* Description: received data-bearing packet from xn with wrong size */
+#define SH_MEM_ERROR_MASK_XN_PKT_SIZE_SHFT 34
+#define SH_MEM_ERROR_MASK_XN_PKT_SIZE_MASK 0x0000000400000000
+
+/* ==================================================================== */
+/* Register "SH_X_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+#define SH_X_DIMM_CFG 0x0000000100010000
+#define SH_X_DIMM_CFG_MASK 0x0000000f7f7f7f7f
+#define SH_X_DIMM_CFG_INIT 0x000000026f4f2f0f
+
+/* SH_X_DIMM_CFG_DIMM0_SIZE */
+/* Description: DIMM 0 DRAM size */
+#define SH_X_DIMM_CFG_DIMM0_SIZE_SHFT 0
+#define SH_X_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007
+
+/* SH_X_DIMM_CFG_DIMM0_2BK */
+/* Description: DIMM 0 has two physical banks */
+#define SH_X_DIMM_CFG_DIMM0_2BK_SHFT 3
+#define SH_X_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008
+
+/* SH_X_DIMM_CFG_DIMM0_REV */
+/* Description: DIMM 0 physical banks reversed */
+#define SH_X_DIMM_CFG_DIMM0_REV_SHFT 4
+#define SH_X_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010
+
+/* SH_X_DIMM_CFG_DIMM0_CS */
+/* Description: DIMM 0 chip select, addr[35:34] match */
+#define SH_X_DIMM_CFG_DIMM0_CS_SHFT 5
+#define SH_X_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060
+
+/* SH_X_DIMM_CFG_DIMM1_SIZE */
+/* Description: DIMM 1 DRAM size */
+#define SH_X_DIMM_CFG_DIMM1_SIZE_SHFT 8
+#define SH_X_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700
+
+/* SH_X_DIMM_CFG_DIMM1_2BK */
+/* Description: DIMM 1 has two physical banks */
+#define SH_X_DIMM_CFG_DIMM1_2BK_SHFT 11
+#define SH_X_DIMM_CFG_DIMM1_2BK_MASK 0x0000000000000800
+
+/* SH_X_DIMM_CFG_DIMM1_REV */
+/* Description: DIMM 1 physical banks reversed */
+#define SH_X_DIMM_CFG_DIMM1_REV_SHFT 12
+#define SH_X_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000
+
+/* SH_X_DIMM_CFG_DIMM1_CS */
+/* Description: DIMM 1 chip select, addr[35:34] match */
+#define SH_X_DIMM_CFG_DIMM1_CS_SHFT 13
+#define SH_X_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000
+
+/* SH_X_DIMM_CFG_DIMM2_SIZE */
+/* Description: DIMM 2 DRAM size */
+#define SH_X_DIMM_CFG_DIMM2_SIZE_SHFT 16
+#define SH_X_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000
+
+/* SH_X_DIMM_CFG_DIMM2_2BK */
+/* Description: DIMM 2 has two physical banks */
+#define SH_X_DIMM_CFG_DIMM2_2BK_SHFT 19
+#define SH_X_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000
+
+/* SH_X_DIMM_CFG_DIMM2_REV */
+/* Description: DIMM 2 physical banks reversed */
+#define SH_X_DIMM_CFG_DIMM2_REV_SHFT 20
+#define SH_X_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000
+
+/* SH_X_DIMM_CFG_DIMM2_CS */
+/* Description: DIMM 2 chip select, addr[35:34] match */
+#define SH_X_DIMM_CFG_DIMM2_CS_SHFT 21
+#define SH_X_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000
+
+/* SH_X_DIMM_CFG_DIMM3_SIZE */
+/* Description: DIMM 3 DRAM size */
+#define SH_X_DIMM_CFG_DIMM3_SIZE_SHFT 24
+#define SH_X_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000
+
+/* SH_X_DIMM_CFG_DIMM3_2BK */
+/* Description: DIMM 3 has two physical banks */
+#define SH_X_DIMM_CFG_DIMM3_2BK_SHFT 27
+#define SH_X_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000
+
+/* SH_X_DIMM_CFG_DIMM3_REV */
+/* Description: DIMM 3 physical banks reversed */
+#define SH_X_DIMM_CFG_DIMM3_REV_SHFT 28
+#define SH_X_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000
+
+/* SH_X_DIMM_CFG_DIMM3_CS */
+/* Description: DIMM 3 chip select, addr[35:34] match */
+#define SH_X_DIMM_CFG_DIMM3_CS_SHFT 29
+#define SH_X_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000
+
+/* SH_X_DIMM_CFG_FREQ */
+/* Description: DIMM frequency select */
+#define SH_X_DIMM_CFG_FREQ_SHFT 32
+#define SH_X_DIMM_CFG_FREQ_MASK 0x0000000f00000000
+
+/* ==================================================================== */
+/* Register "SH_Y_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+#define SH_Y_DIMM_CFG 0x0000000100010008
+#define SH_Y_DIMM_CFG_MASK 0x0000000f7f7f7f7f
+#define SH_Y_DIMM_CFG_INIT 0x000000026f4f2f0f
+
+/* SH_Y_DIMM_CFG_DIMM0_SIZE */
+/* Description: DIMM 0 DRAM size */
+#define SH_Y_DIMM_CFG_DIMM0_SIZE_SHFT 0
+#define SH_Y_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007
+
+/* SH_Y_DIMM_CFG_DIMM0_2BK */
+/* Description: DIMM 0 has two physical banks */
+#define SH_Y_DIMM_CFG_DIMM0_2BK_SHFT 3
+#define SH_Y_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008
+
+/* SH_Y_DIMM_CFG_DIMM0_REV */
+/* Description: DIMM 0 physical banks reversed */
+#define SH_Y_DIMM_CFG_DIMM0_REV_SHFT 4
+#define SH_Y_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010
+
+/* SH_Y_DIMM_CFG_DIMM0_CS */
+/* Description: DIMM 0 chip select, addr[35:34] match */
+#define SH_Y_DIMM_CFG_DIMM0_CS_SHFT 5
+#define SH_Y_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060
+
+/* SH_Y_DIMM_CFG_DIMM1_SIZE */
+/* Description: DIMM 1 DRAM size */
+#define SH_Y_DIMM_CFG_DIMM1_SIZE_SHFT 8
+#define SH_Y_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700
+
+/* SH_Y_DIMM_CFG_DIMM1_2BK */
+/* Description: DIMM 1 has two physical banks */
+#define SH_Y_DIMM_CFG_DIMM1_2BK_SHFT 11
+#define SH_Y_DIMM_CFG_DIMM1_2BK_MASK 0x0000000000000800
+
+/* SH_Y_DIMM_CFG_DIMM1_REV */
+/* Description: DIMM 1 physical banks reversed */
+#define SH_Y_DIMM_CFG_DIMM1_REV_SHFT 12
+#define SH_Y_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000
+
+/* SH_Y_DIMM_CFG_DIMM1_CS */
+/* Description: DIMM 1 chip select, addr[35:34] match */
+#define SH_Y_DIMM_CFG_DIMM1_CS_SHFT 13
+#define SH_Y_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000
+
+/* SH_Y_DIMM_CFG_DIMM2_SIZE */
+/* Description: DIMM 2 DRAM size */
+#define SH_Y_DIMM_CFG_DIMM2_SIZE_SHFT 16
+#define SH_Y_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000
+
+/* SH_Y_DIMM_CFG_DIMM2_2BK */
+/* Description: DIMM 2 has two physical banks */
+#define SH_Y_DIMM_CFG_DIMM2_2BK_SHFT 19
+#define SH_Y_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000
+
+/* SH_Y_DIMM_CFG_DIMM2_REV */
+/* Description: DIMM 2 physical banks reversed */
+#define SH_Y_DIMM_CFG_DIMM2_REV_SHFT 20
+#define SH_Y_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000
+
+/* SH_Y_DIMM_CFG_DIMM2_CS */
+/* Description: DIMM 2 chip select, addr[35:34] match */
+#define SH_Y_DIMM_CFG_DIMM2_CS_SHFT 21
+#define SH_Y_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000
+
+/* SH_Y_DIMM_CFG_DIMM3_SIZE */
+/* Description: DIMM 3 DRAM size */
+#define SH_Y_DIMM_CFG_DIMM3_SIZE_SHFT 24
+#define SH_Y_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000
+
+/* SH_Y_DIMM_CFG_DIMM3_2BK */
+/* Description: DIMM 3 has two physical banks */
+#define SH_Y_DIMM_CFG_DIMM3_2BK_SHFT 27
+#define SH_Y_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000
+
+/* SH_Y_DIMM_CFG_DIMM3_REV */
+/* Description: DIMM 3 physical banks reversed */
+#define SH_Y_DIMM_CFG_DIMM3_REV_SHFT 28
+#define SH_Y_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000
+
+/* SH_Y_DIMM_CFG_DIMM3_CS */
+/* Description: DIMM 3 chip select, addr[35:34] match */
+#define SH_Y_DIMM_CFG_DIMM3_CS_SHFT 29
+#define SH_Y_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000
+
+/* SH_Y_DIMM_CFG_FREQ */
+/* Description: DIMM frequency select */
+#define SH_Y_DIMM_CFG_FREQ_SHFT 32
+#define SH_Y_DIMM_CFG_FREQ_MASK 0x0000000f00000000
+
+/* ==================================================================== */
+/* Register "SH_JNR_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+#define SH_JNR_DIMM_CFG 0x0000000100010010
+#define SH_JNR_DIMM_CFG_MASK 0x0000000f7f7f7f7f
+#define SH_JNR_DIMM_CFG_INIT 0x000000026f4f2f0f
+
+/* SH_JNR_DIMM_CFG_DIMM0_SIZE */
+/* Description: DIMM 0 DRAM size */
+#define SH_JNR_DIMM_CFG_DIMM0_SIZE_SHFT 0
+#define SH_JNR_DIMM_CFG_DIMM0_SIZE_MASK 0x0000000000000007
+
+/* SH_JNR_DIMM_CFG_DIMM0_2BK */
+/* Description: DIMM 0 has two physical banks */
+#define SH_JNR_DIMM_CFG_DIMM0_2BK_SHFT 3
+#define SH_JNR_DIMM_CFG_DIMM0_2BK_MASK 0x0000000000000008
+
+/* SH_JNR_DIMM_CFG_DIMM0_REV */
+/* Description: DIMM 0 physical banks reversed */
+#define SH_JNR_DIMM_CFG_DIMM0_REV_SHFT 4
+#define SH_JNR_DIMM_CFG_DIMM0_REV_MASK 0x0000000000000010
+
+/* SH_JNR_DIMM_CFG_DIMM0_CS */
+/* Description: DIMM 0 chip select, addr[35:34] match */
+#define SH_JNR_DIMM_CFG_DIMM0_CS_SHFT 5
+#define SH_JNR_DIMM_CFG_DIMM0_CS_MASK 0x0000000000000060
+
+/* SH_JNR_DIMM_CFG_DIMM1_SIZE */
+/* Description: DIMM 1 DRAM size */
+#define SH_JNR_DIMM_CFG_DIMM1_SIZE_SHFT 8
+#define SH_JNR_DIMM_CFG_DIMM1_SIZE_MASK 0x0000000000000700
+
+/* SH_JNR_DIMM_CFG_DIMM1_2BK */
+/* Description: DIMM 1 has two physical banks */
+#define SH_JNR_DIMM_CFG_DIMM1_2BK_SHFT 11
+#define SH_JNR_DIMM_CFG_DIMM1_2BK_MASK 0x0000000000000800
+
+/* SH_JNR_DIMM_CFG_DIMM1_REV */
+/* Description: DIMM 1 physical banks reversed */
+#define SH_JNR_DIMM_CFG_DIMM1_REV_SHFT 12
+#define SH_JNR_DIMM_CFG_DIMM1_REV_MASK 0x0000000000001000
+
+/* SH_JNR_DIMM_CFG_DIMM1_CS */
+/* Description: DIMM 1 chip select, addr[35:34] match */
+#define SH_JNR_DIMM_CFG_DIMM1_CS_SHFT 13
+#define SH_JNR_DIMM_CFG_DIMM1_CS_MASK 0x0000000000006000
+
+/* SH_JNR_DIMM_CFG_DIMM2_SIZE */
+/* Description: DIMM 2 DRAM size */
+#define SH_JNR_DIMM_CFG_DIMM2_SIZE_SHFT 16
+#define SH_JNR_DIMM_CFG_DIMM2_SIZE_MASK 0x0000000000070000
+
+/* SH_JNR_DIMM_CFG_DIMM2_2BK */
+/* Description: DIMM 2 has two physical banks */
+#define SH_JNR_DIMM_CFG_DIMM2_2BK_SHFT 19
+#define SH_JNR_DIMM_CFG_DIMM2_2BK_MASK 0x0000000000080000
+
+/* SH_JNR_DIMM_CFG_DIMM2_REV */
+/* Description: DIMM 2 physical banks reversed */
+#define SH_JNR_DIMM_CFG_DIMM2_REV_SHFT 20
+#define SH_JNR_DIMM_CFG_DIMM2_REV_MASK 0x0000000000100000
+
+/* SH_JNR_DIMM_CFG_DIMM2_CS */
+/* Description: DIMM 2 chip select, addr[35:34] match */
+#define SH_JNR_DIMM_CFG_DIMM2_CS_SHFT 21
+#define SH_JNR_DIMM_CFG_DIMM2_CS_MASK 0x0000000000600000
+
+/* SH_JNR_DIMM_CFG_DIMM3_SIZE */
+/* Description: DIMM 3 DRAM size */
+#define SH_JNR_DIMM_CFG_DIMM3_SIZE_SHFT 24
+#define SH_JNR_DIMM_CFG_DIMM3_SIZE_MASK 0x0000000007000000
+
+/* SH_JNR_DIMM_CFG_DIMM3_2BK */
+/* Description: DIMM 3 has two physical banks */
+#define SH_JNR_DIMM_CFG_DIMM3_2BK_SHFT 27
+#define SH_JNR_DIMM_CFG_DIMM3_2BK_MASK 0x0000000008000000
+
+/* SH_JNR_DIMM_CFG_DIMM3_REV */
+/* Description: DIMM 3 physical banks reversed */
+#define SH_JNR_DIMM_CFG_DIMM3_REV_SHFT 28
+#define SH_JNR_DIMM_CFG_DIMM3_REV_MASK 0x0000000010000000
+
+/* SH_JNR_DIMM_CFG_DIMM3_CS */
+/* Description: DIMM 3 chip select, addr[35:34] match */
+#define SH_JNR_DIMM_CFG_DIMM3_CS_SHFT 29
+#define SH_JNR_DIMM_CFG_DIMM3_CS_MASK 0x0000000060000000
+
+/* SH_JNR_DIMM_CFG_FREQ */
+/* Description: DIMM frequency select */
+#define SH_JNR_DIMM_CFG_FREQ_SHFT 32
+#define SH_JNR_DIMM_CFG_FREQ_MASK 0x0000000f00000000
+
+/* ==================================================================== */
+/* Register "SH_X_PHASE_CFG" */
+/* AC Phase Config Registers */
+/* ==================================================================== */
+
+#define SH_X_PHASE_CFG 0x0000000100010018
+#define SH_X_PHASE_CFG_MASK 0x7fffffffffffffff
+#define SH_X_PHASE_CFG_INIT 0x0000000000000000
+
+/* SH_X_PHASE_CFG_LD_A */
+/* Description: Address, control load core clock A latch */
+#define SH_X_PHASE_CFG_LD_A_SHFT 0
+#define SH_X_PHASE_CFG_LD_A_MASK 0x000000000000001f
+
+/* SH_X_PHASE_CFG_LD_B */
+/* Description: Address, control load core clock B latch */
+#define SH_X_PHASE_CFG_LD_B_SHFT 5
+#define SH_X_PHASE_CFG_LD_B_MASK 0x00000000000003e0
+
+/* SH_X_PHASE_CFG_DQ_LD_A */
+/* Description: DATA MCI load core clock A latch */
+#define SH_X_PHASE_CFG_DQ_LD_A_SHFT 10
+#define SH_X_PHASE_CFG_DQ_LD_A_MASK 0x0000000000007c00
+
+/* SH_X_PHASE_CFG_DQ_LD_B */
+/* Description: DATA MCI load core clock B latch */
+#define SH_X_PHASE_CFG_DQ_LD_B_SHFT 15
+#define SH_X_PHASE_CFG_DQ_LD_B_MASK 0x00000000000f8000
+
+/* SH_X_PHASE_CFG_HOLD */
+/* Description: Hold request on core clock phase */
+#define SH_X_PHASE_CFG_HOLD_SHFT 20
+#define SH_X_PHASE_CFG_HOLD_MASK 0x0000000001f00000
+
+/* SH_X_PHASE_CFG_HOLD_REQ */
+/* Description: Hold next request on core clock phase */
+#define SH_X_PHASE_CFG_HOLD_REQ_SHFT 25
+#define SH_X_PHASE_CFG_HOLD_REQ_MASK 0x000000003e000000
+
+/* SH_X_PHASE_CFG_ADD_CP */
+/* Description: add delay clock period to dqct delay chain on phase */
+#define SH_X_PHASE_CFG_ADD_CP_SHFT 30
+#define SH_X_PHASE_CFG_ADD_CP_MASK 0x00000007c0000000
+
+/* SH_X_PHASE_CFG_BUBBLE_EN */
+/* Description: bubble, idle core clock to wait for memory clock */
+#define SH_X_PHASE_CFG_BUBBLE_EN_SHFT 35
+#define SH_X_PHASE_CFG_BUBBLE_EN_MASK 0x000000f800000000
+
+/* SH_X_PHASE_CFG_PHA_BUBBLE */
+/* Description: MMR phaseA bubble value */
+#define SH_X_PHASE_CFG_PHA_BUBBLE_SHFT 40
+#define SH_X_PHASE_CFG_PHA_BUBBLE_MASK 0x0000070000000000
+
+/* SH_X_PHASE_CFG_PHB_BUBBLE */
+/* Description: MMR phaseB bubble value */
+#define SH_X_PHASE_CFG_PHB_BUBBLE_SHFT 43
+#define SH_X_PHASE_CFG_PHB_BUBBLE_MASK 0x0000380000000000
+
+/* SH_X_PHASE_CFG_PHC_BUBBLE */
+/* Description: MMR phaseC bubble value */
+#define SH_X_PHASE_CFG_PHC_BUBBLE_SHFT 46
+#define SH_X_PHASE_CFG_PHC_BUBBLE_MASK 0x0001c00000000000
+
+/* SH_X_PHASE_CFG_PHD_BUBBLE */
+/* Description: MMR phaseD bubble value */
+#define SH_X_PHASE_CFG_PHD_BUBBLE_SHFT 49
+#define SH_X_PHASE_CFG_PHD_BUBBLE_MASK 0x000e000000000000
+
+/* SH_X_PHASE_CFG_PHE_BUBBLE */
+/* Description: MMR phaseE bubble value */
+#define SH_X_PHASE_CFG_PHE_BUBBLE_SHFT 52
+#define SH_X_PHASE_CFG_PHE_BUBBLE_MASK 0x0070000000000000
+
+/* SH_X_PHASE_CFG_SEL_A */
+/* Description: address,control select A memory clock latch */
+#define SH_X_PHASE_CFG_SEL_A_SHFT 55
+#define SH_X_PHASE_CFG_SEL_A_MASK 0x0780000000000000
+
+/* SH_X_PHASE_CFG_DQ_SEL_A */
+/* Description: DATA MCI select A memory clock latch */
+#define SH_X_PHASE_CFG_DQ_SEL_A_SHFT 59
+#define SH_X_PHASE_CFG_DQ_SEL_A_MASK 0x7800000000000000
+
+/* ==================================================================== */
+/* Register "SH_X_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+#define SH_X_CFG 0x0000000100010020
+#define SH_X_CFG_MASK 0xffffffffffffffff
+#define SH_X_CFG_INIT 0x108443103322100c
+
+/* SH_X_CFG_MODE_SERIAL */
+/* Description: Arbque arbitration in serial mode */
+#define SH_X_CFG_MODE_SERIAL_SHFT 0
+#define SH_X_CFG_MODE_SERIAL_MASK 0x0000000000000001
+
+/* SH_X_CFG_DIRC_RANDOM_REPLACEMENT */
+/* Description: Directory cache random replacement */
+#define SH_X_CFG_DIRC_RANDOM_REPLACEMENT_SHFT 1
+#define SH_X_CFG_DIRC_RANDOM_REPLACEMENT_MASK 0x0000000000000002
+
+/* SH_X_CFG_DIR_COUNTER_INIT */
+/* Description: Dir counter initial value */
+#define SH_X_CFG_DIR_COUNTER_INIT_SHFT 2
+#define SH_X_CFG_DIR_COUNTER_INIT_MASK 0x00000000000000fc
+
+/* SH_X_CFG_TA_DLYS */
+/* Description: Turn around delays */
+#define SH_X_CFG_TA_DLYS_SHFT 8
+#define SH_X_CFG_TA_DLYS_MASK 0x000000ffffffff00
+
+/* SH_X_CFG_DA_BB_CLR */
+/* Description: Bank busy CPs for a data read request */
+#define SH_X_CFG_DA_BB_CLR_SHFT 40
+#define SH_X_CFG_DA_BB_CLR_MASK 0x00000f0000000000
+
+/* SH_X_CFG_DC_BB_CLR */
+/* Description: Bank busy CPs for a directory cache read request */
+#define SH_X_CFG_DC_BB_CLR_SHFT 44
+#define SH_X_CFG_DC_BB_CLR_MASK 0x0000f00000000000
+
+/* SH_X_CFG_WT_BB_CLR */
+/* Description: Bank busy CPs for all write request */
+#define SH_X_CFG_WT_BB_CLR_SHFT 48
+#define SH_X_CFG_WT_BB_CLR_MASK 0x000f000000000000
+
+/* SH_X_CFG_SSO_WT_EN */
+/* Description: Simultaneous switching enabled on output data pins */
+#define SH_X_CFG_SSO_WT_EN_SHFT 52
+#define SH_X_CFG_SSO_WT_EN_MASK 0x0010000000000000
+
+/* SH_X_CFG_TRCD2_EN */
+/* Description: Trcd, ras to cas delay of 2 CPs enabled */
+#define SH_X_CFG_TRCD2_EN_SHFT 53
+#define SH_X_CFG_TRCD2_EN_MASK 0x0020000000000000
+
+/* SH_X_CFG_TRCD4_EN */
+/* Description: Trcd, ras to cas delay of 4 CPs enabled */
+#define SH_X_CFG_TRCD4_EN_SHFT 54
+#define SH_X_CFG_TRCD4_EN_MASK 0x0040000000000000
+
+/* SH_X_CFG_REQ_CNTR_DIS */
+/* Description: Request delay counter disabled */
+#define SH_X_CFG_REQ_CNTR_DIS_SHFT 55
+#define SH_X_CFG_REQ_CNTR_DIS_MASK 0x0080000000000000
+
+/* SH_X_CFG_REQ_CNTR_VAL */
+/* Description: Request counter delay value in CPs */
+#define SH_X_CFG_REQ_CNTR_VAL_SHFT 56
+#define SH_X_CFG_REQ_CNTR_VAL_MASK 0x3f00000000000000
+
+/* SH_X_CFG_INV_CAS_ADDR */
+/* Description: Invert cas address bits 3 to 7 */
+#define SH_X_CFG_INV_CAS_ADDR_SHFT 62
+#define SH_X_CFG_INV_CAS_ADDR_MASK 0x4000000000000000
+
+/* SH_X_CFG_CLR_DIR_CACHE */
+/* Description: Clear directory cache tags */
+#define SH_X_CFG_CLR_DIR_CACHE_SHFT 63
+#define SH_X_CFG_CLR_DIR_CACHE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_X_DQCT_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+#define SH_X_DQCT_CFG 0x0000000100010028
+#define SH_X_DQCT_CFG_MASK 0x0000000000ffffff
+#define SH_X_DQCT_CFG_INIT 0x0000000000585418
+
+/* SH_X_DQCT_CFG_RD_SEL */
+/* Description: Read data select */
+#define SH_X_DQCT_CFG_RD_SEL_SHFT 0
+#define SH_X_DQCT_CFG_RD_SEL_MASK 0x000000000000000f
+
+/* SH_X_DQCT_CFG_WT_SEL */
+/* Description: Write data select */
+#define SH_X_DQCT_CFG_WT_SEL_SHFT 4
+#define SH_X_DQCT_CFG_WT_SEL_MASK 0x00000000000000f0
+
+/* SH_X_DQCT_CFG_DTA_RD_SEL */
+/* Description: Data ready read select */
+#define SH_X_DQCT_CFG_DTA_RD_SEL_SHFT 8
+#define SH_X_DQCT_CFG_DTA_RD_SEL_MASK 0x0000000000000f00
+
+/* SH_X_DQCT_CFG_DTA_WT_SEL */
+/* Description: Data ready write select */
+#define SH_X_DQCT_CFG_DTA_WT_SEL_SHFT 12
+#define SH_X_DQCT_CFG_DTA_WT_SEL_MASK 0x000000000000f000
+
+/* SH_X_DQCT_CFG_DIR_RD_SEL */
+/* Description: Dir ready read select */
+#define SH_X_DQCT_CFG_DIR_RD_SEL_SHFT 16
+#define SH_X_DQCT_CFG_DIR_RD_SEL_MASK 0x00000000000f0000
+
+/* SH_X_DQCT_CFG_MDIR_RD_SEL */
+/* Description: Mdir ready read select */
+#define SH_X_DQCT_CFG_MDIR_RD_SEL_SHFT 20
+#define SH_X_DQCT_CFG_MDIR_RD_SEL_MASK 0x0000000000f00000
+
+/* ==================================================================== */
+/* Register "SH_X_REFRESH_CONTROL" */
+/* Refresh Control Register */
+/* ==================================================================== */
+
+#define SH_X_REFRESH_CONTROL 0x0000000100010030
+#define SH_X_REFRESH_CONTROL_MASK 0x000000000fffffff
+#define SH_X_REFRESH_CONTROL_INIT 0x00000000009cc300
+
+/* SH_X_REFRESH_CONTROL_ENABLE */
+/* Description: Refresh enable */
+#define SH_X_REFRESH_CONTROL_ENABLE_SHFT 0
+#define SH_X_REFRESH_CONTROL_ENABLE_MASK 0x00000000000000ff
+
+/* SH_X_REFRESH_CONTROL_INTERVAL */
+/* Description: Refresh interval in core CPs */
+#define SH_X_REFRESH_CONTROL_INTERVAL_SHFT 8
+#define SH_X_REFRESH_CONTROL_INTERVAL_MASK 0x000000000001ff00
+
+/* SH_X_REFRESH_CONTROL_HOLD */
+/* Description: Refresh hold */
+#define SH_X_REFRESH_CONTROL_HOLD_SHFT 17
+#define SH_X_REFRESH_CONTROL_HOLD_MASK 0x00000000007e0000
+
+/* SH_X_REFRESH_CONTROL_INTERLEAVE */
+/* Description: Refresh interleave */
+#define SH_X_REFRESH_CONTROL_INTERLEAVE_SHFT 23
+#define SH_X_REFRESH_CONTROL_INTERLEAVE_MASK 0x0000000000800000
+
+/* SH_X_REFRESH_CONTROL_HALF_RATE */
+/* Description: Refresh half rate */
+#define SH_X_REFRESH_CONTROL_HALF_RATE_SHFT 24
+#define SH_X_REFRESH_CONTROL_HALF_RATE_MASK 0x000000000f000000
+
+/* ==================================================================== */
+/* Register "SH_Y_PHASE_CFG" */
+/* AC Phase Config Registers */
+/* ==================================================================== */
+
+#define SH_Y_PHASE_CFG 0x0000000100010038
+#define SH_Y_PHASE_CFG_MASK 0x7fffffffffffffff
+#define SH_Y_PHASE_CFG_INIT 0x0000000000000000
+
+/* SH_Y_PHASE_CFG_LD_A */
+/* Description: Address, control load core clock A latch */
+#define SH_Y_PHASE_CFG_LD_A_SHFT 0
+#define SH_Y_PHASE_CFG_LD_A_MASK 0x000000000000001f
+
+/* SH_Y_PHASE_CFG_LD_B */
+/* Description: Address, control load core clock B latch */
+#define SH_Y_PHASE_CFG_LD_B_SHFT 5
+#define SH_Y_PHASE_CFG_LD_B_MASK 0x00000000000003e0
+
+/* SH_Y_PHASE_CFG_DQ_LD_A */
+/* Description: DATA MCI load core clock A latch */
+#define SH_Y_PHASE_CFG_DQ_LD_A_SHFT 10
+#define SH_Y_PHASE_CFG_DQ_LD_A_MASK 0x0000000000007c00
+
+/* SH_Y_PHASE_CFG_DQ_LD_B */
+/* Description: DATA MCI load core clock B latch */
+#define SH_Y_PHASE_CFG_DQ_LD_B_SHFT 15
+#define SH_Y_PHASE_CFG_DQ_LD_B_MASK 0x00000000000f8000
+
+/* SH_Y_PHASE_CFG_HOLD */
+/* Description: Hold request on core clock phase */
+#define SH_Y_PHASE_CFG_HOLD_SHFT 20
+#define SH_Y_PHASE_CFG_HOLD_MASK 0x0000000001f00000
+
+/* SH_Y_PHASE_CFG_HOLD_REQ */
+/* Description: Hold next request on core clock phase */
+#define SH_Y_PHASE_CFG_HOLD_REQ_SHFT 25
+#define SH_Y_PHASE_CFG_HOLD_REQ_MASK 0x000000003e000000
+
+/* SH_Y_PHASE_CFG_ADD_CP */
+/* Description: add delay clock period to dqct delay chain on phase */
+#define SH_Y_PHASE_CFG_ADD_CP_SHFT 30
+#define SH_Y_PHASE_CFG_ADD_CP_MASK 0x00000007c0000000
+
+/* SH_Y_PHASE_CFG_BUBBLE_EN */
+/* Description: bubble, idle core clock to wait for memory clock */
+#define SH_Y_PHASE_CFG_BUBBLE_EN_SHFT 35
+#define SH_Y_PHASE_CFG_BUBBLE_EN_MASK 0x000000f800000000
+
+/* SH_Y_PHASE_CFG_PHA_BUBBLE */
+/* Description: MMR phaseA bubble value */
+#define SH_Y_PHASE_CFG_PHA_BUBBLE_SHFT 40
+#define SH_Y_PHASE_CFG_PHA_BUBBLE_MASK 0x0000070000000000
+
+/* SH_Y_PHASE_CFG_PHB_BUBBLE */
+/* Description: MMR phaseB bubble value */
+#define SH_Y_PHASE_CFG_PHB_BUBBLE_SHFT 43
+#define SH_Y_PHASE_CFG_PHB_BUBBLE_MASK 0x0000380000000000
+
+/* SH_Y_PHASE_CFG_PHC_BUBBLE */
+/* Description: MMR phaseC bubble value */
+#define SH_Y_PHASE_CFG_PHC_BUBBLE_SHFT 46
+#define SH_Y_PHASE_CFG_PHC_BUBBLE_MASK 0x0001c00000000000
+
+/* SH_Y_PHASE_CFG_PHD_BUBBLE */
+/* Description: MMR phaseD bubble value */
+#define SH_Y_PHASE_CFG_PHD_BUBBLE_SHFT 49
+#define SH_Y_PHASE_CFG_PHD_BUBBLE_MASK 0x000e000000000000
+
+/* SH_Y_PHASE_CFG_PHE_BUBBLE */
+/* Description: MMR phaseE bubble value */
+#define SH_Y_PHASE_CFG_PHE_BUBBLE_SHFT 52
+#define SH_Y_PHASE_CFG_PHE_BUBBLE_MASK 0x0070000000000000
+
+/* SH_Y_PHASE_CFG_SEL_A */
+/* Description: address,control select A memory clock latch */
+#define SH_Y_PHASE_CFG_SEL_A_SHFT 55
+#define SH_Y_PHASE_CFG_SEL_A_MASK 0x0780000000000000
+
+/* SH_Y_PHASE_CFG_DQ_SEL_A */
+/* Description: DATA MCI select A memory clock latch */
+#define SH_Y_PHASE_CFG_DQ_SEL_A_SHFT 59
+#define SH_Y_PHASE_CFG_DQ_SEL_A_MASK 0x7800000000000000
+
+/* ==================================================================== */
+/* Register "SH_Y_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+#define SH_Y_CFG 0x0000000100010040
+#define SH_Y_CFG_MASK 0xffffffffffffffff
+#define SH_Y_CFG_INIT 0x108443103322100c
+
+/* SH_Y_CFG_MODE_SERIAL */
+/* Description: Arbque arbitration in serial mode */
+#define SH_Y_CFG_MODE_SERIAL_SHFT 0
+#define SH_Y_CFG_MODE_SERIAL_MASK 0x0000000000000001
+
+/* SH_Y_CFG_DIRC_RANDOM_REPLACEMENT */
+/* Description: Directory cache random replacement */
+#define SH_Y_CFG_DIRC_RANDOM_REPLACEMENT_SHFT 1
+#define SH_Y_CFG_DIRC_RANDOM_REPLACEMENT_MASK 0x0000000000000002
+
+/* SH_Y_CFG_DIR_COUNTER_INIT */
+/* Description: Dir counter initial value */
+#define SH_Y_CFG_DIR_COUNTER_INIT_SHFT 2
+#define SH_Y_CFG_DIR_COUNTER_INIT_MASK 0x00000000000000fc
+
+/* SH_Y_CFG_TA_DLYS */
+/* Description: Turn around delays */
+#define SH_Y_CFG_TA_DLYS_SHFT 8
+#define SH_Y_CFG_TA_DLYS_MASK 0x000000ffffffff00
+
+/* SH_Y_CFG_DA_BB_CLR */
+/* Description: Bank busy CPs for a data read request */
+#define SH_Y_CFG_DA_BB_CLR_SHFT 40
+#define SH_Y_CFG_DA_BB_CLR_MASK 0x00000f0000000000
+
+/* SH_Y_CFG_DC_BB_CLR */
+/* Description: Bank busy CPs for a directory cache read request */
+#define SH_Y_CFG_DC_BB_CLR_SHFT 44
+#define SH_Y_CFG_DC_BB_CLR_MASK 0x0000f00000000000
+
+/* SH_Y_CFG_WT_BB_CLR */
+/* Description: Bank busy CPs for all write request */
+#define SH_Y_CFG_WT_BB_CLR_SHFT 48
+#define SH_Y_CFG_WT_BB_CLR_MASK 0x000f000000000000
+
+/* SH_Y_CFG_SSO_WT_EN */
+/* Description: Simultaneous switching enabled on output data pins */
+#define SH_Y_CFG_SSO_WT_EN_SHFT 52
+#define SH_Y_CFG_SSO_WT_EN_MASK 0x0010000000000000
+
+/* SH_Y_CFG_TRCD2_EN */
+/* Description: Trcd, ras to cas delay of 2 CPs enabled */
+#define SH_Y_CFG_TRCD2_EN_SHFT 53
+#define SH_Y_CFG_TRCD2_EN_MASK 0x0020000000000000
+
+/* SH_Y_CFG_TRCD4_EN */
+/* Description: Trcd, ras to cas delay of 4 CPs enabled */
+#define SH_Y_CFG_TRCD4_EN_SHFT 54
+#define SH_Y_CFG_TRCD4_EN_MASK 0x0040000000000000
+
+/* SH_Y_CFG_REQ_CNTR_DIS */
+/* Description: Request delay counter disabled */
+#define SH_Y_CFG_REQ_CNTR_DIS_SHFT 55
+#define SH_Y_CFG_REQ_CNTR_DIS_MASK 0x0080000000000000
+
+/* SH_Y_CFG_REQ_CNTR_VAL */
+/* Description: Request counter delay value in CPs */
+#define SH_Y_CFG_REQ_CNTR_VAL_SHFT 56
+#define SH_Y_CFG_REQ_CNTR_VAL_MASK 0x3f00000000000000
+
+/* SH_Y_CFG_INV_CAS_ADDR */
+/* Description: Invert cas address bits 3 to 7 */
+#define SH_Y_CFG_INV_CAS_ADDR_SHFT 62
+#define SH_Y_CFG_INV_CAS_ADDR_MASK 0x4000000000000000
+
+/* SH_Y_CFG_CLR_DIR_CACHE */
+/* Description: Clear directory cache tags */
+#define SH_Y_CFG_CLR_DIR_CACHE_SHFT 63
+#define SH_Y_CFG_CLR_DIR_CACHE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_Y_DQCT_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+#define SH_Y_DQCT_CFG 0x0000000100010048
+#define SH_Y_DQCT_CFG_MASK 0x0000000000ffffff
+#define SH_Y_DQCT_CFG_INIT 0x0000000000585418
+
+/* SH_Y_DQCT_CFG_RD_SEL */
+/* Description: Read data select */
+#define SH_Y_DQCT_CFG_RD_SEL_SHFT 0
+#define SH_Y_DQCT_CFG_RD_SEL_MASK 0x000000000000000f
+
+/* SH_Y_DQCT_CFG_WT_SEL */
+/* Description: Write data select */
+#define SH_Y_DQCT_CFG_WT_SEL_SHFT 4
+#define SH_Y_DQCT_CFG_WT_SEL_MASK 0x00000000000000f0
+
+/* SH_Y_DQCT_CFG_DTA_RD_SEL */
+/* Description: Data ready read select */
+#define SH_Y_DQCT_CFG_DTA_RD_SEL_SHFT 8
+#define SH_Y_DQCT_CFG_DTA_RD_SEL_MASK 0x0000000000000f00
+
+/* SH_Y_DQCT_CFG_DTA_WT_SEL */
+/* Description: Data ready write select */
+#define SH_Y_DQCT_CFG_DTA_WT_SEL_SHFT 12
+#define SH_Y_DQCT_CFG_DTA_WT_SEL_MASK 0x000000000000f000
+
+/* SH_Y_DQCT_CFG_DIR_RD_SEL */
+/* Description: Dir ready read select */
+#define SH_Y_DQCT_CFG_DIR_RD_SEL_SHFT 16
+#define SH_Y_DQCT_CFG_DIR_RD_SEL_MASK 0x00000000000f0000
+
+/* SH_Y_DQCT_CFG_MDIR_RD_SEL */
+/* Description: Mdir ready read select */
+#define SH_Y_DQCT_CFG_MDIR_RD_SEL_SHFT 20
+#define SH_Y_DQCT_CFG_MDIR_RD_SEL_MASK 0x0000000000f00000
+
+/* ==================================================================== */
+/* Register "SH_Y_REFRESH_CONTROL" */
+/* Refresh Control Register */
+/* ==================================================================== */
+
+#define SH_Y_REFRESH_CONTROL 0x0000000100010050
+#define SH_Y_REFRESH_CONTROL_MASK 0x000000000fffffff
+#define SH_Y_REFRESH_CONTROL_INIT 0x00000000009cc300
+
+/* SH_Y_REFRESH_CONTROL_ENABLE */
+/* Description: Refresh enable */
+#define SH_Y_REFRESH_CONTROL_ENABLE_SHFT 0
+#define SH_Y_REFRESH_CONTROL_ENABLE_MASK 0x00000000000000ff
+
+/* SH_Y_REFRESH_CONTROL_INTERVAL */
+/* Description: Refresh interval in core CPs */
+#define SH_Y_REFRESH_CONTROL_INTERVAL_SHFT 8
+#define SH_Y_REFRESH_CONTROL_INTERVAL_MASK 0x000000000001ff00
+
+/* SH_Y_REFRESH_CONTROL_HOLD */
+/* Description: Refresh hold */
+#define SH_Y_REFRESH_CONTROL_HOLD_SHFT 17
+#define SH_Y_REFRESH_CONTROL_HOLD_MASK 0x00000000007e0000
+
+/* SH_Y_REFRESH_CONTROL_INTERLEAVE */
+/* Description: Refresh interleave */
+#define SH_Y_REFRESH_CONTROL_INTERLEAVE_SHFT 23
+#define SH_Y_REFRESH_CONTROL_INTERLEAVE_MASK 0x0000000000800000
+
+/* SH_Y_REFRESH_CONTROL_HALF_RATE */
+/* Description: Refresh half rate */
+#define SH_Y_REFRESH_CONTROL_HALF_RATE_SHFT 24
+#define SH_Y_REFRESH_CONTROL_HALF_RATE_MASK 0x000000000f000000
+
+/* ==================================================================== */
+/* Register "SH_MEM_RED_BLACK" */
+/* MD fairness watchdog timers */
+/* ==================================================================== */
+
+#define SH_MEM_RED_BLACK 0x0000000100010058
+#define SH_MEM_RED_BLACK_MASK 0x000fffffffffffff
+#define SH_MEM_RED_BLACK_INIT 0x0000000040000400
+
+/* SH_MEM_RED_BLACK_TIME */
+/* Description: Clocks to tag references with a given color */
+#define SH_MEM_RED_BLACK_TIME_SHFT 0
+#define SH_MEM_RED_BLACK_TIME_MASK 0x000000000000ffff
+
+/* SH_MEM_RED_BLACK_ERR_TIME */
+/* Description: Max clocks to wait after red/black change for old */
+/* color to clear. */
+#define SH_MEM_RED_BLACK_ERR_TIME_SHFT 16
+#define SH_MEM_RED_BLACK_ERR_TIME_MASK 0x000fffffffff0000
+
+/* ==================================================================== */
+/* Register "SH_MISC_MEM_CFG" */
+/* ==================================================================== */
+
+#define SH_MISC_MEM_CFG 0x0000000100010060
+#define SH_MISC_MEM_CFG_MASK 0x0013f1f1fff3f3ff
+#define SH_MISC_MEM_CFG_INIT 0x0000000000010107
+
+/* SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE */
+/* Description: enables the use of express headers from md to pi */
+#define SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE_SHFT 0
+#define SH_MISC_MEM_CFG_EXPRESS_HEADER_ENABLE_MASK 0x0000000000000001
+
+/* SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE */
+/* Description: enables the use of speculative headers from md to pi */
+#define SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE_SHFT 1
+#define SH_MISC_MEM_CFG_SPEC_HEADER_ENABLE_MASK 0x0000000000000002
+
+/* SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE */
+/* Description: enables bypass path for requests going through ac */
+#define SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE_SHFT 2
+#define SH_MISC_MEM_CFG_JNR_BYPASS_ENABLE_MASK 0x0000000000000004
+
+/* SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI */
+/* Description: disables a one clock delay of XN read data */
+#define SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI_SHFT 3
+#define SH_MISC_MEM_CFG_XN_RD_SAME_AS_PI_MASK 0x0000000000000008
+
+/* SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD */
+/* Description: point at which data writes get higher priority */
+#define SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD_SHFT 4
+#define SH_MISC_MEM_CFG_LOW_WRITE_BUFFER_THRESHOLD_MASK 0x00000000000003f0
+
+/* SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD */
+/* Description: point at which dir cache writes get higher priority */
+#define SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD_SHFT 12
+#define SH_MISC_MEM_CFG_LOW_VICTIM_BUFFER_THRESHOLD_MASK 0x000000000003f000
+
+/* SH_MISC_MEM_CFG_THROTTLE_CNT */
+/* Description: number of clocks between accepting references */
+#define SH_MISC_MEM_CFG_THROTTLE_CNT_SHFT 20
+#define SH_MISC_MEM_CFG_THROTTLE_CNT_MASK 0x000000000ff00000
+
+/* SH_MISC_MEM_CFG_DISABLED_READ_TNUMS */
+/* Description: number of read tnums to take out of circulation */
+#define SH_MISC_MEM_CFG_DISABLED_READ_TNUMS_SHFT 28
+#define SH_MISC_MEM_CFG_DISABLED_READ_TNUMS_MASK 0x00000001f0000000
+
+/* SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS */
+/* Description: number of write tnums to take out of circulation */
+#define SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS_SHFT 36
+#define SH_MISC_MEM_CFG_DISABLED_WRITE_TNUMS_MASK 0x000001f000000000
+
+/* SH_MISC_MEM_CFG_DISABLED_VICTIMS */
+/* Description: number of dir cache victim buffers to take out of */
+/* circulation in each quadrant of the MD */
+#define SH_MISC_MEM_CFG_DISABLED_VICTIMS_SHFT 44
+#define SH_MISC_MEM_CFG_DISABLED_VICTIMS_MASK 0x0003f00000000000
+
+/* SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE */
+/* Description: enables plane alternating for replies to XN */
+#define SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE_SHFT 52
+#define SH_MISC_MEM_CFG_ALTERNATE_XN_RP_PLANE_MASK 0x0010000000000000
+
+/* ==================================================================== */
+/* Register "SH_PIO_RQ_CRD_CTL" */
+/* pio_rq Credit Circulation Control */
+/* ==================================================================== */
+
+#define SH_PIO_RQ_CRD_CTL 0x0000000100010068
+#define SH_PIO_RQ_CRD_CTL_MASK 0x000000000000003f
+#define SH_PIO_RQ_CRD_CTL_INIT 0x0000000000000002
+
+/* SH_PIO_RQ_CRD_CTL_DEPTH */
+/* Description: Total depth of buffering (in sic packets) */
+#define SH_PIO_RQ_CRD_CTL_DEPTH_SHFT 0
+#define SH_PIO_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_MD_RQ_CRD_CTL" */
+/* pi_md_rq Credit Circulation Control */
+/* ==================================================================== */
+
+#define SH_PI_MD_RQ_CRD_CTL 0x0000000100010070
+#define SH_PI_MD_RQ_CRD_CTL_MASK 0x000000000000003f
+#define SH_PI_MD_RQ_CRD_CTL_INIT 0x0000000000000008
+
+/* SH_PI_MD_RQ_CRD_CTL_DEPTH */
+/* Description: Total depth of buffering (in sic packets) */
+#define SH_PI_MD_RQ_CRD_CTL_DEPTH_SHFT 0
+#define SH_PI_MD_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_PI_MD_RP_CRD_CTL" */
+/* pi_md_rp Credit Circulation Control */
+/* ==================================================================== */
+
+#define SH_PI_MD_RP_CRD_CTL 0x0000000100010078
+#define SH_PI_MD_RP_CRD_CTL_MASK 0x000000000000003f
+#define SH_PI_MD_RP_CRD_CTL_INIT 0x0000000000000004
+
+/* SH_PI_MD_RP_CRD_CTL_DEPTH */
+/* Description: Total depth of buffering (in sic packets) */
+#define SH_PI_MD_RP_CRD_CTL_DEPTH_SHFT 0
+#define SH_PI_MD_RP_CRD_CTL_DEPTH_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_RQ_CRD_CTL" */
+/* xn_md_rq Credit Circulation Control */
+/* ==================================================================== */
+
+#define SH_XN_MD_RQ_CRD_CTL 0x0000000100010080
+#define SH_XN_MD_RQ_CRD_CTL_MASK 0x000000000000003f
+#define SH_XN_MD_RQ_CRD_CTL_INIT 0x0000000000000008
+
+/* SH_XN_MD_RQ_CRD_CTL_DEPTH */
+/* Description: Total depth of buffering (in sic packets) */
+#define SH_XN_MD_RQ_CRD_CTL_DEPTH_SHFT 0
+#define SH_XN_MD_RQ_CRD_CTL_DEPTH_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_RP_CRD_CTL" */
+/* xn_md_rp Credit Circulation Control */
+/* ==================================================================== */
+
+#define SH_XN_MD_RP_CRD_CTL 0x0000000100010088
+#define SH_XN_MD_RP_CRD_CTL_MASK 0x000000000000003f
+#define SH_XN_MD_RP_CRD_CTL_INIT 0x0000000000000004
+
+/* SH_XN_MD_RP_CRD_CTL_DEPTH */
+/* Description: Total depth of buffering (in sic packets) */
+#define SH_XN_MD_RP_CRD_CTL_DEPTH_SHFT 0
+#define SH_XN_MD_RP_CRD_CTL_DEPTH_MASK 0x000000000000003f
+
+/* ==================================================================== */
+/* Register "SH_X_TAG0" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG0 0x0000000100020000
+#define SH_X_TAG0_MASK 0x00000000000fffff
+#define SH_X_TAG0_INIT 0x0000000000000000
+
+/* SH_X_TAG0_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG0_TAG_SHFT 0
+#define SH_X_TAG0_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG1" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG1 0x0000000100020008
+#define SH_X_TAG1_MASK 0x00000000000fffff
+#define SH_X_TAG1_INIT 0x0000000000000000
+
+/* SH_X_TAG1_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG1_TAG_SHFT 0
+#define SH_X_TAG1_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG2" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG2 0x0000000100020010
+#define SH_X_TAG2_MASK 0x00000000000fffff
+#define SH_X_TAG2_INIT 0x0000000000000000
+
+/* SH_X_TAG2_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG2_TAG_SHFT 0
+#define SH_X_TAG2_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG3" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG3 0x0000000100020018
+#define SH_X_TAG3_MASK 0x00000000000fffff
+#define SH_X_TAG3_INIT 0x0000000000000000
+
+/* SH_X_TAG3_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG3_TAG_SHFT 0
+#define SH_X_TAG3_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG4" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG4 0x0000000100020020
+#define SH_X_TAG4_MASK 0x00000000000fffff
+#define SH_X_TAG4_INIT 0x0000000000000000
+
+/* SH_X_TAG4_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG4_TAG_SHFT 0
+#define SH_X_TAG4_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG5" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG5 0x0000000100020028
+#define SH_X_TAG5_MASK 0x00000000000fffff
+#define SH_X_TAG5_INIT 0x0000000000000000
+
+/* SH_X_TAG5_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG5_TAG_SHFT 0
+#define SH_X_TAG5_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG6" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG6 0x0000000100020030
+#define SH_X_TAG6_MASK 0x00000000000fffff
+#define SH_X_TAG6_INIT 0x0000000000000000
+
+/* SH_X_TAG6_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG6_TAG_SHFT 0
+#define SH_X_TAG6_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_X_TAG7" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_X_TAG7 0x0000000100020038
+#define SH_X_TAG7_MASK 0x00000000000fffff
+#define SH_X_TAG7_INIT 0x0000000000000000
+
+/* SH_X_TAG7_TAG */
+/* Description: Valid + Tag Address */
+#define SH_X_TAG7_TAG_SHFT 0
+#define SH_X_TAG7_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG0" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG0 0x0000000100020040
+#define SH_Y_TAG0_MASK 0x00000000000fffff
+#define SH_Y_TAG0_INIT 0x0000000000000000
+
+/* SH_Y_TAG0_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG0_TAG_SHFT 0
+#define SH_Y_TAG0_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG1" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG1 0x0000000100020048
+#define SH_Y_TAG1_MASK 0x00000000000fffff
+#define SH_Y_TAG1_INIT 0x0000000000000000
+
+/* SH_Y_TAG1_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG1_TAG_SHFT 0
+#define SH_Y_TAG1_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG2" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG2 0x0000000100020050
+#define SH_Y_TAG2_MASK 0x00000000000fffff
+#define SH_Y_TAG2_INIT 0x0000000000000000
+
+/* SH_Y_TAG2_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG2_TAG_SHFT 0
+#define SH_Y_TAG2_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG3" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG3 0x0000000100020058
+#define SH_Y_TAG3_MASK 0x00000000000fffff
+#define SH_Y_TAG3_INIT 0x0000000000000000
+
+/* SH_Y_TAG3_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG3_TAG_SHFT 0
+#define SH_Y_TAG3_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG4" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG4 0x0000000100020060
+#define SH_Y_TAG4_MASK 0x00000000000fffff
+#define SH_Y_TAG4_INIT 0x0000000000000000
+
+/* SH_Y_TAG4_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG4_TAG_SHFT 0
+#define SH_Y_TAG4_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG5" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG5 0x0000000100020068
+#define SH_Y_TAG5_MASK 0x00000000000fffff
+#define SH_Y_TAG5_INIT 0x0000000000000000
+
+/* SH_Y_TAG5_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG5_TAG_SHFT 0
+#define SH_Y_TAG5_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG6" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG6 0x0000000100020070
+#define SH_Y_TAG6_MASK 0x00000000000fffff
+#define SH_Y_TAG6_INIT 0x0000000000000000
+
+/* SH_Y_TAG6_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG6_TAG_SHFT 0
+#define SH_Y_TAG6_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG7" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+#define SH_Y_TAG7 0x0000000100020078
+#define SH_Y_TAG7_MASK 0x00000000000fffff
+#define SH_Y_TAG7_INIT 0x0000000000000000
+
+/* SH_Y_TAG7_TAG */
+/* Description: Valid + Tag Address */
+#define SH_Y_TAG7_TAG_SHFT 0
+#define SH_Y_TAG7_TAG_MASK 0x00000000000fffff
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_BASE" */
+/* mmr/bist base address */
+/* ==================================================================== */
+
+#define SH_MMRBIST_BASE 0x0000000100020080
+#define SH_MMRBIST_BASE_MASK 0x0003fffffffffff8
+#define SH_MMRBIST_BASE_INIT 0x0000000000000000
+
+/* SH_MMRBIST_BASE_DWORD_ADDR */
+/* Description: bits 49:3 of the memory address */
+#define SH_MMRBIST_BASE_DWORD_ADDR_SHFT 3
+#define SH_MMRBIST_BASE_DWORD_ADDR_MASK 0x0003fffffffffff8
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_CTL" */
+/* Bist base address */
+/* ==================================================================== */
+
+#define SH_MMRBIST_CTL 0x0000000100020088
+#define SH_MMRBIST_CTL_MASK 0x0000177f7fffffff
+#define SH_MMRBIST_CTL_INIT 0x0000000000000000
+
+/* SH_MMRBIST_CTL_BLOCK_LENGTH */
+/* Description: number of dwords in operation */
+#define SH_MMRBIST_CTL_BLOCK_LENGTH_SHFT 0
+#define SH_MMRBIST_CTL_BLOCK_LENGTH_MASK 0x000000007fffffff
+
+/* SH_MMRBIST_CTL_CMD */
+/* Description: mmr/bist function */
+#define SH_MMRBIST_CTL_CMD_SHFT 32
+#define SH_MMRBIST_CTL_CMD_MASK 0x0000007f00000000
+
+/* SH_MMRBIST_CTL_IN_PROGRESS */
+/* Description: writing a 1 starts operation, hardware clears on */
+/* completion */
+#define SH_MMRBIST_CTL_IN_PROGRESS_SHFT 40
+#define SH_MMRBIST_CTL_IN_PROGRESS_MASK 0x0000010000000000
+
+/* SH_MMRBIST_CTL_FAIL */
+/* Description: mmr/bist had a data or address error */
+#define SH_MMRBIST_CTL_FAIL_SHFT 41
+#define SH_MMRBIST_CTL_FAIL_MASK 0x0000020000000000
+
+/* SH_MMRBIST_CTL_MEM_IDLE */
+/* Description: all memory activity is complete */
+#define SH_MMRBIST_CTL_MEM_IDLE_SHFT 42
+#define SH_MMRBIST_CTL_MEM_IDLE_MASK 0x0000040000000000
+
+/* SH_MMRBIST_CTL_RESET_STATE */
+/* Description: writing a 1 resets mmrbist hardware, hardware */
+/* clears on completion */
+#define SH_MMRBIST_CTL_RESET_STATE_SHFT 44
+#define SH_MMRBIST_CTL_RESET_STATE_MASK 0x0000100000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_DATA_CFG" */
+/* configuration for md debug data muxes */
+/* ==================================================================== */
+
+#define SH_MD_DBUG_DATA_CFG 0x0000000100020100
+#define SH_MD_DBUG_DATA_CFG_MASK 0x7777777777777777
+#define SH_MD_DBUG_DATA_CFG_INIT 0x0000000000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET */
+/* Description: selects which md chiplet drives nibble0 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET_SHFT 0
+#define SH_MD_DBUG_DATA_CFG_NIBBLE0_CHIPLET_MASK 0x0000000000000007
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble0 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE_SHFT 4
+#define SH_MD_DBUG_DATA_CFG_NIBBLE0_NIBBLE_MASK 0x0000000000000070
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET */
+/* Description: selects which md chiplet drives nibble1 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET_SHFT 8
+#define SH_MD_DBUG_DATA_CFG_NIBBLE1_CHIPLET_MASK 0x0000000000000700
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble1 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE_SHFT 12
+#define SH_MD_DBUG_DATA_CFG_NIBBLE1_NIBBLE_MASK 0x0000000000007000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET */
+/* Description: selects which md chiplet drives nibble2 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET_SHFT 16
+#define SH_MD_DBUG_DATA_CFG_NIBBLE2_CHIPLET_MASK 0x0000000000070000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble2 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE_SHFT 20
+#define SH_MD_DBUG_DATA_CFG_NIBBLE2_NIBBLE_MASK 0x0000000000700000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET */
+/* Description: selects which md chiplet drives nibble3 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET_SHFT 24
+#define SH_MD_DBUG_DATA_CFG_NIBBLE3_CHIPLET_MASK 0x0000000007000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble3 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE_SHFT 28
+#define SH_MD_DBUG_DATA_CFG_NIBBLE3_NIBBLE_MASK 0x0000000070000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET */
+/* Description: selects which md chiplet drives nibble4 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET_SHFT 32
+#define SH_MD_DBUG_DATA_CFG_NIBBLE4_CHIPLET_MASK 0x0000000700000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble4 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE_SHFT 36
+#define SH_MD_DBUG_DATA_CFG_NIBBLE4_NIBBLE_MASK 0x0000007000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET */
+/* Description: selects which md chiplet drives nibble5 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET_SHFT 40
+#define SH_MD_DBUG_DATA_CFG_NIBBLE5_CHIPLET_MASK 0x0000070000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble5 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE_SHFT 44
+#define SH_MD_DBUG_DATA_CFG_NIBBLE5_NIBBLE_MASK 0x0000700000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET */
+/* Description: selects which md chiplet drives nibble6 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET_SHFT 48
+#define SH_MD_DBUG_DATA_CFG_NIBBLE6_CHIPLET_MASK 0x0007000000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble6 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE_SHFT 52
+#define SH_MD_DBUG_DATA_CFG_NIBBLE6_NIBBLE_MASK 0x0070000000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET */
+/* Description: selects which md chiplet drives nibble7 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET_SHFT 56
+#define SH_MD_DBUG_DATA_CFG_NIBBLE7_CHIPLET_MASK 0x0700000000000000
+
+/* SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble7 */
+#define SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE_SHFT 60
+#define SH_MD_DBUG_DATA_CFG_NIBBLE7_NIBBLE_MASK 0x7000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_TRIGGER_CFG" */
+/* configuration for md debug triggers */
+/* ==================================================================== */
+
+#define SH_MD_DBUG_TRIGGER_CFG 0x0000000100020108
+#define SH_MD_DBUG_TRIGGER_CFG_MASK 0xf777777777777777
+#define SH_MD_DBUG_TRIGGER_CFG_INIT 0x0000000000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET */
+/* Description: selects which md chiplet drives nibble0 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET_SHFT 0
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_CHIPLET_MASK 0x0000000000000007
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble0 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE_SHFT 4
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE0_NIBBLE_MASK 0x0000000000000070
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET */
+/* Description: selects which md chiplet drives nibble1 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET_SHFT 8
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_CHIPLET_MASK 0x0000000000000700
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble1 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE_SHFT 12
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE1_NIBBLE_MASK 0x0000000000007000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET */
+/* Description: selects which md chiplet drives nibble2 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET_SHFT 16
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_CHIPLET_MASK 0x0000000000070000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble2 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE_SHFT 20
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE2_NIBBLE_MASK 0x0000000000700000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET */
+/* Description: selects which md chiplet drives nibble3 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET_SHFT 24
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_CHIPLET_MASK 0x0000000007000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble3 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE_SHFT 28
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE3_NIBBLE_MASK 0x0000000070000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET */
+/* Description: selects which md chiplet drives nibble4 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET_SHFT 32
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_CHIPLET_MASK 0x0000000700000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble4 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE_SHFT 36
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE4_NIBBLE_MASK 0x0000007000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET */
+/* Description: selects which md chiplet drives nibble5 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET_SHFT 40
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_CHIPLET_MASK 0x0000070000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble5 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE_SHFT 44
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE5_NIBBLE_MASK 0x0000700000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET */
+/* Description: selects which md chiplet drives nibble6 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET_SHFT 48
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_CHIPLET_MASK 0x0007000000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble6 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE_SHFT 52
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE6_NIBBLE_MASK 0x0070000000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET */
+/* Description: selects which md chiplet drives nibble7 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET_SHFT 56
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_CHIPLET_MASK 0x0700000000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE */
+/* Description: selects which nibble from selected chiplet drives nibble7 */
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE_SHFT 60
+#define SH_MD_DBUG_TRIGGER_CFG_NIBBLE7_NIBBLE_MASK 0x7000000000000000
+
+/* SH_MD_DBUG_TRIGGER_CFG_ENABLE */
+/* Description: enables triggering on pattern match */
+#define SH_MD_DBUG_TRIGGER_CFG_ENABLE_SHFT 63
+#define SH_MD_DBUG_TRIGGER_CFG_ENABLE_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_COMPARE" */
+/* md debug compare pattern and mask */
+/* ==================================================================== */
+
+#define SH_MD_DBUG_COMPARE 0x0000000100020110
+#define SH_MD_DBUG_COMPARE_MASK 0xffffffffffffffff
+#define SH_MD_DBUG_COMPARE_INIT 0x0000000000000000
+
+/* SH_MD_DBUG_COMPARE_PATTERN */
+/* Description: pattern against which to compare dbug data for trigger */
+#define SH_MD_DBUG_COMPARE_PATTERN_SHFT 0
+#define SH_MD_DBUG_COMPARE_PATTERN_MASK 0x00000000ffffffff
+
+/* SH_MD_DBUG_COMPARE_MASK */
+/* Description: bits to include in compare of dbug data for trigger */
+#define SH_MD_DBUG_COMPARE_MASK_SHFT 32
+#define SH_MD_DBUG_COMPARE_MASK_MASK 0xffffffff00000000
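
As a sketch (not part of the original header) of the general pattern for these definitions: every field is read and written through its `_SHFT`/`_MASK` pair, shown here for the debug-compare register's PATTERN and MASK fields. The helpers `dbug_compare_word` and `field_get` are illustrative names, not part of this API.

```c
#include <assert.h>
#include <stdint.h>

/* Field definitions copied from this header. */
#define SH_MD_DBUG_COMPARE_PATTERN_SHFT 0
#define SH_MD_DBUG_COMPARE_PATTERN_MASK 0x00000000ffffffffULL
#define SH_MD_DBUG_COMPARE_MASK_SHFT 32
#define SH_MD_DBUG_COMPARE_MASK_MASK 0xffffffff00000000ULL

/* Pack a 32-bit trigger pattern and its compare mask into one
 * 64-bit register value. */
static inline uint64_t dbug_compare_word(uint64_t pattern, uint64_t mask)
{
    return ((pattern << SH_MD_DBUG_COMPARE_PATTERN_SHFT) &
            SH_MD_DBUG_COMPARE_PATTERN_MASK) |
           ((mask << SH_MD_DBUG_COMPARE_MASK_SHFT) &
            SH_MD_DBUG_COMPARE_MASK_MASK);
}

/* Recover one field from a raw register value. */
static inline uint64_t field_get(uint64_t reg, uint64_t mask, int shft)
{
    return (reg & mask) >> shft;
}
```

The same insert/extract idiom applies to every SHFT/MASK pair in this file.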
+
+/* ==================================================================== */
+/* Register "SH_X_MOD_DBUG_SEL" */
+/* MD acx debug select */
+/* ==================================================================== */
+
+#define SH_X_MOD_DBUG_SEL 0x0000000100020118
+#define SH_X_MOD_DBUG_SEL_MASK 0x03ffffffffffffff
+#define SH_X_MOD_DBUG_SEL_INIT 0x0000000000000000
+
+/* SH_X_MOD_DBUG_SEL_TAG_SEL */
+/* Description: tagmgr select */
+#define SH_X_MOD_DBUG_SEL_TAG_SEL_SHFT 0
+#define SH_X_MOD_DBUG_SEL_TAG_SEL_MASK 0x00000000000000ff
+
+/* SH_X_MOD_DBUG_SEL_WBQ_SEL */
+/* Description: wbqtg select */
+#define SH_X_MOD_DBUG_SEL_WBQ_SEL_SHFT 8
+#define SH_X_MOD_DBUG_SEL_WBQ_SEL_MASK 0x000000000000ff00
+
+/* SH_X_MOD_DBUG_SEL_ARB_SEL */
+/* Description: arbque select */
+#define SH_X_MOD_DBUG_SEL_ARB_SEL_SHFT 16
+#define SH_X_MOD_DBUG_SEL_ARB_SEL_MASK 0x0000000000ff0000
+
+/* SH_X_MOD_DBUG_SEL_ATL_SEL */
+/* Description: aintl select */
+#define SH_X_MOD_DBUG_SEL_ATL_SEL_SHFT 24
+#define SH_X_MOD_DBUG_SEL_ATL_SEL_MASK 0x00000007ff000000
+
+/* SH_X_MOD_DBUG_SEL_ATR_SEL */
+/* Description: aintr select */
+#define SH_X_MOD_DBUG_SEL_ATR_SEL_SHFT 35
+#define SH_X_MOD_DBUG_SEL_ATR_SEL_MASK 0x00003ff800000000
+
+/* SH_X_MOD_DBUG_SEL_DQL_SEL */
+/* Description: dqctr select */
+#define SH_X_MOD_DBUG_SEL_DQL_SEL_SHFT 46
+#define SH_X_MOD_DBUG_SEL_DQL_SEL_MASK 0x000fc00000000000
+
+/* SH_X_MOD_DBUG_SEL_DQR_SEL */
+/* Description: dqctl select */
+#define SH_X_MOD_DBUG_SEL_DQR_SEL_SHFT 52
+#define SH_X_MOD_DBUG_SEL_DQR_SEL_MASK 0x03f0000000000000
+
+/* ==================================================================== */
+/* Register "SH_X_DBUG_SEL" */
+/* MD acx debug select */
+/* ==================================================================== */
+
+#define SH_X_DBUG_SEL 0x0000000100020120
+#define SH_X_DBUG_SEL_MASK 0x0000000000ffffff
+#define SH_X_DBUG_SEL_INIT 0x0000000000000000
+
+/* SH_X_DBUG_SEL_DBG_SEL */
+/* Description: debug select */
+#define SH_X_DBUG_SEL_DBG_SEL_SHFT 0
+#define SH_X_DBUG_SEL_DBG_SEL_MASK 0x0000000000ffffff
+
+/* ==================================================================== */
+/* Register "SH_X_LADDR_CMP" */
+/* MD acx address compare */
+/* ==================================================================== */
+
+#define SH_X_LADDR_CMP 0x0000000100020128
+#define SH_X_LADDR_CMP_MASK 0x0fffffff0fffffff
+#define SH_X_LADDR_CMP_INIT 0x0000000000000000
+
+/* SH_X_LADDR_CMP_CMP_VAL */
+/* Description: Compare value */
+#define SH_X_LADDR_CMP_CMP_VAL_SHFT 0
+#define SH_X_LADDR_CMP_CMP_VAL_MASK 0x000000000fffffff
+
+/* SH_X_LADDR_CMP_MASK_VAL */
+/* Description: Mask value */
+#define SH_X_LADDR_CMP_MASK_VAL_SHFT 32
+#define SH_X_LADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_X_RADDR_CMP" */
+/* MD acx address compare */
+/* ==================================================================== */
+
+#define SH_X_RADDR_CMP 0x0000000100020130
+#define SH_X_RADDR_CMP_MASK 0x0fffffff0fffffff
+#define SH_X_RADDR_CMP_INIT 0x0000000000000000
+
+/* SH_X_RADDR_CMP_CMP_VAL */
+/* Description: Compare value */
+#define SH_X_RADDR_CMP_CMP_VAL_SHFT 0
+#define SH_X_RADDR_CMP_CMP_VAL_MASK 0x000000000fffffff
+
+/* SH_X_RADDR_CMP_MASK_VAL */
+/* Description: Mask value */
+#define SH_X_RADDR_CMP_MASK_VAL_SHFT 32
+#define SH_X_RADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_X_TAG_CMP" */
+/* MD acx tagmgr compare */
+/* ==================================================================== */
+
+#define SH_X_TAG_CMP 0x0000000100020138
+#define SH_X_TAG_CMP_MASK 0x007fffffffffffff
+#define SH_X_TAG_CMP_INIT 0x0000000000000000
+
+/* SH_X_TAG_CMP_CMD */
+/* Description: Command compare value */
+#define SH_X_TAG_CMP_CMD_SHFT 0
+#define SH_X_TAG_CMP_CMD_MASK 0x00000000000000ff
+
+/* SH_X_TAG_CMP_ADDR */
+/* Description: Address compare value */
+#define SH_X_TAG_CMP_ADDR_SHFT 8
+#define SH_X_TAG_CMP_ADDR_MASK 0x000001ffffffff00
+
+/* SH_X_TAG_CMP_SRC */
+/* Description: Source compare value */
+#define SH_X_TAG_CMP_SRC_SHFT 41
+#define SH_X_TAG_CMP_SRC_MASK 0x007ffe0000000000
+
+/* ==================================================================== */
+/* Register "SH_X_TAG_MASK" */
+/* MD acx tagmgr mask */
+/* ==================================================================== */
+
+#define SH_X_TAG_MASK 0x0000000100020140
+#define SH_X_TAG_MASK_MASK 0x007fffffffffffff
+#define SH_X_TAG_MASK_INIT 0x0000000000000000
+
+/* SH_X_TAG_MASK_CMD */
+/* Description: Command mask value */
+#define SH_X_TAG_MASK_CMD_SHFT 0
+#define SH_X_TAG_MASK_CMD_MASK 0x00000000000000ff
+
+/* SH_X_TAG_MASK_ADDR */
+/* Description: Address mask value */
+#define SH_X_TAG_MASK_ADDR_SHFT 8
+#define SH_X_TAG_MASK_ADDR_MASK 0x000001ffffffff00
+
+/* SH_X_TAG_MASK_SRC */
+/* Description: Source mask value */
+#define SH_X_TAG_MASK_SRC_SHFT 41
+#define SH_X_TAG_MASK_SRC_MASK 0x007ffe0000000000
+
+/* ==================================================================== */
+/* Register "SH_Y_MOD_DBUG_SEL" */
+/* MD acy debug select */
+/* ==================================================================== */
+
+#define SH_Y_MOD_DBUG_SEL 0x0000000100020148
+#define SH_Y_MOD_DBUG_SEL_MASK 0x03ffffffffffffff
+#define SH_Y_MOD_DBUG_SEL_INIT 0x0000000000000000
+
+/* SH_Y_MOD_DBUG_SEL_TAG_SEL */
+/* Description: tagmgr select */
+#define SH_Y_MOD_DBUG_SEL_TAG_SEL_SHFT 0
+#define SH_Y_MOD_DBUG_SEL_TAG_SEL_MASK 0x00000000000000ff
+
+/* SH_Y_MOD_DBUG_SEL_WBQ_SEL */
+/* Description: wbqtg select */
+#define SH_Y_MOD_DBUG_SEL_WBQ_SEL_SHFT 8
+#define SH_Y_MOD_DBUG_SEL_WBQ_SEL_MASK 0x000000000000ff00
+
+/* SH_Y_MOD_DBUG_SEL_ARB_SEL */
+/* Description: arbque select */
+#define SH_Y_MOD_DBUG_SEL_ARB_SEL_SHFT 16
+#define SH_Y_MOD_DBUG_SEL_ARB_SEL_MASK 0x0000000000ff0000
+
+/* SH_Y_MOD_DBUG_SEL_ATL_SEL */
+/* Description: aintl select */
+#define SH_Y_MOD_DBUG_SEL_ATL_SEL_SHFT 24
+#define SH_Y_MOD_DBUG_SEL_ATL_SEL_MASK 0x00000007ff000000
+
+/* SH_Y_MOD_DBUG_SEL_ATR_SEL */
+/* Description: aintr select */
+#define SH_Y_MOD_DBUG_SEL_ATR_SEL_SHFT 35
+#define SH_Y_MOD_DBUG_SEL_ATR_SEL_MASK 0x00003ff800000000
+
+/* SH_Y_MOD_DBUG_SEL_DQL_SEL */
+/* Description: dqctr select */
+#define SH_Y_MOD_DBUG_SEL_DQL_SEL_SHFT 46
+#define SH_Y_MOD_DBUG_SEL_DQL_SEL_MASK 0x000fc00000000000
+
+/* SH_Y_MOD_DBUG_SEL_DQR_SEL */
+/* Description: dqctl select */
+#define SH_Y_MOD_DBUG_SEL_DQR_SEL_SHFT 52
+#define SH_Y_MOD_DBUG_SEL_DQR_SEL_MASK 0x03f0000000000000
+
+/* ==================================================================== */
+/* Register "SH_Y_DBUG_SEL" */
+/* MD acy debug select */
+/* ==================================================================== */
+
+#define SH_Y_DBUG_SEL 0x0000000100020150
+#define SH_Y_DBUG_SEL_MASK 0x0000000000ffffff
+#define SH_Y_DBUG_SEL_INIT 0x0000000000000000
+
+/* SH_Y_DBUG_SEL_DBG_SEL */
+/* Description: debug select */
+#define SH_Y_DBUG_SEL_DBG_SEL_SHFT 0
+#define SH_Y_DBUG_SEL_DBG_SEL_MASK 0x0000000000ffffff
+
+/* ==================================================================== */
+/* Register "SH_Y_LADDR_CMP" */
+/* MD acy address compare */
+/* ==================================================================== */
+
+#define SH_Y_LADDR_CMP 0x0000000100020158
+#define SH_Y_LADDR_CMP_MASK 0x0fffffff0fffffff
+#define SH_Y_LADDR_CMP_INIT 0x0000000000000000
+
+/* SH_Y_LADDR_CMP_CMP_VAL */
+/* Description: Compare value */
+#define SH_Y_LADDR_CMP_CMP_VAL_SHFT 0
+#define SH_Y_LADDR_CMP_CMP_VAL_MASK 0x000000000fffffff
+
+/* SH_Y_LADDR_CMP_MASK_VAL */
+/* Description: Mask value */
+#define SH_Y_LADDR_CMP_MASK_VAL_SHFT 32
+#define SH_Y_LADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_Y_RADDR_CMP" */
+/* MD acy address compare */
+/* ==================================================================== */
+
+#define SH_Y_RADDR_CMP 0x0000000100020160
+#define SH_Y_RADDR_CMP_MASK 0x0fffffff0fffffff
+#define SH_Y_RADDR_CMP_INIT 0x0000000000000000
+
+/* SH_Y_RADDR_CMP_CMP_VAL */
+/* Description: Compare value */
+#define SH_Y_RADDR_CMP_CMP_VAL_SHFT 0
+#define SH_Y_RADDR_CMP_CMP_VAL_MASK 0x000000000fffffff
+
+/* SH_Y_RADDR_CMP_MASK_VAL */
+/* Description: Mask value */
+#define SH_Y_RADDR_CMP_MASK_VAL_SHFT 32
+#define SH_Y_RADDR_CMP_MASK_VAL_MASK 0x0fffffff00000000
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG_CMP" */
+/* MD acy tagmgr compare */
+/* ==================================================================== */
+
+#define SH_Y_TAG_CMP 0x0000000100020168
+#define SH_Y_TAG_CMP_MASK 0x007fffffffffffff
+#define SH_Y_TAG_CMP_INIT 0x0000000000000000
+
+/* SH_Y_TAG_CMP_CMD */
+/* Description: Command compare value */
+#define SH_Y_TAG_CMP_CMD_SHFT 0
+#define SH_Y_TAG_CMP_CMD_MASK 0x00000000000000ff
+
+/* SH_Y_TAG_CMP_ADDR */
+/* Description: Address compare value */
+#define SH_Y_TAG_CMP_ADDR_SHFT 8
+#define SH_Y_TAG_CMP_ADDR_MASK 0x000001ffffffff00
+
+/* SH_Y_TAG_CMP_SRC */
+/* Description: Source compare value */
+#define SH_Y_TAG_CMP_SRC_SHFT 41
+#define SH_Y_TAG_CMP_SRC_MASK 0x007ffe0000000000
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG_MASK" */
+/* MD acy tagmgr mask */
+/* ==================================================================== */
+
+#define SH_Y_TAG_MASK 0x0000000100020170
+#define SH_Y_TAG_MASK_MASK 0x007fffffffffffff
+#define SH_Y_TAG_MASK_INIT 0x0000000000000000
+
+/* SH_Y_TAG_MASK_CMD */
+/* Description: Command mask value */
+#define SH_Y_TAG_MASK_CMD_SHFT 0
+#define SH_Y_TAG_MASK_CMD_MASK 0x00000000000000ff
+
+/* SH_Y_TAG_MASK_ADDR */
+/* Description: Address mask value */
+#define SH_Y_TAG_MASK_ADDR_SHFT 8
+#define SH_Y_TAG_MASK_ADDR_MASK 0x000001ffffffff00
+
+/* SH_Y_TAG_MASK_SRC */
+/* Description: Source mask value */
+#define SH_Y_TAG_MASK_SRC_SHFT 41
+#define SH_Y_TAG_MASK_SRC_MASK 0x007ffe0000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_JNR_DBUG_DATA_CFG" */
+/* configuration for md jnr debug data muxes */
+/* ==================================================================== */
+
+#define SH_MD_JNR_DBUG_DATA_CFG 0x0000000100020178
+#define SH_MD_JNR_DBUG_DATA_CFG_MASK 0x0000000077777777
+#define SH_MD_JNR_DBUG_DATA_CFG_INIT 0x0000000000000000
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL */
+/* Description: selects which nibble drives nibble0 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL_SHFT 0
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE0_SEL_MASK 0x0000000000000007
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL */
+/* Description: selects which nibble drives nibble1 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL_SHFT 4
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE1_SEL_MASK 0x0000000000000070
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL */
+/* Description: selects which nibble drives nibble2 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL_SHFT 8
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE2_SEL_MASK 0x0000000000000700
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL */
+/* Description: selects which nibble drives nibble3 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL_SHFT 12
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE3_SEL_MASK 0x0000000000007000
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL */
+/* Description: selects which nibble drives nibble4 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL_SHFT 16
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE4_SEL_MASK 0x0000000000070000
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL */
+/* Description: selects which nibble drives nibble5 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL_SHFT 20
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE5_SEL_MASK 0x0000000000700000
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL */
+/* Description: selects which nibble drives nibble6 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL_SHFT 24
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE6_SEL_MASK 0x0000000007000000
+
+/* SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL */
+/* Description: selects which nibble drives nibble7 */
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL_SHFT 28
+#define SH_MD_JNR_DBUG_DATA_CFG_NIBBLE7_SEL_MASK 0x0000000070000000
+
+/* ==================================================================== */
+/* Register "SH_MD_LAST_CREDIT" */
+/* captures last credit values on reset */
+/* ==================================================================== */
+
+#define SH_MD_LAST_CREDIT 0x0000000100020180
+#define SH_MD_LAST_CREDIT_MASK 0x0000003f3f3f3f3f
+#define SH_MD_LAST_CREDIT_INIT 0x0000000000000000
+
+/* SH_MD_LAST_CREDIT_RQ_TO_PI */
+/* Description: capture of request credits to pi */
+#define SH_MD_LAST_CREDIT_RQ_TO_PI_SHFT 0
+#define SH_MD_LAST_CREDIT_RQ_TO_PI_MASK 0x000000000000003f
+
+/* SH_MD_LAST_CREDIT_RP_TO_PI */
+/* Description: capture of reply credits to pi */
+#define SH_MD_LAST_CREDIT_RP_TO_PI_SHFT 8
+#define SH_MD_LAST_CREDIT_RP_TO_PI_MASK 0x0000000000003f00
+
+/* SH_MD_LAST_CREDIT_RQ_TO_XN */
+/* Description: capture of request credits to xn */
+#define SH_MD_LAST_CREDIT_RQ_TO_XN_SHFT 16
+#define SH_MD_LAST_CREDIT_RQ_TO_XN_MASK 0x00000000003f0000
+
+/* SH_MD_LAST_CREDIT_RP_TO_XN */
+/* Description: capture of reply credits to xn */
+#define SH_MD_LAST_CREDIT_RP_TO_XN_SHFT 24
+#define SH_MD_LAST_CREDIT_RP_TO_XN_MASK 0x000000003f000000
+
+/* SH_MD_LAST_CREDIT_TO_LB */
+/* Description: capture of credits to lb */
+#define SH_MD_LAST_CREDIT_TO_LB_SHFT 32
+#define SH_MD_LAST_CREDIT_TO_LB_MASK 0x0000003f00000000
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_ADDR" */
+/* Address capture address register */
+/* ==================================================================== */
+
+#define SH_MEM_CAPTURE_ADDR 0x0000000100020300
+#define SH_MEM_CAPTURE_ADDR_MASK 0x00000ffffffffff8
+#define SH_MEM_CAPTURE_ADDR_INIT 0x0000000000000000
+
+/* SH_MEM_CAPTURE_ADDR_ADDR */
+/* Description: upper bits of address */
+#define SH_MEM_CAPTURE_ADDR_ADDR_SHFT 3
+#define SH_MEM_CAPTURE_ADDR_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_MEM_CAPTURE_ADDR_CMD */
+/* Description: command of reference */
+#define SH_MEM_CAPTURE_ADDR_CMD_SHFT 36
+#define SH_MEM_CAPTURE_ADDR_CMD_MASK 0x00000ff000000000
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_MASK" */
+/* Address capture mask register */
+/* ==================================================================== */
+
+#define SH_MEM_CAPTURE_MASK 0x0000000100020308
+#define SH_MEM_CAPTURE_MASK_MASK 0x00003ffffffffff8
+#define SH_MEM_CAPTURE_MASK_INIT 0x0000000000000000
+
+/* SH_MEM_CAPTURE_MASK_ADDR */
+/* Description: upper bits of address */
+#define SH_MEM_CAPTURE_MASK_ADDR_SHFT 3
+#define SH_MEM_CAPTURE_MASK_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_MEM_CAPTURE_MASK_CMD */
+/* Description: command of reference */
+#define SH_MEM_CAPTURE_MASK_CMD_SHFT 36
+#define SH_MEM_CAPTURE_MASK_CMD_MASK 0x00000ff000000000
+
+/* SH_MEM_CAPTURE_MASK_ENABLE_LOCAL */
+/* Description: capture references originating locally */
+#define SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_SHFT 44
+#define SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_MASK 0x0000100000000000
+
+/* SH_MEM_CAPTURE_MASK_ENABLE_REMOTE */
+/* Description: capture references originating remotely */
+#define SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_SHFT 45
+#define SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_MASK 0x0000200000000000
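
A sketch (not part of the original header) of a typical capture-mask setting: match on every address bit and enable capture of both locally and remotely originated references. The helper name `mem_capture_mask_all` is illustrative.

```c
#include <assert.h>
#include <stdint.h>

/* Field definitions copied from this header. */
#define SH_MEM_CAPTURE_MASK_ADDR_MASK 0x0000000ffffffff8ULL
#define SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_MASK 0x0000100000000000ULL
#define SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_MASK 0x0000200000000000ULL

/* Build a mask-register value that compares all address bits of the
 * companion SH_MEM_CAPTURE_ADDR register and captures references from
 * both local and remote originators. */
static inline uint64_t mem_capture_mask_all(void)
{
    return SH_MEM_CAPTURE_MASK_ADDR_MASK |
           SH_MEM_CAPTURE_MASK_ENABLE_LOCAL_MASK |
           SH_MEM_CAPTURE_MASK_ENABLE_REMOTE_MASK;
}
```

Clearing address or command bits in this mask widens the match; clearing either enable bit filters by originator instead.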
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_HDR" */
+/* Address capture header register */
+/* ==================================================================== */
+
+#define SH_MEM_CAPTURE_HDR 0x0000000100020310
+#define SH_MEM_CAPTURE_HDR_MASK 0xfffffffffffffff8
+#define SH_MEM_CAPTURE_HDR_INIT 0x0000000000000000
+
+/* SH_MEM_CAPTURE_HDR_ADDR */
+/* Description: upper bits of reference address */
+#define SH_MEM_CAPTURE_HDR_ADDR_SHFT 3
+#define SH_MEM_CAPTURE_HDR_ADDR_MASK 0x0000000ffffffff8
+
+/* SH_MEM_CAPTURE_HDR_CMD */
+/* Description: command of reference */
+#define SH_MEM_CAPTURE_HDR_CMD_SHFT 36
+#define SH_MEM_CAPTURE_HDR_CMD_MASK 0x00000ff000000000
+
+/* SH_MEM_CAPTURE_HDR_SRC */
+/* Description: source node of reference */
+#define SH_MEM_CAPTURE_HDR_SRC_SHFT 44
+#define SH_MEM_CAPTURE_HDR_SRC_MASK 0x03fff00000000000
+
+/* SH_MEM_CAPTURE_HDR_CNTR */
+/* Description: increments on every capture */
+#define SH_MEM_CAPTURE_HDR_CNTR_SHFT 58
+#define SH_MEM_CAPTURE_HDR_CNTR_MASK 0xfc00000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_CONFIG" */
+/* DQ directory config register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_CONFIG 0x0000000100030000
+#define SH_MD_DQLP_MMR_DIR_CONFIG_MASK 0x000000000000001f
+#define SH_MD_DQLP_MMR_DIR_CONFIG_INIT 0x0000000000000010
+
+/* SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE */
+/* Description: system size code */
+#define SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_CONFIG_SYS_SIZE_MASK 0x0000000000000007
+
+/* SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC */
+/* Description: enable directory ecc correction */
+#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC_SHFT 3
+#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRECC_MASK 0x0000000000000008
+
+/* SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS */
+/* Description: enable local poisoning for dir table fall-through */
+#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS_SHFT 4
+#define SH_MD_DQLP_MMR_DIR_CONFIG_EN_DIRPOIS_MASK 0x0000000000000010
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC0" */
+/* node [63:0] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRESVEC0 0x0000000100030100
+#define SH_MD_DQLP_MMR_DIR_PRESVEC0_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_PRESVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRESVEC0_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC1" */
+/* node [127:64] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRESVEC1 0x0000000100030110
+#define SH_MD_DQLP_MMR_DIR_PRESVEC1_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_PRESVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRESVEC1_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC2" */
+/* node [191:128] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRESVEC2 0x0000000100030120
+#define SH_MD_DQLP_MMR_DIR_PRESVEC2_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_PRESVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRESVEC2_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC3" */
+/* node [255:192] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRESVEC3 0x0000000100030130
+#define SH_MD_DQLP_MMR_DIR_PRESVEC3_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_PRESVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRESVEC3_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC0" */
+/* local vector for acc=0 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC0 0x0000000100030200
+#define SH_MD_DQLP_MMR_DIR_LOCVEC0_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC0_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC1" */
+/* local vector for acc=1 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC1 0x0000000100030210
+#define SH_MD_DQLP_MMR_DIR_LOCVEC1_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC1_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC2" */
+/* local vector for acc=2 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC2 0x0000000100030220
+#define SH_MD_DQLP_MMR_DIR_LOCVEC2_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC2_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC3" */
+/* local vector for acc=3 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC3 0x0000000100030230
+#define SH_MD_DQLP_MMR_DIR_LOCVEC3_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC3_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC4" */
+/* local vector for acc=4 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC4 0x0000000100030240
+#define SH_MD_DQLP_MMR_DIR_LOCVEC4_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC4_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC */
+/* Description: 1 node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC4_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC5" */
+/* local vector for acc=5 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC5 0x0000000100030250
+#define SH_MD_DQLP_MMR_DIR_LOCVEC5_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC5_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC */
+/* Description: 1 node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC5_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC6" */
+/* local vector for acc=6 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC6 0x0000000100030260
+#define SH_MD_DQLP_MMR_DIR_LOCVEC6_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC6_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC */
+/* Description: 1 node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC6_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC7" */
+/* local vector for acc=7 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7 0x0000000100030270
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7_MASK 0xffffffffffffffff
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC */
+/* Description: 1 node is local */
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_LOCVEC7_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC0" */
+/* privilege vector for acc=0 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0 0x0000000100030300
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC0_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC0_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC1" */
+/* privilege vector for acc=1 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1 0x0000000100030310
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC1_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC1_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC2" */
+/* privilege vector for acc=2 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2 0x0000000100030320
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC2_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC2_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC3" */
+/* privilege vector for acc=3 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3 0x0000000100030330
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC3_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC3_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC4" */
+/* privilege vector for acc=4 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4 0x0000000100030340
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC4_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC4_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC5" */
+/* privilege vector for acc=5 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5 0x0000000100030350
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC5_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC5_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC6" */
+/* privilege vector for acc=6 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6 0x0000000100030360
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC6_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC6_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC7" */
+/* privilege vector for acc=7 */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7 0x0000000100030370
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_MASK 0x000000000fffffff
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC7_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_IN_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_PRIVEC7_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TIMER" */
+/* MD SXRO timer */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_TIMER 0x0000000100030400
+#define SH_MD_DQLP_MMR_DIR_TIMER_MASK 0x00000000003fffff
+#define SH_MD_DQLP_MMR_DIR_TIMER_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV */
+/* Description: timer divide register */
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_DIV_MASK 0x0000000000000fff
+
+/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN */
+/* Description: timer enable */
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN_SHFT 12
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_EN_MASK 0x0000000000001000
+
+/* SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR */
+/* Description: value of current timer */
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR_SHFT 13
+#define SH_MD_DQLP_MMR_DIR_TIMER_TIMER_CUR_MASK 0x00000000003fe000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY" */
+/* directory pio write data */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY 0x0000000100031000
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_MASK 0x03ffffffffffffff
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ECC" */
+/* directory ecc register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC 0x0000000100031010
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA */
+/* Description: XOR bits for directory ECC group 1 */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB */
+/* Description: XOR bits for directory ECC group 2 */
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQLP_MMR_PIOWD_DIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY" */
+/* x directory pio read data */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY 0x0000000100032000
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_MASK 0x0fffffffffffffff
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR_SHFT 58
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_COR_MASK 0x0400000000000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC_SHFT 59
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY_UNC_MASK 0x0800000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ECC" */
+/* x directory ecc */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC 0x0000000100032010
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA */
+/* Description: group 1 ecc */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB */
+/* Description: group 2 ecc */
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQLP_MMR_XPIORD_XDIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY" */
+/* y directory pio read data */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY 0x0000000100032800
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_MASK 0x0fffffffffffffff
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR_SHFT 58
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_COR_MASK 0x0400000000000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC_SHFT 59
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY_UNC_MASK 0x0800000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ECC" */
+/* y directory ecc */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC 0x0000000100032810
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA */
+/* Description: group 1 ecc */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB */
+/* Description: group 2 ecc */
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQLP_MMR_YPIORD_YDIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XCERR1 0x0000000100033000
+#define SH_MD_DQLP_MMR_XCERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQLP_MMR_XCERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XCERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQLP_MMR_XCERR1_GRP1_SHFT 0
+#define SH_MD_DQLP_MMR_XCERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_XCERR1_VAL */
+/* Description: correctable ecc error in group 1 bits */
+#define SH_MD_DQLP_MMR_XCERR1_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_XCERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_XCERR1_MORE */
+/* Description: more than one correctable ecc error in group 1 */
+#define SH_MD_DQLP_MMR_XCERR1_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_XCERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQLP_MMR_XCERR1_ARM */
+/* Description: writing 1 arms correctable ecc error capture */
+#define SH_MD_DQLP_MMR_XCERR1_ARM_SHFT 38
+#define SH_MD_DQLP_MMR_XCERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XCERR2 0x0000000100033010
+#define SH_MD_DQLP_MMR_XCERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQLP_MMR_XCERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XCERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQLP_MMR_XCERR2_GRP2_SHFT 0
+#define SH_MD_DQLP_MMR_XCERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_XCERR2_VAL */
+/* Description: correctable ecc error in group 2 bits */
+#define SH_MD_DQLP_MMR_XCERR2_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_XCERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_XCERR2_MORE */
+/* Description: more than one correctable ecc error in group 2 */
+#define SH_MD_DQLP_MMR_XCERR2_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_XCERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XUERR1 0x0000000100033020
+#define SH_MD_DQLP_MMR_XUERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQLP_MMR_XUERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XUERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQLP_MMR_XUERR1_GRP1_SHFT 0
+#define SH_MD_DQLP_MMR_XUERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_XUERR1_VAL */
+/* Description: uncorrectable ecc error in group 1 bits */
+#define SH_MD_DQLP_MMR_XUERR1_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_XUERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_XUERR1_MORE */
+/* Description: more than one uncorrectable ecc error in group 1 */
+#define SH_MD_DQLP_MMR_XUERR1_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_XUERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQLP_MMR_XUERR1_ARM */
+/* Description: writing 1 arms uncorrectable ecc error capture */
+#define SH_MD_DQLP_MMR_XUERR1_ARM_SHFT 38
+#define SH_MD_DQLP_MMR_XUERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XUERR2 0x0000000100033030
+#define SH_MD_DQLP_MMR_XUERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQLP_MMR_XUERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XUERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQLP_MMR_XUERR2_GRP2_SHFT 0
+#define SH_MD_DQLP_MMR_XUERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_XUERR2_VAL */
+/* Description: uncorrectable ecc error in group 2 bits */
+#define SH_MD_DQLP_MMR_XUERR2_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_XUERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_XUERR2_MORE */
+/* Description: more than one uncorrectable ecc error in group 2 */
+#define SH_MD_DQLP_MMR_XUERR2_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_XUERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XPERR 0x0000000100033040
+#define SH_MD_DQLP_MMR_XPERR_MASK 0x7fffffffffffffff
+#define SH_MD_DQLP_MMR_XPERR_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_DIR */
+/* Description: directory entry */
+#define SH_MD_DQLP_MMR_XPERR_DIR_SHFT 0
+#define SH_MD_DQLP_MMR_XPERR_DIR_MASK 0x0000000003ffffff
+
+/* SH_MD_DQLP_MMR_XPERR_CMD */
+/* Description: incoming command */
+#define SH_MD_DQLP_MMR_XPERR_CMD_SHFT 26
+#define SH_MD_DQLP_MMR_XPERR_CMD_MASK 0x00000003fc000000
+
+/* SH_MD_DQLP_MMR_XPERR_SRC */
+/* Description: source node of dir operation */
+#define SH_MD_DQLP_MMR_XPERR_SRC_SHFT 34
+#define SH_MD_DQLP_MMR_XPERR_SRC_MASK 0x0000fffc00000000
+
+/* SH_MD_DQLP_MMR_XPERR_PRIGE */
+/* Description: priority was greater-equal */
+#define SH_MD_DQLP_MMR_XPERR_PRIGE_SHFT 48
+#define SH_MD_DQLP_MMR_XPERR_PRIGE_MASK 0x0001000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_PRIV */
+/* Description: access privilege bit */
+#define SH_MD_DQLP_MMR_XPERR_PRIV_SHFT 49
+#define SH_MD_DQLP_MMR_XPERR_PRIV_MASK 0x0002000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQLP_MMR_XPERR_COR_SHFT 50
+#define SH_MD_DQLP_MMR_XPERR_COR_MASK 0x0004000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQLP_MMR_XPERR_UNC_SHFT 51
+#define SH_MD_DQLP_MMR_XPERR_UNC_MASK 0x0008000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQLP_MMR_XPERR_MYBIT_SHFT 52
+#define SH_MD_DQLP_MMR_XPERR_MYBIT_MASK 0x0ff0000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_VAL */
+/* Description: protocol error info valid */
+#define SH_MD_DQLP_MMR_XPERR_VAL_SHFT 60
+#define SH_MD_DQLP_MMR_XPERR_VAL_MASK 0x1000000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_MORE */
+/* Description: more than one protocol error */
+#define SH_MD_DQLP_MMR_XPERR_MORE_SHFT 61
+#define SH_MD_DQLP_MMR_XPERR_MORE_MASK 0x2000000000000000
+
+/* SH_MD_DQLP_MMR_XPERR_ARM */
+/* Description: writing 1 arms error capture */
+#define SH_MD_DQLP_MMR_XPERR_ARM_SHFT 62
+#define SH_MD_DQLP_MMR_XPERR_ARM_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YCERR1 0x0000000100033800
+#define SH_MD_DQLP_MMR_YCERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQLP_MMR_YCERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YCERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQLP_MMR_YCERR1_GRP1_SHFT 0
+#define SH_MD_DQLP_MMR_YCERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_YCERR1_VAL */
+/* Description: correctable ecc error in group 1 bits */
+#define SH_MD_DQLP_MMR_YCERR1_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_YCERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_YCERR1_MORE */
+/* Description: more than one correctable ecc error in group 1 */
+#define SH_MD_DQLP_MMR_YCERR1_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_YCERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQLP_MMR_YCERR1_ARM */
+/* Description: writing 1 arms correctable ecc error capture */
+#define SH_MD_DQLP_MMR_YCERR1_ARM_SHFT 38
+#define SH_MD_DQLP_MMR_YCERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YCERR2 0x0000000100033810
+#define SH_MD_DQLP_MMR_YCERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQLP_MMR_YCERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YCERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQLP_MMR_YCERR2_GRP2_SHFT 0
+#define SH_MD_DQLP_MMR_YCERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_YCERR2_VAL */
+/* Description: correctable ecc error in group 2 bits */
+#define SH_MD_DQLP_MMR_YCERR2_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_YCERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_YCERR2_MORE */
+/* Description: more than one correctable ecc error in group 2 */
+#define SH_MD_DQLP_MMR_YCERR2_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_YCERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YUERR1 0x0000000100033820
+#define SH_MD_DQLP_MMR_YUERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQLP_MMR_YUERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YUERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQLP_MMR_YUERR1_GRP1_SHFT 0
+#define SH_MD_DQLP_MMR_YUERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_YUERR1_VAL */
+/* Description: uncorrectable ecc error in group 1 bits */
+#define SH_MD_DQLP_MMR_YUERR1_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_YUERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_YUERR1_MORE */
+/* Description: more than one uncorrectable ecc error in group 1 */
+#define SH_MD_DQLP_MMR_YUERR1_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_YUERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQLP_MMR_YUERR1_ARM */
+/* Description: writing 1 arms uncorrectable ecc error capture */
+#define SH_MD_DQLP_MMR_YUERR1_ARM_SHFT 38
+#define SH_MD_DQLP_MMR_YUERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YUERR2 0x0000000100033830
+#define SH_MD_DQLP_MMR_YUERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQLP_MMR_YUERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YUERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQLP_MMR_YUERR2_GRP2_SHFT 0
+#define SH_MD_DQLP_MMR_YUERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQLP_MMR_YUERR2_VAL */
+/* Description: uncorrectable ecc error in group 2 bits */
+#define SH_MD_DQLP_MMR_YUERR2_VAL_SHFT 36
+#define SH_MD_DQLP_MMR_YUERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQLP_MMR_YUERR2_MORE */
+/* Description: more than one uncorrectable ecc error in group 2 */
+#define SH_MD_DQLP_MMR_YUERR2_MORE_SHFT 37
+#define SH_MD_DQLP_MMR_YUERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YPERR 0x0000000100033840
+#define SH_MD_DQLP_MMR_YPERR_MASK 0x7fffffffffffffff
+#define SH_MD_DQLP_MMR_YPERR_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_DIR */
+/* Description: directory entry */
+#define SH_MD_DQLP_MMR_YPERR_DIR_SHFT 0
+#define SH_MD_DQLP_MMR_YPERR_DIR_MASK 0x0000000003ffffff
+
+/* SH_MD_DQLP_MMR_YPERR_CMD */
+/* Description: incoming command */
+#define SH_MD_DQLP_MMR_YPERR_CMD_SHFT 26
+#define SH_MD_DQLP_MMR_YPERR_CMD_MASK 0x00000003fc000000
+
+/* SH_MD_DQLP_MMR_YPERR_SRC */
+/* Description: source node of dir operation */
+#define SH_MD_DQLP_MMR_YPERR_SRC_SHFT 34
+#define SH_MD_DQLP_MMR_YPERR_SRC_MASK 0x0000fffc00000000
+
+/* SH_MD_DQLP_MMR_YPERR_PRIGE */
+/* Description: priority was greater-equal */
+#define SH_MD_DQLP_MMR_YPERR_PRIGE_SHFT 48
+#define SH_MD_DQLP_MMR_YPERR_PRIGE_MASK 0x0001000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_PRIV */
+/* Description: access privilege bit */
+#define SH_MD_DQLP_MMR_YPERR_PRIV_SHFT 49
+#define SH_MD_DQLP_MMR_YPERR_PRIV_MASK 0x0002000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQLP_MMR_YPERR_COR_SHFT 50
+#define SH_MD_DQLP_MMR_YPERR_COR_MASK 0x0004000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQLP_MMR_YPERR_UNC_SHFT 51
+#define SH_MD_DQLP_MMR_YPERR_UNC_MASK 0x0008000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQLP_MMR_YPERR_MYBIT_SHFT 52
+#define SH_MD_DQLP_MMR_YPERR_MYBIT_MASK 0x0ff0000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_VAL */
+/* Description: protocol error info valid */
+#define SH_MD_DQLP_MMR_YPERR_VAL_SHFT 60
+#define SH_MD_DQLP_MMR_YPERR_VAL_MASK 0x1000000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_MORE */
+/* Description: more than one protocol error */
+#define SH_MD_DQLP_MMR_YPERR_MORE_SHFT 61
+#define SH_MD_DQLP_MMR_YPERR_MORE_MASK 0x2000000000000000
+
+/* SH_MD_DQLP_MMR_YPERR_ARM */
+/* Description: writing 1 arms error capture */
+#define SH_MD_DQLP_MMR_YPERR_ARM_SHFT 62
+#define SH_MD_DQLP_MMR_YPERR_ARM_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_CMDTRIG" */
+/* cmd triggers */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG 0x0000000100034000
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_MASK 0x00000000ffffffff
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0 */
+/* Description: command trigger 0 */
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD0_MASK 0x00000000000000ff
+
+/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1 */
+/* Description: command trigger 1 */
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1_SHFT 8
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD1_MASK 0x000000000000ff00
+
+/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2 */
+/* Description: command trigger 2 */
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2_SHFT 16
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD2_MASK 0x0000000000ff0000
+
+/* SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3 */
+/* Description: command trigger 3 */
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3_SHFT 24
+#define SH_MD_DQLP_MMR_DIR_CMDTRIG_CMD3_MASK 0x00000000ff000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TBLTRIG" */
+/* dir table trigger */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG 0x0000000100034010
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MASK 0x000003ffffffffff
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC */
+/* Description: source of request */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_SRC_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD */
+/* Description: incoming request */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_CMD_MASK 0x00000000003fc000
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC */
+/* Description: uncorrectable error, privilege bit */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC_SHFT 22
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_ACC_MASK 0x0000000000c00000
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE */
+/* Description: priority greater-equal */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE_SHFT 24
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_PRIGE_MASK 0x0000000001000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST */
+/* Description: shrd,sxro,sub-state */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST_SHFT 25
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_DIRST_MASK 0x00000003fe000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT_SHFT 34
+#define SH_MD_DQLP_MMR_DIR_TBLTRIG_MYBIT_MASK 0x000003fc00000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TBLMASK" */
+/* dir table trigger mask */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_DIR_TBLMASK 0x0000000100034020
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_MASK 0x000003ffffffffff
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_SRC */
+/* Description: source of request */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_SRC_SHFT 0
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_SRC_MASK 0x0000000000003fff
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_CMD */
+/* Description: incoming request */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_CMD_SHFT 14
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_CMD_MASK 0x00000000003fc000
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_ACC */
+/* Description: uncorrectable error, privilege bit */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_ACC_SHFT 22
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_ACC_MASK 0x0000000000c00000
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE */
+/* Description: priority greater-equal */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE_SHFT 24
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_PRIGE_MASK 0x0000000001000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST */
+/* Description: shrd,sxro,sub-state */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST_SHFT 25
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_DIRST_MASK 0x00000003fe000000
+
+/* SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT_SHFT 34
+#define SH_MD_DQLP_MMR_DIR_TBLMASK_MYBIT_MASK 0x000003fc00000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XBIST_H 0x0000000100038000
+#define SH_MD_DQLP_MMR_XBIST_H_MASK 0x00000700ffffffff
+#define SH_MD_DQLP_MMR_XBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_XBIST_H_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_XBIST_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_XBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLP_MMR_XBIST_H_INV_SHFT 40
+#define SH_MD_DQLP_MMR_XBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_XBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLP_MMR_XBIST_H_ROT_SHFT 41
+#define SH_MD_DQLP_MMR_XBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQLP_MMR_XBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQLP_MMR_XBIST_H_ARM_SHFT 42
+#define SH_MD_DQLP_MMR_XBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XBIST_L 0x0000000100038010
+#define SH_MD_DQLP_MMR_XBIST_L_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_XBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_XBIST_L_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_XBIST_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_XBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLP_MMR_XBIST_L_INV_SHFT 40
+#define SH_MD_DQLP_MMR_XBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_XBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLP_MMR_XBIST_L_ROT_SHFT 41
+#define SH_MD_DQLP_MMR_XBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XBIST_ERR_H 0x0000000100038020
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQLP_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_XBIST_ERR_L 0x0000000100038030
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_XBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQLP_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YBIST_H 0x0000000100038800
+#define SH_MD_DQLP_MMR_YBIST_H_MASK 0x00000700ffffffff
+#define SH_MD_DQLP_MMR_YBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_YBIST_H_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_YBIST_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_YBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLP_MMR_YBIST_H_INV_SHFT 40
+#define SH_MD_DQLP_MMR_YBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_YBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLP_MMR_YBIST_H_ROT_SHFT 41
+#define SH_MD_DQLP_MMR_YBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQLP_MMR_YBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQLP_MMR_YBIST_H_ARM_SHFT 42
+#define SH_MD_DQLP_MMR_YBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YBIST_L 0x0000000100038810
+#define SH_MD_DQLP_MMR_YBIST_L_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_YBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_YBIST_L_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_YBIST_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_YBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLP_MMR_YBIST_L_INV_SHFT 40
+#define SH_MD_DQLP_MMR_YBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_YBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLP_MMR_YBIST_L_ROT_SHFT 41
+#define SH_MD_DQLP_MMR_YBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YBIST_ERR_H 0x0000000100038820
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQLP_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLP_MMR_YBIST_ERR_L 0x0000000100038830
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_MASK 0x00000300ffffffff
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLP_MMR_YBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQLP_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_XBIST_H 0x0000000100048000
+#define SH_MD_DQLS_MMR_XBIST_H_MASK 0x000007ffffffffff
+#define SH_MD_DQLS_MMR_XBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_XBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_XBIST_H_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_XBIST_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_XBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLS_MMR_XBIST_H_INV_SHFT 40
+#define SH_MD_DQLS_MMR_XBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_XBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLS_MMR_XBIST_H_ROT_SHFT 41
+#define SH_MD_DQLS_MMR_XBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQLS_MMR_XBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQLS_MMR_XBIST_H_ARM_SHFT 42
+#define SH_MD_DQLS_MMR_XBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_XBIST_L 0x0000000100048010
+#define SH_MD_DQLS_MMR_XBIST_L_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_XBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_XBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_XBIST_L_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_XBIST_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_XBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLS_MMR_XBIST_L_INV_SHFT 40
+#define SH_MD_DQLS_MMR_XBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_XBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLS_MMR_XBIST_L_ROT_SHFT 41
+#define SH_MD_DQLS_MMR_XBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_XBIST_ERR_H 0x0000000100048020
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQLS_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_XBIST_ERR_L 0x0000000100048030
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_XBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQLS_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_YBIST_H 0x0000000100048800
+#define SH_MD_DQLS_MMR_YBIST_H_MASK 0x000007ffffffffff
+#define SH_MD_DQLS_MMR_YBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_YBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_YBIST_H_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_YBIST_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_YBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLS_MMR_YBIST_H_INV_SHFT 40
+#define SH_MD_DQLS_MMR_YBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_YBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLS_MMR_YBIST_H_ROT_SHFT 41
+#define SH_MD_DQLS_MMR_YBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQLS_MMR_YBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQLS_MMR_YBIST_H_ARM_SHFT 42
+#define SH_MD_DQLS_MMR_YBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_YBIST_L 0x0000000100048810
+#define SH_MD_DQLS_MMR_YBIST_L_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_YBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_YBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_YBIST_L_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_YBIST_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_YBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQLS_MMR_YBIST_L_INV_SHFT 40
+#define SH_MD_DQLS_MMR_YBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_YBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQLS_MMR_YBIST_L_ROT_SHFT 41
+#define SH_MD_DQLS_MMR_YBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_YBIST_ERR_H 0x0000000100048820
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQLS_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_YBIST_ERR_L 0x0000000100048830
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_MASK 0x000003ffffffffff
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQLS_MMR_YBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQLS_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_JNR_DEBUG" */
+/* joiner/fct debug configuration */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_JNR_DEBUG 0x0000000100049000
+#define SH_MD_DQLS_MMR_JNR_DEBUG_MASK 0x0000000000000003
+#define SH_MD_DQLS_MMR_JNR_DEBUG_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_JNR_DEBUG_PX */
+/* Description: select 0=pi 1=xn side */
+#define SH_MD_DQLS_MMR_JNR_DEBUG_PX_SHFT 0
+#define SH_MD_DQLS_MMR_JNR_DEBUG_PX_MASK 0x0000000000000001
+
+/* SH_MD_DQLS_MMR_JNR_DEBUG_RW */
+/* Description: select 0=read 1=write side */
+#define SH_MD_DQLS_MMR_JNR_DEBUG_RW_SHFT 1
+#define SH_MD_DQLS_MMR_JNR_DEBUG_RW_MASK 0x0000000000000002
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XAMOPW_ERR" */
+/* amo/partial rmw ecc error register */
+/* ==================================================================== */
+
+#define SH_MD_DQLS_MMR_XAMOPW_ERR 0x000000010004a000
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_MASK 0x0000000103ff03ff
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_INIT 0x0000000000000000
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN */
+/* Description: store data syndrome */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN_SHFT 0
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SSYN_MASK 0x00000000000000ff
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR */
+/* Description: correctable ecc error on store data */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR_SHFT 8
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SCOR_MASK 0x0000000000000100
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC */
+/* Description: uncorrectable ecc error on store data */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC_SHFT 9
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_SUNC_MASK 0x0000000000000200
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN */
+/* Description: memory read data syndrome */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN_SHFT 16
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RSYN_MASK 0x0000000000ff0000
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR */
+/* Description: correctable ecc error on read data */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR_SHFT 24
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RCOR_MASK 0x0000000001000000
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC */
+/* Description: uncorrectable ecc error on read data */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC_SHFT 25
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_RUNC_MASK 0x0000000002000000
+
+/* SH_MD_DQLS_MMR_XAMOPW_ERR_ARM */
+/* Description: writing 1 arms ecc error capture */
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_ARM_SHFT 32
+#define SH_MD_DQLS_MMR_XAMOPW_ERR_ARM_MASK 0x0000000100000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_CONFIG" */
+/* DQ directory config register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_CONFIG 0x0000000100050000
+#define SH_MD_DQRP_MMR_DIR_CONFIG_MASK 0x000000000000001f
+#define SH_MD_DQRP_MMR_DIR_CONFIG_INIT 0x0000000000000010
+
+/* SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE */
+/* Description: system size code */
+#define SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_CONFIG_SYS_SIZE_MASK 0x0000000000000007
+
+/* SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC */
+/* Description: enable directory ecc correction */
+#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC_SHFT 3
+#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRECC_MASK 0x0000000000000008
+
+/* SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS */
+/* Description: enable local poisoning for dir table fall-through */
+#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS_SHFT 4
+#define SH_MD_DQRP_MMR_DIR_CONFIG_EN_DIRPOIS_MASK 0x0000000000000010
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC0" */
+/* node [63:0] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRESVEC0 0x0000000100050100
+#define SH_MD_DQRP_MMR_DIR_PRESVEC0_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_PRESVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRESVEC0_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC1" */
+/* node [127:64] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRESVEC1 0x0000000100050110
+#define SH_MD_DQRP_MMR_DIR_PRESVEC1_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_PRESVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRESVEC1_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC2" */
+/* node [191:128] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRESVEC2 0x0000000100050120
+#define SH_MD_DQRP_MMR_DIR_PRESVEC2_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_PRESVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRESVEC2_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC3" */
+/* node [255:192] presence bits */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRESVEC3 0x0000000100050130
+#define SH_MD_DQRP_MMR_DIR_PRESVEC3_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_PRESVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC */
+/* Description: node presence bits, 1=present */
+#define SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRESVEC3_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC0" */
+/* local vector for acc=0 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC0 0x0000000100050200
+#define SH_MD_DQRP_MMR_DIR_LOCVEC0_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC0_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC1" */
+/* local vector for acc=1 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC1 0x0000000100050210
+#define SH_MD_DQRP_MMR_DIR_LOCVEC1_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC1_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC2" */
+/* local vector for acc=2 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC2 0x0000000100050220
+#define SH_MD_DQRP_MMR_DIR_LOCVEC2_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC2_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC3" */
+/* local vector for acc=3 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC3 0x0000000100050230
+#define SH_MD_DQRP_MMR_DIR_LOCVEC3_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC3_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC4" */
+/* local vector for acc=4 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC4 0x0000000100050240
+#define SH_MD_DQRP_MMR_DIR_LOCVEC4_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC4_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC4_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC5" */
+/* local vector for acc=5 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC5 0x0000000100050250
+#define SH_MD_DQRP_MMR_DIR_LOCVEC5_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC5_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC5_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC6" */
+/* local vector for acc=6 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC6 0x0000000100050260
+#define SH_MD_DQRP_MMR_DIR_LOCVEC6_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC6_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC6_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC7" */
+/* local vector for acc=7 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_LOCVEC7 0x0000000100050270
+#define SH_MD_DQRP_MMR_DIR_LOCVEC7_MASK 0xffffffffffffffff
+#define SH_MD_DQRP_MMR_DIR_LOCVEC7_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC */
+/* Description: 1=node is local */
+#define SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_LOCVEC7_VEC_MASK 0xffffffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC0" */
+/* privilege vector for acc=0 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0 0x0000000100050300
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC0_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC0_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC1" */
+/* privilege vector for acc=1 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1 0x0000000100050310
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC1_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC1_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC2" */
+/* privilege vector for acc=2 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2 0x0000000100050320
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC2_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC2_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC3" */
+/* privilege vector for acc=3 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3 0x0000000100050330
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC3_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC3_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC4" */
+/* privilege vector for acc=4 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4 0x0000000100050340
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC4_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC4_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC5" */
+/* privilege vector for acc=5 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5 0x0000000100050350
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC5_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC5_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC6" */
+/* privilege vector for acc=6 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6 0x0000000100050360
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC6_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC6_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC7" */
+/* privilege vector for acc=7 */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7 0x0000000100050370
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_MASK 0x000000000fffffff
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC7_IN */
+/* Description: in partition privileges, locvec bit=1 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_IN_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_IN_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT */
+/* Description: out of partition privileges, locvec bit=0 */
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_PRIVEC7_OUT_MASK 0x000000000fffc000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TIMER" */
+/* MD SXRO timer */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_TIMER 0x0000000100050400
+#define SH_MD_DQRP_MMR_DIR_TIMER_MASK 0x00000000003fffff
+#define SH_MD_DQRP_MMR_DIR_TIMER_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV */
+/* Description: timer divide register */
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_DIV_MASK 0x0000000000000fff
+
+/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN */
+/* Description: timer enable */
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN_SHFT 12
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_EN_MASK 0x0000000000001000
+
+/* SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR */
+/* Description: value of current timer */
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR_SHFT 13
+#define SH_MD_DQRP_MMR_DIR_TIMER_TIMER_CUR_MASK 0x00000000003fe000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY" */
+/* directory pio write data */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY 0x0000000100051000
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_MASK 0x03ffffffffffffff
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ECC" */
+/* directory ecc register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC 0x0000000100051010
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA */
+/* Description: XOR bits for directory ECC group 1 */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB */
+/* Description: XOR bits for directory ECC group 2 */
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQRP_MMR_PIOWD_DIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY" */
+/* x directory pio read data */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY 0x0000000100052000
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_MASK 0x0fffffffffffffff
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR_SHFT 58
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_COR_MASK 0x0400000000000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC_SHFT 59
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY_UNC_MASK 0x0800000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ECC" */
+/* x directory ecc */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC 0x0000000100052010
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA */
+/* Description: group 1 ecc */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB */
+/* Description: group 2 ecc */
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQRP_MMR_XPIORD_XDIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY" */
+/* y directory pio read data */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY 0x0000000100052800
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_MASK 0x0fffffffffffffff
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA */
+/* Description: directory entry A */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA_SHFT 0
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRA_MASK 0x0000000003ffffff
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB */
+/* Description: directory entry B */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB_SHFT 26
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_DIRB_MASK 0x000ffffffc000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI */
+/* Description: directory priority */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI_SHFT 52
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_PRI_MASK 0x0070000000000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC */
+/* Description: directory access bits */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC_SHFT 55
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_ACC_MASK 0x0380000000000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR_SHFT 58
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_COR_MASK 0x0400000000000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC_SHFT 59
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY_UNC_MASK 0x0800000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ECC" */
+/* y directory ecc */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC 0x0000000100052810
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_MASK 0x0000000000003fff
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA */
+/* Description: group 1 ecc */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA_SHFT 0
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCA_MASK 0x000000000000007f
+
+/* SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB */
+/* Description: group 2 ecc */
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB_SHFT 7
+#define SH_MD_DQRP_MMR_YPIORD_YDIR_ECC_ECCB_MASK 0x0000000000003f80
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XCERR1 0x0000000100053000
+#define SH_MD_DQRP_MMR_XCERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQRP_MMR_XCERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XCERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQRP_MMR_XCERR1_GRP1_SHFT 0
+#define SH_MD_DQRP_MMR_XCERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_XCERR1_VAL */
+/* Description: correctable ecc error in group 1 bits */
+#define SH_MD_DQRP_MMR_XCERR1_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_XCERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_XCERR1_MORE */
+/* Description: more than one correctable ecc error in group 1 */
+#define SH_MD_DQRP_MMR_XCERR1_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_XCERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQRP_MMR_XCERR1_ARM */
+/* Description: writing 1 arms correctable ecc error capture */
+#define SH_MD_DQRP_MMR_XCERR1_ARM_SHFT 38
+#define SH_MD_DQRP_MMR_XCERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XCERR2 0x0000000100053010
+#define SH_MD_DQRP_MMR_XCERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQRP_MMR_XCERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XCERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQRP_MMR_XCERR2_GRP2_SHFT 0
+#define SH_MD_DQRP_MMR_XCERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_XCERR2_VAL */
+/* Description: correctable ecc error in group 2 bits */
+#define SH_MD_DQRP_MMR_XCERR2_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_XCERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_XCERR2_MORE */
+/* Description: more than one correctable ecc error in group 2 */
+#define SH_MD_DQRP_MMR_XCERR2_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_XCERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XUERR1 0x0000000100053020
+#define SH_MD_DQRP_MMR_XUERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQRP_MMR_XUERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XUERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQRP_MMR_XUERR1_GRP1_SHFT 0
+#define SH_MD_DQRP_MMR_XUERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_XUERR1_VAL */
+/* Description: uncorrectable ecc error in group 1 bits */
+#define SH_MD_DQRP_MMR_XUERR1_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_XUERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_XUERR1_MORE */
+/* Description: more than one uncorrectable ecc error in group 1 */
+#define SH_MD_DQRP_MMR_XUERR1_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_XUERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQRP_MMR_XUERR1_ARM */
+/* Description: writing 1 arms uncorrectable ecc error capture */
+#define SH_MD_DQRP_MMR_XUERR1_ARM_SHFT 38
+#define SH_MD_DQRP_MMR_XUERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XUERR2 0x0000000100053030
+#define SH_MD_DQRP_MMR_XUERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQRP_MMR_XUERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XUERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQRP_MMR_XUERR2_GRP2_SHFT 0
+#define SH_MD_DQRP_MMR_XUERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_XUERR2_VAL */
+/* Description: uncorrectable ecc error in group 2 bits */
+#define SH_MD_DQRP_MMR_XUERR2_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_XUERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_XUERR2_MORE */
+/* Description: more than one uncorrectable ecc error in group 2 */
+#define SH_MD_DQRP_MMR_XUERR2_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_XUERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XPERR 0x0000000100053040
+#define SH_MD_DQRP_MMR_XPERR_MASK 0x7fffffffffffffff
+#define SH_MD_DQRP_MMR_XPERR_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_DIR */
+/* Description: directory entry */
+#define SH_MD_DQRP_MMR_XPERR_DIR_SHFT 0
+#define SH_MD_DQRP_MMR_XPERR_DIR_MASK 0x0000000003ffffff
+
+/* SH_MD_DQRP_MMR_XPERR_CMD */
+/* Description: incoming command */
+#define SH_MD_DQRP_MMR_XPERR_CMD_SHFT 26
+#define SH_MD_DQRP_MMR_XPERR_CMD_MASK 0x00000003fc000000
+
+/* SH_MD_DQRP_MMR_XPERR_SRC */
+/* Description: source node of dir operation */
+#define SH_MD_DQRP_MMR_XPERR_SRC_SHFT 34
+#define SH_MD_DQRP_MMR_XPERR_SRC_MASK 0x0000fffc00000000
+
+/* SH_MD_DQRP_MMR_XPERR_PRIGE */
+/* Description: priority was greater-equal */
+#define SH_MD_DQRP_MMR_XPERR_PRIGE_SHFT 48
+#define SH_MD_DQRP_MMR_XPERR_PRIGE_MASK 0x0001000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_PRIV */
+/* Description: access privilege bit */
+#define SH_MD_DQRP_MMR_XPERR_PRIV_SHFT 49
+#define SH_MD_DQRP_MMR_XPERR_PRIV_MASK 0x0002000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQRP_MMR_XPERR_COR_SHFT 50
+#define SH_MD_DQRP_MMR_XPERR_COR_MASK 0x0004000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQRP_MMR_XPERR_UNC_SHFT 51
+#define SH_MD_DQRP_MMR_XPERR_UNC_MASK 0x0008000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQRP_MMR_XPERR_MYBIT_SHFT 52
+#define SH_MD_DQRP_MMR_XPERR_MYBIT_MASK 0x0ff0000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_VAL */
+/* Description: protocol error info valid */
+#define SH_MD_DQRP_MMR_XPERR_VAL_SHFT 60
+#define SH_MD_DQRP_MMR_XPERR_VAL_MASK 0x1000000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_MORE */
+/* Description: more than one protocol error */
+#define SH_MD_DQRP_MMR_XPERR_MORE_SHFT 61
+#define SH_MD_DQRP_MMR_XPERR_MORE_MASK 0x2000000000000000
+
+/* SH_MD_DQRP_MMR_XPERR_ARM */
+/* Description: writing 1 arms error capture */
+#define SH_MD_DQRP_MMR_XPERR_ARM_SHFT 62
+#define SH_MD_DQRP_MMR_XPERR_ARM_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YCERR1 0x0000000100053800
+#define SH_MD_DQRP_MMR_YCERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQRP_MMR_YCERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YCERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQRP_MMR_YCERR1_GRP1_SHFT 0
+#define SH_MD_DQRP_MMR_YCERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_YCERR1_VAL */
+/* Description: correctable ecc error in group 1 bits */
+#define SH_MD_DQRP_MMR_YCERR1_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_YCERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_YCERR1_MORE */
+/* Description: more than one correctable ecc error in group 1 */
+#define SH_MD_DQRP_MMR_YCERR1_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_YCERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQRP_MMR_YCERR1_ARM */
+/* Description: writing 1 arms correctable ecc error capture */
+#define SH_MD_DQRP_MMR_YCERR1_ARM_SHFT 38
+#define SH_MD_DQRP_MMR_YCERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YCERR2 0x0000000100053810
+#define SH_MD_DQRP_MMR_YCERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQRP_MMR_YCERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YCERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQRP_MMR_YCERR2_GRP2_SHFT 0
+#define SH_MD_DQRP_MMR_YCERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_YCERR2_VAL */
+/* Description: correctable ecc error in group 2 bits */
+#define SH_MD_DQRP_MMR_YCERR2_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_YCERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_YCERR2_MORE */
+/* Description: more than one correctable ecc error in group 2 */
+#define SH_MD_DQRP_MMR_YCERR2_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_YCERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YUERR1 0x0000000100053820
+#define SH_MD_DQRP_MMR_YUERR1_MASK 0x0000007fffffffff
+#define SH_MD_DQRP_MMR_YUERR1_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YUERR1_GRP1 */
+/* Description: ecc group 1 bits */
+#define SH_MD_DQRP_MMR_YUERR1_GRP1_SHFT 0
+#define SH_MD_DQRP_MMR_YUERR1_GRP1_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_YUERR1_VAL */
+/* Description: uncorrectable ecc error in group 1 bits */
+#define SH_MD_DQRP_MMR_YUERR1_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_YUERR1_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_YUERR1_MORE */
+/* Description: more than one uncorrectable ecc error in group 1 */
+#define SH_MD_DQRP_MMR_YUERR1_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_YUERR1_MORE_MASK 0x0000002000000000
+
+/* SH_MD_DQRP_MMR_YUERR1_ARM */
+/* Description: writing 1 arms uncorrectable ecc error capture */
+#define SH_MD_DQRP_MMR_YUERR1_ARM_SHFT 38
+#define SH_MD_DQRP_MMR_YUERR1_ARM_MASK 0x0000004000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YUERR2 0x0000000100053830
+#define SH_MD_DQRP_MMR_YUERR2_MASK 0x0000003fffffffff
+#define SH_MD_DQRP_MMR_YUERR2_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YUERR2_GRP2 */
+/* Description: ecc group 2 bits */
+#define SH_MD_DQRP_MMR_YUERR2_GRP2_SHFT 0
+#define SH_MD_DQRP_MMR_YUERR2_GRP2_MASK 0x0000000fffffffff
+
+/* SH_MD_DQRP_MMR_YUERR2_VAL */
+/* Description: uncorrectable ecc error in group 2 bits */
+#define SH_MD_DQRP_MMR_YUERR2_VAL_SHFT 36
+#define SH_MD_DQRP_MMR_YUERR2_VAL_MASK 0x0000001000000000
+
+/* SH_MD_DQRP_MMR_YUERR2_MORE */
+/* Description: more than one uncorrectable ecc error in group 2 */
+#define SH_MD_DQRP_MMR_YUERR2_MORE_SHFT 37
+#define SH_MD_DQRP_MMR_YUERR2_MORE_MASK 0x0000002000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YPERR 0x0000000100053840
+#define SH_MD_DQRP_MMR_YPERR_MASK 0x7fffffffffffffff
+#define SH_MD_DQRP_MMR_YPERR_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_DIR */
+/* Description: directory entry */
+#define SH_MD_DQRP_MMR_YPERR_DIR_SHFT 0
+#define SH_MD_DQRP_MMR_YPERR_DIR_MASK 0x0000000003ffffff
+
+/* SH_MD_DQRP_MMR_YPERR_CMD */
+/* Description: incoming command */
+#define SH_MD_DQRP_MMR_YPERR_CMD_SHFT 26
+#define SH_MD_DQRP_MMR_YPERR_CMD_MASK 0x00000003fc000000
+
+/* SH_MD_DQRP_MMR_YPERR_SRC */
+/* Description: source node of dir operation */
+#define SH_MD_DQRP_MMR_YPERR_SRC_SHFT 34
+#define SH_MD_DQRP_MMR_YPERR_SRC_MASK 0x0000fffc00000000
+
+/* SH_MD_DQRP_MMR_YPERR_PRIGE */
+/* Description: priority was greater-equal */
+#define SH_MD_DQRP_MMR_YPERR_PRIGE_SHFT 48
+#define SH_MD_DQRP_MMR_YPERR_PRIGE_MASK 0x0001000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_PRIV */
+/* Description: access privilege bit */
+#define SH_MD_DQRP_MMR_YPERR_PRIV_SHFT 49
+#define SH_MD_DQRP_MMR_YPERR_PRIV_MASK 0x0002000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_COR */
+/* Description: correctable ecc error */
+#define SH_MD_DQRP_MMR_YPERR_COR_SHFT 50
+#define SH_MD_DQRP_MMR_YPERR_COR_MASK 0x0004000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_UNC */
+/* Description: uncorrectable ecc error */
+#define SH_MD_DQRP_MMR_YPERR_UNC_SHFT 51
+#define SH_MD_DQRP_MMR_YPERR_UNC_MASK 0x0008000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQRP_MMR_YPERR_MYBIT_SHFT 52
+#define SH_MD_DQRP_MMR_YPERR_MYBIT_MASK 0x0ff0000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_VAL */
+/* Description: protocol error info valid */
+#define SH_MD_DQRP_MMR_YPERR_VAL_SHFT 60
+#define SH_MD_DQRP_MMR_YPERR_VAL_MASK 0x1000000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_MORE */
+/* Description: more than one protocol error */
+#define SH_MD_DQRP_MMR_YPERR_MORE_SHFT 61
+#define SH_MD_DQRP_MMR_YPERR_MORE_MASK 0x2000000000000000
+
+/* SH_MD_DQRP_MMR_YPERR_ARM */
+/* Description: writing 1 arms error capture */
+#define SH_MD_DQRP_MMR_YPERR_ARM_SHFT 62
+#define SH_MD_DQRP_MMR_YPERR_ARM_MASK 0x4000000000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_CMDTRIG" */
+/* cmd triggers */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG 0x0000000100054000
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_MASK 0x00000000ffffffff
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0 */
+/* Description: command trigger 0 */
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD0_MASK 0x00000000000000ff
+
+/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1 */
+/* Description: command trigger 1 */
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1_SHFT 8
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD1_MASK 0x000000000000ff00
+
+/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2 */
+/* Description: command trigger 2 */
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2_SHFT 16
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD2_MASK 0x0000000000ff0000
+
+/* SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3 */
+/* Description: command trigger 3 */
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3_SHFT 24
+#define SH_MD_DQRP_MMR_DIR_CMDTRIG_CMD3_MASK 0x00000000ff000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TBLTRIG" */
+/* dir table trigger */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG 0x0000000100054010
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MASK 0x000003ffffffffff
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC */
+/* Description: source of request */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_SRC_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD */
+/* Description: incoming request */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_CMD_MASK 0x00000000003fc000
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC */
+/* Description: uncorrectable error, privilege bit */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC_SHFT 22
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_ACC_MASK 0x0000000000c00000
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE */
+/* Description: priority greater-equal */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE_SHFT 24
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_PRIGE_MASK 0x0000000001000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST */
+/* Description: shrd,sxro,sub-state */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST_SHFT 25
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_DIRST_MASK 0x00000003fe000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT_SHFT 34
+#define SH_MD_DQRP_MMR_DIR_TBLTRIG_MYBIT_MASK 0x000003fc00000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TBLMASK" */
+/* dir table trigger mask */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_DIR_TBLMASK 0x0000000100054020
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_MASK 0x000003ffffffffff
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_SRC */
+/* Description: source of request */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_SRC_SHFT 0
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_SRC_MASK 0x0000000000003fff
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_CMD */
+/* Description: incoming request */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_CMD_SHFT 14
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_CMD_MASK 0x00000000003fc000
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_ACC */
+/* Description: uncorrectable error, privilege bit */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_ACC_SHFT 22
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_ACC_MASK 0x0000000000c00000
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE */
+/* Description: priority greater-equal */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE_SHFT 24
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_PRIGE_MASK 0x0000000001000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST */
+/* Description: shrd,sxro,sub-state */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST_SHFT 25
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_DIRST_MASK 0x00000003fe000000
+
+/* SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT */
+/* Description: ptreq,timeq,timlast,timspec,onlyme,anytim,ptrii,src */
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT_SHFT 34
+#define SH_MD_DQRP_MMR_DIR_TBLMASK_MYBIT_MASK 0x000003fc00000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XBIST_H 0x0000000100058000
+#define SH_MD_DQRP_MMR_XBIST_H_MASK 0x00000700ffffffff
+#define SH_MD_DQRP_MMR_XBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_XBIST_H_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_XBIST_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_XBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRP_MMR_XBIST_H_INV_SHFT 40
+#define SH_MD_DQRP_MMR_XBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_XBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRP_MMR_XBIST_H_ROT_SHFT 41
+#define SH_MD_DQRP_MMR_XBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQRP_MMR_XBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQRP_MMR_XBIST_H_ARM_SHFT 42
+#define SH_MD_DQRP_MMR_XBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XBIST_L 0x0000000100058010
+#define SH_MD_DQRP_MMR_XBIST_L_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_XBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_XBIST_L_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_XBIST_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_XBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRP_MMR_XBIST_L_INV_SHFT 40
+#define SH_MD_DQRP_MMR_XBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_XBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRP_MMR_XBIST_L_ROT_SHFT 41
+#define SH_MD_DQRP_MMR_XBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XBIST_ERR_H 0x0000000100058020
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQRP_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_XBIST_ERR_L 0x0000000100058030
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_XBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQRP_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YBIST_H 0x0000000100058800
+#define SH_MD_DQRP_MMR_YBIST_H_MASK 0x00000700ffffffff
+#define SH_MD_DQRP_MMR_YBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_YBIST_H_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_YBIST_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_YBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRP_MMR_YBIST_H_INV_SHFT 40
+#define SH_MD_DQRP_MMR_YBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_YBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRP_MMR_YBIST_H_ROT_SHFT 41
+#define SH_MD_DQRP_MMR_YBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQRP_MMR_YBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQRP_MMR_YBIST_H_ARM_SHFT 42
+#define SH_MD_DQRP_MMR_YBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YBIST_L 0x0000000100058810
+#define SH_MD_DQRP_MMR_YBIST_L_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_YBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_YBIST_L_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_YBIST_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_YBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRP_MMR_YBIST_L_INV_SHFT 40
+#define SH_MD_DQRP_MMR_YBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_YBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRP_MMR_YBIST_L_ROT_SHFT 41
+#define SH_MD_DQRP_MMR_YBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YBIST_ERR_H 0x0000000100058820
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQRP_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRP_MMR_YBIST_ERR_L 0x0000000100058830
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_MASK 0x00000300ffffffff
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_PAT_MASK 0x00000000ffffffff
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRP_MMR_YBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQRP_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_XBIST_H 0x0000000100068000
+#define SH_MD_DQRS_MMR_XBIST_H_MASK 0x000007ffffffffff
+#define SH_MD_DQRS_MMR_XBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_XBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_XBIST_H_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_XBIST_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_XBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRS_MMR_XBIST_H_INV_SHFT 40
+#define SH_MD_DQRS_MMR_XBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_XBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRS_MMR_XBIST_H_ROT_SHFT 41
+#define SH_MD_DQRS_MMR_XBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQRS_MMR_XBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQRS_MMR_XBIST_H_ARM_SHFT 42
+#define SH_MD_DQRS_MMR_XBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_XBIST_L 0x0000000100068010
+#define SH_MD_DQRS_MMR_XBIST_L_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_XBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_XBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_XBIST_L_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_XBIST_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_XBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRS_MMR_XBIST_L_INV_SHFT 40
+#define SH_MD_DQRS_MMR_XBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_XBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRS_MMR_XBIST_L_ROT_SHFT 41
+#define SH_MD_DQRS_MMR_XBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_XBIST_ERR_H 0x0000000100068020
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQRS_MMR_XBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_XBIST_ERR_L 0x0000000100068030
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_XBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQRS_MMR_XBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_YBIST_H 0x0000000100068800
+#define SH_MD_DQRS_MMR_YBIST_H_MASK 0x000007ffffffffff
+#define SH_MD_DQRS_MMR_YBIST_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_YBIST_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_YBIST_H_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_YBIST_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_YBIST_H_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRS_MMR_YBIST_H_INV_SHFT 40
+#define SH_MD_DQRS_MMR_YBIST_H_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_YBIST_H_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRS_MMR_YBIST_H_ROT_SHFT 41
+#define SH_MD_DQRS_MMR_YBIST_H_ROT_MASK 0x0000020000000000
+
+/* SH_MD_DQRS_MMR_YBIST_H_ARM */
+/* Description: writing 1 arms data miscompare capture */
+#define SH_MD_DQRS_MMR_YBIST_H_ARM_SHFT 42
+#define SH_MD_DQRS_MMR_YBIST_H_ARM_MASK 0x0000040000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_YBIST_L 0x0000000100068810
+#define SH_MD_DQRS_MMR_YBIST_L_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_YBIST_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_YBIST_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_YBIST_L_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_YBIST_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_YBIST_L_INV */
+/* Description: invert data pattern in next cycle */
+#define SH_MD_DQRS_MMR_YBIST_L_INV_SHFT 40
+#define SH_MD_DQRS_MMR_YBIST_L_INV_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_YBIST_L_ROT */
+/* Description: rotate left data pattern in next cycle */
+#define SH_MD_DQRS_MMR_YBIST_L_ROT_SHFT 41
+#define SH_MD_DQRS_MMR_YBIST_L_ROT_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_YBIST_ERR_H 0x0000000100068820
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_H_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_H_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_VAL_SHFT 40
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_H_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_MORE_SHFT 41
+#define SH_MD_DQRS_MMR_YBIST_ERR_H_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_YBIST_ERR_L 0x0000000100068830
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_MASK 0x000003ffffffffff
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_L_PAT */
+/* Description: data pattern */
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_PAT_SHFT 0
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_PAT_MASK 0x000000ffffffffff
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_L_VAL */
+/* Description: bist data miscompare */
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_VAL_SHFT 40
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_VAL_MASK 0x0000010000000000
+
+/* SH_MD_DQRS_MMR_YBIST_ERR_L_MORE */
+/* Description: more than one bist data miscompare */
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_MORE_SHFT 41
+#define SH_MD_DQRS_MMR_YBIST_ERR_L_MORE_MASK 0x0000020000000000
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_JNR_DEBUG" */
+/* joiner/fct debug configuration */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_JNR_DEBUG 0x0000000100069000
+#define SH_MD_DQRS_MMR_JNR_DEBUG_MASK 0x0000000000000003
+#define SH_MD_DQRS_MMR_JNR_DEBUG_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_JNR_DEBUG_PX */
+/* Description: select 0=pi 1=xn side */
+#define SH_MD_DQRS_MMR_JNR_DEBUG_PX_SHFT 0
+#define SH_MD_DQRS_MMR_JNR_DEBUG_PX_MASK 0x0000000000000001
+
+/* SH_MD_DQRS_MMR_JNR_DEBUG_RW */
+/* Description: select 0=read 1=write side */
+#define SH_MD_DQRS_MMR_JNR_DEBUG_RW_SHFT 1
+#define SH_MD_DQRS_MMR_JNR_DEBUG_RW_MASK 0x0000000000000002
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YAMOPW_ERR" */
+/* amo/partial rmw ecc error register */
+/* ==================================================================== */
+
+#define SH_MD_DQRS_MMR_YAMOPW_ERR 0x000000010006a000
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_MASK 0x0000000103ff03ff
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_INIT 0x0000000000000000
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN */
+/* Description: store data syndrome */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN_SHFT 0
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SSYN_MASK 0x00000000000000ff
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR */
+/* Description: correctable ecc error on store data */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR_SHFT 8
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SCOR_MASK 0x0000000000000100
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC */
+/* Description: uncorrectable ecc error on store data */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC_SHFT 9
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_SUNC_MASK 0x0000000000000200
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN */
+/* Description: memory read data syndrome */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN_SHFT 16
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RSYN_MASK 0x0000000000ff0000
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR */
+/* Description: correctable ecc error on read data */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR_SHFT 24
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RCOR_MASK 0x0000000001000000
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC */
+/* Description: uncorrectable ecc error on read data */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC_SHFT 25
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_RUNC_MASK 0x0000000002000000
+
+/* SH_MD_DQRS_MMR_YAMOPW_ERR_ARM */
+/* Description: writing 1 arms ecc error capture */
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_ARM_SHFT 32
+#define SH_MD_DQRS_MMR_YAMOPW_ERR_ARM_MASK 0x0000000100000000
+
+
+#endif /* _ASM_IA64_SN_SN2_SHUB_MMR_H */
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SN2_SHUB_MMR_T_H
+#define _ASM_IA64_SN_SN2_SHUB_MMR_T_H
+
+#include <asm/sn/arch.h>
+
+/* ==================================================================== */
+/* Register "SH_FSB_BINIT_CONTROL" */
+/* FSB BINIT# Control */
+/* ==================================================================== */
+
+typedef union sh_fsb_binit_control_u {
+ mmr_t sh_fsb_binit_control_regval;
+ struct {
+ mmr_t binit : 1;
+ mmr_t reserved_0 : 63;
+ } sh_fsb_binit_control_s;
+} sh_fsb_binit_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FSB_RESET_CONTROL" */
+/* FSB Reset Control */
+/* ==================================================================== */
+
+typedef union sh_fsb_reset_control_u {
+ mmr_t sh_fsb_reset_control_regval;
+ struct {
+ mmr_t reset : 1;
+ mmr_t reserved_0 : 63;
+ } sh_fsb_reset_control_s;
+} sh_fsb_reset_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FSB_SYSTEM_AGENT_CONFIG" */
+/* FSB System Agent Configuration */
+/* ==================================================================== */
+
+typedef union sh_fsb_system_agent_config_u {
+ mmr_t sh_fsb_system_agent_config_regval;
+ struct {
+ mmr_t rcnt_scnt_en : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t berr_assert_en : 1;
+ mmr_t berr_sampling_en : 1;
+ mmr_t binit_assert_en : 1;
+ mmr_t bnr_throttling_en : 1;
+ mmr_t short_hang_en : 1;
+ mmr_t inta_rsp_data : 8;
+ mmr_t io_trans_rsp : 1;
+ mmr_t xtpr_trans_rsp : 1;
+ mmr_t inta_trans_rsp : 1;
+ mmr_t reserved_1 : 4;
+ mmr_t tdot : 1;
+ mmr_t serialize_fsb_en : 1;
+ mmr_t reserved_2 : 7;
+ mmr_t binit_event_enables : 14;
+ mmr_t reserved_3 : 18;
+ } sh_fsb_system_agent_config_s;
+} sh_fsb_system_agent_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FSB_VGA_REMAP" */
+/* FSB VGA Address Space Remap */
+/* ==================================================================== */
+
+typedef union sh_fsb_vga_remap_u {
+ mmr_t sh_fsb_vga_remap_regval;
+ struct {
+ mmr_t reserved_0 : 17;
+ mmr_t offset : 19;
+ mmr_t asid : 2;
+ mmr_t nid : 11;
+ mmr_t reserved_1 : 13;
+ mmr_t vga_remapping_enabled : 1;
+ mmr_t reserved_2 : 1;
+ } sh_fsb_vga_remap_s;
+} sh_fsb_vga_remap_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FSB_RESET_STATUS" */
+/* FSB Reset Status */
+/* ==================================================================== */
+
+typedef union sh_fsb_reset_status_u {
+ mmr_t sh_fsb_reset_status_regval;
+ struct {
+ mmr_t reset_in_progress : 1;
+ mmr_t reserved_0 : 63;
+ } sh_fsb_reset_status_s;
+} sh_fsb_reset_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FSB_SYMMETRIC_AGENT_STATUS" */
+/* FSB Symmetric Agent Status */
+/* ==================================================================== */
+
+typedef union sh_fsb_symmetric_agent_status_u {
+ mmr_t sh_fsb_symmetric_agent_status_regval;
+ struct {
+ mmr_t cpu_0_active : 1;
+ mmr_t cpu_1_active : 1;
+ mmr_t cpus_ready : 1;
+ mmr_t reserved_0 : 61;
+ } sh_fsb_symmetric_agent_status_s;
+} sh_fsb_symmetric_agent_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_CREDIT_COUNT_0" */
+/* Graphics-write Credit Count for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_credit_count_0_u {
+ mmr_t sh_gfx_credit_count_0_regval;
+ struct {
+ mmr_t count : 20;
+ mmr_t reserved_0 : 43;
+ mmr_t reset_gfx_state : 1;
+ } sh_gfx_credit_count_0_s;
+} sh_gfx_credit_count_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_CREDIT_COUNT_1" */
+/* Graphics-write Credit Count for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_credit_count_1_u {
+ mmr_t sh_gfx_credit_count_1_regval;
+ struct {
+ mmr_t count : 20;
+ mmr_t reserved_0 : 43;
+ mmr_t reset_gfx_state : 1;
+ } sh_gfx_credit_count_1_s;
+} sh_gfx_credit_count_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_MODE_CNTRL_0" */
+/* Graphics credit mode and message ordering for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_mode_cntrl_0_u {
+ mmr_t sh_gfx_mode_cntrl_0_regval;
+ struct {
+ mmr_t dword_credits : 1;
+ mmr_t mixed_mode_credits : 1;
+ mmr_t relaxed_ordering : 1;
+ mmr_t reserved_0 : 61;
+ } sh_gfx_mode_cntrl_0_s;
+} sh_gfx_mode_cntrl_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_MODE_CNTRL_1" */
+/* Graphics credit mode and message ordering for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_mode_cntrl_1_u {
+ mmr_t sh_gfx_mode_cntrl_1_regval;
+ struct {
+ mmr_t dword_credits : 1;
+ mmr_t mixed_mode_credits : 1;
+ mmr_t relaxed_ordering : 1;
+ mmr_t reserved_0 : 61;
+ } sh_gfx_mode_cntrl_1_s;
+} sh_gfx_mode_cntrl_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_SKID_CREDIT_COUNT_0" */
+/* Graphics-write Skid Credit Count for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_skid_credit_count_0_u {
+ mmr_t sh_gfx_skid_credit_count_0_regval;
+ struct {
+ mmr_t skid : 20;
+ mmr_t reserved_0 : 44;
+ } sh_gfx_skid_credit_count_0_s;
+} sh_gfx_skid_credit_count_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_SKID_CREDIT_COUNT_1" */
+/* Graphics-write Skid Credit Count for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_skid_credit_count_1_u {
+ mmr_t sh_gfx_skid_credit_count_1_regval;
+ struct {
+ mmr_t skid : 20;
+ mmr_t reserved_0 : 44;
+ } sh_gfx_skid_credit_count_1_s;
+} sh_gfx_skid_credit_count_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_LIMIT_0" */
+/* Graphics-write Stall Limit for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_stall_limit_0_u {
+ mmr_t sh_gfx_stall_limit_0_regval;
+ struct {
+ mmr_t limit : 26;
+ mmr_t reserved_0 : 38;
+ } sh_gfx_stall_limit_0_s;
+} sh_gfx_stall_limit_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_LIMIT_1" */
+/* Graphics-write Stall Limit for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_stall_limit_1_u {
+ mmr_t sh_gfx_stall_limit_1_regval;
+ struct {
+ mmr_t limit : 26;
+ mmr_t reserved_0 : 38;
+ } sh_gfx_stall_limit_1_s;
+} sh_gfx_stall_limit_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_TIMER_0" */
+/* Graphics-write Stall Timer for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_stall_timer_0_u {
+ mmr_t sh_gfx_stall_timer_0_regval;
+ struct {
+ mmr_t timer_value : 26;
+ mmr_t reserved_0 : 38;
+ } sh_gfx_stall_timer_0_s;
+} sh_gfx_stall_timer_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_STALL_TIMER_1" */
+/* Graphics-write Stall Timer for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_stall_timer_1_u {
+ mmr_t sh_gfx_stall_timer_1_regval;
+ struct {
+ mmr_t timer_value : 26;
+ mmr_t reserved_0 : 38;
+ } sh_gfx_stall_timer_1_s;
+} sh_gfx_stall_timer_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_WINDOW_0" */
+/* Graphics-write Window for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_window_0_u {
+ mmr_t sh_gfx_window_0_regval;
+ struct {
+ mmr_t reserved_0 : 24;
+ mmr_t base_addr : 12;
+ mmr_t reserved_1 : 27;
+ mmr_t gfx_window_en : 1;
+ } sh_gfx_window_0_s;
+} sh_gfx_window_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_WINDOW_1" */
+/* Graphics-write Window for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_window_1_u {
+ mmr_t sh_gfx_window_1_regval;
+ struct {
+ mmr_t reserved_0 : 24;
+ mmr_t base_addr : 12;
+ mmr_t reserved_1 : 27;
+ mmr_t gfx_window_en : 1;
+ } sh_gfx_window_1_s;
+} sh_gfx_window_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_0" */
+/* Graphics-write Interrupt Limit for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_interrupt_timer_limit_0_u {
+ mmr_t sh_gfx_interrupt_timer_limit_0_regval;
+ struct {
+ mmr_t interrupt_timer_limit : 8;
+ mmr_t reserved_0 : 56;
+ } sh_gfx_interrupt_timer_limit_0_s;
+} sh_gfx_interrupt_timer_limit_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_INTERRUPT_TIMER_LIMIT_1" */
+/* Graphics-write Interrupt Limit for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_interrupt_timer_limit_1_u {
+ mmr_t sh_gfx_interrupt_timer_limit_1_regval;
+ struct {
+ mmr_t interrupt_timer_limit : 8;
+ mmr_t reserved_0 : 56;
+ } sh_gfx_interrupt_timer_limit_1_s;
+} sh_gfx_interrupt_timer_limit_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_WRITE_STATUS_0" */
+/* Graphics Write Status for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_gfx_write_status_0_u {
+ mmr_t sh_gfx_write_status_0_regval;
+ struct {
+ mmr_t busy : 1;
+ mmr_t reserved_0 : 62;
+ mmr_t re_enable_gfx_stall : 1;
+ } sh_gfx_write_status_0_s;
+} sh_gfx_write_status_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GFX_WRITE_STATUS_1" */
+/* Graphics Write Status for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_gfx_write_status_1_u {
+ mmr_t sh_gfx_write_status_1_regval;
+ struct {
+ mmr_t busy : 1;
+ mmr_t reserved_0 : 62;
+ mmr_t re_enable_gfx_stall : 1;
+ } sh_gfx_write_status_1_s;
+} sh_gfx_write_status_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT0" */
+/* SHub II Interrupt 0 Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int0_u {
+ mmr_t sh_ii_int0_regval;
+ struct {
+ mmr_t idx : 8;
+ mmr_t send : 1;
+ mmr_t reserved_0 : 55;
+ } sh_ii_int0_s;
+} sh_ii_int0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT0_CONFIG" */
+/* SHub II Interrupt 0 Config Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int0_config_u {
+ mmr_t sh_ii_int0_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 14;
+ } sh_ii_int0_config_s;
+} sh_ii_int0_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT0_ENABLE" */
+/* SHub II Interrupt 0 Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int0_enable_u {
+ mmr_t sh_ii_int0_enable_regval;
+ struct {
+ mmr_t ii_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_ii_int0_enable_s;
+} sh_ii_int0_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT1" */
+/* SHub II Interrupt 1 Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int1_u {
+ mmr_t sh_ii_int1_regval;
+ struct {
+ mmr_t idx : 8;
+ mmr_t send : 1;
+ mmr_t reserved_0 : 55;
+ } sh_ii_int1_s;
+} sh_ii_int1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT1_CONFIG" */
+/* SHub II Interrupt 1 Config Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int1_config_u {
+ mmr_t sh_ii_int1_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 14;
+ } sh_ii_int1_config_s;
+} sh_ii_int1_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_INT1_ENABLE" */
+/* SHub II Interrupt 1 Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_ii_int1_enable_u {
+ mmr_t sh_ii_int1_enable_regval;
+ struct {
+ mmr_t ii_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_ii_int1_enable_s;
+} sh_ii_int1_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_NODE_ID_CONFIG" */
+/* SHub Interrupt Node ID Configuration */
+/* ==================================================================== */
+
+typedef union sh_int_node_id_config_u {
+ mmr_t sh_int_node_id_config_regval;
+ struct {
+ mmr_t node_id : 11;
+ mmr_t id_sel : 1;
+ mmr_t reserved_0 : 52;
+ } sh_int_node_id_config_s;
+} sh_int_node_id_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IPI_INT" */
+/* SHub Inter-Processor Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_ipi_int_u {
+ mmr_t sh_ipi_int_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 3;
+ mmr_t send : 1;
+ } sh_ipi_int_s;
+} sh_ipi_int_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IPI_INT_ENABLE" */
+/* SHub Inter-Processor Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_ipi_int_enable_u {
+ mmr_t sh_ipi_int_enable_regval;
+ struct {
+ mmr_t pio_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_ipi_int_enable_s;
+} sh_ipi_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT0_CONFIG" */
+/* SHub Local Interrupt 0 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int0_config_u {
+ mmr_t sh_local_int0_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int0_config_s;
+} sh_local_int0_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT0_ENABLE" */
+/* SHub Local Interrupt 0 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int0_enable_u {
+ mmr_t sh_local_int0_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int0_enable_s;
+} sh_local_int0_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT1_CONFIG" */
+/* SHub Local Interrupt 1 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int1_config_u {
+ mmr_t sh_local_int1_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int1_config_s;
+} sh_local_int1_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT1_ENABLE" */
+/* SHub Local Interrupt 1 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int1_enable_u {
+ mmr_t sh_local_int1_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int1_enable_s;
+} sh_local_int1_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT2_CONFIG" */
+/* SHub Local Interrupt 2 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int2_config_u {
+ mmr_t sh_local_int2_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int2_config_s;
+} sh_local_int2_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT2_ENABLE" */
+/* SHub Local Interrupt 2 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int2_enable_u {
+ mmr_t sh_local_int2_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int2_enable_s;
+} sh_local_int2_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT3_CONFIG" */
+/* SHub Local Interrupt 3 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int3_config_u {
+ mmr_t sh_local_int3_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int3_config_s;
+} sh_local_int3_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT3_ENABLE" */
+/* SHub Local Interrupt 3 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int3_enable_u {
+ mmr_t sh_local_int3_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int3_enable_s;
+} sh_local_int3_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT4_CONFIG" */
+/* SHub Local Interrupt 4 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int4_config_u {
+ mmr_t sh_local_int4_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int4_config_s;
+} sh_local_int4_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT4_ENABLE" */
+/* SHub Local Interrupt 4 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int4_enable_u {
+ mmr_t sh_local_int4_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int4_enable_s;
+} sh_local_int4_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT5_CONFIG" */
+/* SHub Local Interrupt 5 Registers */
+/* ==================================================================== */
+
+typedef union sh_local_int5_config_u {
+ mmr_t sh_local_int5_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_local_int5_config_s;
+} sh_local_int5_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LOCAL_INT5_ENABLE" */
+/* SHub Local Interrupt 5 Enable */
+/* ==================================================================== */
+
+typedef union sh_local_int5_enable_u {
+ mmr_t sh_local_int5_enable_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t reserved_1 : 48;
+ } sh_local_int5_enable_s;
+} sh_local_int5_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_INT_CONFIG" */
+/* SHub Processor 0 Error Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc0_err_int_config_u {
+ mmr_t sh_proc0_err_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc0_err_int_config_s;
+} sh_proc0_err_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_INT_CONFIG" */
+/* SHub Processor 1 Error Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc1_err_int_config_u {
+ mmr_t sh_proc1_err_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc1_err_int_config_s;
+} sh_proc1_err_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_INT_CONFIG" */
+/* SHub Processor 2 Error Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc2_err_int_config_u {
+ mmr_t sh_proc2_err_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc2_err_int_config_s;
+} sh_proc2_err_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_INT_CONFIG" */
+/* SHub Processor 3 Error Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc3_err_int_config_u {
+ mmr_t sh_proc3_err_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc3_err_int_config_s;
+} sh_proc3_err_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_INT_CONFIG" */
+/* SHub Processor 0 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc0_adv_int_config_u {
+ mmr_t sh_proc0_adv_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc0_adv_int_config_s;
+} sh_proc0_adv_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_INT_CONFIG" */
+/* SHub Processor 1 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc1_adv_int_config_u {
+ mmr_t sh_proc1_adv_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc1_adv_int_config_s;
+} sh_proc1_adv_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_INT_CONFIG" */
+/* SHub Processor 2 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc2_adv_int_config_u {
+ mmr_t sh_proc2_adv_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc2_adv_int_config_s;
+} sh_proc2_adv_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_INT_CONFIG" */
+/* SHub Processor 3 Advisory Interrupt Registers */
+/* ==================================================================== */
+
+typedef union sh_proc3_adv_int_config_u {
+ mmr_t sh_proc3_adv_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_proc3_adv_int_config_s;
+} sh_proc3_adv_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_INT_ENABLE" */
+/* SHub Processor 0 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc0_err_int_enable_u {
+ mmr_t sh_proc0_err_int_enable_regval;
+ struct {
+ mmr_t proc0_err_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc0_err_int_enable_s;
+} sh_proc0_err_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_INT_ENABLE" */
+/* SHub Processor 1 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc1_err_int_enable_u {
+ mmr_t sh_proc1_err_int_enable_regval;
+ struct {
+ mmr_t proc1_err_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc1_err_int_enable_s;
+} sh_proc1_err_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_INT_ENABLE" */
+/* SHub Processor 2 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc2_err_int_enable_u {
+ mmr_t sh_proc2_err_int_enable_regval;
+ struct {
+ mmr_t proc2_err_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc2_err_int_enable_s;
+} sh_proc2_err_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_INT_ENABLE" */
+/* SHub Processor 3 Error Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc3_err_int_enable_u {
+ mmr_t sh_proc3_err_int_enable_regval;
+ struct {
+ mmr_t proc3_err_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc3_err_int_enable_s;
+} sh_proc3_err_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_INT_ENABLE" */
+/* SHub Processor 0 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc0_adv_int_enable_u {
+ mmr_t sh_proc0_adv_int_enable_regval;
+ struct {
+ mmr_t proc0_adv_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc0_adv_int_enable_s;
+} sh_proc0_adv_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_INT_ENABLE" */
+/* SHub Processor 1 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc1_adv_int_enable_u {
+ mmr_t sh_proc1_adv_int_enable_regval;
+ struct {
+ mmr_t proc1_adv_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc1_adv_int_enable_s;
+} sh_proc1_adv_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_INT_ENABLE" */
+/* SHub Processor 2 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc2_adv_int_enable_u {
+ mmr_t sh_proc2_adv_int_enable_regval;
+ struct {
+ mmr_t proc2_adv_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc2_adv_int_enable_s;
+} sh_proc2_adv_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_INT_ENABLE" */
+/* SHub Processor 3 Advisory Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_proc3_adv_int_enable_u {
+ mmr_t sh_proc3_adv_int_enable_regval;
+ struct {
+ mmr_t proc3_adv_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_proc3_adv_int_enable_s;
+} sh_proc3_adv_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_INT_CONFIG" */
+/* SHub Profile Interrupt Configuration Registers */
+/* ==================================================================== */
+
+typedef union sh_profile_int_config_u {
+ mmr_t sh_profile_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_profile_int_config_s;
+} sh_profile_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_INT_ENABLE" */
+/* SHub Profile Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_profile_int_enable_u {
+ mmr_t sh_profile_int_enable_regval;
+ struct {
+ mmr_t profile_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_profile_int_enable_s;
+} sh_profile_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC0_INT_CONFIG" */
+/* SHub RTC 0 Interrupt Config Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc0_int_config_u {
+ mmr_t sh_rtc0_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_rtc0_int_config_s;
+} sh_rtc0_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC0_INT_ENABLE" */
+/* SHub RTC 0 Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc0_int_enable_u {
+ mmr_t sh_rtc0_int_enable_regval;
+ struct {
+ mmr_t rtc0_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_rtc0_int_enable_s;
+} sh_rtc0_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_CONFIG" */
+/* SHub RTC 1 Interrupt Config Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc1_int_config_u {
+ mmr_t sh_rtc1_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_rtc1_int_config_s;
+} sh_rtc1_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_ENABLE" */
+/* SHub RTC 1 Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc1_int_enable_u {
+ mmr_t sh_rtc1_int_enable_regval;
+ struct {
+ mmr_t rtc1_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_rtc1_int_enable_s;
+} sh_rtc1_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_CONFIG" */
+/* SHub RTC 2 Interrupt Config Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc2_int_config_u {
+ mmr_t sh_rtc2_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_rtc2_int_config_s;
+} sh_rtc2_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_ENABLE" */
+/* SHub RTC 2 Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc2_int_enable_u {
+ mmr_t sh_rtc2_int_enable_regval;
+ struct {
+ mmr_t rtc2_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_rtc2_int_enable_s;
+} sh_rtc2_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_CONFIG" */
+/* SHub RTC 3 Interrupt Config Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc3_int_config_u {
+ mmr_t sh_rtc3_int_config_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t agt : 1;
+ mmr_t pid : 16;
+ mmr_t reserved_0 : 1;
+ mmr_t base : 29;
+ mmr_t reserved_1 : 2;
+ mmr_t idx : 8;
+ mmr_t reserved_2 : 4;
+ } sh_rtc3_int_config_s;
+} sh_rtc3_int_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_ENABLE" */
+/* SHub RTC 3 Interrupt Enable Registers */
+/* ==================================================================== */
+
+typedef union sh_rtc3_int_enable_u {
+ mmr_t sh_rtc3_int_enable_regval;
+ struct {
+ mmr_t rtc3_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_rtc3_int_enable_s;
+} sh_rtc3_int_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OCCURRED" */
+/* SHub Interrupt Event Occurred */
+/* ==================================================================== */
+
+typedef union sh_event_occurred_u {
+ mmr_t sh_event_occurred_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t proc0_adv_int : 1;
+ mmr_t proc1_adv_int : 1;
+ mmr_t proc2_adv_int : 1;
+ mmr_t proc3_adv_int : 1;
+ mmr_t proc0_err_int : 1;
+ mmr_t proc1_err_int : 1;
+ mmr_t proc2_err_int : 1;
+ mmr_t proc3_err_int : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t rtc0_int : 1;
+ mmr_t rtc1_int : 1;
+ mmr_t rtc2_int : 1;
+ mmr_t rtc3_int : 1;
+ mmr_t profile_int : 1;
+ mmr_t ipi_int : 1;
+ mmr_t ii_int0 : 1;
+ mmr_t ii_int1 : 1;
+ mmr_t reserved_0 : 33;
+ } sh_event_occurred_s;
+} sh_event_occurred_u_t;
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OVERFLOW" */
+/* SHub Interrupt Event Occurred Overflow */
+/* ==================================================================== */
+
+typedef union sh_event_overflow_u {
+ mmr_t sh_event_overflow_regval;
+ struct {
+ mmr_t pi_hw_int : 1;
+ mmr_t md_hw_int : 1;
+ mmr_t xn_hw_int : 1;
+ mmr_t lb_hw_int : 1;
+ mmr_t ii_hw_int : 1;
+ mmr_t pi_ce_int : 1;
+ mmr_t md_ce_int : 1;
+ mmr_t xn_ce_int : 1;
+ mmr_t pi_uce_int : 1;
+ mmr_t md_uce_int : 1;
+ mmr_t xn_uce_int : 1;
+ mmr_t proc0_adv_int : 1;
+ mmr_t proc1_adv_int : 1;
+ mmr_t proc2_adv_int : 1;
+ mmr_t proc3_adv_int : 1;
+ mmr_t proc0_err_int : 1;
+ mmr_t proc1_err_int : 1;
+ mmr_t proc2_err_int : 1;
+ mmr_t proc3_err_int : 1;
+ mmr_t system_shutdown_int : 1;
+ mmr_t uart_int : 1;
+ mmr_t l1_nmi_int : 1;
+ mmr_t stop_clock : 1;
+ mmr_t rtc0_int : 1;
+ mmr_t rtc1_int : 1;
+ mmr_t rtc2_int : 1;
+ mmr_t rtc3_int : 1;
+ mmr_t profile_int : 1;
+ mmr_t reserved_0 : 36;
+ } sh_event_overflow_s;
+} sh_event_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_TIME" */
+/* Junk Bus Timing */
+/* ==================================================================== */
+
+typedef union sh_junk_bus_time_u {
+ mmr_t sh_junk_bus_time_regval;
+ struct {
+ mmr_t fprom_setup_hold : 8;
+ mmr_t fprom_enable : 8;
+ mmr_t uart_setup_hold : 8;
+ mmr_t uart_enable : 8;
+ mmr_t reserved_0 : 32;
+ } sh_junk_bus_time_s;
+} sh_junk_bus_time_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_LATCH_TIME" */
+/* Junk Bus Latch Timing */
+/* ==================================================================== */
+
+typedef union sh_junk_latch_time_u {
+ mmr_t sh_junk_latch_time_regval;
+ struct {
+ mmr_t setup_hold : 3;
+ mmr_t reserved_0 : 61;
+ } sh_junk_latch_time_s;
+} sh_junk_latch_time_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_NACK_RESET" */
+/* Junk Bus Nack Counter Reset */
+/* ==================================================================== */
+
+typedef union sh_junk_nack_reset_u {
+ mmr_t sh_junk_nack_reset_regval;
+ struct {
+ mmr_t pulse : 1;
+ mmr_t reserved_0 : 63;
+ } sh_junk_nack_reset_s;
+} sh_junk_nack_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED0" */
+/* Junk Bus LED0 */
+/* ==================================================================== */
+
+typedef union sh_junk_bus_led0_u {
+ mmr_t sh_junk_bus_led0_regval;
+ struct {
+ mmr_t led0_data : 8;
+ mmr_t reserved_0 : 56;
+ } sh_junk_bus_led0_s;
+} sh_junk_bus_led0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED1" */
+/* Junk Bus LED1 */
+/* ==================================================================== */
+
+typedef union sh_junk_bus_led1_u {
+ mmr_t sh_junk_bus_led1_regval;
+ struct {
+ mmr_t led1_data : 8;
+ mmr_t reserved_0 : 56;
+ } sh_junk_bus_led1_s;
+} sh_junk_bus_led1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED2" */
+/* Junk Bus LED2 */
+/* ==================================================================== */
+
+typedef union sh_junk_bus_led2_u {
+ mmr_t sh_junk_bus_led2_regval;
+ struct {
+ mmr_t led2_data : 8;
+ mmr_t reserved_0 : 56;
+ } sh_junk_bus_led2_s;
+} sh_junk_bus_led2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_BUS_LED3" */
+/* Junk Bus LED3 */
+/* ==================================================================== */
+
+typedef union sh_junk_bus_led3_u {
+ mmr_t sh_junk_bus_led3_regval;
+ struct {
+ mmr_t led3_data : 8;
+ mmr_t reserved_0 : 56;
+ } sh_junk_bus_led3_s;
+} sh_junk_bus_led3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JUNK_ERROR_STATUS" */
+/* Junk Bus Error Status */
+/* ==================================================================== */
+
+typedef union sh_junk_error_status_u {
+ mmr_t sh_junk_error_status_regval;
+ struct {
+ mmr_t address : 47;
+ mmr_t reserved_0 : 1;
+ mmr_t cmd : 8;
+ mmr_t mode : 1;
+ mmr_t status : 4;
+ mmr_t reserved_1 : 3;
+ } sh_junk_error_status_s;
+} sh_junk_error_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_STAT" */
+/* This register describes the LLP status. */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_stat_u {
+ mmr_t sh_ni0_llp_stat_regval;
+ struct {
+ mmr_t link_reset_state : 4;
+ mmr_t reserved_0 : 60;
+ } sh_ni0_llp_stat_s;
+} sh_ni0_llp_stat_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_RESET" */
+/* Writing issues a reset to the network interface */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_reset_u {
+ mmr_t sh_ni0_llp_reset_regval;
+ struct {
+ mmr_t link : 1;
+ mmr_t warm : 1;
+ mmr_t reserved_0 : 62;
+ } sh_ni0_llp_reset_s;
+} sh_ni0_llp_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_RESET_EN" */
+/* Controls LLP warm reset propagation */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_reset_en_u {
+ mmr_t sh_ni0_llp_reset_en_regval;
+ struct {
+ mmr_t ok : 1;
+ mmr_t reserved_0 : 63;
+ } sh_ni0_llp_reset_en_s;
+} sh_ni0_llp_reset_en_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CHAN_MODE" */
+/* Sets the signaling mode of LLP and channel */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_chan_mode_u {
+ mmr_t sh_ni0_llp_chan_mode_regval;
+ struct {
+ mmr_t bitmode32 : 1;
+ mmr_t ac_encode : 1;
+ mmr_t enable_tuning : 1;
+ mmr_t enable_rmt_ft_upd : 1;
+ mmr_t enable_clkquad : 1;
+ mmr_t reserved_0 : 59;
+ } sh_ni0_llp_chan_mode_s;
+} sh_ni0_llp_chan_mode_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CONFIG" */
+/* Sets the configuration of LLP and channel */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_config_u {
+ mmr_t sh_ni0_llp_config_regval;
+ struct {
+ mmr_t maxburst : 10;
+ mmr_t maxretry : 10;
+ mmr_t nulltimeout : 6;
+ mmr_t ftu_time : 12;
+ mmr_t reserved_0 : 26;
+ } sh_ni0_llp_config_s;
+} sh_ni0_llp_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_TEST_CTL" */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_test_ctl_u {
+ mmr_t sh_ni0_llp_test_ctl_regval;
+ struct {
+ mmr_t pattern : 40;
+ mmr_t send_test_mode : 2;
+ mmr_t reserved_0 : 2;
+ mmr_t wire_sel : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t lfsr_mode : 2;
+ mmr_t noise_mode : 2;
+ mmr_t armcapture : 1;
+ mmr_t capturecbonly : 1;
+ mmr_t sendcberror : 1;
+ mmr_t sendsnerror : 1;
+ mmr_t fakesnerror : 1;
+ mmr_t captured : 1;
+ mmr_t cberror : 1;
+ mmr_t reserved_2 : 1;
+ } sh_ni0_llp_test_ctl_s;
+} sh_ni0_llp_test_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_WD1" */
+/* Low-order 64-bit captured word */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_capt_wd1_u {
+ mmr_t sh_ni0_llp_capt_wd1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_ni0_llp_capt_wd1_s;
+} sh_ni0_llp_capt_wd1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_WD2" */
+/* High-order 64-bit captured word */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_capt_wd2_u {
+ mmr_t sh_ni0_llp_capt_wd2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_ni0_llp_capt_wd2_s;
+} sh_ni0_llp_capt_wd2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_CAPT_SBCB" */
+/* Captured sideband, sequence, and CRC */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_capt_sbcb_u {
+ mmr_t sh_ni0_llp_capt_sbcb_regval;
+ struct {
+ mmr_t capturedrcvsbsn : 16;
+ mmr_t capturedrcvcrc : 16;
+ mmr_t sentallcberrors : 1;
+ mmr_t sentallsnerrors : 1;
+ mmr_t fakedallsnerrors : 1;
+ mmr_t chargeoverflow : 1;
+ mmr_t chargeunderflow : 1;
+ mmr_t reserved_0 : 27;
+ } sh_ni0_llp_capt_sbcb_s;
+} sh_ni0_llp_capt_sbcb_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LLP_ERR" */
+/* ==================================================================== */
+
+typedef union sh_ni0_llp_err_u {
+ mmr_t sh_ni0_llp_err_regval;
+ struct {
+ mmr_t rx_sn_err_count : 8;
+ mmr_t rx_cb_err_count : 8;
+ mmr_t retry_count : 8;
+ mmr_t retry_timeout : 1;
+ mmr_t rcv_link_reset : 1;
+ mmr_t squash : 1;
+ mmr_t power_not_ok : 1;
+ mmr_t wire_cnt : 24;
+ mmr_t wire_overflow : 1;
+ mmr_t reserved_0 : 11;
+ } sh_ni0_llp_err_s;
+} sh_ni0_llp_err_u_t;
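The SH_NI0_LLP_ERR union above packs several receive-side error counters into a single 64-bit MMR word. As an illustration only (not part of this generated header), the sketch below shows how a raw register value decodes through such a union; it assumes `mmr_t` is a 64-bit unsigned type and the LSB-first bit-field allocation gcc uses on little-endian ia64:

```c
#include <stdint.h>

typedef uint64_t mmr_t;        /* assumption: mmr_t is a 64-bit MMR word */

/* Local copy of the SH_NI0_LLP_ERR layout defined above. */
typedef union sh_ni0_llp_err_u {
	mmr_t sh_ni0_llp_err_regval;
	struct {
		mmr_t rx_sn_err_count : 8;   /* bits  7:0  */
		mmr_t rx_cb_err_count : 8;   /* bits 15:8  */
		mmr_t retry_count     : 8;   /* bits 23:16 */
		mmr_t retry_timeout   : 1;
		mmr_t rcv_link_reset  : 1;
		mmr_t squash          : 1;
		mmr_t power_not_ok    : 1;
		mmr_t wire_cnt        : 24;
		mmr_t wire_overflow   : 1;
		mmr_t reserved_0      : 11;
	} sh_ni0_llp_err_s;
} sh_ni0_llp_err_u_t;

/* Decode the sequence-number error count from a raw register value. */
static unsigned ni0_llp_rx_sn_errs(mmr_t regval)
{
	sh_ni0_llp_err_u_t err;

	err.sh_ni0_llp_err_regval = regval;
	return (unsigned)err.sh_ni0_llp_err_s.rx_sn_err_count;
}

/* Decode the checkbit error count the same way. */
static unsigned ni0_llp_rx_cb_errs(mmr_t regval)
{
	sh_ni0_llp_err_u_t err;

	err.sh_ni0_llp_err_regval = regval;
	return (unsigned)err.sh_ni0_llp_err_s.rx_cb_err_count;
}
```

Note that bit-field allocation order is implementation-defined in C; the union trick works here only because every consumer of this header is built with the same compiler and target byte order.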
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_STAT" */
+/* This register describes the LLP status. */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_stat_u {
+ mmr_t sh_ni1_llp_stat_regval;
+ struct {
+ mmr_t link_reset_state : 4;
+ mmr_t reserved_0 : 60;
+ } sh_ni1_llp_stat_s;
+} sh_ni1_llp_stat_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_RESET" */
+/* Writing issues a reset to the network interface */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_reset_u {
+ mmr_t sh_ni1_llp_reset_regval;
+ struct {
+ mmr_t link : 1;
+ mmr_t warm : 1;
+ mmr_t reserved_0 : 62;
+ } sh_ni1_llp_reset_s;
+} sh_ni1_llp_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_RESET_EN" */
+/* Controls LLP warm reset propagation */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_reset_en_u {
+ mmr_t sh_ni1_llp_reset_en_regval;
+ struct {
+ mmr_t ok : 1;
+ mmr_t reserved_0 : 63;
+ } sh_ni1_llp_reset_en_s;
+} sh_ni1_llp_reset_en_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CHAN_MODE" */
+/* Sets the signaling mode of LLP and channel */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_chan_mode_u {
+ mmr_t sh_ni1_llp_chan_mode_regval;
+ struct {
+ mmr_t bitmode32 : 1;
+ mmr_t ac_encode : 1;
+ mmr_t enable_tuning : 1;
+ mmr_t enable_rmt_ft_upd : 1;
+ mmr_t enable_clkquad : 1;
+ mmr_t reserved_0 : 59;
+ } sh_ni1_llp_chan_mode_s;
+} sh_ni1_llp_chan_mode_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CONFIG" */
+/* Sets the configuration of LLP and channel */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_config_u {
+ mmr_t sh_ni1_llp_config_regval;
+ struct {
+ mmr_t maxburst : 10;
+ mmr_t maxretry : 10;
+ mmr_t nulltimeout : 6;
+ mmr_t ftu_time : 12;
+ mmr_t reserved_0 : 26;
+ } sh_ni1_llp_config_s;
+} sh_ni1_llp_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_TEST_CTL" */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_test_ctl_u {
+ mmr_t sh_ni1_llp_test_ctl_regval;
+ struct {
+ mmr_t pattern : 40;
+ mmr_t send_test_mode : 2;
+ mmr_t reserved_0 : 2;
+ mmr_t wire_sel : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t lfsr_mode : 2;
+ mmr_t noise_mode : 2;
+ mmr_t armcapture : 1;
+ mmr_t capturecbonly : 1;
+ mmr_t sendcberror : 1;
+ mmr_t sendsnerror : 1;
+ mmr_t fakesnerror : 1;
+ mmr_t captured : 1;
+ mmr_t cberror : 1;
+ mmr_t reserved_2 : 1;
+ } sh_ni1_llp_test_ctl_s;
+} sh_ni1_llp_test_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_WD1" */
+/* low order 64-bit captured word */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_capt_wd1_u {
+ mmr_t sh_ni1_llp_capt_wd1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_ni1_llp_capt_wd1_s;
+} sh_ni1_llp_capt_wd1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_WD2" */
+/* high order 64-bit captured word */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_capt_wd2_u {
+ mmr_t sh_ni1_llp_capt_wd2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_ni1_llp_capt_wd2_s;
+} sh_ni1_llp_capt_wd2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_CAPT_SBCB" */
+/* captured sideband, sequence, and CRC */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_capt_sbcb_u {
+ mmr_t sh_ni1_llp_capt_sbcb_regval;
+ struct {
+ mmr_t capturedrcvsbsn : 16;
+ mmr_t capturedrcvcrc : 16;
+ mmr_t sentallcberrors : 1;
+ mmr_t sentallsnerrors : 1;
+ mmr_t fakedallsnerrors : 1;
+ mmr_t chargeoverflow : 1;
+ mmr_t chargeunderflow : 1;
+ mmr_t reserved_0 : 27;
+ } sh_ni1_llp_capt_sbcb_s;
+} sh_ni1_llp_capt_sbcb_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LLP_ERR" */
+/* ==================================================================== */
+
+typedef union sh_ni1_llp_err_u {
+ mmr_t sh_ni1_llp_err_regval;
+ struct {
+ mmr_t rx_sn_err_count : 8;
+ mmr_t rx_cb_err_count : 8;
+ mmr_t retry_count : 8;
+ mmr_t retry_timeout : 1;
+ mmr_t rcv_link_reset : 1;
+ mmr_t squash : 1;
+ mmr_t power_not_ok : 1;
+ mmr_t wire_cnt : 24;
+ mmr_t wire_overflow : 1;
+ mmr_t reserved_0 : 11;
+ } sh_ni1_llp_err_s;
+} sh_ni1_llp_err_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_TO_FIFO02_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_llp_to_fifo02_flow_u {
+ mmr_t sh_xnni0_llp_to_fifo02_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnni0_llp_to_fifo02_flow_s;
+} sh_xnni0_llp_to_fifo02_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_TO_FIFO13_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_llp_to_fifo13_flow_u {
+ mmr_t sh_xnni0_llp_to_fifo13_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnni0_llp_to_fifo13_flow_s;
+} sh_xnni0_llp_to_fifo13_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LLP_DEBIT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_llp_debit_flow_u {
+ mmr_t sh_xnni0_llp_debit_flow_regval;
+ struct {
+ mmr_t debit_vc0_dyn : 5;
+ mmr_t reserved_0 : 3;
+ mmr_t debit_vc0_cap : 5;
+ mmr_t reserved_1 : 3;
+ mmr_t debit_vc1_dyn : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t debit_vc1_cap : 5;
+ mmr_t reserved_3 : 3;
+ mmr_t debit_vc2_dyn : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t debit_vc2_cap : 5;
+ mmr_t reserved_5 : 3;
+ mmr_t debit_vc3_dyn : 5;
+ mmr_t reserved_6 : 3;
+ mmr_t debit_vc3_cap : 5;
+ mmr_t reserved_7 : 3;
+ } sh_xnni0_llp_debit_flow_s;
+} sh_xnni0_llp_debit_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_0_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_link_0_flow_u {
+ mmr_t sh_xnni0_link_0_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t credit_vc0_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc0_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc0_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni0_link_0_flow_s;
+} sh_xnni0_link_0_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_1_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_link_1_flow_u {
+ mmr_t sh_xnni0_link_1_flow_regval;
+ struct {
+ mmr_t debit_vc1_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc1_force_cred : 1;
+ mmr_t credit_vc1_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc1_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc1_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni0_link_1_flow_s;
+} sh_xnni0_link_1_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_2_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_link_2_flow_u {
+ mmr_t sh_xnni0_link_2_flow_regval;
+ struct {
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc2_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc2_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc2_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni0_link_2_flow_s;
+} sh_xnni0_link_2_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_LINK_3_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_link_3_flow_u {
+ mmr_t sh_xnni0_link_3_flow_regval;
+ struct {
+ mmr_t debit_vc3_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc3_force_cred : 1;
+ mmr_t credit_vc3_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc3_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc3_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni0_link_3_flow_s;
+} sh_xnni0_link_3_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_TO_FIFO02_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_llp_to_fifo02_flow_u {
+ mmr_t sh_xnni1_llp_to_fifo02_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnni1_llp_to_fifo02_flow_s;
+} sh_xnni1_llp_to_fifo02_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_TO_FIFO13_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_llp_to_fifo13_flow_u {
+ mmr_t sh_xnni1_llp_to_fifo13_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnni1_llp_to_fifo13_flow_s;
+} sh_xnni1_llp_to_fifo13_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LLP_DEBIT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_llp_debit_flow_u {
+ mmr_t sh_xnni1_llp_debit_flow_regval;
+ struct {
+ mmr_t debit_vc0_dyn : 5;
+ mmr_t reserved_0 : 3;
+ mmr_t debit_vc0_cap : 5;
+ mmr_t reserved_1 : 3;
+ mmr_t debit_vc1_dyn : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t debit_vc1_cap : 5;
+ mmr_t reserved_3 : 3;
+ mmr_t debit_vc2_dyn : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t debit_vc2_cap : 5;
+ mmr_t reserved_5 : 3;
+ mmr_t debit_vc3_dyn : 5;
+ mmr_t reserved_6 : 3;
+ mmr_t debit_vc3_cap : 5;
+ mmr_t reserved_7 : 3;
+ } sh_xnni1_llp_debit_flow_s;
+} sh_xnni1_llp_debit_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_0_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_link_0_flow_u {
+ mmr_t sh_xnni1_link_0_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t credit_vc0_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc0_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc0_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni1_link_0_flow_s;
+} sh_xnni1_link_0_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_1_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_link_1_flow_u {
+ mmr_t sh_xnni1_link_1_flow_regval;
+ struct {
+ mmr_t debit_vc1_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc1_force_cred : 1;
+ mmr_t credit_vc1_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc1_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc1_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni1_link_1_flow_s;
+} sh_xnni1_link_1_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_2_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_link_2_flow_u {
+ mmr_t sh_xnni1_link_2_flow_regval;
+ struct {
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc2_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc2_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc2_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni1_link_2_flow_s;
+} sh_xnni1_link_2_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_LINK_3_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_link_3_flow_u {
+ mmr_t sh_xnni1_link_3_flow_regval;
+ struct {
+ mmr_t debit_vc3_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc3_force_cred : 1;
+ mmr_t credit_vc3_test : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t credit_vc3_dyn : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc3_cap : 7;
+ mmr_t reserved_3 : 33;
+ } sh_xnni1_link_3_flow_s;
+} sh_xnni1_link_3_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IILB_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+typedef union sh_iilb_local_table_u {
+ mmr_t sh_iilb_local_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 57;
+ mmr_t valid : 1;
+ } sh_iilb_local_table_s;
+} sh_iilb_local_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IILB_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+typedef union sh_iilb_global_table_u {
+ mmr_t sh_iilb_global_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 57;
+ mmr_t valid : 1;
+ } sh_iilb_global_table_s;
+} sh_iilb_global_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IILB_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+typedef union sh_iilb_over_ride_table_u {
+ mmr_t sh_iilb_over_ride_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 57;
+ mmr_t enable : 1;
+ } sh_iilb_over_ride_table_s;
+} sh_iilb_over_ride_table_u_t;
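The three IILB routing registers above share one entry format: a 4-bit direction, a valid bit, and an NI select in the low bits, with the table-wide valid/enable flag up at bit 63. As a hypothetical sketch (again assuming `mmr_t` is `uint64_t` and gcc's LSB-first bit-field layout on little-endian ia64), an enabled override entry could be composed like this:

```c
#include <stdint.h>

typedef uint64_t mmr_t;        /* assumption: mmr_t is a 64-bit MMR word */

/* Local copy of the SH_IILB_OVER_RIDE_TABLE layout defined above. */
typedef union sh_iilb_over_ride_table_u {
	mmr_t sh_iilb_over_ride_table_regval;
	struct {
		mmr_t dir0       : 4;    /* bits 3:0 */
		mmr_t v0         : 1;    /* bit  4   */
		mmr_t ni_sel0    : 1;    /* bit  5   */
		mmr_t reserved_0 : 57;
		mmr_t enable     : 1;    /* bit 63   */
	} sh_iilb_over_ride_table_s;
} sh_iilb_over_ride_table_u_t;

/* Build the raw 64-bit value for a valid, enabled override entry. */
static mmr_t iilb_override_entry(unsigned dir, unsigned ni_sel)
{
	sh_iilb_over_ride_table_u_t ent;

	ent.sh_iilb_over_ride_table_regval = 0;
	ent.sh_iilb_over_ride_table_s.dir0 = dir & 0xf;
	ent.sh_iilb_over_ride_table_s.v0 = 1;
	ent.sh_iilb_over_ride_table_s.ni_sel0 = ni_sel & 0x1;
	ent.sh_iilb_over_ride_table_s.enable = 1;
	return ent.sh_iilb_over_ride_table_regval;
}
```

For example, direction 5 with NI 0 yields `0x8000000000000015`: `0x5` in bits 3:0, the valid bit at bit 4, and the enable bit at bit 63.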
+
+/* ==================================================================== */
+/* Register "SH_IILB_RSP_PLANE_HINT" */
+/* If enabled, invert the incoming response-only plane hint bit before lookup */
+/* ==================================================================== */
+
+typedef union sh_iilb_rsp_plane_hint_u {
+ mmr_t sh_iilb_rsp_plane_hint_regval;
+ struct {
+ mmr_t reserved_0 : 64;
+ } sh_iilb_rsp_plane_hint_s;
+} sh_iilb_rsp_plane_hint_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+typedef union sh_pi_local_table_u {
+ mmr_t sh_pi_local_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t valid : 1;
+ } sh_pi_local_table_s;
+} sh_pi_local_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+typedef union sh_pi_global_table_u {
+ mmr_t sh_pi_global_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t valid : 1;
+ } sh_pi_global_table_s;
+} sh_pi_global_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+typedef union sh_pi_over_ride_table_u {
+ mmr_t sh_pi_over_ride_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t enable : 1;
+ } sh_pi_over_ride_table_s;
+} sh_pi_over_ride_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_RSP_PLANE_HINT" */
+/* If enabled, invert the incoming response-only plane hint bit before lookup */
+/* ==================================================================== */
+
+typedef union sh_pi_rsp_plane_hint_u {
+ mmr_t sh_pi_rsp_plane_hint_regval;
+ struct {
+ mmr_t invert : 1;
+ mmr_t reserved_0 : 63;
+ } sh_pi_rsp_plane_hint_s;
+} sh_pi_rsp_plane_hint_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+typedef union sh_ni0_local_table_u {
+ mmr_t sh_ni0_local_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t valid : 1;
+ } sh_ni0_local_table_s;
+} sh_ni0_local_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+typedef union sh_ni0_global_table_u {
+ mmr_t sh_ni0_global_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t valid : 1;
+ } sh_ni0_global_table_s;
+} sh_ni0_global_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+typedef union sh_ni0_over_ride_table_u {
+ mmr_t sh_ni0_over_ride_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t enable : 1;
+ } sh_ni0_over_ride_table_s;
+} sh_ni0_over_ride_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_RSP_PLANE_HINT" */
+/* If enabled, invert the incoming response-only plane hint bit before lookup */
+/* ==================================================================== */
+
+typedef union sh_ni0_rsp_plane_hint_u {
+ mmr_t sh_ni0_rsp_plane_hint_regval;
+ struct {
+ mmr_t reserved_0 : 64;
+ } sh_ni0_rsp_plane_hint_s;
+} sh_ni0_rsp_plane_hint_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+typedef union sh_ni1_local_table_u {
+ mmr_t sh_ni1_local_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t valid : 1;
+ } sh_ni1_local_table_s;
+} sh_ni1_local_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+typedef union sh_ni1_global_table_u {
+ mmr_t sh_ni1_global_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t valid : 1;
+ } sh_ni1_global_table_s;
+} sh_ni1_global_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+typedef union sh_ni1_over_ride_table_u {
+ mmr_t sh_ni1_over_ride_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t reserved_0 : 58;
+ mmr_t enable : 1;
+ } sh_ni1_over_ride_table_s;
+} sh_ni1_over_ride_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_RSP_PLANE_HINT" */
+/* If enabled, invert the incoming response-only plane hint bit before lookup */
+/* ==================================================================== */
+
+typedef union sh_ni1_rsp_plane_hint_u {
+ mmr_t sh_ni1_rsp_plane_hint_regval;
+ struct {
+ mmr_t reserved_0 : 64;
+ } sh_ni1_rsp_plane_hint_s;
+} sh_ni1_rsp_plane_hint_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_LOCAL_TABLE" */
+/* local lookup table */
+/* ==================================================================== */
+
+typedef union sh_md_local_table_u {
+ mmr_t sh_md_local_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t valid : 1;
+ } sh_md_local_table_s;
+} sh_md_local_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_GLOBAL_TABLE" */
+/* global lookup table */
+/* ==================================================================== */
+
+typedef union sh_md_global_table_u {
+ mmr_t sh_md_global_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t valid : 1;
+ } sh_md_global_table_s;
+} sh_md_global_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_OVER_RIDE_TABLE" */
+/* If enabled, bypass the Global/Local tables */
+/* ==================================================================== */
+
+typedef union sh_md_over_ride_table_u {
+ mmr_t sh_md_over_ride_table_regval;
+ struct {
+ mmr_t dir0 : 4;
+ mmr_t v0 : 1;
+ mmr_t ni_sel0 : 1;
+ mmr_t reserved_0 : 2;
+ mmr_t dir1 : 4;
+ mmr_t v1 : 1;
+ mmr_t ni_sel1 : 1;
+ mmr_t reserved_1 : 49;
+ mmr_t enable : 1;
+ } sh_md_over_ride_table_s;
+} sh_md_over_ride_table_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_RSP_PLANE_HINT" */
+/* If enabled, invert the incoming response-only plane hint bit before lookup */
+/* ==================================================================== */
+
+typedef union sh_md_rsp_plane_hint_u {
+ mmr_t sh_md_rsp_plane_hint_regval;
+ struct {
+ mmr_t invert : 1;
+ mmr_t reserved_0 : 63;
+ } sh_md_rsp_plane_hint_s;
+} sh_md_rsp_plane_hint_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_LIQ_CTL" */
+/* Local Block LIQ Control */
+/* ==================================================================== */
+
+typedef union sh_lb_liq_ctl_u {
+ mmr_t sh_lb_liq_ctl_regval;
+ struct {
+ mmr_t liq_req_ctl : 5;
+ mmr_t reserved_0 : 3;
+ mmr_t liq_rpl_ctl : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t force_rq_credit : 1;
+ mmr_t force_rp_credit : 1;
+ mmr_t force_linvv_credit : 1;
+ mmr_t reserved_2 : 45;
+ } sh_lb_liq_ctl_s;
+} sh_lb_liq_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_LOQ_CTL" */
+/* Local Block LOQ Control */
+/* ==================================================================== */
+
+typedef union sh_lb_loq_ctl_u {
+ mmr_t sh_lb_loq_ctl_regval;
+ struct {
+ mmr_t loq_req_ctl : 1;
+ mmr_t loq_rpl_ctl : 1;
+ mmr_t reserved_0 : 62;
+ } sh_lb_loq_ctl_s;
+} sh_lb_loq_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_MAX_REP_CREDIT_CNT" */
+/* Maximum number of reply credits from XN */
+/* ==================================================================== */
+
+typedef union sh_lb_max_rep_credit_cnt_u {
+ mmr_t sh_lb_max_rep_credit_cnt_regval;
+ struct {
+ mmr_t max_cnt : 5;
+ mmr_t reserved_0 : 59;
+ } sh_lb_max_rep_credit_cnt_s;
+} sh_lb_max_rep_credit_cnt_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_MAX_REQ_CREDIT_CNT" */
+/* Maximum number of request credits from XN */
+/* ==================================================================== */
+
+typedef union sh_lb_max_req_credit_cnt_u {
+ mmr_t sh_lb_max_req_credit_cnt_regval;
+ struct {
+ mmr_t max_cnt : 5;
+ mmr_t reserved_0 : 59;
+ } sh_lb_max_req_credit_cnt_s;
+} sh_lb_max_req_credit_cnt_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PIO_TIME_OUT" */
+/* Local Block PIO time out value */
+/* ==================================================================== */
+
+typedef union sh_pio_time_out_u {
+ mmr_t sh_pio_time_out_regval;
+ struct {
+ mmr_t value : 16;
+ mmr_t reserved_0 : 48;
+ } sh_pio_time_out_s;
+} sh_pio_time_out_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PIO_NACK_RESET" */
+/* Local Block PIO Reset for nack counters */
+/* ==================================================================== */
+
+typedef union sh_pio_nack_reset_u {
+ mmr_t sh_pio_nack_reset_regval;
+ struct {
+ mmr_t pulse : 1;
+ mmr_t reserved_0 : 63;
+ } sh_pio_nack_reset_s;
+} sh_pio_nack_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CONVEYOR_BELT_TIME_OUT" */
+/* Local Block conveyor belt time out value */
+/* ==================================================================== */
+
+typedef union sh_conveyor_belt_time_out_u {
+ mmr_t sh_conveyor_belt_time_out_regval;
+ struct {
+ mmr_t value : 12;
+ mmr_t reserved_0 : 52;
+ } sh_conveyor_belt_time_out_s;
+} sh_conveyor_belt_time_out_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_CREDIT_STATUS" */
+/* Credit Counter Status Register */
+/* ==================================================================== */
+
+typedef union sh_lb_credit_status_u {
+ mmr_t sh_lb_credit_status_regval;
+ struct {
+ mmr_t liq_rq_credit : 5;
+ mmr_t reserved_0 : 1;
+ mmr_t liq_rp_credit : 4;
+ mmr_t reserved_1 : 2;
+ mmr_t linvv_credit : 6;
+ mmr_t loq_rq_credit : 5;
+ mmr_t loq_rp_credit : 5;
+ mmr_t reserved_2 : 36;
+ } sh_lb_credit_status_s;
+} sh_lb_credit_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_LOCAL_SEL" */
+/* LB Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_lb_debug_local_sel_u {
+ mmr_t sh_lb_debug_local_sel_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_lb_debug_local_sel_s;
+} sh_lb_debug_local_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_PERF_SEL" */
+/* LB Debug Port Performance Select */
+/* ==================================================================== */
+
+typedef union sh_lb_debug_perf_sel_u {
+ mmr_t sh_lb_debug_perf_sel_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_lb_debug_perf_sel_s;
+} sh_lb_debug_perf_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_TRIG_SEL" */
+/* LB Debug Trigger Select */
+/* ==================================================================== */
+
+typedef union sh_lb_debug_trig_sel_u {
+ mmr_t sh_lb_debug_trig_sel_regval;
+ struct {
+ mmr_t trigger0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t trigger4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t trigger5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t trigger6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t trigger7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_lb_debug_trig_sel_s;
+} sh_lb_debug_trig_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_1" */
+/* LB Error capture information: HDR1 */
+/* ==================================================================== */
+
+typedef union sh_lb_error_detail_1_u {
+ mmr_t sh_lb_error_detail_1_regval;
+ struct {
+ mmr_t command : 8;
+ mmr_t suppl : 14;
+ mmr_t reserved_0 : 2;
+ mmr_t source : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t dest : 3;
+ mmr_t reserved_2 : 5;
+ mmr_t hdr_err : 1;
+ mmr_t data_err : 1;
+ mmr_t reserved_3 : 13;
+ mmr_t valid : 1;
+ } sh_lb_error_detail_1_s;
+} sh_lb_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_2" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+typedef union sh_lb_error_detail_2_u {
+ mmr_t sh_lb_error_detail_2_regval;
+ struct {
+ mmr_t address : 47;
+ mmr_t reserved_0 : 17;
+ } sh_lb_error_detail_2_s;
+} sh_lb_error_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_3" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+typedef union sh_lb_error_detail_3_u {
+ mmr_t sh_lb_error_detail_3_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_lb_error_detail_3_s;
+} sh_lb_error_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_4" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+typedef union sh_lb_error_detail_4_u {
+ mmr_t sh_lb_error_detail_4_regval;
+ struct {
+ mmr_t route : 64;
+ } sh_lb_error_detail_4_s;
+} sh_lb_error_detail_4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_DETAIL_5" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+typedef union sh_lb_error_detail_5_u {
+ mmr_t sh_lb_error_detail_5_regval;
+ struct {
+ mmr_t read_retry : 1;
+ mmr_t ptc1_write : 1;
+ mmr_t write_retry : 1;
+ mmr_t count_a_overflow : 1;
+ mmr_t count_b_overflow : 1;
+ mmr_t nack_a_timeout : 1;
+ mmr_t nack_b_timeout : 1;
+ mmr_t reserved_0 : 57;
+ } sh_lb_error_detail_5_s;
+} sh_lb_error_detail_5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_MASK" */
+/* LB Error Mask */
+/* ==================================================================== */
+
+typedef union sh_lb_error_mask_u {
+ mmr_t sh_lb_error_mask_regval;
+ struct {
+ mmr_t rq_bad_cmd : 1;
+ mmr_t rp_bad_cmd : 1;
+ mmr_t rq_short : 1;
+ mmr_t rp_short : 1;
+ mmr_t rq_long : 1;
+ mmr_t rp_long : 1;
+ mmr_t rq_bad_data : 1;
+ mmr_t rp_bad_data : 1;
+ mmr_t rq_bad_addr : 1;
+ mmr_t rq_time_out : 1;
+ mmr_t linvv_overflow : 1;
+ mmr_t unexpected_linv : 1;
+ mmr_t ptc_1_timeout : 1;
+ mmr_t junk_bus_err : 1;
+ mmr_t pio_cb_err : 1;
+ mmr_t vector_rq_route_error : 1;
+ mmr_t vector_rp_route_error : 1;
+ mmr_t gclk_drop : 1;
+ mmr_t rq_fifo_error : 1;
+ mmr_t rp_fifo_error : 1;
+ mmr_t unexp_valid : 1;
+ mmr_t rq_credit_overflow : 1;
+ mmr_t rp_credit_overflow : 1;
+ mmr_t reserved_0 : 41;
+ } sh_lb_error_mask_s;
+} sh_lb_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_OVERFLOW" */
+/* LB Error Overflow */
+/* ==================================================================== */
+
+typedef union sh_lb_error_overflow_u {
+ mmr_t sh_lb_error_overflow_regval;
+ struct {
+ mmr_t rq_bad_cmd_ovrfl : 1;
+ mmr_t rp_bad_cmd_ovrfl : 1;
+ mmr_t rq_short_ovrfl : 1;
+ mmr_t rp_short_ovrfl : 1;
+ mmr_t rq_long_ovrfl : 1;
+ mmr_t rp_long_ovrfl : 1;
+ mmr_t rq_bad_data_ovrfl : 1;
+ mmr_t rp_bad_data_ovrfl : 1;
+ mmr_t rq_bad_addr_ovrfl : 1;
+ mmr_t rq_time_out_ovrfl : 1;
+ mmr_t linvv_overflow_ovrfl : 1;
+ mmr_t unexpected_linv_ovrfl : 1;
+ mmr_t ptc_1_timeout_ovrfl : 1;
+ mmr_t junk_bus_err_ovrfl : 1;
+ mmr_t pio_cb_err_ovrfl : 1;
+ mmr_t vector_rq_route_error_ovrfl : 1;
+ mmr_t vector_rp_route_error_ovrfl : 1;
+ mmr_t gclk_drop_ovrfl : 1;
+ mmr_t rq_fifo_error_ovrfl : 1;
+ mmr_t rp_fifo_error_ovrfl : 1;
+ mmr_t unexp_valid_ovrfl : 1;
+ mmr_t rq_credit_overflow_ovrfl : 1;
+ mmr_t rp_credit_overflow_ovrfl : 1;
+ mmr_t reserved_0 : 41;
+ } sh_lb_error_overflow_s;
+} sh_lb_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_ERROR_SUMMARY" */
+/* LB Error Bits */
+/* ==================================================================== */
+
+typedef union sh_lb_error_summary_u {
+ mmr_t sh_lb_error_summary_regval;
+ struct {
+ mmr_t rq_bad_cmd : 1;
+ mmr_t rp_bad_cmd : 1;
+ mmr_t rq_short : 1;
+ mmr_t rp_short : 1;
+ mmr_t rq_long : 1;
+ mmr_t rp_long : 1;
+ mmr_t rq_bad_data : 1;
+ mmr_t rp_bad_data : 1;
+ mmr_t rq_bad_addr : 1;
+ mmr_t rq_time_out : 1;
+ mmr_t linvv_overflow : 1;
+ mmr_t unexpected_linv : 1;
+ mmr_t ptc_1_timeout : 1;
+ mmr_t junk_bus_err : 1;
+ mmr_t pio_cb_err : 1;
+ mmr_t vector_rq_route_error : 1;
+ mmr_t vector_rp_route_error : 1;
+ mmr_t gclk_drop : 1;
+ mmr_t rq_fifo_error : 1;
+ mmr_t rp_fifo_error : 1;
+ mmr_t unexp_valid : 1;
+ mmr_t rq_credit_overflow : 1;
+ mmr_t rp_credit_overflow : 1;
+ mmr_t reserved_0 : 41;
+ } sh_lb_error_summary_s;
+} sh_lb_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_FIRST_ERROR" */
+/* LB First Error */
+/* ==================================================================== */
+
+typedef union sh_lb_first_error_u {
+ mmr_t sh_lb_first_error_regval;
+ struct {
+ mmr_t rq_bad_cmd : 1;
+ mmr_t rp_bad_cmd : 1;
+ mmr_t rq_short : 1;
+ mmr_t rp_short : 1;
+ mmr_t rq_long : 1;
+ mmr_t rp_long : 1;
+ mmr_t rq_bad_data : 1;
+ mmr_t rp_bad_data : 1;
+ mmr_t rq_bad_addr : 1;
+ mmr_t rq_time_out : 1;
+ mmr_t linvv_overflow : 1;
+ mmr_t unexpected_linv : 1;
+ mmr_t ptc_1_timeout : 1;
+ mmr_t junk_bus_err : 1;
+ mmr_t pio_cb_err : 1;
+ mmr_t vector_rq_route_error : 1;
+ mmr_t vector_rp_route_error : 1;
+ mmr_t gclk_drop : 1;
+ mmr_t rq_fifo_error : 1;
+ mmr_t rp_fifo_error : 1;
+ mmr_t unexp_valid : 1;
+ mmr_t rq_credit_overflow : 1;
+ mmr_t rp_credit_overflow : 1;
+ mmr_t reserved_0 : 41;
+ } sh_lb_first_error_s;
+} sh_lb_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_LAST_CREDIT" */
+/* Credit counter status register */
+/* ==================================================================== */
+
+typedef union sh_lb_last_credit_u {
+ mmr_t sh_lb_last_credit_regval;
+ struct {
+ mmr_t liq_rq_credit : 5;
+ mmr_t reserved_0 : 1;
+ mmr_t liq_rp_credit : 4;
+ mmr_t reserved_1 : 2;
+ mmr_t linvv_credit : 6;
+ mmr_t loq_rq_credit : 5;
+ mmr_t loq_rp_credit : 5;
+ mmr_t reserved_2 : 36;
+ } sh_lb_last_credit_s;
+} sh_lb_last_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_NACK_STATUS" */
+/* Nack Counter Status Register */
+/* ==================================================================== */
+
+typedef union sh_lb_nack_status_u {
+ mmr_t sh_lb_nack_status_regval;
+ struct {
+ mmr_t pio_nack_a : 12;
+ mmr_t reserved_0 : 4;
+ mmr_t pio_nack_b : 12;
+ mmr_t reserved_1 : 4;
+ mmr_t junk_nack : 16;
+ mmr_t cb_timeout_count : 12;
+ mmr_t cb_state : 2;
+ mmr_t reserved_2 : 2;
+ } sh_lb_nack_status_s;
+} sh_lb_nack_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_TRIGGER_COMPARE" */
+/* LB Test-point Trigger Compare */
+/* ==================================================================== */
+
+typedef union sh_lb_trigger_compare_u {
+ mmr_t sh_lb_trigger_compare_regval;
+ struct {
+ mmr_t mask : 32;
+ mmr_t reserved_0 : 32;
+ } sh_lb_trigger_compare_s;
+} sh_lb_trigger_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_TRIGGER_DATA" */
+/* LB Test-point Trigger Compare Data */
+/* ==================================================================== */
+
+typedef union sh_lb_trigger_data_u {
+ mmr_t sh_lb_trigger_data_regval;
+ struct {
+ mmr_t compare_pattern : 32;
+ mmr_t reserved_0 : 32;
+ } sh_lb_trigger_data_s;
+} sh_lb_trigger_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AEC_CONFIG" */
+/* PI Adaptive Error Correction Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_aec_config_u {
+ mmr_t sh_pi_aec_config_regval;
+ struct {
+ mmr_t mode : 3;
+ mmr_t reserved_0 : 61;
+ } sh_pi_aec_config_s;
+} sh_pi_aec_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_ERROR_MASK" */
+/* PI AFI Error Mask */
+/* ==================================================================== */
+
+typedef union sh_pi_afi_error_mask_u {
+ mmr_t sh_pi_afi_error_mask_regval;
+ struct {
+ mmr_t reserved_0 : 21;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_len : 1;
+ mmr_t reserved_1 : 29;
+ } sh_pi_afi_error_mask_s;
+} sh_pi_afi_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_COMPARE" */
+/* PI AFI Test Point Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_afi_test_point_compare_u {
+ mmr_t sh_pi_afi_test_point_compare_regval;
+ struct {
+ mmr_t compare_mask : 32;
+ mmr_t compare_pattern : 32;
+ } sh_pi_afi_test_point_compare_s;
+} sh_pi_afi_test_point_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_SELECT" */
+/* PI AFI Test Point Select */
+/* ==================================================================== */
+
+typedef union sh_pi_afi_test_point_select_u {
+ mmr_t sh_pi_afi_test_point_select_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 4;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble1_chiplet_sel : 4;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble2_chiplet_sel : 4;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble3_chiplet_sel : 4;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble4_chiplet_sel : 4;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble5_chiplet_sel : 4;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble6_chiplet_sel : 4;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble7_chiplet_sel : 4;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_pi_afi_test_point_select_s;
+} sh_pi_afi_test_point_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_TEST_POINT_TRIGGER_SELECT" */
+/* PI AFI Test Point Trigger Select                                      */
+/* ==================================================================== */
+
+typedef union sh_pi_afi_test_point_trigger_select_u {
+ mmr_t sh_pi_afi_test_point_trigger_select_regval;
+ struct {
+ mmr_t trigger0_chiplet_sel : 4;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger1_chiplet_sel : 4;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger2_chiplet_sel : 4;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger3_chiplet_sel : 4;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger4_chiplet_sel : 4;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger5_chiplet_sel : 4;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger6_chiplet_sel : 4;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger7_chiplet_sel : 4;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ } sh_pi_afi_test_point_trigger_select_s;
+} sh_pi_afi_test_point_trigger_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AUTO_REPLY_ENABLE" */
+/* PI Auto Reply Enable */
+/* ==================================================================== */
+
+typedef union sh_pi_auto_reply_enable_u {
+ mmr_t sh_pi_auto_reply_enable_regval;
+ struct {
+ mmr_t auto_reply_enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_pi_auto_reply_enable_s;
+} sh_pi_auto_reply_enable_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_CONTROL" */
+/* CRB CAM MMR Access Control */
+/* ==================================================================== */
+
+typedef union sh_pi_cam_control_u {
+ mmr_t sh_pi_cam_control_regval;
+ struct {
+ mmr_t cam_indx : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t cam_write : 1;
+ mmr_t rrb_rd_xfer_clear : 1;
+ mmr_t reserved_1 : 53;
+ mmr_t start : 1;
+ } sh_pi_cam_control_s;
+} sh_pi_cam_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_COMPARE" */
+/* PI CRBC Test Point Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_crbc_test_point_compare_u {
+ mmr_t sh_pi_crbc_test_point_compare_regval;
+ struct {
+ mmr_t compare_mask : 32;
+ mmr_t compare_pattern : 32;
+ } sh_pi_crbc_test_point_compare_s;
+} sh_pi_crbc_test_point_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_SELECT" */
+/* PI CRBC Test Point Select */
+/* ==================================================================== */
+
+typedef union sh_pi_crbc_test_point_select_u {
+ mmr_t sh_pi_crbc_test_point_select_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_pi_crbc_test_point_select_s;
+} sh_pi_crbc_test_point_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBC_TEST_POINT_TRIGGER_SELECT" */
+/* PI CRBC Test Point Trigger Select */
+/* ==================================================================== */
+
+typedef union sh_pi_crbc_test_point_trigger_select_u {
+ mmr_t sh_pi_crbc_test_point_trigger_select_regval;
+ struct {
+ mmr_t trigger0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t trigger4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t trigger5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t trigger6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t trigger7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_pi_crbc_test_point_trigger_select_s;
+} sh_pi_crbc_test_point_trigger_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_ERROR_MASK" */
+/* PI CRBP Error Mask */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_error_mask_u {
+ mmr_t sh_pi_crbp_error_mask_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t reserved_0 : 43;
+ } sh_pi_crbp_error_mask_s;
+} sh_pi_crbp_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FSB_PIPE_COMPARE" */
+/* CRBP FSB Pipe Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_fsb_pipe_compare_u {
+ mmr_t sh_pi_crbp_fsb_pipe_compare_regval;
+ struct {
+ mmr_t compare_address : 47;
+ mmr_t compare_req : 6;
+ mmr_t reserved_0 : 11;
+ } sh_pi_crbp_fsb_pipe_compare_s;
+} sh_pi_crbp_fsb_pipe_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FSB_PIPE_MASK" */
+/* CRBP FSB Pipe Compare Mask                                            */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_fsb_pipe_mask_u {
+ mmr_t sh_pi_crbp_fsb_pipe_mask_regval;
+ struct {
+ mmr_t compare_address_mask : 47;
+ mmr_t compare_req_mask : 6;
+ mmr_t reserved_0 : 11;
+ } sh_pi_crbp_fsb_pipe_mask_s;
+} sh_pi_crbp_fsb_pipe_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_COMPARE" */
+/* PI CRBP Test Point Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_test_point_compare_u {
+ mmr_t sh_pi_crbp_test_point_compare_regval;
+ struct {
+ mmr_t compare_mask : 32;
+ mmr_t compare_pattern : 32;
+ } sh_pi_crbp_test_point_compare_s;
+} sh_pi_crbp_test_point_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_SELECT" */
+/* PI CRBP Test Point Select */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_test_point_select_u {
+ mmr_t sh_pi_crbp_test_point_select_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_pi_crbp_test_point_select_s;
+} sh_pi_crbp_test_point_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_TEST_POINT_TRIGGER_SELECT" */
+/* PI CRBP Test Point Trigger Select */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_test_point_trigger_select_u {
+ mmr_t sh_pi_crbp_test_point_trigger_select_regval;
+ struct {
+ mmr_t trigger0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t trigger4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t trigger5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t trigger6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t trigger7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_pi_crbp_test_point_trigger_select_s;
+} sh_pi_crbp_test_point_trigger_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_0" */
+/* CRBP XB Pipe Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_xb_pipe_compare_0_u {
+ mmr_t sh_pi_crbp_xb_pipe_compare_0_regval;
+ struct {
+ mmr_t compare_address : 47;
+ mmr_t compare_command : 8;
+ mmr_t reserved_0 : 9;
+ } sh_pi_crbp_xb_pipe_compare_0_s;
+} sh_pi_crbp_xb_pipe_compare_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_COMPARE_1" */
+/* CRBP XB Pipe Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_xb_pipe_compare_1_u {
+ mmr_t sh_pi_crbp_xb_pipe_compare_1_regval;
+ struct {
+ mmr_t compare_source : 14;
+ mmr_t reserved_0 : 2;
+ mmr_t compare_supplemental : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t compare_echo : 9;
+ mmr_t reserved_2 : 23;
+ } sh_pi_crbp_xb_pipe_compare_1_s;
+} sh_pi_crbp_xb_pipe_compare_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_MASK_0" */
+/* CRBP XB Pipe Compare Mask Register 0                                  */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_xb_pipe_mask_0_u {
+ mmr_t sh_pi_crbp_xb_pipe_mask_0_regval;
+ struct {
+ mmr_t compare_address_mask : 47;
+ mmr_t compare_command_mask : 8;
+ mmr_t reserved_0 : 9;
+ } sh_pi_crbp_xb_pipe_mask_0_s;
+} sh_pi_crbp_xb_pipe_mask_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_XB_PIPE_MASK_1" */
+/* CRBP XB Pipe Compare Mask Register 1 */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_xb_pipe_mask_1_u {
+ mmr_t sh_pi_crbp_xb_pipe_mask_1_regval;
+ struct {
+ mmr_t compare_source_mask : 14;
+ mmr_t reserved_0 : 2;
+ mmr_t compare_supplemental_mask : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t compare_echo_mask : 9;
+ mmr_t reserved_2 : 23;
+ } sh_pi_crbp_xb_pipe_mask_1_s;
+} sh_pi_crbp_xb_pipe_mask_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_DPC_QUEUE_CONFIG" */
+/* DPC Queue Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_dpc_queue_config_u {
+ mmr_t sh_pi_dpc_queue_config_regval;
+ struct {
+ mmr_t dwcq_ae_level : 5;
+ mmr_t reserved_0 : 3;
+ mmr_t dwcq_af_thresh : 5;
+ mmr_t reserved_1 : 3;
+ mmr_t fwcq_ae_level : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t fwcq_af_thresh : 5;
+ mmr_t reserved_3 : 35;
+ } sh_pi_dpc_queue_config_s;
+} sh_pi_dpc_queue_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_MASK" */
+/* PI Error Mask */
+/* ==================================================================== */
+
+typedef union sh_pi_error_mask_u {
+ mmr_t sh_pi_error_mask_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_length : 1;
+ mmr_t reserved_0 : 29;
+ } sh_pi_error_mask_s;
+} sh_pi_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_EXPRESS_REPLY_CONFIG" */
+/* PI Express Reply Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_express_reply_config_u {
+ mmr_t sh_pi_express_reply_config_regval;
+ struct {
+ mmr_t mode : 3;
+ mmr_t reserved_0 : 61;
+ } sh_pi_express_reply_config_s;
+} sh_pi_express_reply_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_COMPARE_VALUE" */
+/* FSB Compare Value */
+/* ==================================================================== */
+
+typedef union sh_pi_fsb_compare_value_u {
+ mmr_t sh_pi_fsb_compare_value_regval;
+ struct {
+ mmr_t compare_value : 64;
+ } sh_pi_fsb_compare_value_s;
+} sh_pi_fsb_compare_value_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_COMPARE_MASK" */
+/* FSB Compare Mask */
+/* ==================================================================== */
+
+typedef union sh_pi_fsb_compare_mask_u {
+ mmr_t sh_pi_fsb_compare_mask_regval;
+ struct {
+ mmr_t mask_value : 64;
+ } sh_pi_fsb_compare_mask_s;
+} sh_pi_fsb_compare_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_FSB_ERROR_INJECTION" */
+/* Inject an Error onto the FSB */
+/* ==================================================================== */
+
+typedef union sh_pi_fsb_error_injection_u {
+ mmr_t sh_pi_fsb_error_injection_regval;
+ struct {
+ mmr_t rp_pe_to_fsb : 1;
+ mmr_t ap0_pe_to_fsb : 1;
+ mmr_t ap1_pe_to_fsb : 1;
+ mmr_t rsp_pe_to_fsb : 1;
+ mmr_t dw0_ce_to_fsb : 1;
+ mmr_t dw0_uce_to_fsb : 1;
+ mmr_t dw1_ce_to_fsb : 1;
+ mmr_t dw1_uce_to_fsb : 1;
+ mmr_t ip0_pe_to_fsb : 1;
+ mmr_t ip1_pe_to_fsb : 1;
+ mmr_t reserved_0 : 6;
+ mmr_t rp_pe_from_fsb : 1;
+ mmr_t ap0_pe_from_fsb : 1;
+ mmr_t ap1_pe_from_fsb : 1;
+ mmr_t rsp_pe_from_fsb : 1;
+ mmr_t dw0_ce_from_fsb : 1;
+ mmr_t dw0_uce_from_fsb : 1;
+ mmr_t dw1_ce_from_fsb : 1;
+ mmr_t dw1_uce_from_fsb : 1;
+ mmr_t dw2_ce_from_fsb : 1;
+ mmr_t dw2_uce_from_fsb : 1;
+ mmr_t dw3_ce_from_fsb : 1;
+ mmr_t dw3_uce_from_fsb : 1;
+ mmr_t reserved_1 : 4;
+ mmr_t ioq_overrun : 1;
+ mmr_t livelock : 1;
+ mmr_t bus_hang : 1;
+ mmr_t reserved_2 : 29;
+ } sh_pi_fsb_error_injection_s;
+} sh_pi_fsb_error_injection_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REPLY_VC_CONFIG" */
+/* MD-to-PI Reply Virtual Channel Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_md2pi_reply_vc_config_u {
+ mmr_t sh_pi_md2pi_reply_vc_config_regval;
+ struct {
+ mmr_t hdr_depth : 4;
+ mmr_t data_depth : 4;
+ mmr_t max_credits : 6;
+ mmr_t reserved_0 : 48;
+ mmr_t force_credit : 1;
+ mmr_t capture_credit_status : 1;
+ } sh_pi_md2pi_reply_vc_config_s;
+} sh_pi_md2pi_reply_vc_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REQUEST_VC_CONFIG" */
+/* MD-to-PI Request Virtual Channel Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_md2pi_request_vc_config_u {
+ mmr_t sh_pi_md2pi_request_vc_config_regval;
+ struct {
+ mmr_t hdr_depth : 4;
+ mmr_t data_depth : 4;
+ mmr_t max_credits : 6;
+ mmr_t reserved_0 : 48;
+ mmr_t force_credit : 1;
+ mmr_t capture_credit_status : 1;
+ } sh_pi_md2pi_request_vc_config_s;
+} sh_pi_md2pi_request_vc_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_QUEUE_ERROR_INJECTION" */
+/* PI Queue Error Injection */
+/* ==================================================================== */
+
+typedef union sh_pi_queue_error_injection_u {
+ mmr_t sh_pi_queue_error_injection_regval;
+ struct {
+ mmr_t dat_dfr_q : 1;
+ mmr_t dxb_wtl_cmnd_q : 1;
+ mmr_t fsb_wtl_cmnd_q : 1;
+ mmr_t mdpi_rpy_bfr : 1;
+ mmr_t ptc_intr : 1;
+ mmr_t rxl_kill_q : 1;
+ mmr_t rxl_rdy_q : 1;
+ mmr_t xnpi_rpy_bfr : 1;
+ mmr_t reserved_0 : 56;
+ } sh_pi_queue_error_injection_s;
+} sh_pi_queue_error_injection_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_COMPARE" */
+/* PI Test Point Compare */
+/* ==================================================================== */
+
+typedef union sh_pi_test_point_compare_u {
+ mmr_t sh_pi_test_point_compare_regval;
+ struct {
+ mmr_t compare_mask : 32;
+ mmr_t compare_pattern : 32;
+ } sh_pi_test_point_compare_s;
+} sh_pi_test_point_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_SELECT" */
+/* PI Test Point Select */
+/* ==================================================================== */
+
+typedef union sh_pi_test_point_select_u {
+ mmr_t sh_pi_test_point_select_regval;
+ struct {
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_pi_test_point_select_s;
+} sh_pi_test_point_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_TEST_POINT_TRIGGER_SELECT" */
+/* PI Test Point Trigger Select */
+/* ==================================================================== */
+
+typedef union sh_pi_test_point_trigger_select_u {
+ mmr_t sh_pi_test_point_trigger_select_regval;
+ struct {
+ mmr_t trigger0_chiplet_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger1_chiplet_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger2_chiplet_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger3_chiplet_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t trigger4_chiplet_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t trigger5_chiplet_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t trigger6_chiplet_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t trigger7_chiplet_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_pi_test_point_trigger_select_s;
+} sh_pi_test_point_trigger_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REPLY_VC_CONFIG" */
+/* XN-to-PI Reply Virtual Channel Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_xn2pi_reply_vc_config_u {
+ mmr_t sh_pi_xn2pi_reply_vc_config_regval;
+ struct {
+ mmr_t hdr_depth : 4;
+ mmr_t data_depth : 4;
+ mmr_t max_credits : 6;
+ mmr_t reserved_0 : 48;
+ mmr_t force_credit : 1;
+ mmr_t capture_credit_status : 1;
+ } sh_pi_xn2pi_reply_vc_config_s;
+} sh_pi_xn2pi_reply_vc_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REQUEST_VC_CONFIG" */
+/* XN-to-PI Request Virtual Channel Configuration */
+/* ==================================================================== */
+
+typedef union sh_pi_xn2pi_request_vc_config_u {
+ mmr_t sh_pi_xn2pi_request_vc_config_regval;
+ struct {
+ mmr_t hdr_depth : 4;
+ mmr_t data_depth : 4;
+ mmr_t max_credits : 6;
+ mmr_t reserved_0 : 48;
+ mmr_t force_credit : 1;
+ mmr_t capture_credit_status : 1;
+ } sh_pi_xn2pi_request_vc_config_s;
+} sh_pi_xn2pi_request_vc_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AEC_STATUS" */
+/* PI Adaptive Error Correction Status */
+/* ==================================================================== */
+
+typedef union sh_pi_aec_status_u {
+ mmr_t sh_pi_aec_status_regval;
+ struct {
+ mmr_t state : 3;
+ mmr_t reserved_0 : 61;
+ } sh_pi_aec_status_s;
+} sh_pi_aec_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_AFI_FIRST_ERROR" */
+/* PI AFI First Error */
+/* ==================================================================== */
+
+typedef union sh_pi_afi_first_error_u {
+ mmr_t sh_pi_afi_first_error_regval;
+ struct {
+ mmr_t reserved_0 : 7;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t reserved_1 : 12;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_len : 1;
+ mmr_t reserved_2 : 29;
+ } sh_pi_afi_first_error_s;
+} sh_pi_afi_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_ADDRESS_READ_DATA" */
+/* CRB CAM MMR Address Read Data */
+/* ==================================================================== */
+
+typedef union sh_pi_cam_address_read_data_u {
+ mmr_t sh_pi_cam_address_read_data_regval;
+ struct {
+ mmr_t cam_addr : 48;
+ mmr_t reserved_0 : 15;
+ mmr_t cam_addr_val : 1;
+ } sh_pi_cam_address_read_data_s;
+} sh_pi_cam_address_read_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_LPRA_READ_DATA" */
+/* CRB CAM MMR LPRA Read Data */
+/* ==================================================================== */
+
+typedef union sh_pi_cam_lpra_read_data_u {
+ mmr_t sh_pi_cam_lpra_read_data_regval;
+ struct {
+ mmr_t cam_lpra : 64;
+ } sh_pi_cam_lpra_read_data_s;
+} sh_pi_cam_lpra_read_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_STATE_READ_DATA" */
+/* CRB CAM MMR State Read Data */
+/* ==================================================================== */
+
+typedef union sh_pi_cam_state_read_data_u {
+ mmr_t sh_pi_cam_state_read_data_regval;
+ struct {
+ mmr_t cam_state : 4;
+ mmr_t cam_to : 1;
+ mmr_t cam_state_rd_pend : 1;
+ mmr_t reserved_0 : 26;
+ mmr_t cam_lpra : 18;
+ mmr_t reserved_1 : 13;
+ mmr_t cam_rd_data_val : 1;
+ } sh_pi_cam_state_read_data_s;
+} sh_pi_cam_state_read_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_1" */
+/* PI Corrected Error Detail 1 */
+/* ==================================================================== */
+
+typedef union sh_pi_corrected_detail_1_u {
+ mmr_t sh_pi_corrected_detail_1_regval;
+ struct {
+ mmr_t address : 48;
+ mmr_t syndrome : 8;
+ mmr_t dep : 8;
+ } sh_pi_corrected_detail_1_s;
+} sh_pi_corrected_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_2" */
+/* PI Corrected Error Detail 2 */
+/* ==================================================================== */
+
+typedef union sh_pi_corrected_detail_2_u {
+ mmr_t sh_pi_corrected_detail_2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_pi_corrected_detail_2_s;
+} sh_pi_corrected_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_3" */
+/* PI Corrected Error Detail 3 */
+/* ==================================================================== */
+
+typedef union sh_pi_corrected_detail_3_u {
+ mmr_t sh_pi_corrected_detail_3_regval;
+ struct {
+ mmr_t address : 48;
+ mmr_t syndrome : 8;
+ mmr_t dep : 8;
+ } sh_pi_corrected_detail_3_s;
+} sh_pi_corrected_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CORRECTED_DETAIL_4" */
+/* PI Corrected Error Detail 4 */
+/* ==================================================================== */
+
+typedef union sh_pi_corrected_detail_4_u {
+ mmr_t sh_pi_corrected_detail_4_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_pi_corrected_detail_4_s;
+} sh_pi_corrected_detail_4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_CRBP_FIRST_ERROR" */
+/* PI CRBP First Error */
+/* ==================================================================== */
+
+typedef union sh_pi_crbp_first_error_u {
+ mmr_t sh_pi_crbp_first_error_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t reserved_0 : 43;
+ } sh_pi_crbp_first_error_s;
+} sh_pi_crbp_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_DETAIL_1" */
+/* PI Error Detail 1 */
+/* ==================================================================== */
+
+typedef union sh_pi_error_detail_1_u {
+ mmr_t sh_pi_error_detail_1_regval;
+ struct {
+ mmr_t status : 64;
+ } sh_pi_error_detail_1_s;
+} sh_pi_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_DETAIL_2" */
+/* PI Error Detail 2 */
+/* ==================================================================== */
+
+typedef union sh_pi_error_detail_2_u {
+ mmr_t sh_pi_error_detail_2_regval;
+ struct {
+ mmr_t status : 64;
+ } sh_pi_error_detail_2_s;
+} sh_pi_error_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_OVERFLOW" */
+/* PI Error Overflow */
+/* ==================================================================== */
+
+typedef union sh_pi_error_overflow_u {
+ mmr_t sh_pi_error_overflow_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_length : 1;
+ mmr_t reserved_0 : 29;
+ } sh_pi_error_overflow_s;
+} sh_pi_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_ERROR_SUMMARY" */
+/* PI Error Summary */
+/* ==================================================================== */
+
+typedef union sh_pi_error_summary_u {
+ mmr_t sh_pi_error_summary_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_length : 1;
+ mmr_t reserved_0 : 29;
+ } sh_pi_error_summary_s;
+} sh_pi_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_EXPRESS_REPLY_STATUS" */
+/* PI Express Reply Status */
+/* ==================================================================== */
+
+typedef union sh_pi_express_reply_status_u {
+ mmr_t sh_pi_express_reply_status_regval;
+ struct {
+ mmr_t state : 3;
+ mmr_t reserved_0 : 61;
+ } sh_pi_express_reply_status_s;
+} sh_pi_express_reply_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_FIRST_ERROR" */
+/* PI First Error */
+/* ==================================================================== */
+
+typedef union sh_pi_first_error_u {
+ mmr_t sh_pi_first_error_regval;
+ struct {
+ mmr_t fsb_proto_err : 1;
+ mmr_t gfx_rp_err : 1;
+ mmr_t xb_proto_err : 1;
+ mmr_t mem_rp_err : 1;
+ mmr_t pio_rp_err : 1;
+ mmr_t mem_to_err : 1;
+ mmr_t pio_to_err : 1;
+ mmr_t fsb_shub_uce : 1;
+ mmr_t fsb_shub_ce : 1;
+ mmr_t msg_color_err : 1;
+ mmr_t md_rq_q_oflow : 1;
+ mmr_t md_rp_q_oflow : 1;
+ mmr_t xn_rq_q_oflow : 1;
+ mmr_t xn_rp_q_oflow : 1;
+ mmr_t nack_oflow : 1;
+ mmr_t gfx_int_0 : 1;
+ mmr_t gfx_int_1 : 1;
+ mmr_t md_rq_crd_oflow : 1;
+ mmr_t md_rp_crd_oflow : 1;
+ mmr_t xn_rq_crd_oflow : 1;
+ mmr_t xn_rp_crd_oflow : 1;
+ mmr_t hung_bus : 1;
+ mmr_t rsp_parity : 1;
+ mmr_t ioq_overrun : 1;
+ mmr_t req_format : 1;
+ mmr_t addr_access : 1;
+ mmr_t req_parity : 1;
+ mmr_t addr_parity : 1;
+ mmr_t shub_fsb_dqe : 1;
+ mmr_t shub_fsb_uce : 1;
+ mmr_t shub_fsb_ce : 1;
+ mmr_t livelock : 1;
+ mmr_t bad_snoop : 1;
+ mmr_t fsb_tbl_miss : 1;
+ mmr_t msg_length : 1;
+ mmr_t reserved_0 : 29;
+ } sh_pi_first_error_s;
+} sh_pi_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2MD_REPLY_VC_STATUS" */
+/* PI-to-MD Reply Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_pi2md_reply_vc_status_u {
+ mmr_t sh_pi_pi2md_reply_vc_status_regval;
+ struct {
+ mmr_t output_crd_stat : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_pi2md_reply_vc_status_s;
+} sh_pi_pi2md_reply_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2MD_REQUEST_VC_STATUS" */
+/* PI-to-MD Request Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_pi2md_request_vc_status_u {
+ mmr_t sh_pi_pi2md_request_vc_status_regval;
+ struct {
+ mmr_t output_crd_stat : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_pi2md_request_vc_status_s;
+} sh_pi_pi2md_request_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2XN_REPLY_VC_STATUS" */
+/* PI-to-XN Reply Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_pi2xn_reply_vc_status_u {
+ mmr_t sh_pi_pi2xn_reply_vc_status_regval;
+ struct {
+ mmr_t output_crd_stat : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_pi2xn_reply_vc_status_s;
+} sh_pi_pi2xn_reply_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_PI2XN_REQUEST_VC_STATUS" */
+/* PI-to-XN Request Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_pi2xn_request_vc_status_u {
+ mmr_t sh_pi_pi2xn_request_vc_status_regval;
+ struct {
+ mmr_t output_crd_stat : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_pi2xn_request_vc_status_s;
+} sh_pi_pi2xn_request_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_1" */
+/* PI Uncorrected Error Detail 1 */
+/* ==================================================================== */
+
+typedef union sh_pi_uncorrected_detail_1_u {
+ mmr_t sh_pi_uncorrected_detail_1_regval;
+ struct {
+ mmr_t address : 48;
+ mmr_t syndrome : 8;
+ mmr_t dep : 8;
+ } sh_pi_uncorrected_detail_1_s;
+} sh_pi_uncorrected_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_2" */
+/* PI Uncorrected Error Detail 2 */
+/* ==================================================================== */
+
+typedef union sh_pi_uncorrected_detail_2_u {
+ mmr_t sh_pi_uncorrected_detail_2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_pi_uncorrected_detail_2_s;
+} sh_pi_uncorrected_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_3" */
+/* PI Uncorrected Error Detail 3 */
+/* ==================================================================== */
+
+typedef union sh_pi_uncorrected_detail_3_u {
+ mmr_t sh_pi_uncorrected_detail_3_regval;
+ struct {
+ mmr_t address : 48;
+ mmr_t syndrome : 8;
+ mmr_t dep : 8;
+ } sh_pi_uncorrected_detail_3_s;
+} sh_pi_uncorrected_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCORRECTED_DETAIL_4" */
+/* PI Uncorrected Error Detail 4 */
+/* ==================================================================== */
+
+typedef union sh_pi_uncorrected_detail_4_u {
+ mmr_t sh_pi_uncorrected_detail_4_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_pi_uncorrected_detail_4_s;
+} sh_pi_uncorrected_detail_4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REPLY_VC_STATUS" */
+/* MD-to-PI Reply Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_md2pi_reply_vc_status_u {
+ mmr_t sh_pi_md2pi_reply_vc_status_regval;
+ struct {
+ mmr_t input_hdr_crd_stat : 4;
+ mmr_t input_dat_crd_stat : 4;
+ mmr_t input_queue_stat : 4;
+ mmr_t reserved_0 : 52;
+ } sh_pi_md2pi_reply_vc_status_s;
+} sh_pi_md2pi_reply_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD2PI_REQUEST_VC_STATUS" */
+/* MD-to-PI Request Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_md2pi_request_vc_status_u {
+ mmr_t sh_pi_md2pi_request_vc_status_regval;
+ struct {
+ mmr_t input_hdr_crd_stat : 4;
+ mmr_t input_dat_crd_stat : 4;
+ mmr_t input_queue_stat : 4;
+ mmr_t reserved_0 : 52;
+ } sh_pi_md2pi_request_vc_status_s;
+} sh_pi_md2pi_request_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REPLY_VC_STATUS" */
+/* XN-to-PI Reply Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_xn2pi_reply_vc_status_u {
+ mmr_t sh_pi_xn2pi_reply_vc_status_regval;
+ struct {
+ mmr_t input_hdr_crd_stat : 4;
+ mmr_t input_dat_crd_stat : 4;
+ mmr_t input_queue_stat : 4;
+ mmr_t reserved_0 : 52;
+ } sh_pi_xn2pi_reply_vc_status_s;
+} sh_pi_xn2pi_reply_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_XN2PI_REQUEST_VC_STATUS" */
+/* XN-to-PI Request Virtual Channel Status */
+/* ==================================================================== */
+
+typedef union sh_pi_xn2pi_request_vc_status_u {
+ mmr_t sh_pi_xn2pi_request_vc_status_regval;
+ struct {
+ mmr_t input_hdr_crd_stat : 4;
+ mmr_t input_dat_crd_stat : 4;
+ mmr_t input_queue_stat : 4;
+ mmr_t reserved_0 : 52;
+ } sh_pi_xn2pi_request_vc_status_s;
+} sh_pi_xn2pi_request_vc_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_SIC_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_sic_flow_u {
+ mmr_t sh_xnpi_sic_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 5;
+ mmr_t reserved_0 : 2;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 5;
+ mmr_t reserved_1 : 2;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc0_test : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t credit_vc0_dyn : 5;
+ mmr_t reserved_3 : 3;
+ mmr_t credit_vc0_cap : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t credit_vc2_test : 5;
+ mmr_t reserved_5 : 3;
+ mmr_t credit_vc2_dyn : 5;
+ mmr_t reserved_6 : 3;
+ mmr_t credit_vc2_cap : 5;
+ mmr_t reserved_7 : 2;
+ mmr_t disable_bypass_out : 1;
+ } sh_xnpi_sic_flow_s;
+} sh_xnpi_sic_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_NI0_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_to_ni0_port_flow_u {
+ mmr_t sh_xnpi_to_ni0_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnpi_to_ni0_port_flow_s;
+} sh_xnpi_to_ni0_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_NI1_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_to_ni1_port_flow_u {
+ mmr_t sh_xnpi_to_ni1_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnpi_to_ni1_port_flow_s;
+} sh_xnpi_to_ni1_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_TO_IILB_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_to_iilb_port_flow_u {
+ mmr_t sh_xnpi_to_iilb_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnpi_to_iilb_port_flow_s;
+} sh_xnpi_to_iilb_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_NI0_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_fr_ni0_port_flow_fifo_u {
+ mmr_t sh_xnpi_fr_ni0_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnpi_fr_ni0_port_flow_fifo_s;
+} sh_xnpi_fr_ni0_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_NI1_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_fr_ni1_port_flow_fifo_u {
+ mmr_t sh_xnpi_fr_ni1_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnpi_fr_ni1_port_flow_fifo_s;
+} sh_xnpi_fr_ni1_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FR_IILB_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_fr_iilb_port_flow_fifo_u {
+ mmr_t sh_xnpi_fr_iilb_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnpi_fr_iilb_port_flow_fifo_s;
+} sh_xnpi_fr_iilb_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_SIC_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_sic_flow_u {
+ mmr_t sh_xnmd_sic_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 5;
+ mmr_t reserved_0 : 2;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 5;
+ mmr_t reserved_1 : 2;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc0_test : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t credit_vc0_dyn : 5;
+ mmr_t reserved_3 : 3;
+ mmr_t credit_vc0_cap : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t credit_vc2_test : 5;
+ mmr_t reserved_5 : 3;
+ mmr_t credit_vc2_dyn : 5;
+ mmr_t reserved_6 : 3;
+ mmr_t credit_vc2_cap : 5;
+ mmr_t reserved_7 : 2;
+ mmr_t disable_bypass_out : 1;
+ } sh_xnmd_sic_flow_s;
+} sh_xnmd_sic_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_NI0_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_to_ni0_port_flow_u {
+ mmr_t sh_xnmd_to_ni0_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnmd_to_ni0_port_flow_s;
+} sh_xnmd_to_ni0_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_NI1_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_to_ni1_port_flow_u {
+ mmr_t sh_xnmd_to_ni1_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnmd_to_ni1_port_flow_s;
+} sh_xnmd_to_ni1_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_TO_IILB_PORT_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_to_iilb_port_flow_u {
+ mmr_t sh_xnmd_to_iilb_port_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t credit_vc0_dyn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t credit_vc0_cap : 6;
+ mmr_t reserved_4 : 10;
+ mmr_t credit_vc2_dyn : 6;
+ mmr_t reserved_5 : 2;
+ mmr_t credit_vc2_cap : 6;
+ mmr_t reserved_6 : 2;
+ } sh_xnmd_to_iilb_port_flow_s;
+} sh_xnmd_to_iilb_port_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_NI0_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_fr_ni0_port_flow_fifo_u {
+ mmr_t sh_xnmd_fr_ni0_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnmd_fr_ni0_port_flow_fifo_s;
+} sh_xnmd_fr_ni0_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_NI1_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_fr_ni1_port_flow_fifo_u {
+ mmr_t sh_xnmd_fr_ni1_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnmd_fr_ni1_port_flow_fifo_s;
+} sh_xnmd_fr_ni1_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FR_IILB_PORT_FLOW_FIFO" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_fr_iilb_port_flow_fifo_u {
+ mmr_t sh_xnmd_fr_iilb_port_flow_fifo_regval;
+ struct {
+ mmr_t entry_vc0_dyn : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t entry_vc0_cap : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t entry_vc2_dyn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t entry_vc2_cap : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t entry_vc0_test : 5;
+ mmr_t reserved_4 : 3;
+ mmr_t entry_vc2_test : 5;
+ mmr_t reserved_5 : 19;
+ } sh_xnmd_fr_iilb_port_flow_fifo_s;
+} sh_xnmd_fr_iilb_port_flow_fifo_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNII_INTRA_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnii_intra_flow_u {
+ mmr_t sh_xnii_intra_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc0_test : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t credit_vc0_cap : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t credit_vc2_test : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t credit_vc2_dyn : 7;
+ mmr_t reserved_6 : 1;
+ mmr_t credit_vc2_cap : 7;
+ mmr_t reserved_7 : 1;
+ } sh_xnii_intra_flow_s;
+} sh_xnii_intra_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNLB_INTRA_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnlb_intra_flow_u {
+ mmr_t sh_xnlb_intra_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t credit_vc0_test : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t credit_vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t credit_vc0_cap : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t credit_vc2_test : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t credit_vc2_dyn : 7;
+ mmr_t reserved_6 : 1;
+ mmr_t credit_vc2_cap : 7;
+ mmr_t disable_bypass_in : 1;
+ } sh_xnlb_intra_flow_s;
+} sh_xnlb_intra_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_NI0_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_to_ni0_intra_flow_debit_u {
+ mmr_t sh_xniilb_to_ni0_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xniilb_to_ni0_intra_flow_debit_s;
+} sh_xniilb_to_ni0_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_NI1_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_to_ni1_intra_flow_debit_u {
+ mmr_t sh_xniilb_to_ni1_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xniilb_to_ni1_intra_flow_debit_s;
+} sh_xniilb_to_ni1_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_to_md_intra_flow_debit_u {
+ mmr_t sh_xniilb_to_md_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xniilb_to_md_intra_flow_debit_s;
+} sh_xniilb_to_md_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_to_iilb_intra_flow_debit_u {
+ mmr_t sh_xniilb_to_iilb_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xniilb_to_iilb_intra_flow_debit_s;
+} sh_xniilb_to_iilb_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_to_pi_intra_flow_debit_u {
+ mmr_t sh_xniilb_to_pi_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xniilb_to_pi_intra_flow_debit_s;
+} sh_xniilb_to_pi_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_NI0_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_fr_ni0_intra_flow_credit_u {
+ mmr_t sh_xniilb_fr_ni0_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xniilb_fr_ni0_intra_flow_credit_s;
+} sh_xniilb_fr_ni0_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_NI1_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_fr_ni1_intra_flow_credit_u {
+ mmr_t sh_xniilb_fr_ni1_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xniilb_fr_ni1_intra_flow_credit_s;
+} sh_xniilb_fr_ni1_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_fr_md_intra_flow_credit_u {
+ mmr_t sh_xniilb_fr_md_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xniilb_fr_md_intra_flow_credit_s;
+} sh_xniilb_fr_md_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_fr_iilb_intra_flow_credit_u {
+ mmr_t sh_xniilb_fr_iilb_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xniilb_fr_iilb_intra_flow_credit_s;
+} sh_xniilb_fr_iilb_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_fr_pi_intra_flow_credit_u {
+ mmr_t sh_xniilb_fr_pi_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xniilb_fr_pi_intra_flow_credit_s;
+} sh_xniilb_fr_pi_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_to_pi_intra_flow_debit_u {
+ mmr_t sh_xnni0_to_pi_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni0_to_pi_intra_flow_debit_s;
+} sh_xnni0_to_pi_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_to_md_intra_flow_debit_u {
+ mmr_t sh_xnni0_to_md_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni0_to_md_intra_flow_debit_s;
+} sh_xnni0_to_md_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_to_iilb_intra_flow_debit_u {
+ mmr_t sh_xnni0_to_iilb_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni0_to_iilb_intra_flow_debit_s;
+} sh_xnni0_to_iilb_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_fr_pi_intra_flow_credit_u {
+ mmr_t sh_xnni0_fr_pi_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni0_fr_pi_intra_flow_credit_s;
+} sh_xnni0_fr_pi_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_fr_md_intra_flow_credit_u {
+ mmr_t sh_xnni0_fr_md_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni0_fr_md_intra_flow_credit_s;
+} sh_xnni0_fr_md_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_fr_iilb_intra_flow_credit_u {
+ mmr_t sh_xnni0_fr_iilb_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni0_fr_iilb_intra_flow_credit_s;
+} sh_xnni0_fr_iilb_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_0_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_0_intrani_flow_u {
+ mmr_t sh_xnni0_0_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni0_0_intrani_flow_s;
+} sh_xnni0_0_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_1_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_1_intrani_flow_u {
+ mmr_t sh_xnni0_1_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc1_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc1_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni0_1_intrani_flow_s;
+} sh_xnni0_1_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_2_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_2_intrani_flow_u {
+ mmr_t sh_xnni0_2_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni0_2_intrani_flow_s;
+} sh_xnni0_2_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_3_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_3_intrani_flow_u {
+ mmr_t sh_xnni0_3_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc3_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc3_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni0_3_intrani_flow_s;
+} sh_xnni0_3_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_VCSWITCH_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_vcswitch_flow_u {
+ mmr_t sh_xnni0_vcswitch_flow_regval;
+ struct {
+ mmr_t ni_vcfifo_dateline_switch : 1;
+ mmr_t reserved_0 : 7;
+ mmr_t pi_vcfifo_switch : 1;
+ mmr_t reserved_1 : 7;
+ mmr_t md_vcfifo_switch : 1;
+ mmr_t reserved_2 : 7;
+ mmr_t iilb_vcfifo_switch : 1;
+ mmr_t reserved_3 : 7;
+ mmr_t disable_sync_bypass_in : 1;
+ mmr_t disable_sync_bypass_out : 1;
+ mmr_t async_fifoes : 1;
+ mmr_t reserved_4 : 29;
+ } sh_xnni0_vcswitch_flow_s;
+} sh_xnni0_vcswitch_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_TIMER_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_timer_reg_u {
+ mmr_t sh_xnni0_timer_reg_regval;
+ struct {
+ mmr_t timeout_reg : 24;
+ mmr_t reserved_0 : 8;
+ mmr_t linkcleanup_reg : 1;
+ mmr_t reserved_1 : 31;
+ } sh_xnni0_timer_reg_s;
+} sh_xnni0_timer_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FIFO02_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_fifo02_flow_u {
+ mmr_t sh_xnni0_fifo02_flow_regval;
+ struct {
+ mmr_t count_vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t count_vc0_dyn : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t count_vc0_cap : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t count_vc2_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t count_vc2_dyn : 4;
+ mmr_t reserved_4 : 4;
+ mmr_t count_vc2_cap : 4;
+ mmr_t reserved_5 : 20;
+ } sh_xnni0_fifo02_flow_s;
+} sh_xnni0_fifo02_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_FIFO13_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_fifo13_flow_u {
+ mmr_t sh_xnni0_fifo13_flow_regval;
+ struct {
+ mmr_t count_vc1_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t count_vc1_dyn : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t count_vc1_cap : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t count_vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t count_vc3_dyn : 4;
+ mmr_t reserved_4 : 4;
+ mmr_t count_vc3_cap : 4;
+ mmr_t reserved_5 : 20;
+ } sh_xnni0_fifo13_flow_s;
+} sh_xnni0_fifo13_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_NI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_ni_flow_u {
+ mmr_t sh_xnni0_ni_flow_regval;
+ struct {
+ mmr_t vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t vc0_dyn : 4;
+ mmr_t vc0_cap : 4;
+ mmr_t vc1_limit : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t vc1_dyn : 4;
+ mmr_t vc1_cap : 4;
+ mmr_t vc2_limit : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t vc2_dyn : 4;
+ mmr_t vc2_cap : 4;
+ mmr_t vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t vc3_dyn : 4;
+ mmr_t vc3_cap : 4;
+ } sh_xnni0_ni_flow_s;
+} sh_xnni0_ni_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_DEAD_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_dead_flow_u {
+ mmr_t sh_xnni0_dead_flow_regval;
+ struct {
+ mmr_t vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t vc0_dyn : 4;
+ mmr_t vc0_cap : 4;
+ mmr_t vc1_limit : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t vc1_dyn : 4;
+ mmr_t vc1_cap : 4;
+ mmr_t vc2_limit : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t vc2_dyn : 4;
+ mmr_t vc2_cap : 4;
+ mmr_t vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t vc3_dyn : 4;
+ mmr_t vc3_cap : 4;
+ } sh_xnni0_dead_flow_s;
+} sh_xnni0_dead_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI0_INJECT_AGE" */
+/* ==================================================================== */
+
+typedef union sh_xnni0_inject_age_u {
+ mmr_t sh_xnni0_inject_age_regval;
+ struct {
+ mmr_t request_inject : 8;
+ mmr_t reply_inject : 8;
+ mmr_t reserved_0 : 48;
+ } sh_xnni0_inject_age_s;
+} sh_xnni0_inject_age_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_PI_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_to_pi_intra_flow_debit_u {
+ mmr_t sh_xnni1_to_pi_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni1_to_pi_intra_flow_debit_s;
+} sh_xnni1_to_pi_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_MD_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_to_md_intra_flow_debit_u {
+ mmr_t sh_xnni1_to_md_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni1_to_md_intra_flow_debit_s;
+} sh_xnni1_to_md_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TO_IILB_INTRA_FLOW_DEBIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_to_iilb_intra_flow_debit_u {
+ mmr_t sh_xnni1_to_iilb_intra_flow_debit_regval;
+ struct {
+ mmr_t vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_force_cred : 1;
+ mmr_t vc2_withhold : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t vc2_force_cred : 1;
+ mmr_t reserved_2 : 8;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_4 : 9;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_5 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_6 : 1;
+ } sh_xnni1_to_iilb_intra_flow_debit_s;
+} sh_xnni1_to_iilb_intra_flow_debit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_PI_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_fr_pi_intra_flow_credit_u {
+ mmr_t sh_xnni1_fr_pi_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni1_fr_pi_intra_flow_credit_s;
+} sh_xnni1_fr_pi_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_MD_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_fr_md_intra_flow_credit_u {
+ mmr_t sh_xnni1_fr_md_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni1_fr_md_intra_flow_credit_s;
+} sh_xnni1_fr_md_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FR_IILB_INTRA_FLOW_CREDIT" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_fr_iilb_intra_flow_credit_u {
+ mmr_t sh_xnni1_fr_iilb_intra_flow_credit_regval;
+ struct {
+ mmr_t vc0_test : 7;
+ mmr_t reserved_0 : 1;
+ mmr_t vc0_dyn : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t vc0_cap : 7;
+ mmr_t reserved_2 : 1;
+ mmr_t vc2_test : 7;
+ mmr_t reserved_3 : 1;
+ mmr_t vc2_dyn : 7;
+ mmr_t reserved_4 : 1;
+ mmr_t vc2_cap : 7;
+ mmr_t reserved_5 : 17;
+ } sh_xnni1_fr_iilb_intra_flow_credit_s;
+} sh_xnni1_fr_iilb_intra_flow_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_0_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_0_intrani_flow_u {
+ mmr_t sh_xnni1_0_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc0_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc0_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni1_0_intrani_flow_s;
+} sh_xnni1_0_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_1_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_1_intrani_flow_u {
+ mmr_t sh_xnni1_1_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc1_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc1_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni1_1_intrani_flow_s;
+} sh_xnni1_1_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_2_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_2_intrani_flow_u {
+ mmr_t sh_xnni1_2_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc2_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc2_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni1_2_intrani_flow_s;
+} sh_xnni1_2_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_3_INTRANI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_3_intrani_flow_u {
+ mmr_t sh_xnni1_3_intrani_flow_regval;
+ struct {
+ mmr_t debit_vc3_withhold : 6;
+ mmr_t reserved_0 : 1;
+ mmr_t debit_vc3_force_cred : 1;
+ mmr_t reserved_1 : 56;
+ } sh_xnni1_3_intrani_flow_s;
+} sh_xnni1_3_intrani_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_VCSWITCH_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_vcswitch_flow_u {
+ mmr_t sh_xnni1_vcswitch_flow_regval;
+ struct {
+ mmr_t ni_vcfifo_dateline_switch : 1;
+ mmr_t reserved_0 : 7;
+ mmr_t pi_vcfifo_switch : 1;
+ mmr_t reserved_1 : 7;
+ mmr_t md_vcfifo_switch : 1;
+ mmr_t reserved_2 : 7;
+ mmr_t iilb_vcfifo_switch : 1;
+ mmr_t reserved_3 : 7;
+ mmr_t disable_sync_bypass_in : 1;
+ mmr_t disable_sync_bypass_out : 1;
+ mmr_t async_fifoes : 1;
+ mmr_t reserved_4 : 29;
+ } sh_xnni1_vcswitch_flow_s;
+} sh_xnni1_vcswitch_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_TIMER_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_timer_reg_u {
+ mmr_t sh_xnni1_timer_reg_regval;
+ struct {
+ mmr_t timeout_reg : 24;
+ mmr_t reserved_0 : 8;
+ mmr_t linkcleanup_reg : 1;
+ mmr_t reserved_1 : 31;
+ } sh_xnni1_timer_reg_s;
+} sh_xnni1_timer_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FIFO02_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_fifo02_flow_u {
+ mmr_t sh_xnni1_fifo02_flow_regval;
+ struct {
+ mmr_t count_vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t count_vc0_dyn : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t count_vc0_cap : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t count_vc2_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t count_vc2_dyn : 4;
+ mmr_t reserved_4 : 4;
+ mmr_t count_vc2_cap : 4;
+ mmr_t reserved_5 : 20;
+ } sh_xnni1_fifo02_flow_s;
+} sh_xnni1_fifo02_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_FIFO13_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_fifo13_flow_u {
+ mmr_t sh_xnni1_fifo13_flow_regval;
+ struct {
+ mmr_t count_vc1_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t count_vc1_dyn : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t count_vc1_cap : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t count_vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t count_vc3_dyn : 4;
+ mmr_t reserved_4 : 4;
+ mmr_t count_vc3_cap : 4;
+ mmr_t reserved_5 : 20;
+ } sh_xnni1_fifo13_flow_s;
+} sh_xnni1_fifo13_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_NI_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_ni_flow_u {
+ mmr_t sh_xnni1_ni_flow_regval;
+ struct {
+ mmr_t vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t vc0_dyn : 4;
+ mmr_t vc0_cap : 4;
+ mmr_t vc1_limit : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t vc1_dyn : 4;
+ mmr_t vc1_cap : 4;
+ mmr_t vc2_limit : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t vc2_dyn : 4;
+ mmr_t vc2_cap : 4;
+ mmr_t vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t vc3_dyn : 4;
+ mmr_t vc3_cap : 4;
+ } sh_xnni1_ni_flow_s;
+} sh_xnni1_ni_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_DEAD_FLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_dead_flow_u {
+ mmr_t sh_xnni1_dead_flow_regval;
+ struct {
+ mmr_t vc0_limit : 4;
+ mmr_t reserved_0 : 4;
+ mmr_t vc0_dyn : 4;
+ mmr_t vc0_cap : 4;
+ mmr_t vc1_limit : 4;
+ mmr_t reserved_1 : 4;
+ mmr_t vc1_dyn : 4;
+ mmr_t vc1_cap : 4;
+ mmr_t vc2_limit : 4;
+ mmr_t reserved_2 : 4;
+ mmr_t vc2_dyn : 4;
+ mmr_t vc2_cap : 4;
+ mmr_t vc3_limit : 4;
+ mmr_t reserved_3 : 4;
+ mmr_t vc3_dyn : 4;
+ mmr_t vc3_cap : 4;
+ } sh_xnni1_dead_flow_s;
+} sh_xnni1_dead_flow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNNI1_INJECT_AGE" */
+/* ==================================================================== */
+
+typedef union sh_xnni1_inject_age_u {
+ mmr_t sh_xnni1_inject_age_regval;
+ struct {
+ mmr_t request_inject : 8;
+ mmr_t reply_inject : 8;
+ mmr_t reserved_0 : 48;
+ } sh_xnni1_inject_age_s;
+} sh_xnni1_inject_age_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_SEL" */
+/* XN Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_debug_sel_u {
+ mmr_t sh_xn_debug_sel_regval;
+ struct {
+ mmr_t nibble0_rlm_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_rlm_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_rlm_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_rlm_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_rlm_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_rlm_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_rlm_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_rlm_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t trigger_enable : 1;
+ } sh_xn_debug_sel_s;
+} sh_xn_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_TRIG_SEL" */
+/* XN Debug trigger Select */
+/* ==================================================================== */
+
+typedef union sh_xn_debug_trig_sel_u {
+ mmr_t sh_xn_debug_trig_sel_regval;
+ struct {
+ mmr_t trigger0_rlm_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t trigger0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t trigger1_rlm_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t trigger1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t trigger2_rlm_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t trigger2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t trigger3_rlm_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t trigger3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t trigger4_rlm_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t trigger4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t trigger5_rlm_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t trigger5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t trigger6_rlm_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t trigger6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t trigger7_rlm_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t trigger7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_debug_trig_sel_s;
+} sh_xn_debug_trig_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_TRIGGER_COMPARE" */
+/* XN Debug Compare */
+/* ==================================================================== */
+
+typedef union sh_xn_trigger_compare_u {
+ mmr_t sh_xn_trigger_compare_regval;
+ struct {
+ mmr_t mask : 32;
+ mmr_t reserved_0 : 32;
+ } sh_xn_trigger_compare_s;
+} sh_xn_trigger_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_TRIGGER_DATA" */
+/* XN Debug Compare Data */
+/* ==================================================================== */
+
+typedef union sh_xn_trigger_data_u {
+ mmr_t sh_xn_trigger_data_regval;
+ struct {
+ mmr_t compare_pattern : 32;
+ mmr_t reserved_0 : 32;
+ } sh_xn_trigger_data_s;
+} sh_xn_trigger_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_DEBUG_SEL" */
+/* XN IILB Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_debug_sel_u {
+ mmr_t sh_xn_iilb_debug_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_iilb_debug_sel_s;
+} sh_xn_iilb_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_DEBUG_SEL" */
+/* XN PI Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_debug_sel_u {
+ mmr_t sh_xn_pi_debug_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_pi_debug_sel_s;
+} sh_xn_pi_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_DEBUG_SEL" */
+/* XN MD Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_md_debug_sel_u {
+ mmr_t sh_xn_md_debug_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_md_debug_sel_s;
+} sh_xn_md_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_DEBUG_SEL" */
+/* XN NI0 Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_debug_sel_u {
+ mmr_t sh_xn_ni0_debug_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_ni0_debug_sel_s;
+} sh_xn_ni0_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_DEBUG_SEL" */
+/* XN NI1 Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_debug_sel_u {
+ mmr_t sh_xn_ni1_debug_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_xn_ni1_debug_sel_s;
+} sh_xn_ni1_debug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_EXP_DATA0" */
+/* IILB compare LB input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_lb_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_lb_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_lb_cmp_exp_data0_s;
+} sh_xn_iilb_lb_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_EXP_DATA1" */
+/* IILB compare LB input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_lb_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_lb_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_lb_cmp_exp_data1_s;
+} sh_xn_iilb_lb_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_ENABLE0" */
+/* IILB compare LB input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_lb_cmp_enable0_u {
+ mmr_t sh_xn_iilb_lb_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_lb_cmp_enable0_s;
+} sh_xn_iilb_lb_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_LB_CMP_ENABLE1" */
+/* IILB compare LB input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_lb_cmp_enable1_u {
+ mmr_t sh_xn_iilb_lb_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_lb_cmp_enable1_s;
+} sh_xn_iilb_lb_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_EXP_DATA0" */
+/* IILB compare II input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ii_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_ii_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ii_cmp_exp_data0_s;
+} sh_xn_iilb_ii_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_EXP_DATA1" */
+/* IILB compare II input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ii_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_ii_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ii_cmp_exp_data1_s;
+} sh_xn_iilb_ii_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_ENABLE0" */
+/* IILB compare II input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ii_cmp_enable0_u {
+ mmr_t sh_xn_iilb_ii_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ii_cmp_enable0_s;
+} sh_xn_iilb_ii_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_II_CMP_ENABLE1" */
+/* IILB compare II input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ii_cmp_enable1_u {
+ mmr_t sh_xn_iilb_ii_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ii_cmp_enable1_s;
+} sh_xn_iilb_ii_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_EXP_DATA0" */
+/* IILB compare MD input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_md_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_md_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_md_cmp_exp_data0_s;
+} sh_xn_iilb_md_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_EXP_DATA1" */
+/* IILB compare MD input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_md_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_md_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_md_cmp_exp_data1_s;
+} sh_xn_iilb_md_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_ENABLE0" */
+/* IILB compare MD input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_md_cmp_enable0_u {
+ mmr_t sh_xn_iilb_md_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_md_cmp_enable0_s;
+} sh_xn_iilb_md_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_MD_CMP_ENABLE1" */
+/* IILB compare MD input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_md_cmp_enable1_u {
+ mmr_t sh_xn_iilb_md_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_md_cmp_enable1_s;
+} sh_xn_iilb_md_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_EXP_DATA0" */
+/* IILB compare PI input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_pi_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_pi_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_pi_cmp_exp_data0_s;
+} sh_xn_iilb_pi_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_EXP_DATA1" */
+/* IILB compare PI input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_pi_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_pi_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_pi_cmp_exp_data1_s;
+} sh_xn_iilb_pi_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_ENABLE0" */
+/* IILB compare PI input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_pi_cmp_enable0_u {
+ mmr_t sh_xn_iilb_pi_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_pi_cmp_enable0_s;
+} sh_xn_iilb_pi_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_PI_CMP_ENABLE1" */
+/* IILB compare PI input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_pi_cmp_enable1_u {
+ mmr_t sh_xn_iilb_pi_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_pi_cmp_enable1_s;
+} sh_xn_iilb_pi_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA0" */
+/* IILB compare NI0 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni0_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_ni0_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ni0_cmp_exp_data0_s;
+} sh_xn_iilb_ni0_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_EXP_DATA1" */
+/* IILB compare NI0 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni0_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_ni0_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ni0_cmp_exp_data1_s;
+} sh_xn_iilb_ni0_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_ENABLE0" */
+/* IILB compare NI0 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni0_cmp_enable0_u {
+ mmr_t sh_xn_iilb_ni0_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ni0_cmp_enable0_s;
+} sh_xn_iilb_ni0_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI0_CMP_ENABLE1" */
+/* IILB compare NI0 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni0_cmp_enable1_u {
+ mmr_t sh_xn_iilb_ni0_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ni0_cmp_enable1_s;
+} sh_xn_iilb_ni0_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA0" */
+/* IILB compare NI1 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni1_cmp_exp_data0_u {
+ mmr_t sh_xn_iilb_ni1_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ni1_cmp_exp_data0_s;
+} sh_xn_iilb_ni1_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_EXP_DATA1" */
+/* IILB compare NI1 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni1_cmp_exp_data1_u {
+ mmr_t sh_xn_iilb_ni1_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_iilb_ni1_cmp_exp_data1_s;
+} sh_xn_iilb_ni1_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_ENABLE0" */
+/* IILB compare NI1 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni1_cmp_enable0_u {
+ mmr_t sh_xn_iilb_ni1_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ni1_cmp_enable0_s;
+} sh_xn_iilb_ni1_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_IILB_NI1_CMP_ENABLE1" */
+/* IILB compare NI1 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_iilb_ni1_cmp_enable1_u {
+ mmr_t sh_xn_iilb_ni1_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_iilb_ni1_cmp_enable1_s;
+} sh_xn_iilb_ni1_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_EXP_DATA0" */
+/* MD compare IILB input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_iilb_cmp_exp_data0_u {
+ mmr_t sh_xn_md_iilb_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_iilb_cmp_exp_data0_s;
+} sh_xn_md_iilb_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_EXP_DATA1" */
+/* MD compare IILB input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_iilb_cmp_exp_data1_u {
+ mmr_t sh_xn_md_iilb_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_iilb_cmp_exp_data1_s;
+} sh_xn_md_iilb_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_ENABLE0" */
+/* MD compare IILB input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_iilb_cmp_enable0_u {
+ mmr_t sh_xn_md_iilb_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_iilb_cmp_enable0_s;
+} sh_xn_md_iilb_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_IILB_CMP_ENABLE1" */
+/* MD compare IILB input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_iilb_cmp_enable1_u {
+ mmr_t sh_xn_md_iilb_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_iilb_cmp_enable1_s;
+} sh_xn_md_iilb_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_EXP_DATA0" */
+/* MD compare NI0 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni0_cmp_exp_data0_u {
+ mmr_t sh_xn_md_ni0_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_ni0_cmp_exp_data0_s;
+} sh_xn_md_ni0_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_EXP_DATA1" */
+/* MD compare NI0 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni0_cmp_exp_data1_u {
+ mmr_t sh_xn_md_ni0_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_ni0_cmp_exp_data1_s;
+} sh_xn_md_ni0_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_ENABLE0" */
+/* MD compare NI0 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni0_cmp_enable0_u {
+ mmr_t sh_xn_md_ni0_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_ni0_cmp_enable0_s;
+} sh_xn_md_ni0_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI0_CMP_ENABLE1" */
+/* MD compare NI0 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni0_cmp_enable1_u {
+ mmr_t sh_xn_md_ni0_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_ni0_cmp_enable1_s;
+} sh_xn_md_ni0_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_EXP_DATA0" */
+/* MD compare NI1 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni1_cmp_exp_data0_u {
+ mmr_t sh_xn_md_ni1_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_ni1_cmp_exp_data0_s;
+} sh_xn_md_ni1_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_EXP_DATA1" */
+/* MD compare NI1 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni1_cmp_exp_data1_u {
+ mmr_t sh_xn_md_ni1_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_ni1_cmp_exp_data1_s;
+} sh_xn_md_ni1_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_ENABLE0" */
+/* MD compare NI1 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni1_cmp_enable0_u {
+ mmr_t sh_xn_md_ni1_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_ni1_cmp_enable0_s;
+} sh_xn_md_ni1_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_NI1_CMP_ENABLE1" */
+/* MD compare NI1 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_ni1_cmp_enable1_u {
+ mmr_t sh_xn_md_ni1_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_ni1_cmp_enable1_s;
+} sh_xn_md_ni1_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_EXP_HDR0" */
+/* MD compare SIC input expected header0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_exp_hdr0_u {
+ mmr_t sh_xn_md_sic_cmp_exp_hdr0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_md_sic_cmp_exp_hdr0_s;
+} sh_xn_md_sic_cmp_exp_hdr0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_EXP_HDR1" */
+/* MD compare SIC input expected header1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_exp_hdr1_u {
+ mmr_t sh_xn_md_sic_cmp_exp_hdr1_regval;
+ struct {
+ mmr_t data : 42;
+ mmr_t reserved_0 : 22;
+ } sh_xn_md_sic_cmp_exp_hdr1_s;
+} sh_xn_md_sic_cmp_exp_hdr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE0" */
+/* MD compare SIC header enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_hdr_enable0_u {
+ mmr_t sh_xn_md_sic_cmp_hdr_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_md_sic_cmp_hdr_enable0_s;
+} sh_xn_md_sic_cmp_hdr_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_HDR_ENABLE1" */
+/* MD compare SIC header enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_hdr_enable1_u {
+ mmr_t sh_xn_md_sic_cmp_hdr_enable1_regval;
+ struct {
+ mmr_t enable : 42;
+ mmr_t reserved_0 : 22;
+ } sh_xn_md_sic_cmp_hdr_enable1_s;
+} sh_xn_md_sic_cmp_hdr_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA0" */
+/* MD compare SIC data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data0_u {
+ mmr_t sh_xn_md_sic_cmp_data0_regval;
+ struct {
+ mmr_t data0 : 64;
+ } sh_xn_md_sic_cmp_data0_s;
+} sh_xn_md_sic_cmp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA1" */
+/* MD compare SIC data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data1_u {
+ mmr_t sh_xn_md_sic_cmp_data1_regval;
+ struct {
+ mmr_t data1 : 64;
+ } sh_xn_md_sic_cmp_data1_s;
+} sh_xn_md_sic_cmp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA2" */
+/* MD compare SIC data2 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data2_u {
+ mmr_t sh_xn_md_sic_cmp_data2_regval;
+ struct {
+ mmr_t data2 : 64;
+ } sh_xn_md_sic_cmp_data2_s;
+} sh_xn_md_sic_cmp_data2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA3" */
+/* MD compare SIC data3 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data3_u {
+ mmr_t sh_xn_md_sic_cmp_data3_regval;
+ struct {
+ mmr_t data3 : 64;
+ } sh_xn_md_sic_cmp_data3_s;
+} sh_xn_md_sic_cmp_data3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE0" */
+/* MD enable compare SIC data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data_enable0_u {
+ mmr_t sh_xn_md_sic_cmp_data_enable0_regval;
+ struct {
+ mmr_t data_enable0 : 64;
+ } sh_xn_md_sic_cmp_data_enable0_s;
+} sh_xn_md_sic_cmp_data_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE1" */
+/* MD enable compare SIC data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data_enable1_u {
+ mmr_t sh_xn_md_sic_cmp_data_enable1_regval;
+ struct {
+ mmr_t data_enable1 : 64;
+ } sh_xn_md_sic_cmp_data_enable1_s;
+} sh_xn_md_sic_cmp_data_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE2" */
+/* MD enable compare SIC data2 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data_enable2_u {
+ mmr_t sh_xn_md_sic_cmp_data_enable2_regval;
+ struct {
+ mmr_t data_enable2 : 64;
+ } sh_xn_md_sic_cmp_data_enable2_s;
+} sh_xn_md_sic_cmp_data_enable2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_SIC_CMP_DATA_ENABLE3" */
+/* MD enable compare SIC data3 */
+/* ==================================================================== */
+
+typedef union sh_xn_md_sic_cmp_data_enable3_u {
+ mmr_t sh_xn_md_sic_cmp_data_enable3_regval;
+ struct {
+ mmr_t data_enable3 : 64;
+ } sh_xn_md_sic_cmp_data_enable3_s;
+} sh_xn_md_sic_cmp_data_enable3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_EXP_DATA0" */
+/* PI compare IILB input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_iilb_cmp_exp_data0_u {
+ mmr_t sh_xn_pi_iilb_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_iilb_cmp_exp_data0_s;
+} sh_xn_pi_iilb_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_EXP_DATA1" */
+/* PI compare IILB input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_iilb_cmp_exp_data1_u {
+ mmr_t sh_xn_pi_iilb_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_iilb_cmp_exp_data1_s;
+} sh_xn_pi_iilb_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_ENABLE0" */
+/* PI compare IILB input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_iilb_cmp_enable0_u {
+ mmr_t sh_xn_pi_iilb_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_iilb_cmp_enable0_s;
+} sh_xn_pi_iilb_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_IILB_CMP_ENABLE1" */
+/* PI compare IILB input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_iilb_cmp_enable1_u {
+ mmr_t sh_xn_pi_iilb_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_iilb_cmp_enable1_s;
+} sh_xn_pi_iilb_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_EXP_DATA0" */
+/* PI compare NI0 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni0_cmp_exp_data0_u {
+ mmr_t sh_xn_pi_ni0_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_ni0_cmp_exp_data0_s;
+} sh_xn_pi_ni0_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_EXP_DATA1" */
+/* PI compare NI0 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni0_cmp_exp_data1_u {
+ mmr_t sh_xn_pi_ni0_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_ni0_cmp_exp_data1_s;
+} sh_xn_pi_ni0_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_ENABLE0" */
+/* PI compare NI0 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni0_cmp_enable0_u {
+ mmr_t sh_xn_pi_ni0_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_ni0_cmp_enable0_s;
+} sh_xn_pi_ni0_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI0_CMP_ENABLE1" */
+/* PI compare NI0 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni0_cmp_enable1_u {
+ mmr_t sh_xn_pi_ni0_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_ni0_cmp_enable1_s;
+} sh_xn_pi_ni0_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_EXP_DATA0" */
+/* PI compare NI1 input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni1_cmp_exp_data0_u {
+ mmr_t sh_xn_pi_ni1_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_ni1_cmp_exp_data0_s;
+} sh_xn_pi_ni1_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_EXP_DATA1" */
+/* PI compare NI1 input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni1_cmp_exp_data1_u {
+ mmr_t sh_xn_pi_ni1_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_ni1_cmp_exp_data1_s;
+} sh_xn_pi_ni1_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_ENABLE0" */
+/* PI compare NI1 input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni1_cmp_enable0_u {
+ mmr_t sh_xn_pi_ni1_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_ni1_cmp_enable0_s;
+} sh_xn_pi_ni1_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_NI1_CMP_ENABLE1" */
+/* PI compare NI1 input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_ni1_cmp_enable1_u {
+ mmr_t sh_xn_pi_ni1_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_ni1_cmp_enable1_s;
+} sh_xn_pi_ni1_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_EXP_HDR0" */
+/* PI compare SIC input expected header0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_exp_hdr0_u {
+ mmr_t sh_xn_pi_sic_cmp_exp_hdr0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_pi_sic_cmp_exp_hdr0_s;
+} sh_xn_pi_sic_cmp_exp_hdr0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_EXP_HDR1" */
+/* PI compare SIC input expected header1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_exp_hdr1_u {
+ mmr_t sh_xn_pi_sic_cmp_exp_hdr1_regval;
+ struct {
+ mmr_t data : 42;
+ mmr_t reserved_0 : 22;
+ } sh_xn_pi_sic_cmp_exp_hdr1_s;
+} sh_xn_pi_sic_cmp_exp_hdr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE0" */
+/* PI compare SIC header enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_hdr_enable0_u {
+ mmr_t sh_xn_pi_sic_cmp_hdr_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_pi_sic_cmp_hdr_enable0_s;
+} sh_xn_pi_sic_cmp_hdr_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_HDR_ENABLE1" */
+/* PI compare SIC header enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_hdr_enable1_u {
+ mmr_t sh_xn_pi_sic_cmp_hdr_enable1_regval;
+ struct {
+ mmr_t enable : 42;
+ mmr_t reserved_0 : 22;
+ } sh_xn_pi_sic_cmp_hdr_enable1_s;
+} sh_xn_pi_sic_cmp_hdr_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA0" */
+/* PI compare SIC data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data0_u {
+ mmr_t sh_xn_pi_sic_cmp_data0_regval;
+ struct {
+ mmr_t data0 : 64;
+ } sh_xn_pi_sic_cmp_data0_s;
+} sh_xn_pi_sic_cmp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA1" */
+/* PI compare SIC data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data1_u {
+ mmr_t sh_xn_pi_sic_cmp_data1_regval;
+ struct {
+ mmr_t data1 : 64;
+ } sh_xn_pi_sic_cmp_data1_s;
+} sh_xn_pi_sic_cmp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA2" */
+/* PI compare SIC data2 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data2_u {
+ mmr_t sh_xn_pi_sic_cmp_data2_regval;
+ struct {
+ mmr_t data2 : 64;
+ } sh_xn_pi_sic_cmp_data2_s;
+} sh_xn_pi_sic_cmp_data2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA3" */
+/* PI compare SIC data3 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data3_u {
+ mmr_t sh_xn_pi_sic_cmp_data3_regval;
+ struct {
+ mmr_t data3 : 64;
+ } sh_xn_pi_sic_cmp_data3_s;
+} sh_xn_pi_sic_cmp_data3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE0" */
+/* PI enable compare SIC data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data_enable0_u {
+ mmr_t sh_xn_pi_sic_cmp_data_enable0_regval;
+ struct {
+ mmr_t data_enable0 : 64;
+ } sh_xn_pi_sic_cmp_data_enable0_s;
+} sh_xn_pi_sic_cmp_data_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE1" */
+/* PI enable compare SIC data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data_enable1_u {
+ mmr_t sh_xn_pi_sic_cmp_data_enable1_regval;
+ struct {
+ mmr_t data_enable1 : 64;
+ } sh_xn_pi_sic_cmp_data_enable1_s;
+} sh_xn_pi_sic_cmp_data_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE2" */
+/* PI enable compare SIC data2 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data_enable2_u {
+ mmr_t sh_xn_pi_sic_cmp_data_enable2_regval;
+ struct {
+ mmr_t data_enable2 : 64;
+ } sh_xn_pi_sic_cmp_data_enable2_s;
+} sh_xn_pi_sic_cmp_data_enable2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_PI_SIC_CMP_DATA_ENABLE3" */
+/* PI enable compare SIC data3 */
+/* ==================================================================== */
+
+typedef union sh_xn_pi_sic_cmp_data_enable3_u {
+ mmr_t sh_xn_pi_sic_cmp_data_enable3_regval;
+ struct {
+ mmr_t data_enable3 : 64;
+ } sh_xn_pi_sic_cmp_data_enable3_s;
+} sh_xn_pi_sic_cmp_data_enable3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA0" */
+/* NI0 compare IILB input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_iilb_cmp_exp_data0_u {
+ mmr_t sh_xn_ni0_iilb_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_iilb_cmp_exp_data0_s;
+} sh_xn_ni0_iilb_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_EXP_DATA1" */
+/* NI0 compare IILB input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_iilb_cmp_exp_data1_u {
+ mmr_t sh_xn_ni0_iilb_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_iilb_cmp_exp_data1_s;
+} sh_xn_ni0_iilb_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_ENABLE0" */
+/* NI0 compare IILB input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_iilb_cmp_enable0_u {
+ mmr_t sh_xn_ni0_iilb_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_iilb_cmp_enable0_s;
+} sh_xn_ni0_iilb_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_IILB_CMP_ENABLE1" */
+/* NI0 compare IILB input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_iilb_cmp_enable1_u {
+ mmr_t sh_xn_ni0_iilb_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_iilb_cmp_enable1_s;
+} sh_xn_ni0_iilb_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_EXP_DATA0" */
+/* NI0 compare PI input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_pi_cmp_exp_data0_u {
+ mmr_t sh_xn_ni0_pi_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_pi_cmp_exp_data0_s;
+} sh_xn_ni0_pi_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_EXP_DATA1" */
+/* NI0 compare PI input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_pi_cmp_exp_data1_u {
+ mmr_t sh_xn_ni0_pi_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_pi_cmp_exp_data1_s;
+} sh_xn_ni0_pi_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_ENABLE0" */
+/* NI0 compare PI input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_pi_cmp_enable0_u {
+ mmr_t sh_xn_ni0_pi_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_pi_cmp_enable0_s;
+} sh_xn_ni0_pi_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_PI_CMP_ENABLE1" */
+/* NI0 compare PI input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_pi_cmp_enable1_u {
+ mmr_t sh_xn_ni0_pi_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_pi_cmp_enable1_s;
+} sh_xn_ni0_pi_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_EXP_DATA0" */
+/* NI0 compare MD input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_md_cmp_exp_data0_u {
+ mmr_t sh_xn_ni0_md_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_md_cmp_exp_data0_s;
+} sh_xn_ni0_md_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_EXP_DATA1" */
+/* NI0 compare MD input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_md_cmp_exp_data1_u {
+ mmr_t sh_xn_ni0_md_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_md_cmp_exp_data1_s;
+} sh_xn_ni0_md_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_ENABLE0" */
+/* NI0 compare MD input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_md_cmp_enable0_u {
+ mmr_t sh_xn_ni0_md_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_md_cmp_enable0_s;
+} sh_xn_ni0_md_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_MD_CMP_ENABLE1" */
+/* NI0 compare MD input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_md_cmp_enable1_u {
+ mmr_t sh_xn_ni0_md_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_md_cmp_enable1_s;
+} sh_xn_ni0_md_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_EXP_DATA0" */
+/* NI0 compare NI input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_ni_cmp_exp_data0_u {
+ mmr_t sh_xn_ni0_ni_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_ni_cmp_exp_data0_s;
+} sh_xn_ni0_ni_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_EXP_DATA1" */
+/* NI0 compare NI input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_ni_cmp_exp_data1_u {
+ mmr_t sh_xn_ni0_ni_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_ni_cmp_exp_data1_s;
+} sh_xn_ni0_ni_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_ENABLE0" */
+/* NI0 compare NI input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_ni_cmp_enable0_u {
+ mmr_t sh_xn_ni0_ni_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_ni_cmp_enable0_s;
+} sh_xn_ni0_ni_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_NI_CMP_ENABLE1" */
+/* NI0 compare NI input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_ni_cmp_enable1_u {
+ mmr_t sh_xn_ni0_ni_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_ni_cmp_enable1_s;
+} sh_xn_ni0_ni_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA0" */
+/* NI0 compare LLP input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_llp_cmp_exp_data0_u {
+ mmr_t sh_xn_ni0_llp_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_llp_cmp_exp_data0_s;
+} sh_xn_ni0_llp_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_EXP_DATA1" */
+/* NI0 compare LLP input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_llp_cmp_exp_data1_u {
+ mmr_t sh_xn_ni0_llp_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni0_llp_cmp_exp_data1_s;
+} sh_xn_ni0_llp_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_ENABLE0" */
+/* NI0 compare LLP input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_llp_cmp_enable0_u {
+ mmr_t sh_xn_ni0_llp_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_llp_cmp_enable0_s;
+} sh_xn_ni0_llp_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI0_LLP_CMP_ENABLE1" */
+/* NI0 compare LLP input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni0_llp_cmp_enable1_u {
+ mmr_t sh_xn_ni0_llp_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni0_llp_cmp_enable1_s;
+} sh_xn_ni0_llp_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA0" */
+/* NI1 compare IILB input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_iilb_cmp_exp_data0_u {
+ mmr_t sh_xn_ni1_iilb_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_iilb_cmp_exp_data0_s;
+} sh_xn_ni1_iilb_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_EXP_DATA1" */
+/* NI1 compare IILB input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_iilb_cmp_exp_data1_u {
+ mmr_t sh_xn_ni1_iilb_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_iilb_cmp_exp_data1_s;
+} sh_xn_ni1_iilb_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_ENABLE0" */
+/* NI1 compare IILB input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_iilb_cmp_enable0_u {
+ mmr_t sh_xn_ni1_iilb_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_iilb_cmp_enable0_s;
+} sh_xn_ni1_iilb_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_IILB_CMP_ENABLE1" */
+/* NI1 compare IILB input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_iilb_cmp_enable1_u {
+ mmr_t sh_xn_ni1_iilb_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_iilb_cmp_enable1_s;
+} sh_xn_ni1_iilb_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_EXP_DATA0" */
+/* NI1 compare PI input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_pi_cmp_exp_data0_u {
+ mmr_t sh_xn_ni1_pi_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_pi_cmp_exp_data0_s;
+} sh_xn_ni1_pi_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_EXP_DATA1" */
+/* NI1 compare PI input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_pi_cmp_exp_data1_u {
+ mmr_t sh_xn_ni1_pi_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_pi_cmp_exp_data1_s;
+} sh_xn_ni1_pi_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_ENABLE0" */
+/* NI1 compare PI input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_pi_cmp_enable0_u {
+ mmr_t sh_xn_ni1_pi_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_pi_cmp_enable0_s;
+} sh_xn_ni1_pi_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_PI_CMP_ENABLE1" */
+/* NI1 compare PI input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_pi_cmp_enable1_u {
+ mmr_t sh_xn_ni1_pi_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_pi_cmp_enable1_s;
+} sh_xn_ni1_pi_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_EXP_DATA0" */
+/* NI1 compare MD input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_md_cmp_exp_data0_u {
+ mmr_t sh_xn_ni1_md_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_md_cmp_exp_data0_s;
+} sh_xn_ni1_md_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_EXP_DATA1" */
+/* NI1 compare MD input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_md_cmp_exp_data1_u {
+ mmr_t sh_xn_ni1_md_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_md_cmp_exp_data1_s;
+} sh_xn_ni1_md_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_ENABLE0" */
+/* NI1 compare MD input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_md_cmp_enable0_u {
+ mmr_t sh_xn_ni1_md_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_md_cmp_enable0_s;
+} sh_xn_ni1_md_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_MD_CMP_ENABLE1" */
+/* NI1 compare MD input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_md_cmp_enable1_u {
+ mmr_t sh_xn_ni1_md_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_md_cmp_enable1_s;
+} sh_xn_ni1_md_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_EXP_DATA0" */
+/* NI1 compare NI input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_ni_cmp_exp_data0_u {
+ mmr_t sh_xn_ni1_ni_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_ni_cmp_exp_data0_s;
+} sh_xn_ni1_ni_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_EXP_DATA1" */
+/* NI1 compare NI input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_ni_cmp_exp_data1_u {
+ mmr_t sh_xn_ni1_ni_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_ni_cmp_exp_data1_s;
+} sh_xn_ni1_ni_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_ENABLE0" */
+/* NI1 compare NI input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_ni_cmp_enable0_u {
+ mmr_t sh_xn_ni1_ni_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_ni_cmp_enable0_s;
+} sh_xn_ni1_ni_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_NI_CMP_ENABLE1" */
+/* NI1 compare NI input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_ni_cmp_enable1_u {
+ mmr_t sh_xn_ni1_ni_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_ni_cmp_enable1_s;
+} sh_xn_ni1_ni_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA0" */
+/* NI1 compare LLP input expected data0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_llp_cmp_exp_data0_u {
+ mmr_t sh_xn_ni1_llp_cmp_exp_data0_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_llp_cmp_exp_data0_s;
+} sh_xn_ni1_llp_cmp_exp_data0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_EXP_DATA1" */
+/* NI1 compare LLP input expected data1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_llp_cmp_exp_data1_u {
+ mmr_t sh_xn_ni1_llp_cmp_exp_data1_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_ni1_llp_cmp_exp_data1_s;
+} sh_xn_ni1_llp_cmp_exp_data1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_ENABLE0" */
+/* NI1 compare LLP input enable0 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_llp_cmp_enable0_u {
+ mmr_t sh_xn_ni1_llp_cmp_enable0_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_llp_cmp_enable0_s;
+} sh_xn_ni1_llp_cmp_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_NI1_LLP_CMP_ENABLE1" */
+/* NI1 compare LLP input enable1 */
+/* ==================================================================== */
+
+typedef union sh_xn_ni1_llp_cmp_enable1_u {
+ mmr_t sh_xn_ni1_llp_cmp_enable1_regval;
+ struct {
+ mmr_t enable : 64;
+ } sh_xn_ni1_llp_cmp_enable1_s;
+} sh_xn_ni1_llp_cmp_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC_INJ_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_ecc_inj_reg_u {
+ mmr_t sh_xnpi_ecc_inj_reg_regval;
+ struct {
+ mmr_t byte0 : 8;
+ mmr_t reserved_0 : 4;
+ mmr_t data_1shot0 : 1;
+ mmr_t data_cont0 : 1;
+ mmr_t data_cb_1shot0 : 1;
+ mmr_t data_cb_cont0 : 1;
+ mmr_t byte1 : 8;
+ mmr_t reserved_1 : 4;
+ mmr_t data_1shot1 : 1;
+ mmr_t data_cont1 : 1;
+ mmr_t data_cb_1shot1 : 1;
+ mmr_t data_cb_cont1 : 1;
+ mmr_t byte2 : 8;
+ mmr_t reserved_2 : 4;
+ mmr_t data_1shot2 : 1;
+ mmr_t data_cont2 : 1;
+ mmr_t data_cb_1shot2 : 1;
+ mmr_t data_cb_cont2 : 1;
+ mmr_t byte3 : 8;
+ mmr_t reserved_3 : 4;
+ mmr_t data_1shot3 : 1;
+ mmr_t data_cont3 : 1;
+ mmr_t data_cb_1shot3 : 1;
+ mmr_t data_cb_cont3 : 1;
+ } sh_xnpi_ecc_inj_reg_s;
+} sh_xnpi_ecc_inj_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC0_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_ecc0_inj_mask_reg_u {
+ mmr_t sh_xnpi_ecc0_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc0 : 64;
+ } sh_xnpi_ecc0_inj_mask_reg_s;
+} sh_xnpi_ecc0_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC1_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_ecc1_inj_mask_reg_u {
+ mmr_t sh_xnpi_ecc1_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc1 : 64;
+ } sh_xnpi_ecc1_inj_mask_reg_s;
+} sh_xnpi_ecc1_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC2_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_ecc2_inj_mask_reg_u {
+ mmr_t sh_xnpi_ecc2_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc2 : 64;
+ } sh_xnpi_ecc2_inj_mask_reg_s;
+} sh_xnpi_ecc2_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ECC3_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_ecc3_inj_mask_reg_u {
+ mmr_t sh_xnpi_ecc3_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc3 : 64;
+ } sh_xnpi_ecc3_inj_mask_reg_s;
+} sh_xnpi_ecc3_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC_INJ_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc_inj_reg_u {
+ mmr_t sh_xnmd_ecc_inj_reg_regval;
+ struct {
+ mmr_t byte0 : 8;
+ mmr_t reserved_0 : 4;
+ mmr_t data_1shot0 : 1;
+ mmr_t data_cont0 : 1;
+ mmr_t data_cb_1shot0 : 1;
+ mmr_t data_cb_cont0 : 1;
+ mmr_t byte1 : 8;
+ mmr_t reserved_1 : 4;
+ mmr_t data_1shot1 : 1;
+ mmr_t data_cont1 : 1;
+ mmr_t data_cb_1shot1 : 1;
+ mmr_t data_cb_cont1 : 1;
+ mmr_t byte2 : 8;
+ mmr_t reserved_2 : 4;
+ mmr_t data_1shot2 : 1;
+ mmr_t data_cont2 : 1;
+ mmr_t data_cb_1shot2 : 1;
+ mmr_t data_cb_cont2 : 1;
+ mmr_t byte3 : 8;
+ mmr_t reserved_3 : 4;
+ mmr_t data_1shot3 : 1;
+ mmr_t data_cont3 : 1;
+ mmr_t data_cb_1shot3 : 1;
+ mmr_t data_cb_cont3 : 1;
+ } sh_xnmd_ecc_inj_reg_s;
+} sh_xnmd_ecc_inj_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC0_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc0_inj_mask_reg_u {
+ mmr_t sh_xnmd_ecc0_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc0 : 64;
+ } sh_xnmd_ecc0_inj_mask_reg_s;
+} sh_xnmd_ecc0_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC1_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc1_inj_mask_reg_u {
+ mmr_t sh_xnmd_ecc1_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc1 : 64;
+ } sh_xnmd_ecc1_inj_mask_reg_s;
+} sh_xnmd_ecc1_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC2_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc2_inj_mask_reg_u {
+ mmr_t sh_xnmd_ecc2_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc2 : 64;
+ } sh_xnmd_ecc2_inj_mask_reg_s;
+} sh_xnmd_ecc2_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC3_INJ_MASK_REG" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc3_inj_mask_reg_u {
+ mmr_t sh_xnmd_ecc3_inj_mask_reg_regval;
+ struct {
+ mmr_t mask_ecc3 : 64;
+ } sh_xnmd_ecc3_inj_mask_reg_s;
+} sh_xnmd_ecc3_inj_mask_reg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ECC_ERR_REPORT" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_ecc_err_report_u {
+ mmr_t sh_xnmd_ecc_err_report_regval;
+ struct {
+ mmr_t ecc_disable0 : 1;
+ mmr_t reserved_0 : 15;
+ mmr_t ecc_disable1 : 1;
+ mmr_t reserved_1 : 15;
+ mmr_t ecc_disable2 : 1;
+ mmr_t reserved_2 : 15;
+ mmr_t ecc_disable3 : 1;
+ mmr_t reserved_3 : 15;
+ } sh_xnmd_ecc_err_report_s;
+} sh_xnmd_ecc_err_report_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_1" */
+/* ni0 Error Summary Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_summary_1_u {
+ mmr_t sh_ni0_error_summary_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni0_error_summary_1_s;
+} sh_ni0_error_summary_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_SUMMARY_2" */
+/* ni0 Error Summary Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_summary_2_u {
+ mmr_t sh_ni0_error_summary_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni0_error_summary_2_s;
+} sh_ni0_error_summary_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_1" */
+/* ni0 Error Overflow Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_overflow_1_u {
+ mmr_t sh_ni0_error_overflow_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni0_error_overflow_1_s;
+} sh_ni0_error_overflow_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_OVERFLOW_2" */
+/* ni0 Error Overflow Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_overflow_2_u {
+ mmr_t sh_ni0_error_overflow_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni0_error_overflow_2_s;
+} sh_ni0_error_overflow_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_MASK_1" */
+/* ni0 Error Mask Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_mask_1_u {
+ mmr_t sh_ni0_error_mask_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni0_error_mask_1_s;
+} sh_ni0_error_mask_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_MASK_2" */
+/* ni0 Error Mask Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_mask_2_u {
+ mmr_t sh_ni0_error_mask_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni0_error_mask_2_s;
+} sh_ni0_error_mask_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_FIRST_ERROR_1" */
+/* ni0 First Error Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_first_error_1_u {
+ mmr_t sh_ni0_first_error_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni0_first_error_1_s;
+} sh_ni0_first_error_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_FIRST_ERROR_2" */
+/* ni0 First Error Bits */
+/* ==================================================================== */
+
+typedef union sh_ni0_first_error_2_u {
+ mmr_t sh_ni0_first_error_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni0_first_error_2_s;
+} sh_ni0_first_error_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_1" */
+/* ni0 Chiplet no match header bits 63:0 */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_detail_1_u {
+ mmr_t sh_ni0_error_detail_1_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_ni0_error_detail_1_s;
+} sh_ni0_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_2" */
+/* ni0 Chiplet no match header bits 127:64 */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_detail_2_u {
+ mmr_t sh_ni0_error_detail_2_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_ni0_error_detail_2_s;
+} sh_ni0_error_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_1" */
+/* ni1 Error Summary Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_summary_1_u {
+ mmr_t sh_ni1_error_summary_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni1_error_summary_1_s;
+} sh_ni1_error_summary_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_SUMMARY_2" */
+/* ni1 Error Summary Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_summary_2_u {
+ mmr_t sh_ni1_error_summary_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni1_error_summary_2_s;
+} sh_ni1_error_summary_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_1" */
+/* ni1 Error Overflow Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_overflow_1_u {
+ mmr_t sh_ni1_error_overflow_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni1_error_overflow_1_s;
+} sh_ni1_error_overflow_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_OVERFLOW_2" */
+/* ni1 Error Overflow Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_overflow_2_u {
+ mmr_t sh_ni1_error_overflow_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni1_error_overflow_2_s;
+} sh_ni1_error_overflow_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_MASK_1" */
+/* ni1 Error Mask Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_mask_1_u {
+ mmr_t sh_ni1_error_mask_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni1_error_mask_1_s;
+} sh_ni1_error_mask_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_MASK_2" */
+/* ni1 Error Mask Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_mask_2_u {
+ mmr_t sh_ni1_error_mask_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni1_error_mask_2_s;
+} sh_ni1_error_mask_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_FIRST_ERROR_1" */
+/* ni1 First Error Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_first_error_1_u {
+ mmr_t sh_ni1_first_error_1_regval;
+ struct {
+ mmr_t overflow_fifo02_debit0 : 1;
+ mmr_t overflow_fifo02_debit2 : 1;
+ mmr_t overflow_fifo13_debit0 : 1;
+ mmr_t overflow_fifo13_debit2 : 1;
+ mmr_t overflow_fifo02_vc0_pop : 1;
+ mmr_t overflow_fifo02_vc2_pop : 1;
+ mmr_t overflow_fifo13_vc1_pop : 1;
+ mmr_t overflow_fifo13_vc3_pop : 1;
+ mmr_t overflow_fifo02_vc0_push : 1;
+ mmr_t overflow_fifo02_vc2_push : 1;
+ mmr_t overflow_fifo13_vc1_push : 1;
+ mmr_t overflow_fifo13_vc3_push : 1;
+ mmr_t overflow_fifo02_vc0_credit : 1;
+ mmr_t overflow_fifo02_vc2_credit : 1;
+ mmr_t overflow_fifo13_vc0_credit : 1;
+ mmr_t overflow_fifo13_vc2_credit : 1;
+ mmr_t overflow0_vc0_credit : 1;
+ mmr_t overflow1_vc0_credit : 1;
+ mmr_t overflow2_vc0_credit : 1;
+ mmr_t overflow0_vc2_credit : 1;
+ mmr_t overflow1_vc2_credit : 1;
+ mmr_t overflow2_vc2_credit : 1;
+ mmr_t overflow_pi_fifo_debit0 : 1;
+ mmr_t overflow_pi_fifo_debit2 : 1;
+ mmr_t overflow_iilb_fifo_debit0 : 1;
+ mmr_t overflow_iilb_fifo_debit2 : 1;
+ mmr_t overflow_md_fifo_debit0 : 1;
+ mmr_t overflow_md_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit0 : 1;
+ mmr_t overflow_ni_fifo_debit1 : 1;
+ mmr_t overflow_ni_fifo_debit2 : 1;
+ mmr_t overflow_ni_fifo_debit3 : 1;
+ mmr_t overflow_pi_fifo_vc0_pop : 1;
+ mmr_t overflow_pi_fifo_vc2_pop : 1;
+ mmr_t overflow_iilb_fifo_vc0_pop : 1;
+ mmr_t overflow_iilb_fifo_vc2_pop : 1;
+ mmr_t overflow_md_fifo_vc0_pop : 1;
+ mmr_t overflow_md_fifo_vc2_pop : 1;
+ mmr_t overflow_ni_fifo_vc0_pop : 1;
+ mmr_t overflow_ni_fifo_vc2_pop : 1;
+ mmr_t overflow_pi_fifo_vc0_push : 1;
+ mmr_t overflow_pi_fifo_vc2_push : 1;
+ mmr_t overflow_iilb_fifo_vc0_push : 1;
+ mmr_t overflow_iilb_fifo_vc2_push : 1;
+ mmr_t overflow_md_fifo_vc0_push : 1;
+ mmr_t overflow_md_fifo_vc2_push : 1;
+ mmr_t overflow_pi_fifo_vc0_credit : 1;
+ mmr_t overflow_pi_fifo_vc2_credit : 1;
+ mmr_t overflow_iilb_fifo_vc0_credit : 1;
+ mmr_t overflow_iilb_fifo_vc2_credit : 1;
+ mmr_t overflow_md_fifo_vc0_credit : 1;
+ mmr_t overflow_md_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc0_credit : 1;
+ mmr_t overflow_ni_fifo_vc1_credit : 1;
+ mmr_t overflow_ni_fifo_vc2_credit : 1;
+ mmr_t overflow_ni_fifo_vc3_credit : 1;
+ mmr_t tail_timeout_fifo02_vc0 : 1;
+ mmr_t tail_timeout_fifo02_vc2 : 1;
+ mmr_t tail_timeout_fifo13_vc1 : 1;
+ mmr_t tail_timeout_fifo13_vc3 : 1;
+ mmr_t tail_timeout_ni_vc0 : 1;
+ mmr_t tail_timeout_ni_vc1 : 1;
+ mmr_t tail_timeout_ni_vc2 : 1;
+ mmr_t tail_timeout_ni_vc3 : 1;
+ } sh_ni1_first_error_1_s;
+} sh_ni1_first_error_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_FIRST_ERROR_2" */
+/* ni1 First Error Bits */
+/* ==================================================================== */
+
+typedef union sh_ni1_first_error_2_u {
+ mmr_t sh_ni1_first_error_2_regval;
+ struct {
+ mmr_t illegal_vcni : 1;
+ mmr_t illegal_vcpi : 1;
+ mmr_t illegal_vcmd : 1;
+ mmr_t illegal_vciilb : 1;
+ mmr_t underflow_fifo02_vc0_pop : 1;
+ mmr_t underflow_fifo02_vc2_pop : 1;
+ mmr_t underflow_fifo13_vc1_pop : 1;
+ mmr_t underflow_fifo13_vc3_pop : 1;
+ mmr_t underflow_fifo02_vc0_push : 1;
+ mmr_t underflow_fifo02_vc2_push : 1;
+ mmr_t underflow_fifo13_vc1_push : 1;
+ mmr_t underflow_fifo13_vc3_push : 1;
+ mmr_t underflow_fifo02_vc0_credit : 1;
+ mmr_t underflow_fifo02_vc2_credit : 1;
+ mmr_t underflow_fifo13_vc0_credit : 1;
+ mmr_t underflow_fifo13_vc2_credit : 1;
+ mmr_t underflow0_vc0_credit : 1;
+ mmr_t underflow1_vc0_credit : 1;
+ mmr_t underflow2_vc0_credit : 1;
+ mmr_t underflow0_vc2_credit : 1;
+ mmr_t underflow1_vc2_credit : 1;
+ mmr_t underflow2_vc2_credit : 1;
+ mmr_t reserved_0 : 10;
+ mmr_t underflow_pi_fifo_vc0_pop : 1;
+ mmr_t underflow_pi_fifo_vc2_pop : 1;
+ mmr_t underflow_iilb_fifo_vc0_pop : 1;
+ mmr_t underflow_iilb_fifo_vc2_pop : 1;
+ mmr_t underflow_md_fifo_vc0_pop : 1;
+ mmr_t underflow_md_fifo_vc2_pop : 1;
+ mmr_t underflow_ni_fifo_vc0_pop : 1;
+ mmr_t underflow_ni_fifo_vc2_pop : 1;
+ mmr_t underflow_pi_fifo_vc0_push : 1;
+ mmr_t underflow_pi_fifo_vc2_push : 1;
+ mmr_t underflow_iilb_fifo_vc0_push : 1;
+ mmr_t underflow_iilb_fifo_vc2_push : 1;
+ mmr_t underflow_md_fifo_vc0_push : 1;
+ mmr_t underflow_md_fifo_vc2_push : 1;
+ mmr_t underflow_pi_fifo_vc0_credit : 1;
+ mmr_t underflow_pi_fifo_vc2_credit : 1;
+ mmr_t underflow_iilb_fifo_vc0_credit : 1;
+ mmr_t underflow_iilb_fifo_vc2_credit : 1;
+ mmr_t underflow_md_fifo_vc0_credit : 1;
+ mmr_t underflow_md_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc0_credit : 1;
+ mmr_t underflow_ni_fifo_vc1_credit : 1;
+ mmr_t underflow_ni_fifo_vc2_credit : 1;
+ mmr_t underflow_ni_fifo_vc3_credit : 1;
+ mmr_t llp_deadlock_vc0 : 1;
+ mmr_t llp_deadlock_vc1 : 1;
+ mmr_t llp_deadlock_vc2 : 1;
+ mmr_t llp_deadlock_vc3 : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t retry_timeout_error : 1;
+ mmr_t reserved_1 : 1;
+ } sh_ni1_first_error_2_s;
+} sh_ni1_first_error_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_1" */
+/* ni1 Chiplet no match header bits 63:0 */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_detail_1_u {
+ mmr_t sh_ni1_error_detail_1_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_ni1_error_detail_1_s;
+} sh_ni1_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_2" */
+/* ni1 Chiplet no match header bits 127:64 */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_detail_2_u {
+ mmr_t sh_ni1_error_detail_2_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_ni1_error_detail_2_s;
+} sh_ni1_error_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_1" */
+/* Corrected error details */
+/* ==================================================================== */
+
+typedef union sh_xn_corrected_detail_1_u {
+ mmr_t sh_xn_corrected_detail_1_regval;
+ struct {
+ mmr_t ecc0_syndrome : 8;
+ mmr_t ecc0_wc : 2;
+ mmr_t ecc0_vc : 2;
+ mmr_t reserved_0 : 4;
+ mmr_t ecc1_syndrome : 8;
+ mmr_t ecc1_wc : 2;
+ mmr_t ecc1_vc : 2;
+ mmr_t reserved_1 : 4;
+ mmr_t ecc2_syndrome : 8;
+ mmr_t ecc2_wc : 2;
+ mmr_t ecc2_vc : 2;
+ mmr_t reserved_2 : 4;
+ mmr_t ecc3_syndrome : 8;
+ mmr_t ecc3_wc : 2;
+ mmr_t ecc3_vc : 2;
+ mmr_t reserved_3 : 4;
+ } sh_xn_corrected_detail_1_s;
+} sh_xn_corrected_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_2" */
+/* Corrected error data */
+/* ==================================================================== */
+
+typedef union sh_xn_corrected_detail_2_u {
+ mmr_t sh_xn_corrected_detail_2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_corrected_detail_2_s;
+} sh_xn_corrected_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_3" */
+/* Corrected error header0 */
+/* ==================================================================== */
+
+typedef union sh_xn_corrected_detail_3_u {
+ mmr_t sh_xn_corrected_detail_3_regval;
+ struct {
+ mmr_t header0 : 64;
+ } sh_xn_corrected_detail_3_s;
+} sh_xn_corrected_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_CORRECTED_DETAIL_4" */
+/* Corrected error header1 */
+/* ==================================================================== */
+
+typedef union sh_xn_corrected_detail_4_u {
+ mmr_t sh_xn_corrected_detail_4_regval;
+ struct {
+ mmr_t header1 : 42;
+ mmr_t reserved_0 : 20;
+ mmr_t err_group : 2;
+ } sh_xn_corrected_detail_4_s;
+} sh_xn_corrected_detail_4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_1" */
+/* Uncorrected error details */
+/* ==================================================================== */
+
+typedef union sh_xn_uncorrected_detail_1_u {
+ mmr_t sh_xn_uncorrected_detail_1_regval;
+ struct {
+ mmr_t ecc0_syndrome : 8;
+ mmr_t ecc0_wc : 2;
+ mmr_t ecc0_vc : 2;
+ mmr_t reserved_0 : 4;
+ mmr_t ecc1_syndrome : 8;
+ mmr_t ecc1_wc : 2;
+ mmr_t ecc1_vc : 2;
+ mmr_t reserved_1 : 4;
+ mmr_t ecc2_syndrome : 8;
+ mmr_t ecc2_wc : 2;
+ mmr_t ecc2_vc : 2;
+ mmr_t reserved_2 : 4;
+ mmr_t ecc3_syndrome : 8;
+ mmr_t ecc3_wc : 2;
+ mmr_t ecc3_vc : 2;
+ mmr_t reserved_3 : 4;
+ } sh_xn_uncorrected_detail_1_s;
+} sh_xn_uncorrected_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_2" */
+/* Uncorrected error data */
+/* ==================================================================== */
+
+typedef union sh_xn_uncorrected_detail_2_u {
+ mmr_t sh_xn_uncorrected_detail_2_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_xn_uncorrected_detail_2_s;
+} sh_xn_uncorrected_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_3" */
+/* Uncorrected error header0 */
+/* ==================================================================== */
+
+typedef union sh_xn_uncorrected_detail_3_u {
+ mmr_t sh_xn_uncorrected_detail_3_regval;
+ struct {
+ mmr_t header0 : 64;
+ } sh_xn_uncorrected_detail_3_s;
+} sh_xn_uncorrected_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCORRECTED_DETAIL_4" */
+/* Uncorrected error header1 */
+/* ==================================================================== */
+
+typedef union sh_xn_uncorrected_detail_4_u {
+ mmr_t sh_xn_uncorrected_detail_4_regval;
+ struct {
+ mmr_t header1 : 42;
+ mmr_t reserved_0 : 20;
+ mmr_t err_group : 2;
+ } sh_xn_uncorrected_detail_4_s;
+} sh_xn_uncorrected_detail_4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_DETAIL_1" */
+/* Look Up Table Address (md) */
+/* ==================================================================== */
+
+typedef union sh_xnmd_error_detail_1_u {
+ mmr_t sh_xnmd_error_detail_1_regval;
+ struct {
+ mmr_t lut_addr : 11;
+ mmr_t reserved_0 : 53;
+ } sh_xnmd_error_detail_1_s;
+} sh_xnmd_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_DETAIL_1" */
+/* Look Up Table Address (pi) */
+/* ==================================================================== */
+
+typedef union sh_xnpi_error_detail_1_u {
+ mmr_t sh_xnpi_error_detail_1_regval;
+ struct {
+ mmr_t lut_addr : 11;
+ mmr_t reserved_0 : 53;
+ } sh_xnpi_error_detail_1_s;
+} sh_xnpi_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_1" */
+/* Chiplet NoMatch header [63:0] */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_detail_1_u {
+ mmr_t sh_xniilb_error_detail_1_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_xniilb_error_detail_1_s;
+} sh_xniilb_error_detail_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_2" */
+/* Chiplet NoMatch header [127:64] */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_detail_2_u {
+ mmr_t sh_xniilb_error_detail_2_regval;
+ struct {
+ mmr_t header : 64;
+ } sh_xniilb_error_detail_2_s;
+} sh_xniilb_error_detail_2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_DETAIL_3" */
+/* Look Up Table Address (iilb) */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_detail_3_u {
+ mmr_t sh_xniilb_error_detail_3_regval;
+ struct {
+ mmr_t lut_addr : 11;
+ mmr_t reserved_0 : 53;
+ } sh_xniilb_error_detail_3_s;
+} sh_xniilb_error_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI0_ERROR_DETAIL_3" */
+/* Look Up Table Address (ni0) */
+/* ==================================================================== */
+
+typedef union sh_ni0_error_detail_3_u {
+ mmr_t sh_ni0_error_detail_3_regval;
+ struct {
+ mmr_t lut_addr : 11;
+ mmr_t reserved_0 : 53;
+ } sh_ni0_error_detail_3_s;
+} sh_ni0_error_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_NI1_ERROR_DETAIL_3" */
+/* Look Up Table Address (ni1) */
+/* ==================================================================== */
+
+typedef union sh_ni1_error_detail_3_u {
+ mmr_t sh_ni1_error_detail_3_regval;
+ struct {
+ mmr_t lut_addr : 11;
+ mmr_t reserved_0 : 53;
+ } sh_ni1_error_detail_3_s;
+} sh_ni1_error_detail_3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+typedef union sh_xn_error_summary_u {
+ mmr_t sh_xn_error_summary_regval;
+ struct {
+ mmr_t ni0_pop_overflow : 1;
+ mmr_t ni0_push_overflow : 1;
+ mmr_t ni0_credit_overflow : 1;
+ mmr_t ni0_debit_overflow : 1;
+ mmr_t ni0_pop_underflow : 1;
+ mmr_t ni0_push_underflow : 1;
+ mmr_t ni0_credit_underflow : 1;
+ mmr_t ni0_llp_error : 1;
+ mmr_t ni0_pipe_error : 1;
+ mmr_t ni1_pop_overflow : 1;
+ mmr_t ni1_push_overflow : 1;
+ mmr_t ni1_credit_overflow : 1;
+ mmr_t ni1_debit_overflow : 1;
+ mmr_t ni1_pop_underflow : 1;
+ mmr_t ni1_push_underflow : 1;
+ mmr_t ni1_credit_underflow : 1;
+ mmr_t ni1_llp_error : 1;
+ mmr_t ni1_pipe_error : 1;
+ mmr_t xnmd_credit_overflow : 1;
+ mmr_t xnmd_debit_overflow : 1;
+ mmr_t xnmd_data_buff_overflow : 1;
+ mmr_t xnmd_credit_underflow : 1;
+ mmr_t xnmd_sbe_error : 1;
+ mmr_t xnmd_uce_error : 1;
+ mmr_t xnmd_lut_error : 1;
+ mmr_t xnpi_credit_overflow : 1;
+ mmr_t xnpi_debit_overflow : 1;
+ mmr_t xnpi_data_buff_overflow : 1;
+ mmr_t xnpi_credit_underflow : 1;
+ mmr_t xnpi_sbe_error : 1;
+ mmr_t xnpi_uce_error : 1;
+ mmr_t xnpi_lut_error : 1;
+ mmr_t iilb_debit_overflow : 1;
+ mmr_t iilb_credit_overflow : 1;
+ mmr_t iilb_fifo_overflow : 1;
+ mmr_t iilb_credit_underflow : 1;
+ mmr_t iilb_fifo_underflow : 1;
+ mmr_t iilb_chiplet_or_lut : 1;
+ mmr_t reserved_0 : 26;
+ } sh_xn_error_summary_s;
+} sh_xn_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+typedef union sh_xn_error_overflow_u {
+ mmr_t sh_xn_error_overflow_regval;
+ struct {
+ mmr_t ni0_pop_overflow : 1;
+ mmr_t ni0_push_overflow : 1;
+ mmr_t ni0_credit_overflow : 1;
+ mmr_t ni0_debit_overflow : 1;
+ mmr_t ni0_pop_underflow : 1;
+ mmr_t ni0_push_underflow : 1;
+ mmr_t ni0_credit_underflow : 1;
+ mmr_t ni0_llp_error : 1;
+ mmr_t ni0_pipe_error : 1;
+ mmr_t ni1_pop_overflow : 1;
+ mmr_t ni1_push_overflow : 1;
+ mmr_t ni1_credit_overflow : 1;
+ mmr_t ni1_debit_overflow : 1;
+ mmr_t ni1_pop_underflow : 1;
+ mmr_t ni1_push_underflow : 1;
+ mmr_t ni1_credit_underflow : 1;
+ mmr_t ni1_llp_error : 1;
+ mmr_t ni1_pipe_error : 1;
+ mmr_t xnmd_credit_overflow : 1;
+ mmr_t xnmd_debit_overflow : 1;
+ mmr_t xnmd_data_buff_overflow : 1;
+ mmr_t xnmd_credit_underflow : 1;
+ mmr_t xnmd_sbe_error : 1;
+ mmr_t xnmd_uce_error : 1;
+ mmr_t xnmd_lut_error : 1;
+ mmr_t xnpi_credit_overflow : 1;
+ mmr_t xnpi_debit_overflow : 1;
+ mmr_t xnpi_data_buff_overflow : 1;
+ mmr_t xnpi_credit_underflow : 1;
+ mmr_t xnpi_sbe_error : 1;
+ mmr_t xnpi_uce_error : 1;
+ mmr_t xnpi_lut_error : 1;
+ mmr_t iilb_debit_overflow : 1;
+ mmr_t iilb_credit_overflow : 1;
+ mmr_t iilb_fifo_overflow : 1;
+ mmr_t iilb_credit_underflow : 1;
+ mmr_t iilb_fifo_underflow : 1;
+ mmr_t iilb_chiplet_or_lut : 1;
+ mmr_t reserved_0 : 26;
+ } sh_xn_error_overflow_s;
+} sh_xn_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_ERROR_MASK" */
+/* ==================================================================== */
+
+typedef union sh_xn_error_mask_u {
+ mmr_t sh_xn_error_mask_regval;
+ struct {
+ mmr_t ni0_pop_overflow : 1;
+ mmr_t ni0_push_overflow : 1;
+ mmr_t ni0_credit_overflow : 1;
+ mmr_t ni0_debit_overflow : 1;
+ mmr_t ni0_pop_underflow : 1;
+ mmr_t ni0_push_underflow : 1;
+ mmr_t ni0_credit_underflow : 1;
+ mmr_t ni0_llp_error : 1;
+ mmr_t ni0_pipe_error : 1;
+ mmr_t ni1_pop_overflow : 1;
+ mmr_t ni1_push_overflow : 1;
+ mmr_t ni1_credit_overflow : 1;
+ mmr_t ni1_debit_overflow : 1;
+ mmr_t ni1_pop_underflow : 1;
+ mmr_t ni1_push_underflow : 1;
+ mmr_t ni1_credit_underflow : 1;
+ mmr_t ni1_llp_error : 1;
+ mmr_t ni1_pipe_error : 1;
+ mmr_t xnmd_credit_overflow : 1;
+ mmr_t xnmd_debit_overflow : 1;
+ mmr_t xnmd_data_buff_overflow : 1;
+ mmr_t xnmd_credit_underflow : 1;
+ mmr_t xnmd_sbe_error : 1;
+ mmr_t xnmd_uce_error : 1;
+ mmr_t xnmd_lut_error : 1;
+ mmr_t xnpi_credit_overflow : 1;
+ mmr_t xnpi_debit_overflow : 1;
+ mmr_t xnpi_data_buff_overflow : 1;
+ mmr_t xnpi_credit_underflow : 1;
+ mmr_t xnpi_sbe_error : 1;
+ mmr_t xnpi_uce_error : 1;
+ mmr_t xnpi_lut_error : 1;
+ mmr_t iilb_debit_overflow : 1;
+ mmr_t iilb_credit_overflow : 1;
+ mmr_t iilb_fifo_overflow : 1;
+ mmr_t iilb_credit_underflow : 1;
+ mmr_t iilb_fifo_underflow : 1;
+ mmr_t iilb_chiplet_or_lut : 1;
+ mmr_t reserved_0 : 26;
+ } sh_xn_error_mask_s;
+} sh_xn_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_FIRST_ERROR" */
+/* ==================================================================== */
+
+typedef union sh_xn_first_error_u {
+ mmr_t sh_xn_first_error_regval;
+ struct {
+ mmr_t ni0_pop_overflow : 1;
+ mmr_t ni0_push_overflow : 1;
+ mmr_t ni0_credit_overflow : 1;
+ mmr_t ni0_debit_overflow : 1;
+ mmr_t ni0_pop_underflow : 1;
+ mmr_t ni0_push_underflow : 1;
+ mmr_t ni0_credit_underflow : 1;
+ mmr_t ni0_llp_error : 1;
+ mmr_t ni0_pipe_error : 1;
+ mmr_t ni1_pop_overflow : 1;
+ mmr_t ni1_push_overflow : 1;
+ mmr_t ni1_credit_overflow : 1;
+ mmr_t ni1_debit_overflow : 1;
+ mmr_t ni1_pop_underflow : 1;
+ mmr_t ni1_push_underflow : 1;
+ mmr_t ni1_credit_underflow : 1;
+ mmr_t ni1_llp_error : 1;
+ mmr_t ni1_pipe_error : 1;
+ mmr_t xnmd_credit_overflow : 1;
+ mmr_t xnmd_debit_overflow : 1;
+ mmr_t xnmd_data_buff_overflow : 1;
+ mmr_t xnmd_credit_underflow : 1;
+ mmr_t xnmd_sbe_error : 1;
+ mmr_t xnmd_uce_error : 1;
+ mmr_t xnmd_lut_error : 1;
+ mmr_t xnpi_credit_overflow : 1;
+ mmr_t xnpi_debit_overflow : 1;
+ mmr_t xnpi_data_buff_overflow : 1;
+ mmr_t xnpi_credit_underflow : 1;
+ mmr_t xnpi_sbe_error : 1;
+ mmr_t xnpi_uce_error : 1;
+ mmr_t xnpi_lut_error : 1;
+ mmr_t iilb_debit_overflow : 1;
+ mmr_t iilb_credit_overflow : 1;
+ mmr_t iilb_fifo_overflow : 1;
+ mmr_t iilb_credit_underflow : 1;
+ mmr_t iilb_fifo_underflow : 1;
+ mmr_t iilb_chiplet_or_lut : 1;
+ mmr_t reserved_0 : 26;
+ } sh_xn_first_error_s;
+} sh_xn_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_summary_u {
+ mmr_t sh_xniilb_error_summary_regval;
+ struct {
+ mmr_t overflow_ii_debit0 : 1;
+ mmr_t overflow_ii_debit2 : 1;
+ mmr_t overflow_lb_debit0 : 1;
+ mmr_t overflow_lb_debit2 : 1;
+ mmr_t overflow_ii_vc0 : 1;
+ mmr_t overflow_ii_vc2 : 1;
+ mmr_t underflow_ii_vc0 : 1;
+ mmr_t underflow_ii_vc2 : 1;
+ mmr_t overflow_lb_vc0 : 1;
+ mmr_t overflow_lb_vc2 : 1;
+ mmr_t underflow_lb_vc0 : 1;
+ mmr_t underflow_lb_vc2 : 1;
+ mmr_t overflow_pi_vc0_credit_in : 1;
+ mmr_t overflow_iilb_vc0_credit_in : 1;
+ mmr_t overflow_md_vc0_credit_in : 1;
+ mmr_t overflow_ni0_vc0_credit_in : 1;
+ mmr_t overflow_ni1_vc0_credit_in : 1;
+ mmr_t overflow_pi_vc2_credit_in : 1;
+ mmr_t overflow_iilb_vc2_credit_in : 1;
+ mmr_t overflow_md_vc2_credit_in : 1;
+ mmr_t overflow_ni0_vc2_credit_in : 1;
+ mmr_t overflow_ni1_vc2_credit_in : 1;
+ mmr_t underflow_pi_vc0_credit_in : 1;
+ mmr_t underflow_iilb_vc0_credit_in : 1;
+ mmr_t underflow_md_vc0_credit_in : 1;
+ mmr_t underflow_ni0_vc0_credit_in : 1;
+ mmr_t underflow_ni1_vc0_credit_in : 1;
+ mmr_t underflow_pi_vc2_credit_in : 1;
+ mmr_t underflow_iilb_vc2_credit_in : 1;
+ mmr_t underflow_md_vc2_credit_in : 1;
+ mmr_t underflow_ni0_vc2_credit_in : 1;
+ mmr_t underflow_ni1_vc2_credit_in : 1;
+ mmr_t overflow_pi_debit0 : 1;
+ mmr_t overflow_pi_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t overflow_md_debit0 : 1;
+ mmr_t overflow_md_debit2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_pi_vc0_credit_out : 1;
+ mmr_t overflow_pi_vc2_credit_out : 1;
+ mmr_t overflow_md_vc0_credit_out : 1;
+ mmr_t overflow_md_vc2_credit_out : 1;
+ mmr_t overflow_iilb_vc0_credit_out : 1;
+ mmr_t overflow_iilb_vc2_credit_out : 1;
+ mmr_t overflow_ni0_vc0_credit_out : 1;
+ mmr_t overflow_ni0_vc2_credit_out : 1;
+ mmr_t overflow_ni1_vc0_credit_out : 1;
+ mmr_t overflow_ni1_vc2_credit_out : 1;
+ mmr_t underflow_pi_vc0_credit_out : 1;
+ mmr_t underflow_pi_vc2_credit_out : 1;
+ mmr_t underflow_md_vc0_credit_out : 1;
+ mmr_t underflow_md_vc2_credit_out : 1;
+ mmr_t underflow_iilb_vc0_credit_out : 1;
+ mmr_t underflow_iilb_vc2_credit_out : 1;
+ mmr_t underflow_ni0_vc0_credit_out : 1;
+ mmr_t underflow_ni0_vc2_credit_out : 1;
+ mmr_t underflow_ni1_vc0_credit_out : 1;
+ mmr_t underflow_ni1_vc2_credit_out : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ } sh_xniilb_error_summary_s;
+} sh_xniilb_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_overflow_u {
+ mmr_t sh_xniilb_error_overflow_regval;
+ struct {
+ mmr_t overflow_ii_debit0 : 1;
+ mmr_t overflow_ii_debit2 : 1;
+ mmr_t overflow_lb_debit0 : 1;
+ mmr_t overflow_lb_debit2 : 1;
+ mmr_t overflow_ii_vc0 : 1;
+ mmr_t overflow_ii_vc2 : 1;
+ mmr_t underflow_ii_vc0 : 1;
+ mmr_t underflow_ii_vc2 : 1;
+ mmr_t overflow_lb_vc0 : 1;
+ mmr_t overflow_lb_vc2 : 1;
+ mmr_t underflow_lb_vc0 : 1;
+ mmr_t underflow_lb_vc2 : 1;
+ mmr_t overflow_pi_vc0_credit_in : 1;
+ mmr_t overflow_iilb_vc0_credit_in : 1;
+ mmr_t overflow_md_vc0_credit_in : 1;
+ mmr_t overflow_ni0_vc0_credit_in : 1;
+ mmr_t overflow_ni1_vc0_credit_in : 1;
+ mmr_t overflow_pi_vc2_credit_in : 1;
+ mmr_t overflow_iilb_vc2_credit_in : 1;
+ mmr_t overflow_md_vc2_credit_in : 1;
+ mmr_t overflow_ni0_vc2_credit_in : 1;
+ mmr_t overflow_ni1_vc2_credit_in : 1;
+ mmr_t underflow_pi_vc0_credit_in : 1;
+ mmr_t underflow_iilb_vc0_credit_in : 1;
+ mmr_t underflow_md_vc0_credit_in : 1;
+ mmr_t underflow_ni0_vc0_credit_in : 1;
+ mmr_t underflow_ni1_vc0_credit_in : 1;
+ mmr_t underflow_pi_vc2_credit_in : 1;
+ mmr_t underflow_iilb_vc2_credit_in : 1;
+ mmr_t underflow_md_vc2_credit_in : 1;
+ mmr_t underflow_ni0_vc2_credit_in : 1;
+ mmr_t underflow_ni1_vc2_credit_in : 1;
+ mmr_t overflow_pi_debit0 : 1;
+ mmr_t overflow_pi_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t overflow_md_debit0 : 1;
+ mmr_t overflow_md_debit2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_pi_vc0_credit_out : 1;
+ mmr_t overflow_pi_vc2_credit_out : 1;
+ mmr_t overflow_md_vc0_credit_out : 1;
+ mmr_t overflow_md_vc2_credit_out : 1;
+ mmr_t overflow_iilb_vc0_credit_out : 1;
+ mmr_t overflow_iilb_vc2_credit_out : 1;
+ mmr_t overflow_ni0_vc0_credit_out : 1;
+ mmr_t overflow_ni0_vc2_credit_out : 1;
+ mmr_t overflow_ni1_vc0_credit_out : 1;
+ mmr_t overflow_ni1_vc2_credit_out : 1;
+ mmr_t underflow_pi_vc0_credit_out : 1;
+ mmr_t underflow_pi_vc2_credit_out : 1;
+ mmr_t underflow_md_vc0_credit_out : 1;
+ mmr_t underflow_md_vc2_credit_out : 1;
+ mmr_t underflow_iilb_vc0_credit_out : 1;
+ mmr_t underflow_iilb_vc2_credit_out : 1;
+ mmr_t underflow_ni0_vc0_credit_out : 1;
+ mmr_t underflow_ni0_vc2_credit_out : 1;
+ mmr_t underflow_ni1_vc0_credit_out : 1;
+ mmr_t underflow_ni1_vc2_credit_out : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ } sh_xniilb_error_overflow_s;
+} sh_xniilb_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_ERROR_MASK" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_error_mask_u {
+ mmr_t sh_xniilb_error_mask_regval;
+ struct {
+ mmr_t overflow_ii_debit0 : 1;
+ mmr_t overflow_ii_debit2 : 1;
+ mmr_t overflow_lb_debit0 : 1;
+ mmr_t overflow_lb_debit2 : 1;
+ mmr_t overflow_ii_vc0 : 1;
+ mmr_t overflow_ii_vc2 : 1;
+ mmr_t underflow_ii_vc0 : 1;
+ mmr_t underflow_ii_vc2 : 1;
+ mmr_t overflow_lb_vc0 : 1;
+ mmr_t overflow_lb_vc2 : 1;
+ mmr_t underflow_lb_vc0 : 1;
+ mmr_t underflow_lb_vc2 : 1;
+ mmr_t overflow_pi_vc0_credit_in : 1;
+ mmr_t overflow_iilb_vc0_credit_in : 1;
+ mmr_t overflow_md_vc0_credit_in : 1;
+ mmr_t overflow_ni0_vc0_credit_in : 1;
+ mmr_t overflow_ni1_vc0_credit_in : 1;
+ mmr_t overflow_pi_vc2_credit_in : 1;
+ mmr_t overflow_iilb_vc2_credit_in : 1;
+ mmr_t overflow_md_vc2_credit_in : 1;
+ mmr_t overflow_ni0_vc2_credit_in : 1;
+ mmr_t overflow_ni1_vc2_credit_in : 1;
+ mmr_t underflow_pi_vc0_credit_in : 1;
+ mmr_t underflow_iilb_vc0_credit_in : 1;
+ mmr_t underflow_md_vc0_credit_in : 1;
+ mmr_t underflow_ni0_vc0_credit_in : 1;
+ mmr_t underflow_ni1_vc0_credit_in : 1;
+ mmr_t underflow_pi_vc2_credit_in : 1;
+ mmr_t underflow_iilb_vc2_credit_in : 1;
+ mmr_t underflow_md_vc2_credit_in : 1;
+ mmr_t underflow_ni0_vc2_credit_in : 1;
+ mmr_t underflow_ni1_vc2_credit_in : 1;
+ mmr_t overflow_pi_debit0 : 1;
+ mmr_t overflow_pi_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t overflow_md_debit0 : 1;
+ mmr_t overflow_md_debit2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_pi_vc0_credit_out : 1;
+ mmr_t overflow_pi_vc2_credit_out : 1;
+ mmr_t overflow_md_vc0_credit_out : 1;
+ mmr_t overflow_md_vc2_credit_out : 1;
+ mmr_t overflow_iilb_vc0_credit_out : 1;
+ mmr_t overflow_iilb_vc2_credit_out : 1;
+ mmr_t overflow_ni0_vc0_credit_out : 1;
+ mmr_t overflow_ni0_vc2_credit_out : 1;
+ mmr_t overflow_ni1_vc0_credit_out : 1;
+ mmr_t overflow_ni1_vc2_credit_out : 1;
+ mmr_t underflow_pi_vc0_credit_out : 1;
+ mmr_t underflow_pi_vc2_credit_out : 1;
+ mmr_t underflow_md_vc0_credit_out : 1;
+ mmr_t underflow_md_vc2_credit_out : 1;
+ mmr_t underflow_iilb_vc0_credit_out : 1;
+ mmr_t underflow_iilb_vc2_credit_out : 1;
+ mmr_t underflow_ni0_vc0_credit_out : 1;
+ mmr_t underflow_ni0_vc2_credit_out : 1;
+ mmr_t underflow_ni1_vc0_credit_out : 1;
+ mmr_t underflow_ni1_vc2_credit_out : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ } sh_xniilb_error_mask_s;
+} sh_xniilb_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNIILB_FIRST_ERROR" */
+/* ==================================================================== */
+
+typedef union sh_xniilb_first_error_u {
+ mmr_t sh_xniilb_first_error_regval;
+ struct {
+ mmr_t overflow_ii_debit0 : 1;
+ mmr_t overflow_ii_debit2 : 1;
+ mmr_t overflow_lb_debit0 : 1;
+ mmr_t overflow_lb_debit2 : 1;
+ mmr_t overflow_ii_vc0 : 1;
+ mmr_t overflow_ii_vc2 : 1;
+ mmr_t underflow_ii_vc0 : 1;
+ mmr_t underflow_ii_vc2 : 1;
+ mmr_t overflow_lb_vc0 : 1;
+ mmr_t overflow_lb_vc2 : 1;
+ mmr_t underflow_lb_vc0 : 1;
+ mmr_t underflow_lb_vc2 : 1;
+ mmr_t overflow_pi_vc0_credit_in : 1;
+ mmr_t overflow_iilb_vc0_credit_in : 1;
+ mmr_t overflow_md_vc0_credit_in : 1;
+ mmr_t overflow_ni0_vc0_credit_in : 1;
+ mmr_t overflow_ni1_vc0_credit_in : 1;
+ mmr_t overflow_pi_vc2_credit_in : 1;
+ mmr_t overflow_iilb_vc2_credit_in : 1;
+ mmr_t overflow_md_vc2_credit_in : 1;
+ mmr_t overflow_ni0_vc2_credit_in : 1;
+ mmr_t overflow_ni1_vc2_credit_in : 1;
+ mmr_t underflow_pi_vc0_credit_in : 1;
+ mmr_t underflow_iilb_vc0_credit_in : 1;
+ mmr_t underflow_md_vc0_credit_in : 1;
+ mmr_t underflow_ni0_vc0_credit_in : 1;
+ mmr_t underflow_ni1_vc0_credit_in : 1;
+ mmr_t underflow_pi_vc2_credit_in : 1;
+ mmr_t underflow_iilb_vc2_credit_in : 1;
+ mmr_t underflow_md_vc2_credit_in : 1;
+ mmr_t underflow_ni0_vc2_credit_in : 1;
+ mmr_t underflow_ni1_vc2_credit_in : 1;
+ mmr_t overflow_pi_debit0 : 1;
+ mmr_t overflow_pi_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t overflow_md_debit0 : 1;
+ mmr_t overflow_md_debit2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_pi_vc0_credit_out : 1;
+ mmr_t overflow_pi_vc2_credit_out : 1;
+ mmr_t overflow_md_vc0_credit_out : 1;
+ mmr_t overflow_md_vc2_credit_out : 1;
+ mmr_t overflow_iilb_vc0_credit_out : 1;
+ mmr_t overflow_iilb_vc2_credit_out : 1;
+ mmr_t overflow_ni0_vc0_credit_out : 1;
+ mmr_t overflow_ni0_vc2_credit_out : 1;
+ mmr_t overflow_ni1_vc0_credit_out : 1;
+ mmr_t overflow_ni1_vc2_credit_out : 1;
+ mmr_t underflow_pi_vc0_credit_out : 1;
+ mmr_t underflow_pi_vc2_credit_out : 1;
+ mmr_t underflow_md_vc0_credit_out : 1;
+ mmr_t underflow_md_vc2_credit_out : 1;
+ mmr_t underflow_iilb_vc0_credit_out : 1;
+ mmr_t underflow_iilb_vc2_credit_out : 1;
+ mmr_t underflow_ni0_vc0_credit_out : 1;
+ mmr_t underflow_ni0_vc2_credit_out : 1;
+ mmr_t underflow_ni1_vc0_credit_out : 1;
+ mmr_t underflow_ni1_vc2_credit_out : 1;
+ mmr_t chiplet_nomatch : 1;
+ mmr_t lut_read_error : 1;
+ } sh_xniilb_first_error_s;
+} sh_xniilb_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_error_summary_u {
+ mmr_t sh_xnpi_error_summary_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnpi_error_summary_s;
+} sh_xnpi_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_error_overflow_u {
+ mmr_t sh_xnpi_error_overflow_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnpi_error_overflow_s;
+} sh_xnpi_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_ERROR_MASK" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_error_mask_u {
+ mmr_t sh_xnpi_error_mask_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnpi_error_mask_s;
+} sh_xnpi_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNPI_FIRST_ERROR" */
+/* ==================================================================== */
+
+typedef union sh_xnpi_first_error_u {
+ mmr_t sh_xnpi_first_error_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnpi_first_error_s;
+} sh_xnpi_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_SUMMARY" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_error_summary_u {
+ mmr_t sh_xnmd_error_summary_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnmd_error_summary_s;
+} sh_xnmd_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_OVERFLOW" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_error_overflow_u {
+ mmr_t sh_xnmd_error_overflow_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnmd_error_overflow_s;
+} sh_xnmd_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_ERROR_MASK" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_error_mask_u {
+ mmr_t sh_xnmd_error_mask_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnmd_error_mask_s;
+} sh_xnmd_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XNMD_FIRST_ERROR" */
+/* ==================================================================== */
+
+typedef union sh_xnmd_first_error_u {
+ mmr_t sh_xnmd_first_error_regval;
+ struct {
+ mmr_t underflow_ni0_vc0 : 1;
+ mmr_t overflow_ni0_vc0 : 1;
+ mmr_t underflow_ni0_vc2 : 1;
+ mmr_t overflow_ni0_vc2 : 1;
+ mmr_t underflow_ni1_vc0 : 1;
+ mmr_t overflow_ni1_vc0 : 1;
+ mmr_t underflow_ni1_vc2 : 1;
+ mmr_t overflow_ni1_vc2 : 1;
+ mmr_t underflow_iilb_vc0 : 1;
+ mmr_t overflow_iilb_vc0 : 1;
+ mmr_t underflow_iilb_vc2 : 1;
+ mmr_t overflow_iilb_vc2 : 1;
+ mmr_t underflow_vc0_credit : 1;
+ mmr_t overflow_vc0_credit : 1;
+ mmr_t underflow_vc2_credit : 1;
+ mmr_t overflow_vc2_credit : 1;
+ mmr_t overflow_databuff_vc0 : 1;
+ mmr_t overflow_databuff_vc2 : 1;
+ mmr_t lut_read_error : 1;
+ mmr_t single_bit_error0 : 1;
+ mmr_t single_bit_error1 : 1;
+ mmr_t single_bit_error2 : 1;
+ mmr_t single_bit_error3 : 1;
+ mmr_t uncor_error0 : 1;
+ mmr_t uncor_error1 : 1;
+ mmr_t uncor_error2 : 1;
+ mmr_t uncor_error3 : 1;
+ mmr_t underflow_sic_cntr0 : 1;
+ mmr_t overflow_sic_cntr0 : 1;
+ mmr_t underflow_sic_cntr2 : 1;
+ mmr_t overflow_sic_cntr2 : 1;
+ mmr_t overflow_ni0_debit0 : 1;
+ mmr_t overflow_ni0_debit2 : 1;
+ mmr_t overflow_ni1_debit0 : 1;
+ mmr_t overflow_ni1_debit2 : 1;
+ mmr_t overflow_iilb_debit0 : 1;
+ mmr_t overflow_iilb_debit2 : 1;
+ mmr_t underflow_ni0_vc0_credit : 1;
+ mmr_t overflow_ni0_vc0_credit : 1;
+ mmr_t underflow_ni0_vc2_credit : 1;
+ mmr_t overflow_ni0_vc2_credit : 1;
+ mmr_t underflow_ni1_vc0_credit : 1;
+ mmr_t overflow_ni1_vc0_credit : 1;
+ mmr_t underflow_ni1_vc2_credit : 1;
+ mmr_t overflow_ni1_vc2_credit : 1;
+ mmr_t underflow_iilb_vc0_credit : 1;
+ mmr_t overflow_iilb_vc0_credit : 1;
+ mmr_t underflow_iilb_vc2_credit : 1;
+ mmr_t overflow_iilb_vc2_credit : 1;
+ mmr_t overflow_header_cancel_fifo : 1;
+ mmr_t reserved_0 : 14;
+ } sh_xnmd_first_error_s;
+} sh_xnmd_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_ENABLE0" */
+/* Automatic Maintenance Reply Enable 0 */
+/* ==================================================================== */
+
+typedef union sh_auto_reply_enable0_u {
+ mmr_t sh_auto_reply_enable0_regval;
+ struct {
+ mmr_t enable0 : 64;
+ } sh_auto_reply_enable0_s;
+} sh_auto_reply_enable0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_ENABLE1" */
+/* Automatic Maintenance Reply Enable 1 */
+/* ==================================================================== */
+
+typedef union sh_auto_reply_enable1_u {
+ mmr_t sh_auto_reply_enable1_regval;
+ struct {
+ mmr_t enable1 : 64;
+ } sh_auto_reply_enable1_s;
+} sh_auto_reply_enable1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_HEADER0" */
+/* Automatic Maintenance Reply Header 0 */
+/* ==================================================================== */
+
+typedef union sh_auto_reply_header0_u {
+ mmr_t sh_auto_reply_header0_regval;
+ struct {
+ mmr_t header0 : 64;
+ } sh_auto_reply_header0_s;
+} sh_auto_reply_header0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_AUTO_REPLY_HEADER1" */
+/* Automatic Maintenance Reply Header 1 */
+/* ==================================================================== */
+
+typedef union sh_auto_reply_header1_u {
+ mmr_t sh_auto_reply_header1_regval;
+ struct {
+ mmr_t header1 : 64;
+ } sh_auto_reply_header1_s;
+} sh_auto_reply_header1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_ENABLE_RP_AUTO_REPLY" */
+/* Enable Automatic Maintenance Reply From Reply Queue */
+/* ==================================================================== */
+
+typedef union sh_enable_rp_auto_reply_u {
+ mmr_t sh_enable_rp_auto_reply_regval;
+ struct {
+ mmr_t enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_enable_rp_auto_reply_s;
+} sh_enable_rp_auto_reply_u_t;
+
+/* ==================================================================== */
+/* Register "SH_ENABLE_RQ_AUTO_REPLY" */
+/* Enable Automatic Maintenance Reply From Request Queue */
+/* ==================================================================== */
+
+typedef union sh_enable_rq_auto_reply_u {
+ mmr_t sh_enable_rq_auto_reply_regval;
+ struct {
+ mmr_t enable : 1;
+ mmr_t reserved_0 : 63;
+ } sh_enable_rq_auto_reply_s;
+} sh_enable_rq_auto_reply_u_t;
+
+/* ==================================================================== */
+/* Register "SH_REDIRECT_INVAL" */
+/* Redirect invalidate to LB instead of PI */
+/* ==================================================================== */
+
+typedef union sh_redirect_inval_u {
+ mmr_t sh_redirect_inval_regval;
+ struct {
+ mmr_t redirect : 1;
+ mmr_t reserved_0 : 63;
+ } sh_redirect_inval_s;
+} sh_redirect_inval_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_CNTRL" */
+/* Diagnostic Message Control Register */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_cntrl_u {
+ mmr_t sh_diag_msg_cntrl_regval;
+ struct {
+ mmr_t msg_length : 6;
+ mmr_t error_inject_point : 6;
+ mmr_t error_inject_enable : 1;
+ mmr_t port : 1;
+ mmr_t reserved_0 : 48;
+ mmr_t start : 1;
+ mmr_t busy : 1;
+ } sh_diag_msg_cntrl_s;
+} sh_diag_msg_cntrl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data0l_u {
+ mmr_t sh_diag_msg_data0l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data0l_s;
+} sh_diag_msg_data0l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA0U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data0u_u {
+ mmr_t sh_diag_msg_data0u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data0u_s;
+} sh_diag_msg_data0u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data1l_u {
+ mmr_t sh_diag_msg_data1l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data1l_s;
+} sh_diag_msg_data1l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA1U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data1u_u {
+ mmr_t sh_diag_msg_data1u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data1u_s;
+} sh_diag_msg_data1u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data2l_u {
+ mmr_t sh_diag_msg_data2l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data2l_s;
+} sh_diag_msg_data2l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA2U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data2u_u {
+ mmr_t sh_diag_msg_data2u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data2u_s;
+} sh_diag_msg_data2u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data3l_u {
+ mmr_t sh_diag_msg_data3l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data3l_s;
+} sh_diag_msg_data3l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA3U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data3u_u {
+ mmr_t sh_diag_msg_data3u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data3u_s;
+} sh_diag_msg_data3u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data4l_u {
+ mmr_t sh_diag_msg_data4l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data4l_s;
+} sh_diag_msg_data4l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA4U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data4u_u {
+ mmr_t sh_diag_msg_data4u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data4u_s;
+} sh_diag_msg_data4u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data5l_u {
+ mmr_t sh_diag_msg_data5l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data5l_s;
+} sh_diag_msg_data5l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA5U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data5u_u {
+ mmr_t sh_diag_msg_data5u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data5u_s;
+} sh_diag_msg_data5u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data6l_u {
+ mmr_t sh_diag_msg_data6l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data6l_s;
+} sh_diag_msg_data6l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA6U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data6u_u {
+ mmr_t sh_diag_msg_data6u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data6u_s;
+} sh_diag_msg_data6u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data7l_u {
+ mmr_t sh_diag_msg_data7l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data7l_s;
+} sh_diag_msg_data7l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA7U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data7u_u {
+ mmr_t sh_diag_msg_data7u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data7u_s;
+} sh_diag_msg_data7u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8L" */
+/* Diagnostic Data, lower 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data8l_u {
+ mmr_t sh_diag_msg_data8l_regval;
+ struct {
+ mmr_t data_lower : 64;
+ } sh_diag_msg_data8l_s;
+} sh_diag_msg_data8l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_DATA8U" */
+/* Diagnostic Data, upper 64 bits */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_data8u_u {
+ mmr_t sh_diag_msg_data8u_regval;
+ struct {
+ mmr_t data_upper : 64;
+ } sh_diag_msg_data8u_s;
+} sh_diag_msg_data8u_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR0" */
+/* Diagnostic Data, lower 64 bits of header */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_hdr0_u {
+ mmr_t sh_diag_msg_hdr0_regval;
+ struct {
+ mmr_t header0 : 64;
+ } sh_diag_msg_hdr0_s;
+} sh_diag_msg_hdr0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIAG_MSG_HDR1" */
+/* Diagnostic Data, upper 64 bits of header */
+/* ==================================================================== */
+
+typedef union sh_diag_msg_hdr1_u {
+ mmr_t sh_diag_msg_hdr1_regval;
+ struct {
+ mmr_t header1 : 64;
+ } sh_diag_msg_hdr1_s;
+} sh_diag_msg_hdr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DEBUG_SELECT" */
+/* SHub Debug Port Select */
+/* ==================================================================== */
+
+typedef union sh_debug_select_u {
+ mmr_t sh_debug_select_regval;
+ struct {
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t nibble0_chiplet_sel : 3;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t nibble1_chiplet_sel : 3;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t nibble2_chiplet_sel : 3;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t nibble3_chiplet_sel : 3;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t nibble4_chiplet_sel : 3;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t nibble5_chiplet_sel : 3;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t nibble6_chiplet_sel : 3;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t nibble7_chiplet_sel : 3;
+ mmr_t debug_ii_sel : 3;
+ mmr_t sel_ii : 9;
+ mmr_t reserved_0 : 3;
+ mmr_t trigger_enable : 1;
+ } sh_debug_select_s;
+} sh_debug_select_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TRIGGER_COMPARE_MASK" */
+/* SHub Trigger Compare Mask */
+/* ==================================================================== */
+
+typedef union sh_trigger_compare_mask_u {
+ mmr_t sh_trigger_compare_mask_regval;
+ struct {
+ mmr_t mask : 32;
+ mmr_t reserved_0 : 32;
+ } sh_trigger_compare_mask_s;
+} sh_trigger_compare_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TRIGGER_COMPARE_PATTERN" */
+/* SHub Trigger Compare Pattern */
+/* ==================================================================== */
+
+typedef union sh_trigger_compare_pattern_u {
+ mmr_t sh_trigger_compare_pattern_regval;
+ struct {
+ mmr_t data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_trigger_compare_pattern_s;
+} sh_trigger_compare_pattern_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TRIGGER_SEL" */
+/* Trigger select for SHub debug port */
+/* ==================================================================== */
+
+typedef union sh_trigger_sel_u {
+ mmr_t sh_trigger_sel_regval;
+ struct {
+ mmr_t nibble0_input_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_input_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_input_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_input_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble_sel : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_input_sel : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble_sel : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_input_sel : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble_sel : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_input_sel : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble_sel : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_input_sel : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble_sel : 3;
+ mmr_t reserved_15 : 1;
+ } sh_trigger_sel_s;
+} sh_trigger_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_STOP_CLK_CONTROL" */
+/* Stop Clock Control */
+/* ==================================================================== */
+
+typedef union sh_stop_clk_control_u {
+ mmr_t sh_stop_clk_control_regval;
+ struct {
+ mmr_t stimulus : 5;
+ mmr_t event : 1;
+ mmr_t polarity : 1;
+ mmr_t mode : 1;
+ mmr_t reserved_0 : 56;
+ } sh_stop_clk_control_s;
+} sh_stop_clk_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_STOP_CLK_DELAY_PHASE" */
+/* Stop Clock Delay Phase */
+/* ==================================================================== */
+
+typedef union sh_stop_clk_delay_phase_u {
+ mmr_t sh_stop_clk_delay_phase_regval;
+ struct {
+ mmr_t delay : 8;
+ mmr_t reserved_0 : 56;
+ } sh_stop_clk_delay_phase_s;
+} sh_stop_clk_delay_phase_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_ARM_MASK" */
+/* Trigger sequencing facility arm mask */
+/* ==================================================================== */
+
+typedef union sh_tsf_arm_mask_u {
+ mmr_t sh_tsf_arm_mask_regval;
+ struct {
+ mmr_t mask : 64;
+ } sh_tsf_arm_mask_s;
+} sh_tsf_arm_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_COUNTER_PRESETS" */
+/* Trigger sequencing facility counter presets */
+/* ==================================================================== */
+
+typedef union sh_tsf_counter_presets_u {
+ mmr_t sh_tsf_counter_presets_regval;
+ struct {
+ mmr_t count_32 : 32;
+ mmr_t count_16 : 16;
+ mmr_t count_8b : 8;
+ mmr_t count_8a : 8;
+ } sh_tsf_counter_presets_s;
+} sh_tsf_counter_presets_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_DECREMENT_CTL" */
+/* Trigger sequencing facility counter decrement control */
+/* ==================================================================== */
+
+typedef union sh_tsf_decrement_ctl_u {
+ mmr_t sh_tsf_decrement_ctl_regval;
+ struct {
+ mmr_t ctl : 16;
+ mmr_t reserved_0 : 48;
+ } sh_tsf_decrement_ctl_s;
+} sh_tsf_decrement_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_DIAG_MSG_CTL" */
+/* Trigger sequencing facility diagnostic message control */
+/* ==================================================================== */
+
+typedef union sh_tsf_diag_msg_ctl_u {
+ mmr_t sh_tsf_diag_msg_ctl_regval;
+ struct {
+ mmr_t enable : 8;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_diag_msg_ctl_s;
+} sh_tsf_diag_msg_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_DISARM_MASK" */
+/* Trigger sequencing facility disarm mask */
+/* ==================================================================== */
+
+typedef union sh_tsf_disarm_mask_u {
+ mmr_t sh_tsf_disarm_mask_regval;
+ struct {
+ mmr_t mask : 64;
+ } sh_tsf_disarm_mask_s;
+} sh_tsf_disarm_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_ENABLE_CTL" */
+/* Trigger sequencing facility counter enable control */
+/* ==================================================================== */
+
+typedef union sh_tsf_enable_ctl_u {
+ mmr_t sh_tsf_enable_ctl_regval;
+ struct {
+ mmr_t ctl : 16;
+ mmr_t reserved_0 : 48;
+ } sh_tsf_enable_ctl_s;
+} sh_tsf_enable_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_ARM" */
+/* Trigger sequencing facility software arm */
+/* ==================================================================== */
+
+typedef union sh_tsf_software_arm_u {
+ mmr_t sh_tsf_software_arm_regval;
+ struct {
+ mmr_t bit0 : 1;
+ mmr_t bit1 : 1;
+ mmr_t bit2 : 1;
+ mmr_t bit3 : 1;
+ mmr_t bit4 : 1;
+ mmr_t bit5 : 1;
+ mmr_t bit6 : 1;
+ mmr_t bit7 : 1;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_software_arm_s;
+} sh_tsf_software_arm_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_DISARM" */
+/* Trigger sequencing facility software disarm */
+/* ==================================================================== */
+
+typedef union sh_tsf_software_disarm_u {
+ mmr_t sh_tsf_software_disarm_regval;
+ struct {
+ mmr_t bit0 : 1;
+ mmr_t bit1 : 1;
+ mmr_t bit2 : 1;
+ mmr_t bit3 : 1;
+ mmr_t bit4 : 1;
+ mmr_t bit5 : 1;
+ mmr_t bit6 : 1;
+ mmr_t bit7 : 1;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_software_disarm_s;
+} sh_tsf_software_disarm_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_SOFTWARE_TRIGGERED" */
+/* Trigger sequencing facility software triggered */
+/* ==================================================================== */
+
+typedef union sh_tsf_software_triggered_u {
+ mmr_t sh_tsf_software_triggered_regval;
+ struct {
+ mmr_t bit0 : 1;
+ mmr_t bit1 : 1;
+ mmr_t bit2 : 1;
+ mmr_t bit3 : 1;
+ mmr_t bit4 : 1;
+ mmr_t bit5 : 1;
+ mmr_t bit6 : 1;
+ mmr_t bit7 : 1;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_software_triggered_s;
+} sh_tsf_software_triggered_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_TRIGGER_MASK" */
+/* Trigger sequencing facility trigger mask */
+/* ==================================================================== */
+
+typedef union sh_tsf_trigger_mask_u {
+ mmr_t sh_tsf_trigger_mask_regval;
+ struct {
+ mmr_t mask : 64;
+ } sh_tsf_trigger_mask_s;
+} sh_tsf_trigger_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_DATA" */
+/* Vector Write Request Message Data */
+/* ==================================================================== */
+
+typedef union sh_vec_data_u {
+ mmr_t sh_vec_data_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_vec_data_s;
+} sh_vec_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_PARMS" */
+/* Vector Message Parameters Register */
+/* ==================================================================== */
+
+typedef union sh_vec_parms_u {
+ mmr_t sh_vec_parms_regval;
+ struct {
+ mmr_t type : 1;
+ mmr_t ni_port : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t address : 32;
+ mmr_t pio_id : 11;
+ mmr_t reserved_1 : 16;
+ mmr_t start : 1;
+ mmr_t busy : 1;
+ } sh_vec_parms_s;
+} sh_vec_parms_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_ROUTE" */
+/* Vector Request Message Route */
+/* ==================================================================== */
+
+typedef union sh_vec_route_u {
+ mmr_t sh_vec_route_regval;
+ struct {
+ mmr_t route : 64;
+ } sh_vec_route_s;
+} sh_vec_route_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CPU_PERM" */
+/* CPU MMR Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_cpu_perm_u {
+ mmr_t sh_cpu_perm_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_cpu_perm_s;
+} sh_cpu_perm_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CPU_PERM_OVR" */
+/* CPU MMR Access Permission Override */
+/* ==================================================================== */
+
+typedef union sh_cpu_perm_ovr_u {
+ mmr_t sh_cpu_perm_ovr_regval;
+ struct {
+ mmr_t override : 64;
+ } sh_cpu_perm_ovr_s;
+} sh_cpu_perm_ovr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_EXT_IO_PERM" */
+/* External IO MMR Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_ext_io_perm_u {
+ mmr_t sh_ext_io_perm_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_ext_io_perm_s;
+} sh_ext_io_perm_u_t;
+
+/* ==================================================================== */
+/* Register "SH_EXT_IOI_ACCESS" */
+/* External IO Interrupt Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_ext_ioi_access_u {
+ mmr_t sh_ext_ioi_access_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_ext_ioi_access_s;
+} sh_ext_ioi_access_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GC_FIL_CTRL" */
+/* SHub Global Clock Filter Control */
+/* ==================================================================== */
+
+typedef union sh_gc_fil_ctrl_u {
+ mmr_t sh_gc_fil_ctrl_regval;
+ struct {
+ mmr_t offset : 5;
+ mmr_t reserved_0 : 3;
+ mmr_t mask_counter : 12;
+ mmr_t mask_enable : 1;
+ mmr_t reserved_1 : 3;
+ mmr_t dropout_counter : 10;
+ mmr_t reserved_2 : 2;
+ mmr_t dropout_thresh : 10;
+ mmr_t reserved_3 : 2;
+ mmr_t error_counter : 10;
+ mmr_t reserved_4 : 6;
+ } sh_gc_fil_ctrl_s;
+} sh_gc_fil_ctrl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_GC_SRC_CTRL" */
+/* SHub Global Clock Control */
+/* ==================================================================== */
+
+typedef union sh_gc_src_ctrl_u {
+ mmr_t sh_gc_src_ctrl_regval;
+ struct {
+ mmr_t enable_counter : 1;
+ mmr_t reserved_0 : 3;
+ mmr_t max_count : 10;
+ mmr_t reserved_1 : 2;
+ mmr_t counter : 10;
+ mmr_t reserved_2 : 2;
+ mmr_t toggle_bit : 1;
+ mmr_t reserved_3 : 3;
+ mmr_t source_sel : 2;
+ mmr_t reserved_4 : 30;
+ } sh_gc_src_ctrl_s;
+} sh_gc_src_ctrl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_HARD_RESET" */
+/* SHub Hard Reset */
+/* ==================================================================== */
+
+typedef union sh_hard_reset_u {
+ mmr_t sh_hard_reset_regval;
+ struct {
+ mmr_t hard_reset : 1;
+ mmr_t reserved_0 : 63;
+ } sh_hard_reset_s;
+} sh_hard_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IO_PERM" */
+/* II MMR Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_io_perm_u {
+ mmr_t sh_io_perm_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_io_perm_s;
+} sh_io_perm_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IOI_ACCESS" */
+/* II Interrupt Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_ioi_access_u {
+ mmr_t sh_ioi_access_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_ioi_access_s;
+} sh_ioi_access_u_t;
+
+/* ==================================================================== */
+/* Register "SH_IPI_ACCESS" */
+/* CPU interrupt Access Permission Bits */
+/* ==================================================================== */
+
+typedef union sh_ipi_access_u {
+ mmr_t sh_ipi_access_regval;
+ struct {
+ mmr_t access_bits : 64;
+ } sh_ipi_access_s;
+} sh_ipi_access_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JTAG_CONFIG" */
+/* SHub JTAG configuration */
+/* ==================================================================== */
+
+typedef union sh_jtag_config_u {
+ mmr_t sh_jtag_config_regval;
+ struct {
+ mmr_t md_clk_sel : 2;
+ mmr_t ni_clk_sel : 1;
+ mmr_t ii_clk_sel : 2;
+ mmr_t wrt90_target : 14;
+ mmr_t wrt90_overrider : 1;
+ mmr_t wrt90_override : 1;
+ mmr_t jtag_mci_reset_delay : 4;
+ mmr_t jtag_mci_target : 14;
+ mmr_t jtag_mci_override : 1;
+ mmr_t fsb_config_ioq_depth : 1;
+ mmr_t fsb_config_sample_binit : 1;
+ mmr_t fsb_config_enable_bus_parking : 1;
+ mmr_t fsb_config_clock_ratio : 5;
+ mmr_t fsb_config_output_tristate : 4;
+ mmr_t fsb_config_enable_bist : 1;
+ mmr_t fsb_config_aux : 2;
+ mmr_t gtl_config_re : 1;
+ mmr_t reserved_0 : 8;
+ } sh_jtag_config_s;
+} sh_jtag_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SHUB_ID" */
+/* SHub ID Number */
+/* ==================================================================== */
+
+typedef union sh_shub_id_u {
+ mmr_t sh_shub_id_regval;
+ struct {
+ mmr_t force1 : 1;
+ mmr_t manufacturer : 11;
+ mmr_t part_number : 16;
+ mmr_t revision : 4;
+ mmr_t node_id : 11;
+ mmr_t reserved_0 : 1;
+ mmr_t sharing_mode : 2;
+ mmr_t reserved_1 : 2;
+ mmr_t nodes_per_bit : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t ni_port : 1;
+ mmr_t reserved_3 : 7;
+ } sh_shub_id_s;
+} sh_shub_id_u_t;
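/*
 * Illustrative sketch, not part of the generated register list: the unions
 * above are meant to be used by assigning the raw 64-bit MMR value to the
 * *_regval member and reading decoded fields from the *_s struct.  The same
 * field can also be extracted with explicit shift/mask arithmetic, assuming
 * the usual little-endian bit-field allocation (bits assigned from bit 0 in
 * declaration order): in SH_SHUB_ID, force1/manufacturer/part_number occupy
 * bits 0..27, so revision sits in bits 28..31.  The helper name below is
 * hypothetical, added only for illustration.
 */
static inline unsigned sh_shub_id_revision(unsigned long long regval)
{
	/* revision is the 4-bit field starting at bit 28 (1 + 11 + 16). */
	return (unsigned)((regval >> 28) & 0xfULL);
}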
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT0" */
+/* Shubs 0 - 63 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+typedef union sh_shubs_present0_u {
+ mmr_t sh_shubs_present0_regval;
+ struct {
+ mmr_t shubs_present0 : 64;
+ } sh_shubs_present0_s;
+} sh_shubs_present0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT1" */
+/* Shubs 64 - 127 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+typedef union sh_shubs_present1_u {
+ mmr_t sh_shubs_present1_regval;
+ struct {
+ mmr_t shubs_present1 : 64;
+ } sh_shubs_present1_s;
+} sh_shubs_present1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT2" */
+/* Shubs 128 - 191 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+typedef union sh_shubs_present2_u {
+ mmr_t sh_shubs_present2_regval;
+ struct {
+ mmr_t shubs_present2 : 64;
+ } sh_shubs_present2_s;
+} sh_shubs_present2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SHUBS_PRESENT3" */
+/* Shubs 192 - 255 Present. Used for invalidate generation */
+/* ==================================================================== */
+
+typedef union sh_shubs_present3_u {
+ mmr_t sh_shubs_present3_regval;
+ struct {
+ mmr_t shubs_present3 : 64;
+ } sh_shubs_present3_s;
+} sh_shubs_present3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SOFT_RESET" */
+/* SHub Soft Reset */
+/* ==================================================================== */
+
+typedef union sh_soft_reset_u {
+ mmr_t sh_soft_reset_regval;
+ struct {
+ mmr_t soft_reset : 1;
+ mmr_t reserved_0 : 63;
+ } sh_soft_reset_s;
+} sh_soft_reset_u_t;
+
+/* ==================================================================== */
+/* Register "SH_FIRST_ERROR" */
+/* Shub Global First Error Flags */
+/* ==================================================================== */
+
+typedef union sh_first_error_u {
+ mmr_t sh_first_error_regval;
+ struct {
+ mmr_t first_error : 19;
+ mmr_t reserved_0 : 45;
+ } sh_first_error_s;
+} sh_first_error_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_HW_TIME_STAMP" */
+/* II hardware error time stamp */
+/* ==================================================================== */
+
+typedef union sh_ii_hw_time_stamp_u {
+ mmr_t sh_ii_hw_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_ii_hw_time_stamp_s;
+} sh_ii_hw_time_stamp_u_t;
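/*
 * Illustrative sketch, not part of the generated register list: every
 * *_TIME_STAMP register in this group shares the layout above -- a 63-bit
 * time value in bits 0..62 and a valid flag in bit 63.  Assuming the usual
 * little-endian bit-field allocation, a raw register value can therefore be
 * decoded without the per-register unions.  These helper names are
 * hypothetical, added only for illustration.
 */
static inline int sh_time_stamp_valid(unsigned long long regval)
{
	/* valid flag is the top bit (bit 63). */
	return (int)(regval >> 63);
}

static inline unsigned long long sh_time_stamp_time(unsigned long long regval)
{
	/* time occupies bits 0..62; mask off the valid bit. */
	return regval & ~(1ULL << 63);
}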
+
+/* ==================================================================== */
+/* Register "SH_LB_HW_TIME_STAMP" */
+/* LB hardware error time stamp */
+/* ==================================================================== */
+
+typedef union sh_lb_hw_time_stamp_u {
+ mmr_t sh_lb_hw_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_lb_hw_time_stamp_s;
+} sh_lb_hw_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_COR_TIME_STAMP" */
+/* MD correctable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_md_cor_time_stamp_u {
+ mmr_t sh_md_cor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_md_cor_time_stamp_s;
+} sh_md_cor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_HW_TIME_STAMP" */
+/* MD hardware error time stamp */
+/* ==================================================================== */
+
+typedef union sh_md_hw_time_stamp_u {
+ mmr_t sh_md_hw_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_md_hw_time_stamp_s;
+} sh_md_hw_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_UNCOR_TIME_STAMP" */
+/* MD uncorrectable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_md_uncor_time_stamp_u {
+ mmr_t sh_md_uncor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_md_uncor_time_stamp_s;
+} sh_md_uncor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_COR_TIME_STAMP" */
+/* PI correctable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_pi_cor_time_stamp_u {
+ mmr_t sh_pi_cor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_pi_cor_time_stamp_s;
+} sh_pi_cor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_HW_TIME_STAMP" */
+/* PI hardware error time stamp */
+/* ==================================================================== */
+
+typedef union sh_pi_hw_time_stamp_u {
+ mmr_t sh_pi_hw_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_pi_hw_time_stamp_s;
+} sh_pi_hw_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_UNCOR_TIME_STAMP" */
+/* PI uncorrectable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_pi_uncor_time_stamp_u {
+ mmr_t sh_pi_uncor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_pi_uncor_time_stamp_s;
+} sh_pi_uncor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ADV_TIME_STAMP" */
+/* Proc 0 advisory time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc0_adv_time_stamp_u {
+ mmr_t sh_proc0_adv_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc0_adv_time_stamp_s;
+} sh_proc0_adv_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC0_ERR_TIME_STAMP" */
+/* Proc 0 error time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc0_err_time_stamp_u {
+ mmr_t sh_proc0_err_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc0_err_time_stamp_s;
+} sh_proc0_err_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ADV_TIME_STAMP" */
+/* Proc 1 advisory time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc1_adv_time_stamp_u {
+ mmr_t sh_proc1_adv_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc1_adv_time_stamp_s;
+} sh_proc1_adv_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC1_ERR_TIME_STAMP" */
+/* Proc 1 error time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc1_err_time_stamp_u {
+ mmr_t sh_proc1_err_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc1_err_time_stamp_s;
+} sh_proc1_err_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ADV_TIME_STAMP" */
+/* Proc 2 advisory time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc2_adv_time_stamp_u {
+ mmr_t sh_proc2_adv_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc2_adv_time_stamp_s;
+} sh_proc2_adv_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC2_ERR_TIME_STAMP" */
+/* Proc 2 error time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc2_err_time_stamp_u {
+ mmr_t sh_proc2_err_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc2_err_time_stamp_s;
+} sh_proc2_err_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ADV_TIME_STAMP" */
+/* Proc 3 advisory time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc3_adv_time_stamp_u {
+ mmr_t sh_proc3_adv_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc3_adv_time_stamp_s;
+} sh_proc3_adv_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROC3_ERR_TIME_STAMP" */
+/* Proc 3 error time stamp */
+/* ==================================================================== */
+
+typedef union sh_proc3_err_time_stamp_u {
+ mmr_t sh_proc3_err_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_proc3_err_time_stamp_s;
+} sh_proc3_err_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_COR_TIME_STAMP" */
+/* XN correctable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_xn_cor_time_stamp_u {
+ mmr_t sh_xn_cor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_xn_cor_time_stamp_s;
+} sh_xn_cor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_HW_TIME_STAMP" */
+/* XN hardware error time stamp */
+/* ==================================================================== */
+
+typedef union sh_xn_hw_time_stamp_u {
+ mmr_t sh_xn_hw_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_xn_hw_time_stamp_s;
+} sh_xn_hw_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_UNCOR_TIME_STAMP" */
+/* XN uncorrectable error time stamp */
+/* ==================================================================== */
+
+typedef union sh_xn_uncor_time_stamp_u {
+ mmr_t sh_xn_uncor_time_stamp_regval;
+ struct {
+ mmr_t time : 63;
+ mmr_t valid : 1;
+ } sh_xn_uncor_time_stamp_s;
+} sh_xn_uncor_time_stamp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DEBUG_PORT" */
+/* SHub Debug Port */
+/* ==================================================================== */
+
+typedef union sh_debug_port_u {
+ mmr_t sh_debug_port_regval;
+ struct {
+ mmr_t debug_nibble0 : 4;
+ mmr_t debug_nibble1 : 4;
+ mmr_t debug_nibble2 : 4;
+ mmr_t debug_nibble3 : 4;
+ mmr_t debug_nibble4 : 4;
+ mmr_t debug_nibble5 : 4;
+ mmr_t debug_nibble6 : 4;
+ mmr_t debug_nibble7 : 4;
+ mmr_t reserved_0 : 32;
+ } sh_debug_port_s;
+} sh_debug_port_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_DEBUG_DATA" */
+/* II Debug Data */
+/* ==================================================================== */
+
+typedef union sh_ii_debug_data_u {
+ mmr_t sh_ii_debug_data_regval;
+ struct {
+ mmr_t ii_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_ii_debug_data_s;
+} sh_ii_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_II_WRAP_DEBUG_DATA" */
+/* SHub II Wrapper Debug Data */
+/* ==================================================================== */
+
+typedef union sh_ii_wrap_debug_data_u {
+ mmr_t sh_ii_wrap_debug_data_regval;
+ struct {
+ mmr_t ii_wrap_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_ii_wrap_debug_data_s;
+} sh_ii_wrap_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_LB_DEBUG_DATA" */
+/* SHub LB Debug Data */
+/* ==================================================================== */
+
+typedef union sh_lb_debug_data_u {
+ mmr_t sh_lb_debug_data_regval;
+ struct {
+ mmr_t lb_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_lb_debug_data_s;
+} sh_lb_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DEBUG_DATA" */
+/* SHub MD Debug Data */
+/* ==================================================================== */
+
+typedef union sh_md_debug_data_u {
+ mmr_t sh_md_debug_data_regval;
+ struct {
+ mmr_t md_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_md_debug_data_s;
+} sh_md_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_DEBUG_DATA" */
+/* SHub PI Debug Data */
+/* ==================================================================== */
+
+typedef union sh_pi_debug_data_u {
+ mmr_t sh_pi_debug_data_regval;
+ struct {
+ mmr_t pi_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_pi_debug_data_s;
+} sh_pi_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_DEBUG_DATA" */
+/* SHub XN Debug Data */
+/* ==================================================================== */
+
+typedef union sh_xn_debug_data_u {
+ mmr_t sh_xn_debug_data_regval;
+ struct {
+ mmr_t xn_data : 32;
+ mmr_t reserved_0 : 32;
+ } sh_xn_debug_data_s;
+} sh_xn_debug_data_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_ARMED_STATE" */
+/* Trigger sequencing facility arm state */
+/* ==================================================================== */
+
+typedef union sh_tsf_armed_state_u {
+ mmr_t sh_tsf_armed_state_regval;
+ struct {
+ mmr_t state : 8;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_armed_state_s;
+} sh_tsf_armed_state_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_COUNTER_VALUE" */
+/* Trigger sequencing facility counter value */
+/* ==================================================================== */
+
+typedef union sh_tsf_counter_value_u {
+ mmr_t sh_tsf_counter_value_regval;
+ struct {
+ mmr_t count_32 : 32;
+ mmr_t count_16 : 16;
+ mmr_t count_8b : 8;
+ mmr_t count_8a : 8;
+ } sh_tsf_counter_value_s;
+} sh_tsf_counter_value_u_t;
+
+/* ==================================================================== */
+/* Register "SH_TSF_TRIGGERED_STATE" */
+/* Trigger sequencing facility triggered state */
+/* ==================================================================== */
+
+typedef union sh_tsf_triggered_state_u {
+ mmr_t sh_tsf_triggered_state_regval;
+ struct {
+ mmr_t state : 8;
+ mmr_t reserved_0 : 56;
+ } sh_tsf_triggered_state_s;
+} sh_tsf_triggered_state_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_RDDATA" */
+/* Vector Reply Message Data */
+/* ==================================================================== */
+
+typedef union sh_vec_rddata_u {
+ mmr_t sh_vec_rddata_regval;
+ struct {
+ mmr_t data : 64;
+ } sh_vec_rddata_s;
+} sh_vec_rddata_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_RETURN" */
+/* Vector Reply Message Return Route */
+/* ==================================================================== */
+
+typedef union sh_vec_return_u {
+ mmr_t sh_vec_return_regval;
+ struct {
+ mmr_t route : 64;
+ } sh_vec_return_s;
+} sh_vec_return_u_t;
+
+/* ==================================================================== */
+/* Register "SH_VEC_STATUS" */
+/* Vector Reply Message Status */
+/* ==================================================================== */
+
+typedef union sh_vec_status_u {
+ mmr_t sh_vec_status_regval;
+ struct {
+ mmr_t type : 3;
+ mmr_t address : 32;
+ mmr_t pio_id : 11;
+ mmr_t source : 14;
+ mmr_t reserved_0 : 2;
+ mmr_t overrun : 1;
+ mmr_t status_valid : 1;
+ } sh_vec_status_s;
+} sh_vec_status_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT0_CONTROL" */
+/* Performance Counter 0 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count0_control_u {
+ mmr_t sh_performance_count0_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count0_control_s;
+} sh_performance_count0_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT1_CONTROL" */
+/* Performance Counter 1 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count1_control_u {
+ mmr_t sh_performance_count1_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count1_control_s;
+} sh_performance_count1_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT2_CONTROL" */
+/* Performance Counter 2 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count2_control_u {
+ mmr_t sh_performance_count2_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count2_control_s;
+} sh_performance_count2_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT3_CONTROL" */
+/* Performance Counter 3 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count3_control_u {
+ mmr_t sh_performance_count3_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count3_control_s;
+} sh_performance_count3_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT4_CONTROL" */
+/* Performance Counter 4 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count4_control_u {
+ mmr_t sh_performance_count4_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count4_control_s;
+} sh_performance_count4_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT5_CONTROL" */
+/* Performance Counter 5 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count5_control_u {
+ mmr_t sh_performance_count5_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count5_control_s;
+} sh_performance_count5_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT6_CONTROL" */
+/* Performance Counter 6 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count6_control_u {
+ mmr_t sh_performance_count6_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count6_control_s;
+} sh_performance_count6_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNT7_CONTROL" */
+/* Performance Counter 7 Control */
+/* ==================================================================== */
+
+typedef union sh_performance_count7_control_u {
+ mmr_t sh_performance_count7_control_regval;
+ struct {
+ mmr_t up_stimulus : 5;
+ mmr_t up_event : 1;
+ mmr_t up_polarity : 1;
+ mmr_t up_mode : 1;
+ mmr_t dn_stimulus : 5;
+ mmr_t dn_event : 1;
+ mmr_t dn_polarity : 1;
+ mmr_t dn_mode : 1;
+ mmr_t inc_enable : 1;
+ mmr_t dec_enable : 1;
+ mmr_t peak_det_enable : 1;
+ mmr_t reserved_0 : 45;
+ } sh_performance_count7_control_s;
+} sh_performance_count7_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_DN_CONTROL" */
+/* Profile Counter Down Control */
+/* ==================================================================== */
+
+typedef union sh_profile_dn_control_u {
+ mmr_t sh_profile_dn_control_regval;
+ struct {
+ mmr_t stimulus : 5;
+ mmr_t event : 1;
+ mmr_t polarity : 1;
+ mmr_t mode : 1;
+ mmr_t reserved_0 : 56;
+ } sh_profile_dn_control_s;
+} sh_profile_dn_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_PEAK_CONTROL" */
+/* Profile Counter Peak Control */
+/* ==================================================================== */
+
+typedef union sh_profile_peak_control_u {
+ mmr_t sh_profile_peak_control_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t stimulus : 1;
+ mmr_t reserved_1 : 1;
+ mmr_t event : 1;
+ mmr_t polarity : 1;
+ mmr_t reserved_2 : 57;
+ } sh_profile_peak_control_s;
+} sh_profile_peak_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_RANGE" */
+/* Profile Counter Range */
+/* ==================================================================== */
+
+typedef union sh_profile_range_u {
+ mmr_t sh_profile_range_regval;
+ struct {
+ mmr_t range0 : 8;
+ mmr_t range1 : 8;
+ mmr_t range2 : 8;
+ mmr_t range3 : 8;
+ mmr_t range4 : 8;
+ mmr_t range5 : 8;
+ mmr_t range6 : 8;
+ mmr_t range7 : 8;
+ } sh_profile_range_s;
+} sh_profile_range_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_UP_CONTROL" */
+/* Profile Counter Up Control */
+/* ==================================================================== */
+
+typedef union sh_profile_up_control_u {
+ mmr_t sh_profile_up_control_regval;
+ struct {
+ mmr_t stimulus : 5;
+ mmr_t event : 1;
+ mmr_t polarity : 1;
+ mmr_t mode : 1;
+ mmr_t reserved_0 : 56;
+ } sh_profile_up_control_s;
+} sh_profile_up_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER0" */
+/* Performance Counter 0 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter0_u {
+ mmr_t sh_performance_counter0_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter0_s;
+} sh_performance_counter0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER1" */
+/* Performance Counter 1 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter1_u {
+ mmr_t sh_performance_counter1_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter1_s;
+} sh_performance_counter1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER2" */
+/* Performance Counter 2 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter2_u {
+ mmr_t sh_performance_counter2_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter2_s;
+} sh_performance_counter2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER3" */
+/* Performance Counter 3 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter3_u {
+ mmr_t sh_performance_counter3_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter3_s;
+} sh_performance_counter3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER4" */
+/* Performance Counter 4 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter4_u {
+ mmr_t sh_performance_counter4_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter4_s;
+} sh_performance_counter4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER5" */
+/* Performance Counter 5 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter5_u {
+ mmr_t sh_performance_counter5_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter5_s;
+} sh_performance_counter5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER6" */
+/* Performance Counter 6 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter6_u {
+ mmr_t sh_performance_counter6_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter6_s;
+} sh_performance_counter6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PERFORMANCE_COUNTER7" */
+/* Performance Counter 7 */
+/* ==================================================================== */
+
+typedef union sh_performance_counter7_u {
+ mmr_t sh_performance_counter7_regval;
+ struct {
+ mmr_t count : 32;
+ mmr_t reserved_0 : 32;
+ } sh_performance_counter7_s;
+} sh_performance_counter7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_COUNTER" */
+/* Profile Counter */
+/* ==================================================================== */
+
+typedef union sh_profile_counter_u {
+ mmr_t sh_profile_counter_regval;
+ struct {
+ mmr_t counter : 8;
+ mmr_t reserved_0 : 56;
+ } sh_profile_counter_s;
+} sh_profile_counter_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PROFILE_PEAK" */
+/* Profile Peak Counter */
+/* ==================================================================== */
+
+typedef union sh_profile_peak_u {
+ mmr_t sh_profile_peak_regval;
+ struct {
+ mmr_t counter : 8;
+ mmr_t reserved_0 : 56;
+ } sh_profile_peak_s;
+} sh_profile_peak_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PTC_0" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+
+typedef union sh_ptc_0_u {
+ mmr_t sh_ptc_0_regval;
+ struct {
+ mmr_t a : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t ps : 6;
+ mmr_t rid : 24;
+ mmr_t reserved_1 : 31;
+ mmr_t start : 1;
+ } sh_ptc_0_s;
+} sh_ptc_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PTC_1" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+
+typedef union sh_ptc_1_u {
+ mmr_t sh_ptc_1_regval;
+ struct {
+ mmr_t reserved_0 : 12;
+ mmr_t vpn : 49;
+ mmr_t reserved_1 : 2;
+ mmr_t start : 1;
+ } sh_ptc_1_s;
+} sh_ptc_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PTC_PARMS" */
+/* PTC Time-out parameters */
+/* ==================================================================== */
+
+typedef union sh_ptc_parms_u {
+ mmr_t sh_ptc_parms_regval;
+ struct {
+ mmr_t ptc_to_wrap : 24;
+ mmr_t ptc_to_val : 12;
+ mmr_t reserved_0 : 28;
+ } sh_ptc_parms_s;
+} sh_ptc_parms_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPA" */
+/* RTC Compare Value for Processor A */
+/* ==================================================================== */
+
+typedef union sh_int_cmpa_u {
+ mmr_t sh_int_cmpa_regval;
+ struct {
+ mmr_t real_time_cmpa : 55;
+ mmr_t reserved_0 : 9;
+ } sh_int_cmpa_s;
+} sh_int_cmpa_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPB" */
+/* RTC Compare Value for Processor B */
+/* ==================================================================== */
+
+typedef union sh_int_cmpb_u {
+ mmr_t sh_int_cmpb_regval;
+ struct {
+ mmr_t real_time_cmpb : 55;
+ mmr_t reserved_0 : 9;
+ } sh_int_cmpb_s;
+} sh_int_cmpb_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPC" */
+/* RTC Compare Value for Processor C */
+/* ==================================================================== */
+
+typedef union sh_int_cmpc_u {
+ mmr_t sh_int_cmpc_regval;
+ struct {
+ mmr_t real_time_cmpc : 55;
+ mmr_t reserved_0 : 9;
+ } sh_int_cmpc_s;
+} sh_int_cmpc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPD" */
+/* RTC Compare Value for Processor D */
+/* ==================================================================== */
+
+typedef union sh_int_cmpd_u {
+ mmr_t sh_int_cmpd_regval;
+ struct {
+ mmr_t real_time_cmpd : 55;
+ mmr_t reserved_0 : 9;
+ } sh_int_cmpd_s;
+} sh_int_cmpd_u_t;
+
+/* ==================================================================== */
+/* Register "SH_INT_PROF" */
+/* Profile Compare Registers */
+/* ==================================================================== */
+
+typedef union sh_int_prof_u {
+ mmr_t sh_int_prof_regval;
+ struct {
+ mmr_t profile_compare : 32;
+ mmr_t reserved_0 : 32;
+ } sh_int_prof_s;
+} sh_int_prof_u_t;
+
+/* ==================================================================== */
+/* Register "SH_RTC" */
+/* Real-time Clock */
+/* ==================================================================== */
+
+typedef union sh_rtc_u {
+ mmr_t sh_rtc_regval;
+ struct {
+ mmr_t real_time_clock : 55;
+ mmr_t reserved_0 : 9;
+ } sh_rtc_s;
+} sh_rtc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH0" */
+/* Scratch Register 0 */
+/* ==================================================================== */
+
+typedef union sh_scratch0_u {
+ mmr_t sh_scratch0_regval;
+ struct {
+ mmr_t scratch0 : 64;
+ } sh_scratch0_s;
+} sh_scratch0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH1" */
+/* Scratch Register 1 */
+/* ==================================================================== */
+
+typedef union sh_scratch1_u {
+ mmr_t sh_scratch1_regval;
+ struct {
+ mmr_t scratch1 : 64;
+ } sh_scratch1_s;
+} sh_scratch1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH2" */
+/* Scratch Register 2 */
+/* ==================================================================== */
+
+typedef union sh_scratch2_u {
+ mmr_t sh_scratch2_regval;
+ struct {
+ mmr_t scratch2 : 64;
+ } sh_scratch2_s;
+} sh_scratch2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH3" */
+/* Scratch Register 3 */
+/* ==================================================================== */
+
+typedef union sh_scratch3_u {
+ mmr_t sh_scratch3_regval;
+ struct {
+ mmr_t scratch3 : 1;
+ mmr_t reserved_0 : 63;
+ } sh_scratch3_s;
+} sh_scratch3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_SCRATCH4" */
+/* Scratch Register 4 */
+/* ==================================================================== */
+
+typedef union sh_scratch4_u {
+ mmr_t sh_scratch4_regval;
+ struct {
+ mmr_t scratch4 : 1;
+ mmr_t reserved_0 : 63;
+ } sh_scratch4_s;
+} sh_scratch4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CRB_MESSAGE_CONTROL" */
+/* Coherent Request Buffer Message Control */
+/* ==================================================================== */
+
+typedef union sh_crb_message_control_u {
+ mmr_t sh_crb_message_control_regval;
+ struct {
+ mmr_t system_coherence_enable : 1;
+ mmr_t local_speculative_message_enable : 1;
+ mmr_t remote_speculative_message_enable : 1;
+ mmr_t message_color : 1;
+ mmr_t message_color_enable : 1;
+ mmr_t rrb_attribute_mismatch_fsb_enable : 1;
+ mmr_t wrb_attribute_mismatch_fsb_enable : 1;
+ mmr_t irb_attribute_mismatch_fsb_enable : 1;
+ mmr_t rrb_attribute_mismatch_xb_enable : 1;
+ mmr_t wrb_attribute_mismatch_xb_enable : 1;
+ mmr_t suppress_bogus_writes : 1;
+ mmr_t enable_ivack_consolidation : 1;
+ mmr_t reserved_0 : 20;
+ mmr_t ivack_stall_count : 16;
+ mmr_t ivack_throttle_control : 16;
+ } sh_crb_message_control_s;
+} sh_crb_message_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CRB_NACK_LIMIT" */
+/* CRB Nack Limit */
+/* ==================================================================== */
+
+typedef union sh_crb_nack_limit_u {
+ mmr_t sh_crb_nack_limit_regval;
+ struct {
+ mmr_t limit : 12;
+ mmr_t pri_freq : 4;
+ mmr_t reserved_0 : 47;
+ mmr_t enable : 1;
+ } sh_crb_nack_limit_s;
+} sh_crb_nack_limit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_PRESCALE" */
+/* Coherent Request Buffer Timeout Prescale */
+/* ==================================================================== */
+
+typedef union sh_crb_timeout_prescale_u {
+ mmr_t sh_crb_timeout_prescale_regval;
+ struct {
+ mmr_t scaling_factor : 32;
+ mmr_t reserved_0 : 32;
+ } sh_crb_timeout_prescale_s;
+} sh_crb_timeout_prescale_u_t;
+
+/* ==================================================================== */
+/* Register "SH_CRB_TIMEOUT_SKID" */
+/* Coherent Request Buffer Timeout Skid Limit */
+/* ==================================================================== */
+
+typedef union sh_crb_timeout_skid_u {
+ mmr_t sh_crb_timeout_skid_regval;
+ struct {
+ mmr_t skid : 6;
+ mmr_t reserved_0 : 57;
+ mmr_t reset_skid_count : 1;
+ } sh_crb_timeout_skid_s;
+} sh_crb_timeout_skid_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_0" */
+/* Memory Write Status for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_memory_write_status_0_u {
+ mmr_t sh_memory_write_status_0_regval;
+ struct {
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_0 : 58;
+ } sh_memory_write_status_0_s;
+} sh_memory_write_status_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_1" */
+/* Memory Write Status for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_memory_write_status_1_u {
+ mmr_t sh_memory_write_status_1_regval;
+ struct {
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_0 : 58;
+ } sh_memory_write_status_1_s;
+} sh_memory_write_status_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0" */
+/* PIO Write Status for CPU 0 */
+/* ==================================================================== */
+
+typedef union sh_pio_write_status_0_u {
+ mmr_t sh_pio_write_status_0_regval;
+ struct {
+ mmr_t multi_write_error : 1;
+ mmr_t write_deadlock : 1;
+ mmr_t write_error : 1;
+ mmr_t write_error_address : 47;
+ mmr_t reserved_0 : 6;
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t writes_ok : 1;
+ } sh_pio_write_status_0_s;
+} sh_pio_write_status_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_1" */
+/* PIO Write Status for CPU 1 */
+/* ==================================================================== */
+
+typedef union sh_pio_write_status_1_u {
+ mmr_t sh_pio_write_status_1_regval;
+ struct {
+ mmr_t multi_write_error : 1;
+ mmr_t write_deadlock : 1;
+ mmr_t write_error : 1;
+ mmr_t write_error_address : 47;
+ mmr_t reserved_0 : 6;
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_1 : 1;
+ mmr_t writes_ok : 1;
+ } sh_pio_write_status_1_s;
+} sh_pio_write_status_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_0" */
+/* Memory Write Status for CPU 0. OS access only */
+/* ==================================================================== */
+
+typedef union sh_memory_write_status_non_user_0_u {
+ mmr_t sh_memory_write_status_non_user_0_regval;
+ struct {
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_0 : 57;
+ mmr_t clear : 1;
+ } sh_memory_write_status_non_user_0_s;
+} sh_memory_write_status_non_user_0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_NON_USER_1" */
+/* Memory Write Status for CPU 1. OS access only */
+/* ==================================================================== */
+
+typedef union sh_memory_write_status_non_user_1_u {
+ mmr_t sh_memory_write_status_non_user_1_regval;
+ struct {
+ mmr_t pending_write_count : 6;
+ mmr_t reserved_0 : 57;
+ mmr_t clear : 1;
+ } sh_memory_write_status_non_user_1_s;
+} sh_memory_write_status_non_user_1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_ERR" */
+/* Error capture for BIST read errors */
+/* ==================================================================== */
+
+typedef union sh_mmrbist_err_u {
+ mmr_t sh_mmrbist_err_regval;
+ struct {
+ mmr_t addr : 33;
+ mmr_t reserved_0 : 3;
+ mmr_t detected : 1;
+ mmr_t multiple_detected : 1;
+ mmr_t cancelled : 1;
+ mmr_t reserved_1 : 25;
+ } sh_mmrbist_err_s;
+} sh_mmrbist_err_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MISC_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+typedef union sh_misc_err_hdr_lower_u {
+ mmr_t sh_misc_err_hdr_lower_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t write : 1;
+ mmr_t reserved_2 : 2;
+ mmr_t valid : 1;
+ } sh_misc_err_hdr_lower_s;
+} sh_misc_err_hdr_lower_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MISC_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+typedef union sh_misc_err_hdr_upper_u {
+ mmr_t sh_misc_err_hdr_upper_regval;
+ struct {
+ mmr_t dir_protocol : 1;
+ mmr_t illegal_cmd : 1;
+ mmr_t nonexist_addr : 1;
+ mmr_t rmw_uc : 1;
+ mmr_t rmw_cor : 1;
+ mmr_t dir_acc : 1;
+ mmr_t pi_pkt_size : 1;
+ mmr_t xn_pkt_size : 1;
+ mmr_t reserved_0 : 12;
+ mmr_t echo : 9;
+ mmr_t reserved_1 : 35;
+ } sh_misc_err_hdr_upper_s;
+} sh_misc_err_hdr_upper_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIR_UC_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+typedef union sh_dir_uc_err_hdr_lower_u {
+ mmr_t sh_dir_uc_err_hdr_lower_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t write : 1;
+ mmr_t reserved_2 : 2;
+ mmr_t valid : 1;
+ } sh_dir_uc_err_hdr_lower_s;
+} sh_dir_uc_err_hdr_lower_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIR_UC_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+typedef union sh_dir_uc_err_hdr_upper_u {
+ mmr_t sh_dir_uc_err_hdr_upper_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t dir_uc : 1;
+ mmr_t reserved_1 : 16;
+ mmr_t echo : 9;
+ mmr_t reserved_2 : 35;
+ } sh_dir_uc_err_hdr_upper_s;
+} sh_dir_uc_err_hdr_upper_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIR_COR_ERR_HDR_LOWER" */
+/* Header capture register */
+/* ==================================================================== */
+
+typedef union sh_dir_cor_err_hdr_lower_u {
+ mmr_t sh_dir_cor_err_hdr_lower_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t reserved_1 : 2;
+ mmr_t write : 1;
+ mmr_t reserved_2 : 2;
+ mmr_t valid : 1;
+ } sh_dir_cor_err_hdr_lower_s;
+} sh_dir_cor_err_hdr_lower_u_t;
+
+/* ==================================================================== */
+/* Register "SH_DIR_COR_ERR_HDR_UPPER" */
+/* Error header capture packet and protocol errors */
+/* ==================================================================== */
+
+typedef union sh_dir_cor_err_hdr_upper_u {
+ mmr_t sh_dir_cor_err_hdr_upper_regval;
+ struct {
+ mmr_t reserved_0 : 8;
+ mmr_t dir_cor : 1;
+ mmr_t reserved_1 : 11;
+ mmr_t echo : 9;
+ mmr_t reserved_2 : 35;
+ } sh_dir_cor_err_hdr_upper_s;
+} sh_dir_cor_err_hdr_upper_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_SUMMARY" */
+/* Memory error flags */
+/* ==================================================================== */
+
+typedef union sh_mem_error_summary_u {
+ mmr_t sh_mem_error_summary_regval;
+ struct {
+ mmr_t illegal_cmd : 1;
+ mmr_t nonexist_addr : 1;
+ mmr_t dqlp_dir_perr : 1;
+ mmr_t dqrp_dir_perr : 1;
+ mmr_t dqlp_dir_uc : 1;
+ mmr_t dqlp_dir_cor : 1;
+ mmr_t dqrp_dir_uc : 1;
+ mmr_t dqrp_dir_cor : 1;
+ mmr_t acx_int_hw : 1;
+ mmr_t acy_int_hw : 1;
+ mmr_t dir_acc : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t dqlp_int_uc : 1;
+ mmr_t dqlp_int_cor : 1;
+ mmr_t dqlp_int_hw : 1;
+ mmr_t reserved_1 : 1;
+ mmr_t dqls_int_uc : 1;
+ mmr_t dqls_int_cor : 1;
+ mmr_t dqls_int_hw : 1;
+ mmr_t reserved_2 : 1;
+ mmr_t dqrp_int_uc : 1;
+ mmr_t dqrp_int_cor : 1;
+ mmr_t dqrp_int_hw : 1;
+ mmr_t reserved_3 : 1;
+ mmr_t dqrs_int_uc : 1;
+ mmr_t dqrs_int_cor : 1;
+ mmr_t dqrs_int_hw : 1;
+ mmr_t reserved_4 : 1;
+ mmr_t pi_reply_overflow : 1;
+ mmr_t xn_reply_overflow : 1;
+ mmr_t pi_request_overflow : 1;
+ mmr_t xn_request_overflow : 1;
+ mmr_t red_black_err_timeout : 1;
+ mmr_t pi_pkt_size : 1;
+ mmr_t xn_pkt_size : 1;
+ mmr_t reserved_5 : 29;
+ } sh_mem_error_summary_s;
+} sh_mem_error_summary_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_OVERFLOW" */
+/* Memory error flags */
+/* ==================================================================== */
+
+typedef union sh_mem_error_overflow_u {
+ mmr_t sh_mem_error_overflow_regval;
+ struct {
+ mmr_t illegal_cmd : 1;
+ mmr_t nonexist_addr : 1;
+ mmr_t dqlp_dir_perr : 1;
+ mmr_t dqrp_dir_perr : 1;
+ mmr_t dqlp_dir_uc : 1;
+ mmr_t dqlp_dir_cor : 1;
+ mmr_t dqrp_dir_uc : 1;
+ mmr_t dqrp_dir_cor : 1;
+ mmr_t acx_int_hw : 1;
+ mmr_t acy_int_hw : 1;
+ mmr_t dir_acc : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t dqlp_int_uc : 1;
+ mmr_t dqlp_int_cor : 1;
+ mmr_t dqlp_int_hw : 1;
+ mmr_t reserved_1 : 1;
+ mmr_t dqls_int_uc : 1;
+ mmr_t dqls_int_cor : 1;
+ mmr_t dqls_int_hw : 1;
+ mmr_t reserved_2 : 1;
+ mmr_t dqrp_int_uc : 1;
+ mmr_t dqrp_int_cor : 1;
+ mmr_t dqrp_int_hw : 1;
+ mmr_t reserved_3 : 1;
+ mmr_t dqrs_int_uc : 1;
+ mmr_t dqrs_int_cor : 1;
+ mmr_t dqrs_int_hw : 1;
+ mmr_t reserved_4 : 1;
+ mmr_t pi_reply_overflow : 1;
+ mmr_t xn_reply_overflow : 1;
+ mmr_t pi_request_overflow : 1;
+ mmr_t xn_request_overflow : 1;
+ mmr_t red_black_err_timeout : 1;
+ mmr_t pi_pkt_size : 1;
+ mmr_t xn_pkt_size : 1;
+ mmr_t reserved_5 : 29;
+ } sh_mem_error_overflow_s;
+} sh_mem_error_overflow_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_ERROR_MASK" */
+/* Memory error flags */
+/* ==================================================================== */
+
+typedef union sh_mem_error_mask_u {
+ mmr_t sh_mem_error_mask_regval;
+ struct {
+ mmr_t illegal_cmd : 1;
+ mmr_t nonexist_addr : 1;
+ mmr_t dqlp_dir_perr : 1;
+ mmr_t dqrp_dir_perr : 1;
+ mmr_t dqlp_dir_uc : 1;
+ mmr_t dqlp_dir_cor : 1;
+ mmr_t dqrp_dir_uc : 1;
+ mmr_t dqrp_dir_cor : 1;
+ mmr_t acx_int_hw : 1;
+ mmr_t acy_int_hw : 1;
+ mmr_t dir_acc : 1;
+ mmr_t reserved_0 : 1;
+ mmr_t dqlp_int_uc : 1;
+ mmr_t dqlp_int_cor : 1;
+ mmr_t dqlp_int_hw : 1;
+ mmr_t reserved_1 : 1;
+ mmr_t dqls_int_uc : 1;
+ mmr_t dqls_int_cor : 1;
+ mmr_t dqls_int_hw : 1;
+ mmr_t reserved_2 : 1;
+ mmr_t dqrp_int_uc : 1;
+ mmr_t dqrp_int_cor : 1;
+ mmr_t dqrp_int_hw : 1;
+ mmr_t reserved_3 : 1;
+ mmr_t dqrs_int_uc : 1;
+ mmr_t dqrs_int_cor : 1;
+ mmr_t dqrs_int_hw : 1;
+ mmr_t reserved_4 : 1;
+ mmr_t pi_reply_overflow : 1;
+ mmr_t xn_reply_overflow : 1;
+ mmr_t pi_request_overflow : 1;
+ mmr_t xn_request_overflow : 1;
+ mmr_t red_black_err_timeout : 1;
+ mmr_t pi_pkt_size : 1;
+ mmr_t xn_pkt_size : 1;
+ mmr_t reserved_5 : 29;
+ } sh_mem_error_mask_s;
+} sh_mem_error_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+typedef union sh_x_dimm_cfg_u {
+ mmr_t sh_x_dimm_cfg_regval;
+ struct {
+ mmr_t dimm0_size : 3;
+ mmr_t dimm0_2bk : 1;
+ mmr_t dimm0_rev : 1;
+ mmr_t dimm0_cs : 2;
+ mmr_t reserved_0 : 1;
+ mmr_t dimm1_size : 3;
+ mmr_t dimm1_2bk : 1;
+ mmr_t dimm1_rev : 1;
+ mmr_t dimm1_cs : 2;
+ mmr_t reserved_1 : 1;
+ mmr_t dimm2_size : 3;
+ mmr_t dimm2_2bk : 1;
+ mmr_t dimm2_rev : 1;
+ mmr_t dimm2_cs : 2;
+ mmr_t reserved_2 : 1;
+ mmr_t dimm3_size : 3;
+ mmr_t dimm3_2bk : 1;
+ mmr_t dimm3_rev : 1;
+ mmr_t dimm3_cs : 2;
+ mmr_t reserved_3 : 1;
+ mmr_t freq : 4;
+ mmr_t reserved_4 : 28;
+ } sh_x_dimm_cfg_s;
+} sh_x_dimm_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+typedef union sh_y_dimm_cfg_u {
+ mmr_t sh_y_dimm_cfg_regval;
+ struct {
+ mmr_t dimm0_size : 3;
+ mmr_t dimm0_2bk : 1;
+ mmr_t dimm0_rev : 1;
+ mmr_t dimm0_cs : 2;
+ mmr_t reserved_0 : 1;
+ mmr_t dimm1_size : 3;
+ mmr_t dimm1_2bk : 1;
+ mmr_t dimm1_rev : 1;
+ mmr_t dimm1_cs : 2;
+ mmr_t reserved_1 : 1;
+ mmr_t dimm2_size : 3;
+ mmr_t dimm2_2bk : 1;
+ mmr_t dimm2_rev : 1;
+ mmr_t dimm2_cs : 2;
+ mmr_t reserved_2 : 1;
+ mmr_t dimm3_size : 3;
+ mmr_t dimm3_2bk : 1;
+ mmr_t dimm3_rev : 1;
+ mmr_t dimm3_cs : 2;
+ mmr_t reserved_3 : 1;
+ mmr_t freq : 4;
+ mmr_t reserved_4 : 28;
+ } sh_y_dimm_cfg_s;
+} sh_y_dimm_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_JNR_DIMM_CFG" */
+/* AC Mem Config Registers */
+/* ==================================================================== */
+
+typedef union sh_jnr_dimm_cfg_u {
+ mmr_t sh_jnr_dimm_cfg_regval;
+ struct {
+ mmr_t dimm0_size : 3;
+ mmr_t dimm0_2bk : 1;
+ mmr_t dimm0_rev : 1;
+ mmr_t dimm0_cs : 2;
+ mmr_t reserved_0 : 1;
+ mmr_t dimm1_size : 3;
+ mmr_t dimm1_2bk : 1;
+ mmr_t dimm1_rev : 1;
+ mmr_t dimm1_cs : 2;
+ mmr_t reserved_1 : 1;
+ mmr_t dimm2_size : 3;
+ mmr_t dimm2_2bk : 1;
+ mmr_t dimm2_rev : 1;
+ mmr_t dimm2_cs : 2;
+ mmr_t reserved_2 : 1;
+ mmr_t dimm3_size : 3;
+ mmr_t dimm3_2bk : 1;
+ mmr_t dimm3_rev : 1;
+ mmr_t dimm3_cs : 2;
+ mmr_t reserved_3 : 1;
+ mmr_t freq : 4;
+ mmr_t reserved_4 : 28;
+ } sh_jnr_dimm_cfg_s;
+} sh_jnr_dimm_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_PHASE_CFG" */
+/* AC Phase Config Registers */
+/* ==================================================================== */
+
+typedef union sh_x_phase_cfg_u {
+ mmr_t sh_x_phase_cfg_regval;
+ struct {
+ mmr_t ld_a : 5;
+ mmr_t ld_b : 5;
+ mmr_t dq_ld_a : 5;
+ mmr_t dq_ld_b : 5;
+ mmr_t hold : 5;
+ mmr_t hold_req : 5;
+ mmr_t add_cp : 5;
+ mmr_t bubble_en : 5;
+ mmr_t pha_bubble : 3;
+ mmr_t phb_bubble : 3;
+ mmr_t phc_bubble : 3;
+ mmr_t phd_bubble : 3;
+ mmr_t phe_bubble : 3;
+ mmr_t sel_a : 4;
+ mmr_t dq_sel_a : 4;
+ mmr_t reserved_0 : 1;
+ } sh_x_phase_cfg_s;
+} sh_x_phase_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+typedef union sh_x_cfg_u {
+ mmr_t sh_x_cfg_regval;
+ struct {
+ mmr_t mode_serial : 1;
+ mmr_t dirc_random_replacement : 1;
+ mmr_t dir_counter_init : 6;
+ mmr_t ta_dlys : 32;
+ mmr_t da_bb_clr : 4;
+ mmr_t dc_bb_clr : 4;
+ mmr_t wt_bb_clr : 4;
+ mmr_t sso_wt_en : 1;
+ mmr_t trcd2_en : 1;
+ mmr_t trcd4_en : 1;
+ mmr_t req_cntr_dis : 1;
+ mmr_t req_cntr_val : 6;
+ mmr_t inv_cas_addr : 1;
+ mmr_t clr_dir_cache : 1;
+ } sh_x_cfg_s;
+} sh_x_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_DQCT_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+typedef union sh_x_dqct_cfg_u {
+ mmr_t sh_x_dqct_cfg_regval;
+ struct {
+ mmr_t rd_sel : 4;
+ mmr_t wt_sel : 4;
+ mmr_t dta_rd_sel : 4;
+ mmr_t dta_wt_sel : 4;
+ mmr_t dir_rd_sel : 4;
+ mmr_t mdir_rd_sel : 4;
+ mmr_t reserved_0 : 40;
+ } sh_x_dqct_cfg_s;
+} sh_x_dqct_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_REFRESH_CONTROL" */
+/* Refresh Control Register */
+/* ==================================================================== */
+
+typedef union sh_x_refresh_control_u {
+ mmr_t sh_x_refresh_control_regval;
+ struct {
+ mmr_t enable : 8;
+ mmr_t interval : 9;
+ mmr_t hold : 6;
+ mmr_t interleave : 1;
+ mmr_t half_rate : 4;
+ mmr_t reserved_0 : 36;
+ } sh_x_refresh_control_s;
+} sh_x_refresh_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_PHASE_CFG" */
+/* AC Phase Config Registers */
+/* ==================================================================== */
+
+typedef union sh_y_phase_cfg_u {
+ mmr_t sh_y_phase_cfg_regval;
+ struct {
+ mmr_t ld_a : 5;
+ mmr_t ld_b : 5;
+ mmr_t dq_ld_a : 5;
+ mmr_t dq_ld_b : 5;
+ mmr_t hold : 5;
+ mmr_t hold_req : 5;
+ mmr_t add_cp : 5;
+ mmr_t bubble_en : 5;
+ mmr_t pha_bubble : 3;
+ mmr_t phb_bubble : 3;
+ mmr_t phc_bubble : 3;
+ mmr_t phd_bubble : 3;
+ mmr_t phe_bubble : 3;
+ mmr_t sel_a : 4;
+ mmr_t dq_sel_a : 4;
+ mmr_t reserved_0 : 1;
+ } sh_y_phase_cfg_s;
+} sh_y_phase_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+typedef union sh_y_cfg_u {
+ mmr_t sh_y_cfg_regval;
+ struct {
+ mmr_t mode_serial : 1;
+ mmr_t dirc_random_replacement : 1;
+ mmr_t dir_counter_init : 6;
+ mmr_t ta_dlys : 32;
+ mmr_t da_bb_clr : 4;
+ mmr_t dc_bb_clr : 4;
+ mmr_t wt_bb_clr : 4;
+ mmr_t sso_wt_en : 1;
+ mmr_t trcd2_en : 1;
+ mmr_t trcd4_en : 1;
+ mmr_t req_cntr_dis : 1;
+ mmr_t req_cntr_val : 6;
+ mmr_t inv_cas_addr : 1;
+ mmr_t clr_dir_cache : 1;
+ } sh_y_cfg_s;
+} sh_y_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_DQCT_CFG" */
+/* AC Config Registers */
+/* ==================================================================== */
+
+typedef union sh_y_dqct_cfg_u {
+ mmr_t sh_y_dqct_cfg_regval;
+ struct {
+ mmr_t rd_sel : 4;
+ mmr_t wt_sel : 4;
+ mmr_t dta_rd_sel : 4;
+ mmr_t dta_wt_sel : 4;
+ mmr_t dir_rd_sel : 4;
+ mmr_t mdir_rd_sel : 4;
+ mmr_t reserved_0 : 40;
+ } sh_y_dqct_cfg_s;
+} sh_y_dqct_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_REFRESH_CONTROL" */
+/* Refresh Control Register */
+/* ==================================================================== */
+
+typedef union sh_y_refresh_control_u {
+ mmr_t sh_y_refresh_control_regval;
+ struct {
+ mmr_t enable : 8;
+ mmr_t interval : 9;
+ mmr_t hold : 6;
+ mmr_t interleave : 1;
+ mmr_t half_rate : 4;
+ mmr_t reserved_0 : 36;
+ } sh_y_refresh_control_s;
+} sh_y_refresh_control_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_RED_BLACK" */
+/* MD fairness watchdog timers */
+/* ==================================================================== */
+
+typedef union sh_mem_red_black_u {
+ mmr_t sh_mem_red_black_regval;
+ struct {
+ mmr_t time : 16;
+ mmr_t err_time : 36;
+ mmr_t reserved_0 : 12;
+ } sh_mem_red_black_s;
+} sh_mem_red_black_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MISC_MEM_CFG" */
+/* ==================================================================== */
+
+typedef union sh_misc_mem_cfg_u {
+ mmr_t sh_misc_mem_cfg_regval;
+ struct {
+ mmr_t express_header_enable : 1;
+ mmr_t spec_header_enable : 1;
+ mmr_t jnr_bypass_enable : 1;
+ mmr_t xn_rd_same_as_pi : 1;
+ mmr_t low_write_buffer_threshold : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t low_victim_buffer_threshold : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t throttle_cnt : 8;
+ mmr_t disabled_read_tnums : 5;
+ mmr_t reserved_2 : 3;
+ mmr_t disabled_write_tnums : 5;
+ mmr_t reserved_3 : 3;
+ mmr_t disabled_victims : 6;
+ mmr_t reserved_4 : 2;
+ mmr_t alternate_xn_rp_plane : 1;
+ mmr_t reserved_5 : 11;
+ } sh_misc_mem_cfg_s;
+} sh_misc_mem_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PIO_RQ_CRD_CTL" */
+/* pio_rq Credit Circulation Control */
+/* ==================================================================== */
+
+typedef union sh_pio_rq_crd_ctl_u {
+ mmr_t sh_pio_rq_crd_ctl_regval;
+ struct {
+ mmr_t depth : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pio_rq_crd_ctl_s;
+} sh_pio_rq_crd_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD_RQ_CRD_CTL" */
+/* pi_md_rq Credit Circulation Control */
+/* ==================================================================== */
+
+typedef union sh_pi_md_rq_crd_ctl_u {
+ mmr_t sh_pi_md_rq_crd_ctl_regval;
+ struct {
+ mmr_t depth : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_md_rq_crd_ctl_s;
+} sh_pi_md_rq_crd_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_PI_MD_RP_CRD_CTL" */
+/* pi_md_rp Credit Circulation Control */
+/* ==================================================================== */
+
+typedef union sh_pi_md_rp_crd_ctl_u {
+ mmr_t sh_pi_md_rp_crd_ctl_regval;
+ struct {
+ mmr_t depth : 6;
+ mmr_t reserved_0 : 58;
+ } sh_pi_md_rp_crd_ctl_s;
+} sh_pi_md_rp_crd_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_RQ_CRD_CTL" */
+/* xn_md_rq Credit Circulation Control */
+/* ==================================================================== */
+
+typedef union sh_xn_md_rq_crd_ctl_u {
+ mmr_t sh_xn_md_rq_crd_ctl_regval;
+ struct {
+ mmr_t depth : 6;
+ mmr_t reserved_0 : 58;
+ } sh_xn_md_rq_crd_ctl_s;
+} sh_xn_md_rq_crd_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_XN_MD_RP_CRD_CTL" */
+/* xn_md_rp Credit Circulation Control */
+/* ==================================================================== */
+
+typedef union sh_xn_md_rp_crd_ctl_u {
+ mmr_t sh_xn_md_rp_crd_ctl_regval;
+ struct {
+ mmr_t depth : 6;
+ mmr_t reserved_0 : 58;
+ } sh_xn_md_rp_crd_ctl_s;
+} sh_xn_md_rp_crd_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG0" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag0_u {
+ mmr_t sh_x_tag0_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag0_s;
+} sh_x_tag0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG1" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag1_u {
+ mmr_t sh_x_tag1_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag1_s;
+} sh_x_tag1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG2" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag2_u {
+ mmr_t sh_x_tag2_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag2_s;
+} sh_x_tag2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG3" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag3_u {
+ mmr_t sh_x_tag3_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag3_s;
+} sh_x_tag3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG4" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag4_u {
+ mmr_t sh_x_tag4_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag4_s;
+} sh_x_tag4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG5" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag5_u {
+ mmr_t sh_x_tag5_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag5_s;
+} sh_x_tag5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG6" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag6_u {
+ mmr_t sh_x_tag6_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag6_s;
+} sh_x_tag6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG7" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_x_tag7_u {
+ mmr_t sh_x_tag7_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_x_tag7_s;
+} sh_x_tag7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG0" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag0_u {
+ mmr_t sh_y_tag0_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag0_s;
+} sh_y_tag0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG1" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag1_u {
+ mmr_t sh_y_tag1_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag1_s;
+} sh_y_tag1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG2" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag2_u {
+ mmr_t sh_y_tag2_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag2_s;
+} sh_y_tag2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG3" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag3_u {
+ mmr_t sh_y_tag3_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag3_s;
+} sh_y_tag3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG4" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag4_u {
+ mmr_t sh_y_tag4_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag4_s;
+} sh_y_tag4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG5" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag5_u {
+ mmr_t sh_y_tag5_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag5_s;
+} sh_y_tag5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG6" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag6_u {
+ mmr_t sh_y_tag6_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag6_s;
+} sh_y_tag6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG7" */
+/* AC tag Registers */
+/* ==================================================================== */
+
+typedef union sh_y_tag7_u {
+ mmr_t sh_y_tag7_regval;
+ struct {
+ mmr_t tag : 20;
+ mmr_t reserved_0 : 44;
+ } sh_y_tag7_s;
+} sh_y_tag7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_BASE" */
+/* mmr/bist base address */
+/* ==================================================================== */
+
+typedef union sh_mmrbist_base_u {
+ mmr_t sh_mmrbist_base_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t dword_addr : 47;
+ mmr_t reserved_1 : 14;
+ } sh_mmrbist_base_s;
+} sh_mmrbist_base_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MMRBIST_CTL" */
+/* mmr/bist control */
+/* ==================================================================== */
+
+typedef union sh_mmrbist_ctl_u {
+ mmr_t sh_mmrbist_ctl_regval;
+ struct {
+ mmr_t block_length : 31;
+ mmr_t reserved_0 : 1;
+ mmr_t cmd : 7;
+ mmr_t reserved_1 : 1;
+ mmr_t in_progress : 1;
+ mmr_t fail : 1;
+ mmr_t mem_idle : 1;
+ mmr_t reserved_2 : 1;
+ mmr_t reset_state : 1;
+ mmr_t reserved_3 : 19;
+ } sh_mmrbist_ctl_s;
+} sh_mmrbist_ctl_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_DATA_CFG" */
+/* configuration for md debug data muxes */
+/* ==================================================================== */
+
+typedef union sh_md_dbug_data_cfg_u {
+ mmr_t sh_md_dbug_data_cfg_regval;
+ struct {
+ mmr_t nibble0_chiplet : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble : 3;
+ mmr_t reserved_15 : 1;
+ } sh_md_dbug_data_cfg_s;
+} sh_md_dbug_data_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_TRIGGER_CFG" */
+/* configuration for md debug triggers */
+/* ==================================================================== */
+
+typedef union sh_md_dbug_trigger_cfg_u {
+ mmr_t sh_md_dbug_trigger_cfg_regval;
+ struct {
+ mmr_t nibble0_chiplet : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble0_nibble : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble1_chiplet : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble1_nibble : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble2_chiplet : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble2_nibble : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble3_chiplet : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble3_nibble : 3;
+ mmr_t reserved_7 : 1;
+ mmr_t nibble4_chiplet : 3;
+ mmr_t reserved_8 : 1;
+ mmr_t nibble4_nibble : 3;
+ mmr_t reserved_9 : 1;
+ mmr_t nibble5_chiplet : 3;
+ mmr_t reserved_10 : 1;
+ mmr_t nibble5_nibble : 3;
+ mmr_t reserved_11 : 1;
+ mmr_t nibble6_chiplet : 3;
+ mmr_t reserved_12 : 1;
+ mmr_t nibble6_nibble : 3;
+ mmr_t reserved_13 : 1;
+ mmr_t nibble7_chiplet : 3;
+ mmr_t reserved_14 : 1;
+ mmr_t nibble7_nibble : 3;
+ mmr_t enable : 1;
+ } sh_md_dbug_trigger_cfg_s;
+} sh_md_dbug_trigger_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DBUG_COMPARE" */
+/* md debug compare pattern and mask */
+/* ==================================================================== */
+
+typedef union sh_md_dbug_compare_u {
+ mmr_t sh_md_dbug_compare_regval;
+ struct {
+ mmr_t pattern : 32;
+ mmr_t mask : 32;
+ } sh_md_dbug_compare_s;
+} sh_md_dbug_compare_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_MOD_DBUG_SEL" */
+/* MD acx debug select */
+/* ==================================================================== */
+
+typedef union sh_x_mod_dbug_sel_u {
+ mmr_t sh_x_mod_dbug_sel_regval;
+ struct {
+ mmr_t tag_sel : 8;
+ mmr_t wbq_sel : 8;
+ mmr_t arb_sel : 8;
+ mmr_t atl_sel : 11;
+ mmr_t atr_sel : 11;
+ mmr_t dql_sel : 6;
+ mmr_t dqr_sel : 6;
+ mmr_t reserved_0 : 6;
+ } sh_x_mod_dbug_sel_s;
+} sh_x_mod_dbug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_DBUG_SEL" */
+/* MD acx debug select */
+/* ==================================================================== */
+
+typedef union sh_x_dbug_sel_u {
+ mmr_t sh_x_dbug_sel_regval;
+ struct {
+ mmr_t dbg_sel : 24;
+ mmr_t reserved_0 : 40;
+ } sh_x_dbug_sel_s;
+} sh_x_dbug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_LADDR_CMP" */
+/* MD acx address compare */
+/* ==================================================================== */
+
+typedef union sh_x_laddr_cmp_u {
+ mmr_t sh_x_laddr_cmp_regval;
+ struct {
+ mmr_t cmp_val : 28;
+ mmr_t reserved_0 : 4;
+ mmr_t mask_val : 28;
+ mmr_t reserved_1 : 4;
+ } sh_x_laddr_cmp_s;
+} sh_x_laddr_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_RADDR_CMP" */
+/* MD acx address compare */
+/* ==================================================================== */
+
+typedef union sh_x_raddr_cmp_u {
+ mmr_t sh_x_raddr_cmp_regval;
+ struct {
+ mmr_t cmp_val : 28;
+ mmr_t reserved_0 : 4;
+ mmr_t mask_val : 28;
+ mmr_t reserved_1 : 4;
+ } sh_x_raddr_cmp_s;
+} sh_x_raddr_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG_CMP" */
+/* MD acx tagmgr compare */
+/* ==================================================================== */
+
+typedef union sh_x_tag_cmp_u {
+ mmr_t sh_x_tag_cmp_regval;
+ struct {
+ mmr_t cmd : 8;
+ mmr_t addr : 33;
+ mmr_t src : 14;
+ mmr_t reserved_0 : 9;
+ } sh_x_tag_cmp_s;
+} sh_x_tag_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_X_TAG_MASK" */
+/* MD acx tagmgr mask */
+/* ==================================================================== */
+
+typedef union sh_x_tag_mask_u {
+ mmr_t sh_x_tag_mask_regval;
+ struct {
+ mmr_t cmd : 8;
+ mmr_t addr : 33;
+ mmr_t src : 14;
+ mmr_t reserved_0 : 9;
+ } sh_x_tag_mask_s;
+} sh_x_tag_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_MOD_DBUG_SEL" */
+/* MD acy debug select */
+/* ==================================================================== */
+
+typedef union sh_y_mod_dbug_sel_u {
+ mmr_t sh_y_mod_dbug_sel_regval;
+ struct {
+ mmr_t tag_sel : 8;
+ mmr_t wbq_sel : 8;
+ mmr_t arb_sel : 8;
+ mmr_t atl_sel : 11;
+ mmr_t atr_sel : 11;
+ mmr_t dql_sel : 6;
+ mmr_t dqr_sel : 6;
+ mmr_t reserved_0 : 6;
+ } sh_y_mod_dbug_sel_s;
+} sh_y_mod_dbug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_DBUG_SEL" */
+/* MD acy debug select */
+/* ==================================================================== */
+
+typedef union sh_y_dbug_sel_u {
+ mmr_t sh_y_dbug_sel_regval;
+ struct {
+ mmr_t dbg_sel : 24;
+ mmr_t reserved_0 : 40;
+ } sh_y_dbug_sel_s;
+} sh_y_dbug_sel_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_LADDR_CMP" */
+/* MD acy address compare */
+/* ==================================================================== */
+
+typedef union sh_y_laddr_cmp_u {
+ mmr_t sh_y_laddr_cmp_regval;
+ struct {
+ mmr_t cmp_val : 28;
+ mmr_t reserved_0 : 4;
+ mmr_t mask_val : 28;
+ mmr_t reserved_1 : 4;
+ } sh_y_laddr_cmp_s;
+} sh_y_laddr_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_RADDR_CMP" */
+/* MD acy address compare */
+/* ==================================================================== */
+
+typedef union sh_y_raddr_cmp_u {
+ mmr_t sh_y_raddr_cmp_regval;
+ struct {
+ mmr_t cmp_val : 28;
+ mmr_t reserved_0 : 4;
+ mmr_t mask_val : 28;
+ mmr_t reserved_1 : 4;
+ } sh_y_raddr_cmp_s;
+} sh_y_raddr_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG_CMP" */
+/* MD acy tagmgr compare */
+/* ==================================================================== */
+
+typedef union sh_y_tag_cmp_u {
+ mmr_t sh_y_tag_cmp_regval;
+ struct {
+ mmr_t cmd : 8;
+ mmr_t addr : 33;
+ mmr_t src : 14;
+ mmr_t reserved_0 : 9;
+ } sh_y_tag_cmp_s;
+} sh_y_tag_cmp_u_t;
+
+/* ==================================================================== */
+/* Register "SH_Y_TAG_MASK" */
+/* MD acy tagmgr mask */
+/* ==================================================================== */
+
+typedef union sh_y_tag_mask_u {
+ mmr_t sh_y_tag_mask_regval;
+ struct {
+ mmr_t cmd : 8;
+ mmr_t addr : 33;
+ mmr_t src : 14;
+ mmr_t reserved_0 : 9;
+ } sh_y_tag_mask_s;
+} sh_y_tag_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_JNR_DBUG_DATA_CFG" */
+/* configuration for md jnr debug data muxes */
+/* ==================================================================== */
+
+typedef union sh_md_jnr_dbug_data_cfg_u {
+ mmr_t sh_md_jnr_dbug_data_cfg_regval;
+ struct {
+ mmr_t nibble0_sel : 3;
+ mmr_t reserved_0 : 1;
+ mmr_t nibble1_sel : 3;
+ mmr_t reserved_1 : 1;
+ mmr_t nibble2_sel : 3;
+ mmr_t reserved_2 : 1;
+ mmr_t nibble3_sel : 3;
+ mmr_t reserved_3 : 1;
+ mmr_t nibble4_sel : 3;
+ mmr_t reserved_4 : 1;
+ mmr_t nibble5_sel : 3;
+ mmr_t reserved_5 : 1;
+ mmr_t nibble6_sel : 3;
+ mmr_t reserved_6 : 1;
+ mmr_t nibble7_sel : 3;
+ mmr_t reserved_7 : 33;
+ } sh_md_jnr_dbug_data_cfg_s;
+} sh_md_jnr_dbug_data_cfg_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_LAST_CREDIT" */
+/* captures last credit values on reset */
+/* ==================================================================== */
+
+typedef union sh_md_last_credit_u {
+ mmr_t sh_md_last_credit_regval;
+ struct {
+ mmr_t rq_to_pi : 6;
+ mmr_t reserved_0 : 2;
+ mmr_t rp_to_pi : 6;
+ mmr_t reserved_1 : 2;
+ mmr_t rq_to_xn : 6;
+ mmr_t reserved_2 : 2;
+ mmr_t rp_to_xn : 6;
+ mmr_t reserved_3 : 2;
+ mmr_t to_lb : 6;
+ mmr_t reserved_4 : 26;
+ } sh_md_last_credit_s;
+} sh_md_last_credit_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_ADDR" */
+/* Address capture address register */
+/* ==================================================================== */
+
+typedef union sh_mem_capture_addr_u {
+ mmr_t sh_mem_capture_addr_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t reserved_1 : 20;
+ } sh_mem_capture_addr_s;
+} sh_mem_capture_addr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_MASK" */
+/* Address capture mask register */
+/* ==================================================================== */
+
+typedef union sh_mem_capture_mask_u {
+ mmr_t sh_mem_capture_mask_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t enable_local : 1;
+ mmr_t enable_remote : 1;
+ mmr_t reserved_1 : 18;
+ } sh_mem_capture_mask_s;
+} sh_mem_capture_mask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MEM_CAPTURE_HDR" */
+/* Address capture header register */
+/* ==================================================================== */
+
+typedef union sh_mem_capture_hdr_u {
+ mmr_t sh_mem_capture_hdr_regval;
+ struct {
+ mmr_t reserved_0 : 3;
+ mmr_t addr : 33;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t cntr : 6;
+ } sh_mem_capture_hdr_s;
+} sh_mem_capture_hdr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_CONFIG" */
+/* DQ directory config register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_config_u {
+ mmr_t sh_md_dqlp_mmr_dir_config_regval;
+ struct {
+ mmr_t sys_size : 3;
+ mmr_t en_direcc : 1;
+ mmr_t en_dirpois : 1;
+ mmr_t reserved_0 : 59;
+ } sh_md_dqlp_mmr_dir_config_s;
+} sh_md_dqlp_mmr_dir_config_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC0" */
+/* node [63:0] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_presvec0_u {
+ mmr_t sh_md_dqlp_mmr_dir_presvec0_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_presvec0_s;
+} sh_md_dqlp_mmr_dir_presvec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC1" */
+/* node [127:64] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_presvec1_u {
+ mmr_t sh_md_dqlp_mmr_dir_presvec1_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_presvec1_s;
+} sh_md_dqlp_mmr_dir_presvec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC2" */
+/* node [191:128] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_presvec2_u {
+ mmr_t sh_md_dqlp_mmr_dir_presvec2_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_presvec2_s;
+} sh_md_dqlp_mmr_dir_presvec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRESVEC3" */
+/* node [255:192] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_presvec3_u {
+ mmr_t sh_md_dqlp_mmr_dir_presvec3_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_presvec3_s;
+} sh_md_dqlp_mmr_dir_presvec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC0" */
+/* local vector for acc=0 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec0_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec0_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec0_s;
+} sh_md_dqlp_mmr_dir_locvec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC1" */
+/* local vector for acc=1 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec1_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec1_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec1_s;
+} sh_md_dqlp_mmr_dir_locvec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC2" */
+/* local vector for acc=2 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec2_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec2_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec2_s;
+} sh_md_dqlp_mmr_dir_locvec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC3" */
+/* local vector for acc=3 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec3_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec3_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec3_s;
+} sh_md_dqlp_mmr_dir_locvec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC4" */
+/* local vector for acc=4 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec4_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec4_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec4_s;
+} sh_md_dqlp_mmr_dir_locvec4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC5" */
+/* local vector for acc=5 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec5_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec5_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec5_s;
+} sh_md_dqlp_mmr_dir_locvec5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC6" */
+/* local vector for acc=6 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec6_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec6_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec6_s;
+} sh_md_dqlp_mmr_dir_locvec6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_LOCVEC7" */
+/* local vector for acc=7 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_locvec7_u {
+ mmr_t sh_md_dqlp_mmr_dir_locvec7_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqlp_mmr_dir_locvec7_s;
+} sh_md_dqlp_mmr_dir_locvec7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC0" */
+/* privilege vector for acc=0 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec0_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec0_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec0_s;
+} sh_md_dqlp_mmr_dir_privec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC1" */
+/* privilege vector for acc=1 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec1_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec1_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec1_s;
+} sh_md_dqlp_mmr_dir_privec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC2" */
+/* privilege vector for acc=2 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec2_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec2_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec2_s;
+} sh_md_dqlp_mmr_dir_privec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC3" */
+/* privilege vector for acc=3 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec3_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec3_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec3_s;
+} sh_md_dqlp_mmr_dir_privec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC4" */
+/* privilege vector for acc=4 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec4_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec4_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec4_s;
+} sh_md_dqlp_mmr_dir_privec4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC5" */
+/* privilege vector for acc=5 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec5_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec5_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec5_s;
+} sh_md_dqlp_mmr_dir_privec5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC6" */
+/* privilege vector for acc=6 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec6_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec6_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec6_s;
+} sh_md_dqlp_mmr_dir_privec6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_PRIVEC7" */
+/* privilege vector for acc=7 */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_privec7_u {
+ mmr_t sh_md_dqlp_mmr_dir_privec7_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqlp_mmr_dir_privec7_s;
+} sh_md_dqlp_mmr_dir_privec7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TIMER" */
+/* MD SXRO timer */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_timer_u {
+ mmr_t sh_md_dqlp_mmr_dir_timer_regval;
+ struct {
+ mmr_t timer_div : 12;
+ mmr_t timer_en : 1;
+ mmr_t timer_cur : 9;
+ mmr_t reserved_0 : 42;
+ } sh_md_dqlp_mmr_dir_timer_s;
+} sh_md_dqlp_mmr_dir_timer_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ENTRY" */
+/* directory pio write data */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_piowd_dir_entry_u {
+ mmr_t sh_md_dqlp_mmr_piowd_dir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t reserved_0 : 6;
+ } sh_md_dqlp_mmr_piowd_dir_entry_s;
+} sh_md_dqlp_mmr_piowd_dir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_PIOWD_DIR_ECC" */
+/* directory ecc register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_piowd_dir_ecc_u {
+ mmr_t sh_md_dqlp_mmr_piowd_dir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqlp_mmr_piowd_dir_ecc_s;
+} sh_md_dqlp_mmr_piowd_dir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ENTRY" */
+/* x directory pio read data */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xpiord_xdir_entry_u {
+ mmr_t sh_md_dqlp_mmr_xpiord_xdir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t reserved_0 : 4;
+ } sh_md_dqlp_mmr_xpiord_xdir_entry_s;
+} sh_md_dqlp_mmr_xpiord_xdir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPIORD_XDIR_ECC" */
+/* x directory ecc */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xpiord_xdir_ecc_u {
+ mmr_t sh_md_dqlp_mmr_xpiord_xdir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqlp_mmr_xpiord_xdir_ecc_s;
+} sh_md_dqlp_mmr_xpiord_xdir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ENTRY" */
+/* y directory pio read data */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ypiord_ydir_entry_u {
+ mmr_t sh_md_dqlp_mmr_ypiord_ydir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t reserved_0 : 4;
+ } sh_md_dqlp_mmr_ypiord_ydir_entry_s;
+} sh_md_dqlp_mmr_ypiord_ydir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPIORD_YDIR_ECC" */
+/* y directory ecc */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ypiord_ydir_ecc_u {
+ mmr_t sh_md_dqlp_mmr_ypiord_ydir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqlp_mmr_ypiord_ydir_ecc_s;
+} sh_md_dqlp_mmr_ypiord_ydir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xcerr1_u {
+ mmr_t sh_md_dqlp_mmr_xcerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqlp_mmr_xcerr1_s;
+} sh_md_dqlp_mmr_xcerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xcerr2_u {
+ mmr_t sh_md_dqlp_mmr_xcerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqlp_mmr_xcerr2_s;
+} sh_md_dqlp_mmr_xcerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xuerr1_u {
+ mmr_t sh_md_dqlp_mmr_xuerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqlp_mmr_xuerr1_s;
+} sh_md_dqlp_mmr_xuerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xuerr2_u {
+ mmr_t sh_md_dqlp_mmr_xuerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqlp_mmr_xuerr2_s;
+} sh_md_dqlp_mmr_xuerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xperr_u {
+ mmr_t sh_md_dqlp_mmr_xperr_regval;
+ struct {
+ mmr_t dir : 26;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t prige : 1;
+ mmr_t priv : 1;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t mybit : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 1;
+ } sh_md_dqlp_mmr_xperr_s;
+} sh_md_dqlp_mmr_xperr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ycerr1_u {
+ mmr_t sh_md_dqlp_mmr_ycerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqlp_mmr_ycerr1_s;
+} sh_md_dqlp_mmr_ycerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ycerr2_u {
+ mmr_t sh_md_dqlp_mmr_ycerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqlp_mmr_ycerr2_s;
+} sh_md_dqlp_mmr_ycerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_yuerr1_u {
+ mmr_t sh_md_dqlp_mmr_yuerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqlp_mmr_yuerr1_s;
+} sh_md_dqlp_mmr_yuerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_yuerr2_u {
+ mmr_t sh_md_dqlp_mmr_yuerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqlp_mmr_yuerr2_s;
+} sh_md_dqlp_mmr_yuerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_yperr_u {
+ mmr_t sh_md_dqlp_mmr_yperr_regval;
+ struct {
+ mmr_t dir : 26;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t prige : 1;
+ mmr_t priv : 1;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t mybit : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 1;
+ } sh_md_dqlp_mmr_yperr_s;
+} sh_md_dqlp_mmr_yperr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_CMDTRIG" */
+/* cmd triggers */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_cmdtrig_u {
+ mmr_t sh_md_dqlp_mmr_dir_cmdtrig_regval;
+ struct {
+ mmr_t cmd0 : 8;
+ mmr_t cmd1 : 8;
+ mmr_t cmd2 : 8;
+ mmr_t cmd3 : 8;
+ mmr_t reserved_0 : 32;
+ } sh_md_dqlp_mmr_dir_cmdtrig_s;
+} sh_md_dqlp_mmr_dir_cmdtrig_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TBLTRIG" */
+/* dir table trigger */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_tbltrig_u {
+ mmr_t sh_md_dqlp_mmr_dir_tbltrig_regval;
+ struct {
+ mmr_t src : 14;
+ mmr_t cmd : 8;
+ mmr_t acc : 2;
+ mmr_t prige : 1;
+ mmr_t dirst : 9;
+ mmr_t mybit : 8;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqlp_mmr_dir_tbltrig_s;
+} sh_md_dqlp_mmr_dir_tbltrig_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_DIR_TBLMASK" */
+/* dir table trigger mask */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_dir_tblmask_u {
+ mmr_t sh_md_dqlp_mmr_dir_tblmask_regval;
+ struct {
+ mmr_t src : 14;
+ mmr_t cmd : 8;
+ mmr_t acc : 2;
+ mmr_t prige : 1;
+ mmr_t dirst : 9;
+ mmr_t mybit : 8;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqlp_mmr_dir_tblmask_s;
+} sh_md_dqlp_mmr_dir_tblmask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xbist_h_u {
+ mmr_t sh_md_dqlp_mmr_xbist_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_1 : 21;
+ } sh_md_dqlp_mmr_xbist_h_s;
+} sh_md_dqlp_mmr_xbist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xbist_l_u {
+ mmr_t sh_md_dqlp_mmr_xbist_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_xbist_l_s;
+} sh_md_dqlp_mmr_xbist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xbist_err_h_u {
+ mmr_t sh_md_dqlp_mmr_xbist_err_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_xbist_err_h_s;
+} sh_md_dqlp_mmr_xbist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_xbist_err_l_u {
+ mmr_t sh_md_dqlp_mmr_xbist_err_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_xbist_err_l_s;
+} sh_md_dqlp_mmr_xbist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ybist_h_u {
+ mmr_t sh_md_dqlp_mmr_ybist_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_1 : 21;
+ } sh_md_dqlp_mmr_ybist_h_s;
+} sh_md_dqlp_mmr_ybist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ybist_l_u {
+ mmr_t sh_md_dqlp_mmr_ybist_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_ybist_l_s;
+} sh_md_dqlp_mmr_ybist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ybist_err_h_u {
+ mmr_t sh_md_dqlp_mmr_ybist_err_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_ybist_err_h_s;
+} sh_md_dqlp_mmr_ybist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLP_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqlp_mmr_ybist_err_l_u {
+ mmr_t sh_md_dqlp_mmr_ybist_err_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqlp_mmr_ybist_err_l_s;
+} sh_md_dqlp_mmr_ybist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_xbist_h_u {
+ mmr_t sh_md_dqls_mmr_xbist_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 21;
+ } sh_md_dqls_mmr_xbist_h_s;
+} sh_md_dqls_mmr_xbist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_xbist_l_u {
+ mmr_t sh_md_dqls_mmr_xbist_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_xbist_l_s;
+} sh_md_dqls_mmr_xbist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_xbist_err_h_u {
+ mmr_t sh_md_dqls_mmr_xbist_err_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_xbist_err_h_s;
+} sh_md_dqls_mmr_xbist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_xbist_err_l_u {
+ mmr_t sh_md_dqls_mmr_xbist_err_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_xbist_err_l_s;
+} sh_md_dqls_mmr_xbist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_ybist_h_u {
+ mmr_t sh_md_dqls_mmr_ybist_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 21;
+ } sh_md_dqls_mmr_ybist_h_s;
+} sh_md_dqls_mmr_ybist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_ybist_l_u {
+ mmr_t sh_md_dqls_mmr_ybist_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_ybist_l_s;
+} sh_md_dqls_mmr_ybist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_ybist_err_h_u {
+ mmr_t sh_md_dqls_mmr_ybist_err_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_ybist_err_h_s;
+} sh_md_dqls_mmr_ybist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_ybist_err_l_u {
+ mmr_t sh_md_dqls_mmr_ybist_err_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqls_mmr_ybist_err_l_s;
+} sh_md_dqls_mmr_ybist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_JNR_DEBUG" */
+/* joiner/fct debug configuration */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_jnr_debug_u {
+ mmr_t sh_md_dqls_mmr_jnr_debug_regval;
+ struct {
+ mmr_t px : 1;
+ mmr_t rw : 1;
+ mmr_t reserved_0 : 62;
+ } sh_md_dqls_mmr_jnr_debug_s;
+} sh_md_dqls_mmr_jnr_debug_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQLS_MMR_XAMOPW_ERR" */
+/* amo/partial rmw ecc error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqls_mmr_xamopw_err_u {
+ mmr_t sh_md_dqls_mmr_xamopw_err_regval;
+ struct {
+ mmr_t ssyn : 8;
+ mmr_t scor : 1;
+ mmr_t sunc : 1;
+ mmr_t reserved_0 : 6;
+ mmr_t rsyn : 8;
+ mmr_t rcor : 1;
+ mmr_t runc : 1;
+ mmr_t reserved_1 : 6;
+ mmr_t arm : 1;
+ mmr_t reserved_2 : 31;
+ } sh_md_dqls_mmr_xamopw_err_s;
+} sh_md_dqls_mmr_xamopw_err_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_CONFIG" */
+/* DQ directory config register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_config_u {
+ mmr_t sh_md_dqrp_mmr_dir_config_regval;
+ struct {
+ mmr_t sys_size : 3;
+ mmr_t en_direcc : 1;
+ mmr_t en_dirpois : 1;
+ mmr_t reserved_0 : 59;
+ } sh_md_dqrp_mmr_dir_config_s;
+} sh_md_dqrp_mmr_dir_config_u_t;
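+
+/* ==================================================================== */
+/* Usage sketch (illustration only, not part of the generated          */
+/* definitions): each union above pairs the raw 64-bit MMR word with a */
+/* bit-field view, so a register image can be composed field by field  */
+/* and then read or written as a single mmr_t. The helper below is a   */
+/* hypothetical example built on locally renamed stand-ins for mmr_t   */
+/* and the DIR_CONFIG union; note that C bit-field layout is           */
+/* implementation-defined, so the real header relies on the compiler   */
+/* ABI for this target.                                                */
+/* ==================================================================== */
+
+/* local stand-ins so this sketch is self-contained */
+typedef unsigned long long sh_example_mmr_t;
+
+typedef union sh_example_dir_config_u {
+	sh_example_mmr_t regval;
+	struct {
+		sh_example_mmr_t sys_size   : 3;
+		sh_example_mmr_t en_direcc  : 1;
+		sh_example_mmr_t en_dirpois : 1;
+		sh_example_mmr_t reserved_0 : 59;
+	} s;
+} sh_example_dir_config_u_t;
+
+/* Pack the fields of a DIR_CONFIG-style image into one 64-bit word,
+ * ready for a whole-word MMR write. */
+static inline sh_example_mmr_t
+sh_example_pack_dir_config(unsigned sys_size, unsigned en_direcc,
+			   unsigned en_dirpois)
+{
+	sh_example_dir_config_u_t u;
+
+	u.regval = 0;			/* clear reserved bits first */
+	u.s.sys_size   = sys_size;
+	u.s.en_direcc  = en_direcc;
+	u.s.en_dirpois = en_dirpois;
+	return u.regval;
+}
+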
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC0" */
+/* node [63:0] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_presvec0_u {
+ mmr_t sh_md_dqrp_mmr_dir_presvec0_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_presvec0_s;
+} sh_md_dqrp_mmr_dir_presvec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC1" */
+/* node [127:64] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_presvec1_u {
+ mmr_t sh_md_dqrp_mmr_dir_presvec1_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_presvec1_s;
+} sh_md_dqrp_mmr_dir_presvec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC2" */
+/* node [191:128] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_presvec2_u {
+ mmr_t sh_md_dqrp_mmr_dir_presvec2_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_presvec2_s;
+} sh_md_dqrp_mmr_dir_presvec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRESVEC3" */
+/* node [255:192] presence bits */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_presvec3_u {
+ mmr_t sh_md_dqrp_mmr_dir_presvec3_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_presvec3_s;
+} sh_md_dqrp_mmr_dir_presvec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC0" */
+/* local vector for acc=0 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec0_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec0_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec0_s;
+} sh_md_dqrp_mmr_dir_locvec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC1" */
+/* local vector for acc=1 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec1_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec1_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec1_s;
+} sh_md_dqrp_mmr_dir_locvec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC2" */
+/* local vector for acc=2 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec2_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec2_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec2_s;
+} sh_md_dqrp_mmr_dir_locvec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC3" */
+/* local vector for acc=3 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec3_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec3_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec3_s;
+} sh_md_dqrp_mmr_dir_locvec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC4" */
+/* local vector for acc=4 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec4_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec4_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec4_s;
+} sh_md_dqrp_mmr_dir_locvec4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC5" */
+/* local vector for acc=5 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec5_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec5_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec5_s;
+} sh_md_dqrp_mmr_dir_locvec5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC6" */
+/* local vector for acc=6 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec6_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec6_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec6_s;
+} sh_md_dqrp_mmr_dir_locvec6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_LOCVEC7" */
+/* local vector for acc=7 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_locvec7_u {
+ mmr_t sh_md_dqrp_mmr_dir_locvec7_regval;
+ struct {
+ mmr_t vec : 64;
+ } sh_md_dqrp_mmr_dir_locvec7_s;
+} sh_md_dqrp_mmr_dir_locvec7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC0" */
+/* privilege vector for acc=0 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec0_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec0_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec0_s;
+} sh_md_dqrp_mmr_dir_privec0_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC1" */
+/* privilege vector for acc=1 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec1_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec1_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec1_s;
+} sh_md_dqrp_mmr_dir_privec1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC2" */
+/* privilege vector for acc=2 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec2_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec2_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec2_s;
+} sh_md_dqrp_mmr_dir_privec2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC3" */
+/* privilege vector for acc=3 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec3_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec3_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec3_s;
+} sh_md_dqrp_mmr_dir_privec3_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC4" */
+/* privilege vector for acc=4 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec4_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec4_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec4_s;
+} sh_md_dqrp_mmr_dir_privec4_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC5" */
+/* privilege vector for acc=5 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec5_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec5_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec5_s;
+} sh_md_dqrp_mmr_dir_privec5_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC6" */
+/* privilege vector for acc=6 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec6_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec6_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec6_s;
+} sh_md_dqrp_mmr_dir_privec6_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_PRIVEC7" */
+/* privilege vector for acc=7 */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_privec7_u {
+ mmr_t sh_md_dqrp_mmr_dir_privec7_regval;
+ struct {
+ mmr_t in : 14;
+ mmr_t out : 14;
+ mmr_t reserved_0 : 36;
+ } sh_md_dqrp_mmr_dir_privec7_s;
+} sh_md_dqrp_mmr_dir_privec7_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TIMER" */
+/* MD SXRO timer */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_timer_u {
+ mmr_t sh_md_dqrp_mmr_dir_timer_regval;
+ struct {
+ mmr_t timer_div : 12;
+ mmr_t timer_en : 1;
+ mmr_t timer_cur : 9;
+ mmr_t reserved_0 : 42;
+ } sh_md_dqrp_mmr_dir_timer_s;
+} sh_md_dqrp_mmr_dir_timer_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ENTRY" */
+/* directory pio write data */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_piowd_dir_entry_u {
+ mmr_t sh_md_dqrp_mmr_piowd_dir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t reserved_0 : 6;
+ } sh_md_dqrp_mmr_piowd_dir_entry_s;
+} sh_md_dqrp_mmr_piowd_dir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_PIOWD_DIR_ECC" */
+/* directory ecc register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_piowd_dir_ecc_u {
+ mmr_t sh_md_dqrp_mmr_piowd_dir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqrp_mmr_piowd_dir_ecc_s;
+} sh_md_dqrp_mmr_piowd_dir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ENTRY" */
+/* x directory pio read data */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xpiord_xdir_entry_u {
+ mmr_t sh_md_dqrp_mmr_xpiord_xdir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t reserved_0 : 4;
+ } sh_md_dqrp_mmr_xpiord_xdir_entry_s;
+} sh_md_dqrp_mmr_xpiord_xdir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPIORD_XDIR_ECC" */
+/* x directory ecc */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xpiord_xdir_ecc_u {
+ mmr_t sh_md_dqrp_mmr_xpiord_xdir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqrp_mmr_xpiord_xdir_ecc_s;
+} sh_md_dqrp_mmr_xpiord_xdir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ENTRY" */
+/* y directory pio read data */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ypiord_ydir_entry_u {
+ mmr_t sh_md_dqrp_mmr_ypiord_ydir_entry_regval;
+ struct {
+ mmr_t dira : 26;
+ mmr_t dirb : 26;
+ mmr_t pri : 3;
+ mmr_t acc : 3;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t reserved_0 : 4;
+ } sh_md_dqrp_mmr_ypiord_ydir_entry_s;
+} sh_md_dqrp_mmr_ypiord_ydir_entry_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPIORD_YDIR_ECC" */
+/* y directory ecc */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ypiord_ydir_ecc_u {
+ mmr_t sh_md_dqrp_mmr_ypiord_ydir_ecc_regval;
+ struct {
+ mmr_t ecca : 7;
+ mmr_t eccb : 7;
+ mmr_t reserved_0 : 50;
+ } sh_md_dqrp_mmr_ypiord_ydir_ecc_s;
+} sh_md_dqrp_mmr_ypiord_ydir_ecc_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xcerr1_u {
+ mmr_t sh_md_dqrp_mmr_xcerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqrp_mmr_xcerr1_s;
+} sh_md_dqrp_mmr_xcerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xcerr2_u {
+ mmr_t sh_md_dqrp_mmr_xcerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqrp_mmr_xcerr2_s;
+} sh_md_dqrp_mmr_xcerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xuerr1_u {
+ mmr_t sh_md_dqrp_mmr_xuerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqrp_mmr_xuerr1_s;
+} sh_md_dqrp_mmr_xuerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xuerr2_u {
+ mmr_t sh_md_dqrp_mmr_xuerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqrp_mmr_xuerr2_s;
+} sh_md_dqrp_mmr_xuerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xperr_u {
+ mmr_t sh_md_dqrp_mmr_xperr_regval;
+ struct {
+ mmr_t dir : 26;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t prige : 1;
+ mmr_t priv : 1;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t mybit : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 1;
+ } sh_md_dqrp_mmr_xperr_s;
+} sh_md_dqrp_mmr_xperr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YCERR1" */
+/* correctable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ycerr1_u {
+ mmr_t sh_md_dqrp_mmr_ycerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqrp_mmr_ycerr1_s;
+} sh_md_dqrp_mmr_ycerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YCERR2" */
+/* correctable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ycerr2_u {
+ mmr_t sh_md_dqrp_mmr_ycerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqrp_mmr_ycerr2_s;
+} sh_md_dqrp_mmr_ycerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YUERR1" */
+/* uncorrectable dir ecc group 1 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_yuerr1_u {
+ mmr_t sh_md_dqrp_mmr_yuerr1_regval;
+ struct {
+ mmr_t grp1 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 25;
+ } sh_md_dqrp_mmr_yuerr1_s;
+} sh_md_dqrp_mmr_yuerr1_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YUERR2" */
+/* uncorrectable dir ecc group 2 error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_yuerr2_u {
+ mmr_t sh_md_dqrp_mmr_yuerr2_regval;
+ struct {
+ mmr_t grp2 : 36;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 26;
+ } sh_md_dqrp_mmr_yuerr2_s;
+} sh_md_dqrp_mmr_yuerr2_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YPERR" */
+/* protocol error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_yperr_u {
+ mmr_t sh_md_dqrp_mmr_yperr_regval;
+ struct {
+ mmr_t dir : 26;
+ mmr_t cmd : 8;
+ mmr_t src : 14;
+ mmr_t prige : 1;
+ mmr_t priv : 1;
+ mmr_t cor : 1;
+ mmr_t unc : 1;
+ mmr_t mybit : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 1;
+ } sh_md_dqrp_mmr_yperr_s;
+} sh_md_dqrp_mmr_yperr_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_CMDTRIG" */
+/* cmd triggers */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_cmdtrig_u {
+ mmr_t sh_md_dqrp_mmr_dir_cmdtrig_regval;
+ struct {
+ mmr_t cmd0 : 8;
+ mmr_t cmd1 : 8;
+ mmr_t cmd2 : 8;
+ mmr_t cmd3 : 8;
+ mmr_t reserved_0 : 32;
+ } sh_md_dqrp_mmr_dir_cmdtrig_s;
+} sh_md_dqrp_mmr_dir_cmdtrig_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TBLTRIG" */
+/* dir table trigger */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_tbltrig_u {
+ mmr_t sh_md_dqrp_mmr_dir_tbltrig_regval;
+ struct {
+ mmr_t src : 14;
+ mmr_t cmd : 8;
+ mmr_t acc : 2;
+ mmr_t prige : 1;
+ mmr_t dirst : 9;
+ mmr_t mybit : 8;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrp_mmr_dir_tbltrig_s;
+} sh_md_dqrp_mmr_dir_tbltrig_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_DIR_TBLMASK" */
+/* dir table trigger mask */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_dir_tblmask_u {
+ mmr_t sh_md_dqrp_mmr_dir_tblmask_regval;
+ struct {
+ mmr_t src : 14;
+ mmr_t cmd : 8;
+ mmr_t acc : 2;
+ mmr_t prige : 1;
+ mmr_t dirst : 9;
+ mmr_t mybit : 8;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrp_mmr_dir_tblmask_s;
+} sh_md_dqrp_mmr_dir_tblmask_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xbist_h_u {
+ mmr_t sh_md_dqrp_mmr_xbist_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_1 : 21;
+ } sh_md_dqrp_mmr_xbist_h_s;
+} sh_md_dqrp_mmr_xbist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xbist_l_u {
+ mmr_t sh_md_dqrp_mmr_xbist_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_xbist_l_s;
+} sh_md_dqrp_mmr_xbist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xbist_err_h_u {
+ mmr_t sh_md_dqrp_mmr_xbist_err_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_xbist_err_h_s;
+} sh_md_dqrp_mmr_xbist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_xbist_err_l_u {
+ mmr_t sh_md_dqrp_mmr_xbist_err_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_xbist_err_l_s;
+} sh_md_dqrp_mmr_xbist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ybist_h_u {
+ mmr_t sh_md_dqrp_mmr_ybist_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_1 : 21;
+ } sh_md_dqrp_mmr_ybist_h_s;
+} sh_md_dqrp_mmr_ybist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ybist_l_u {
+ mmr_t sh_md_dqrp_mmr_ybist_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_ybist_l_s;
+} sh_md_dqrp_mmr_ybist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ybist_err_h_u {
+ mmr_t sh_md_dqrp_mmr_ybist_err_h_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_ybist_err_h_s;
+} sh_md_dqrp_mmr_ybist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRP_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrp_mmr_ybist_err_l_u {
+ mmr_t sh_md_dqrp_mmr_ybist_err_l_regval;
+ struct {
+ mmr_t pat : 32;
+ mmr_t reserved_0 : 8;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_1 : 22;
+ } sh_md_dqrp_mmr_ybist_err_l_s;
+} sh_md_dqrp_mmr_ybist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_xbist_h_u {
+ mmr_t sh_md_dqrs_mmr_xbist_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 21;
+ } sh_md_dqrs_mmr_xbist_h_s;
+} sh_md_dqrs_mmr_xbist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_xbist_l_u {
+ mmr_t sh_md_dqrs_mmr_xbist_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_xbist_l_s;
+} sh_md_dqrs_mmr_xbist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_xbist_err_h_u {
+ mmr_t sh_md_dqrs_mmr_xbist_err_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_xbist_err_h_s;
+} sh_md_dqrs_mmr_xbist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_XBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_xbist_err_l_u {
+ mmr_t sh_md_dqrs_mmr_xbist_err_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_xbist_err_l_s;
+} sh_md_dqrs_mmr_xbist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_H" */
+/* rising edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_ybist_h_u {
+ mmr_t sh_md_dqrs_mmr_ybist_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t arm : 1;
+ mmr_t reserved_0 : 21;
+ } sh_md_dqrs_mmr_ybist_h_s;
+} sh_md_dqrs_mmr_ybist_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_L" */
+/* falling edge bist/fill pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_ybist_l_u {
+ mmr_t sh_md_dqrs_mmr_ybist_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t inv : 1;
+ mmr_t rot : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_ybist_l_s;
+} sh_md_dqrs_mmr_ybist_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_ERR_H" */
+/* rising edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_ybist_err_h_u {
+ mmr_t sh_md_dqrs_mmr_ybist_err_h_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_ybist_err_h_s;
+} sh_md_dqrs_mmr_ybist_err_h_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YBIST_ERR_L" */
+/* falling edge bist error pattern */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_ybist_err_l_u {
+ mmr_t sh_md_dqrs_mmr_ybist_err_l_regval;
+ struct {
+ mmr_t pat : 40;
+ mmr_t val : 1;
+ mmr_t more : 1;
+ mmr_t reserved_0 : 22;
+ } sh_md_dqrs_mmr_ybist_err_l_s;
+} sh_md_dqrs_mmr_ybist_err_l_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_JNR_DEBUG" */
+/* joiner/fct debug configuration */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_jnr_debug_u {
+ mmr_t sh_md_dqrs_mmr_jnr_debug_regval;
+ struct {
+ mmr_t px : 1;
+ mmr_t rw : 1;
+ mmr_t reserved_0 : 62;
+ } sh_md_dqrs_mmr_jnr_debug_s;
+} sh_md_dqrs_mmr_jnr_debug_u_t;
+
+/* ==================================================================== */
+/* Register "SH_MD_DQRS_MMR_YAMOPW_ERR" */
+/* amo/partial rmw ecc error register */
+/* ==================================================================== */
+
+typedef union sh_md_dqrs_mmr_yamopw_err_u {
+ mmr_t sh_md_dqrs_mmr_yamopw_err_regval;
+ struct {
+ mmr_t ssyn : 8;
+ mmr_t scor : 1;
+ mmr_t sunc : 1;
+ mmr_t reserved_0 : 6;
+ mmr_t rsyn : 8;
+ mmr_t rcor : 1;
+ mmr_t runc : 1;
+ mmr_t reserved_1 : 6;
+ mmr_t arm : 1;
+ mmr_t reserved_2 : 31;
+ } sh_md_dqrs_mmr_yamopw_err_s;
+} sh_md_dqrs_mmr_yamopw_err_u_t;
+
+#endif /* _ASM_IA64_SN_SN2_SHUB_MMR_T_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SN2_SHUBIO_H
+#define _ASM_IA64_SN_SN2_SHUBIO_H
+
+#include <asm/sn/arch.h>
+
+#define HUB_WIDGET_ID_MAX 0xf
+#define IIO_NUM_ITTES 7
+#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1)
+
+#define IIO_WID 0x00400000 /* Crosstalk Widget Identification */
+ /* This register is also accessible from
+ * Crosstalk at address 0x0. */
+#define IIO_WSTAT 0x00400008 /* Crosstalk Widget Status */
+#define IIO_WCR 0x00400020 /* Crosstalk Widget Control Register */
+#define IIO_ILAPR 0x00400100 /* IO Local Access Protection Register */
+#define IIO_ILAPO 0x00400108 /* IO Local Access Protection Override */
+#define IIO_IOWA 0x00400110 /* IO Outbound Widget Access */
+#define IIO_IIWA 0x00400118 /* IO Inbound Widget Access */
+#define IIO_IIDEM 0x00400120 /* IO Inbound Device Error Mask */
+#define IIO_ILCSR 0x00400128 /* IO LLP Control and Status Register */
+#define IIO_ILLR 0x00400130 /* IO LLP Log Register */
+#define IIO_IIDSR 0x00400138 /* IO Interrupt Destination */
+
+#define IIO_IGFX0 0x00400140 /* IO Graphics Node-Widget Map 0 */
+#define IIO_IGFX1 0x00400148 /* IO Graphics Node-Widget Map 1 */
+
+#define IIO_ISCR0 0x00400150 /* IO Scratch Register 0 */
+#define IIO_ISCR1 0x00400158 /* IO Scratch Register 1 */
+
+#define IIO_ITTE1 0x00400160 /* IO Translation Table Entry 1 */
+#define IIO_ITTE2 0x00400168 /* IO Translation Table Entry 2 */
+#define IIO_ITTE3 0x00400170 /* IO Translation Table Entry 3 */
+#define IIO_ITTE4 0x00400178 /* IO Translation Table Entry 4 */
+#define IIO_ITTE5 0x00400180 /* IO Translation Table Entry 5 */
+#define IIO_ITTE6 0x00400188 /* IO Translation Table Entry 6 */
+#define IIO_ITTE7 0x00400190 /* IO Translation Table Entry 7 */
+
+#define IIO_IPRB0 0x00400198 /* IO PRB Entry 0 */
+#define IIO_IPRB8 0x004001A0 /* IO PRB Entry 8 */
+#define IIO_IPRB9 0x004001A8 /* IO PRB Entry 9 */
+#define IIO_IPRBA 0x004001B0 /* IO PRB Entry A */
+#define IIO_IPRBB 0x004001B8 /* IO PRB Entry B */
+#define IIO_IPRBC 0x004001C0 /* IO PRB Entry C */
+#define IIO_IPRBD 0x004001C8 /* IO PRB Entry D */
+#define IIO_IPRBE 0x004001D0 /* IO PRB Entry E */
+#define IIO_IPRBF 0x004001D8 /* IO PRB Entry F */
+
+#define IIO_IXCC 0x004001E0 /* IO Crosstalk Credit Count Timeout */
+#define IIO_IMEM 0x004001E8 /* IO Miscellaneous Error Mask */
+#define IIO_IXTT 0x004001F0 /* IO Crosstalk Timeout Threshold */
+#define IIO_IECLR 0x004001F8 /* IO Error Clear Register */
+#define IIO_IBCR 0x00400200 /* IO BTE Control Register */
+
+#define IIO_IXSM 0x00400208 /* IO Crosstalk Spurious Message */
+#define IIO_IXSS 0x00400210 /* IO Crosstalk Spurious Sideband */
+
+#define IIO_ILCT 0x00400218 /* IO LLP Channel Test */
+
+#define IIO_IIEPH1 0x00400220 /* IO Incoming Error Packet Header, Part 1 */
+#define IIO_IIEPH2 0x00400228 /* IO Incoming Error Packet Header, Part 2 */
+
+
+#define IIO_ISLAPR 0x00400230 /* IO SXB Local Access Protection Register */
+#define IIO_ISLAPO 0x00400238 /* IO SXB Local Access Protection Override */
+
+#define IIO_IWI 0x00400240 /* IO Wrapper Interrupt Register */
+#define IIO_IWEL 0x00400248 /* IO Wrapper Error Log Register */
+#define IIO_IWC 0x00400250 /* IO Wrapper Control Register */
+#define IIO_IWS 0x00400258 /* IO Wrapper Status Register */
+#define IIO_IWEIM 0x00400260 /* IO Wrapper Error Interrupt Masking Register */
+
+#define IIO_IPCA 0x00400300 /* IO PRB Counter Adjust */
+
+#define IIO_IPRTE0_A 0x00400308 /* IO PIO Read Address Table Entry 0, Part A */
+#define IIO_IPRTE1_A 0x00400310 /* IO PIO Read Address Table Entry 1, Part A */
+#define IIO_IPRTE2_A 0x00400318 /* IO PIO Read Address Table Entry 2, Part A */
+#define IIO_IPRTE3_A 0x00400320 /* IO PIO Read Address Table Entry 3, Part A */
+#define IIO_IPRTE4_A 0x00400328 /* IO PIO Read Address Table Entry 4, Part A */
+#define IIO_IPRTE5_A 0x00400330 /* IO PIO Read Address Table Entry 5, Part A */
+#define IIO_IPRTE6_A 0x00400338 /* IO PIO Read Address Table Entry 6, Part A */
+#define IIO_IPRTE7_A 0x00400340 /* IO PIO Read Address Table Entry 7, Part A */
+
+#define IIO_IPRTE0_B 0x00400348 /* IO PIO Read Address Table Entry 0, Part B */
+#define IIO_IPRTE1_B 0x00400350 /* IO PIO Read Address Table Entry 1, Part B */
+#define IIO_IPRTE2_B 0x00400358 /* IO PIO Read Address Table Entry 2, Part B */
+#define IIO_IPRTE3_B 0x00400360 /* IO PIO Read Address Table Entry 3, Part B */
+#define IIO_IPRTE4_B 0x00400368 /* IO PIO Read Address Table Entry 4, Part B */
+#define IIO_IPRTE5_B 0x00400370 /* IO PIO Read Address Table Entry 5, Part B */
+#define IIO_IPRTE6_B 0x00400378 /* IO PIO Read Address Table Entry 6, Part B */
+#define IIO_IPRTE7_B 0x00400380 /* IO PIO Read Address Table Entry 7, Part B */
+
+#define IIO_IPDR 0x00400388 /* IO PIO Deallocation Register */
+#define IIO_ICDR 0x00400390 /* IO CRB Entry Deallocation Register */
+#define IIO_IFDR 0x00400398 /* IO IOQ FIFO Depth Register */
+#define IIO_IIAP 0x004003A0 /* IO IIQ Arbitration Parameters */
+#define IIO_ICMR 0x004003A8 /* IO CRB Management Register */
+#define IIO_ICCR 0x004003B0 /* IO CRB Control Register */
+#define IIO_ICTO 0x004003B8 /* IO CRB Timeout */
+#define IIO_ICTP 0x004003C0 /* IO CRB Timeout Prescaler */
+
+#define IIO_ICRB0_A 0x00400400 /* IO CRB Entry 0_A */
+#define IIO_ICRB0_B 0x00400408 /* IO CRB Entry 0_B */
+#define IIO_ICRB0_C 0x00400410 /* IO CRB Entry 0_C */
+#define IIO_ICRB0_D 0x00400418 /* IO CRB Entry 0_D */
+#define IIO_ICRB0_E 0x00400420 /* IO CRB Entry 0_E */
+
+#define IIO_ICRB1_A 0x00400430 /* IO CRB Entry 1_A */
+#define IIO_ICRB1_B 0x00400438 /* IO CRB Entry 1_B */
+#define IIO_ICRB1_C 0x00400440 /* IO CRB Entry 1_C */
+#define IIO_ICRB1_D 0x00400448 /* IO CRB Entry 1_D */
+#define IIO_ICRB1_E 0x00400450 /* IO CRB Entry 1_E */
+
+#define IIO_ICRB2_A 0x00400460 /* IO CRB Entry 2_A */
+#define IIO_ICRB2_B 0x00400468 /* IO CRB Entry 2_B */
+#define IIO_ICRB2_C 0x00400470 /* IO CRB Entry 2_C */
+#define IIO_ICRB2_D 0x00400478 /* IO CRB Entry 2_D */
+#define IIO_ICRB2_E 0x00400480 /* IO CRB Entry 2_E */
+
+#define IIO_ICRB3_A 0x00400490 /* IO CRB Entry 3_A */
+#define IIO_ICRB3_B 0x00400498 /* IO CRB Entry 3_B */
+#define IIO_ICRB3_C 0x004004a0 /* IO CRB Entry 3_C */
+#define IIO_ICRB3_D 0x004004a8 /* IO CRB Entry 3_D */
+#define IIO_ICRB3_E 0x004004b0 /* IO CRB Entry 3_E */
+
+#define IIO_ICRB4_A 0x004004c0 /* IO CRB Entry 4_A */
+#define IIO_ICRB4_B 0x004004c8 /* IO CRB Entry 4_B */
+#define IIO_ICRB4_C 0x004004d0 /* IO CRB Entry 4_C */
+#define IIO_ICRB4_D 0x004004d8 /* IO CRB Entry 4_D */
+#define IIO_ICRB4_E 0x004004e0 /* IO CRB Entry 4_E */
+
+#define IIO_ICRB5_A 0x004004f0 /* IO CRB Entry 5_A */
+#define IIO_ICRB5_B 0x004004f8 /* IO CRB Entry 5_B */
+#define IIO_ICRB5_C 0x00400500 /* IO CRB Entry 5_C */
+#define IIO_ICRB5_D 0x00400508 /* IO CRB Entry 5_D */
+#define IIO_ICRB5_E 0x00400510 /* IO CRB Entry 5_E */
+
+#define IIO_ICRB6_A 0x00400520 /* IO CRB Entry 6_A */
+#define IIO_ICRB6_B 0x00400528 /* IO CRB Entry 6_B */
+#define IIO_ICRB6_C 0x00400530 /* IO CRB Entry 6_C */
+#define IIO_ICRB6_D 0x00400538 /* IO CRB Entry 6_D */
+#define IIO_ICRB6_E 0x00400540 /* IO CRB Entry 6_E */
+
+#define IIO_ICRB7_A 0x00400550 /* IO CRB Entry 7_A */
+#define IIO_ICRB7_B 0x00400558 /* IO CRB Entry 7_B */
+#define IIO_ICRB7_C 0x00400560 /* IO CRB Entry 7_C */
+#define IIO_ICRB7_D 0x00400568 /* IO CRB Entry 7_D */
+#define IIO_ICRB7_E 0x00400570 /* IO CRB Entry 7_E */
+
+#define IIO_ICRB8_A 0x00400580 /* IO CRB Entry 8_A */
+#define IIO_ICRB8_B 0x00400588 /* IO CRB Entry 8_B */
+#define IIO_ICRB8_C 0x00400590 /* IO CRB Entry 8_C */
+#define IIO_ICRB8_D 0x00400598 /* IO CRB Entry 8_D */
+#define IIO_ICRB8_E 0x004005a0 /* IO CRB Entry 8_E */
+
+#define IIO_ICRB9_A 0x004005b0 /* IO CRB Entry 9_A */
+#define IIO_ICRB9_B 0x004005b8 /* IO CRB Entry 9_B */
+#define IIO_ICRB9_C 0x004005c0 /* IO CRB Entry 9_C */
+#define IIO_ICRB9_D 0x004005c8 /* IO CRB Entry 9_D */
+#define IIO_ICRB9_E 0x004005d0 /* IO CRB Entry 9_E */
+
+#define IIO_ICRBA_A 0x004005e0 /* IO CRB Entry A_A */
+#define IIO_ICRBA_B 0x004005e8 /* IO CRB Entry A_B */
+#define IIO_ICRBA_C 0x004005f0 /* IO CRB Entry A_C */
+#define IIO_ICRBA_D 0x004005f8 /* IO CRB Entry A_D */
+#define IIO_ICRBA_E 0x00400600 /* IO CRB Entry A_E */
+
+#define IIO_ICRBB_A 0x00400610 /* IO CRB Entry B_A */
+#define IIO_ICRBB_B 0x00400618 /* IO CRB Entry B_B */
+#define IIO_ICRBB_C 0x00400620 /* IO CRB Entry B_C */
+#define IIO_ICRBB_D 0x00400628 /* IO CRB Entry B_D */
+#define IIO_ICRBB_E 0x00400630 /* IO CRB Entry B_E */
+
+#define IIO_ICRBC_A 0x00400640 /* IO CRB Entry C_A */
+#define IIO_ICRBC_B 0x00400648 /* IO CRB Entry C_B */
+#define IIO_ICRBC_C 0x00400650 /* IO CRB Entry C_C */
+#define IIO_ICRBC_D 0x00400658 /* IO CRB Entry C_D */
+#define IIO_ICRBC_E 0x00400660 /* IO CRB Entry C_E */
+
+#define IIO_ICRBD_A 0x00400670 /* IO CRB Entry D_A */
+#define IIO_ICRBD_B 0x00400678 /* IO CRB Entry D_B */
+#define IIO_ICRBD_C 0x00400680 /* IO CRB Entry D_C */
+#define IIO_ICRBD_D 0x00400688 /* IO CRB Entry D_D */
+#define IIO_ICRBD_E 0x00400690 /* IO CRB Entry D_E */
+
+#define IIO_ICRBE_A 0x004006a0 /* IO CRB Entry E_A */
+#define IIO_ICRBE_B 0x004006a8 /* IO CRB Entry E_B */
+#define IIO_ICRBE_C 0x004006b0 /* IO CRB Entry E_C */
+#define IIO_ICRBE_D 0x004006b8 /* IO CRB Entry E_D */
+#define IIO_ICRBE_E 0x004006c0 /* IO CRB Entry E_E */
+
+#define IIO_ICSML 0x00400700 /* IO CRB Spurious Message Low */
+#define IIO_ICSMM 0x00400708 /* IO CRB Spurious Message Middle */
+#define IIO_ICSMH 0x00400710 /* IO CRB Spurious Message High */
+
+#define IIO_IDBSS 0x00400718 /* IO Debug Submenu Select */
+
+#define IIO_IBLS0 0x00410000 /* IO BTE Length Status 0 */
+#define IIO_IBSA0 0x00410008 /* IO BTE Source Address 0 */
+#define IIO_IBDA0 0x00410010 /* IO BTE Destination Address 0 */
+#define IIO_IBCT0 0x00410018 /* IO BTE Control Terminate 0 */
+#define IIO_IBNA0 0x00410020 /* IO BTE Notification Address 0 */
+#define IIO_IBIA0 0x00410028 /* IO BTE Interrupt Address 0 */
+#define IIO_IBLS1 0x00420000 /* IO BTE Length Status 1 */
+#define IIO_IBSA1 0x00420008 /* IO BTE Source Address 1 */
+#define IIO_IBDA1 0x00420010 /* IO BTE Destination Address 1 */
+#define IIO_IBCT1 0x00420018 /* IO BTE Control Terminate 1 */
+#define IIO_IBNA1 0x00420020 /* IO BTE Notification Address 1 */
+#define IIO_IBIA1 0x00420028 /* IO BTE Interrupt Address 1 */
+
+#define IIO_IPCR 0x00430000 /* IO Performance Control */
+#define IIO_IPPR 0x00430008 /* IO Performance Profiling */
+
+
+/************************************************************************
+ * *
+ * Description: This register echoes some information from the *
+ * LB_REV_ID register. It is available through Crosstalk as described *
+ * above. The REV_NUM and MFG_NUM fields receive their values from *
+ * the REVISION and MANUFACTURER fields in the LB_REV_ID register. *
+ * The PART_NUM field's value is the Crosstalk device ID number that *
+ * Steve Miller assigned to the SHub chip. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wid_u {
+ shubreg_t ii_wid_regval;
+ struct {
+ shubreg_t w_rsvd_1 : 1;
+ shubreg_t w_mfg_num : 11;
+ shubreg_t w_part_num : 16;
+ shubreg_t w_rev_num : 4;
+ shubreg_t w_rsvd : 32;
+ } ii_wid_fld_s;
+} ii_wid_u_t;
+
+
+/************************************************************************
+ * *
+ * The fields in this register are set upon detection of an error *
+ * and cleared by various mechanisms, as explained in the *
+ * description. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wstat_u {
+ shubreg_t ii_wstat_regval;
+ struct {
+ shubreg_t w_pending : 4;
+ shubreg_t w_xt_crd_to : 1;
+ shubreg_t w_xt_tail_to : 1;
+ shubreg_t w_rsvd_3 : 3;
+ shubreg_t w_tx_mx_rty : 1;
+ shubreg_t w_rsvd_2 : 6;
+ shubreg_t w_llp_tx_cnt : 8;
+ shubreg_t w_rsvd_1 : 8;
+ shubreg_t w_crazy : 1;
+ shubreg_t w_rsvd : 31;
+ } ii_wstat_fld_s;
+} ii_wstat_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This is a read-write enabled register. It controls *
+ * various aspects of the Crosstalk flow control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wcr_u {
+ shubreg_t ii_wcr_regval;
+ struct {
+ shubreg_t w_wid : 4;
+ shubreg_t w_tag : 1;
+ shubreg_t w_rsvd_1 : 8;
+ shubreg_t w_dst_crd : 3;
+ shubreg_t w_f_bad_pkt : 1;
+ shubreg_t w_dir_con : 1;
+ shubreg_t w_e_thresh : 5;
+ shubreg_t w_rsvd : 41;
+ } ii_wcr_fld_s;
+} ii_wcr_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register's value is a bit vector that guards *
+ * access to local registers within the II as well as to external *
+ * Crosstalk widgets. Each bit in the register corresponds to a *
+ * particular region in the system; a region consists of one, two or *
+ * four nodes (depending on the value of the REGION_SIZE field in the *
+ * LB_REV_ID register, which is documented in Section 8.3.1.1). The *
+ * protection provided by this register applies to PIO read *
+ * operations as well as PIO write operations. The II will perform a *
+ * PIO read or write request only if the bit for the requestor's *
+ * region is set; otherwise, the II will not perform the requested *
+ * operation and will return an error response. When a PIO read or *
+ * write request targets an external Crosstalk widget, then not only *
+ * must the bit for the requestor's region be set in the ILAPR, but *
+ * also the target widget's bit in the IOWA register must be set in *
+ * order for the II to perform the requested operation; otherwise, *
+ * the II will return an error response. Hence, the protection *
+ * provided by the IOWA register supplements the protection provided *
+ * by the ILAPR for requests that target external Crosstalk widgets. *
+ * This register itself can be accessed only by the nodes whose *
+ * region ID bits are enabled in this same register. It can also be *
+ * accessed through the IAlias space by the local processors. *
+ * The reset value of this register allows access by all nodes. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilapr_u {
+ shubreg_t ii_ilapr_regval;
+ struct {
+ shubreg_t i_region : 64;
+ } ii_ilapr_fld_s;
+} ii_ilapr_u_t;
+
+
+
+
+/************************************************************************
+ * *
+ * Description: A write to this register of the 64-bit value *
+ * "SGIrules" in ASCII, will cause the bit in the ILAPR register *
+ * corresponding to the region of the requestor to be set (allow *
+ * access). A write of any other value will be ignored. Access *
+ * protection for this register is "SGIrules". *
+ * This register can also be accessed through the IAlias space. *
+ * However, this access will not change the access permissions in the *
+ * ILAPR. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilapo_u {
+ shubreg_t ii_ilapo_regval;
+ struct {
+ shubreg_t i_io_ovrride : 64;
+ } ii_ilapo_fld_s;
+} ii_ilapo_u_t;
+
+
+
+/************************************************************************
+ * *
+ * This register qualifies all the PIO and Graphics writes launched *
+ * from the SHUB towards a widget. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iowa_u {
+ shubreg_t ii_iowa_regval;
+ struct {
+ shubreg_t i_w0_oac : 1;
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_wx_oac : 8;
+ shubreg_t i_rsvd : 48;
+ } ii_iowa_fld_s;
+} ii_iowa_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the requests launched *
+ * from a widget towards the Shub. This register is intended to be *
+ * used by software in case of misbehaving widgets. *
+ * *
+ * *
+ ************************************************************************/
+
+typedef union ii_iiwa_u {
+ shubreg_t ii_iiwa_regval;
+ struct {
+ shubreg_t i_w0_iac : 1;
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_wx_iac : 8;
+ shubreg_t i_rsvd : 48;
+ } ii_iiwa_fld_s;
+} ii_iiwa_u_t;
+
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the operations launched *
+ * from a widget towards the SHub. It allows individual access *
+ * control for up to 8 devices per widget. A device refers to *
+ * an individual DMA master hosted by a widget. *
+ * The bits in each field of this register are cleared by the Shub *
+ * upon detection of an error which requires the device to be *
+ * disabled. These fields assume that 0 <= TNUM <= 7 (i.e., Bridge-centric *
+ * Crosstalk). Whether or not a device has access rights to this *
+ * Shub is determined by an AND of the device enable bit in the *
+ * appropriate field of this register and the corresponding bit in *
+ * the Wx_IAC field (for the widget which this device belongs to). *
+ * The bits in this field are set by writing a 1 to them. Incoming *
+ * replies from Crosstalk are not subject to this access control *
+ * mechanism. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iidem_u {
+ shubreg_t ii_iidem_regval;
+ struct {
+ shubreg_t i_w8_dxs : 8;
+ shubreg_t i_w9_dxs : 8;
+ shubreg_t i_wa_dxs : 8;
+ shubreg_t i_wb_dxs : 8;
+ shubreg_t i_wc_dxs : 8;
+ shubreg_t i_wd_dxs : 8;
+ shubreg_t i_we_dxs : 8;
+ shubreg_t i_wf_dxs : 8;
+ } ii_iidem_fld_s;
+} ii_iidem_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the various programmable fields necessary *
+ * for controlling and observing the LLP signals. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilcsr_u {
+ shubreg_t ii_ilcsr_regval;
+ struct {
+ shubreg_t i_nullto : 6;
+ shubreg_t i_rsvd_4 : 2;
+ shubreg_t i_wrmrst : 1;
+ shubreg_t i_rsvd_3 : 1;
+ shubreg_t i_llp_en : 1;
+ shubreg_t i_bm8 : 1;
+ shubreg_t i_llp_stat : 2;
+ shubreg_t i_remote_power : 1;
+ shubreg_t i_rsvd_2 : 1;
+ shubreg_t i_maxrtry : 10;
+ shubreg_t i_d_avail_sel : 2;
+ shubreg_t i_rsvd_1 : 4;
+ shubreg_t i_maxbrst : 10;
+ shubreg_t i_rsvd : 22;
+
+ } ii_ilcsr_fld_s;
+} ii_ilcsr_u_t;
+
+
+/************************************************************************
+ * *
+ * This is simply a status register that monitors the LLP error *
+ * rate. *
+ * *
+ ************************************************************************/
+
+typedef union ii_illr_u {
+ shubreg_t ii_illr_regval;
+ struct {
+ shubreg_t i_sn_cnt : 16;
+ shubreg_t i_cb_cnt : 16;
+ shubreg_t i_rsvd : 32;
+ } ii_illr_fld_s;
+} ii_illr_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: All II-detected non-BTE error interrupts are *
+ * specified via this register. *
+ * NOTE: The PI interrupt register address is hardcoded in the II. If *
+ * PI_ID==0, then the II sends an interrupt request (Duplonet PWRI *
+ * packet) to address offset 0x0180_0090 within the local register *
+ * address space of PI0 on the node specified by the NODE field. If *
+ * PI_ID==1, then the II sends the interrupt request to address *
+ * offset 0x01A0_0090 within the local register address space of PI1 *
+ * on the node specified by the NODE field. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iidsr_u {
+ shubreg_t ii_iidsr_regval;
+ struct {
+ shubreg_t i_level : 8;
+ shubreg_t i_pi_id : 1;
+ shubreg_t i_node : 11;
+ shubreg_t i_rsvd_3 : 4;
+ shubreg_t i_enable : 1;
+ shubreg_t i_rsvd_2 : 3;
+ shubreg_t i_int_sent : 2;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_pi0_forward_int : 1;
+ shubreg_t i_pi1_forward_int : 1;
+ shubreg_t i_rsvd : 30;
+ } ii_iidsr_fld_s;
+} ii_iidsr_u_t;
+
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. This register is used *
+ * for matching up the incoming responses from the graphics widget to *
+ * the processor that initiated the graphics operation. The *
+ * write-responses are converted to graphics credits and returned to *
+ * the processor so that the processor interface can manage the flow *
+ * control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_igfx0_u {
+ shubreg_t ii_igfx0_regval;
+ struct {
+ shubreg_t i_w_num : 4;
+ shubreg_t i_pi_id : 1;
+ shubreg_t i_n_num : 12;
+ shubreg_t i_p_num : 1;
+ shubreg_t i_rsvd : 46;
+ } ii_igfx0_fld_s;
+} ii_igfx0_u_t;
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. This register is used *
+ * for matching up the incoming responses from the graphics widget to *
+ * the processor that initiated the graphics operation. The *
+ * write-responses are converted to graphics credits and returned to *
+ * the processor so that the processor interface can manage the flow *
+ * control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_igfx1_u {
+ shubreg_t ii_igfx1_regval;
+ struct {
+ shubreg_t i_w_num : 4;
+ shubreg_t i_pi_id : 1;
+ shubreg_t i_n_num : 12;
+ shubreg_t i_p_num : 1;
+ shubreg_t i_rsvd : 46;
+ } ii_igfx1_fld_s;
+} ii_igfx1_u_t;
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. These registers are *
+ * used as scratch registers for software use. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iscr0_u {
+ shubreg_t ii_iscr0_regval;
+ struct {
+ shubreg_t i_scratch : 64;
+ } ii_iscr0_fld_s;
+} ii_iscr0_u_t;
+
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. These registers are *
+ * used as scratch registers for software use. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iscr1_u {
+ shubreg_t ii_iscr1_regval;
+ struct {
+ shubreg_t i_scratch : 64;
+ } ii_iscr1_fld_s;
+} ii_iscr1_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte1_u {
+ shubreg_t ii_itte1_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte1_fld_s;
+} ii_itte1_u_t;
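+/*
+ * Illustrative sketch (not part of the original definitions): M-mode
+ * Big Window address synthesis as described above.  The 5 OFFSET bits
+ * are concatenated with SysAD[28:0] to form Crosstalk[33:0]; the upper
+ * Crosstalk[47:34] bits are zero.  The helper name is hypothetical.
+ */
+static inline unsigned long long itte_xtalk_addr_mmode(unsigned long long offset,
+						       unsigned long long sysad)
+{
+	/* OFFSET occupies Crosstalk[33:29], SysAD[28:0] the low bits */
+	return ((offset & 0x1fULL) << 29) | (sysad & 0x1fffffffULL);
+}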
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte2_u {
+ shubreg_t ii_itte2_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte2_fld_s;
+} ii_itte2_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte3_u {
+ shubreg_t ii_itte3_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte3_fld_s;
+} ii_itte3_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a SHub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte4_u {
+ shubreg_t ii_itte4_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte4_fld_s;
+} ii_itte4_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a SHub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte5_u {
+ shubreg_t ii_itte5_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte5_fld_s;
+} ii_itte5_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte6_u {
+ shubreg_t ii_itte6_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte6_fld_s;
+} ii_itte6_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space *
+ * addressable by the SHub is thus the lower 16 GBytes per widget *
+ * (M-mode), only 7/32nds of this space can be accessed. *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big *
+ * Window number) are used to select one of these 7 registers. The *
+ * Widget number field is then derived from the W_NUM field for *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP *
+ * field is used as Crosstalk[47], and the remainder of the Crosstalk *
+ * address bits (Crosstalk[46:34]) are always zero. While the maximum *
+ * Crosstalk space addressable by the SHub is thus the lower *
+ * 8 GBytes per widget (N-mode), only 7/32nds of this space can be *
+ * accessed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte7_u {
+ shubreg_t ii_itte7_regval;
+ struct {
+ shubreg_t i_offset : 5;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_w_num : 4;
+ shubreg_t i_iosp : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_itte7_fld_s;
+} ii_itte7_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb0_u {
+ shubreg_t ii_iprb0_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprb0_fld_s;
+} ii_iprb0_u_t;
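+/*
+ * Illustrative sketch (not in the original): the SPUR_WR rule described
+ * above.  The bit sets once the credit count C rises above the value
+ * software last wrote, i.e. more responses were received than requests
+ * were sent.  The helper name is hypothetical.
+ */
+static inline int iprb_spur_wr_would_set(unsigned int c_now,
+					 unsigned int c_written)
+{
+	return c_now > c_written;
+}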
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb8_u {
+ shubreg_t ii_iprb8_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprb8_fld_s;
+} ii_iprb8_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb9_u {
+ shubreg_t ii_iprb9_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprb9_fld_s;
+} ii_iprb9_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprba_u {
+ shubreg_t ii_iprba_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprba_fld_s;
+} ii_iprba_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbb_u {
+ shubreg_t ii_iprbb_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprbb_fld_s;
+} ii_iprbb_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbc_u {
+ shubreg_t ii_iprbc_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprbc_fld_s;
+} ii_iprbc_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbd_u {
+ shubreg_t ii_iprbd_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprbd_fld_s;
+} ii_iprbd_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbe_u {
+ shubreg_t ii_iprbe_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprbe_fld_s;
+} ii_iprbe_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0 *
+ * refers to Crossbow's internal space. *
+ * This register contains the state elements per widget that are *
+ * necessary to manage the PIO flow control on Crosstalk and on the *
+ * Router Network. See the PIO Flow Control chapter for a complete *
+ * description of this register. *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbf_u {
+ shubreg_t ii_iprbf_regval;
+ struct {
+ shubreg_t i_c : 8;
+ shubreg_t i_na : 14;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_nb : 14;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_m : 2;
+ shubreg_t i_f : 1;
+ shubreg_t i_of_cnt : 5;
+ shubreg_t i_error : 1;
+ shubreg_t i_rd_to : 1;
+ shubreg_t i_spur_wr : 1;
+ shubreg_t i_spur_rd : 1;
+ shubreg_t i_rsvd : 11;
+ shubreg_t i_mult_err : 1;
+ } ii_iprbf_fld_s;
+} ii_iprbf_u_t;
+
+
+/************************************************************************
+ * *
+ * This register specifies the timeout value to use for monitoring *
+ * the Crosstalk credits used for outbound Crosstalk requests. An *
+ * internal counter called the Crosstalk Credit Timeout Counter *
+ * increments every 128 II clocks. The counter starts counting *
+ * anytime the credit count drops below a threshold, and resets to *
+ * zero (stops counting) anytime the credit count is at or above the *
+ * threshold. The threshold is 1 credit in direct connect mode and 2 *
+ * in Crossbow connect mode. When the internal Crosstalk Credit *
+ * Timeout Counter reaches the value programmed in this register, a *
+ * Crosstalk Credit Timeout has occurred. The internal counter is not *
+ * readable from software, and stops counting at its maximum value, *
+ * so it cannot cause more than one interrupt. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixcc_u {
+ shubreg_t ii_ixcc_regval;
+ struct {
+ shubreg_t i_time_out : 26;
+ shubreg_t i_rsvd : 38;
+ } ii_ixcc_fld_s;
+} ii_ixcc_u_t;
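The TIME_OUT arithmetic described above (the internal counter advances once per 128 II clocks) can be sketched as a small helper. The II clock rate below is a placeholder assumption for illustration only; it is not specified anywhere in this header:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical II clock rate -- an assumption for this sketch, not a
 * value from the register specification. */
#define II_CLK_HZ 200000000ULL

/* The IXCC credit-timeout counter advances once every 128 II clocks, so
 * the 26-bit TIME_OUT field is the desired timeout expressed in
 * 128-clock ticks, rounded up. */
static uint64_t ixcc_time_out_field(uint64_t timeout_ns)
{
	uint64_t clocks = (timeout_ns * II_CLK_HZ) / 1000000000ULL;
	uint64_t ticks = (clocks + 127) / 128;	/* round up to a whole tick */
	return ticks & ((1ULL << 26) - 1);	/* TIME_OUT is 26 bits */
}
```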
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the PIO and DMA *
+ * operations launched from widget 0 towards the SHub. In *
+ * addition, it also qualifies accesses by the BTE streams. *
+ * The bits in each field of this register are cleared by the SHub *
+ * upon detection of an error which requires widget 0 or the BTE *
+ * streams to be terminated. Whether or not widget x has access *
+ * rights to this SHub is determined by an AND of the device *
+ * enable bit in the appropriate field of this register and bit 0 in *
+ * the Wx_IAC field. The bits in this field are set by writing a 1 to *
+ * them. Incoming replies from Crosstalk are not subject to this *
+ * access control mechanism. *
+ * *
+ ************************************************************************/
+
+typedef union ii_imem_u {
+ shubreg_t ii_imem_regval;
+ struct {
+ shubreg_t i_w0_esd : 1;
+ shubreg_t i_rsvd_3 : 3;
+ shubreg_t i_b0_esd : 1;
+ shubreg_t i_rsvd_2 : 3;
+ shubreg_t i_b1_esd : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_clr_precise : 1;
+ shubreg_t i_rsvd : 51;
+ } ii_imem_fld_s;
+} ii_imem_u_t;
+
+
+
+/************************************************************************
+ * *
+ * Description: This register specifies the timeout value to use for *
+ * monitoring Crosstalk tail flits coming into the Shub in the *
+ * TAIL_TO field. An internal counter associated with this register *
+ * is incremented every 128 II internal clocks (7 bits). The counter *
+ * starts counting anytime a header micropacket is received and stops *
+ * counting (and resets to zero) any time a micropacket with a Tail *
+ * bit is received. Once the counter reaches the threshold value *
+ * programmed in this register, it generates an interrupt to the *
+ * processor that is programmed into the IIDSR. The counter saturates *
+ * (does not roll over) at its maximum value, so it cannot cause *
+ * another interrupt until after it is cleared. *
+ * The register also contains the Read Response Timeout values. The *
+ * Prescalar is 23 bits, and counts II clocks. An internal counter *
+ * increments on every II clock and when it reaches the value in the *
+ * Prescalar field, all IPRTE registers with their valid bits set *
+ * have their Read Response timers bumped. Whenever any of them match *
+ * the value in the RRSP_TO field, a Read Response Timeout has *
+ * occurred, and error handling occurs as described in the Error *
+ * Handling section of this document. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixtt_u {
+ shubreg_t ii_ixtt_regval;
+ struct {
+ shubreg_t i_tail_to : 26;
+ shubreg_t i_rsvd_1 : 6;
+ shubreg_t i_rrsp_ps : 23;
+ shubreg_t i_rrsp_to : 5;
+ shubreg_t i_rsvd : 4;
+ } ii_ixtt_fld_s;
+} ii_ixtt_u_t;
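As a rough sketch of the read-response arithmetic above: each prescalar rollover bumps every valid IPRTE timer once, so a timeout fires after roughly RRSP_PS x RRSP_TO II clocks. A minimal helper, assuming that reading of the text:

```c
#include <assert.h>
#include <stdint.h>

/* Approximate read-response timeout in II clocks: the 23-bit prescalar
 * counts clocks per timer bump, and a timeout occurs when a timer
 * reaches the 5-bit RRSP_TO value. */
static uint64_t ixtt_rrsp_clocks(uint64_t rrsp_ps, uint64_t rrsp_to)
{
	rrsp_ps &= (1ULL << 23) - 1;	/* RRSP_PS is 23 bits */
	rrsp_to &= 0x1F;		/* RRSP_TO is 5 bits */
	return rrsp_ps * rrsp_to;
}
```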
+
+
+/************************************************************************
+ * *
+ * Writing a 1 to the fields of this register clears the appropriate *
+ * error bits in other areas of SHub. Note that when the *
+ * E_PRB_x bits are used to clear error bits in PRB registers, *
+ * SPUR_RD and SPUR_WR may persist, because they require additional *
+ * action to clear them. See the IPRBx and IXSS Register *
+ * specifications. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ieclr_u {
+ shubreg_t ii_ieclr_regval;
+ struct {
+ shubreg_t i_e_prb_0 : 1;
+ shubreg_t i_rsvd : 7;
+ shubreg_t i_e_prb_8 : 1;
+ shubreg_t i_e_prb_9 : 1;
+ shubreg_t i_e_prb_a : 1;
+ shubreg_t i_e_prb_b : 1;
+ shubreg_t i_e_prb_c : 1;
+ shubreg_t i_e_prb_d : 1;
+ shubreg_t i_e_prb_e : 1;
+ shubreg_t i_e_prb_f : 1;
+ shubreg_t i_e_crazy : 1;
+ shubreg_t i_e_bte_0 : 1;
+ shubreg_t i_e_bte_1 : 1;
+ shubreg_t i_reserved_1 : 10;
+ shubreg_t i_spur_rd_hdr : 1;
+ shubreg_t i_cam_intr_to : 1;
+ shubreg_t i_cam_overflow : 1;
+ shubreg_t i_cam_read_miss : 1;
+ shubreg_t i_ioq_rep_underflow : 1;
+ shubreg_t i_ioq_req_underflow : 1;
+ shubreg_t i_ioq_rep_overflow : 1;
+ shubreg_t i_ioq_req_overflow : 1;
+ shubreg_t i_iiq_rep_overflow : 1;
+ shubreg_t i_iiq_req_overflow : 1;
+ shubreg_t i_ii_xn_rep_cred_overflow : 1;
+ shubreg_t i_ii_xn_req_cred_overflow : 1;
+ shubreg_t i_ii_xn_invalid_cmd : 1;
+ shubreg_t i_xn_ii_invalid_cmd : 1;
+ shubreg_t i_reserved_2 : 21;
+ } ii_ieclr_fld_s;
+} ii_ieclr_u_t;
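Since IECLR is write-one-to-clear, clearing a widget's PRB error bits amounts to building the right single-bit mask. Per the field layout above, widget 0 uses bit 0 and widgets 8 through 0xF use bits 8 through 15; widgets 1 through 7 have no E_PRB bit. A minimal sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Build the IECLR write value that clears the PRB error bits for one
 * widget. Returns 0 for widgets 1..7, which have no E_PRB bit. */
static uint64_t ieclr_prb_mask(unsigned widget)
{
	if (widget == 0 || (widget >= 8 && widget <= 0xF))
		return 1ULL << widget;
	return 0;
}
```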
+
+
+/************************************************************************
+ * *
+ * This register controls both BTEs. SOFT_RESET is intended for *
+ * recovery after an error. COUNT controls the total number of CRBs *
+ * that both BTEs (combined) can use, which affects total BTE *
+ * bandwidth. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibcr_u {
+ shubreg_t ii_ibcr_regval;
+ struct {
+ shubreg_t i_count : 4;
+ shubreg_t i_rsvd_1 : 4;
+ shubreg_t i_soft_reset : 1;
+ shubreg_t i_rsvd : 55;
+ } ii_ibcr_fld_s;
+} ii_ibcr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the header of a spurious read response *
+ * received from Crosstalk. A spurious read response is defined as a *
+ * read response received by II from a widget for which (1) the SIDN *
+ * has a value between 1 and 7, inclusive (II never sends requests to *
+ * these widgets), (2) there is no valid IPRTE register which *
+ * corresponds to the TNUM, or (3) the widget indicated in SIDN is *
+ * not the same as the widget recorded in the IPRTE register *
+ * referenced by the TNUM. If this condition is true, and if the *
+ * IXSS[VALID] bit is clear, then the header of the spurious read *
+ * response is captured in IXSM and IXSS, and IXSS[VALID] is set. The *
+ * errant header is thereby captured, and no further spurious read *
+ * responses are captured until IXSS[VALID] is cleared by setting the *
+ * appropriate bit in IECLR. Every time a spurious read response is *
+ * detected, the SPUR_RD bit of the PRB corresponding to the incoming *
+ * message's SIDN field is set. This always happens, regardless of *
+ * whether a header is captured. The programmer should check *
+ * IXSM[SIDN] to determine which widget sent the spurious response, *
+ * because there may be more than one SPUR_RD bit set in the PRB *
+ * registers. The widget indicated by IXSM[SIDN] sent the first *
+ * spurious read response received since the last time IXSS[VALID] *
+ * was clear. The SPUR_RD bit of the corresponding PRB will be set. *
+ * Any SPUR_RD bits in any other PRB registers indicate spurious *
+ * messages from other widgets which were detected after the header *
+ * was captured. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixsm_u {
+ shubreg_t ii_ixsm_regval;
+ struct {
+ shubreg_t i_byte_en : 32;
+ shubreg_t i_reserved : 1;
+ shubreg_t i_tag : 3;
+ shubreg_t i_alt_pactyp : 4;
+ shubreg_t i_bo : 1;
+ shubreg_t i_error : 1;
+ shubreg_t i_vbpm : 1;
+ shubreg_t i_gbr : 1;
+ shubreg_t i_ds : 2;
+ shubreg_t i_ct : 1;
+ shubreg_t i_tnum : 5;
+ shubreg_t i_pactyp : 4;
+ shubreg_t i_sidn : 4;
+ shubreg_t i_didn : 4;
+ } ii_ixsm_fld_s;
+} ii_ixsm_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the sideband bits of a spurious read *
+ * response received from Crosstalk. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixss_u {
+ shubreg_t ii_ixss_regval;
+ struct {
+ shubreg_t i_sideband : 8;
+ shubreg_t i_rsvd : 55;
+ shubreg_t i_valid : 1;
+ } ii_ixss_fld_s;
+} ii_ixss_u_t;
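A minimal sketch of decoding a captured spurious read response from raw register values. The bit positions are derived from the declaration order of the bitfields above, assuming the LSB-first allocation these definitions rely on:

```c
#include <assert.h>
#include <stdint.h>

/* VALID is the most significant bit of IXSS
 * (sideband:8, rsvd:55, valid:1). */
static int ixss_valid(uint64_t ixss)
{
	return (int)(ixss >> 63);
}

/* SIDN occupies bits 59..56 of IXSM, counting fields up from bit 0 in
 * declaration order (byte_en:32 ... pactyp:4, sidn:4, didn:4). */
static unsigned ixsm_sidn(uint64_t ixsm)
{
	return (unsigned)((ixsm >> 56) & 0xF);
}
```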
+
+
+/************************************************************************
+ * *
+ * This register enables software to access the II LLP's test port. *
+ * Refer to the LLP 2.5 documentation for an explanation of the test *
+ * port. Software can write to this register to program the values *
+ * for the control fields (TestErrCapture, TestClear, TestFlit, *
+ * TestMask and TestSeed). Similarly, software can read from this *
+ * register to obtain the values of the test port's status outputs *
+ * (TestCBerr, TestValid and TestData). *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilct_u {
+ shubreg_t ii_ilct_regval;
+ struct {
+ shubreg_t i_test_seed : 20;
+ shubreg_t i_test_mask : 8;
+ shubreg_t i_test_data : 20;
+ shubreg_t i_test_valid : 1;
+ shubreg_t i_test_cberr : 1;
+ shubreg_t i_test_flit : 3;
+ shubreg_t i_test_clear : 1;
+ shubreg_t i_test_err_capture : 1;
+ shubreg_t i_rsvd : 9;
+ } ii_ilct_fld_s;
+} ii_ilct_u_t;
+
+
+/************************************************************************
+ * *
+ * If the II detects an illegal incoming Duplonet packet (request or *
+ * reply) when VALID==0 in the IIEPH1 register, then it saves the *
+ * contents of the packet's header flit in the IIEPH1 and IIEPH2 *
+ * registers, sets the VALID bit in IIEPH1, clears the OVERRUN bit, *
+ * and assigns a value to the ERR_TYPE field which indicates the *
+ * specific nature of the error. The II recognizes four different *
+ * types of errors: short request packets (ERR_TYPE==2), short reply *
+ * packets (ERR_TYPE==3), long request packets (ERR_TYPE==4) and long *
+ * reply packets (ERR_TYPE==5). The encodings for these types of *
+ * errors were chosen to be consistent with the same types of errors *
+ * indicated by the ERR_TYPE field in the LB_ERROR_HDR1 register (in *
+ * the LB unit). If the II detects an illegal incoming Duplonet *
+ * packet when VALID==1 in the IIEPH1 register, then it merely sets *
+ * the OVERRUN bit to indicate that a subsequent error has happened, *
+ * and does nothing further. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iieph1_u {
+ shubreg_t ii_iieph1_regval;
+ struct {
+ shubreg_t i_command : 7;
+ shubreg_t i_rsvd_5 : 1;
+ shubreg_t i_suppl : 14;
+ shubreg_t i_rsvd_4 : 1;
+ shubreg_t i_source : 14;
+ shubreg_t i_rsvd_3 : 1;
+ shubreg_t i_err_type : 4;
+ shubreg_t i_rsvd_2 : 4;
+ shubreg_t i_overrun : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_valid : 1;
+ shubreg_t i_rsvd : 13;
+ } ii_iieph1_fld_s;
+} ii_iieph1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register holds the Address field from the header flit of an *
+ * incoming erroneous Duplonet packet, along with the tail bit which *
+ * accompanied this header flit. This register is essentially an *
+ * extension of IIEPH1. Two registers were necessary because the 64 *
+ * bits available in only a single register were insufficient to *
+ * capture the entire header flit of an erroneous packet. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iieph2_u {
+ shubreg_t ii_iieph2_regval;
+ struct {
+ shubreg_t i_rsvd_0 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_rsvd_1 : 10;
+ shubreg_t i_tail : 1;
+ shubreg_t i_rsvd : 3;
+ } ii_iieph2_fld_s;
+} ii_iieph2_u_t;
+
+
+/******************************/
+
+
+
+/************************************************************************
+ * *
+ * This register's value is a bit vector that guards access from SXBs *
+ * to local registers within the II as well as to external Crosstalk *
+ * widgets. *
+ * *
+ ************************************************************************/
+
+typedef union ii_islapr_u {
+ shubreg_t ii_islapr_regval;
+ struct {
+ shubreg_t i_region : 64;
+ } ii_islapr_fld_s;
+} ii_islapr_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register of the 56-bit value "Pup+Bun" will cause *
+ * the bit in the ISLAPR register corresponding to the region of the *
+ * requestor to be set (access allowed). *
+ * *
+ ************************************************************************/
+
+typedef union ii_islapo_u {
+ shubreg_t ii_islapo_regval;
+ struct {
+ shubreg_t i_io_sbx_ovrride : 56;
+ shubreg_t i_rsvd : 8;
+ } ii_islapo_fld_s;
+} ii_islapo_u_t;
+
+/************************************************************************
+ * *
+ * Determines how long the wrapper will wait after an interrupt is *
+ * initially issued from the II before it times out the outstanding *
+ * interrupt and drops it from the interrupt queue. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwi_u {
+ shubreg_t ii_iwi_regval;
+ struct {
+ shubreg_t i_prescale : 24;
+ shubreg_t i_rsvd : 8;
+ shubreg_t i_timeout : 8;
+ shubreg_t i_rsvd1 : 8;
+ shubreg_t i_intrpt_retry_period : 8;
+ shubreg_t i_rsvd2 : 8;
+ } ii_iwi_fld_s;
+} ii_iwi_u_t;
+
+/************************************************************************
+ * *
+ * Log errors which have occurred in the II wrapper. The errors are *
+ * cleared by writing to the IECLR register. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwel_u {
+ shubreg_t ii_iwel_regval;
+ struct {
+ shubreg_t i_intr_timed_out : 1;
+ shubreg_t i_rsvd : 7;
+ shubreg_t i_cam_overflow : 1;
+ shubreg_t i_cam_read_miss : 1;
+ shubreg_t i_rsvd1 : 2;
+ shubreg_t i_ioq_rep_underflow : 1;
+ shubreg_t i_ioq_req_underflow : 1;
+ shubreg_t i_ioq_rep_overflow : 1;
+ shubreg_t i_ioq_req_overflow : 1;
+ shubreg_t i_iiq_rep_overflow : 1;
+ shubreg_t i_iiq_req_overflow : 1;
+ shubreg_t i_rsvd2 : 6;
+ shubreg_t i_ii_xn_rep_cred_over_under: 1;
+ shubreg_t i_ii_xn_req_cred_over_under: 1;
+ shubreg_t i_rsvd3 : 6;
+ shubreg_t i_ii_xn_invalid_cmd : 1;
+ shubreg_t i_xn_ii_invalid_cmd : 1;
+ shubreg_t i_rsvd4 : 30;
+ } ii_iwel_fld_s;
+} ii_iwel_u_t;
+
+/************************************************************************
+ * *
+ * Controls the II wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwc_u {
+ shubreg_t ii_iwc_regval;
+ struct {
+ shubreg_t i_dma_byte_swap : 1;
+ shubreg_t i_rsvd : 3;
+ shubreg_t i_cam_read_lines_reset : 1;
+ shubreg_t i_rsvd1 : 3;
+ shubreg_t i_ii_xn_cred_over_under_log: 1;
+ shubreg_t i_rsvd2 : 19;
+ shubreg_t i_xn_rep_iq_depth : 5;
+ shubreg_t i_rsvd3 : 3;
+ shubreg_t i_xn_req_iq_depth : 5;
+ shubreg_t i_rsvd4 : 3;
+ shubreg_t i_iiq_depth : 6;
+ shubreg_t i_rsvd5 : 12;
+ shubreg_t i_force_rep_cred : 1;
+ shubreg_t i_force_req_cred : 1;
+ } ii_iwc_fld_s;
+} ii_iwc_u_t;
+
+/************************************************************************
+ * *
+ * Status in the II wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iws_u {
+ shubreg_t ii_iws_regval;
+ struct {
+ shubreg_t i_xn_rep_iq_credits : 5;
+ shubreg_t i_rsvd : 3;
+ shubreg_t i_xn_req_iq_credits : 5;
+ shubreg_t i_rsvd1 : 51;
+ } ii_iws_fld_s;
+} ii_iws_u_t;
+
+/************************************************************************
+ * *
+ * Masks errors in the IWEL register. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iweim_u {
+ shubreg_t ii_iweim_regval;
+ struct {
+ shubreg_t i_intr_timed_out : 1;
+ shubreg_t i_rsvd : 7;
+ shubreg_t i_cam_overflow : 1;
+ shubreg_t i_cam_read_miss : 1;
+ shubreg_t i_rsvd1 : 2;
+ shubreg_t i_ioq_rep_underflow : 1;
+ shubreg_t i_ioq_req_underflow : 1;
+ shubreg_t i_ioq_rep_overflow : 1;
+ shubreg_t i_ioq_req_overflow : 1;
+ shubreg_t i_iiq_rep_overflow : 1;
+ shubreg_t i_iiq_req_overflow : 1;
+ shubreg_t i_rsvd2 : 6;
+ shubreg_t i_ii_xn_rep_cred_overflow : 1;
+ shubreg_t i_ii_xn_req_cred_overflow : 1;
+ shubreg_t i_rsvd3 : 6;
+ shubreg_t i_ii_xn_invalid_cmd : 1;
+ shubreg_t i_xn_ii_invalid_cmd : 1;
+ shubreg_t i_rsvd4 : 30;
+ } ii_iweim_fld_s;
+} ii_iweim_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register causes a particular field in the *
+ * corresponding widget's PRB entry to be adjusted up or down by 1. *
+ * This counter should be used when recovering from error and reset *
+ * conditions. Note that software would be capable of causing *
+ * inadvertent overflow or underflow of these counters. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipca_u {
+ shubreg_t ii_ipca_regval;
+ struct {
+ shubreg_t i_wid : 4;
+ shubreg_t i_adjust : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_field : 2;
+ shubreg_t i_rsvd : 54;
+ } ii_ipca_fld_s;
+} ii_ipca_u_t;
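A hedged sketch of composing an IPCA write value from the field layout above (wid:4 at bits 3..0, adjust:1 at bit 4, field:2 at bits 9..8). The polarity of ADJUST (whether 1 means increment) is an assumption here, not stated in the description:

```c
#include <assert.h>
#include <stdint.h>

/* Compose an IPCA write value. 'up' selects the ADJUST bit (assumed:
 * 1 = increment, 0 = decrement); 'field' selects which PRB counter
 * to adjust. */
static uint64_t ipca_write_val(unsigned widget, int up, unsigned field)
{
	return (uint64_t)(widget & 0xF)		/* WID:    bits 3..0 */
	     | ((uint64_t)(up ? 1 : 0) << 4)	/* ADJUST: bit 4     */
	     | ((uint64_t)(field & 0x3) << 8);	/* FIELD:  bits 9..8 */
}
```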
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+
+typedef union ii_iprte0a_u {
+ shubreg_t ii_iprte0a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte0a_fld_s;
+} ii_iprte0a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte1a_u {
+ shubreg_t ii_iprte1a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte1a_fld_s;
+} ii_iprte1a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte2a_u {
+ shubreg_t ii_iprte2a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte2a_fld_s;
+} ii_iprte2a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte3a_u {
+ shubreg_t ii_iprte3a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte3a_fld_s;
+} ii_iprte3a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte4a_u {
+ shubreg_t ii_iprte4a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte4a_fld_s;
+} ii_iprte4a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte5a_u {
+ shubreg_t ii_iprte5a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte5a_fld_s;
+} ii_iprte5a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte6a_u {
+ shubreg_t ii_iprte6a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte6a_fld_s;
+} ii_iprte6a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte7a_u {
+ shubreg_t ii_iprte7a_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 54;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } ii_iprte7a_fld_s;
+} ii_iprte7a_u_t;
+
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+
+typedef union ii_iprte0b_u {
+ shubreg_t ii_iprte0b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte0b_fld_s;
+} ii_iprte0b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte1b_u {
+ shubreg_t ii_iprte1b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte1b_fld_s;
+} ii_iprte1b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte2b_u {
+ shubreg_t ii_iprte2b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte2b_fld_s;
+} ii_iprte2b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte3b_u {
+ shubreg_t ii_iprte3b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte3b_fld_s;
+} ii_iprte3b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte4b_u {
+ shubreg_t ii_iprte4b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte4b_fld_s;
+} ii_iprte4b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte5b_u {
+ shubreg_t ii_iprte5b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte5b_fld_s;
+} ii_iprte5b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte6b_u {
+ shubreg_t ii_iprte6b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte6b_fld_s;
+} ii_iprte6b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte7b_u {
+ shubreg_t ii_iprte7b_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_address : 47;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 11;
+ } ii_iprte7b_fld_s;
+} ii_iprte7b_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: SHub II contains a feature which did not exist in *
+ * the Hub which automatically cleans up after a Read Response *
+ * timeout, including deallocation of the IPRTE and recovery of IBuf *
+ * space. The inclusion of this register in SHub is for backward *
+ * compatibility. *
+ * A write to this register causes an entry from the table of *
+ * outstanding PIO Read Requests to be freed and returned to the *
+ * stack of free entries. This register is used in handling the *
+ * timeout errors that result in a PIO Reply never returning from *
+ * Crosstalk. *
+ * Note that this register does not affect the contents of the IPRTE *
+ * registers. The Valid bits in those registers have to be *
+ * specifically turned off by software. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipdr_u {
+ shubreg_t ii_ipdr_regval;
+ struct {
+ shubreg_t i_te : 3;
+ shubreg_t i_rsvd_1 : 1;
+ shubreg_t i_pnd : 1;
+ shubreg_t i_init_rpcnt : 1;
+ shubreg_t i_rsvd : 58;
+ } ii_ipdr_fld_s;
+} ii_ipdr_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register causes a CRB entry to be returned to the *
+ * queue of free CRBs. The entry should have previously been cleared *
+ * (mark bit) via backdoor access to the pertinent CRB entry. This *
+ * register is used in the last step of handling the errors that are *
+ * captured and marked in CRB entries. Briefly: 1) first error for *
+ * DMA write from a particular device, and first error for a *
+ * particular BTE stream, lead to a marked CRB entry, and processor *
+ * interrupt, 2) software reads the error information captured in the *
+ * CRB entry, and presumably takes some corrective action, 3) *
+ * software clears the mark bit, and finally 4) software writes to *
+ * the ICDR register to return the CRB entry to the list of free CRB *
+ * entries. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icdr_u {
+ shubreg_t ii_icdr_regval;
+ struct {
+ shubreg_t i_crb_num : 4;
+ shubreg_t i_pnd : 1;
+ shubreg_t i_rsvd : 59;
+ } ii_icdr_fld_s;
+} ii_icdr_u_t;
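The four-step recovery flow described above ends with a write of the CRB number to ICDR. A minimal sketch of composing that write value (CRB_NUM at bits 3..0, PND at bit 4); the MMIO access itself is platform-specific and omitted:

```c
#include <assert.h>
#include <stdint.h>

/* PND is set by hardware while the free request is still pending. */
#define ICDR_PND (1ULL << 4)

/* Step 4 of CRB error recovery: the value written to ICDR to return an
 * (already unmarked) CRB entry to the free list. */
static uint64_t icdr_free_val(unsigned crb_num)
{
	return (uint64_t)(crb_num & 0xF);	/* CRB_NUM: bits 3..0 */
}
```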
+
+
+/************************************************************************
+ * *
+ * This register provides debug access to two FIFOs inside of II. *
+ * Both IOQ_MAX* fields of this register contain the instantaneous *
+ * depth (in units of the number of available entries) of the *
+ * associated IOQ FIFO. A read of this register will return the *
+ * number of free entries on each FIFO at the time of the read. So *
+ * when a FIFO is idle, the associated field contains the maximum *
+ * depth of the FIFO. This register is writable for debug reasons *
+ * and is intended to be written with the maximum desired FIFO depth *
+ * while the FIFO is idle. Software must assure that II is idle when *
+ * this register is written. If there are any active entries in any *
+ * of these FIFOs when this register is written, the results are *
+ * undefined. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ifdr_u {
+ shubreg_t ii_ifdr_regval;
+ struct {
+ shubreg_t i_ioq_max_rq : 7;
+ shubreg_t i_set_ioq_rq : 1;
+ shubreg_t i_ioq_max_rp : 7;
+ shubreg_t i_set_ioq_rp : 1;
+ shubreg_t i_rsvd : 48;
+ } ii_ifdr_fld_s;
+} ii_ifdr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the II to become sluggish in removing *
+ * messages from its inbound queue (IIQ). This will cause messages to *
+ * back up in either virtual channel. Disabling the "molasses" mode *
+ * subsequently allows the II to be tested under stress. In the *
+ * sluggish ("Molasses") mode, the localized effects of congestion *
+ * can be observed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iiap_u {
+ shubreg_t ii_iiap_regval;
+ struct {
+ shubreg_t i_rq_mls : 6;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_rp_mls : 6;
+ shubreg_t i_rsvd : 50;
+ } ii_iiap_fld_s;
+} ii_iiap_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows several parameters of CRB operation to be *
+ * set. Note that writing to this register can have catastrophic side *
+ * effects, if the CRB is not quiescent, i.e. if the CRB is *
+ * processing protocol messages when the write occurs. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icmr_u {
+ shubreg_t ii_icmr_regval;
+ struct {
+ shubreg_t i_sp_msg : 1;
+ shubreg_t i_rd_hdr : 1;
+ shubreg_t i_rsvd_4 : 2;
+ shubreg_t i_c_cnt : 4;
+ shubreg_t i_rsvd_3 : 4;
+ shubreg_t i_clr_rqpd : 1;
+ shubreg_t i_clr_rppd : 1;
+ shubreg_t i_rsvd_2 : 2;
+ shubreg_t i_fc_cnt : 4;
+ shubreg_t i_crb_vld : 15;
+ shubreg_t i_crb_mark : 15;
+ shubreg_t i_rsvd_1 : 2;
+ shubreg_t i_precise : 1;
+ shubreg_t i_rsvd : 11;
+ } ii_icmr_fld_s;
+} ii_icmr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows control of the table portion of the CRB *
+ * logic via software. Control operations from this register have *
+ * priority over all incoming Crosstalk or BTE requests. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iccr_u {
+ shubreg_t ii_iccr_regval;
+ struct {
+ shubreg_t i_crb_num : 4;
+ shubreg_t i_rsvd_1 : 4;
+ shubreg_t i_cmd : 8;
+ shubreg_t i_pending : 1;
+ shubreg_t i_rsvd : 47;
+ } ii_iccr_fld_s;
+} ii_iccr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the maximum timeout value to be programmed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icto_u {
+ shubreg_t ii_icto_regval;
+ struct {
+ shubreg_t i_timeout : 8;
+ shubreg_t i_rsvd : 56;
+ } ii_icto_fld_s;
+} ii_icto_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the timeout prescalar to be programmed. An *
+ * internal counter is associated with this register. When the *
+ * internal counter reaches the value of the PRESCALE field, the *
+ * timer registers in all valid CRBs are incremented (CRBx_D[TIMEOUT] *
+ * field). The internal counter resets to zero, and then continues *
+ * counting. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ictp_u {
+ shubreg_t ii_ictp_regval;
+ struct {
+ shubreg_t i_prescale : 24;
+ shubreg_t i_rsvd : 40;
+ } ii_ictp_fld_s;
+} ii_ictp_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * The CRB Entry registers can be conceptualized as rows and columns *
+ * (illustrated in the table above). Each row contains the five *
+ * registers required for a single CRB Entry. The first doubleword *
+ * (column) for each entry is labeled A, and the second doubleword *
+ * (higher address) is labeled B, the third doubleword is labeled C, *
+ * the fourth doubleword is labeled D and the fifth doubleword is *
+ * labeled E. All CRB entries have their addresses on a quarter *
+ * cacheline aligned boundary. *
+ * Upon reset, only the following fields are initialized: valid *
+ * (VLD), priority count, timeout, timeout valid, and context valid. *
+ * All other bits should be cleared by software before use (after *
+ * recovering any potential error state from before the reset). *
+ * The following five tables summarize the format for the five *
+ * registers that are used for each ICRB# Entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_a_u {
+ shubreg_t ii_icrb0_a_regval;
+ struct {
+ shubreg_t ia_iow : 1;
+ shubreg_t ia_vld : 1;
+ shubreg_t ia_addr : 47;
+ shubreg_t ia_tnum : 5;
+ shubreg_t ia_sidn : 4;
+ shubreg_t ia_rsvd : 6;
+ } ii_icrb0_a_fld_s;
+} ii_icrb0_a_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_b_u {
+ shubreg_t ii_icrb0_b_regval;
+ struct {
+ shubreg_t ib_xt_err : 1;
+ shubreg_t ib_mark : 1;
+ shubreg_t ib_ln_uce : 1;
+ shubreg_t ib_errcode : 3;
+ shubreg_t ib_error : 1;
+ shubreg_t ib_stall__bte_1 : 1;
+ shubreg_t ib_stall__bte_0 : 1;
+ shubreg_t ib_stall__intr : 1;
+ shubreg_t ib_stall_ib : 1;
+ shubreg_t ib_intvn : 1;
+ shubreg_t ib_wb : 1;
+ shubreg_t ib_hold : 1;
+ shubreg_t ib_ack : 1;
+ shubreg_t ib_resp : 1;
+ shubreg_t ib_ack_cnt : 11;
+ shubreg_t ib_rsvd : 7;
+ shubreg_t ib_exc : 5;
+ shubreg_t ib_init : 3;
+ shubreg_t ib_imsg : 8;
+ shubreg_t ib_imsgtype : 2;
+ shubreg_t ib_use_old : 1;
+ shubreg_t ib_rsvd_1 : 11;
+ } ii_icrb0_b_fld_s;
+} ii_icrb0_b_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_c_u {
+ shubreg_t ii_icrb0_c_regval;
+ struct {
+ shubreg_t ic_source : 15;
+ shubreg_t ic_size : 2;
+ shubreg_t ic_ct : 1;
+ shubreg_t ic_bte_num : 1;
+ shubreg_t ic_gbr : 1;
+ shubreg_t ic_resprqd : 1;
+ shubreg_t ic_bo : 1;
+ shubreg_t ic_suppl : 15;
+ shubreg_t ic_rsvd : 27;
+ } ii_icrb0_c_fld_s;
+} ii_icrb0_c_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_d_u {
+ shubreg_t ii_icrb0_d_regval;
+ struct {
+ shubreg_t id_pa_be : 43;
+ shubreg_t id_bte_op : 1;
+ shubreg_t id_pr_psc : 4;
+ shubreg_t id_pr_cnt : 4;
+ shubreg_t id_sleep : 1;
+ shubreg_t id_rsvd : 11;
+ } ii_icrb0_d_fld_s;
+} ii_icrb0_d_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_e_u {
+ shubreg_t ii_icrb0_e_regval;
+ struct {
+ shubreg_t ie_timeout : 8;
+ shubreg_t ie_context : 15;
+ shubreg_t ie_rsvd : 1;
+ shubreg_t ie_tvld : 1;
+ shubreg_t ie_cvld : 1;
+ shubreg_t ie_rsvd_0 : 38;
+ } ii_icrb0_e_fld_s;
+} ii_icrb0_e_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the lower 64 bits of the header of the *
+ * spurious message captured by II. Valid when the SP_MSG bit in ICMR *
+ * register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsml_u {
+ shubreg_t ii_icsml_regval;
+ struct {
+ shubreg_t i_tt_addr : 47;
+ shubreg_t i_newsuppl_ex : 14;
+ shubreg_t i_reserved : 2;
+ shubreg_t i_overflow : 1;
+ } ii_icsml_fld_s;
+} ii_icsml_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the middle 64 bits of the header of the *
+ * spurious message captured by II. Valid when the SP_MSG bit in ICMR *
+ * register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsmm_u {
+ shubreg_t ii_icsmm_regval;
+ struct {
+ shubreg_t i_tt_ack_cnt : 11;
+ shubreg_t i_reserved : 53;
+ } ii_icsmm_fld_s;
+} ii_icsmm_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the microscopic state, all the inputs to *
+ * the protocol table, captured with the spurious message. Valid when *
+ * the SP_MSG bit in the ICMR register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsmh_u {
+ shubreg_t ii_icsmh_regval;
+ struct {
+ shubreg_t i_tt_vld : 1;
+ shubreg_t i_xerr : 1;
+ shubreg_t i_ft_cwact_o : 1;
+ shubreg_t i_ft_wact_o : 1;
+ shubreg_t i_ft_active_o : 1;
+ shubreg_t i_sync : 1;
+ shubreg_t i_mnusg : 1;
+ shubreg_t i_mnusz : 1;
+ shubreg_t i_plusz : 1;
+ shubreg_t i_plusg : 1;
+ shubreg_t i_tt_exc : 5;
+ shubreg_t i_tt_wb : 1;
+ shubreg_t i_tt_hold : 1;
+ shubreg_t i_tt_ack : 1;
+ shubreg_t i_tt_resp : 1;
+ shubreg_t i_tt_intvn : 1;
+ shubreg_t i_g_stall_bte1 : 1;
+ shubreg_t i_g_stall_bte0 : 1;
+ shubreg_t i_g_stall_il : 1;
+ shubreg_t i_g_stall_ib : 1;
+ shubreg_t i_tt_imsg : 8;
+ shubreg_t i_tt_imsgtype : 2;
+ shubreg_t i_tt_use_old : 1;
+ shubreg_t i_tt_respreqd : 1;
+ shubreg_t i_tt_bte_num : 1;
+ shubreg_t i_cbn : 1;
+ shubreg_t i_match : 1;
+ shubreg_t i_rpcnt_lt_34 : 1;
+ shubreg_t i_rpcnt_ge_34 : 1;
+ shubreg_t i_rpcnt_lt_18 : 1;
+ shubreg_t i_rpcnt_ge_18 : 1;
+ shubreg_t i_rpcnt_lt_2 : 1;
+ shubreg_t i_rpcnt_ge_2 : 1;
+ shubreg_t i_rqcnt_lt_18 : 1;
+ shubreg_t i_rqcnt_ge_18 : 1;
+ shubreg_t i_rqcnt_lt_2 : 1;
+ shubreg_t i_rqcnt_ge_2 : 1;
+ shubreg_t i_tt_device : 7;
+ shubreg_t i_tt_init : 3;
+ shubreg_t i_reserved : 5;
+ } ii_icsmh_fld_s;
+} ii_icsmh_u_t;
+
+
+/************************************************************************
+ * *
+ * The Shub DEBUG unit provides a 3-bit selection signal to the *
+ * II core and a 3-bit selection signal to the fsbclk domain in the II *
+ * wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_idbss_u {
+ shubreg_t ii_idbss_regval;
+ struct {
+ shubreg_t i_iioclk_core_submenu : 3;
+ shubreg_t i_rsvd : 5;
+ shubreg_t i_fsbclk_wrapper_submenu : 3;
+ shubreg_t i_rsvd_1 : 5;
+ shubreg_t i_iioclk_menu : 5;
+ shubreg_t i_rsvd_2 : 43;
+ } ii_idbss_fld_s;
+} ii_idbss_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register is used to set up the length for a *
+ * transfer and then to monitor the progress of that transfer. This *
+ * register needs to be initialized before a transfer is started. A *
+ * legitimate write to this register will set the Busy bit, clear the *
+ * Error bit, and initialize the length to the value desired. *
+ * While the transfer is in progress, hardware will decrement the *
+ * length field with each successful block that is copied. Once the *
+ * transfer completes, hardware will clear the Busy bit. The length *
+ * field will also contain the number of cache lines left to be *
+ * transferred. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibls0_u {
+ shubreg_t ii_ibls0_regval;
+ struct {
+ shubreg_t i_length : 16;
+ shubreg_t i_error : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_busy : 1;
+ shubreg_t i_rsvd : 43;
+ } ii_ibls0_fld_s;
+} ii_ibls0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibsa0_u {
+ shubreg_t ii_ibsa0_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 42;
+ shubreg_t i_rsvd : 15;
+ } ii_ibsa0_fld_s;
+} ii_ibsa0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibda0_u {
+ shubreg_t ii_ibda0_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 42;
+ shubreg_t i_rsvd : 15;
+ } ii_ibda0_fld_s;
+} ii_ibda0_u_t;
+
+
+/************************************************************************
+ * *
+ * Writing to this register sets up the attributes of the transfer *
+ * and initiates the transfer operation. Reading this register has *
+ * the side effect of terminating any transfer in progress. Note: *
+ * stopping a transfer midstream could have an adverse impact on the *
+ * other BTE. If a BTE stream has to be stopped (due to error *
+ * handling for example), both BTE streams should be stopped and *
+ * their transfers discarded. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibct0_u {
+ shubreg_t ii_ibct0_regval;
+ struct {
+ shubreg_t i_zerofill : 1;
+ shubreg_t i_rsvd_2 : 3;
+ shubreg_t i_notify : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_poison : 1;
+ shubreg_t i_rsvd : 55;
+ } ii_ibct0_fld_s;
+} ii_ibct0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the address to which the WINV is sent. *
+ * This address has to be cache line aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibna0_u {
+ shubreg_t ii_ibna0_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 42;
+ shubreg_t i_rsvd : 15;
+ } ii_ibna0_fld_s;
+} ii_ibna0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the programmable level as well as the node *
+ * ID and PI unit of the processor to which the interrupt will be *
+ * sent. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibia0_u {
+ shubreg_t ii_ibia0_regval;
+ struct {
+ shubreg_t i_rsvd_2 : 1;
+ shubreg_t i_node_id : 11;
+ shubreg_t i_rsvd_1 : 4;
+ shubreg_t i_level : 7;
+ shubreg_t i_rsvd : 41;
+ } ii_ibia0_fld_s;
+} ii_ibia0_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register is used to set up the length for a *
+ * transfer and then to monitor the progress of that transfer. This *
+ * register needs to be initialized before a transfer is started. A *
+ * legitimate write to this register will set the Busy bit, clear the *
+ * Error bit, and initialize the length to the value desired. *
+ * While the transfer is in progress, hardware will decrement the *
+ * length field with each successful block that is copied. Once the *
+ * transfer completes, hardware will clear the Busy bit. The length *
+ * field will also contain the number of cache lines left to be *
+ * transferred. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibls1_u {
+ shubreg_t ii_ibls1_regval;
+ struct {
+ shubreg_t i_length : 16;
+ shubreg_t i_error : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_busy : 1;
+ shubreg_t i_rsvd : 43;
+ } ii_ibls1_fld_s;
+} ii_ibls1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibsa1_u {
+ shubreg_t ii_ibsa1_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 33;
+ shubreg_t i_rsvd : 24;
+ } ii_ibsa1_fld_s;
+} ii_ibsa1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibda1_u {
+ shubreg_t ii_ibda1_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 33;
+ shubreg_t i_rsvd : 24;
+ } ii_ibda1_fld_s;
+} ii_ibda1_u_t;
+
+
+/************************************************************************
+ * *
+ * Writing to this register sets up the attributes of the transfer *
+ * and initiates the transfer operation. Reading this register has *
+ * the side effect of terminating any transfer in progress. Note: *
+ * stopping a transfer midstream could have an adverse impact on the *
+ * other BTE. If a BTE stream has to be stopped (due to error *
+ * handling for example), both BTE streams should be stopped and *
+ * their transfers discarded. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibct1_u {
+ shubreg_t ii_ibct1_regval;
+ struct {
+ shubreg_t i_zerofill : 1;
+ shubreg_t i_rsvd_2 : 3;
+ shubreg_t i_notify : 1;
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_poison : 1;
+ shubreg_t i_rsvd : 55;
+ } ii_ibct1_fld_s;
+} ii_ibct1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the address to which the WINV is sent. *
+ * This address has to be cache line aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibna1_u {
+ shubreg_t ii_ibna1_regval;
+ struct {
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_addr : 33;
+ shubreg_t i_rsvd : 24;
+ } ii_ibna1_fld_s;
+} ii_ibna1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the programmable level as well as the node *
+ * ID and PI unit of the processor to which the interrupt will be *
+ * sent. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibia1_u {
+ shubreg_t ii_ibia1_regval;
+ struct {
+ shubreg_t i_pi_id : 1;
+ shubreg_t i_node_id : 8;
+ shubreg_t i_rsvd_1 : 7;
+ shubreg_t i_level : 7;
+ shubreg_t i_rsvd : 41;
+ } ii_ibia1_fld_s;
+} ii_ibia1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register defines the resources that feed information into *
+ * the two performance counters located in the IO Performance *
+ * Profiling Register. There are 17 different quantities that can be *
+ * measured. Given these 17 different options, the two performance *
+ * counters have 15 of them in common; menu selections 0 through 0xE *
+ * are identical for each performance counter. As for the other two *
+ * options, one is available from one performance counter and the *
+ * other is available from the other performance counter. Hence, the *
+ * II supports all 17*16=272 possible combinations of quantities to *
+ * measure. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipcr_u {
+ shubreg_t ii_ipcr_regval;
+ struct {
+ shubreg_t i_ippr0_c : 4;
+ shubreg_t i_ippr1_c : 4;
+ shubreg_t i_icct : 8;
+ shubreg_t i_rsvd : 48;
+ } ii_ipcr_fld_s;
+} ii_ipcr_u_t;
+
+
+/************************************************************************
+ * *
+ * *
+ * *
+ ************************************************************************/
+
+typedef union ii_ippr_u {
+ shubreg_t ii_ippr_regval;
+ struct {
+ shubreg_t i_ippr0 : 32;
+ shubreg_t i_ippr1 : 32;
+ } ii_ippr_fld_s;
+} ii_ippr_u_t;
+
+
+/**************************************************************************
+ * *
+ * The following defines, which were not formed into structures, are *
+ * probably identical to another register; the name of that register *
+ * is provided against each of them. This information needs to be *
+ * checked carefully. *
+ * *
+ * IIO_ICRB1_A IIO_ICRB0_A *
+ * IIO_ICRB1_B IIO_ICRB0_B *
+ * IIO_ICRB1_C IIO_ICRB0_C *
+ * IIO_ICRB1_D IIO_ICRB0_D *
+ * IIO_ICRB1_E IIO_ICRB0_E *
+ * IIO_ICRB2_A IIO_ICRB0_A *
+ * IIO_ICRB2_B IIO_ICRB0_B *
+ * IIO_ICRB2_C IIO_ICRB0_C *
+ * IIO_ICRB2_D IIO_ICRB0_D *
+ * IIO_ICRB2_E IIO_ICRB0_E *
+ * IIO_ICRB3_A IIO_ICRB0_A *
+ * IIO_ICRB3_B IIO_ICRB0_B *
+ * IIO_ICRB3_C IIO_ICRB0_C *
+ * IIO_ICRB3_D IIO_ICRB0_D *
+ * IIO_ICRB3_E IIO_ICRB0_E *
+ * IIO_ICRB4_A IIO_ICRB0_A *
+ * IIO_ICRB4_B IIO_ICRB0_B *
+ * IIO_ICRB4_C IIO_ICRB0_C *
+ * IIO_ICRB4_D IIO_ICRB0_D *
+ * IIO_ICRB4_E IIO_ICRB0_E *
+ * IIO_ICRB5_A IIO_ICRB0_A *
+ * IIO_ICRB5_B IIO_ICRB0_B *
+ * IIO_ICRB5_C IIO_ICRB0_C *
+ * IIO_ICRB5_D IIO_ICRB0_D *
+ * IIO_ICRB5_E IIO_ICRB0_E *
+ * IIO_ICRB6_A IIO_ICRB0_A *
+ * IIO_ICRB6_B IIO_ICRB0_B *
+ * IIO_ICRB6_C IIO_ICRB0_C *
+ * IIO_ICRB6_D IIO_ICRB0_D *
+ * IIO_ICRB6_E IIO_ICRB0_E *
+ * IIO_ICRB7_A IIO_ICRB0_A *
+ * IIO_ICRB7_B IIO_ICRB0_B *
+ * IIO_ICRB7_C IIO_ICRB0_C *
+ * IIO_ICRB7_D IIO_ICRB0_D *
+ * IIO_ICRB7_E IIO_ICRB0_E *
+ * IIO_ICRB8_A IIO_ICRB0_A *
+ * IIO_ICRB8_B IIO_ICRB0_B *
+ * IIO_ICRB8_C IIO_ICRB0_C *
+ * IIO_ICRB8_D IIO_ICRB0_D *
+ * IIO_ICRB8_E IIO_ICRB0_E *
+ * IIO_ICRB9_A IIO_ICRB0_A *
+ * IIO_ICRB9_B IIO_ICRB0_B *
+ * IIO_ICRB9_C IIO_ICRB0_C *
+ * IIO_ICRB9_D IIO_ICRB0_D *
+ * IIO_ICRB9_E IIO_ICRB0_E *
+ * IIO_ICRBA_A IIO_ICRB0_A *
+ * IIO_ICRBA_B IIO_ICRB0_B *
+ * IIO_ICRBA_C IIO_ICRB0_C *
+ * IIO_ICRBA_D IIO_ICRB0_D *
+ * IIO_ICRBA_E IIO_ICRB0_E *
+ * IIO_ICRBB_A IIO_ICRB0_A *
+ * IIO_ICRBB_B IIO_ICRB0_B *
+ * IIO_ICRBB_C IIO_ICRB0_C *
+ * IIO_ICRBB_D IIO_ICRB0_D *
+ * IIO_ICRBB_E IIO_ICRB0_E *
+ * IIO_ICRBC_A IIO_ICRB0_A *
+ * IIO_ICRBC_B IIO_ICRB0_B *
+ * IIO_ICRBC_C IIO_ICRB0_C *
+ * IIO_ICRBC_D IIO_ICRB0_D *
+ * IIO_ICRBC_E IIO_ICRB0_E *
+ * IIO_ICRBD_A IIO_ICRB0_A *
+ * IIO_ICRBD_B IIO_ICRB0_B *
+ * IIO_ICRBD_C IIO_ICRB0_C *
+ * IIO_ICRBD_D IIO_ICRB0_D *
+ * IIO_ICRBD_E IIO_ICRB0_E *
+ * IIO_ICRBE_A IIO_ICRB0_A *
+ * IIO_ICRBE_B IIO_ICRB0_B *
+ * IIO_ICRBE_C IIO_ICRB0_C *
+ * IIO_ICRBE_D IIO_ICRB0_D *
+ * IIO_ICRBE_E IIO_ICRB0_E *
+ * *
+ **************************************************************************/
+
+
+/*
+ * Slightly friendlier names for some common registers.
+ */
+#define IIO_WIDGET IIO_WID /* Widget identification */
+#define IIO_WIDGET_STAT IIO_WSTAT /* Widget status register */
+#define IIO_WIDGET_CTRL IIO_WCR /* Widget control register */
+#define IIO_PROTECT IIO_ILAPR /* IO interface protection */
+#define IIO_PROTECT_OVRRD IIO_ILAPO /* IO protect override */
+#define IIO_OUTWIDGET_ACCESS IIO_IOWA /* Outbound widget access */
+#define IIO_INWIDGET_ACCESS IIO_IIWA /* Inbound widget access */
+#define IIO_INDEV_ERR_MASK IIO_IIDEM /* Inbound device error mask */
+#define IIO_LLP_CSR IIO_ILCSR /* LLP control and status */
+#define IIO_LLP_LOG IIO_ILLR /* LLP log */
+#define IIO_XTALKCC_TOUT IIO_IXCC /* Xtalk credit count timeout*/
+#define IIO_XTALKTT_TOUT IIO_IXTT /* Xtalk tail timeout */
+#define IIO_IO_ERR_CLR IIO_IECLR /* IO error clear */
+#define IIO_IGFX_0 IIO_IGFX0
+#define IIO_IGFX_1 IIO_IGFX1
+#define IIO_IBCT_0 IIO_IBCT0
+#define IIO_IBCT_1 IIO_IBCT1
+#define IIO_IBLS_0 IIO_IBLS0
+#define IIO_IBLS_1 IIO_IBLS1
+#define IIO_IBSA_0 IIO_IBSA0
+#define IIO_IBSA_1 IIO_IBSA1
+#define IIO_IBDA_0 IIO_IBDA0
+#define IIO_IBDA_1 IIO_IBDA1
+#define IIO_IBNA_0 IIO_IBNA0
+#define IIO_IBNA_1 IIO_IBNA1
+#define IIO_IBIA_0 IIO_IBIA0
+#define IIO_IBIA_1 IIO_IBIA1
+#define IIO_IOPRB_0 IIO_IPRB0
+
+#define IIO_PRTE_A(_x) (IIO_IPRTE0_A + (8 * (_x)))
+#define IIO_PRTE_B(_x) (IIO_IPRTE0_B + (8 * (_x)))
+#define IIO_NUM_PRTES 8 /* Total number of PRTE table entries */
+#define IIO_WIDPRTE_A(x) IIO_PRTE_A(((x) - 8)) /* widget ID to its PRTE num */
+#define IIO_WIDPRTE_B(x) IIO_PRTE_B(((x) - 8)) /* widget ID to its PRTE num */
+
+#define IIO_NUM_IPRBS (9)
+
+#define IIO_LLP_CSR_IS_UP 0x00002000
+#define IIO_LLP_CSR_LLP_STAT_MASK 0x00003000
+#define IIO_LLP_CSR_LLP_STAT_SHFT 12
+
+#define IIO_LLP_CB_MAX 0xffff /* in ILLR CB_CNT, Max Check Bit errors */
+#define IIO_LLP_SN_MAX 0xffff /* in ILLR SN_CNT, Max Sequence Number errors */
+
+/* key to IIO_PROTECT_OVRRD */
+#define IIO_PROTECT_OVRRD_KEY 0x53474972756c6573ull /* "SGIrules" */
+
+/* BTE register names */
+#define IIO_BTE_STAT_0 IIO_IBLS_0 /* Also BTE length/status 0 */
+#define IIO_BTE_SRC_0 IIO_IBSA_0 /* Also BTE source address 0 */
+#define IIO_BTE_DEST_0 IIO_IBDA_0 /* Also BTE dest. address 0 */
+#define IIO_BTE_CTRL_0 IIO_IBCT_0 /* Also BTE control/terminate 0 */
+#define IIO_BTE_NOTIFY_0 IIO_IBNA_0 /* Also BTE notification 0 */
+#define IIO_BTE_INT_0 IIO_IBIA_0 /* Also BTE interrupt 0 */
+#define IIO_BTE_OFF_0 0 /* Base offset from BTE 0 regs. */
+#define IIO_BTE_OFF_1 (IIO_IBLS_1 - IIO_IBLS_0) /* Offset from base to BTE 1 */
+
+/* BTE register offsets from base */
+#define BTEOFF_STAT 0
+#define BTEOFF_SRC (IIO_BTE_SRC_0 - IIO_BTE_STAT_0)
+#define BTEOFF_DEST (IIO_BTE_DEST_0 - IIO_BTE_STAT_0)
+#define BTEOFF_CTRL (IIO_BTE_CTRL_0 - IIO_BTE_STAT_0)
+#define BTEOFF_NOTIFY (IIO_BTE_NOTIFY_0 - IIO_BTE_STAT_0)
+#define BTEOFF_INT (IIO_BTE_INT_0 - IIO_BTE_STAT_0)
+
+
+/* names used in shub diags */
+#define IIO_BASE_BTE0 IIO_IBLS_0
+#define IIO_BASE_BTE1 IIO_IBLS_1
+
+/*
+ * Macro that takes a widget number and returns the
+ * IO PRB address of that widget.
+ * The value _x is expected to be a widget number in the range
+ * 0 or 8 - 0xF.
+ */
+#define IIO_IOPRB(_x) (IIO_IOPRB_0 + ( ( (_x) < HUB_WIDGET_ID_MIN ? \
+ (_x) : \
+ (_x) - (HUB_WIDGET_ID_MIN-1)) << 3) )
+
+
+/* GFX Flow Control Node/Widget Register */
+#define IIO_IGFX_W_NUM_BITS 4 /* size of widget num field */
+#define IIO_IGFX_W_NUM_MASK ((1<<IIO_IGFX_W_NUM_BITS)-1)
+#define IIO_IGFX_W_NUM_SHIFT 0
+#define IIO_IGFX_PI_NUM_BITS 1 /* size of PI num field */
+#define IIO_IGFX_PI_NUM_MASK ((1<<IIO_IGFX_PI_NUM_BITS)-1)
+#define IIO_IGFX_PI_NUM_SHIFT 4
+#define IIO_IGFX_N_NUM_BITS 8 /* size of node num field */
+#define IIO_IGFX_N_NUM_MASK ((1<<IIO_IGFX_N_NUM_BITS)-1)
+#define IIO_IGFX_N_NUM_SHIFT 5
+#define IIO_IGFX_P_NUM_BITS 1 /* size of processor num field */
+#define IIO_IGFX_P_NUM_MASK ((1<<IIO_IGFX_P_NUM_BITS)-1)
+#define IIO_IGFX_P_NUM_SHIFT 16
+#define IIO_IGFX_INIT(widget, pi, node, cpu) (\
+ (((widget) & IIO_IGFX_W_NUM_MASK) << IIO_IGFX_W_NUM_SHIFT) | \
+ (((pi) & IIO_IGFX_PI_NUM_MASK)<< IIO_IGFX_PI_NUM_SHIFT)| \
+ (((node) & IIO_IGFX_N_NUM_MASK) << IIO_IGFX_N_NUM_SHIFT) | \
+ (((cpu) & IIO_IGFX_P_NUM_MASK) << IIO_IGFX_P_NUM_SHIFT))
+
+
+/* Scratch registers (all bits available) */
+#define IIO_SCRATCH_REG0 IIO_ISCR0
+#define IIO_SCRATCH_REG1 IIO_ISCR1
+#define IIO_SCRATCH_MASK 0xffffffffffffffffUL
+
+#define IIO_SCRATCH_BIT0_0 0x0000000000000001UL
+#define IIO_SCRATCH_BIT0_1 0x0000000000000002UL
+#define IIO_SCRATCH_BIT0_2 0x0000000000000004UL
+#define IIO_SCRATCH_BIT0_3 0x0000000000000008UL
+#define IIO_SCRATCH_BIT0_4 0x0000000000000010UL
+#define IIO_SCRATCH_BIT0_5 0x0000000000000020UL
+#define IIO_SCRATCH_BIT0_6 0x0000000000000040UL
+#define IIO_SCRATCH_BIT0_7 0x0000000000000080UL
+#define IIO_SCRATCH_BIT0_8 0x0000000000000100UL
+#define IIO_SCRATCH_BIT0_9 0x0000000000000200UL
+#define IIO_SCRATCH_BIT0_A 0x0000000000000400UL
+
+#define IIO_SCRATCH_BIT1_0 0x0000000000000001UL
+#define IIO_SCRATCH_BIT1_1 0x0000000000000002UL
+/* IO Translation Table Entries */
+#define IIO_NUM_ITTES 7 /* ITTEs numbered 0..6 */
+ /* Hw manuals number them 1..7! */
+/*
+ * IIO_IMEM Register fields.
+ */
+#define IIO_IMEM_W0ESD 0x1UL /* Widget 0 shut down due to error */
+#define IIO_IMEM_B0ESD (1UL << 4) /* BTE 0 shut down due to error */
+#define IIO_IMEM_B1ESD (1UL << 8) /* BTE 1 Shut down due to error */
+
+/*
+ * As a permanent workaround for a bug in the PI side of the shub, we've
+ * redefined big window 7 as small window 0.
+ XXX does this still apply for SN1??
+ */
+#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1)
+
+/*
+ * Use the top big window as a surrogate for the first small window
+ */
+#define SWIN0_BIGWIN HUB_NUM_BIG_WINDOW
+
+#define ILCSR_WARM_RESET 0x100
+
+/*
+ * CRB manipulation macros
+ * The CRB macros are slightly complicated, since there are up to
+ * four registers associated with each CRB entry.
+ */
+#define IIO_NUM_CRBS 15 /* Number of CRBs */
+#define IIO_NUM_PC_CRBS 4 /* Number of partial cache CRBs */
+#define IIO_ICRB_OFFSET 8
+#define IIO_ICRB_0 IIO_ICRB0_A
+#define IIO_ICRB_ADDR_SHFT 2 /* Shift to get proper address */
+/* XXX - This is now tuneable:
+ #define IIO_FIRST_PC_ENTRY 12
+ */
+
+#define IIO_ICRB_A(_x) ((u64)(IIO_ICRB_0 + (6 * IIO_ICRB_OFFSET * (_x))))
+#define IIO_ICRB_B(_x) ((u64)((char *)IIO_ICRB_A(_x) + 1*IIO_ICRB_OFFSET))
+#define IIO_ICRB_C(_x) ((u64)((char *)IIO_ICRB_A(_x) + 2*IIO_ICRB_OFFSET))
+#define IIO_ICRB_D(_x) ((u64)((char *)IIO_ICRB_A(_x) + 3*IIO_ICRB_OFFSET))
+#define IIO_ICRB_E(_x) ((u64)((char *)IIO_ICRB_A(_x) + 4*IIO_ICRB_OFFSET))
+
+#define TNUM_TO_WIDGET_DEV(_tnum) (_tnum & 0x7)
+
+/*
+ * values for "ecode" field
+ */
+#define IIO_ICRB_ECODE_DERR 0 /* Directory error due to IIO access */
+#define IIO_ICRB_ECODE_PERR 1 /* Poison error on IO access */
+#define IIO_ICRB_ECODE_WERR 2 /* Write error by IIO access
+ * e.g. WINV to a Read only line. */
+#define IIO_ICRB_ECODE_AERR 3 /* Access error caused by IIO access */
+#define IIO_ICRB_ECODE_PWERR 4 /* Error on partial write */
+#define IIO_ICRB_ECODE_PRERR 5 /* Error on partial read */
+#define IIO_ICRB_ECODE_TOUT 6 /* CRB timeout before deallocating */
+#define IIO_ICRB_ECODE_XTERR 7 /* Incoming xtalk pkt had error bit */
+
+/*
+ * Values for field imsgtype
+ */
+#define IIO_ICRB_IMSGT_XTALK 0 /* Incoming message from Xtalk */
+#define IIO_ICRB_IMSGT_BTE 1 /* Incoming message from BTE */
+#define IIO_ICRB_IMSGT_SN1NET 2 /* Incoming message from SN1 net */
+#define IIO_ICRB_IMSGT_CRB 3 /* Incoming message from CRB ??? */
+
+/*
+ * values for field initiator.
+ */
+#define IIO_ICRB_INIT_XTALK 0 /* Message originated in xtalk */
+#define IIO_ICRB_INIT_BTE0 0x1 /* Message originated in BTE 0 */
+#define IIO_ICRB_INIT_SN1NET 0x2 /* Message originated in SN1net */
+#define IIO_ICRB_INIT_CRB 0x3 /* Message originated in CRB ? */
+#define IIO_ICRB_INIT_BTE1 0x5 /* Message originated in BTE 1 */
+
+/*
+ * Number of credits Hub widget has while sending req/response to
+ * xbow.
+ * Value of 3 is required by Xbow 1.1
+ * We may be able to increase this to 4 with Xbow 1.2.
+ */
+#define HUBII_XBOW_CREDIT 3
+#define HUBII_XBOW_REV2_CREDIT 4
+
+/*
+ * Number of credits that xtalk devices should use when communicating
+ * with a SHub (depth of SHub's queue).
+ */
+#define HUB_CREDIT 4
+
+/*
+ * Some IIO_PRB fields
+ */
+#define IIO_PRB_MULTI_ERR (1LL << 63)
+#define IIO_PRB_SPUR_RD (1LL << 51)
+#define IIO_PRB_SPUR_WR (1LL << 50)
+#define IIO_PRB_RD_TO (1LL << 49)
+#define IIO_PRB_ERROR (1LL << 48)
+
+/*************************************************************************
+
+ Some of the IIO field masks and shifts are defined here.
+ This is in order to maintain compatibility between SN0 and SN1 code
+
+**************************************************************************/
+
+/*
+ * ICMR register fields
+ * (Note: the IIO_ICMR_P_CNT and IIO_ICMR_PC_VLD from Hub are not
+ * present in SHub)
+ */
+
+#define IIO_ICMR_CRB_VLD_SHFT 20
+#define IIO_ICMR_CRB_VLD_MASK (0x7fffUL << IIO_ICMR_CRB_VLD_SHFT)
+
+#define IIO_ICMR_FC_CNT_SHFT 16
+#define IIO_ICMR_FC_CNT_MASK (0xf << IIO_ICMR_FC_CNT_SHFT)
+
+#define IIO_ICMR_C_CNT_SHFT 4
+#define IIO_ICMR_C_CNT_MASK (0xf << IIO_ICMR_C_CNT_SHFT)
+
+#define IIO_ICMR_PRECISE (1UL << 52)
+#define IIO_ICMR_CLR_RPPD (1UL << 13)
+#define IIO_ICMR_CLR_RQPD (1UL << 12)
+
+/*
+ * IIO PIO Deallocation register field masks : (IIO_IPDR)
+ * XXX present but not needed in bedrock? See the manual.
+ */
+#define IIO_IPDR_PND (1 << 4)
+
+/*
+ * IIO CRB deallocation register field masks: (IIO_ICDR)
+ */
+#define IIO_ICDR_PND (1 << 4)
+
+/*
+ * IO BTE Length/Status (IIO_IBLS) register bit field definitions
+ */
+#define IBLS_BUSY (0x1UL << 20)
+#define IBLS_ERROR_SHFT 16
+#define IBLS_ERROR (0x1UL << IBLS_ERROR_SHFT)
+#define IBLS_LENGTH_MASK 0xffff
+
+/*
+ * IO BTE Control/Terminate register (IBCT) register bit field definitions
+ */
+#define IBCT_POISON (0x1UL << 8)
+#define IBCT_NOTIFY (0x1UL << 4)
+#define IBCT_ZFIL_MODE (0x1UL << 0)
+
+/*
+ * IIO Incoming Error Packet Header (IIO_IIEPH1/IIO_IIEPH2)
+ */
+#define IIEPH1_VALID (1UL << 44)
+#define IIEPH1_OVERRUN (1UL << 40)
+#define IIEPH1_ERR_TYPE_SHFT 32
+#define IIEPH1_ERR_TYPE_MASK 0xf
+#define IIEPH1_SOURCE_SHFT 20
+#define IIEPH1_SOURCE_MASK 11
+#define IIEPH1_SUPPL_SHFT 8
+#define IIEPH1_SUPPL_MASK 11
+#define IIEPH1_CMD_SHFT 0
+#define IIEPH1_CMD_MASK 7
+
+#define IIEPH2_TAIL (1UL << 40)
+#define IIEPH2_ADDRESS_SHFT 0
+#define IIEPH2_ADDRESS_MASK 38
+
+#define IIEPH1_ERR_SHORT_REQ 2
+#define IIEPH1_ERR_SHORT_REPLY 3
+#define IIEPH1_ERR_LONG_REQ 4
+#define IIEPH1_ERR_LONG_REPLY 5
+
+/*
+ * IO Error Clear register bit field definitions
+ */
+#define IECLR_PI1_FWD_INT (1UL << 31) /* clear PI1_FORWARD_INT in iidsr */
+#define IECLR_PI0_FWD_INT (1UL << 30) /* clear PI0_FORWARD_INT in iidsr */
+#define IECLR_SPUR_RD_HDR (1UL << 29) /* clear valid bit in ixss reg */
+#define IECLR_BTE1 (1UL << 18) /* clear bte error 1 */
+#define IECLR_BTE0 (1UL << 17) /* clear bte error 0 */
+#define IECLR_CRAZY (1UL << 16) /* clear crazy bit in wstat reg */
+#define IECLR_PRB_F (1UL << 15) /* clear err bit in PRB_F reg */
+#define IECLR_PRB_E (1UL << 14) /* clear err bit in PRB_E reg */
+#define IECLR_PRB_D (1UL << 13) /* clear err bit in PRB_D reg */
+#define IECLR_PRB_C (1UL << 12) /* clear err bit in PRB_C reg */
+#define IECLR_PRB_B (1UL << 11) /* clear err bit in PRB_B reg */
+#define IECLR_PRB_A (1UL << 10) /* clear err bit in PRB_A reg */
+#define IECLR_PRB_9 (1UL << 9) /* clear err bit in PRB_9 reg */
+#define IECLR_PRB_8 (1UL << 8) /* clear err bit in PRB_8 reg */
+#define IECLR_PRB_0 (1UL << 0) /* clear err bit in PRB_0 reg */
+
+/*
+ * IIO CRB control register Fields: IIO_ICCR
+ */
+#define IIO_ICCR_PENDING (0x10000)
+#define IIO_ICCR_CMD_MASK (0xFF)
+#define IIO_ICCR_CMD_SHFT (7)
+#define IIO_ICCR_CMD_NOP (0x0) /* No Op */
+#define IIO_ICCR_CMD_WAKE (0x100) /* Reactivate CRB entry and process */
+#define IIO_ICCR_CMD_TIMEOUT (0x200) /* Make CRB timeout & mark invalid */
+#define IIO_ICCR_CMD_EJECT (0x400) /* Contents of entry written to memory
+ * via a WB
+ */
+#define IIO_ICCR_CMD_FLUSH (0x800)
+
+/*
+ *
+ * CRB Register description.
+ *
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ *
+ * Many of the fields in CRB are status bits used by hardware
+ * for implementation of the protocol. It's very dangerous to
+ * mess around with the CRB registers.
+ *
+ * It's OK to read the CRB registers and try to make sense out of the
+ * fields in CRB.
+ *
+ * Updating a CRB requires all activities in the Hub IIO to be
+ * quiesced; otherwise, a write to a CRB could corrupt other CRB
+ * entries. CRBs are here only as a back door peek at the shub IIO's
+ * status. Quiescing implies no DMAs and no PIOs, either directly
+ * from the cpu or from sn0net, which is not something that can be
+ * done easily. So, AVOID updating CRBs.
+ */
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Easy access macros for CRBs, all 5 registers (A-E)
+ */
+typedef ii_icrb0_a_u_t icrba_t;
+#define a_sidn ii_icrb0_a_fld_s.ia_sidn
+#define a_tnum ii_icrb0_a_fld_s.ia_tnum
+#define a_addr ii_icrb0_a_fld_s.ia_addr
+#define a_valid ii_icrb0_a_fld_s.ia_vld
+#define a_iow ii_icrb0_a_fld_s.ia_iow
+#define a_regvalue ii_icrb0_a_regval
+
+typedef ii_icrb0_b_u_t icrbb_t;
+#define b_use_old ii_icrb0_b_fld_s.ib_use_old
+#define b_imsgtype ii_icrb0_b_fld_s.ib_imsgtype
+#define b_imsg ii_icrb0_b_fld_s.ib_imsg
+#define b_initiator ii_icrb0_b_fld_s.ib_init
+#define b_exc ii_icrb0_b_fld_s.ib_exc
+#define b_ackcnt ii_icrb0_b_fld_s.ib_ack_cnt
+#define b_resp ii_icrb0_b_fld_s.ib_resp
+#define b_ack ii_icrb0_b_fld_s.ib_ack
+#define b_hold ii_icrb0_b_fld_s.ib_hold
+#define b_wb ii_icrb0_b_fld_s.ib_wb
+#define b_intvn ii_icrb0_b_fld_s.ib_intvn
+#define b_stall_ib ii_icrb0_b_fld_s.ib_stall_ib
+#define b_stall_int ii_icrb0_b_fld_s.ib_stall__intr
+#define b_stall_bte_0 ii_icrb0_b_fld_s.ib_stall__bte_0
+#define b_stall_bte_1 ii_icrb0_b_fld_s.ib_stall__bte_1
+#define b_error ii_icrb0_b_fld_s.ib_error
+#define b_ecode ii_icrb0_b_fld_s.ib_errcode
+#define b_lnetuce ii_icrb0_b_fld_s.ib_ln_uce
+#define b_mark ii_icrb0_b_fld_s.ib_mark
+#define b_xerr ii_icrb0_b_fld_s.ib_xt_err
+#define b_regvalue ii_icrb0_b_regval
+
+typedef ii_icrb0_c_u_t icrbc_t;
+#define c_suppl ii_icrb0_c_fld_s.ic_suppl
+#define c_barrop ii_icrb0_c_fld_s.ic_bo
+#define c_doresp ii_icrb0_c_fld_s.ic_resprqd
+#define c_gbr ii_icrb0_c_fld_s.ic_gbr
+#define c_btenum ii_icrb0_c_fld_s.ic_bte_num
+#define c_cohtrans ii_icrb0_c_fld_s.ic_ct
+#define c_xtsize ii_icrb0_c_fld_s.ic_size
+#define c_source ii_icrb0_c_fld_s.ic_source
+#define c_regvalue ii_icrb0_c_regval
+
+
+typedef ii_icrb0_d_u_t icrbd_t;
+#define d_sleep ii_icrb0_d_fld_s.id_sleep
+#define d_pricnt ii_icrb0_d_fld_s.id_pr_cnt
+#define d_pripsc ii_icrb0_d_fld_s.id_pr_psc
+#define d_bteop ii_icrb0_d_fld_s.id_bte_op
+#define d_bteaddr ii_icrb0_d_fld_s.id_pa_be /* id_pa_be fld has 2 names*/
+#define d_benable ii_icrb0_d_fld_s.id_pa_be /* id_pa_be fld has 2 names*/
+#define d_regvalue ii_icrb0_d_regval
+
+typedef ii_icrb0_e_u_t icrbe_t;
+#define icrbe_ctxtvld ii_icrb0_e_fld_s.ie_cvld
+#define icrbe_toutvld ii_icrb0_e_fld_s.ie_tvld
+#define icrbe_context ii_icrb0_e_fld_s.ie_context
+#define icrbe_timeout ii_icrb0_e_fld_s.ie_timeout
+#define e_regvalue ii_icrb0_e_regval
+
+#endif /* __ASSEMBLY__ */
+
+/* Number of widgets supported by shub */
+#define HUB_NUM_WIDGET 9
+#define HUB_WIDGET_ID_MIN 0x8
+#define HUB_WIDGET_ID_MAX 0xf
+
+#define HUB_WIDGET_PART_NUM 0xc120
+#define MAX_HUBS_PER_XBOW 2
+
+#ifndef __ASSEMBLY__
+/* A few more #defines for backwards compatibility */
+#define iprb_t ii_iprb0_u_t
+#define iprb_regval ii_iprb0_regval
+#define iprb_mult_err ii_iprb0_fld_s.i_mult_err
+#define iprb_spur_rd ii_iprb0_fld_s.i_spur_rd
+#define iprb_spur_wr ii_iprb0_fld_s.i_spur_wr
+#define iprb_rd_to ii_iprb0_fld_s.i_rd_to
+#define iprb_ovflow ii_iprb0_fld_s.i_of_cnt
+#define iprb_error ii_iprb0_fld_s.i_error
+#define iprb_ff ii_iprb0_fld_s.i_f
+#define iprb_mode ii_iprb0_fld_s.i_m
+#define iprb_bnakctr ii_iprb0_fld_s.i_nb
+#define iprb_anakctr ii_iprb0_fld_s.i_na
+#define iprb_xtalkctr ii_iprb0_fld_s.i_c
+#endif
+
+#define LNK_STAT_WORKING 0x2 /* LLP is working */
+
+#define IIO_WSTAT_ECRAZY (1ULL << 32) /* Hub gone crazy */
+#define IIO_WSTAT_TXRETRY (1ULL << 9) /* Hub Tx Retry timeout */
+#define IIO_WSTAT_TXRETRY_MASK (0x7F) /* should be 0xFF?? */
+#define IIO_WSTAT_TXRETRY_SHFT (16)
+#define IIO_WSTAT_TXRETRY_CNT(w) (((w) >> IIO_WSTAT_TXRETRY_SHFT) & \
+ IIO_WSTAT_TXRETRY_MASK)
+
+/* Number of II perf. counters we can multiplex at once */
+
+#define IO_PERF_SETS 32
+
+#ifdef __KERNEL__
+#include <asm/sn/dmamap.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/xtalk/xtalk.h>
+
+/* Bit for the widget in inbound access register */
+#define IIO_IIWA_WIDGET(_w) ((uint64_t)(1ULL << _w))
+/* Bit for the widget in outbound access register */
+#define IIO_IOWA_WIDGET(_w) ((uint64_t)(1ULL << _w))
+
+/* NOTE: The following define assumes that we are going to get
+ * widget numbers from 8 thru F and the device numbers within
+ * widget from 0 thru 7.
+ */
+#define IIO_IIDEM_WIDGETDEV_MASK(w, d) ((uint64_t)(1ULL << (8 * ((w) - 8) + (d))))
+
+/* IO Interrupt Destination Register */
+#define IIO_IIDSR_SENT_SHIFT 28
+#define IIO_IIDSR_SENT_MASK 0x30000000
+#define IIO_IIDSR_ENB_SHIFT 24
+#define IIO_IIDSR_ENB_MASK 0x01000000
+#define IIO_IIDSR_NODE_SHIFT 9
+#define IIO_IIDSR_NODE_MASK 0x000ff700
+#define IIO_IIDSR_PI_ID_SHIFT 8
+#define IIO_IIDSR_PI_ID_MASK 0x00000100
+#define IIO_IIDSR_LVL_SHIFT 0
+#define IIO_IIDSR_LVL_MASK 0x000000ff
+
+/* Xtalk timeout threshold register (IIO_IXTT) */
+#define IXTT_RRSP_TO_SHFT 55 /* read response timeout */
+#define IXTT_RRSP_TO_MASK (0x1FULL << IXTT_RRSP_TO_SHFT)
+#define IXTT_RRSP_PS_SHFT 32 /* read response TO prescaler */
+#define IXTT_RRSP_PS_MASK (0x7FFFFFULL << IXTT_RRSP_PS_SHFT)
+#define IXTT_TAIL_TO_SHFT 0 /* tail timeout counter threshold */
+#define IXTT_TAIL_TO_MASK (0x3FFFFFFULL << IXTT_TAIL_TO_SHFT)
+
+/*
+ * The IO LLP control status register and widget control register
+ */
+
+typedef union hubii_wcr_u {
+ uint64_t wcr_reg_value;
+ struct {
+ uint64_t wcr_widget_id: 4, /* Widget ID */
+ wcr_tag_mode: 1, /* Tag mode */
+ wcr_rsvd1: 8, /* Reserved */
+ wcr_xbar_crd: 3, /* LLP crossbar credit */
+ wcr_f_bad_pkt: 1, /* Force bad llp pkt enable */
+ wcr_dir_con: 1, /* widget direct connect */
+ wcr_e_thresh: 5, /* elasticity threshold */
+ wcr_rsvd: 41; /* unused */
+ } wcr_fields_s;
+} hubii_wcr_t;
+
+#define iwcr_dir_con wcr_fields_s.wcr_dir_con
+
+/* The structures below are defined to extract and modify the ii
+performance registers */
+
+/* io_perf_sel allows the caller to specify what tests will be
+ performed */
+
+typedef union io_perf_sel {
+ uint64_t perf_sel_reg;
+ struct {
+ uint64_t perf_ippr0 : 4,
+ perf_ippr1 : 4,
+ perf_icct : 8,
+ perf_rsvd : 48;
+ } perf_sel_bits;
+} io_perf_sel_t;
+
+/* io_perf_cnt is to extract the count from the shub registers. Due to
+ hardware problems there is only one counter, not two. */
+
+typedef union io_perf_cnt {
+ uint64_t perf_cnt;
+ struct {
+ uint64_t perf_cnt : 20,
+ perf_rsvd2 : 12,
+ perf_rsvd1 : 32;
+ } perf_cnt_bits;
+
+} io_perf_cnt_t;
+
+typedef union iprte_a {
+ shubreg_t entry;
+ struct {
+ shubreg_t i_rsvd_1 : 3;
+ shubreg_t i_addr : 38;
+ shubreg_t i_init : 3;
+ shubreg_t i_source : 8;
+ shubreg_t i_rsvd : 2;
+ shubreg_t i_widget : 4;
+ shubreg_t i_to_cnt : 5;
+ shubreg_t i_vld : 1;
+ } iprte_fields;
+} iprte_a_t;
+
+
+/* PIO MANAGEMENT */
+typedef struct hub_piomap_s *hub_piomap_t;
+
+extern hub_piomap_t
+hub_piomap_alloc(vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* map for this xtalk_addr range */
+ size_t byte_count,
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned flags); /* defined in sys/pio.h */
+
+extern void hub_piomap_free(hub_piomap_t hub_piomap);
+
+extern caddr_t
+hub_piomap_addr(hub_piomap_t hub_piomap, /* mapping resources */
+ iopaddr_t xtalk_addr, /* map for this xtalk addr */
+ size_t byte_count); /* map this many bytes */
+
+extern void
+hub_piomap_done(hub_piomap_t hub_piomap);
+
+extern caddr_t
+hub_piotrans_addr( vertex_hdl_t dev, /* translate to this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* Crosstalk address */
+ size_t byte_count, /* map this many bytes */
+ unsigned flags); /* (currently unused) */
+
+/* DMA MANAGEMENT */
+typedef struct hub_dmamap_s *hub_dmamap_t;
+
+extern hub_dmamap_t
+hub_dmamap_alloc( vertex_hdl_t dev, /* set up mappings for dev */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned flags); /* defined in dma.h */
+
+extern void
+hub_dmamap_free(hub_dmamap_t dmamap);
+
+extern iopaddr_t
+hub_dmamap_addr( hub_dmamap_t dmamap, /* use mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count); /* map this many bytes */
+
+extern void
+hub_dmamap_done( hub_dmamap_t dmamap); /* done w/ mapping resources */
+
+extern iopaddr_t
+hub_dmatrans_addr( vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned flags); /* defined in dma.h */
+
+extern void
+hub_dmamap_drain( hub_dmamap_t map);
+
+extern void
+hub_dmaaddr_drain( vertex_hdl_t vhdl,
+ paddr_t addr,
+ size_t bytes);
+
+
+/* INTERRUPT MANAGEMENT */
+typedef struct hub_intr_s *hub_intr_t;
+
+extern hub_intr_t
+hub_intr_alloc( vertex_hdl_t dev, /* which device */
+ device_desc_t dev_desc, /* device descriptor */
+ vertex_hdl_t owner_dev); /* owner of this interrupt */
+
+extern hub_intr_t
+hub_intr_alloc_nothd(vertex_hdl_t dev, /* which device */
+ device_desc_t dev_desc, /* device descriptor */
+ vertex_hdl_t owner_dev); /* owner of this interrupt */
+
+extern void
+hub_intr_free(hub_intr_t intr_hdl);
+
+extern int
+hub_intr_connect( hub_intr_t intr_hdl, /* xtalk intr resource hndl */
+ intr_func_t intr_func, /* xtalk intr handler */
+ void *intr_arg, /* arg to intr handler */
+ xtalk_intr_setfunc_t setfunc, /* func to set intr hw */
+ void *setfunc_arg); /* arg to setfunc */
+
+extern void
+hub_intr_disconnect(hub_intr_t intr_hdl);
+
+
+/* CONFIGURATION MANAGEMENT */
+
+extern void
+hub_provider_startup(vertex_hdl_t hub);
+
+extern void
+hub_provider_shutdown(vertex_hdl_t hub);
+
+#define HUB_PIO_CONVEYOR 0x1 /* PIO in conveyor belt mode */
+#define HUB_PIO_FIRE_N_FORGET 0x2 /* PIO in fire-and-forget mode */
+
+/* Flags that make sense to hub_widget_flags_set */
+#define HUB_WIDGET_FLAGS ( \
+ HUB_PIO_CONVEYOR | \
+ HUB_PIO_FIRE_N_FORGET \
+ )
+
+
+typedef int hub_widget_flags_t;
+
+/* Set the PIO mode for a widget. */
+extern int hub_widget_flags_set(nasid_t nasid,
+ xwidgetnum_t widget_num,
+ hub_widget_flags_t flags);
+
+/* Error Handling. */
+extern int hub_ioerror_handler(vertex_hdl_t, int, int, struct io_error_s *);
+extern int kl_ioerror_handler(cnodeid_t, cnodeid_t, cpuid_t,
+ int, paddr_t, caddr_t, ioerror_mode_t);
+#endif /* __KERNEL__ */
+#endif /* _ASM_IA64_SN_SN2_SHUBIO_H */
+
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 1992-1997,2001-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SN2_SLOTNUM_H
+#define _ASM_IA64_SN_SN2_SLOTNUM_H
+
+#define SLOTNUM_MAXLENGTH 16
+
+/*
+ * This file defines IO widget to slot/device assignments.
+ */
+
+
+/* This determines module to pnode mapping. */
+
+#define NODESLOTS_PER_MODULE 1
+#define NODESLOTS_PER_MODULE_SHFT 1
+
+#define SLOTNUM_NODE_CLASS 0x00 /* Node */
+#define SLOTNUM_ROUTER_CLASS 0x10 /* Router */
+#define SLOTNUM_XTALK_CLASS 0x20 /* Xtalk */
+#define SLOTNUM_MIDPLANE_CLASS 0x30 /* Midplane */
+#define SLOTNUM_XBOW_CLASS 0x40 /* Xbow */
+#define SLOTNUM_KNODE_CLASS 0x50 /* Kego node */
+#define SLOTNUM_PCI_CLASS 0x60 /* PCI widgets on XBridge */
+#define SLOTNUM_INVALID_CLASS 0xf0 /* Invalid */
+
+#define SLOTNUM_CLASS_MASK 0xf0
+#define SLOTNUM_SLOT_MASK 0x0f
+
+#define SLOTNUM_GETCLASS(_sn) ((_sn) & SLOTNUM_CLASS_MASK)
+#define SLOTNUM_GETSLOT(_sn) ((_sn) & SLOTNUM_SLOT_MASK)
+
+
+#endif /* _ASM_IA64_SN_SN2_SLOTNUM_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_SN2_SN_PRIVATE_H
+#define _ASM_IA64_SN_SN2_SN_PRIVATE_H
+
+#include <linux/wait.h>
+#include <asm/sn/nodepda.h>
+#include <asm/sn/io.h>
+#include <asm/sn/iograph.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/xtalk/xtalk_private.h>
+
+extern nasid_t master_nasid;
+
+/* promif.c */
+extern void he_arcs_set_vectors(void);
+extern void mem_init(void);
+extern void cpu_unenable(cpuid_t);
+extern nasid_t get_lowest_nasid(void);
+extern unsigned long get_master_bridge_base(void);
+extern int check_nasid_equiv(nasid_t, nasid_t);
+extern char get_console_pcislot(void);
+
+extern int is_master_baseio_nasid_widget(nasid_t test_nasid,
+ xwidgetnum_t test_wid);
+
+/* memsupport.c */
+extern void poison_state_alter_range(unsigned long start, int len, int poison);
+extern int memory_present(paddr_t);
+extern int memory_read_accessible(paddr_t);
+extern int memory_write_accessible(paddr_t);
+extern void memory_set_access(paddr_t, int, int);
+extern void show_dir_state(paddr_t, void (*)(char *, ...));
+extern void check_dir_state(nasid_t, int, void (*)(char *, ...));
+extern void set_dir_owner(paddr_t, int);
+extern void set_dir_state(paddr_t, int);
+extern void set_dir_state_POISONED(paddr_t);
+extern void set_dir_state_UNOWNED(paddr_t);
+extern int is_POISONED_dir_state(paddr_t);
+extern int is_UNOWNED_dir_state(paddr_t);
+#ifdef LATER
+extern void get_dir_ent(paddr_t paddr, int *state,
+ uint64_t * vec_ptr, hubreg_t * elo);
+#endif
+
+/* intr.c */
+extern void intr_unreserve_level(cpuid_t cpu, int level);
+extern int intr_connect_level(cpuid_t cpu, int bit);
+extern int intr_disconnect_level(cpuid_t cpu, int bit);
+extern cpuid_t intr_heuristic(vertex_hdl_t dev, int req_bit, int *resp_bit);
+extern void intr_block_bit(cpuid_t cpu, int bit);
+extern void intr_unblock_bit(cpuid_t cpu, int bit);
+extern void setrtvector(intr_func_t);
+extern void install_cpuintr(cpuid_t cpu);
+extern void install_dbgintr(cpuid_t cpu);
+extern void install_tlbintr(cpuid_t cpu);
+extern void hub_migrintr_init(cnodeid_t /*cnode */ );
+extern int cause_intr_connect(int level, intr_func_t handler,
+ unsigned int intr_spl_mask);
+extern int cause_intr_disconnect(int level);
+extern void intr_dumpvec(cnodeid_t cnode, void (*pf) (char *, ...));
+
+/* error_dump.c */
+extern char *hub_rrb_err_type[];
+extern char *hub_wrb_err_type[];
+
+void nmi_dump(void);
+void install_cpu_nmi_handler(int slice);
+
+/* klclock.c */
+extern void hub_rtc_init(cnodeid_t);
+
+/* bte.c */
+void bte_lateinit(void);
+void bte_wait_for_xfer_completion(void *);
+
+/* klgraph.c */
+void klhwg_add_all_nodes(vertex_hdl_t);
+void klhwg_add_all_modules(vertex_hdl_t);
+
+/* klidbg.c */
+void install_klidbg_functions(void);
+
+/* klnuma.c */
+extern void replicate_kernel_text(int numnodes);
+extern unsigned long get_freemem_start(cnodeid_t cnode);
+extern void setup_replication_mask(int maxnodes);
+
+/* init.c */
+extern cnodeid_t get_compact_nodeid(void); /* get compact node id */
+extern void init_platform_nodepda(nodepda_t * npda, cnodeid_t node);
+extern int is_fine_dirmode(void);
+extern void update_node_information(cnodeid_t);
+
+/* shubio.c */
+extern void hubio_init(void);
+extern void hub_merge_clean(nasid_t nasid);
+extern void hub_set_piomode(nasid_t nasid, int conveyor);
+
+/* shuberror.c */
+extern void hub_error_init(cnodeid_t);
+extern void dump_error_spool(cpuid_t cpu, void (*pf) (char *, ...));
+extern void hubni_error_handler(char *, int);
+extern int check_ni_errors(void);
+
+/* Used for debugger to signal upper software a breakpoint has taken place */
+
+extern void *debugger_update;
+extern unsigned long debugger_stopped;
+
+/*
+ * piomap, created by shub_pio_alloc.
+ * xtalk_info MUST BE FIRST, since this structure is cast to a
+ * xtalk_piomap_s by generic xtalk routines.
+ */
+struct hub_piomap_s {
+ struct xtalk_piomap_s hpio_xtalk_info; /* standard crosstalk pio info */
+ vertex_hdl_t hpio_hub; /* which shub's mapping registers are set up */
+ short hpio_holdcnt; /* count of current users of bigwin mapping */
+ char hpio_bigwin_num; /* if big window map, which one */
+ int hpio_flags; /* defined below */
+};
+/* hub_piomap flags */
+#define HUB_PIOMAP_IS_VALID 0x1
+#define HUB_PIOMAP_IS_BIGWINDOW 0x2
+#define HUB_PIOMAP_IS_FIXED 0x4
+
+#define hub_piomap_xt_piomap(hp) (&hp->hpio_xtalk_info)
+#define hub_piomap_hub_v(hp) (hp->hpio_hub)
+#define hub_piomap_winnum(hp) (hp->hpio_bigwin_num)
+
+/*
+ * dmamap, created by shub_pio_alloc.
+ * xtalk_info MUST BE FIRST, since this structure is cast to a
+ * xtalk_dmamap_s by generic xtalk routines.
+ */
+struct hub_dmamap_s {
+ struct xtalk_dmamap_s hdma_xtalk_info; /* standard crosstalk dma info */
+ vertex_hdl_t hdma_hub; /* which shub we go through */
+ int hdma_flags; /* defined below */
+};
+/* shub_dmamap flags */
+#define HUB_DMAMAP_IS_VALID 0x1
+#define HUB_DMAMAP_USED 0x2
+#define HUB_DMAMAP_IS_FIXED 0x4
+
+/*
+ * interrupt handle, created by shub_intr_alloc.
+ * xtalk_info MUST BE FIRST, since this structure is cast to a
+ * xtalk_intr_s by generic xtalk routines.
+ */
+struct hub_intr_s {
+ struct xtalk_intr_s i_xtalk_info; /* standard crosstalk intr info */
+ cpuid_t i_cpuid; /* which cpu */
+ int i_bit; /* which bit */
+ int i_flags;
+};
+/* flag values */
+#define HUB_INTR_IS_ALLOCED 0x1 /* for debug: allocated */
+#define HUB_INTR_IS_CONNECTED 0x4 /* for debug: connected to a software driver */
+
+typedef struct hubinfo_s {
+ nodepda_t *h_nodepda; /* pointer to node's private data area */
+ cnodeid_t h_cnodeid; /* compact nodeid */
+ nasid_t h_nasid; /* nasid */
+
+ /* structures for PIO management */
+ xwidgetnum_t h_widgetid; /* my widget # (as viewed from xbow) */
+ struct hub_piomap_s h_small_window_piomap[HUB_WIDGET_ID_MAX + 1];
+ wait_queue_head_t h_bwwait; /* wait for big window to free */
+ spinlock_t h_bwlock; /* guard big window piomap's */
+ spinlock_t h_crblock; /* guard CRB error handling */
+ int h_num_big_window_fixed; /* count number of FIXED maps */
+ struct hub_piomap_s h_big_window_piomap[HUB_NUM_BIG_WINDOW];
+ hub_intr_t hub_ii_errintr;
+} *hubinfo_t;
+
+#define hubinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \
+ (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t *)infoptr))
+
+#define hubinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \
+ (vhdl, INFO_LBL_NODE_INFO, (arbitrary_info_t)infoptr)
+
+#define hubinfo_to_hubv(hinfo, hub_v) (hinfo->h_nodepda->node_vertex)
+
+/*
+ * Hub info PIO map access functions.
+ */
+#define hubinfo_bwin_piomap_get(hinfo, win) \
+ (&hinfo->h_big_window_piomap[win])
+#define hubinfo_swin_piomap_get(hinfo, win) \
+ (&hinfo->h_small_window_piomap[win])
+
+/* cpu-specific information stored under INFO_LBL_CPU_INFO */
+typedef struct cpuinfo_s {
+ cpuid_t ci_cpuid; /* CPU ID */
+} *cpuinfo_t;
+
+#define cpuinfo_get(vhdl, infoptr) ((void)hwgraph_info_get_LBL \
+ (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t *)infoptr))
+
+#define cpuinfo_set(vhdl, infoptr) (void)hwgraph_info_add_LBL \
+ (vhdl, INFO_LBL_CPU_INFO, (arbitrary_info_t)infoptr)
+
+/* Special initialization function for xswitch vertices created during startup. */
+extern void xswitch_vertex_init(vertex_hdl_t xswitch);
+
+extern xtalk_provider_t hub_provider;
+extern int numionodes;
+
+/* du.c */
+int ducons_write(char *buf, int len);
+
+/* memerror.c */
+
+extern void install_eccintr(cpuid_t cpu);
+extern void memerror_get_stats(cnodeid_t cnode,
+ int *bank_stats, int *bank_stats_max);
+extern void probe_md_errors(nasid_t);
+/* sysctlr.c */
+extern void sysctlr_init(void);
+extern void sysctlr_power_off(int sdonly);
+extern void sysctlr_keepalive(void);
+
+#define valid_cpuid(_x) (((_x) >= 0) && ((_x) < maxcpus))
+
+/* Useful definitions to get the memory dimm given a physical
+ * address.
+ */
+#define paddr_dimm(_pa) ((_pa & MD_BANK_MASK) >> MD_BANK_SHFT)
+#define paddr_cnode(_pa) (nasid_to_cnodeid(NASID_GET(_pa)))
+extern void membank_pathname_get(paddr_t, char *);
+
+extern void crbx(nasid_t nasid, void (*pf) (char *, ...));
+void bootstrap(void);
+
+/* sndrv.c */
+extern int sndrv_attach(vertex_hdl_t vertex);
+
+#endif /* _ASM_IA64_SN_SN2_SN_PRIVATE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_SN_PRIVATE_H
+#define _ASM_IA64_SN_SN_PRIVATE_H
+
+#include <asm/sn/sn2/sn_private.h>
+
+#endif /* _ASM_IA64_SN_SN_PRIVATE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_VECTOR_H
+#define _ASM_IA64_SN_VECTOR_H
+
+#define NET_VEC_NULL ((net_vec_t) 0)
+#define NET_VEC_BAD ((net_vec_t) -1)
+
+#define VEC_POLLS_W 128 /* Polls before write times out */
+#define VEC_POLLS_R 128 /* Polls before read times out */
+#define VEC_POLLS_X 128 /* Polls before exch times out */
+
+#define VEC_RETRIES_W 8 /* Retries before write fails */
+#define VEC_RETRIES_R 8 /* Retries before read fails */
+#define VEC_RETRIES_X 4 /* Retries before exch fails */
+
+#define NET_ERROR_NONE 0 /* No error */
+#define NET_ERROR_HARDWARE (-1) /* Hardware error */
+#define NET_ERROR_OVERRUN (-2) /* Extra response(s) */
+#define NET_ERROR_REPLY (-3) /* Reply parms mismatch */
+#define NET_ERROR_ADDRESS (-4) /* Addr error response */
+#define NET_ERROR_COMMAND (-5) /* Cmd error response */
+#define NET_ERROR_PROT (-6) /* Prot error response */
+#define NET_ERROR_TIMEOUT (-7) /* Too many retries */
+#define NET_ERROR_VECTOR (-8) /* Invalid vector/path */
+#define NET_ERROR_ROUTERLOCK (-9) /* Timeout locking rtr */
+#define NET_ERROR_INVAL (-10) /* Invalid vector request */
+
+#ifndef __ASSEMBLY__
+#include <linux/types.h>
+#include <asm/sn/types.h>
+
+typedef uint64_t net_reg_t;
+typedef uint64_t net_vec_t;
+
+int vector_write(net_vec_t dest,
+ int write_id, int address,
+ uint64_t value);
+
+int vector_read(net_vec_t dest,
+ int write_id, int address,
+ uint64_t *value);
+
+int vector_write_node(net_vec_t dest, nasid_t nasid,
+ int write_id, int address,
+ uint64_t value);
+
+int vector_read_node(net_vec_t dest, nasid_t nasid,
+ int write_id, int address,
+ uint64_t *value);
+
+int vector_length(net_vec_t vec);
+net_vec_t vector_get(net_vec_t vec, int n);
+net_vec_t vector_prefix(net_vec_t vec, int n);
+net_vec_t vector_modify(net_vec_t entry, int n, int route);
+net_vec_t vector_reverse(net_vec_t vec);
+net_vec_t vector_concat(net_vec_t vec1, net_vec_t vec2);
+
+char *net_errmsg(int);
+
+#ifndef _STANDALONE
+int hub_vector_write(cnodeid_t cnode, net_vec_t vector, int writeid,
+ int addr, net_reg_t value);
+int hub_vector_read(cnodeid_t cnode, net_vec_t vector, int writeid,
+ int addr, net_reg_t *value);
+#endif
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_SN_VECTOR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XBOW_H
+#define _ASM_IA64_SN_XTALK_XBOW_H
+
+/*
+ * xbow.h - header file for crossbow chip and xbow section of xbridge
+ */
+
+#include <asm/types.h>
+#include <asm/sn/xtalk/xtalk.h>
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/xtalk/xswitch.h>
+#ifndef __ASSEMBLY__
+#include <asm/sn/xtalk/xbow_info.h>
+#endif
+
+
+#define XBOW_DRV_PREFIX "xbow_"
+
+/* The crossbow chip supports eight 8/16-bit I/O ports, numbered 0x8
+ * through 0xf. It also implements the widget 0 address space and
+ * register set.
+ */
+#define XBOW_PORT_0 0x0
+#define XBOW_PORT_8 0x8
+#define XBOW_PORT_9 0x9
+#define XBOW_PORT_A 0xa
+#define XBOW_PORT_B 0xb
+#define XBOW_PORT_C 0xc
+#define XBOW_PORT_D 0xd
+#define XBOW_PORT_E 0xe
+#define XBOW_PORT_F 0xf
+
+#define MAX_XBOW_PORTS 8 /* number of ports on xbow chip */
+#define BASE_XBOW_PORT XBOW_PORT_8 /* Lowest external port */
+#define MAX_PORT_NUM 0x10 /* maximum port number + 1 */
+#define XBOW_WIDGET_ID 0 /* xbow is itself widget 0 */
+
+#define XBOW_HUBLINK_LOW 0xa
+#define XBOW_HUBLINK_HIGH 0xb
+
+#define XBOW_PEER_LINK(link) (link == XBOW_HUBLINK_LOW) ? \
+ XBOW_HUBLINK_HIGH : XBOW_HUBLINK_LOW
+
+
+#define XBOW_CREDIT 4
+
+#define MAX_XBOW_NAME 16
+
+#ifndef __ASSEMBLY__
+typedef uint32_t xbowreg_t;
+
+/* Register set for each xbow link */
+typedef volatile struct xb_linkregs_s {
+/*
+ * we access these through synergy unswizzled space, so the address
+ * gets twiddled (i.e. references to 0x4 actually go to 0x0 and vice versa).
+ * That's why we put the register first and filler second.
+ */
+ xbowreg_t link_ibf;
+ xbowreg_t filler0; /* filler for proper alignment */
+ xbowreg_t link_control;
+ xbowreg_t filler1;
+ xbowreg_t link_status;
+ xbowreg_t filler2;
+ xbowreg_t link_arb_upper;
+ xbowreg_t filler3;
+ xbowreg_t link_arb_lower;
+ xbowreg_t filler4;
+ xbowreg_t link_status_clr;
+ xbowreg_t filler5;
+ xbowreg_t link_reset;
+ xbowreg_t filler6;
+ xbowreg_t link_aux_status;
+ xbowreg_t filler7;
+} xb_linkregs_t;
+
+typedef volatile struct xbow_s {
+ /* standard widget configuration 0x000000-0x000057 */
+ widget_cfg_t xb_widget; /* 0x000000 */
+
+ /* helper fieldnames for accessing bridge widget */
+
+#define xb_wid_id xb_widget.w_id
+#define xb_wid_stat xb_widget.w_status
+#define xb_wid_err_upper xb_widget.w_err_upper_addr
+#define xb_wid_err_lower xb_widget.w_err_lower_addr
+#define xb_wid_control xb_widget.w_control
+#define xb_wid_req_timeout xb_widget.w_req_timeout
+#define xb_wid_int_upper xb_widget.w_intdest_upper_addr
+#define xb_wid_int_lower xb_widget.w_intdest_lower_addr
+#define xb_wid_err_cmdword xb_widget.w_err_cmd_word
+#define xb_wid_llp xb_widget.w_llp_cfg
+#define xb_wid_stat_clr xb_widget.w_tflush
+
+/*
+ * we access these through synergy unswizzled space, so the address
+ * gets twiddled (i.e. references to 0x4 actually go to 0x0 and vice versa).
+ * That's why we put the register first and filler second.
+ */
+ /* xbow-specific widget configuration 0x000058-0x0000FF */
+ xbowreg_t xb_wid_arb_reload; /* 0x00005C */
+ xbowreg_t _pad_000058;
+ xbowreg_t xb_perf_ctr_a; /* 0x000064 */
+ xbowreg_t _pad_000060;
+ xbowreg_t xb_perf_ctr_b; /* 0x00006c */
+ xbowreg_t _pad_000068;
+ xbowreg_t xb_nic; /* 0x000074 */
+ xbowreg_t _pad_000070;
+
+ /* Xbridge only */
+ xbowreg_t xb_w0_rst_fnc; /* 0x00007C */
+ xbowreg_t _pad_000078;
+ xbowreg_t xb_l8_rst_fnc; /* 0x000084 */
+ xbowreg_t _pad_000080;
+ xbowreg_t xb_l9_rst_fnc; /* 0x00008c */
+ xbowreg_t _pad_000088;
+ xbowreg_t xb_la_rst_fnc; /* 0x000094 */
+ xbowreg_t _pad_000090;
+ xbowreg_t xb_lb_rst_fnc; /* 0x00009c */
+ xbowreg_t _pad_000098;
+ xbowreg_t xb_lc_rst_fnc; /* 0x0000a4 */
+ xbowreg_t _pad_0000a0;
+ xbowreg_t xb_ld_rst_fnc; /* 0x0000ac */
+ xbowreg_t _pad_0000a8;
+ xbowreg_t xb_le_rst_fnc; /* 0x0000b4 */
+ xbowreg_t _pad_0000b0;
+ xbowreg_t xb_lf_rst_fnc; /* 0x0000bc */
+ xbowreg_t _pad_0000b8;
+ xbowreg_t xb_lock; /* 0x0000c4 */
+ xbowreg_t _pad_0000c0;
+ xbowreg_t xb_lock_clr; /* 0x0000cc */
+ xbowreg_t _pad_0000c8;
+ /* end of Xbridge only */
+ xbowreg_t _pad_0000d0[12];
+
+ /* Link Specific Registers, port 8..15 0x000100-0x000300 */
+ xb_linkregs_t xb_link_raw[MAX_XBOW_PORTS];
+#define xb_link(p) xb_link_raw[(p) & (MAX_XBOW_PORTS - 1)]
+
+} xbow_t;
+
+/* Configuration structure which describes each xbow link */
+typedef struct xbow_cfg_s {
+ int xb_port; /* port number (0-15) */
+ int xb_flags; /* port software flags */
+ short xb_shift; /* shift for arb reg (mask is 0xff) */
+ short xb_ul; /* upper or lower arb reg */
+ int xb_pad; /* use this later (pad to ptr align) */
+ xb_linkregs_t *xb_linkregs; /* pointer to link registers */
+ widget_cfg_t *xb_widget; /* pointer to widget registers */
+ char xb_name[MAX_XBOW_NAME]; /* port name */
+ xbowreg_t xb_sh_arb_upper; /* shadow upper arb register */
+ xbowreg_t xb_sh_arb_lower; /* shadow lower arb register */
+} xbow_cfg_t;
+
+#define XB_FLAGS_EXISTS 0x1 /* device exists */
+#define XB_FLAGS_MASTER 0x2
+#define XB_FLAGS_SLAVE 0x0
+#define XB_FLAGS_GBR 0x4
+#define XB_FLAGS_16BIT 0x8
+#define XB_FLAGS_8BIT 0x0
+
+/* get xbow config information for port p */
+#define XB_CONFIG(p) xbow_cfg[xb_ports[p]]
+
+/* is widget port number valid? (based on version 7.0 of xbow spec) */
+#define XBOW_WIDGET_IS_VALID(wid) ((wid) >= XBOW_PORT_8 && (wid) <= XBOW_PORT_F)
+
+/* whether to use upper or lower arbitration register, given source widget id */
+#define XBOW_ARB_IS_UPPER(wid) ((wid) >= XBOW_PORT_8 && (wid) <= XBOW_PORT_B)
+#define XBOW_ARB_IS_LOWER(wid) ((wid) >= XBOW_PORT_C && (wid) <= XBOW_PORT_F)
+
+/* offset of arbitration register, given source widget id */
+#define XBOW_ARB_OFF(wid) (XBOW_ARB_IS_UPPER(wid) ? 0x1c : 0x24)
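As a standalone sketch (not part of the header), the upper/lower arbitration-register selection can be checked in plain C. The XBOW_PORT_8..XBOW_PORT_F values are assumed here to be 0x8..0xf, consistent with the "port 8..15" comments elsewhere in this header; the 0x1c/0x24 offsets agree with the XB_LINK_ARB_UPPER/XB_LINK_ARB_LOWER link-register offsets defined further down.

```c
#include <assert.h>

/* Assumed port numbering (0x8..0xf), matching the "port 8..15"
 * comments in this header. */
#define XBOW_PORT_8 0x8
#define XBOW_PORT_B 0xb
#define XBOW_PORT_C 0xc
#define XBOW_PORT_F 0xf

/* Copies of the header's arbitration-register selection macros. */
#define XBOW_ARB_IS_UPPER(wid) ((wid) >= XBOW_PORT_8 && (wid) <= XBOW_PORT_B)
#define XBOW_ARB_IS_LOWER(wid) ((wid) >= XBOW_PORT_C && (wid) <= XBOW_PORT_F)
#define XBOW_ARB_OFF(wid)      (XBOW_ARB_IS_UPPER(wid) ? 0x1c : 0x24)
```

Ports 0x8..0xb land in the upper register at widget-local offset 0x1c, ports 0xc..0xf in the lower register at 0x24.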
+
+#endif /* __ASSEMBLY__ */
+
+#define XBOW_WID_ID WIDGET_ID
+#define XBOW_WID_STAT WIDGET_STATUS
+#define XBOW_WID_ERR_UPPER WIDGET_ERR_UPPER_ADDR
+#define XBOW_WID_ERR_LOWER WIDGET_ERR_LOWER_ADDR
+#define XBOW_WID_CONTROL WIDGET_CONTROL
+#define XBOW_WID_REQ_TO WIDGET_REQ_TIMEOUT
+#define XBOW_WID_INT_UPPER WIDGET_INTDEST_UPPER_ADDR
+#define XBOW_WID_INT_LOWER WIDGET_INTDEST_LOWER_ADDR
+#define XBOW_WID_ERR_CMDWORD WIDGET_ERR_CMD_WORD
+#define XBOW_WID_LLP WIDGET_LLP_CFG
+#define XBOW_WID_STAT_CLR WIDGET_TFLUSH
+#define XBOW_WID_ARB_RELOAD 0x5c
+#define XBOW_WID_PERF_CTR_A 0x64
+#define XBOW_WID_PERF_CTR_B 0x6c
+#define XBOW_WID_NIC 0x74
+
+/* Xbridge only */
+#define XBOW_W0_RST_FNC 0x00007C
+#define XBOW_L8_RST_FNC 0x000084
+#define XBOW_L9_RST_FNC 0x00008c
+#define XBOW_LA_RST_FNC 0x000094
+#define XBOW_LB_RST_FNC 0x00009c
+#define XBOW_LC_RST_FNC 0x0000a4
+#define XBOW_LD_RST_FNC 0x0000ac
+#define XBOW_LE_RST_FNC 0x0000b4
+#define XBOW_LF_RST_FNC 0x0000bc
+#define XBOW_RESET_FENCE(x) (((x) > 7 && (x) < 16) ? \
+ (XBOW_W0_RST_FNC + ((x) - 7) * 8) : \
+ ((x) == 0) ? XBOW_W0_RST_FNC : 0)
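The reset-fence offsets step by 8 because each link's register is followed by a filler word (see the xbow_s layout above). A quick standalone sanity sketch, using a fully parenthesized local copy of the macro so it is safe in any expression context:

```c
#include <assert.h>

#define XBOW_W0_RST_FNC 0x00007C
#define XBOW_L8_RST_FNC 0x000084
#define XBOW_LF_RST_FNC 0x0000bc

/* Local copy of XBOW_RESET_FENCE(), fully parenthesized for
 * standalone use: widgets 8..15 map to the per-link registers,
 * widget 0 maps to the widget-0 register, anything else to 0. */
#define XBOW_RESET_FENCE(x) (((x) > 7 && (x) < 16) ? \
        (XBOW_W0_RST_FNC + ((x) - 7) * 8) : \
        ((x) == 0) ? XBOW_W0_RST_FNC : 0)
```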
+#define XBOW_LOCK 0x0000c4
+#define XBOW_LOCK_CLR 0x0000cc
+/* End of Xbridge only */
+
+/* used only in ide, but defined here within the reserved portion */
+/* of the widget0 address space (before 0xf4) */
+#define XBOW_WID_UNDEF 0xe4
+
+/* xbow link register set base, legal value for x is 0x8..0xf */
+#define XB_LINK_BASE 0x100
+#define XB_LINK_OFFSET 0x40
+#define XB_LINK_REG_BASE(x) (XB_LINK_BASE + ((x) & (MAX_XBOW_PORTS - 1)) * XB_LINK_OFFSET)
+
+#define XB_LINK_IBUF_FLUSH(x) (XB_LINK_REG_BASE(x) + 0x4)
+#define XB_LINK_CTRL(x) (XB_LINK_REG_BASE(x) + 0xc)
+#define XB_LINK_STATUS(x) (XB_LINK_REG_BASE(x) + 0x14)
+#define XB_LINK_ARB_UPPER(x) (XB_LINK_REG_BASE(x) + 0x1c)
+#define XB_LINK_ARB_LOWER(x) (XB_LINK_REG_BASE(x) + 0x24)
+#define XB_LINK_STATUS_CLR(x) (XB_LINK_REG_BASE(x) + 0x2c)
+#define XB_LINK_RESET(x) (XB_LINK_REG_BASE(x) + 0x34)
+#define XB_LINK_AUX_STATUS(x) (XB_LINK_REG_BASE(x) + 0x3c)
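The link register set occupies 0x40 bytes per port starting at 0x100, which is how the macros above reach the "0x000100-0x000300" range noted in the xbow_s layout. A standalone sketch (MAX_XBOW_PORTS is assumed to be 8 here, i.e. ports 0x8..0xf, as implied by the `& (MAX_XBOW_PORTS - 1)` folding):

```c
#include <assert.h>

#define MAX_XBOW_PORTS 8    /* assumed: link ports 0x8..0xf */
#define XB_LINK_BASE   0x100
#define XB_LINK_OFFSET 0x40

/* Copies of the header's link-register addressing macros. */
#define XB_LINK_REG_BASE(x)   (XB_LINK_BASE + ((x) & (MAX_XBOW_PORTS - 1)) * XB_LINK_OFFSET)
#define XB_LINK_CTRL(x)       (XB_LINK_REG_BASE(x) + 0xc)
#define XB_LINK_AUX_STATUS(x) (XB_LINK_REG_BASE(x) + 0x3c)
```

Port 0x8's set starts at 0x100 and port 0xf's last register (aux status) sits at 0x2fc, just inside the documented 0x300 bound.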
+
+/* link_control(x) */
+#define XB_CTRL_LINKALIVE_IE 0x80000000 /* link comes alive */
+ /* reserved: 0x40000000 */
+#define XB_CTRL_PERF_CTR_MODE_MSK 0x30000000 /* perf counter mode */
+#define XB_CTRL_IBUF_LEVEL_MSK 0x0e000000 /* input packet buffer level */
+#define XB_CTRL_8BIT_MODE 0x01000000 /* force link into 8 bit mode */
+#define XB_CTRL_BAD_LLP_PKT 0x00800000 /* force bad LLP packet */
+#define XB_CTRL_WIDGET_CR_MSK 0x007c0000 /* LLP widget credit mask */
+#define XB_CTRL_WIDGET_CR_SHFT 18 /* LLP widget credit shift */
+#define XB_CTRL_ILLEGAL_DST_IE 0x00020000 /* illegal destination */
+#define XB_CTRL_OALLOC_IBUF_IE 0x00010000 /* overallocated input buffer */
+ /* reserved: 0x0000fe00 */
+#define XB_CTRL_BNDWDTH_ALLOC_IE 0x00000100 /* bandwidth alloc */
+#define XB_CTRL_RCV_CNT_OFLOW_IE 0x00000080 /* rcv retry overflow */
+#define XB_CTRL_XMT_CNT_OFLOW_IE 0x00000040 /* xmt retry overflow */
+#define XB_CTRL_XMT_MAX_RTRY_IE 0x00000020 /* max transmit retry */
+#define XB_CTRL_RCV_IE 0x00000010 /* receive */
+#define XB_CTRL_XMT_RTRY_IE 0x00000008 /* transmit retry */
+ /* reserved: 0x00000004 */
+#define XB_CTRL_MAXREQ_TOUT_IE 0x00000002 /* maximum request timeout */
+#define XB_CTRL_SRC_TOUT_IE 0x00000001 /* source timeout */
+
+/* link_status(x) */
+#define XB_STAT_LINKALIVE XB_CTRL_LINKALIVE_IE
+ /* reserved: 0x7ff80000 */
+#define XB_STAT_MULTI_ERR 0x00040000 /* multi error */
+#define XB_STAT_ILLEGAL_DST_ERR XB_CTRL_ILLEGAL_DST_IE
+#define XB_STAT_OALLOC_IBUF_ERR XB_CTRL_OALLOC_IBUF_IE
+#define XB_STAT_BNDWDTH_ALLOC_ID_MSK 0x0000ff00 /* port bitmask */
+#define XB_STAT_RCV_CNT_OFLOW_ERR XB_CTRL_RCV_CNT_OFLOW_IE
+#define XB_STAT_XMT_CNT_OFLOW_ERR XB_CTRL_XMT_CNT_OFLOW_IE
+#define XB_STAT_XMT_MAX_RTRY_ERR XB_CTRL_XMT_MAX_RTRY_IE
+#define XB_STAT_RCV_ERR XB_CTRL_RCV_IE
+#define XB_STAT_XMT_RTRY_ERR XB_CTRL_XMT_RTRY_IE
+ /* reserved: 0x00000004 */
+#define XB_STAT_MAXREQ_TOUT_ERR XB_CTRL_MAXREQ_TOUT_IE
+#define XB_STAT_SRC_TOUT_ERR XB_CTRL_SRC_TOUT_IE
+
+/* link_aux_status(x) */
+#define XB_AUX_STAT_RCV_CNT 0xff000000
+#define XB_AUX_STAT_XMT_CNT 0x00ff0000
+#define XB_AUX_STAT_TOUT_DST 0x0000ff00
+#define XB_AUX_LINKFAIL_RST_BAD 0x00000040
+#define XB_AUX_STAT_PRESENT 0x00000020
+#define XB_AUX_STAT_PORT_WIDTH 0x00000010
+ /* reserved: 0x0000000f */
+
+/*
+ * link_arb_upper/link_arb_lower(x), (reg) should be the link_arb_upper
+ * register if (x) is 0x8..0xb, link_arb_lower if (x) is 0xc..0xf
+ */
+#define XB_ARB_GBR_MSK 0x1f
+#define XB_ARB_RR_MSK 0x7
+#define XB_ARB_GBR_SHFT(x) (((x) & 0x3) * 8)
+#define XB_ARB_RR_SHFT(x) (((x) & 0x3) * 8 + 5)
+#define XB_ARB_GBR_CNT(reg,x) ((reg) >> XB_ARB_GBR_SHFT(x) & XB_ARB_GBR_MSK)
+#define XB_ARB_RR_CNT(reg,x) ((reg) >> XB_ARB_RR_SHFT(x) & XB_ARB_RR_MSK)
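Each arbitration register packs an 8-bit lane per widget: a 5-bit GBR count at bit 0 of the lane and a 3-bit RR count at bit 5. A standalone sketch; `xb_arb_pack()` is a hypothetical helper (not in this header) used only to exercise the extraction macros:

```c
#include <assert.h>
#include <stdint.h>

#define XB_ARB_GBR_MSK 0x1f
#define XB_ARB_RR_MSK  0x7
#define XB_ARB_GBR_SHFT(x) (((x) & 0x3) * 8)
#define XB_ARB_RR_SHFT(x)  (((x) & 0x3) * 8 + 5)
#define XB_ARB_GBR_CNT(reg,x) ((reg) >> XB_ARB_GBR_SHFT(x) & XB_ARB_GBR_MSK)
#define XB_ARB_RR_CNT(reg,x)  ((reg) >> XB_ARB_RR_SHFT(x) & XB_ARB_RR_MSK)

/* Hypothetical helper: write GBR/RR counts for one widget into an
 * arb register image, leaving the other widgets' lanes untouched. */
static uint32_t xb_arb_pack(uint32_t reg, int wid, uint32_t gbr, uint32_t rr)
{
	reg &= ~((uint32_t)XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(wid));
	reg &= ~((uint32_t)XB_ARB_RR_MSK << XB_ARB_RR_SHFT(wid));
	return reg | (gbr << XB_ARB_GBR_SHFT(wid)) | (rr << XB_ARB_RR_SHFT(wid));
}
```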
+
+/* XBOW_WID_STAT */
+#define XB_WID_STAT_LINK_INTR_SHFT (24)
+#define XB_WID_STAT_LINK_INTR_MASK (0xFF << XB_WID_STAT_LINK_INTR_SHFT)
+#define XB_WID_STAT_LINK_INTR(x) (0x1 << (((x)&7) + XB_WID_STAT_LINK_INTR_SHFT))
+#define XB_WID_STAT_WIDGET0_INTR 0x00800000
+#define XB_WID_STAT_SRCID_MASK 0x000003c0 /* Xbridge only */
+#define XB_WID_STAT_REG_ACC_ERR 0x00000020
+#define XB_WID_STAT_RECV_TOUT 0x00000010 /* Xbridge only */
+#define XB_WID_STAT_ARB_TOUT 0x00000008 /* Xbridge only */
+#define XB_WID_STAT_XTALK_ERR 0x00000004
+#define XB_WID_STAT_DST_TOUT 0x00000002 /* Xbridge only */
+#define XB_WID_STAT_MULTI_ERR 0x00000001
+
+#define XB_WID_STAT_SRCID_SHFT 6
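The widget status register carries one interrupt bit per link port in bits 24..31, with the port number folded by `& 7` so ports 0x8..0xf map onto those eight bits. A standalone sketch (`0x1u` is used here so the port-0xf shift into bit 31 stays well defined; `xb_link_intr_pending()` is a hypothetical helper):

```c
#include <assert.h>
#include <stdint.h>

#define XB_WID_STAT_LINK_INTR_SHFT (24)
#define XB_WID_STAT_LINK_INTR(x) (0x1u << (((x) & 7) + XB_WID_STAT_LINK_INTR_SHFT))

/* Hypothetical helper: nonzero if the widget status word reports a
 * pending interrupt for link port 'port' (0x8..0xf). */
static int xb_link_intr_pending(uint32_t status, int port)
{
	return (status & XB_WID_STAT_LINK_INTR(port)) != 0;
}
```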
+
+/* XBOW_WID_CONTROL */
+#define XB_WID_CTRL_REG_ACC_IE XB_WID_STAT_REG_ACC_ERR
+#define XB_WID_CTRL_RECV_TOUT XB_WID_STAT_RECV_TOUT
+#define XB_WID_CTRL_ARB_TOUT XB_WID_STAT_ARB_TOUT
+#define XB_WID_CTRL_XTALK_IE XB_WID_STAT_XTALK_ERR
+
+/* XBOW_WID_INT_UPPER */
+/* defined in xwidget.h for WIDGET_INTDEST_UPPER_ADDR */
+
+/* XBOW WIDGET part number, in the ID register */
+#define XBOW_WIDGET_PART_NUM 0x0 /* crossbow */
+#define XXBOW_WIDGET_PART_NUM 0xd000 /* Xbridge */
+#define XBOW_WIDGET_MFGR_NUM 0x0
+#define XXBOW_WIDGET_MFGR_NUM 0x0
+#define PXBOW_WIDGET_PART_NUM 0xd100 /* PIC */
+
+#define XBOW_REV_1_0 0x1 /* xbow rev 1.0 is "1" */
+#define XBOW_REV_1_1 0x2 /* xbow rev 1.1 is "2" */
+#define XBOW_REV_1_2 0x3 /* xbow rev 1.2 is "3" */
+#define XBOW_REV_1_3 0x4 /* xbow rev 1.3 is "4" */
+#define XBOW_REV_2_0 0x5 /* xbow rev 2.0 is "5" */
+
+#define XXBOW_PART_REV_1_0 (XXBOW_WIDGET_PART_NUM << 4 | 0x1 )
+#define XXBOW_PART_REV_2_0 (XXBOW_WIDGET_PART_NUM << 4 | 0x2 )
+
+/* XBOW_WID_ARB_RELOAD */
+#define XBOW_WID_ARB_RELOAD_INT 0x3f /* GBR reload interval */
+
+#define IS_XBRIDGE_XBOW(wid) \
+ (XWIDGET_PART_NUM(wid) == XXBOW_WIDGET_PART_NUM && \
+ XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
+
+#define IS_PIC_XBOW(wid) \
+ (XWIDGET_PART_NUM(wid) == PXBOW_WIDGET_PART_NUM && \
+ XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
+
+#define XBOW_WAR_ENABLED(pv, widid) ((1 << XWIDGET_REV_NUM(widid)) & pv)
+
+#ifndef __ASSEMBLY__
+/*
+ * XBOW Widget 0 register formats.
+ * The format of many of these registers is similar to the standard
+ * widget register format described as part of the xtalk specification.
+ * The standard widget register field format is described in xwidget.h.
+ * The following structures define the format of the xbow widget 0
+ * registers.
+ */
+/*
+ * Xbow Widget 0 Command error word
+ */
+typedef union xbw0_cmdword_u {
+ xbowreg_t cmdword;
+ struct {
+ uint32_t rsvd:8, /* Reserved */
+ barr:1, /* Barrier operation */
+ error:1, /* Error occurred */
+ vbpm:1, /* Virtual Backplane message */
+ gbr:1, /* GBR enable ? */
+ ds:2, /* Data size */
+ ct:1, /* Is it a coherent transaction */
+ tnum:5, /* Transaction Number */
+ pactyp:4, /* Packet type: */
+ srcid:4, /* Source ID number */
+ destid:4; /* Destination ID number */
+
+ } xbw0_cmdfield;
+} xbw0_cmdword_t;
+
+#define xbcmd_destid xbw0_cmdfield.destid
+#define xbcmd_srcid xbw0_cmdfield.srcid
+#define xbcmd_pactyp xbw0_cmdfield.pactyp
+#define xbcmd_tnum xbw0_cmdfield.tnum
+#define xbcmd_ct xbw0_cmdfield.ct
+#define xbcmd_ds xbw0_cmdfield.ds
+#define xbcmd_gbr xbw0_cmdfield.gbr
+#define xbcmd_vbpm xbw0_cmdfield.vbpm
+#define xbcmd_error xbw0_cmdfield.error
+#define xbcmd_barr xbw0_cmdfield.barr
+
+/*
+ * Values for field PACTYP in xbow error command word
+ */
+#define XBCMDTYP_READREQ 0 /* Read Request packet */
+#define XBCMDTYP_READRESP 1 /* Read Response packet */
+#define XBCMDTYP_WRREQ_RESP 2 /* Write Request with response */
+#define XBCMDTYP_WRRESP 3 /* Write Response */
+#define XBCMDTYP_WRREQ_NORESP 4 /* Write request with No Response */
+#define XBCMDTYP_FETCHOP 6 /* Fetch & Op packet */
+#define XBCMDTYP_STOREOP 8 /* Store & Op packet */
+#define XBCMDTYP_SPLPKT_REQ 0xE /* Special packet request */
+#define XBCMDTYP_SPLPKT_RESP 0xF /* Special packet response */
+
+/*
+ * Values for field ds (datasize) in xbow error command word
+ */
+#define XBCMDSZ_DOUBLEWORD 0
+#define XBCMDSZ_QUARTRCACHE 1
+#define XBCMDSZ_FULLCACHE 2
+
+/*
+ * Xbow widget 0 Status register format.
+ */
+
+typedef union xbw0_status_u {
+ xbowreg_t statusword;
+ struct {
+ uint32_t mult_err:1, /* Multiple error occurred */
+ connect_tout:1, /* Connection timeout */
+ xtalk_err:1, /* Xtalk pkt with error bit */
+ /* Xbridge only */
+ w0_arb_tout:1, /* arbiter timeout err */
+ w0_recv_tout:1, /* receive timeout err */
+ /* End of Xbridge only */
+ regacc_err:1, /* Reg Access error */
+ src_id:4, /* source id. Xbridge only */
+ resvd1:13,
+ wid0intr:1, /* Widget 0 err intr */
+ linkXintr:8; /* link error intr, ports 8..f */
+ } xbw0_stfield;
+} xbw0_status_t;
+
+#define xbst_linkXintr xbw0_stfield.linkXintr
+#define xbst_w0intr xbw0_stfield.wid0intr
+#define xbst_regacc_err xbw0_stfield.regacc_err
+#define xbst_xtalk_err xbw0_stfield.xtalk_err
+#define xbst_connect_tout xbw0_stfield.connect_tout
+#define xbst_mult_err xbw0_stfield.mult_err
+#define xbst_src_id xbw0_stfield.src_id /* Xbridge only */
+#define xbst_w0_recv_tout xbw0_stfield.w0_recv_tout /* Xbridge only */
+#define xbst_w0_arb_tout xbw0_stfield.w0_arb_tout /* Xbridge only */
+
+/*
+ * Xbow widget 0 Control register format
+ */
+
+typedef union xbw0_ctrl_u {
+ xbowreg_t ctrlword;
+ struct {
+ uint32_t
+ resvd3:1,
+ conntout_intr:1,
+ xtalkerr_intr:1,
+ w0_arb_tout_intr:1, /* Xbridge only */
+ w0_recv_tout_intr:1, /* Xbridge only */
+ accerr_intr:1,
+ enable_w0_tout_cntr:1, /* Xbridge only */
+ enable_watchdog:1, /* Xbridge only */
+ resvd1:24;
+ } xbw0_ctrlfield;
+} xbw0_ctrl_t;
+
+typedef union xbow_linkctrl_u {
+ xbowreg_t xbl_ctrlword;
+ struct {
+ uint32_t srcto_intr:1,
+ maxto_intr:1,
+ rsvd3:1,
+ trx_retry_intr:1,
+ rcv_err_intr:1,
+ trx_max_retry_intr:1,
+ trxov_intr:1,
+ rcvov_intr:1,
+ bwalloc_intr:1,
+ rsvd2:7,
+ obuf_intr:1,
+ idest_intr:1,
+ llp_credit:5,
+ force_badllp:1,
+ send_bm8:1,
+ inbuf_level:3,
+ perf_mode:2,
+ rsvd1:1,
+ alive_intr:1;
+
+ } xb_linkcontrol;
+} xbow_linkctrl_t;
+
+#define xbctl_accerr_intr (xbw0_ctrlfield.accerr_intr)
+#define xbctl_xtalkerr_intr (xbw0_ctrlfield.xtalkerr_intr)
+#define xbctl_cnntout_intr (xbw0_ctrlfield.conntout_intr)
+
+#define XBW0_CTRL_ACCERR_INTR (1 << 5)
+#define XBW0_CTRL_XTERR_INTR (1 << 2)
+#define XBW0_CTRL_CONNTOUT_INTR (1 << 1)
+
+/*
+ * Xbow Link specific Registers structure definitions.
+ */
+
+typedef union xbow_linkX_status_u {
+ xbowreg_t linkstatus;
+ struct {
+ uint32_t pkt_toutsrc:1,
+ pkt_toutconn:1, /* max_req_tout in Xbridge */
+ pkt_toutdest:1, /* reserved in Xbridge */
+ llp_xmitretry:1,
+ llp_rcverror:1,
+ llp_maxtxretry:1,
+ llp_txovflow:1,
+ llp_rxovflow:1,
+ bw_errport:8, /* BW allocation error port */
+ ioe:1, /* Input overallocation error */
+ illdest:1,
+ merror:1,
+ resvd1:12,
+ alive:1;
+ } xb_linkstatus;
+} xbwX_stat_t;
+
+#define link_alive xb_linkstatus.alive
+#define link_multierror xb_linkstatus.merror
+#define link_illegal_dest xb_linkstatus.illdest
+#define link_ioe xb_linkstatus.ioe
+#define link_max_req_tout xb_linkstatus.pkt_toutconn /* Xbridge */
+#define link_pkt_toutconn xb_linkstatus.pkt_toutconn /* Xbow */
+#define link_pkt_toutdest xb_linkstatus.pkt_toutdest
+#define link_pkt_toutsrc xb_linkstatus.pkt_toutsrc
+
+typedef union xbow_aux_linkX_status_u {
+ xbowreg_t aux_linkstatus;
+ struct {
+ uint32_t rsvd2:4,
+ bit_mode_8:1,
+ wid_present:1,
+ fail_mode:1,
+ rsvd1:1,
+ to_src_loc:8,
+ tx_retry_cnt:8,
+ rx_err_cnt:8;
+ } xb_aux_linkstatus;
+} xbow_aux_link_status_t;
+
+typedef union xbow_perf_count_u {
+ xbowreg_t xb_counter_val;
+ struct {
+ uint32_t count:20,
+ link_select:3,
+ rsvd:9;
+ } xb_perf;
+} xbow_perfcount_t;
+
+#define XBOW_COUNTER_MASK 0xFFFFF
+
+extern int xbow_widget_present(xbow_t * xbow, int port);
+
+extern xwidget_intr_preset_f xbow_intr_preset;
+extern xswitch_reset_link_f xbow_reset_link;
+void xbow_mlreset(xbow_t *);
+
+/* ========================================================================
+ */
+
+#ifdef MACROFIELD_LINE
+/*
+ * This table forms a relation between the byte offset macros normally
+ * used for ASM coding and the calculated byte offsets of the fields
+ * in the C structure.
+ *
+ * See xbow_check.c xbow_html.c for further details.
+ */
+#ifndef MACROFIELD_LINE_BITFIELD
+#define MACROFIELD_LINE_BITFIELD(m) /* ignored */
+#endif
+
+struct macrofield_s xbow_macrofield[] =
+{
+
+ MACROFIELD_LINE(XBOW_WID_ID, xb_wid_id)
+ MACROFIELD_LINE(XBOW_WID_STAT, xb_wid_stat)
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xF))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xE))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xD))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xC))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xB))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0xA))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0x9))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_LINK_INTR(0x8))
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_WIDGET0_INTR)
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_REG_ACC_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_XTALK_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_WID_STAT_MULTI_ERR)
+ MACROFIELD_LINE(XBOW_WID_ERR_UPPER, xb_wid_err_upper)
+ MACROFIELD_LINE(XBOW_WID_ERR_LOWER, xb_wid_err_lower)
+ MACROFIELD_LINE(XBOW_WID_CONTROL, xb_wid_control)
+ MACROFIELD_LINE_BITFIELD(XB_WID_CTRL_REG_ACC_IE)
+ MACROFIELD_LINE_BITFIELD(XB_WID_CTRL_XTALK_IE)
+ MACROFIELD_LINE(XBOW_WID_REQ_TO, xb_wid_req_timeout)
+ MACROFIELD_LINE(XBOW_WID_INT_UPPER, xb_wid_int_upper)
+ MACROFIELD_LINE(XBOW_WID_INT_LOWER, xb_wid_int_lower)
+ MACROFIELD_LINE(XBOW_WID_ERR_CMDWORD, xb_wid_err_cmdword)
+ MACROFIELD_LINE(XBOW_WID_LLP, xb_wid_llp)
+ MACROFIELD_LINE(XBOW_WID_STAT_CLR, xb_wid_stat_clr)
+ MACROFIELD_LINE(XBOW_WID_ARB_RELOAD, xb_wid_arb_reload)
+ MACROFIELD_LINE(XBOW_WID_PERF_CTR_A, xb_perf_ctr_a)
+ MACROFIELD_LINE(XBOW_WID_PERF_CTR_B, xb_perf_ctr_b)
+ MACROFIELD_LINE(XBOW_WID_NIC, xb_nic)
+ MACROFIELD_LINE(XB_LINK_REG_BASE(8), xb_link(8))
+ MACROFIELD_LINE(XB_LINK_IBUF_FLUSH(8), xb_link(8).link_ibf)
+ MACROFIELD_LINE(XB_LINK_CTRL(8), xb_link(8).link_control)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_LINKALIVE_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_PERF_CTR_MODE_MSK)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_IBUF_LEVEL_MSK)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_8BIT_MODE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_BAD_LLP_PKT)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_WIDGET_CR_MSK)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_ILLEGAL_DST_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_OALLOC_IBUF_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_BNDWDTH_ALLOC_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_RCV_CNT_OFLOW_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_XMT_CNT_OFLOW_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_XMT_MAX_RTRY_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_RCV_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_XMT_RTRY_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_MAXREQ_TOUT_IE)
+ MACROFIELD_LINE_BITFIELD(XB_CTRL_SRC_TOUT_IE)
+ MACROFIELD_LINE(XB_LINK_STATUS(8), xb_link(8).link_status)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_LINKALIVE)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_MULTI_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_ILLEGAL_DST_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_OALLOC_IBUF_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_BNDWDTH_ALLOC_ID_MSK)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_RCV_CNT_OFLOW_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_XMT_CNT_OFLOW_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_XMT_MAX_RTRY_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_RCV_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_XMT_RTRY_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_MAXREQ_TOUT_ERR)
+ MACROFIELD_LINE_BITFIELD(XB_STAT_SRC_TOUT_ERR)
+ MACROFIELD_LINE(XB_LINK_ARB_UPPER(8), xb_link(8).link_arb_upper)
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xb))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xb))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xa))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xa))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0x9))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0x9))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0x8))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0x8))
+ MACROFIELD_LINE(XB_LINK_ARB_LOWER(8), xb_link(8).link_arb_lower)
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xf))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xf))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xe))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xe))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xd))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xd))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_RR_MSK << XB_ARB_RR_SHFT(0xc))
+ MACROFIELD_LINE_BITFIELD(XB_ARB_GBR_MSK << XB_ARB_GBR_SHFT(0xc))
+ MACROFIELD_LINE(XB_LINK_STATUS_CLR(8), xb_link(8).link_status_clr)
+ MACROFIELD_LINE(XB_LINK_RESET(8), xb_link(8).link_reset)
+ MACROFIELD_LINE(XB_LINK_AUX_STATUS(8), xb_link(8).link_aux_status)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_STAT_RCV_CNT)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_STAT_XMT_CNT)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_LINKFAIL_RST_BAD)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_STAT_PRESENT)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_STAT_PORT_WIDTH)
+ MACROFIELD_LINE_BITFIELD(XB_AUX_STAT_TOUT_DST)
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0x8), xb_link(0x8))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0x9), xb_link(0x9))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xA), xb_link(0xA))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xB), xb_link(0xB))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xC), xb_link(0xC))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xD), xb_link(0xD))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xE), xb_link(0xE))
+ MACROFIELD_LINE(XB_LINK_REG_BASE(0xF), xb_link(0xF))
+}; /* xbow_macrofield[] */
+
+#endif /* MACROFIELD_LINE */
+
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_IA64_SN_XTALK_XBOW_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XBOW_INFO_H
+#define _ASM_IA64_SN_XTALK_XBOW_INFO_H
+
+#include <linux/types.h>
+
+#define XBOW_PERF_MODES 0x03
+
+typedef struct xbow_link_status {
+ uint64_t rx_err_count;
+ uint64_t tx_retry_count;
+} xbow_link_status_t;
+
+
+#endif /* _ASM_IA64_SN_XTALK_XBOW_INFO_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XSWITCH_H
+#define _ASM_IA64_SN_XTALK_XSWITCH_H
+
+/*
+ * xswitch.h - controls the format of the data
+ * provided by xswitch vertices back to the
+ * xtalk bus providers.
+ */
+
+#ifndef __ASSEMBLY__
+
+#include <asm/sn/xtalk/xtalk.h>
+
+typedef struct xswitch_info_s *xswitch_info_t;
+
+typedef int
+ xswitch_reset_link_f(vertex_hdl_t xconn);
+
+typedef struct xswitch_provider_s {
+ xswitch_reset_link_f *reset_link;
+} xswitch_provider_t;
+
+extern void xswitch_provider_register(vertex_hdl_t sw_vhdl, xswitch_provider_t * xsw_fns);
+
+xswitch_reset_link_f xswitch_reset_link;
+
+extern xswitch_info_t xswitch_info_new(vertex_hdl_t vhdl);
+
+extern void xswitch_info_link_is_ok(xswitch_info_t xswitch_info,
+ xwidgetnum_t port);
+extern void xswitch_info_vhdl_set(xswitch_info_t xswitch_info,
+ xwidgetnum_t port,
+ vertex_hdl_t xwidget);
+extern void xswitch_info_master_assignment_set(xswitch_info_t xswitch_info,
+ xwidgetnum_t port,
+ vertex_hdl_t master_vhdl);
+
+extern xswitch_info_t xswitch_info_get(vertex_hdl_t vhdl);
+
+extern int xswitch_info_link_ok(xswitch_info_t xswitch_info,
+ xwidgetnum_t port);
+extern vertex_hdl_t xswitch_info_vhdl_get(xswitch_info_t xswitch_info,
+ xwidgetnum_t port);
+extern vertex_hdl_t xswitch_info_master_assignment_get(xswitch_info_t xswitch_info,
+ xwidgetnum_t port);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_SN_XTALK_XSWITCH_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XTALK_H
+#define _ASM_IA64_SN_XTALK_XTALK_H
+#include <linux/config.h>
+
+#ifdef __KERNEL__
+#include "asm/sn/sgi.h"
+#endif
+
+
+/*
+ * xtalk.h -- platform-independent crosstalk interface
+ */
+/*
+ * User-level device driver visible types
+ */
+typedef char xwidgetnum_t; /* xtalk widget number (0..15) */
+
+#define XWIDGET_NONE (-1)
+
+typedef int xwidget_part_num_t; /* xtalk widget part number */
+
+#define XWIDGET_PART_NUM_NONE (-1)
+
+typedef int xwidget_rev_num_t; /* xtalk widget revision number */
+
+#define XWIDGET_REV_NUM_NONE (-1)
+
+typedef int xwidget_mfg_num_t; /* xtalk widget manufacturing ID */
+
+#define XWIDGET_MFG_NUM_NONE (-1)
+
+typedef struct xtalk_piomap_s *xtalk_piomap_t;
+
+/* It is often convenient to fold the XIO target port
+ * number into the XIO address.
+ */
+#define XIO_NOWHERE (0xFFFFFFFFFFFFFFFFull)
+#define XIO_ADDR_BITS (0x0000FFFFFFFFFFFFull)
+#define XIO_PORT_BITS (0xF000000000000000ull)
+#define XIO_PORT_SHIFT (60)
+
+#define XIO_PACKED(x) (((x)&XIO_PORT_BITS) != 0)
+#define XIO_ADDR(x) ((x)&XIO_ADDR_BITS)
+#define XIO_PORT(x) ((xwidgetnum_t)(((x)&XIO_PORT_BITS) >> XIO_PORT_SHIFT))
+#define XIO_PACK(p,o) ((((uint64_t)(p))<<XIO_PORT_SHIFT) | ((o)&XIO_ADDR_BITS))
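The comment above describes folding the 4-bit target port into the top nibble of a 64-bit XIO address; the macros below it do exactly that. A standalone round-trip sketch using copies of those macros:

```c
#include <assert.h>
#include <stdint.h>

typedef char xwidgetnum_t; /* xtalk widget number (0..15), as above */

#define XIO_ADDR_BITS  (0x0000FFFFFFFFFFFFull)
#define XIO_PORT_BITS  (0xF000000000000000ull)
#define XIO_PORT_SHIFT (60)

/* Copies of the header's XIO address packing macros. */
#define XIO_PACKED(x) (((x) & XIO_PORT_BITS) != 0)
#define XIO_ADDR(x)   ((x) & XIO_ADDR_BITS)
#define XIO_PORT(x)   ((xwidgetnum_t)(((x) & XIO_PORT_BITS) >> XIO_PORT_SHIFT))
#define XIO_PACK(p,o) ((((uint64_t)(p)) << XIO_PORT_SHIFT) | ((o) & XIO_ADDR_BITS))
```

Packing port 0 yields an address that tests as unpacked, which is why XIO_PACKED can double as a "has a target port" predicate.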
+
+
+/*
+ * Kernel/driver only definitions
+ */
+#ifdef __KERNEL__
+
+#include <asm/types.h>
+#include <asm/sn/types.h>
+#include <asm/sn/ioerror.h>
+#include <asm/sn/driver.h>
+#include <asm/sn/dmamap.h>
+
+struct xwidget_hwid_s;
+
+/*
+ * Acceptable flag bits for xtalk service calls
+ *
+ * XTALK_FIXED: require that mappings be established
+ * using fixed sharable resources; address
+ * translation results will be permanently
+ * available. (PIOMAP_FIXED and DMAMAP_FIXED are
+ * the same numeric value and are acceptable).
+ * XTALK_NOSLEEP: if any part of the operation would
+ * sleep waiting for resources, return an error
+ * instead. (PIOMAP_NOSLEEP and DMAMAP_NOSLEEP are
+ * the same numeric value and are acceptable).
+ */
+#define XTALK_FIXED DMAMAP_FIXED
+#define XTALK_NOSLEEP DMAMAP_NOSLEEP
+
+/* PIO MANAGEMENT */
+typedef xtalk_piomap_t
+xtalk_piomap_alloc_f (vertex_hdl_t dev, /* set up mapping for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* map for this xtalk_addr range */
+ size_t byte_count,
+ size_t byte_count_max, /* maximum size of a mapping */
+ unsigned int flags); /* defined in sys/pio.h */
+typedef void
+xtalk_piomap_free_f (xtalk_piomap_t xtalk_piomap);
+
+typedef caddr_t
+xtalk_piomap_addr_f (xtalk_piomap_t xtalk_piomap, /* mapping resources */
+ iopaddr_t xtalk_addr, /* map for this xtalk address */
+ size_t byte_count); /* map this many bytes */
+
+typedef void
+xtalk_piomap_done_f (xtalk_piomap_t xtalk_piomap);
+
+typedef caddr_t
+xtalk_piotrans_addr_f (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* Crosstalk address */
+ size_t byte_count, /* map this many bytes */
+ unsigned int flags); /* (currently unused) */
+
+extern caddr_t
+xtalk_pio_addr (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ iopaddr_t xtalk_addr, /* Crosstalk address */
+ size_t byte_count, /* map this many bytes */
+ xtalk_piomap_t *xtalk_piomapp, /* RETURNS mapping resources */
+ unsigned int flags); /* (currently unused) */
+
+/* DMA MANAGEMENT */
+
+typedef struct xtalk_dmamap_s *xtalk_dmamap_t;
+
+typedef xtalk_dmamap_t
+xtalk_dmamap_alloc_f (vertex_hdl_t dev, /* set up mappings for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ size_t byte_count_max, /* max size of a mapping */
+ unsigned int flags); /* defined in dma.h */
+
+typedef void
+xtalk_dmamap_free_f (xtalk_dmamap_t dmamap);
+
+typedef iopaddr_t
+xtalk_dmamap_addr_f (xtalk_dmamap_t dmamap, /* use these mapping resources */
+ paddr_t paddr, /* map for this address */
+ size_t byte_count); /* map this many bytes */
+
+typedef void
+xtalk_dmamap_done_f (xtalk_dmamap_t dmamap);
+
+typedef iopaddr_t
+xtalk_dmatrans_addr_f (vertex_hdl_t dev, /* translate for this device */
+ device_desc_t dev_desc, /* device descriptor */
+ paddr_t paddr, /* system physical address */
+ size_t byte_count, /* length */
+ unsigned int flags);
+
+typedef void
+xtalk_dmamap_drain_f (xtalk_dmamap_t map); /* drain this map's channel */
+
+typedef void
+xtalk_dmaaddr_drain_f (vertex_hdl_t vhdl, /* drain channel from this device */
+ paddr_t addr, /* to this physical address */
+ size_t bytes); /* for this many bytes */
+
+/* INTERRUPT MANAGEMENT */
+
+/*
+ * A xtalk interrupt resource handle. When resources are allocated
+ * in order to satisfy a xtalk_intr_alloc request, a xtalk_intr handle
+ * is returned. xtalk_intr_connect associates a software handler with
+ * these system resources.
+ */
+typedef struct xtalk_intr_s *xtalk_intr_t;
+
+
+/*
+ * When a crosstalk device connects an interrupt, it passes in a function
+ * that knows how to set its xtalk interrupt register appropriately. The
+ * low-level interrupt code may invoke this function later in order to
+ * migrate an interrupt transparently to the device driver(s) that use this
+ * interrupt.
+ *
+ * The argument passed to this function contains enough information for a
+ * crosstalk device to (re-)target an interrupt. A function of this type
+ * must be supplied by every crosstalk driver.
+ */
+typedef int
+xtalk_intr_setfunc_f (xtalk_intr_t intr_hdl); /* interrupt handle */
+
+typedef xtalk_intr_t
+xtalk_intr_alloc_f (vertex_hdl_t dev, /* which crosstalk device */
+ device_desc_t dev_desc, /* device descriptor */
+ vertex_hdl_t owner_dev); /* owner of this intr */
+
+typedef void
+xtalk_intr_free_f (xtalk_intr_t intr_hdl);
+
+typedef int
+xtalk_intr_connect_f (xtalk_intr_t intr_hdl, /* xtalk intr resource handle */
+ intr_func_t intr_func, /* xtalk intr handler */
+ void *intr_arg, /* arg to intr handler */
+ xtalk_intr_setfunc_f *setfunc, /* func to set intr hw */
+ void *setfunc_arg); /* arg to setfunc */
+
+typedef void
+xtalk_intr_disconnect_f (xtalk_intr_t intr_hdl);
+
+typedef vertex_hdl_t
+xtalk_intr_cpu_get_f (xtalk_intr_t intr_hdl); /* xtalk intr resource handle */
+
+/* CONFIGURATION MANAGEMENT */
+
+typedef void
+xtalk_provider_startup_f (vertex_hdl_t xtalk_provider);
+
+typedef void
+xtalk_provider_shutdown_f (vertex_hdl_t xtalk_provider);
+
+typedef void
+xtalk_widgetdev_enable_f (vertex_hdl_t, int);
+
+typedef void
+xtalk_widgetdev_shutdown_f (vertex_hdl_t, int);
+
+/* Error Management */
+
+/* Early Action Support */
+typedef caddr_t
+xtalk_early_piotrans_addr_f (xwidget_part_num_t part_num,
+ xwidget_mfg_num_t mfg_num,
+ int which,
+ iopaddr_t xtalk_addr,
+ size_t byte_count,
+ unsigned int flags);
+
+/*
+ * Adapters that provide a crosstalk interface adhere to this software interface.
+ */
+typedef struct xtalk_provider_s {
+ /* PIO MANAGEMENT */
+ xtalk_piomap_alloc_f *piomap_alloc;
+ xtalk_piomap_free_f *piomap_free;
+ xtalk_piomap_addr_f *piomap_addr;
+ xtalk_piomap_done_f *piomap_done;
+ xtalk_piotrans_addr_f *piotrans_addr;
+
+ /* DMA MANAGEMENT */
+ xtalk_dmamap_alloc_f *dmamap_alloc;
+ xtalk_dmamap_free_f *dmamap_free;
+ xtalk_dmamap_addr_f *dmamap_addr;
+ xtalk_dmamap_done_f *dmamap_done;
+ xtalk_dmatrans_addr_f *dmatrans_addr;
+ xtalk_dmamap_drain_f *dmamap_drain;
+ xtalk_dmaaddr_drain_f *dmaaddr_drain;
+
+ /* INTERRUPT MANAGEMENT */
+ xtalk_intr_alloc_f *intr_alloc;
+ xtalk_intr_alloc_f *intr_alloc_nothd;
+ xtalk_intr_free_f *intr_free;
+ xtalk_intr_connect_f *intr_connect;
+ xtalk_intr_disconnect_f *intr_disconnect;
+
+ /* CONFIGURATION MANAGEMENT */
+ xtalk_provider_startup_f *provider_startup;
+ xtalk_provider_shutdown_f *provider_shutdown;
+} xtalk_provider_t;
+
+/* Crosstalk devices use these standard Crosstalk provider interfaces */
+extern xtalk_piomap_alloc_f xtalk_piomap_alloc;
+extern xtalk_piomap_free_f xtalk_piomap_free;
+extern xtalk_piomap_addr_f xtalk_piomap_addr;
+extern xtalk_piomap_done_f xtalk_piomap_done;
+extern xtalk_piotrans_addr_f xtalk_piotrans_addr;
+extern xtalk_dmamap_alloc_f xtalk_dmamap_alloc;
+extern xtalk_dmamap_free_f xtalk_dmamap_free;
+extern xtalk_dmamap_addr_f xtalk_dmamap_addr;
+extern xtalk_dmamap_done_f xtalk_dmamap_done;
+extern xtalk_dmatrans_addr_f xtalk_dmatrans_addr;
+extern xtalk_dmamap_drain_f xtalk_dmamap_drain;
+extern xtalk_dmaaddr_drain_f xtalk_dmaaddr_drain;
+extern xtalk_intr_alloc_f xtalk_intr_alloc;
+extern xtalk_intr_alloc_f xtalk_intr_alloc_nothd;
+extern xtalk_intr_free_f xtalk_intr_free;
+extern xtalk_intr_connect_f xtalk_intr_connect;
+extern xtalk_intr_disconnect_f xtalk_intr_disconnect;
+extern xtalk_intr_cpu_get_f xtalk_intr_cpu_get;
+extern xtalk_provider_startup_f xtalk_provider_startup;
+extern xtalk_provider_shutdown_f xtalk_provider_shutdown;
+extern xtalk_widgetdev_enable_f xtalk_widgetdev_enable;
+extern xtalk_widgetdev_shutdown_f xtalk_widgetdev_shutdown;
+extern xtalk_early_piotrans_addr_f xtalk_early_piotrans_addr;
+
+/* error management */
+
+extern int xtalk_error_handler(vertex_hdl_t,
+ int,
+ ioerror_mode_t,
+ ioerror_t *);
+
+/*
+ * Generic crosstalk interface, for use with all crosstalk providers
+ * and all crosstalk devices.
+ */
+typedef unchar xtalk_intr_vector_t; /* crosstalk interrupt vector (0..255) */
+
+#define XTALK_INTR_VECTOR_NONE (xtalk_intr_vector_t)0
+
+/* Generic crosstalk interrupt interfaces */
+extern vertex_hdl_t xtalk_intr_dev_get(xtalk_intr_t xtalk_intr);
+extern xwidgetnum_t xtalk_intr_target_get(xtalk_intr_t xtalk_intr);
+extern xtalk_intr_vector_t xtalk_intr_vector_get(xtalk_intr_t xtalk_intr);
+extern iopaddr_t xtalk_intr_addr_get(xtalk_intr_t xtalk_intr);
+extern vertex_hdl_t xtalk_intr_cpu_get(xtalk_intr_t xtalk_intr);
+extern void *xtalk_intr_sfarg_get(xtalk_intr_t xtalk_intr);
+
+/* Generic crosstalk pio interfaces */
+extern vertex_hdl_t xtalk_pio_dev_get(xtalk_piomap_t xtalk_piomap);
+extern xwidgetnum_t xtalk_pio_target_get(xtalk_piomap_t xtalk_piomap);
+extern iopaddr_t xtalk_pio_xtalk_addr_get(xtalk_piomap_t xtalk_piomap);
+extern size_t xtalk_pio_mapsz_get(xtalk_piomap_t xtalk_piomap);
+extern caddr_t xtalk_pio_kvaddr_get(xtalk_piomap_t xtalk_piomap);
+
+/* Generic crosstalk dma interfaces */
+extern vertex_hdl_t xtalk_dma_dev_get(xtalk_dmamap_t xtalk_dmamap);
+extern xwidgetnum_t xtalk_dma_target_get(xtalk_dmamap_t xtalk_dmamap);
+
+/* Register/unregister Crosstalk providers and get implementation handle */
+extern void xtalk_set_early_piotrans_addr(xtalk_early_piotrans_addr_f *);
+extern void xtalk_provider_register(vertex_hdl_t provider, xtalk_provider_t *xtalk_fns);
+extern void xtalk_provider_unregister(vertex_hdl_t provider);
+extern xtalk_provider_t *xtalk_provider_fns_get(vertex_hdl_t provider);
+
+/* Crosstalk Switch generic layer, for use by initialization code */
+extern void xswitch_census(vertex_hdl_t xswitchv);
+extern void xswitch_init_widgets(vertex_hdl_t xswitchv);
+
+/* early init interrupt management */
+
+typedef void
+xwidget_intr_preset_f (void *which_widget,
+ int which_widget_intr,
+ xwidgetnum_t targ,
+ iopaddr_t addr,
+ xtalk_intr_vector_t vect);
+
+typedef void
+xtalk_intr_prealloc_f (void *which_xtalk,
+ xtalk_intr_vector_t xtalk_vector,
+ xwidget_intr_preset_f *preset_func,
+ void *which_widget,
+ int which_widget_intr);
+
+typedef void
+xtalk_intr_preconn_f (void *which_xtalk,
+ xtalk_intr_vector_t xtalk_vector,
+ intr_func_t intr_func,
+ intr_arg_t intr_arg);
+
+
+#define XTALK_ADDR_TO_UPPER(xtalk_addr) (((iopaddr_t)(xtalk_addr) >> 32) & 0xffff)
+#define XTALK_ADDR_TO_LOWER(xtalk_addr) ((iopaddr_t)(xtalk_addr) & 0xffffffff)
+
+typedef xtalk_intr_setfunc_f *xtalk_intr_setfunc_t;
+
+typedef void xtalk_iter_f(vertex_hdl_t vhdl);
+
+extern void xtalk_iterate(char *prefix, xtalk_iter_f *func);
+
+#endif /* __KERNEL__ */
+#endif /* _ASM_IA64_SN_XTALK_XTALK_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997, 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XTALK_PRIVATE_H
+#define _ASM_IA64_SN_XTALK_XTALK_PRIVATE_H
+
+#include <asm/sn/ioerror.h> /* for error function and arg types */
+#include <asm/sn/xtalk/xwidget.h>
+#include <asm/sn/xtalk/xtalk.h>
+
+/*
+ * xtalk_private.h -- private definitions for xtalk
+ * crosstalk drivers should NOT include this file.
+ */
+
+/*
+ * All Crosstalk providers set up PIO using this information.
+ */
+struct xtalk_piomap_s {
+ vertex_hdl_t xp_dev; /* a requestor of this mapping */
+ xwidgetnum_t xp_target; /* target (node's widget number) */
+ iopaddr_t xp_xtalk_addr; /* which crosstalk addr is mapped */
+ size_t xp_mapsz; /* size of this mapping */
+ caddr_t xp_kvaddr; /* kernel virtual address to use */
+};
+
+/*
+ * All Crosstalk providers set up DMA using this information.
+ */
+struct xtalk_dmamap_s {
+ vertex_hdl_t xd_dev; /* a requestor of this mapping */
+ xwidgetnum_t xd_target; /* target (node's widget number) */
+};
+
+/*
+ * All Crosstalk providers set up interrupts using this information.
+ */
+struct xtalk_intr_s {
+ vertex_hdl_t xi_dev; /* requestor of this intr */
+ xwidgetnum_t xi_target; /* master's widget number */
+ xtalk_intr_vector_t xi_vector; /* 8-bit interrupt vector */
+ iopaddr_t xi_addr; /* xtalk address to generate intr */
+ void *xi_sfarg; /* argument for setfunc */
+ xtalk_intr_setfunc_t xi_setfunc; /* device's setfunc routine */
+};
+
+/*
+ * Xtalk interrupt handler structure access functions
+ */
+#define xwidget_hwid_is_sn1_xswitch(_hwid) \
+ (((_hwid)->part_num == XXBOW_WIDGET_PART_NUM || \
+ (_hwid)->part_num == PXBOW_WIDGET_PART_NUM) && \
+ ((_hwid)->mfg_num == XXBOW_WIDGET_MFGR_NUM ))
+
+#define xwidget_hwid_is_xswitch(_hwid) \
+ xwidget_hwid_is_sn1_xswitch(_hwid)
+
+/* common iograph info for all widgets,
+ * stashed in FASTINFO of widget connection points.
+ */
+struct xwidget_info_s {
+ char *w_fingerprint;
+ vertex_hdl_t w_vertex; /* back pointer to vertex */
+ xwidgetnum_t w_id; /* widget id */
+ struct xwidget_hwid_s w_hwid; /* hardware identification (part/rev/mfg) */
+ vertex_hdl_t w_master; /* CACHED widget's master */
+ xwidgetnum_t w_masterid; /* CACHED widget's master's widgetnum */
+ error_handler_f *w_efunc; /* error handling function */
+ error_handler_arg_t w_einfo; /* first parameter for efunc */
+ char *w_name; /* canonical hwgraph name */
+};
+
+extern char widget_info_fingerprint[];
+
+#endif /* _ASM_IA64_SN_XTALK_XTALK_PRIVATE_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XTALKADDRS_H
+#define _ASM_IA64_SN_XTALK_XTALKADDRS_H
+
+
+/*
+ * CrossTalk to SN0 Hub addressing support
+ *
+ * This file defines the mapping conventions used by the Hub's
+ * I/O interface when it receives a read or write request from
+ * a CrossTalk widget.
+ *
+ * Format for Memory accesses:
+ *
+ * +----+---------+------------------------------------------------+
+ * | 0 | XXXXX | SN0Addr |
+ * +----+---------+------------------------------------------------+
+ * 47 46 40 39 0
+ * bit 47 indicates Memory (0)
+ * bits 46..40 are unused
+ * bits 39..0 hold the memory address
+ * (bits 39..31 hold the nodeID in N mode,
+ * bits 39..32 hold the nodeID in M mode)
+ * By design, this looks exactly like a 0-extended SN0 Address, so
+ * we don't need to do any conversions.
+ *
+ *
+ *
+ * Format for non-Memory accesses:
+ *
+ * +----+---------+------+---------+------+--+---------------------+
+ * | 1 | DstNode | XXXX | BigW=0 | SW=1 | 1| Addr |
+ * +----+---------+------+---------+------+--+---------------------+
+ * 47 46 38 37 31 30 28 27 24 23 22 0
+ *
+ * bit 47 indicates IO (1)
+ * bits 46..38 hold the destination node ID
+ * bits 37..31 are unused
+ * bits 30..28 hold the big window being addressed
+ * bits 27..24 hold the small window being addressed
+ * 0 always refers to the xbow
+ * 1 always refers to the hub itself
+ * bit 23 indicates local (0) or remote (1)
+ * no access checks are done if this bit is 0
+ * bits 22..0 hold the register address
+ * bits 22..21 determine which section of the hub
+ * 00 -> PI
+ * 01 -> MD
+ * 10 -> IO
+ * 11 -> NI
+ * This looks very much like a REMOTE_HUB access, except the nodeID
+ * is in a different place, and the highest xtalk bit is set.
+ */
+/* Hub-specific xtalk definitions */
+
+#define HX_MEM_BIT 0L /* Hub's idea of xtalk memory access */
+#define HX_IO_BIT 1L /* Hub's idea of xtalk register access */
+#define HX_ACCTYPE_SHIFT 47
+
+#define HX_NODE_SHIFT 39
+
+#define HX_BIGWIN_SHIFT 28
+#define HX_SWIN_SHIFT 23
+
+#define HX_LOCACC 0L /* local access */
+#define HX_REMACC 1L /* remote access */
+#define HX_ACCESS_SHIFT 23
+
+/*
+ * Pre-calculate the fixed portion of a crosstalk address that maps
+ * to local register space on a hub.
+ */
+#define HX_REG_BASE ((HX_IO_BIT<<HX_ACCTYPE_SHIFT) + \
+ (0L<<HX_BIGWIN_SHIFT) + \
+ (1L<<HX_SWIN_SHIFT) + IALIAS_SIZE + \
+ (HX_REMACC<<HX_ACCESS_SHIFT))
+
+/*
+ * Return a crosstalk address which a widget can use to access a
+ * designated register on a designated node.
+ */
+#define HUBREG_AS_XTALKADDR(nasid, regaddr) \
+ ((iopaddr_t)(HX_REG_BASE + (((long)nasid)<<HX_NODE_SHIFT) + ((long)regaddr)))
+
+#if TBD
+#assert sizeof(iopaddr_t) == 8
+#endif /* TBD */
+
+/*
+ * Get widget part number, given node id and widget id.
+ * Always do a 32-bit read, because some widgets, e.g., Bridge, require so.
+ * Widget ID is at offset 0 for 64-bit access. Add 4 to get lower 32 bits
+ * in big endian mode.
+ * XXX Double check this with Hub, Xbow, Bridge and other hardware folks.
+ */
+#define XWIDGET_ID_READ(nasid, widget) \
+ (widgetreg_t)(*(volatile uint32_t *)(NODE_SWIN_BASE(nasid, widget) + WIDGET_ID))
+
+
+#endif /* _ASM_IA64_SN_XTALK_XTALKADDRS_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XWIDGET_H
+#define _ASM_IA64_SN_XTALK_XWIDGET_H
+
+/*
+ * xwidget.h - generic crosstalk widget header file
+ */
+
+#ifdef __KERNEL__
+#include <asm/sn/xtalk/xtalk.h>
+#ifndef __ASSEMBLY__
+#include <asm/sn/cdl.h>
+#endif /* __ASSEMBLY__ */
+#else
+#include <xtalk/xtalk.h>
+#endif
+
+#define WIDGET_ID 0x00
+#define WIDGET_STATUS 0x08
+#define WIDGET_ERR_UPPER_ADDR 0x10
+#define WIDGET_ERR_LOWER_ADDR 0x18
+#define WIDGET_CONTROL 0x20
+#define WIDGET_REQ_TIMEOUT 0x28
+#define WIDGET_INTDEST_UPPER_ADDR 0x30
+#define WIDGET_INTDEST_LOWER_ADDR 0x38
+#define WIDGET_ERR_CMD_WORD 0x40
+#define WIDGET_LLP_CFG 0x48
+#define WIDGET_TFLUSH 0x50
+
+/* WIDGET_ID */
+#define WIDGET_REV_NUM 0xf0000000
+#define WIDGET_PART_NUM 0x0ffff000
+#define WIDGET_MFG_NUM 0x00000ffe
+#define WIDGET_REV_NUM_SHFT 28
+#define WIDGET_PART_NUM_SHFT 12
+#define WIDGET_MFG_NUM_SHFT 1
+
+#define XWIDGET_PART_NUM(widgetid) (((widgetid) & WIDGET_PART_NUM) >> WIDGET_PART_NUM_SHFT)
+#define XWIDGET_REV_NUM(widgetid) (((widgetid) & WIDGET_REV_NUM) >> WIDGET_REV_NUM_SHFT)
+#define XWIDGET_MFG_NUM(widgetid) (((widgetid) & WIDGET_MFG_NUM) >> WIDGET_MFG_NUM_SHFT)
+#define XWIDGET_PART_REV_NUM(widgetid) ((XWIDGET_PART_NUM(widgetid) << 4) | \
+ XWIDGET_REV_NUM(widgetid))
+#define XWIDGET_PART_REV_NUM_REV(partrev) (partrev & 0xf)
+
+/* WIDGET_STATUS */
+#define WIDGET_LLP_REC_CNT 0xff000000
+#define WIDGET_LLP_TX_CNT 0x00ff0000
+#define WIDGET_PENDING 0x0000001f
+
+/* WIDGET_ERR_UPPER_ADDR */
+#define WIDGET_ERR_UPPER_ADDR_ONLY 0x0000ffff
+
+/* WIDGET_CONTROL */
+#define WIDGET_F_BAD_PKT 0x00010000
+#define WIDGET_LLP_XBAR_CRD 0x0000f000
+#define WIDGET_LLP_XBAR_CRD_SHFT 12
+#define WIDGET_CLR_RLLP_CNT 0x00000800
+#define WIDGET_CLR_TLLP_CNT 0x00000400
+#define WIDGET_SYS_END 0x00000200
+#define WIDGET_MAX_TRANS 0x000001f0
+#define WIDGET_PCI_SPEED 0x00000030
+#define WIDGET_PCI_SPEED_SHFT 4
+#define WIDGET_PCI_SPEED_33MHZ 0
+#define WIDGET_PCI_SPEED_66MHZ 1
+#define WIDGET_WIDGET_ID 0x0000000f
+
+/* WIDGET_INTDEST_UPPER_ADDR */
+#define WIDGET_INT_VECTOR 0xff000000
+#define WIDGET_INT_VECTOR_SHFT 24
+#define WIDGET_TARGET_ID 0x000f0000
+#define WIDGET_TARGET_ID_SHFT 16
+#define WIDGET_UPP_ADDR 0x0000ffff
+
+/* WIDGET_ERR_CMD_WORD */
+#define WIDGET_DIDN 0xf0000000
+#define WIDGET_SIDN 0x0f000000
+#define WIDGET_PACTYP 0x00f00000
+#define WIDGET_TNUM 0x000f8000
+#define WIDGET_COHERENT 0x00004000
+#define WIDGET_DS 0x00003000
+#define WIDGET_GBR 0x00000800
+#define WIDGET_VBPM 0x00000400
+#define WIDGET_ERROR 0x00000200
+#define WIDGET_BARRIER 0x00000100
+
+/* WIDGET_LLP_CFG */
+#define WIDGET_LLP_MAXRETRY 0x03ff0000
+#define WIDGET_LLP_MAXRETRY_SHFT 16
+#define WIDGET_LLP_NULLTIMEOUT 0x0000fc00
+#define WIDGET_LLP_NULLTIMEOUT_SHFT 10
+#define WIDGET_LLP_MAXBURST 0x000003ff
+#define WIDGET_LLP_MAXBURST_SHFT 0
+
+/*
+ * According to the crosstalk spec, only 32-bit accesses to the widget
+ * configuration registers are allowed.  Some widgets may allow 64-bit
+ * accesses, but software should not depend on it.  Registers beyond the
+ * widget target flush register are widget dependent and thus are not
+ * defined here.
+ */
+#ifndef __ASSEMBLY__
+typedef uint32_t widgetreg_t;
+
+/* widget configuration registers */
+typedef volatile struct widget_cfg {
+/*
+ * We access these through synergy unswizzled space, so the address
+ * gets twiddled (i.e. references to 0x4 actually go to 0x0 and
+ * vice versa).  That's why we put the register first and the filler
+ * second.
+ */
+ widgetreg_t w_id; /* 0x04 */
+ widgetreg_t w_pad_0; /* 0x00 */
+ widgetreg_t w_status; /* 0x0c */
+ widgetreg_t w_pad_1; /* 0x08 */
+ widgetreg_t w_err_upper_addr; /* 0x14 */
+ widgetreg_t w_pad_2; /* 0x10 */
+ widgetreg_t w_err_lower_addr; /* 0x1c */
+ widgetreg_t w_pad_3; /* 0x18 */
+ widgetreg_t w_control; /* 0x24 */
+ widgetreg_t w_pad_4; /* 0x20 */
+ widgetreg_t w_req_timeout; /* 0x2c */
+ widgetreg_t w_pad_5; /* 0x28 */
+ widgetreg_t w_intdest_upper_addr; /* 0x34 */
+ widgetreg_t w_pad_6; /* 0x30 */
+ widgetreg_t w_intdest_lower_addr; /* 0x3c */
+ widgetreg_t w_pad_7; /* 0x38 */
+ widgetreg_t w_err_cmd_word; /* 0x44 */
+ widgetreg_t w_pad_8; /* 0x40 */
+ widgetreg_t w_llp_cfg; /* 0x4c */
+ widgetreg_t w_pad_9; /* 0x48 */
+ widgetreg_t w_tflush; /* 0x54 */
+ widgetreg_t w_pad_10; /* 0x50 */
+} widget_cfg_t;
+
+typedef struct {
+ unsigned int other:8;
+ unsigned int bo:1;
+ unsigned int error:1;
+ unsigned int vbpm:1;
+ unsigned int gbr:1;
+ unsigned int ds:2;
+ unsigned int ct:1;
+ unsigned int tnum:5;
+ unsigned int pactyp:4;
+ unsigned int sidn:4;
+ unsigned int didn:4;
+} w_err_cmd_word_f;
+
+typedef union {
+ w_err_cmd_word_f f;
+ widgetreg_t r;
+} w_err_cmd_word_u;
+
+/* IO widget initialization function */
+typedef struct xwidget_info_s *xwidget_info_t;
+
+/*
+ * Crosstalk Widget Hardware Identification, as defined in the Crosstalk spec.
+ */
+typedef struct xwidget_hwid_s {
+ xwidget_mfg_num_t mfg_num;
+ xwidget_rev_num_t rev_num;
+ xwidget_part_num_t part_num;
+} *xwidget_hwid_t;
+
+
+/*
+ * Returns 1 if a driver that handles devices described by hwid1 is able
+ * to manage a device with hardware id hwid2.  NOTE: We don't check rev
+ * numbers at all.
+ */
+#define XWIDGET_HARDWARE_ID_MATCH(hwid1, hwid2) \
+ (((hwid1)->part_num == (hwid2)->part_num) && \
+ (((hwid1)->mfg_num == XWIDGET_MFG_NUM_NONE) || \
+ ((hwid2)->mfg_num == XWIDGET_MFG_NUM_NONE) || \
+ ((hwid1)->mfg_num == (hwid2)->mfg_num)))
+
+
+/* Generic crosstalk widget initialization interface */
+#ifdef __KERNEL__
+
+extern int xwidget_driver_register(xwidget_part_num_t part_num,
+ xwidget_mfg_num_t mfg_num,
+ char *driver_prefix,
+ unsigned int flags);
+
+extern void xwidget_driver_unregister(char *driver_prefix);
+
+extern int xwidget_register(struct xwidget_hwid_s *hwid,
+ vertex_hdl_t dev,
+ xwidgetnum_t id,
+ vertex_hdl_t master,
+ xwidgetnum_t targetid);
+
+extern int xwidget_unregister(vertex_hdl_t);
+
+extern void xwidget_reset(vertex_hdl_t xwidget);
+extern void xwidget_gfx_reset(vertex_hdl_t xwidget);
+extern char *xwidget_name_get(vertex_hdl_t xwidget);
+
+/* Generic crosstalk widget information access interface */
+extern xwidget_info_t xwidget_info_chk(vertex_hdl_t widget);
+extern xwidget_info_t xwidget_info_get(vertex_hdl_t widget);
+extern void xwidget_info_set(vertex_hdl_t widget, xwidget_info_t widget_info);
+extern vertex_hdl_t xwidget_info_dev_get(xwidget_info_t xwidget_info);
+extern xwidgetnum_t xwidget_info_id_get(xwidget_info_t xwidget_info);
+extern int xwidget_info_type_get(xwidget_info_t xwidget_info);
+extern int xwidget_info_state_get(xwidget_info_t xwidget_info);
+extern vertex_hdl_t xwidget_info_master_get(xwidget_info_t xwidget_info);
+extern xwidgetnum_t xwidget_info_masterid_get(xwidget_info_t xwidget_info);
+extern xwidget_part_num_t xwidget_info_part_num_get(xwidget_info_t xwidget_info);
+extern xwidget_rev_num_t xwidget_info_rev_num_get(xwidget_info_t xwidget_info);
+extern xwidget_mfg_num_t xwidget_info_mfg_num_get(xwidget_info_t xwidget_info);
+
+extern xwidgetnum_t hub_widget_id(nasid_t);
+
+
+
+/*
+ * TBD: DELETE THIS ENTIRE STRUCTURE! Equivalent is now in
+ * xtalk_private.h: xwidget_info_s
+ * This is just here for now because we still have a lot of
+ * junk referencing it.
+ * However, since nobody looks inside ...
+ */
+typedef struct v_widget_s {
+ unsigned int v_widget_s_is_really_empty;
+#define v_widget_s_is_really_empty and using this would be a syntax error.
+} v_widget_t;
+#endif /* __KERNEL__ */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_IA64_SN_XTALK_XWIDGET_H */
--- /dev/null
+#ifndef _ASM_M68K_RELAY_H
+#define _ASM_M68K_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
#define __NR_mq_notify 275
#define __NR_mq_getsetattr 276
#define __NR_waitid 277
#define __NR_vserver 278
#define __NR_add_key 279
#define __NR_request_key 280
#define __NR_keyctl 281
#define NR_syscalls 282
/* user-visible error numbers are in the range -1 - -124: see
<asm-m68k/errno.h> */
--- /dev/null
+#ifndef _ASM_M68KNOMMU_RELAY_H
+#define _ASM_M68KNOMMU_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
#define __NR_set_mempolicy 270
#define __NR_mq_open 271
#define __NR_mq_unlink 272
#define __NR_mq_timedsend 273
#define __NR_mq_timedreceive 274
#define __NR_mq_notify 275
#define __NR_add_key 279
#define __NR_request_key 280
#define __NR_keyctl 281
-
+
#define NR_syscalls 282
/* user-visible error numbers are in the range -1 - -122: see
--- /dev/null
+/*
+ * mv64340.h - MV-64340 Internal registers definition file.
+ *
+ * Copyright 2002 Momentum Computer, Inc.
+ * Author: Matthew Dharm <mdharm@momenco.com>
+ * Copyright 2002 GALILEO TECHNOLOGY, LTD.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#ifndef __ASM_MV64340_H
+#define __ASM_MV64340_H
+
+#include <asm/addrspace.h>
+#include <asm/marvell.h>
+
+/****************************************/
+/* Processor Address Space */
+/****************************************/
+
+/* DDR SDRAM BAR and size registers */
+
+#define MV64340_CS_0_BASE_ADDR 0x008
+#define MV64340_CS_0_SIZE 0x010
+#define MV64340_CS_1_BASE_ADDR 0x208
+#define MV64340_CS_1_SIZE 0x210
+#define MV64340_CS_2_BASE_ADDR 0x018
+#define MV64340_CS_2_SIZE 0x020
+#define MV64340_CS_3_BASE_ADDR 0x218
+#define MV64340_CS_3_SIZE 0x220
+
+/* Devices BAR and size registers */
+
+#define MV64340_DEV_CS0_BASE_ADDR 0x028
+#define MV64340_DEV_CS0_SIZE 0x030
+#define MV64340_DEV_CS1_BASE_ADDR 0x228
+#define MV64340_DEV_CS1_SIZE 0x230
+#define MV64340_DEV_CS2_BASE_ADDR 0x248
+#define MV64340_DEV_CS2_SIZE 0x250
+#define MV64340_DEV_CS3_BASE_ADDR 0x038
+#define MV64340_DEV_CS3_SIZE 0x040
+#define MV64340_BOOTCS_BASE_ADDR 0x238
+#define MV64340_BOOTCS_SIZE 0x240
+
+/* PCI 0 BAR and size registers */
+
+#define MV64340_PCI_0_IO_BASE_ADDR 0x048
+#define MV64340_PCI_0_IO_SIZE 0x050
+#define MV64340_PCI_0_MEMORY0_BASE_ADDR 0x058
+#define MV64340_PCI_0_MEMORY0_SIZE 0x060
+#define MV64340_PCI_0_MEMORY1_BASE_ADDR 0x080
+#define MV64340_PCI_0_MEMORY1_SIZE 0x088
+#define MV64340_PCI_0_MEMORY2_BASE_ADDR 0x258
+#define MV64340_PCI_0_MEMORY2_SIZE 0x260
+#define MV64340_PCI_0_MEMORY3_BASE_ADDR 0x280
+#define MV64340_PCI_0_MEMORY3_SIZE 0x288
+
+/* PCI 1 BAR and size registers */
+#define MV64340_PCI_1_IO_BASE_ADDR 0x090
+#define MV64340_PCI_1_IO_SIZE 0x098
+#define MV64340_PCI_1_MEMORY0_BASE_ADDR 0x0a0
+#define MV64340_PCI_1_MEMORY0_SIZE 0x0a8
+#define MV64340_PCI_1_MEMORY1_BASE_ADDR 0x0b0
+#define MV64340_PCI_1_MEMORY1_SIZE 0x0b8
+#define MV64340_PCI_1_MEMORY2_BASE_ADDR 0x2a0
+#define MV64340_PCI_1_MEMORY2_SIZE 0x2a8
+#define MV64340_PCI_1_MEMORY3_BASE_ADDR 0x2b0
+#define MV64340_PCI_1_MEMORY3_SIZE 0x2b8
+
+/* SRAM base address */
+#define MV64340_INTEGRATED_SRAM_BASE_ADDR 0x268
+
+/* internal registers space base address */
+#define MV64340_INTERNAL_SPACE_BASE_ADDR 0x068
+
+/* Enables the CS, DEV_CS, PCI 0 and PCI 1 windows above */
+#define MV64340_BASE_ADDR_ENABLE 0x278
+
+/****************************************/
+/* PCI remap registers */
+/****************************************/
+ /* PCI 0 */
+#define MV64340_PCI_0_IO_ADDR_REMAP 0x0f0
+#define MV64340_PCI_0_MEMORY0_LOW_ADDR_REMAP 0x0f8
+#define MV64340_PCI_0_MEMORY0_HIGH_ADDR_REMAP 0x320
+#define MV64340_PCI_0_MEMORY1_LOW_ADDR_REMAP 0x100
+#define MV64340_PCI_0_MEMORY1_HIGH_ADDR_REMAP 0x328
+#define MV64340_PCI_0_MEMORY2_LOW_ADDR_REMAP 0x2f8
+#define MV64340_PCI_0_MEMORY2_HIGH_ADDR_REMAP 0x330
+#define MV64340_PCI_0_MEMORY3_LOW_ADDR_REMAP 0x300
+#define MV64340_PCI_0_MEMORY3_HIGH_ADDR_REMAP 0x338
+ /* PCI 1 */
+#define MV64340_PCI_1_IO_ADDR_REMAP 0x108
+#define MV64340_PCI_1_MEMORY0_LOW_ADDR_REMAP 0x110
+#define MV64340_PCI_1_MEMORY0_HIGH_ADDR_REMAP 0x340
+#define MV64340_PCI_1_MEMORY1_LOW_ADDR_REMAP 0x118
+#define MV64340_PCI_1_MEMORY1_HIGH_ADDR_REMAP 0x348
+#define MV64340_PCI_1_MEMORY2_LOW_ADDR_REMAP 0x310
+#define MV64340_PCI_1_MEMORY2_HIGH_ADDR_REMAP 0x350
+#define MV64340_PCI_1_MEMORY3_LOW_ADDR_REMAP 0x318
+#define MV64340_PCI_1_MEMORY3_HIGH_ADDR_REMAP 0x358
+
+#define MV64340_CPU_PCI_0_HEADERS_RETARGET_CONTROL 0x3b0
+#define MV64340_CPU_PCI_0_HEADERS_RETARGET_BASE 0x3b8
+#define MV64340_CPU_PCI_1_HEADERS_RETARGET_CONTROL 0x3c0
+#define MV64340_CPU_PCI_1_HEADERS_RETARGET_BASE 0x3c8
+#define MV64340_CPU_GE_HEADERS_RETARGET_CONTROL 0x3d0
+#define MV64340_CPU_GE_HEADERS_RETARGET_BASE 0x3d8
+#define MV64340_CPU_IDMA_HEADERS_RETARGET_CONTROL 0x3e0
+#define MV64340_CPU_IDMA_HEADERS_RETARGET_BASE 0x3e8
+
+/****************************************/
+/* CPU Control Registers */
+/****************************************/
+
+#define MV64340_CPU_CONFIG 0x000
+#define MV64340_CPU_MODE 0x120
+#define MV64340_CPU_MASTER_CONTROL 0x160
+#define MV64340_CPU_CROSS_BAR_CONTROL_LOW 0x150
+#define MV64340_CPU_CROSS_BAR_CONTROL_HIGH 0x158
+#define MV64340_CPU_CROSS_BAR_TIMEOUT 0x168
+
+/****************************************/
+/* SMP Registers */
+/****************************************/
+
+#define MV64340_SMP_WHO_AM_I 0x200
+#define MV64340_SMP_CPU0_DOORBELL 0x214
+#define MV64340_SMP_CPU0_DOORBELL_CLEAR 0x21C
+#define MV64340_SMP_CPU1_DOORBELL 0x224
+#define MV64340_SMP_CPU1_DOORBELL_CLEAR 0x22C
+#define MV64340_SMP_CPU0_DOORBELL_MASK 0x234
+#define MV64340_SMP_CPU1_DOORBELL_MASK 0x23C
+#define MV64340_SMP_SEMAPHOR0 0x244
+#define MV64340_SMP_SEMAPHOR1 0x24c
+#define MV64340_SMP_SEMAPHOR2 0x254
+#define MV64340_SMP_SEMAPHOR3 0x25c
+#define MV64340_SMP_SEMAPHOR4 0x264
+#define MV64340_SMP_SEMAPHOR5 0x26c
+#define MV64340_SMP_SEMAPHOR6 0x274
+#define MV64340_SMP_SEMAPHOR7 0x27c
+
+/****************************************/
+/* CPU Sync Barrier Register */
+/****************************************/
+
+#define MV64340_CPU_0_SYNC_BARRIER_TRIGGER 0x0c0
+#define MV64340_CPU_0_SYNC_BARRIER_VIRTUAL 0x0c8
+#define MV64340_CPU_1_SYNC_BARRIER_TRIGGER 0x0d0
+#define MV64340_CPU_1_SYNC_BARRIER_VIRTUAL 0x0d8
+
+/****************************************/
+/* CPU Access Protect */
+/****************************************/
+
+#define MV64340_CPU_PROTECT_WINDOW_0_BASE_ADDR 0x180
+#define MV64340_CPU_PROTECT_WINDOW_0_SIZE 0x188
+#define MV64340_CPU_PROTECT_WINDOW_1_BASE_ADDR 0x190
+#define MV64340_CPU_PROTECT_WINDOW_1_SIZE 0x198
+#define MV64340_CPU_PROTECT_WINDOW_2_BASE_ADDR 0x1a0
+#define MV64340_CPU_PROTECT_WINDOW_2_SIZE 0x1a8
+#define MV64340_CPU_PROTECT_WINDOW_3_BASE_ADDR 0x1b0
+#define MV64340_CPU_PROTECT_WINDOW_3_SIZE 0x1b8
+
+
+/****************************************/
+/* CPU Error Report */
+/****************************************/
+
+#define MV64340_CPU_ERROR_ADDR_LOW 0x070
+#define MV64340_CPU_ERROR_ADDR_HIGH 0x078
+#define MV64340_CPU_ERROR_DATA_LOW 0x128
+#define MV64340_CPU_ERROR_DATA_HIGH 0x130
+#define MV64340_CPU_ERROR_PARITY 0x138
+#define MV64340_CPU_ERROR_CAUSE 0x140
+#define MV64340_CPU_ERROR_MASK 0x148
+
+/****************************************/
+/* CPU Interface Debug Registers */
+/****************************************/
+
+#define MV64340_PUNIT_SLAVE_DEBUG_LOW 0x360
+#define MV64340_PUNIT_SLAVE_DEBUG_HIGH 0x368
+#define MV64340_PUNIT_MASTER_DEBUG_LOW 0x370
+#define MV64340_PUNIT_MASTER_DEBUG_HIGH 0x378
+#define MV64340_PUNIT_MMASK 0x3e4
+
+/****************************************/
+/* Integrated SRAM Registers */
+/****************************************/
+
+#define MV64340_SRAM_CONFIG 0x380
+#define MV64340_SRAM_TEST_MODE 0x3f4
+#define MV64340_SRAM_ERROR_CAUSE 0x388
+#define MV64340_SRAM_ERROR_ADDR 0x390
+#define MV64340_SRAM_ERROR_ADDR_HIGH 0x3f8
+#define MV64340_SRAM_ERROR_DATA_LOW 0x398
+#define MV64340_SRAM_ERROR_DATA_HIGH 0x3a0
+#define MV64340_SRAM_ERROR_DATA_PARITY 0x3a8
+
+/****************************************/
+/* SDRAM Configuration */
+/****************************************/
+
+#define MV64340_SDRAM_CONFIG 0x1400
+#define MV64340_D_UNIT_CONTROL_LOW 0x1404
+#define MV64340_D_UNIT_CONTROL_HIGH 0x1424
+#define MV64340_SDRAM_TIMING_CONTROL_LOW 0x1408
+#define MV64340_SDRAM_TIMING_CONTROL_HIGH 0x140c
+#define MV64340_SDRAM_ADDR_CONTROL 0x1410
+#define MV64340_SDRAM_OPEN_PAGES_CONTROL 0x1414
+#define MV64340_SDRAM_OPERATION 0x1418
+#define MV64340_SDRAM_MODE 0x141c
+#define MV64340_EXTENDED_DRAM_MODE 0x1420
+#define MV64340_SDRAM_CROSS_BAR_CONTROL_LOW 0x1430
+#define MV64340_SDRAM_CROSS_BAR_CONTROL_HIGH 0x1434
+#define MV64340_SDRAM_CROSS_BAR_TIMEOUT 0x1438
+#define MV64340_SDRAM_ADDR_CTRL_PADS_CALIBRATION 0x14c0
+#define MV64340_SDRAM_DATA_PADS_CALIBRATION 0x14c4
+
+/****************************************/
+/* SDRAM Error Report */
+/****************************************/
+
+#define MV64340_SDRAM_ERROR_DATA_LOW 0x1444
+#define MV64340_SDRAM_ERROR_DATA_HIGH 0x1440
+#define MV64340_SDRAM_ERROR_ADDR 0x1450
+#define MV64340_SDRAM_RECEIVED_ECC 0x1448
+#define MV64340_SDRAM_CALCULATED_ECC 0x144c
+#define MV64340_SDRAM_ECC_CONTROL 0x1454
+#define MV64340_SDRAM_ECC_ERROR_COUNTER 0x1458
+
+/******************************************/
+/* Controlled Delay Line (CDL) Registers */
+/******************************************/
+
+#define MV64340_DFCDL_CONFIG0 0x1480
+#define MV64340_DFCDL_CONFIG1 0x1484
+#define MV64340_DLL_WRITE 0x1488
+#define MV64340_DLL_READ 0x148c
+#define MV64340_SRAM_ADDR 0x1490
+#define MV64340_SRAM_DATA0 0x1494
+#define MV64340_SRAM_DATA1 0x1498
+#define MV64340_SRAM_DATA2 0x149c
+#define MV64340_DFCL_PROBE 0x14a0
+
+/******************************************/
+/* Debug Registers */
+/******************************************/
+
+#define MV64340_DUNIT_DEBUG_LOW 0x1460
+#define MV64340_DUNIT_DEBUG_HIGH 0x1464
+#define MV64340_DUNIT_MMASK 0x1b40
+
+/****************************************/
+/* Device Parameters */
+/****************************************/
+
+#define MV64340_DEVICE_BANK0_PARAMETERS 0x45c
+#define MV64340_DEVICE_BANK1_PARAMETERS 0x460
+#define MV64340_DEVICE_BANK2_PARAMETERS 0x464
+#define MV64340_DEVICE_BANK3_PARAMETERS 0x468
+#define MV64340_DEVICE_BOOT_BANK_PARAMETERS 0x46c
+#define MV64340_DEVICE_INTERFACE_CONTROL 0x4c0
+#define MV64340_DEVICE_INTERFACE_CROSS_BAR_CONTROL_LOW 0x4c8
+#define MV64340_DEVICE_INTERFACE_CROSS_BAR_CONTROL_HIGH 0x4cc
+#define MV64340_DEVICE_INTERFACE_CROSS_BAR_TIMEOUT 0x4c4
+
+/****************************************/
+/* Device interrupt registers */
+/****************************************/
+
+#define MV64340_DEVICE_INTERRUPT_CAUSE 0x4d0
+#define MV64340_DEVICE_INTERRUPT_MASK 0x4d4
+#define MV64340_DEVICE_ERROR_ADDR 0x4d8
+#define MV64340_DEVICE_ERROR_DATA 0x4dc
+#define MV64340_DEVICE_ERROR_PARITY 0x4e0
+
+/****************************************/
+/* Device debug registers */
+/****************************************/
+
+#define MV64340_DEVICE_DEBUG_LOW 0x4e4
+#define MV64340_DEVICE_DEBUG_HIGH 0x4e8
+#define MV64340_RUNIT_MMASK 0x4f0
+
+/****************************************/
+/* PCI Slave Address Decoding registers */
+/****************************************/
+
+#define MV64340_PCI_0_CS_0_BANK_SIZE 0xc08
+#define MV64340_PCI_1_CS_0_BANK_SIZE 0xc88
+#define MV64340_PCI_0_CS_1_BANK_SIZE 0xd08
+#define MV64340_PCI_1_CS_1_BANK_SIZE 0xd88
+#define MV64340_PCI_0_CS_2_BANK_SIZE 0xc0c
+#define MV64340_PCI_1_CS_2_BANK_SIZE 0xc8c
+#define MV64340_PCI_0_CS_3_BANK_SIZE 0xd0c
+#define MV64340_PCI_1_CS_3_BANK_SIZE 0xd8c
+#define MV64340_PCI_0_DEVCS_0_BANK_SIZE 0xc10
+#define MV64340_PCI_1_DEVCS_0_BANK_SIZE 0xc90
+#define MV64340_PCI_0_DEVCS_1_BANK_SIZE 0xd10
+#define MV64340_PCI_1_DEVCS_1_BANK_SIZE 0xd90
+#define MV64340_PCI_0_DEVCS_2_BANK_SIZE 0xd18
+#define MV64340_PCI_1_DEVCS_2_BANK_SIZE 0xd98
+#define MV64340_PCI_0_DEVCS_3_BANK_SIZE 0xc14
+#define MV64340_PCI_1_DEVCS_3_BANK_SIZE 0xc94
+#define MV64340_PCI_0_DEVCS_BOOT_BANK_SIZE 0xd14
+#define MV64340_PCI_1_DEVCS_BOOT_BANK_SIZE 0xd94
+#define MV64340_PCI_0_P2P_MEM0_BAR_SIZE 0xd1c
+#define MV64340_PCI_1_P2P_MEM0_BAR_SIZE 0xd9c
+#define MV64340_PCI_0_P2P_MEM1_BAR_SIZE 0xd20
+#define MV64340_PCI_1_P2P_MEM1_BAR_SIZE 0xda0
+#define MV64340_PCI_0_P2P_I_O_BAR_SIZE 0xd24
+#define MV64340_PCI_1_P2P_I_O_BAR_SIZE 0xda4
+#define MV64340_PCI_0_CPU_BAR_SIZE 0xd28
+#define MV64340_PCI_1_CPU_BAR_SIZE 0xda8
+#define MV64340_PCI_0_INTERNAL_SRAM_BAR_SIZE 0xe00
+#define MV64340_PCI_1_INTERNAL_SRAM_BAR_SIZE 0xe80
+#define MV64340_PCI_0_EXPANSION_ROM_BAR_SIZE 0xd2c
+#define MV64340_PCI_1_EXPANSION_ROM_BAR_SIZE 0xd9c
+#define MV64340_PCI_0_BASE_ADDR_REG_ENABLE 0xc3c
+#define MV64340_PCI_1_BASE_ADDR_REG_ENABLE 0xcbc
+#define MV64340_PCI_0_CS_0_BASE_ADDR_REMAP 0xc48
+#define MV64340_PCI_1_CS_0_BASE_ADDR_REMAP 0xcc8
+#define MV64340_PCI_0_CS_1_BASE_ADDR_REMAP 0xd48
+#define MV64340_PCI_1_CS_1_BASE_ADDR_REMAP 0xdc8
+#define MV64340_PCI_0_CS_2_BASE_ADDR_REMAP 0xc4c
+#define MV64340_PCI_1_CS_2_BASE_ADDR_REMAP 0xccc
+#define MV64340_PCI_0_CS_3_BASE_ADDR_REMAP 0xd4c
+#define MV64340_PCI_1_CS_3_BASE_ADDR_REMAP 0xdcc
+#define MV64340_PCI_0_CS_0_BASE_HIGH_ADDR_REMAP 0xf04
+#define MV64340_PCI_1_CS_0_BASE_HIGH_ADDR_REMAP 0xf84
+#define MV64340_PCI_0_CS_1_BASE_HIGH_ADDR_REMAP 0xf08
+#define MV64340_PCI_1_CS_1_BASE_HIGH_ADDR_REMAP 0xf88
+#define MV64340_PCI_0_CS_2_BASE_HIGH_ADDR_REMAP 0xf0c
+#define MV64340_PCI_1_CS_2_BASE_HIGH_ADDR_REMAP 0xf8c
+#define MV64340_PCI_0_CS_3_BASE_HIGH_ADDR_REMAP 0xf10
+#define MV64340_PCI_1_CS_3_BASE_HIGH_ADDR_REMAP 0xf90
+#define MV64340_PCI_0_DEVCS_0_BASE_ADDR_REMAP 0xc50
+#define MV64340_PCI_1_DEVCS_0_BASE_ADDR_REMAP 0xcd0
+#define MV64340_PCI_0_DEVCS_1_BASE_ADDR_REMAP 0xd50
+#define MV64340_PCI_1_DEVCS_1_BASE_ADDR_REMAP 0xdd0
+#define MV64340_PCI_0_DEVCS_2_BASE_ADDR_REMAP 0xd58
+#define MV64340_PCI_1_DEVCS_2_BASE_ADDR_REMAP 0xdd8
+#define MV64340_PCI_0_DEVCS_3_BASE_ADDR_REMAP 0xc54
+#define MV64340_PCI_1_DEVCS_3_BASE_ADDR_REMAP 0xcd4
+#define MV64340_PCI_0_DEVCS_BOOTCS_BASE_ADDR_REMAP 0xd54
+#define MV64340_PCI_1_DEVCS_BOOTCS_BASE_ADDR_REMAP 0xdd4
+#define MV64340_PCI_0_P2P_MEM0_BASE_ADDR_REMAP_LOW 0xd5c
+#define MV64340_PCI_1_P2P_MEM0_BASE_ADDR_REMAP_LOW 0xddc
+#define MV64340_PCI_0_P2P_MEM0_BASE_ADDR_REMAP_HIGH 0xd60
+#define MV64340_PCI_1_P2P_MEM0_BASE_ADDR_REMAP_HIGH 0xde0
+#define MV64340_PCI_0_P2P_MEM1_BASE_ADDR_REMAP_LOW 0xd64
+#define MV64340_PCI_1_P2P_MEM1_BASE_ADDR_REMAP_LOW 0xde4
+#define MV64340_PCI_0_P2P_MEM1_BASE_ADDR_REMAP_HIGH 0xd68
+#define MV64340_PCI_1_P2P_MEM1_BASE_ADDR_REMAP_HIGH 0xde8
+#define MV64340_PCI_0_P2P_I_O_BASE_ADDR_REMAP 0xd6c
+#define MV64340_PCI_1_P2P_I_O_BASE_ADDR_REMAP 0xdec
+#define MV64340_PCI_0_CPU_BASE_ADDR_REMAP_LOW 0xd70
+#define MV64340_PCI_1_CPU_BASE_ADDR_REMAP_LOW 0xdf0
+#define MV64340_PCI_0_CPU_BASE_ADDR_REMAP_HIGH 0xd74
+#define MV64340_PCI_1_CPU_BASE_ADDR_REMAP_HIGH 0xdf4
+#define MV64340_PCI_0_INTEGRATED_SRAM_BASE_ADDR_REMAP 0xf00
+#define MV64340_PCI_1_INTEGRATED_SRAM_BASE_ADDR_REMAP 0xf80
+#define MV64340_PCI_0_EXPANSION_ROM_BASE_ADDR_REMAP 0xf38
+#define MV64340_PCI_1_EXPANSION_ROM_BASE_ADDR_REMAP 0xfb8
+#define MV64340_PCI_0_ADDR_DECODE_CONTROL 0xd3c
+#define MV64340_PCI_1_ADDR_DECODE_CONTROL 0xdbc
+#define MV64340_PCI_0_HEADERS_RETARGET_CONTROL 0xF40
+#define MV64340_PCI_1_HEADERS_RETARGET_CONTROL 0xFc0
+#define MV64340_PCI_0_HEADERS_RETARGET_BASE 0xF44
+#define MV64340_PCI_1_HEADERS_RETARGET_BASE 0xFc4
+#define MV64340_PCI_0_HEADERS_RETARGET_HIGH 0xF48
+#define MV64340_PCI_1_HEADERS_RETARGET_HIGH 0xFc8
+
+/***********************************/
+/* PCI Control Register Map */
+/***********************************/
+
+#define MV64340_PCI_0_DLL_STATUS_AND_COMMAND 0x1d20
+#define MV64340_PCI_1_DLL_STATUS_AND_COMMAND 0x1da0
+#define MV64340_PCI_0_MPP_PADS_DRIVE_CONTROL 0x1d1C
+#define MV64340_PCI_1_MPP_PADS_DRIVE_CONTROL 0x1d9C
+#define MV64340_PCI_0_COMMAND 0xc00
+#define MV64340_PCI_1_COMMAND 0xc80
+#define MV64340_PCI_0_MODE 0xd00
+#define MV64340_PCI_1_MODE 0xd80
+#define MV64340_PCI_0_RETRY 0xc04
+#define MV64340_PCI_1_RETRY 0xc84
+#define MV64340_PCI_0_READ_BUFFER_DISCARD_TIMER 0xd04
+#define MV64340_PCI_1_READ_BUFFER_DISCARD_TIMER 0xd84
+#define MV64340_PCI_0_MSI_TRIGGER_TIMER 0xc38
+#define MV64340_PCI_1_MSI_TRIGGER_TIMER 0xcb8
+#define MV64340_PCI_0_ARBITER_CONTROL 0x1d00
+#define MV64340_PCI_1_ARBITER_CONTROL 0x1d80
+#define MV64340_PCI_0_CROSS_BAR_CONTROL_LOW 0x1d08
+#define MV64340_PCI_1_CROSS_BAR_CONTROL_LOW 0x1d88
+#define MV64340_PCI_0_CROSS_BAR_CONTROL_HIGH 0x1d0c
+#define MV64340_PCI_1_CROSS_BAR_CONTROL_HIGH 0x1d8c
+#define MV64340_PCI_0_CROSS_BAR_TIMEOUT 0x1d04
+#define MV64340_PCI_1_CROSS_BAR_TIMEOUT 0x1d84
+#define MV64340_PCI_0_SYNC_BARRIER_TRIGGER_REG 0x1D18
+#define MV64340_PCI_1_SYNC_BARRIER_TRIGGER_REG 0x1D98
+#define MV64340_PCI_0_SYNC_BARRIER_VIRTUAL_REG 0x1d10
+#define MV64340_PCI_1_SYNC_BARRIER_VIRTUAL_REG 0x1d90
+#define MV64340_PCI_0_P2P_CONFIG 0x1d14
+#define MV64340_PCI_1_P2P_CONFIG 0x1d94
+
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_0_LOW 0x1e00
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_0_HIGH 0x1e04
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_0 0x1e08
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_1_LOW 0x1e10
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_1_HIGH 0x1e14
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_1 0x1e18
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_2_LOW 0x1e20
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_2_HIGH 0x1e24
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_2 0x1e28
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_3_LOW 0x1e30
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_3_HIGH 0x1e34
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_3 0x1e38
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_4_LOW 0x1e40
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_4_HIGH 0x1e44
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_4 0x1e48
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_5_LOW 0x1e50
+#define MV64340_PCI_0_ACCESS_CONTROL_BASE_5_HIGH 0x1e54
+#define MV64340_PCI_0_ACCESS_CONTROL_SIZE_5 0x1e58
+
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_0_LOW 0x1e80
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_0_HIGH 0x1e84
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_0 0x1e88
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_1_LOW 0x1e90
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_1_HIGH 0x1e94
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_1 0x1e98
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_2_LOW 0x1ea0
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_2_HIGH 0x1ea4
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_2 0x1ea8
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_3_LOW 0x1eb0
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_3_HIGH 0x1eb4
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_3 0x1eb8
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_4_LOW 0x1ec0
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_4_HIGH 0x1ec4
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_4 0x1ec8
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_5_LOW 0x1ed0
+#define MV64340_PCI_1_ACCESS_CONTROL_BASE_5_HIGH 0x1ed4
+#define MV64340_PCI_1_ACCESS_CONTROL_SIZE_5 0x1ed8
+
+/****************************************/
+/* PCI Configuration Access Registers */
+/****************************************/
+
+#define MV64340_PCI_0_CONFIG_ADDR 0xcf8
+#define MV64340_PCI_0_CONFIG_DATA_VIRTUAL_REG 0xcfc
+#define MV64340_PCI_1_CONFIG_ADDR 0xc78
+#define MV64340_PCI_1_CONFIG_DATA_VIRTUAL_REG 0xc7c
+#define MV64340_PCI_0_INTERRUPT_ACKNOWLEDGE_VIRTUAL_REG 0xc34
+#define MV64340_PCI_1_INTERRUPT_ACKNOWLEDGE_VIRTUAL_REG 0xcb4
+
+/****************************************/
+/* PCI Error Report Registers */
+/****************************************/
+
+#define MV64340_PCI_0_SERR_MASK 0xc28
+#define MV64340_PCI_1_SERR_MASK 0xca8
+#define MV64340_PCI_0_ERROR_ADDR_LOW 0x1d40
+#define MV64340_PCI_1_ERROR_ADDR_LOW 0x1dc0
+#define MV64340_PCI_0_ERROR_ADDR_HIGH 0x1d44
+#define MV64340_PCI_1_ERROR_ADDR_HIGH 0x1dc4
+#define MV64340_PCI_0_ERROR_ATTRIBUTE 0x1d48
+#define MV64340_PCI_1_ERROR_ATTRIBUTE 0x1dc8
+#define MV64340_PCI_0_ERROR_COMMAND 0x1d50
+#define MV64340_PCI_1_ERROR_COMMAND 0x1dd0
+#define MV64340_PCI_0_ERROR_CAUSE 0x1d58
+#define MV64340_PCI_1_ERROR_CAUSE 0x1dd8
+#define MV64340_PCI_0_ERROR_MASK 0x1d5c
+#define MV64340_PCI_1_ERROR_MASK 0x1ddc
+
+/****************************************/
+/* PCI Debug Registers */
+/****************************************/
+
+#define MV64340_PCI_0_MMASK 0x1d24
+#define MV64340_PCI_1_MMASK 0x1da4
+
+/*********************************************/
+/* PCI Configuration, Function 0, Registers */
+/*********************************************/
+
+#define MV64340_PCI_DEVICE_AND_VENDOR_ID 0x000
+#define MV64340_PCI_STATUS_AND_COMMAND 0x004
+#define MV64340_PCI_CLASS_CODE_AND_REVISION_ID 0x008
+#define MV64340_PCI_BIST_HEADER_TYPE_LATENCY_TIMER_CACHE_LINE 0x00C
+
+#define MV64340_PCI_SCS_0_BASE_ADDR_LOW 0x010
+#define MV64340_PCI_SCS_0_BASE_ADDR_HIGH 0x014
+#define MV64340_PCI_SCS_1_BASE_ADDR_LOW 0x018
+#define MV64340_PCI_SCS_1_BASE_ADDR_HIGH 0x01C
+#define MV64340_PCI_INTERNAL_REG_MEM_MAPPED_BASE_ADDR_LOW 0x020
+#define MV64340_PCI_INTERNAL_REG_MEM_MAPPED_BASE_ADDR_HIGH 0x024
+#define MV64340_PCI_SUBSYSTEM_ID_AND_SUBSYSTEM_VENDOR_ID 0x02c
+#define MV64340_PCI_EXPANSION_ROM_BASE_ADDR_REG 0x030
+#define MV64340_PCI_CAPABILTY_LIST_POINTER 0x034
+#define MV64340_PCI_INTERRUPT_PIN_AND_LINE 0x03C
+ /* capability list */
+#define MV64340_PCI_POWER_MANAGEMENT_CAPABILITY 0x040
+#define MV64340_PCI_POWER_MANAGEMENT_STATUS_AND_CONTROL 0x044
+#define MV64340_PCI_VPD_ADDR 0x048
+#define MV64340_PCI_VPD_DATA 0x04c
+#define MV64340_PCI_MSI_MESSAGE_CONTROL 0x050
+#define MV64340_PCI_MSI_MESSAGE_ADDR 0x054
+#define MV64340_PCI_MSI_MESSAGE_UPPER_ADDR 0x058
+#define MV64340_PCI_MSI_MESSAGE_DATA 0x05c
+#define MV64340_PCI_X_COMMAND 0x060
+#define MV64340_PCI_X_STATUS 0x064
+#define MV64340_PCI_COMPACT_PCI_HOT_SWAP 0x068
+
+/***********************************************/
+/* PCI Configuration, Function 1, Registers */
+/***********************************************/
+
+#define MV64340_PCI_SCS_2_BASE_ADDR_LOW 0x110
+#define MV64340_PCI_SCS_2_BASE_ADDR_HIGH 0x114
+#define MV64340_PCI_SCS_3_BASE_ADDR_LOW 0x118
+#define MV64340_PCI_SCS_3_BASE_ADDR_HIGH 0x11c
+#define MV64340_PCI_INTERNAL_SRAM_BASE_ADDR_LOW 0x120
+#define MV64340_PCI_INTERNAL_SRAM_BASE_ADDR_HIGH 0x124
+
+/***********************************************/
+/* PCI Configuration, Function 2, Registers */
+/***********************************************/
+
+#define MV64340_PCI_DEVCS_0_BASE_ADDR_LOW 0x210
+#define MV64340_PCI_DEVCS_0_BASE_ADDR_HIGH 0x214
+#define MV64340_PCI_DEVCS_1_BASE_ADDR_LOW 0x218
+#define MV64340_PCI_DEVCS_1_BASE_ADDR_HIGH 0x21c
+#define MV64340_PCI_DEVCS_2_BASE_ADDR_LOW 0x220
+#define MV64340_PCI_DEVCS_2_BASE_ADDR_HIGH 0x224
+
+/***********************************************/
+/* PCI Configuration, Function 3, Registers */
+/***********************************************/
+
+#define MV64340_PCI_DEVCS_3_BASE_ADDR_LOW 0x310
+#define MV64340_PCI_DEVCS_3_BASE_ADDR_HIGH 0x314
+#define MV64340_PCI_BOOT_CS_BASE_ADDR_LOW 0x318
+#define MV64340_PCI_BOOT_CS_BASE_ADDR_HIGH 0x31c
+#define MV64340_PCI_CPU_BASE_ADDR_LOW 0x220
+#define MV64340_PCI_CPU_BASE_ADDR_HIGH 0x224
+
+/***********************************************/
+/* PCI Configuration, Function 4, Registers */
+/***********************************************/
+
+#define MV64340_PCI_P2P_MEM0_BASE_ADDR_LOW 0x410
+#define MV64340_PCI_P2P_MEM0_BASE_ADDR_HIGH 0x414
+#define MV64340_PCI_P2P_MEM1_BASE_ADDR_LOW 0x418
+#define MV64340_PCI_P2P_MEM1_BASE_ADDR_HIGH 0x41c
+#define MV64340_PCI_P2P_I_O_BASE_ADDR 0x420
+#define MV64340_PCI_INTERNAL_REGS_I_O_MAPPED_BASE_ADDR 0x424
+
+/****************************************/
+/* Messaging Unit Registers (I2O) */
+/****************************************/
+
+#define MV64340_I2O_INBOUND_MESSAGE_REG0_PCI_0_SIDE 0x010
+#define MV64340_I2O_INBOUND_MESSAGE_REG1_PCI_0_SIDE 0x014
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG0_PCI_0_SIDE 0x018
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG1_PCI_0_SIDE 0x01C
+#define MV64340_I2O_INBOUND_DOORBELL_REG_PCI_0_SIDE 0x020
+#define MV64340_I2O_INBOUND_INTERRUPT_CAUSE_REG_PCI_0_SIDE 0x024
+#define MV64340_I2O_INBOUND_INTERRUPT_MASK_REG_PCI_0_SIDE 0x028
+#define MV64340_I2O_OUTBOUND_DOORBELL_REG_PCI_0_SIDE 0x02C
+#define MV64340_I2O_OUTBOUND_INTERRUPT_CAUSE_REG_PCI_0_SIDE 0x030
+#define MV64340_I2O_OUTBOUND_INTERRUPT_MASK_REG_PCI_0_SIDE 0x034
+#define MV64340_I2O_INBOUND_QUEUE_PORT_VIRTUAL_REG_PCI_0_SIDE 0x040
+#define MV64340_I2O_OUTBOUND_QUEUE_PORT_VIRTUAL_REG_PCI_0_SIDE 0x044
+#define MV64340_I2O_QUEUE_CONTROL_REG_PCI_0_SIDE 0x050
+#define MV64340_I2O_QUEUE_BASE_ADDR_REG_PCI_0_SIDE 0x054
+#define MV64340_I2O_INBOUND_FREE_HEAD_POINTER_REG_PCI_0_SIDE 0x060
+#define MV64340_I2O_INBOUND_FREE_TAIL_POINTER_REG_PCI_0_SIDE 0x064
+#define MV64340_I2O_INBOUND_POST_HEAD_POINTER_REG_PCI_0_SIDE 0x068
+#define MV64340_I2O_INBOUND_POST_TAIL_POINTER_REG_PCI_0_SIDE 0x06C
+#define MV64340_I2O_OUTBOUND_FREE_HEAD_POINTER_REG_PCI_0_SIDE 0x070
+#define MV64340_I2O_OUTBOUND_FREE_TAIL_POINTER_REG_PCI_0_SIDE 0x074
+#define MV64340_I2O_OUTBOUND_POST_HEAD_POINTER_REG_PCI_0_SIDE 0x0F8
+#define MV64340_I2O_OUTBOUND_POST_TAIL_POINTER_REG_PCI_0_SIDE 0x0FC
+
+#define MV64340_I2O_INBOUND_MESSAGE_REG0_PCI_1_SIDE 0x090
+#define MV64340_I2O_INBOUND_MESSAGE_REG1_PCI_1_SIDE 0x094
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG0_PCI_1_SIDE 0x098
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG1_PCI_1_SIDE 0x09C
+#define MV64340_I2O_INBOUND_DOORBELL_REG_PCI_1_SIDE 0x0A0
+#define MV64340_I2O_INBOUND_INTERRUPT_CAUSE_REG_PCI_1_SIDE 0x0A4
+#define MV64340_I2O_INBOUND_INTERRUPT_MASK_REG_PCI_1_SIDE 0x0A8
+#define MV64340_I2O_OUTBOUND_DOORBELL_REG_PCI_1_SIDE 0x0AC
+#define MV64340_I2O_OUTBOUND_INTERRUPT_CAUSE_REG_PCI_1_SIDE 0x0B0
+#define MV64340_I2O_OUTBOUND_INTERRUPT_MASK_REG_PCI_1_SIDE 0x0B4
+#define MV64340_I2O_INBOUND_QUEUE_PORT_VIRTUAL_REG_PCI_1_SIDE 0x0C0
+#define MV64340_I2O_OUTBOUND_QUEUE_PORT_VIRTUAL_REG_PCI_1_SIDE 0x0C4
+#define MV64340_I2O_QUEUE_CONTROL_REG_PCI_1_SIDE 0x0D0
+#define MV64340_I2O_QUEUE_BASE_ADDR_REG_PCI_1_SIDE 0x0D4
+#define MV64340_I2O_INBOUND_FREE_HEAD_POINTER_REG_PCI_1_SIDE 0x0E0
+#define MV64340_I2O_INBOUND_FREE_TAIL_POINTER_REG_PCI_1_SIDE 0x0E4
+#define MV64340_I2O_INBOUND_POST_HEAD_POINTER_REG_PCI_1_SIDE 0x0E8
+#define MV64340_I2O_INBOUND_POST_TAIL_POINTER_REG_PCI_1_SIDE 0x0EC
+#define MV64340_I2O_OUTBOUND_FREE_HEAD_POINTER_REG_PCI_1_SIDE 0x0F0
+#define MV64340_I2O_OUTBOUND_FREE_TAIL_POINTER_REG_PCI_1_SIDE 0x0F4
+#define MV64340_I2O_OUTBOUND_POST_HEAD_POINTER_REG_PCI_1_SIDE 0x078
+#define MV64340_I2O_OUTBOUND_POST_TAIL_POINTER_REG_PCI_1_SIDE 0x07C
+
+#define MV64340_I2O_INBOUND_MESSAGE_REG0_CPU0_SIDE 0x1C10
+#define MV64340_I2O_INBOUND_MESSAGE_REG1_CPU0_SIDE 0x1C14
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG0_CPU0_SIDE 0x1C18
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG1_CPU0_SIDE 0x1C1C
+#define MV64340_I2O_INBOUND_DOORBELL_REG_CPU0_SIDE 0x1C20
+#define MV64340_I2O_INBOUND_INTERRUPT_CAUSE_REG_CPU0_SIDE 0x1C24
+#define MV64340_I2O_INBOUND_INTERRUPT_MASK_REG_CPU0_SIDE 0x1C28
+#define MV64340_I2O_OUTBOUND_DOORBELL_REG_CPU0_SIDE 0x1C2C
+#define MV64340_I2O_OUTBOUND_INTERRUPT_CAUSE_REG_CPU0_SIDE 0x1C30
+#define MV64340_I2O_OUTBOUND_INTERRUPT_MASK_REG_CPU0_SIDE 0x1C34
+#define MV64340_I2O_INBOUND_QUEUE_PORT_VIRTUAL_REG_CPU0_SIDE 0x1C40
+#define MV64340_I2O_OUTBOUND_QUEUE_PORT_VIRTUAL_REG_CPU0_SIDE 0x1C44
+#define MV64340_I2O_QUEUE_CONTROL_REG_CPU0_SIDE 0x1C50
+#define MV64340_I2O_QUEUE_BASE_ADDR_REG_CPU0_SIDE 0x1C54
+#define MV64340_I2O_INBOUND_FREE_HEAD_POINTER_REG_CPU0_SIDE 0x1C60
+#define MV64340_I2O_INBOUND_FREE_TAIL_POINTER_REG_CPU0_SIDE 0x1C64
+#define MV64340_I2O_INBOUND_POST_HEAD_POINTER_REG_CPU0_SIDE 0x1C68
+#define MV64340_I2O_INBOUND_POST_TAIL_POINTER_REG_CPU0_SIDE 0x1C6C
+#define MV64340_I2O_OUTBOUND_FREE_HEAD_POINTER_REG_CPU0_SIDE 0x1C70
+#define MV64340_I2O_OUTBOUND_FREE_TAIL_POINTER_REG_CPU0_SIDE 0x1C74
+#define MV64340_I2O_OUTBOUND_POST_HEAD_POINTER_REG_CPU0_SIDE 0x1CF8
+#define MV64340_I2O_OUTBOUND_POST_TAIL_POINTER_REG_CPU0_SIDE 0x1CFC
+#define MV64340_I2O_INBOUND_MESSAGE_REG0_CPU1_SIDE 0x1C90
+#define MV64340_I2O_INBOUND_MESSAGE_REG1_CPU1_SIDE 0x1C94
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG0_CPU1_SIDE 0x1C98
+#define MV64340_I2O_OUTBOUND_MESSAGE_REG1_CPU1_SIDE 0x1C9C
+#define MV64340_I2O_INBOUND_DOORBELL_REG_CPU1_SIDE 0x1CA0
+#define MV64340_I2O_INBOUND_INTERRUPT_CAUSE_REG_CPU1_SIDE 0x1CA4
+#define MV64340_I2O_INBOUND_INTERRUPT_MASK_REG_CPU1_SIDE 0x1CA8
+#define MV64340_I2O_OUTBOUND_DOORBELL_REG_CPU1_SIDE 0x1CAC
+#define MV64340_I2O_OUTBOUND_INTERRUPT_CAUSE_REG_CPU1_SIDE 0x1CB0
+#define MV64340_I2O_OUTBOUND_INTERRUPT_MASK_REG_CPU1_SIDE 0x1CB4
+#define MV64340_I2O_INBOUND_QUEUE_PORT_VIRTUAL_REG_CPU1_SIDE 0x1CC0
+#define MV64340_I2O_OUTBOUND_QUEUE_PORT_VIRTUAL_REG_CPU1_SIDE 0x1CC4
+#define MV64340_I2O_QUEUE_CONTROL_REG_CPU1_SIDE 0x1CD0
+#define MV64340_I2O_QUEUE_BASE_ADDR_REG_CPU1_SIDE 0x1CD4
+#define MV64340_I2O_INBOUND_FREE_HEAD_POINTER_REG_CPU1_SIDE 0x1CE0
+#define MV64340_I2O_INBOUND_FREE_TAIL_POINTER_REG_CPU1_SIDE 0x1CE4
+#define MV64340_I2O_INBOUND_POST_HEAD_POINTER_REG_CPU1_SIDE 0x1CE8
+#define MV64340_I2O_INBOUND_POST_TAIL_POINTER_REG_CPU1_SIDE 0x1CEC
+#define MV64340_I2O_OUTBOUND_FREE_HEAD_POINTER_REG_CPU1_SIDE 0x1CF0
+#define MV64340_I2O_OUTBOUND_FREE_TAIL_POINTER_REG_CPU1_SIDE 0x1CF4
+#define MV64340_I2O_OUTBOUND_POST_HEAD_POINTER_REG_CPU1_SIDE 0x1C78
+#define MV64340_I2O_OUTBOUND_POST_TAIL_POINTER_REG_CPU1_SIDE 0x1C7C
+
+/****************************************/
+/* Ethernet Unit Registers */
+/****************************************/
+
+#define MV64340_ETH_PHY_ADDR_REG 0x2000
+#define MV64340_ETH_SMI_REG 0x2004
+#define MV64340_ETH_UNIT_DEFAULT_ADDR_REG 0x2008
+#define MV64340_ETH_UNIT_DEFAULTID_REG 0x200c
+#define MV64340_ETH_UNIT_INTERRUPT_CAUSE_REG 0x2080
+#define MV64340_ETH_UNIT_INTERRUPT_MASK_REG 0x2084
+#define MV64340_ETH_UNIT_INTERNAL_USE_REG 0x24fc
+#define MV64340_ETH_UNIT_ERROR_ADDR_REG 0x2094
+#define MV64340_ETH_BAR_0 0x2200
+#define MV64340_ETH_BAR_1 0x2208
+#define MV64340_ETH_BAR_2 0x2210
+#define MV64340_ETH_BAR_3 0x2218
+#define MV64340_ETH_BAR_4 0x2220
+#define MV64340_ETH_BAR_5 0x2228
+#define MV64340_ETH_SIZE_REG_0 0x2204
+#define MV64340_ETH_SIZE_REG_1 0x220c
+#define MV64340_ETH_SIZE_REG_2 0x2214
+#define MV64340_ETH_SIZE_REG_3 0x221c
+#define MV64340_ETH_SIZE_REG_4 0x2224
+#define MV64340_ETH_SIZE_REG_5 0x222c
+#define MV64340_ETH_HEADERS_RETARGET_BASE_REG 0x2230
+#define MV64340_ETH_HEADERS_RETARGET_CONTROL_REG 0x2234
+#define MV64340_ETH_HIGH_ADDR_REMAP_REG_0 0x2280
+#define MV64340_ETH_HIGH_ADDR_REMAP_REG_1 0x2284
+#define MV64340_ETH_HIGH_ADDR_REMAP_REG_2 0x2288
+#define MV64340_ETH_HIGH_ADDR_REMAP_REG_3 0x228c
+#define MV64340_ETH_BASE_ADDR_ENABLE_REG 0x2290
+#define MV64340_ETH_ACCESS_PROTECTION_REG(port) (0x2294 + (port<<2))
+#define MV64340_ETH_MIB_COUNTERS_BASE(port) (0x3000 + (port<<7))
+#define MV64340_ETH_PORT_CONFIG_REG(port) (0x2400 + (port<<10))
+#define MV64340_ETH_PORT_CONFIG_EXTEND_REG(port) (0x2404 + (port<<10))
+#define MV64340_ETH_MII_SERIAL_PARAMETRS_REG(port) (0x2408 + (port<<10))
+#define MV64340_ETH_GMII_SERIAL_PARAMETRS_REG(port) (0x240c + (port<<10))
+#define MV64340_ETH_VLAN_ETHERTYPE_REG(port) (0x2410 + (port<<10))
+#define MV64340_ETH_MAC_ADDR_LOW(port) (0x2414 + (port<<10))
+#define MV64340_ETH_MAC_ADDR_HIGH(port) (0x2418 + (port<<10))
+#define MV64340_ETH_SDMA_CONFIG_REG(port) (0x241c + (port<<10))
+#define MV64340_ETH_DSCP_0(port) (0x2420 + (port<<10))
+#define MV64340_ETH_DSCP_1(port) (0x2424 + (port<<10))
+#define MV64340_ETH_DSCP_2(port) (0x2428 + (port<<10))
+#define MV64340_ETH_DSCP_3(port) (0x242c + (port<<10))
+#define MV64340_ETH_DSCP_4(port) (0x2430 + (port<<10))
+#define MV64340_ETH_DSCP_5(port) (0x2434 + (port<<10))
+#define MV64340_ETH_DSCP_6(port) (0x2438 + (port<<10))
+#define MV64340_ETH_PORT_SERIAL_CONTROL_REG(port) (0x243c + (port<<10))
+#define MV64340_ETH_VLAN_PRIORITY_TAG_TO_PRIORITY(port) (0x2440 + (port<<10))
+#define MV64340_ETH_PORT_STATUS_REG(port) (0x2444 + (port<<10))
+#define MV64340_ETH_TRANSMIT_QUEUE_COMMAND_REG(port) (0x2448 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_FIXED_PRIORITY(port) (0x244c + (port<<10))
+#define MV64340_ETH_PORT_TX_TOKEN_BUCKET_RATE_CONFIG(port) (0x2450 + (port<<10))
+#define MV64340_ETH_MAXIMUM_TRANSMIT_UNIT(port) (0x2458 + (port<<10))
+#define MV64340_ETH_PORT_MAXIMUM_TOKEN_BUCKET_SIZE(port) (0x245c + (port<<10))
+#define MV64340_ETH_INTERRUPT_CAUSE_REG(port) (0x2460 + (port<<10))
+#define MV64340_ETH_INTERRUPT_CAUSE_EXTEND_REG(port) (0x2464 + (port<<10))
+#define MV64340_ETH_INTERRUPT_MASK_REG(port) (0x2468 + (port<<10))
+#define MV64340_ETH_INTERRUPT_EXTEND_MASK_REG(port) (0x246c + (port<<10))
+#define MV64340_ETH_RX_FIFO_URGENT_THRESHOLD_REG(port) (0x2470 + (port<<10))
+#define MV64340_ETH_TX_FIFO_URGENT_THRESHOLD_REG(port) (0x2474 + (port<<10))
+#define MV64340_ETH_RX_MINIMAL_FRAME_SIZE_REG(port) (0x247c + (port<<10))
+#define MV64340_ETH_RX_DISCARDED_FRAMES_COUNTER(port) (0x2484 + (port<<10))
+#define MV64340_ETH_PORT_DEBUG_0_REG(port) (0x248c + (port<<10))
+#define MV64340_ETH_PORT_DEBUG_1_REG(port) (0x2490 + (port<<10))
+#define MV64340_ETH_PORT_INTERNAL_ADDR_ERROR_REG(port) (0x2494 + (port<<10))
+#define MV64340_ETH_INTERNAL_USE_REG(port) (0x24fc + (port<<10))
+#define MV64340_ETH_RECEIVE_QUEUE_COMMAND_REG(port) (0x2680 + (port<<10))
+#define MV64340_ETH_CURRENT_SERVED_TX_DESC_PTR(port) (0x2684 + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_0(port) (0x260c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_1(port) (0x261c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_2(port) (0x262c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_3(port) (0x263c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_4(port) (0x264c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_5(port) (0x265c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_6(port) (0x266c + (port<<10))
+#define MV64340_ETH_RX_CURRENT_QUEUE_DESC_PTR_7(port) (0x267c + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_0(port) (0x26c0 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_1(port) (0x26c4 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_2(port) (0x26c8 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_3(port) (0x26cc + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_4(port) (0x26d0 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_5(port) (0x26d4 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_6(port) (0x26d8 + (port<<10))
+#define MV64340_ETH_TX_CURRENT_QUEUE_DESC_PTR_7(port) (0x26dc + (port<<10))
+#define MV64340_ETH_TX_QUEUE_0_TOKEN_BUCKET_COUNT(port) (0x2700 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_1_TOKEN_BUCKET_COUNT(port) (0x2710 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_2_TOKEN_BUCKET_COUNT(port) (0x2720 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_3_TOKEN_BUCKET_COUNT(port) (0x2730 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_4_TOKEN_BUCKET_COUNT(port) (0x2740 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_5_TOKEN_BUCKET_COUNT(port) (0x2750 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_6_TOKEN_BUCKET_COUNT(port) (0x2760 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_7_TOKEN_BUCKET_COUNT(port) (0x2770 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_0_TOKEN_BUCKET_CONFIG(port) (0x2704 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_1_TOKEN_BUCKET_CONFIG(port) (0x2714 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_2_TOKEN_BUCKET_CONFIG(port) (0x2724 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_3_TOKEN_BUCKET_CONFIG(port) (0x2734 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_4_TOKEN_BUCKET_CONFIG(port) (0x2744 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_5_TOKEN_BUCKET_CONFIG(port) (0x2754 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_6_TOKEN_BUCKET_CONFIG(port) (0x2764 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_7_TOKEN_BUCKET_CONFIG(port) (0x2774 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_0_ARBITER_CONFIG(port) (0x2708 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_1_ARBITER_CONFIG(port) (0x2718 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_2_ARBITER_CONFIG(port) (0x2728 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_3_ARBITER_CONFIG(port) (0x2738 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_4_ARBITER_CONFIG(port) (0x2748 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_5_ARBITER_CONFIG(port) (0x2758 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_6_ARBITER_CONFIG(port) (0x2768 + (port<<10))
+#define MV64340_ETH_TX_QUEUE_7_ARBITER_CONFIG(port) (0x2778 + (port<<10))
+#define MV64340_ETH_PORT_TX_TOKEN_BUCKET_COUNT(port) (0x2780 + (port<<10))
+#define MV64340_ETH_DA_FILTER_SPECIAL_MULTICAST_TABLE_BASE(port) (0x3400 + (port<<10))
+#define MV64340_ETH_DA_FILTER_OTHER_MULTICAST_TABLE_BASE(port) (0x3500 + (port<<10))
+#define MV64340_ETH_DA_FILTER_UNICAST_TABLE_BASE(port) (0x3600 + (port<<10))
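+
+/*
+ * Note (illustrative, derived from the macros above): the per-port
+ * Ethernet registers are spaced 0x400 bytes apart, hence the
+ * (port<<10) term. For example, MV64340_ETH_PORT_CONFIG_REG(1)
+ * expands to 0x2400 + (1<<10) = 0x2800.
+ */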
+
+/*******************************************/
+/* CUNIT Registers */
+/*******************************************/
+
+ /* Address Decoding Register Map */
+
+#define MV64340_CUNIT_BASE_ADDR_REG0 0xf200
+#define MV64340_CUNIT_BASE_ADDR_REG1 0xf208
+#define MV64340_CUNIT_BASE_ADDR_REG2 0xf210
+#define MV64340_CUNIT_BASE_ADDR_REG3 0xf218
+#define MV64340_CUNIT_SIZE0 0xf204
+#define MV64340_CUNIT_SIZE1 0xf20c
+#define MV64340_CUNIT_SIZE2 0xf214
+#define MV64340_CUNIT_SIZE3 0xf21c
+#define MV64340_CUNIT_HIGH_ADDR_REMAP_REG0 0xf240
+#define MV64340_CUNIT_HIGH_ADDR_REMAP_REG1 0xf244
+#define MV64340_CUNIT_BASE_ADDR_ENABLE_REG 0xf250
+#define MV64340_MPSC0_ACCESS_PROTECTION_REG 0xf254
+#define MV64340_MPSC1_ACCESS_PROTECTION_REG 0xf258
+#define MV64340_CUNIT_INTERNAL_SPACE_BASE_ADDR_REG 0xf25C
+
+ /* Error Report Registers */
+
+#define MV64340_CUNIT_INTERRUPT_CAUSE_REG 0xf310
+#define MV64340_CUNIT_INTERRUPT_MASK_REG 0xf314
+#define MV64340_CUNIT_ERROR_ADDR 0xf318
+
+ /* Cunit Control Registers */
+
+#define MV64340_CUNIT_ARBITER_CONTROL_REG 0xf300
+#define MV64340_CUNIT_CONFIG_REG 0xb40c
+#define MV64340_CUNIT_CRROSBAR_TIMEOUT_REG 0xf304
+
+ /* Cunit Debug Registers */
+
+#define MV64340_CUNIT_DEBUG_LOW 0xf340
+#define MV64340_CUNIT_DEBUG_HIGH 0xf344
+#define MV64340_CUNIT_MMASK 0xf380
+
+ /* MPSCs Clocks Routing Registers */
+
+#define MV64340_MPSC_ROUTING_REG 0xb400
+#define MV64340_MPSC_RX_CLOCK_ROUTING_REG 0xb404
+#define MV64340_MPSC_TX_CLOCK_ROUTING_REG 0xb408
+
+ /* MPSCs Interrupts Registers */
+
+#define MV64340_MPSC_CAUSE_REG(port) (0xb804 + (port<<3))
+#define MV64340_MPSC_MASK_REG(port) (0xb884 + (port<<3))
+
+#define MV64340_MPSC_MAIN_CONFIG_LOW(port) (0x8000 + (port<<12))
+#define MV64340_MPSC_MAIN_CONFIG_HIGH(port) (0x8004 + (port<<12))
+#define MV64340_MPSC_PROTOCOL_CONFIG(port) (0x8008 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG1(port) (0x800c + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG2(port) (0x8010 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG3(port) (0x8014 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG4(port) (0x8018 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG5(port) (0x801c + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG6(port) (0x8020 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG7(port) (0x8024 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG8(port) (0x8028 + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG9(port) (0x802c + (port<<12))
+#define MV64340_MPSC_CHANNEL_REG10(port) (0x8030 + (port<<12))
+
+ /* MPSC0 Registers */
+
+
+/***************************************/
+/* SDMA Registers */
+/***************************************/
+
+#define MV64340_SDMA_CONFIG_REG(channel) (0x4000 + (channel<<13))
+#define MV64340_SDMA_COMMAND_REG(channel) (0x4008 + (channel<<13))
+#define MV64340_SDMA_CURRENT_RX_DESCRIPTOR_POINTER(channel) (0x4810 + (channel<<13))
+#define MV64340_SDMA_CURRENT_TX_DESCRIPTOR_POINTER(channel) (0x4c10 + (channel<<13))
+#define MV64340_SDMA_FIRST_TX_DESCRIPTOR_POINTER(channel) (0x4c14 + (channel<<13))
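+
+/*
+ * Note (illustrative, derived from the macros above): SDMA registers
+ * are per-channel, spaced 0x2000 bytes apart via the (channel<<13)
+ * term. For example, MV64340_SDMA_COMMAND_REG(1) expands to
+ * 0x4008 + (1<<13) = 0x6008.
+ */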
+
+#define MV64340_SDMA_CAUSE_REG 0xb800
+#define MV64340_SDMA_MASK_REG 0xb880
+
+/* BRG Interrupts */
+
+#define MV64340_BRG_CONFIG_REG(brg) (0xb200 + (brg<<3))
+#define MV64340_BRG_BAUDE_TUNING_REG(brg) (0xb208 + (brg<<3))
+#define MV64340_BRG_CAUSE_REG 0xb834
+#define MV64340_BRG_MASK_REG 0xb8b4
+
+/****************************************/
+/* DMA Channel Control */
+/****************************************/
+
+#define MV64340_DMA_CHANNEL0_CONTROL 0x840
+#define MV64340_DMA_CHANNEL0_CONTROL_HIGH 0x880
+#define MV64340_DMA_CHANNEL1_CONTROL 0x844
+#define MV64340_DMA_CHANNEL1_CONTROL_HIGH 0x884
+#define MV64340_DMA_CHANNEL2_CONTROL 0x848
+#define MV64340_DMA_CHANNEL2_CONTROL_HIGH 0x888
+#define MV64340_DMA_CHANNEL3_CONTROL 0x84C
+#define MV64340_DMA_CHANNEL3_CONTROL_HIGH 0x88C
+
+
+/****************************************/
+/* IDMA Registers */
+/****************************************/
+
+#define MV64340_DMA_CHANNEL0_BYTE_COUNT 0x800
+#define MV64340_DMA_CHANNEL1_BYTE_COUNT 0x804
+#define MV64340_DMA_CHANNEL2_BYTE_COUNT 0x808
+#define MV64340_DMA_CHANNEL3_BYTE_COUNT 0x80C
+#define MV64340_DMA_CHANNEL0_SOURCE_ADDR 0x810
+#define MV64340_DMA_CHANNEL1_SOURCE_ADDR 0x814
+#define MV64340_DMA_CHANNEL2_SOURCE_ADDR 0x818
+#define MV64340_DMA_CHANNEL3_SOURCE_ADDR 0x81c
+#define MV64340_DMA_CHANNEL0_DESTINATION_ADDR 0x820
+#define MV64340_DMA_CHANNEL1_DESTINATION_ADDR 0x824
+#define MV64340_DMA_CHANNEL2_DESTINATION_ADDR 0x828
+#define MV64340_DMA_CHANNEL3_DESTINATION_ADDR 0x82C
+#define MV64340_DMA_CHANNEL0_NEXT_DESCRIPTOR_POINTER 0x830
+#define MV64340_DMA_CHANNEL1_NEXT_DESCRIPTOR_POINTER 0x834
+#define MV64340_DMA_CHANNEL2_NEXT_DESCRIPTOR_POINTER 0x838
+#define MV64340_DMA_CHANNEL3_NEXT_DESCRIPTOR_POINTER 0x83C
+#define MV64340_DMA_CHANNEL0_CURRENT_DESCRIPTOR_POINTER 0x870
+#define MV64340_DMA_CHANNEL1_CURRENT_DESCRIPTOR_POINTER 0x874
+#define MV64340_DMA_CHANNEL2_CURRENT_DESCRIPTOR_POINTER 0x878
+#define MV64340_DMA_CHANNEL3_CURRENT_DESCRIPTOR_POINTER 0x87C
+
+ /* IDMA Address Decoding Base Address Registers */
+
+#define MV64340_DMA_BASE_ADDR_REG0 0xa00
+#define MV64340_DMA_BASE_ADDR_REG1 0xa08
+#define MV64340_DMA_BASE_ADDR_REG2 0xa10
+#define MV64340_DMA_BASE_ADDR_REG3 0xa18
+#define MV64340_DMA_BASE_ADDR_REG4 0xa20
+#define MV64340_DMA_BASE_ADDR_REG5 0xa28
+#define MV64340_DMA_BASE_ADDR_REG6 0xa30
+#define MV64340_DMA_BASE_ADDR_REG7 0xa38
+
+ /* IDMA Address Decoding Size Address Register */
+
+#define MV64340_DMA_SIZE_REG0 0xa04
+#define MV64340_DMA_SIZE_REG1 0xa0c
+#define MV64340_DMA_SIZE_REG2 0xa14
+#define MV64340_DMA_SIZE_REG3 0xa1c
+#define MV64340_DMA_SIZE_REG4 0xa24
+#define MV64340_DMA_SIZE_REG5 0xa2c
+#define MV64340_DMA_SIZE_REG6 0xa34
+#define MV64340_DMA_SIZE_REG7 0xa3C
+
+ /* IDMA Address Decoding High Address Remap and Access
+ Protection Registers */
+
+#define MV64340_DMA_HIGH_ADDR_REMAP_REG0 0xa60
+#define MV64340_DMA_HIGH_ADDR_REMAP_REG1 0xa64
+#define MV64340_DMA_HIGH_ADDR_REMAP_REG2 0xa68
+#define MV64340_DMA_HIGH_ADDR_REMAP_REG3 0xa6C
+#define MV64340_DMA_BASE_ADDR_ENABLE_REG 0xa80
+#define MV64340_DMA_CHANNEL0_ACCESS_PROTECTION_REG 0xa70
+#define MV64340_DMA_CHANNEL1_ACCESS_PROTECTION_REG 0xa74
+#define MV64340_DMA_CHANNEL2_ACCESS_PROTECTION_REG 0xa78
+#define MV64340_DMA_CHANNEL3_ACCESS_PROTECTION_REG 0xa7c
+#define MV64340_DMA_ARBITER_CONTROL 0x860
+#define MV64340_DMA_CROSS_BAR_TIMEOUT 0x8d0
+
+ /* IDMA Headers Retarget Registers */
+
+#define MV64340_DMA_HEADERS_RETARGET_CONTROL 0xa84
+#define MV64340_DMA_HEADERS_RETARGET_BASE 0xa88
+
+ /* IDMA Interrupt Register */
+
+#define MV64340_DMA_INTERRUPT_CAUSE_REG 0x8c0
+#define MV64340_DMA_INTERRUPT_CAUSE_MASK 0x8c4
+#define MV64340_DMA_ERROR_ADDR 0x8c8
+#define MV64340_DMA_ERROR_SELECT 0x8cc
+
+ /* IDMA Debug Register ( for internal use ) */
+
+#define MV64340_DMA_DEBUG_LOW 0x8e0
+#define MV64340_DMA_DEBUG_HIGH 0x8e4
+#define MV64340_DMA_SPARE 0xA8C
+
+/****************************************/
+/* Timer_Counter */
+/****************************************/
+
+#define MV64340_TIMER_COUNTER0 0x850
+#define MV64340_TIMER_COUNTER1 0x854
+#define MV64340_TIMER_COUNTER2 0x858
+#define MV64340_TIMER_COUNTER3 0x85C
+#define MV64340_TIMER_COUNTER_0_3_CONTROL 0x864
+#define MV64340_TIMER_COUNTER_0_3_INTERRUPT_CAUSE 0x868
+#define MV64340_TIMER_COUNTER_0_3_INTERRUPT_MASK 0x86c
+
+/****************************************/
+/* Watchdog registers */
+/****************************************/
+
+#define MV64340_WATCHDOG_CONFIG_REG 0xb410
+#define MV64340_WATCHDOG_VALUE_REG 0xb414
+
+/****************************************/
+/* I2C Registers */
+/****************************************/
+
+#define MV64340_I2C_SLAVE_ADDR 0xc000
+#define MV64340_I2C_EXTENDED_SLAVE_ADDR 0xc010
+#define MV64340_I2C_DATA 0xc004
+#define MV64340_I2C_CONTROL 0xc008
+#define MV64340_I2C_STATUS_BAUDE_RATE 0xc00C
+#define MV64340_I2C_SOFT_RESET 0xc01c
+
+/****************************************/
+/* GPP Interface Registers */
+/****************************************/
+
+#define MV64340_GPP_IO_CONTROL 0xf100
+#define MV64340_GPP_LEVEL_CONTROL 0xf110
+#define MV64340_GPP_VALUE 0xf104
+#define MV64340_GPP_INTERRUPT_CAUSE 0xf108
+#define MV64340_GPP_INTERRUPT_MASK0 0xf10c
+#define MV64340_GPP_INTERRUPT_MASK1 0xf114
+#define MV64340_GPP_VALUE_SET 0xf118
+#define MV64340_GPP_VALUE_CLEAR 0xf11c
+
+/****************************************/
+/* Interrupt Controller Registers */
+/****************************************/
+
+/****************************************/
+/* Interrupts */
+/****************************************/
+
+#define MV64340_MAIN_INTERRUPT_CAUSE_LOW 0x004
+#define MV64340_MAIN_INTERRUPT_CAUSE_HIGH 0x00c
+#define MV64340_CPU_INTERRUPT0_MASK_LOW 0x014
+#define MV64340_CPU_INTERRUPT0_MASK_HIGH 0x01c
+#define MV64340_CPU_INTERRUPT0_SELECT_CAUSE 0x024
+#define MV64340_CPU_INTERRUPT1_MASK_LOW 0x034
+#define MV64340_CPU_INTERRUPT1_MASK_HIGH 0x03c
+#define MV64340_CPU_INTERRUPT1_SELECT_CAUSE 0x044
+#define MV64340_INTERRUPT0_MASK_0_LOW 0x054
+#define MV64340_INTERRUPT0_MASK_0_HIGH 0x05c
+#define MV64340_INTERRUPT0_SELECT_CAUSE 0x064
+#define MV64340_INTERRUPT1_MASK_0_LOW 0x074
+#define MV64340_INTERRUPT1_MASK_0_HIGH 0x07c
+#define MV64340_INTERRUPT1_SELECT_CAUSE 0x084
+
+/****************************************/
+/* MPP Interface Registers */
+/****************************************/
+
+#define MV64340_MPP_CONTROL0 0xf000
+#define MV64340_MPP_CONTROL1 0xf004
+#define MV64340_MPP_CONTROL2 0xf008
+#define MV64340_MPP_CONTROL3 0xf00c
+
+/****************************************/
+/* Serial Initialization registers */
+/****************************************/
+
+#define MV64340_SERIAL_INIT_LAST_DATA 0xf324
+#define MV64340_SERIAL_INIT_CONTROL 0xf328
+#define MV64340_SERIAL_INIT_STATUS 0xf32c
+
+extern void mv64340_irq_init(unsigned int base);
+
+#endif /* __ASM_MV64340_H */
--- /dev/null
+#ifndef _ASM_RELAY_H
+#define _ASM_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_RELAY_H
+#define _ASM_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_PARISC_RELAY_H
+#define _ASM_PARISC_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_PPC_RELAY_H
+#define _ASM_PPC_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+/*
+ * This file describes the structure passed from the BootX application
+ * (for MacOS) when it is used to boot Linux.
+ *
+ * Written by Benjamin Herrenschmidt.
+ */
+
+
+#ifndef __ASM_BOOTX_H__
+#define __ASM_BOOTX_H__
+
+#ifdef macintosh
+#include <Types.h>
+#include "linux_type_defs.h"
+#endif
+
+#ifdef macintosh
+/* All this requires PowerPC alignment */
+#pragma options align=power
+#endif
+
+/* On kernel entry:
+ *
+ * r3 = 0x426f6f58 ('BooX')
+ * r4 = pointer to boot_infos
+ * r5 = NULL
+ *
+ * Data and instruction translation disabled, interrupts
+ * disabled, kernel loaded at physical 0x00000000 on PCI
+ * machines (will be different on NuBus).
+ */
+
+#define BOOT_INFO_VERSION 5
+#define BOOT_INFO_COMPATIBLE_VERSION 1
+
+/* Bits in the architecture flag mask. More to be defined in
+ future versions. Note that either BOOT_ARCH_PCI or
+ BOOT_ARCH_NUBUS is set. The other BOOT_ARCH_NUBUS_xxx are
+ set additionally when BOOT_ARCH_NUBUS is set.
+ */
+#define BOOT_ARCH_PCI 0x00000001UL
+#define BOOT_ARCH_NUBUS 0x00000002UL
+#define BOOT_ARCH_NUBUS_PDM 0x00000010UL
+#define BOOT_ARCH_NUBUS_PERFORMA 0x00000020UL
+#define BOOT_ARCH_NUBUS_POWERBOOK 0x00000040UL
+
+/* Maximum number of ranges in phys memory map */
+#define MAX_MEM_MAP_SIZE 26
+
+/* This is the format of an element in the physical memory map. Note that
+ the map is optional and current BootX will only build it for pre-PCI
+ machines */
+typedef struct boot_info_map_entry
+{
+ __u32 physAddr; /* Physical starting address */
+ __u32 size; /* Size in bytes */
+} boot_info_map_entry_t;
+
+
+/* Here is the boot information that is passed to the bootstrap.
+ * Note that the kernel arguments and the device tree are appended
+ * at the end of this structure. */
+typedef struct boot_infos
+{
+ /* Version of this structure */
+ __u32 version;
+ /* backward compatible down to version: */
+ __u32 compatible_version;
+
+ /* NEW (vers. 2) this holds the current _logical_ base addr of
+ the frame buffer (for use by early boot message) */
+ __u8* logicalDisplayBase;
+
+ /* NEW (vers. 4) Apple's machine identification */
+ __u32 machineID;
+
+ /* NEW (vers. 4) Detected hw architecture */
+ __u32 architecture;
+
+ /* The device tree (internal addresses relative to the beginning of the tree,
+ * device tree offset relative to the beginning of this structure).
+ * On pre-PCI macintosh (BOOT_ARCH_PCI bit set to 0 in architecture), this
+ * field is 0.
+ */
+ __u32 deviceTreeOffset; /* Device tree offset */
+ __u32 deviceTreeSize; /* Size of the device tree */
+
+ /* Some infos about the current MacOS display */
+ __u32 dispDeviceRect[4]; /* left,top,right,bottom */
+ __u32 dispDeviceDepth; /* (8, 16 or 32) */
+ __u8* dispDeviceBase; /* base address (physical) */
+ __u32 dispDeviceRowBytes; /* rowbytes (in bytes) */
+ __u32 dispDeviceColorsOffset; /* Colormap (8 bits only) or 0 (*) */
+ /* Optional offset in the registry to the current
+ * MacOS display. (Can be 0 when not detected) */
+ __u32 dispDeviceRegEntryOffset;
+
+ /* Optional pointer to boot ramdisk (offset from this structure) */
+ __u32 ramDisk;
+ __u32 ramDiskSize; /* size of ramdisk image */
+
+ /* Kernel command line arguments (offset from this structure) */
+ __u32 kernelParamsOffset;
+
+ /* ALL BELOW NEW (vers. 4) */
+
+ /* This defines the physical memory. Valid with BOOT_ARCH_NUBUS flag
+ (non-PCI) only. On PCI, memory is contiguous and its size is in the
+ device-tree. */
+ boot_info_map_entry_t
+ physMemoryMap[MAX_MEM_MAP_SIZE]; /* Where the phys memory is */
+ __u32 physMemoryMapSize; /* How many entries in map */
+
+
+ /* The framebuffer size (optional, currently 0) */
+ __u32 frameBufferSize; /* Represents a max size, can be 0. */
+
+ /* NEW (vers. 5) */
+
+ /* Total params size (args + colormap + device tree + ramdisk) */
+ __u32 totalParamsSize;
+
+} boot_infos_t;
+
+/* (*) The format of the colormap is 256 * 3 * 2 bytes. Each color index is represented
+ * by 3 short words, each containing a 16-bit (unsigned) color component.
+ * Later versions may contain the gamma table for direct-color devices here.
+ */
+#define BOOTX_COLORTABLE_SIZE (256UL*3UL*2UL)
+
+#ifdef macintosh
+#pragma options align=reset
+#endif
+
+#endif
--- /dev/null
+#ifndef _ASM_PPC64_RELAY_H
+#define _ASM_PPC64_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_S390_RELAY_H
+#define _ASM_S390_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_SH_RELAY_H
+#define _ASM_SH_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef __ASM_SH64_SMPLOCK_H
+#define __ASM_SH64_SMPLOCK_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/smplock.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/config.h>
+
+#ifndef CONFIG_SMP
+
+#define lock_kernel() do { } while(0)
+#define unlock_kernel() do { } while(0)
+#define release_kernel_lock(task, cpu, depth) ((depth) = 1)
+#define reacquire_kernel_lock(task, cpu, depth) do { } while(0)
+
+#else
+
+#error "We do not support SMP on SH64 yet"
+/*
+ * Default SMP lock implementation
+ */
+
+#include <linux/interrupt.h>
+#include <asm/spinlock.h>
+
+extern spinlock_t kernel_flag;
+
+/*
+ * Getting the big kernel lock.
+ *
+ * This cannot happen asynchronously,
+ * so we only need to worry about other
+ * CPUs.
+ */
+extern __inline__ void lock_kernel(void)
+{
+ if (!++current->lock_depth)
+ spin_lock(&kernel_flag);
+}
+
+extern __inline__ void unlock_kernel(void)
+{
+ if (--current->lock_depth < 0)
+ spin_unlock(&kernel_flag);
+}
+
+/*
+ * Release global kernel lock and global interrupt lock
+ */
+#define release_kernel_lock(task, cpu) \
+do { \
+ if (task->lock_depth >= 0) \
+ spin_unlock(&kernel_flag); \
+ release_irqlock(cpu); \
+ __sti(); \
+} while (0)
+
+/*
+ * Re-acquire the kernel lock
+ */
+#define reacquire_kernel_lock(task) \
+do { \
+ if (task->lock_depth >= 0) \
+ spin_lock(&kernel_flag); \
+} while (0)
+
+#endif /* CONFIG_SMP */
+
+#endif /* __ASM_SH64_SMPLOCK_H */
--- /dev/null
+#ifndef __ASM_SH_SOFTIRQ_H
+#define __ASM_SH_SOFTIRQ_H
+
+#include <asm/atomic.h>
+#include <asm/hardirq.h>
+
+#define local_bh_disable() \
+do { \
+ local_bh_count(smp_processor_id())++; \
+ barrier(); \
+} while (0)
+
+#define __local_bh_enable() \
+do { \
+ barrier(); \
+ local_bh_count(smp_processor_id())--; \
+} while (0)
+
+#define local_bh_enable() \
+do { \
+ barrier(); \
+ if (!--local_bh_count(smp_processor_id()) \
+ && softirq_pending(smp_processor_id())) { \
+ do_softirq(); \
+ } \
+} while (0)
+
+#define in_softirq() (local_bh_count(smp_processor_id()) != 0)
+
+#endif /* __ASM_SH_SOFTIRQ_H */
--- /dev/null
+#ifndef _ASM_SPARC_RELAY_H
+#define _ASM_SPARC_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_SPARC64_RELAY_H
+#define _ASM_SPARC64_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_UM_CPUMASK_H
+#define _ASM_UM_CPUMASK_H
+
+#include <asm-generic/cpumask.h>
+
+#endif /* _ASM_UM_CPUMASK_H */
--- /dev/null
+#ifndef _UM_INIT_H
+#define _UM_INIT_H
+
+#ifdef notdef
+#define __init
+#define __initdata
+#define __initfunc(__arginit) __arginit
+#define __cacheline_aligned
+#endif
+
+#endif
#define HAVE_ARCH_VALIDATE
#define devmem_is_allowed(x) 1
-extern int arch_free_page(struct page *page, int order);
+extern void arch_free_page(struct page *page, int order);
#define HAVE_ARCH_FREE_PAGE
#endif
--- /dev/null
+#ifndef __UM_SMPLOCK_H
+#define __UM_SMPLOCK_H
+
+#include "asm/arch/smplock.h"
+
+#endif
--- /dev/null
+#ifndef __V850_RELAY_H
+#define __V850_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
--- /dev/null
+#ifndef _ASM_X86_64_RELAY_H
+#define _ASM_X86_64_RELAY_H
+
+#include <asm-generic/relay.h>
+#endif
*/
#define IDT_ENTRIES 256
-#define KERN_PHYS_OFFSET (CONFIG_KERN_PHYS_OFFSET * 0x100000)
-
#endif
--- /dev/null
+/*
+ * Kernel header file for Linux crash dumps.
+ *
+ * Created by: Matt Robinson (yakker@sgi.com)
+ * Copyright 1999 - 2002 Silicon Graphics, Inc. All rights reserved.
+ *
+ * vmdump.h to dump.h by: Matt D. Robinson (yakker@sourceforge.net)
+ * Copyright 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 Free Software Foundation, Inc. All rights reserved.
+ *
+ * Most of this is the same old stuff from vmdump.h, except now we're
+ * actually a stand-alone driver plugged into the block layer interface,
+ * with the exception that we now allow for compression modes externally
+ * loaded (e.g., someone can come up with their own).
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+/* This header file includes all structure definitions for crash dumps. */
+#ifndef _DUMP_H
+#define _DUMP_H
+
+#if defined(CONFIG_CRASH_DUMP)
+
+#include <linux/list.h>
+#include <linux/notifier.h>
+#include <linux/dumpdev.h>
+#include <asm/ioctl.h>
+
+/*
+ * Predefine default DUMP_PAGE constants, asm header may override.
+ *
+ * On ia64 discontinuous memory systems it's possible for the memory
+ * banks to stop at 2**12 page alignments, the smallest possible page
+ * size. But the system page size, PAGE_SIZE, is in fact larger.
+ */
+#define DUMP_PAGE_SHIFT PAGE_SHIFT
+#define DUMP_PAGE_MASK PAGE_MASK
+#define DUMP_PAGE_ALIGN(addr) PAGE_ALIGN(addr)
+#define DUMP_HEADER_OFFSET PAGE_SIZE
+
+#define OLDMINORBITS 8
+#define OLDMINORMASK ((1U << OLDMINORBITS) -1)
+
+/* Keep DUMP_PAGE_SIZE constant at 4K (1 << 12);
+ * it may therefore differ from PAGE_SIZE.
+ */
+#define DUMP_PAGE_SIZE 4096
+
+/*
+ * Predefined default memcpy() to use when copying memory to the dump buffer.
+ *
+ * On ia64 there is a heads up function that can be called to let the prom
+ * machine check monitor know that the current activity is risky and it should
+ * ignore the fault (nofault). In this case the ia64 header will redefine this
+ * macro to __dump_memcpy() and use its arch-specific version.
+ */
+#define DUMP_memcpy memcpy
+
+/* necessary header files */
+#include <asm/dump.h> /* for architecture-specific header */
+
+/*
+ * Size of the buffer that's used to hold:
+ *
+ * 1. the dump header (padded to fill the complete buffer)
+ * 2. the possibly compressed page headers and data
+ */
+#define DUMP_BUFFER_SIZE (64 * 1024) /* size of dump buffer */
+#define DUMP_HEADER_SIZE DUMP_BUFFER_SIZE
+
+/* standard header definitions */
+#define DUMP_MAGIC_NUMBER 0xa8190173618f23edULL /* dump magic number */
+#define DUMP_MAGIC_LIVE 0xa8190173618f23cdULL /* live magic number */
+#define DUMP_VERSION_NUMBER 0x8 /* dump version number */
+#define DUMP_PANIC_LEN 0x100 /* dump panic string length */
+
+/* dump levels - type specific stuff added later -- add as necessary */
+#define DUMP_LEVEL_NONE 0x0 /* no dumping at all -- just bail */
+#define DUMP_LEVEL_HEADER 0x1 /* kernel dump header only */
+#define DUMP_LEVEL_KERN 0x2 /* dump header and kernel pages */
+#define DUMP_LEVEL_USED 0x4 /* dump header, kernel/user pages */
+#define DUMP_LEVEL_ALL_RAM 0x8 /* dump header, all RAM pages */
+#define DUMP_LEVEL_ALL 0x10 /* dump all memory RAM and firmware */
+
+
+/* dump compression options -- add as necessary */
+#define DUMP_COMPRESS_NONE 0x0 /* don't compress this dump */
+#define DUMP_COMPRESS_RLE 0x1 /* use RLE compression */
+#define DUMP_COMPRESS_GZIP 0x2 /* use GZIP compression */
+
+/* dump flags - any dump-type specific flags -- add as necessary */
+#define DUMP_FLAGS_NONE 0x0 /* no flags are set for this dump */
+#define DUMP_FLAGS_NONDISRUPT 0x1 /* non-disruptive dumping */
+#define DUMP_FLAGS_SOFTBOOT 0x2 /* 2 stage soft-boot based dump */
+
+#define DUMP_FLAGS_TARGETMASK 0xf0000000 /* handle special case targets */
+#define DUMP_FLAGS_DISKDUMP 0x80000000 /* dump to local disk */
+#define DUMP_FLAGS_NETDUMP 0x40000000 /* dump over the network */
+
+/* dump header flags -- add as necessary */
+#define DUMP_DH_FLAGS_NONE 0x0 /* no flags set (error condition!) */
+#define DUMP_DH_RAW 0x1 /* raw page (no compression) */
+#define DUMP_DH_COMPRESSED 0x2 /* page is compressed */
+#define DUMP_DH_END 0x4 /* end marker on a full dump */
+#define DUMP_DH_TRUNCATED 0x8 /* dump is incomplete */
+#define DUMP_DH_TEST_PATTERN 0x10 /* dump page is a test pattern */
+#define DUMP_DH_NOT_USED 0x20 /* 1st bit not used in flags */
+
+/* names for various dump parameters in /proc/kernel */
+#define DUMP_ROOT_NAME "sys/dump"
+#define DUMP_DEVICE_NAME "device"
+#define DUMP_COMPRESS_NAME "compress"
+#define DUMP_LEVEL_NAME "level"
+#define DUMP_FLAGS_NAME "flags"
+#define DUMP_ADDR_NAME "addr"
+
+#define DUMP_SYSRQ_KEY 'd' /* key to use for MAGIC_SYSRQ key */
+
+/* CTL_DUMP names: */
+enum
+{
+ CTL_DUMP_DEVICE=1,
+ CTL_DUMP_COMPRESS=2,
+ CTL_DUMP_LEVEL=3,
+ CTL_DUMP_FLAGS=4,
+ CTL_DUMP_ADDR=5,
+ CTL_DUMP_TEST=6,
+};
+
+
+/* page size for gzip compression -- buffered slightly beyond hardware PAGE_SIZE used by DUMP */
+#define DUMP_DPC_PAGE_SIZE (DUMP_PAGE_SIZE + 512)
+
+/* dump ioctl() control options */
+#define DIOSDUMPDEV _IOW('p', 0xA0, unsigned int) /* set the dump device */
+#define DIOGDUMPDEV _IOR('p', 0xA1, unsigned int) /* get the dump device */
+#define DIOSDUMPLEVEL _IOW('p', 0xA2, unsigned int) /* set the dump level */
+#define DIOGDUMPLEVEL _IOR('p', 0xA3, unsigned int) /* get the dump level */
+#define DIOSDUMPFLAGS _IOW('p', 0xA4, unsigned int) /* set the dump flag parameters */
+#define DIOGDUMPFLAGS _IOR('p', 0xA5, unsigned int) /* get the dump flag parameters */
+#define DIOSDUMPCOMPRESS _IOW('p', 0xA6, unsigned int) /* set the dump compress level */
+#define DIOGDUMPCOMPRESS _IOR('p', 0xA7, unsigned int) /* get the dump compress level */
+
+/* these ioctls are used only by netdump module */
+#define DIOSTARGETIP _IOW('p', 0xA8, unsigned int) /* set the target m/c's ip */
+#define DIOGTARGETIP _IOR('p', 0xA9, unsigned int) /* get the target m/c's ip */
+#define DIOSTARGETPORT _IOW('p', 0xAA, unsigned int) /* set the target m/c's port */
+#define DIOGTARGETPORT _IOR('p', 0xAB, unsigned int) /* get the target m/c's port */
+#define DIOSSOURCEPORT _IOW('p', 0xAC, unsigned int) /* set the source m/c's port */
+#define DIOGSOURCEPORT _IOR('p', 0xAD, unsigned int) /* get the source m/c's port */
+#define DIOSETHADDR _IOW('p', 0xAE, unsigned int) /* set ethernet address */
+#define DIOGETHADDR _IOR('p', 0xAF, unsigned int) /* get ethernet address */
+#define DIOGDUMPOKAY _IOR('p', 0xB0, unsigned int) /* check if dump is configured */
+#define DIOSDUMPTAKE _IOW('p', 0xB1, unsigned int) /* Take a manual dump */
+
+/*
+ * Structure: __dump_header
+ * Function: This is the header dumped at the top of every valid crash
+ * dump.
+ */
+struct __dump_header {
+ /* the dump magic number -- unique to verify dump is valid */
+ u64 dh_magic_number;
+
+ /* the version number of this dump */
+ u32 dh_version;
+
+ /* the size of this header (in case we can't read it) */
+ u32 dh_header_size;
+
+ /* the level of this dump (just a header?) */
+ u32 dh_dump_level;
+
+ /*
+ * We assume dump_page_size to be 4K in every case.
+ * Store here the configurable system page size (4K, 8K, 16K, etc.)
+ */
+ u32 dh_page_size;
+
+ /* the size of all physical memory */
+ u64 dh_memory_size;
+
+ /* the start of physical memory */
+ u64 dh_memory_start;
+
+ /* the end of physical memory */
+ u64 dh_memory_end;
+
+ /* the number of hardware/physical pages in this dump specifically */
+ u32 dh_num_dump_pages;
+
+ /* the panic string, if available */
+ char dh_panic_string[DUMP_PANIC_LEN];
+
+ /* timeval depends on architecture, two long values */
+ struct {
+ u64 tv_sec;
+ u64 tv_usec;
+ } dh_time; /* the time of the system crash */
+
+ /* the NEW utsname (uname) information -- in character form */
+ /* we do this so we don't have to include utsname.h */
+ /* plus it helps us be more architecture independent */
+ /* now maybe one day soon they'll make the [65] a #define! */
+ char dh_utsname_sysname[65];
+ char dh_utsname_nodename[65];
+ char dh_utsname_release[65];
+ char dh_utsname_version[65];
+ char dh_utsname_machine[65];
+ char dh_utsname_domainname[65];
+
+ /* the address of current task (OLD = void *, NEW = u64) */
+ u64 dh_current_task;
+
+ /* what type of compression we're using in this dump (if any) */
+ u32 dh_dump_compress;
+
+ /* any additional flags */
+ u32 dh_dump_flags;
+
+ /* any additional flags */
+ u32 dh_dump_device;
+} __attribute__((packed));
+
+/*
+ * Structure: __dump_page
+ * Function: To act as the header associated to each physical page of
+ * memory saved in the system crash dump. This allows for
+ * easy reassembly of each crash dump page. The address bits
+ * are split to make things easier for 64-bit/32-bit system
+ * conversions.
+ *
+ * dp_byte_offset and dp_page_index are landmarks that are helpful when
+ * looking at a hex dump of /dev/vmdump.
+ */
+struct __dump_page {
+ /* the address of this dump page */
+ u64 dp_address;
+
+ /* the size of this dump page */
+ u32 dp_size;
+
+ /* flags (currently DUMP_COMPRESSED, DUMP_RAW or DUMP_END) */
+ u32 dp_flags;
+} __attribute__((packed));
+
+/*
+ * Structure: __lkcdinfo
+ * Function: This structure contains information needed for the lkcdutils
+ * package (particularly lcrash) to determine what information is
+ * associated to this kernel, specifically.
+ */
+struct __lkcdinfo {
+ int arch;
+ int ptrsz;
+ int byte_order;
+ int linux_release;
+ int page_shift;
+ int page_size;
+ u64 page_mask;
+ u64 page_offset;
+ int stack_offset;
+};
+
+#ifdef __KERNEL__
+
+/*
+ * Structure: __dump_compress
+ * Function: This is what an individual compression mechanism can use
+ * to plug in their own compression techniques. It's always
+ * best to build these as individual modules so that people
+ * can put in whatever they want.
+ */
+struct __dump_compress {
+ /* the list_head structure for list storage */
+ struct list_head list;
+
+ /* the type of compression to use (DUMP_COMPRESS_XXX) */
+ int compress_type;
+ const char *compress_name;
+
+ /* the compression function to call */
+ u16 (*compress_func)(const u8 *, u16, u8 *, u16);
+};
+
+/* functions for dump compression registration */
+extern void dump_register_compression(struct __dump_compress *);
+extern void dump_unregister_compression(int);
+
+/*
+ * Structure dump_mbank[]:
+ *
+ * For CONFIG_DISCONTIGMEM systems this array specifies the
+ * memory banks/chunks that need to be dumped after a panic.
+ *
+ * For classic systems it specifies a single set of pages from
+ * 0 to max_mapnr.
+ */
+struct __dump_mbank {
+ u64 start;
+ u64 end;
+ int type;
+ int pad1;
+ long pad2;
+};
+
+#define DUMP_MBANK_TYPE_CONVENTIONAL_MEMORY 1
+#define DUMP_MBANK_TYPE_OTHER 2
+
+#define MAXCHUNKS 256
+extern int dump_mbanks;
+extern struct __dump_mbank dump_mbank[MAXCHUNKS];
+
+/* notification event codes */
+#define DUMP_BEGIN 0x0001 /* dump beginning */
+#define DUMP_END 0x0002 /* dump ending */
+
+/* Scheduler soft spin control.
+ *
+ * 0 - no dump in progress
+ * 1 - cpu0 is dumping, ...
+ */
+extern unsigned long dump_oncpu;
+extern void dump_execute(const char *, const struct pt_regs *);
+
+/*
+ * Notifier list for kernel code which wants to be called
+ * at kernel dump.
+ */
+extern struct notifier_block *dump_notifier_list;
+static inline int register_dump_notifier(struct notifier_block *nb)
+{
+ return notifier_chain_register(&dump_notifier_list, nb);
+}
+static inline int unregister_dump_notifier(struct notifier_block * nb)
+{
+ return notifier_chain_unregister(&dump_notifier_list, nb);
+}
+
+extern void (*dump_function_ptr)(const char *, const struct pt_regs *);
+static inline void dump(char * str, struct pt_regs * regs)
+{
+ if (dump_function_ptr)
+ dump_function_ptr(str, regs);
+}
+
+/*
+ * Common Arch Specific Functions should be declared here.
+ * This allows the C compiler to detect discrepancies.
+ */
+extern void __dump_open(void);
+extern void __dump_cleanup(void);
+extern void __dump_init(u64);
+extern void __dump_save_regs(struct pt_regs *, const struct pt_regs *);
+extern int __dump_configure_header(const struct pt_regs *);
+extern int __dump_irq_enable(void);
+extern void __dump_irq_restore(void);
+extern int __dump_page_valid(unsigned long index);
+#ifdef CONFIG_SMP
+extern void __dump_save_other_cpus(void);
+#else
+#define __dump_save_other_cpus()
+#endif
+
+extern int manual_handle_crashdump(void);
+
+/* to track all used (compound + zero order) pages */
+#define PageInuse(p) (PageCompound(p) || page_count(p))
+
+#endif /* __KERNEL__ */
+
+#else /* !CONFIG_CRASH_DUMP */
+
+/* If not configured then make code disappear! */
+#define register_dump_watchdog(x) do { } while(0)
+#define unregister_dump_watchdog(x) do { } while(0)
+#define register_dump_notifier(x) do { } while(0)
+#define unregister_dump_notifier(x) do { } while(0)
+#define dump_in_progress() 0
+#define dump(x, y) do { } while(0)
+
+#endif /* !CONFIG_CRASH_DUMP */
+
+#endif /* _DUMP_H */
--- /dev/null
+/*
+ * linux/drivers/net/netconsole.h
+ *
+ * Copyright (C) 2001 Ingo Molnar <mingo@redhat.com>
+ *
+ * This file contains the implementation of an IRQ-safe, crash-safe
+ * kernel console that outputs kernel messages to the
+ * network.
+ *
+ * Modification history:
+ *
+ * 2001-09-17 started by Ingo Molnar.
+ */
+
+/****************************************************************
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ ****************************************************************/
+
+#define NETCONSOLE_VERSION 0x03
+
+enum netdump_commands {
+ COMM_NONE = 0,
+ COMM_SEND_MEM = 1,
+ COMM_EXIT = 2,
+ COMM_REBOOT = 3,
+ COMM_HELLO = 4,
+ COMM_GET_NR_PAGES = 5,
+ COMM_GET_PAGE_SIZE = 6,
+ COMM_START_NETDUMP_ACK = 7,
+ COMM_GET_REGS = 8,
+ COMM_GET_MAGIC = 9,
+ COMM_START_WRITE_NETDUMP_ACK = 10,
+};
+
+typedef struct netdump_req_s {
+ u64 magic;
+ u32 nr;
+ u32 command;
+ u32 from;
+ u32 to;
+} req_t;
+
+enum netdump_replies {
+ REPLY_NONE = 0,
+ REPLY_ERROR = 1,
+ REPLY_LOG = 2,
+ REPLY_MEM = 3,
+ REPLY_RESERVED = 4,
+ REPLY_HELLO = 5,
+ REPLY_NR_PAGES = 6,
+ REPLY_PAGE_SIZE = 7,
+ REPLY_START_NETDUMP = 8,
+ REPLY_END_NETDUMP = 9,
+ REPLY_REGS = 10,
+ REPLY_MAGIC = 11,
+ REPLY_START_WRITE_NETDUMP = 12,
+};
+
+typedef struct netdump_reply_s {
+ u32 nr;
+ u32 code;
+ u32 info;
+} reply_t;
+
+#define HEADER_LEN (1 + sizeof(reply_t))
+
+
--- /dev/null
+/*
+ * Generic dump device interfaces for flexible system dump
+ * (Enables variation of dump target types e.g disk, network, memory)
+ *
+ * These interfaces have evolved based on discussions on lkcd-devel.
+ * Eventually the intent is to support primary and secondary or
+ * alternate targets registered at the same time, with scope for
+ * situation-based failover, or multiple dump devices used for parallel
+ * dump I/O.
+ *
+ * Started: Oct 2002 - Suparna Bhattacharya (suparna@in.ibm.com)
+ *
+ * Copyright (C) 2001 - 2002 Matt D. Robinson. All rights reserved.
+ * Copyright (C) 2002 International Business Machines Corp.
+ *
+ * This code is released under version 2 of the GNU GPL.
+ */
+
+#ifndef _LINUX_DUMPDEV_H
+#define _LINUX_DUMPDEV_H
+
+#include <linux/kernel.h>
+#include <linux/wait.h>
+#include <linux/bio.h>
+
+/* Determined by the dump target (device) type */
+
+struct dump_dev;
+
+struct dump_dev_ops {
+ int (*open)(struct dump_dev *, unsigned long); /* configure */
+ int (*release)(struct dump_dev *); /* unconfigure */
+ int (*silence)(struct dump_dev *); /* when dump starts */
+ int (*resume)(struct dump_dev *); /* when dump is over */
+ int (*seek)(struct dump_dev *, loff_t);
+ /* trigger a write (async in nature typically) */
+ int (*write)(struct dump_dev *, void *, unsigned long);
+ /* not usually used during dump, but option available */
+ int (*read)(struct dump_dev *, void *, unsigned long);
+ /* use to poll for completion */
+ int (*ready)(struct dump_dev *, void *);
+ int (*ioctl)(struct dump_dev *, unsigned int, unsigned long);
+};
+
+struct dump_dev {
+ char type_name[32]; /* block, net-poll etc */
+ unsigned long device_id; /* interpreted differently for various types */
+ struct dump_dev_ops *ops;
+ struct list_head list;
+ loff_t curr_offset;
+};
+
+/*
+ * dump_dev type variations:
+ */
+
+/* block */
+struct dump_blockdev {
+ struct dump_dev ddev;
+ dev_t dev_id;
+ struct block_device *bdev;
+ struct bio *bio;
+ loff_t start_offset;
+ loff_t limit;
+ int err;
+};
+
+static inline struct dump_blockdev *DUMP_BDEV(struct dump_dev *dev)
+{
+ return container_of(dev, struct dump_blockdev, ddev);
+}
+
+
+/* mem - for internal use by soft-boot based dumper */
+struct dump_memdev {
+ struct dump_dev ddev;
+ unsigned long indirect_map_root;
+ unsigned long nr_free;
+ struct page *curr_page;
+ unsigned long *curr_map;
+ unsigned long curr_map_offset;
+ unsigned long last_offset;
+ unsigned long last_used_offset;
+ unsigned long last_bs_offset;
+};
+
+static inline struct dump_memdev *DUMP_MDEV(struct dump_dev *dev)
+{
+ return container_of(dev, struct dump_memdev, ddev);
+}
+
+/* Todo/future - meant for raw dedicated interfaces e.g. mini-ide driver */
+struct dump_rdev {
+ struct dump_dev ddev;
+ char name[32];
+ int (*reset)(struct dump_rdev *, unsigned int,
+ unsigned long);
+ /* ... to do ... */
+};
+
+/* just to get the size right when saving config across a soft-reboot */
+struct dump_anydev {
+ union {
+ struct dump_blockdev bddev;
+ /* .. add other types here .. */
+ };
+};
+
+
+
+/* Dump device / target operation wrappers */
+/* These assume that dump_dev is initialized to dump_config.dumper->dev */
+
+extern struct dump_dev *dump_dev;
+
+static inline int dump_dev_open(unsigned long arg)
+{
+ return dump_dev->ops->open(dump_dev, arg);
+}
+
+static inline int dump_dev_release(void)
+{
+ return dump_dev->ops->release(dump_dev);
+}
+
+static inline int dump_dev_silence(void)
+{
+ return dump_dev->ops->silence(dump_dev);
+}
+
+static inline int dump_dev_resume(void)
+{
+ return dump_dev->ops->resume(dump_dev);
+}
+
+static inline int dump_dev_seek(loff_t offset)
+{
+ return dump_dev->ops->seek(dump_dev, offset);
+}
+
+static inline int dump_dev_write(void *buf, unsigned long len)
+{
+ return dump_dev->ops->write(dump_dev, buf, len);
+}
+
+static inline int dump_dev_ready(void *buf)
+{
+ return dump_dev->ops->ready(dump_dev, buf);
+}
+
+static inline int dump_dev_ioctl(unsigned int cmd, unsigned long arg)
+{
+ if (!dump_dev || !dump_dev->ops->ioctl)
+ return -EINVAL;
+ return dump_dev->ops->ioctl(dump_dev, cmd, arg);
+}
+
+extern int dump_register_device(struct dump_dev *);
+extern void dump_unregister_device(struct dump_dev *);
+
+#endif /* _LINUX_DUMPDEV_H */
#ifdef CONFIG_JBD_DEBUG
#define EXT3_IOC_WAIT_FOR_READONLY _IOR('f', 99, long)
#endif
+#ifdef CONFIG_VSERVER_LEGACY
+#define EXT3_IOC_SETXID FIOC_SETXIDJ
+#endif
#define EXT3_IOC_GETRSVSZ _IOR('f', 5, long)
#define EXT3_IOC_SETRSVSZ _IOW('f', 6, long)
#include <linux/config.h>
#include <linux/limits.h>
#include <linux/ioctl.h>
-#include <linux/mount.h>
/*
* It's silly to have NR_OPEN bigger than NR_FILE, but you can change
#include <linux/radix-tree.h>
#include <linux/prio_tree.h>
#include <linux/init.h>
+#include <linux/mount.h>
#include <asm/atomic.h>
#include <asm/semaphore.h>
* VFS helper functions..
*/
extern int vfs_create(struct inode *, struct dentry *, int, struct nameidata *);
-extern int vfs_mkdir(struct inode *, struct dentry *, int, struct nameidata *);
-extern int vfs_mknod(struct inode *, struct dentry *, int, dev_t, struct nameidata *);
-extern int vfs_symlink(struct inode *, struct dentry *, const char *, int, struct nameidata *);
-extern int vfs_link(struct dentry *, struct inode *, struct dentry *, struct nameidata *);
-extern int vfs_rmdir(struct inode *, struct dentry *, struct nameidata *);
-extern int vfs_unlink(struct inode *, struct dentry *, struct nameidata *);
+extern int vfs_mkdir(struct inode *, struct dentry *, int);
+extern int vfs_mknod(struct inode *, struct dentry *, int, dev_t);
+extern int vfs_symlink(struct inode *, struct dentry *, const char *, int);
+extern int vfs_link(struct dentry *, struct inode *, struct dentry *);
+extern int vfs_rmdir(struct inode *, struct dentry *);
+extern int vfs_unlink(struct inode *, struct dentry *);
extern int vfs_rename(struct inode *, struct dentry *, struct inode *, struct dentry *);
/*
--- /dev/null
+/*
+ * KLOG Generic Logging facility built upon the relayfs infrastructure
+ *
+ * Authors: Hubertus Franke (frankeh@us.ibm.com)
+ * Tom Zanussi (zanussi@us.ibm.com)
+ *
+ * Please direct all questions/comments to zanussi@us.ibm.com
+ *
+ * Copyright (C) 2003, IBM Corp
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#ifndef _LINUX_KLOG_H
+#define _LINUX_KLOG_H
+
+extern int klog(const char *fmt, ...);
+extern int klog_raw(const char *buf, int len);
+
+#endif /* _LINUX_KLOG_H */
#define MICROCODE_MINOR 184
#define MWAVE_MINOR 219 /* ACP/Mwave Modem */
#define MPT_MINOR 220
+#define CRASH_DUMP_MINOR 230 /* LKCD */
#define MISC_DYNAMIC_MINOR 255
#define TUN_MINOR 200
/* Fields commonly accessed by the page reclaim scanner */
spinlock_t lru_lock;
- struct list_head active_list;
- struct list_head inactive_list;
+ struct list_head active_list;
+ struct list_head inactive_list;
unsigned long nr_scan_active;
unsigned long nr_scan_inactive;
unsigned long nr_active;
union ip_conntrack_expect_proto {
/* insert expect proto private data here */
+ struct ip_ct_gre_expect gre;
};
/* Add protocol helper include file here */
#ifdef CONFIG_IP_NF_NAT_NEEDED
#include <linux/netfilter_ipv4/ip_nat.h>
-#include <linux/netfilter_ipv4/ip_nat_pptp.h>
-
-/* per conntrack: nat application helper private data */
-union ip_conntrack_nat_help {
- /* insert nat helper private data here */
- struct ip_nat_pptp nat_pptp_info;
-};
#endif
#include <linux/types.h>
#ifdef CONFIG_IP_NF_NAT_NEEDED
struct {
struct ip_nat_info info;
- union ip_conntrack_nat_help help;
#if defined(CONFIG_IP_NF_TARGET_MASQUERADE) || \
defined(CONFIG_IP_NF_TARGET_MASQUERADE_MODULE)
int masq_index;
ip_conntrack_find_get(const struct ip_conntrack_tuple *tuple,
const struct ip_conntrack *ignored_conntrack);
-struct ip_conntrack_tuple_hash *
-__ip_conntrack_find(const struct ip_conntrack_tuple *tuple,
- const struct ip_conntrack *ignored_conntrack);
-
-struct ip_conntrack_expect *
-__ip_conntrack_exp_find(const struct ip_conntrack_tuple *tuple);
-
extern int __ip_conntrack_confirm(struct sk_buff **pskb);
/* Confirm a connection: returns NF_DROP if packet must be dropped. */
enum pptp_ctrlcall_state cstate; /* call state */
u_int16_t pac_call_id; /* call id of PAC, host byte order */
u_int16_t pns_call_id; /* call id of PNS, host byte order */
-
- /* in pre-2.6.11 this used to be per-expect. Now it is per-conntrack
- * and therefore imposes a fixed limit on the number of maps */
- struct ip_ct_gre_keymap *keymap_orig, *keymap_reply;
};
/* conntrack_expect private member */
#ifdef __KERNEL__
-
#include <linux/netfilter_ipv4/lockhelp.h>
DECLARE_LOCK_EXTERN(ip_pptp_lock);
struct PptpSetLinkInfo setlink;
};
-struct ip_conntrack;
-
-extern int
-(*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq);
-
-extern int
-(*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq);
-
-extern int
-(*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *exp_orig,
- struct ip_conntrack_expect *exp_reply);
-
-extern void
-(*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp);
#endif /* __KERNEL__ */
#endif /* _CONNTRACK_PPTP_H */
unsigned int timeout;
};
+/* this is part of ip_conntrack_expect */
+struct ip_ct_gre_expect {
+ struct ip_ct_gre_keymap *keymap_orig, *keymap_reply;
+};
+
#ifdef __KERNEL__
struct ip_conntrack_expect;
-struct ip_conntrack;
/* structure for original <-> reply keymap */
struct ip_ct_gre_keymap {
struct ip_conntrack_tuple tuple;
};
+
/* add new tuple->key_reply pair to keymap */
-int ip_ct_gre_keymap_add(struct ip_conntrack *ct,
+int ip_ct_gre_keymap_add(struct ip_conntrack_expect *exp,
struct ip_conntrack_tuple *t,
int reply);
+/* change an existing keymap entry */
+void ip_ct_gre_keymap_change(struct ip_ct_gre_keymap *km,
+ struct ip_conntrack_tuple *t);
+
/* delete keymap entries */
-void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct);
+void ip_ct_gre_keymap_destroy(struct ip_conntrack_expect *exp);
/* get pointer to gre key, if present */
union ip_conntrack_manip_proto
{
/* Add other protocols here. */
- u_int16_t all;
+ u_int32_t all;
struct {
u_int16_t port;
struct {
u_int16_t id;
} icmp;
+ struct {
+ u_int32_t key;
+ } gre;
struct {
u_int16_t port;
} sctp;
- struct {
- u_int16_t key; /* key is 32bit, pptp onky uses 16 */
- } gre;
};
/* The manipulable part of the tuple. */
u_int32_t ip;
union {
/* Add other protocols here. */
- u_int16_t all;
+ u_int32_t all;
struct {
u_int16_t port;
struct {
u_int8_t type, code;
} icmp;
+ struct {
+ u_int32_t key;
+ } gre;
struct {
u_int16_t port;
} sctp;
- struct {
- u_int16_t key;
- } gre;
} u;
/* The protocol. */
#ifdef __KERNEL__
#define DUMP_TUPLE(tp) \
-DEBUGP("tuple %p: %u %u.%u.%u.%u:%hu -> %u.%u.%u.%u:%hu\n", \
+DEBUGP("tuple %p: %u %u.%u.%u.%u:%u -> %u.%u.%u.%u:%u\n", \
(tp), (tp)->dst.protonum, \
- NIPQUAD((tp)->src.ip), ntohs((tp)->src.u.all), \
- NIPQUAD((tp)->dst.ip), ntohs((tp)->dst.u.all))
+ NIPQUAD((tp)->src.ip), ntohl((tp)->src.u.all), \
+ NIPQUAD((tp)->dst.ip), ntohl((tp)->dst.u.all))
+
+#define DUMP_TUPLE_RAW(x) \
+ DEBUGP("tuple %p: %u %u.%u.%u.%u:0x%08x -> %u.%u.%u.%u:0x%08x\n",\
+ (x), (x)->dst.protonum, \
+ NIPQUAD((x)->src.ip), ntohl((x)->src.u.all), \
+ NIPQUAD((x)->dst.ip), ntohl((x)->dst.u.all))
#define CTINFO2DIR(ctinfo) ((ctinfo) >= IP_CT_IS_REPLY ? IP_CT_DIR_REPLY : IP_CT_DIR_ORIGINAL)
REISERFS_ERROR_PANIC,
REISERFS_ERROR_RO,
REISERFS_ERROR_CONTINUE,
-
REISERFS_TEST1,
REISERFS_TEST2,
REISERFS_TEST3,
--- /dev/null
+/*
+ * linux/include/linux/relayfs_fs.h
+ *
+ * Copyright (C) 2002, 2003 - Tom Zanussi (zanussi@us.ibm.com), IBM Corp
+ * Copyright (C) 1999, 2000, 2001, 2002 - Karim Yaghmour (karim@opersys.com)
+ *
+ * RelayFS definitions and declarations
+ *
+ * Please see Documentation/filesystems/relayfs.txt for more info.
+ */
+
+#ifndef _LINUX_RELAYFS_FS_H
+#define _LINUX_RELAYFS_FS_H
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <linux/fs.h>
+
+/*
+ * Tracks changes to rchan struct
+ */
+#define RELAYFS_CHANNEL_VERSION 1
+
+/*
+ * Maximum number of simultaneously open channels
+ */
+#define RELAY_MAX_CHANNELS 256
+
+/*
+ * Relay properties
+ */
+#define RELAY_MIN_BUFS 2
+#define RELAY_MIN_BUFSIZE 4096
+#define RELAY_MAX_BUFS 256
+#define RELAY_MAX_BUF_SIZE 0x1000000
+#define RELAY_MAX_TOTAL_BUF_SIZE 0x8000000
+
+/*
+ * Lockless scheme utility macros
+ */
+#define RELAY_MAX_BUFNO(bufno_bits) (1UL << (bufno_bits))
+#define RELAY_BUF_SIZE(offset_bits) (1UL << (offset_bits))
+#define RELAY_BUF_OFFSET_MASK(offset_bits) (RELAY_BUF_SIZE(offset_bits) - 1)
+#define RELAY_BUFNO_GET(index, offset_bits) ((index) >> (offset_bits))
+#define RELAY_BUF_OFFSET_GET(index, mask) ((index) & (mask))
+#define RELAY_BUF_OFFSET_CLEAR(index, mask) ((index) & ~(mask))
+
+/*
+ * Flags returned by relay_reserve()
+ */
+#define RELAY_BUFFER_SWITCH_NONE 0x0
+#define RELAY_WRITE_DISCARD_NONE 0x0
+#define RELAY_BUFFER_SWITCH 0x1
+#define RELAY_WRITE_DISCARD 0x2
+#define RELAY_WRITE_TOO_LONG 0x4
+
+/*
+ * Relay attribute flags
+ */
+#define RELAY_DELIVERY_BULK 0x1
+#define RELAY_DELIVERY_PACKET 0x2
+#define RELAY_SCHEME_LOCKLESS 0x4
+#define RELAY_SCHEME_LOCKING 0x8
+#define RELAY_SCHEME_ANY 0xC
+#define RELAY_TIMESTAMP_TSC 0x10
+#define RELAY_TIMESTAMP_GETTIMEOFDAY 0x20
+#define RELAY_TIMESTAMP_ANY 0x30
+#define RELAY_USAGE_SMP 0x40
+#define RELAY_USAGE_GLOBAL 0x80
+#define RELAY_MODE_CONTINUOUS 0x100
+#define RELAY_MODE_NO_OVERWRITE 0x200
+
+/*
+ * Flags for needs_resize() callback
+ */
+#define RELAY_RESIZE_NONE 0x0
+#define RELAY_RESIZE_EXPAND 0x1
+#define RELAY_RESIZE_SHRINK 0x2
+#define RELAY_RESIZE_REPLACE 0x4
+#define RELAY_RESIZE_REPLACED 0x8
+
+/*
+ * Values for fileop_notify() callback
+ */
+enum relay_fileop
+{
+ RELAY_FILE_OPEN,
+ RELAY_FILE_CLOSE,
+ RELAY_FILE_MAP,
+ RELAY_FILE_UNMAP
+};
+
+/*
+ * Data structure returned by relay_info()
+ */
+struct rchan_info
+{
+ u32 flags; /* relay attribute flags for channel */
+ u32 buf_size; /* channel's sub-buffer size */
+ char *buf_addr; /* address of channel start */
+ u32 alloc_size; /* total buffer size actually allocated */
+ u32 n_bufs; /* number of sub-buffers in channel */
+ u32 cur_idx; /* current write index into channel */
+ u32 bufs_produced; /* current count of sub-buffers produced */
+ u32 bufs_consumed; /* current count of sub-buffers consumed */
+ u32 buf_id; /* buf_id of current sub-buffer */
+ int buffer_complete[RELAY_MAX_BUFS]; /* boolean per sub-buffer */
+ int unused_bytes[RELAY_MAX_BUFS]; /* count per sub-buffer */
+};
+
+/*
+ * Relay channel client callbacks
+ */
+struct rchan_callbacks
+{
+ /*
+ * buffer_start - called at the beginning of a new sub-buffer
+ * @rchan_id: the channel id
+ * @current_write_pos: position in sub-buffer client should write to
+ * @buffer_id: the id of the new sub-buffer
+ * @start_time: the timestamp associated with the start of sub-buffer
+ * @start_tsc: the TSC associated with the timestamp, if using_tsc
+ * @using_tsc: boolean, indicates whether start_tsc is valid
+ *
+ * Return value should be the number of bytes written by the client.
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ int (*buffer_start) (int rchan_id,
+ char *current_write_pos,
+ u32 buffer_id,
+ struct timeval start_time,
+ u32 start_tsc,
+ int using_tsc);
+
+ /*
+ * buffer_end - called at the end of a sub-buffer
+ * @rchan_id: the channel id
+ * @current_write_pos: position in sub-buffer of end of data
+ * @end_of_buffer: the position of the end of the sub-buffer
+ * @end_time: the timestamp associated with the end of the sub-buffer
+ * @end_tsc: the TSC associated with the end_time, if using_tsc
+ * @using_tsc: boolean, indicates whether end_tsc is valid
+ *
+ * Return value should be the number of bytes written by the client.
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ int (*buffer_end) (int rchan_id,
+ char *current_write_pos,
+ char *end_of_buffer,
+ struct timeval end_time,
+ u32 end_tsc,
+ int using_tsc);
+
+ /*
+ * deliver - called when data is ready for the client
+ * @rchan_id: the channel id
+ * @from: the start of the delivered data
+ * @len: the length of the delivered data
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ void (*deliver) (int rchan_id, char *from, u32 len);
+
+ /*
+ * user_deliver - called when data has been written from userspace
+ * @rchan_id: the channel id
+ * @from: the start of the delivered data
+ * @len: the length of the delivered data
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ void (*user_deliver) (int rchan_id, char *from, u32 len);
+
+ /*
+ * needs_resize - called when a resizing event occurs
+ * @rchan_id: the channel id
+ * @resize_type: the type of resizing event
+ * @suggested_buf_size: the suggested new sub-buffer size
+	 * @suggested_n_bufs: the suggested new number of sub-buffers
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ void (*needs_resize)(int rchan_id,
+ int resize_type,
+ u32 suggested_buf_size,
+ u32 suggested_n_bufs);
+
+ /*
+ * fileop_notify - called on open/close/mmap/munmap of a relayfs file
+ * @rchan_id: the channel id
+ * @filp: relayfs file pointer
+ * @fileop: which file operation is in progress
+ *
+ * The return value can direct the outcome of the operation.
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ int (*fileop_notify)(int rchan_id,
+ struct file *filp,
+ enum relay_fileop fileop);
+
+ /*
+ * ioctl - called in ioctl context from userspace
+ * @rchan_id: the channel id
+ * @cmd: ioctl cmd
+ * @arg: ioctl cmd arg
+ *
+ * The return value is returned as the value from the ioctl call.
+ *
+ * See Documentation/filesystems/relayfs.txt for details.
+ */
+ int (*ioctl) (int rchan_id, unsigned int cmd, unsigned long arg);
+};
+
+/*
+ * Lockless scheme-specific data
+ */
+struct lockless_rchan
+{
+ u8 bufno_bits; /* # bits used for sub-buffer id */
+ u8 offset_bits; /* # bits used for offset within sub-buffer */
+ u32 index; /* current index = sub-buffer id and offset */
+ u32 offset_mask; /* used to obtain offset portion of index */
+	u32 index_mask;		/* used to mask off unused bits of index */
+ atomic_t fill_count[RELAY_MAX_BUFS]; /* fill count per sub-buffer */
+};
+
+/*
+ * Locking scheme-specific data
+ */
+struct locking_rchan
+{
+ char *write_buf; /* start of write sub-buffer */
+ char *write_buf_end; /* end of write sub-buffer */
+ char *current_write_pos; /* current write pointer */
+ char *write_limit; /* takes reserves into account */
+ char *in_progress_event_pos; /* used for interrupted writes */
+ u16 in_progress_event_size; /* used for interrupted writes */
+ char *interrupted_pos; /* used for interrupted writes */
+ u16 interrupting_size; /* used for interrupted writes */
+ spinlock_t lock; /* channel lock for locking scheme */
+};
+
+struct relay_ops;
+
+/*
+ * Offset resizing data structure
+ */
+struct resize_offset
+{
+ u32 ge;
+ u32 le;
+ int delta;
+};
+
+/*
+ * Relay channel data structure
+ */
+struct rchan
+{
+ u32 version; /* the version of this struct */
+ char *buf; /* the channel buffer */
+ union
+ {
+ struct lockless_rchan lockless;
+ struct locking_rchan locking;
+ } scheme; /* scheme-specific channel data */
+
+ int id; /* the channel id */
+ struct rchan_callbacks *callbacks; /* client callbacks */
+ u32 flags; /* relay channel attributes */
+ u32 buf_id; /* current sub-buffer id */
+ u32 buf_idx; /* current sub-buffer index */
+
+ atomic_t mapped; /* map count */
+
+	atomic_t suspended;	/* channel suspended, i.e. full? */
+ int half_switch; /* used internally for suspend */
+
+ struct timeval buf_start_time; /* current sub-buffer start time */
+ u32 buf_start_tsc; /* current sub-buffer start TSC */
+
+ u32 buf_size; /* sub-buffer size */
+ u32 alloc_size; /* total buffer size allocated */
+ u32 n_bufs; /* number of sub-buffers */
+
+ u32 bufs_produced; /* count of sub-buffers produced */
+ u32 bufs_consumed; /* count of sub-buffers consumed */
+ u32 bytes_consumed; /* bytes consumed in cur sub-buffer */
+
+ int initialized; /* first buffer initialized? */
+ int finalized; /* channel finalized? */
+
+ u32 start_reserve; /* reserve at start of sub-buffers */
+ u32 end_reserve; /* reserve at end of sub-buffers */
+	u32 rchan_start_reserve;	/* additional reserve in sub-buffer 0 */
+
+ struct dentry *dentry; /* channel file dentry */
+
+ wait_queue_head_t read_wait; /* VFS read wait queue */
+ wait_queue_head_t write_wait; /* VFS write wait queue */
+ struct work_struct wake_readers; /* reader wake-up work struct */
+	struct work_struct wake_writers; /* writer wake-up work struct */
+ atomic_t refcount; /* channel refcount */
+
+ struct relay_ops *relay_ops; /* scheme-specific channel ops */
+
+ int unused_bytes[RELAY_MAX_BUFS]; /* unused count per sub-buffer */
+
+	struct semaphore resize_sem;	/* serializes alloc/replace */
+ struct work_struct work; /* resize allocation work struct */
+
+ struct list_head open_readers; /* open readers for this channel */
+ rwlock_t open_readers_lock; /* protection for open_readers list */
+
+ char *init_buf; /* init channel buffer, if non-NULL */
+
+ u32 resize_min; /* minimum resized total buffer size */
+ u32 resize_max; /* maximum resized total buffer size */
+ char *resize_buf; /* for autosize alloc/free */
+ u32 resize_buf_size; /* resized sub-buffer size */
+ u32 resize_n_bufs; /* resized number of sub-buffers */
+ u32 resize_alloc_size; /* resized actual total size */
+ int resizing; /* is resizing in progress? */
+ int resize_err; /* resizing err code */
+ int resize_failures; /* number of resize failures */
+ int replace_buffer; /* is the alloced buffer ready? */
+ struct resize_offset resize_offset; /* offset change */
+ struct timer_list shrink_timer; /* timer used for shrinking */
+ int resize_order; /* size of last resize */
+	u32 expand_buf_id;	/* sub-buffer id expand will occur at */
+
+ struct page **buf_page_array; /* array of current buffer pages */
+ int buf_page_count; /* number of current buffer pages */
+ struct page **expand_page_array;/* new pages to be inserted */
+ int expand_page_count; /* number of new pages */
+ struct page **shrink_page_array;/* old pages to be freed */
+ int shrink_page_count; /* number of old pages */
+ struct page **resize_page_array;/* will become current pages */
+ int resize_page_count; /* number of resize pages */
+ struct page **old_buf_page_array; /* hold for freeing */
+} ____cacheline_aligned;
+
+/*
+ * Relay channel reader struct
+ */
+struct rchan_reader
+{
+ struct list_head list; /* for list inclusion */
+ struct rchan *rchan; /* the channel we're reading from */
+ int auto_consume; /* does this reader auto-consume? */
+ u32 bufs_consumed; /* buffers this reader has consumed */
+ u32 bytes_consumed; /* bytes consumed in cur sub-buffer */
+ int offset_changed; /* have channel offsets changed? */
+ int vfs_reader; /* are we a VFS reader? */
+ int map_reader; /* are we an mmap reader? */
+
+ union
+ {
+ struct file *file;
+ u32 f_pos;
+ } pos; /* current read offset */
+};
+
+/*
+ * These help make union member access less tedious
+ */
+#define channel_buffer(rchan) ((rchan)->buf)
+#define idx(rchan) ((rchan)->scheme.lockless.index)
+#define bufno_bits(rchan) ((rchan)->scheme.lockless.bufno_bits)
+#define offset_bits(rchan) ((rchan)->scheme.lockless.offset_bits)
+#define offset_mask(rchan) ((rchan)->scheme.lockless.offset_mask)
+#define idx_mask(rchan) ((rchan)->scheme.lockless.index_mask)
+#define bulk_delivery(rchan) (((rchan)->flags & RELAY_DELIVERY_BULK) ? 1 : 0)
+#define packet_delivery(rchan) (((rchan)->flags & RELAY_DELIVERY_PACKET) ? 1 : 0)
+#define using_lockless(rchan) (((rchan)->flags & RELAY_SCHEME_LOCKLESS) ? 1 : 0)
+#define using_locking(rchan) (((rchan)->flags & RELAY_SCHEME_LOCKING) ? 1 : 0)
+#define using_tsc(rchan) (((rchan)->flags & RELAY_TIMESTAMP_TSC) ? 1 : 0)
+#define using_gettimeofday(rchan) (((rchan)->flags & RELAY_TIMESTAMP_GETTIMEOFDAY) ? 1 : 0)
+#define usage_smp(rchan) (((rchan)->flags & RELAY_USAGE_SMP) ? 1 : 0)
+#define usage_global(rchan) (((rchan)->flags & RELAY_USAGE_GLOBAL) ? 1 : 0)
+#define mode_continuous(rchan) (((rchan)->flags & RELAY_MODE_CONTINUOUS) ? 1 : 0)
+#define fill_count(rchan, i) ((rchan)->scheme.lockless.fill_count[(i)])
+#define write_buf(rchan) ((rchan)->scheme.locking.write_buf)
+#define read_buf(rchan) ((rchan)->scheme.locking.read_buf)
+#define write_buf_end(rchan) ((rchan)->scheme.locking.write_buf_end)
+#define read_buf_end(rchan) ((rchan)->scheme.locking.read_buf_end)
+#define cur_write_pos(rchan) ((rchan)->scheme.locking.current_write_pos)
+#define read_limit(rchan) ((rchan)->scheme.locking.read_limit)
+#define write_limit(rchan) ((rchan)->scheme.locking.write_limit)
+#define in_progress_event_pos(rchan) ((rchan)->scheme.locking.in_progress_event_pos)
+#define in_progress_event_size(rchan) ((rchan)->scheme.locking.in_progress_event_size)
+#define interrupted_pos(rchan) ((rchan)->scheme.locking.interrupted_pos)
+#define interrupting_size(rchan) ((rchan)->scheme.locking.interrupting_size)
+#define channel_lock(rchan) ((rchan)->scheme.locking.lock)
+
+
+/**
+ * calc_time_delta - utility function for time delta calculation
+ * @now: current time
+ * @start: start time
+ *
+ * Returns the time delta produced by subtracting start time from now.
+ */
+static inline u32
+calc_time_delta(struct timeval *now,
+ struct timeval *start)
+{
+ return (now->tv_sec - start->tv_sec) * 1000000
+ + (now->tv_usec - start->tv_usec);
+}
+
+/**
+ * recalc_time_delta - utility function for time delta recalculation
+ * @now: current time
+ * @new_delta: the new time delta calculated
+ * @cpu: the associated CPU id
+ */
+static inline void
+recalc_time_delta(struct timeval *now,
+ u32 *new_delta,
+ struct rchan *rchan)
+{
+ if (using_tsc(rchan) == 0)
+ *new_delta = calc_time_delta(now, &rchan->buf_start_time);
+}
+
+/**
+ * have_cmpxchg - does this architecture have a cmpxchg?
+ *
+ * Returns 1 if this architecture has a cmpxchg usable by
+ * the lockless scheme, 0 otherwise.
+ */
+static inline int
+have_cmpxchg(void)
+{
+#if defined(__HAVE_ARCH_CMPXCHG)
+ return 1;
+#else
+ return 0;
+#endif
+}
+
+/**
+ * relay_write_direct - write data directly into destination buffer
+ */
+#define relay_write_direct(DEST, SRC, SIZE) \
+do\
+{\
+ memcpy(DEST, SRC, SIZE);\
+ DEST += SIZE;\
+} while (0)
+
+/**
+ * relay_lock_channel - lock the relay channel if applicable
+ *
+ * This macro only affects the locking scheme. If the locking scheme
+ * is in use and the channel usage is SMP, does a local_irq_save. If the
+ * locking scheme is in use and the channel usage is GLOBAL, uses
+ * spin_lock_irqsave. FLAGS is initialized to 0 only to silence a
+ * compiler warning; it is always set before use.
+ */
+#define relay_lock_channel(RCHAN, FLAGS) \
+do\
+{\
+ FLAGS = 0;\
+ if (using_locking(RCHAN)) {\
+ if (usage_smp(RCHAN)) {\
+ local_irq_save(FLAGS); \
+ } else {\
+ spin_lock_irqsave(&(RCHAN)->scheme.locking.lock, FLAGS); \
+ }\
+ }\
+} while (0)
+
+/**
+ * relay_unlock_channel - unlock the relay channel if applicable
+ *
+ * This macro only affects the locking scheme. See relay_lock_channel.
+ */
+#define relay_unlock_channel(RCHAN, FLAGS) \
+do\
+{\
+ if (using_locking(RCHAN)) {\
+ if (usage_smp(RCHAN)) {\
+ local_irq_restore(FLAGS); \
+ } else {\
+ spin_unlock_irqrestore(&(RCHAN)->scheme.locking.lock, FLAGS); \
+ }\
+ }\
+} while (0)
+
+/*
+ * Define cmpxchg if we don't have it
+ */
+#ifndef __HAVE_ARCH_CMPXCHG
+#define cmpxchg(p,o,n) 0
+#endif
+
+/*
+ * High-level relayfs kernel API, fs/relayfs/relay.c
+ */
+extern int
+relay_open(const char *chanpath,
+ int bufsize,
+ int nbufs,
+ u32 flags,
+ struct rchan_callbacks *channel_callbacks,
+ u32 start_reserve,
+ u32 end_reserve,
+ u32 rchan_start_reserve,
+ u32 resize_min,
+ u32 resize_max,
+ int mode,
+ char *init_buf,
+ u32 init_buf_size);
+
+extern int
+relay_close(int rchan_id);
+
+extern int
+relay_write(int rchan_id,
+ const void *data_ptr,
+ size_t count,
+ int td_offset,
+ void **wrote_pos);
+
+extern ssize_t
+relay_read(struct rchan_reader *reader,
+ char *buf,
+ size_t count,
+ int wait,
+ u32 *actual_read_offset);
+
+extern int
+relay_discard_init_buf(int rchan_id);
+
+extern struct rchan_reader *
+add_rchan_reader(int rchan_id, int autoconsume);
+
+extern int
+remove_rchan_reader(struct rchan_reader *reader);
+
+extern struct rchan_reader *
+add_map_reader(int rchan_id);
+
+extern int
+remove_map_reader(struct rchan_reader *reader);
+
+extern int
+relay_info(int rchan_id, struct rchan_info *rchan_info);
+
+extern void
+relay_buffers_consumed(struct rchan_reader *reader, u32 buffers_consumed);
+
+extern void
+relay_bytes_consumed(struct rchan_reader *reader, u32 bytes_consumed, u32 read_offset);
+
+extern ssize_t
+relay_bytes_avail(struct rchan_reader *reader);
+
+extern int
+relay_realloc_buffer(int rchan_id, u32 new_nbufs, int in_background);
+
+extern int
+relay_replace_buffer(int rchan_id);
+
+extern int
+rchan_empty(struct rchan_reader *reader);
+
+extern int
+rchan_full(struct rchan_reader *reader);
+
+extern void
+update_readers_consumed(struct rchan *rchan, u32 bufs_consumed, u32 bytes_consumed);
+
+extern int
+__relay_mmap_buffer(struct rchan *rchan, struct vm_area_struct *vma);
+
+extern struct rchan_reader *
+__add_rchan_reader(struct rchan *rchan, struct file *filp, int auto_consume, int map_reader);
+
+extern void
+__remove_rchan_reader(struct rchan_reader *reader);
+
+/*
+ * Low-level relayfs kernel API, fs/relayfs/relay.c
+ */
+extern struct rchan *
+rchan_get(int rchan_id);
+
+extern void
+rchan_put(struct rchan *rchan);
+
+extern char *
+relay_reserve(struct rchan *rchan,
+ u32 data_len,
+ struct timeval *time_stamp,
+ u32 *time_delta,
+ int *errcode,
+ int *interrupting);
+
+extern void
+relay_commit(struct rchan *rchan,
+ char *from,
+ u32 len,
+ int reserve_code,
+ int interrupting);
+
+extern u32
+relay_get_offset(struct rchan *rchan, u32 *max_offset);
+
+extern int
+relay_reset(int rchan_id);
+
+/*
+ * VFS functions, fs/relayfs/inode.c
+ */
+extern int
+relayfs_create_dir(const char *name,
+ struct dentry *parent,
+ struct dentry **dentry);
+
+extern int
+relayfs_create_file(const char * name,
+ struct dentry *parent,
+ struct dentry **dentry,
+ void * data,
+ int mode);
+
+extern int
+relayfs_remove_file(struct dentry *dentry);
+
+extern int
+reset_index(struct rchan *rchan, u32 old_index);
+
+
+/*
+ * klog functions, fs/relayfs/klog.c
+ */
+extern int
+create_klog_channel(void);
+
+extern int
+remove_klog_channel(void);
+
+/*
+ * Scheme-specific channel ops
+ */
+struct relay_ops
+{
+ char * (*reserve) (struct rchan *rchan,
+ u32 slot_len,
+ struct timeval *time_stamp,
+ u32 *tsc,
+ int * errcode,
+ int * interrupting);
+
+ void (*commit) (struct rchan *rchan,
+ char *from,
+ u32 len,
+ int deliver,
+ int interrupting);
+
+ u32 (*get_offset) (struct rchan *rchan,
+ u32 *max_offset);
+
+ void (*resume) (struct rchan *rchan);
+ void (*finalize) (struct rchan *rchan);
+ void (*reset) (struct rchan *rchan,
+ int init);
+ int (*reset_index) (struct rchan *rchan,
+ u32 old_index);
+};
+
+#endif /* _LINUX_RELAYFS_FS_H */
+
+
+
+
+
/* signal handlers */
struct signal_struct *signal;
struct sighand_struct *sighand;
-
sigset_t blocked, real_blocked;
struct sigpending pending;
atomic_inc(&u->__count);
return u;
}
+
extern void free_uid(struct user_struct *);
extern void switch_uid(struct user_struct *);
}
#endif
+
/*
* Routines for handling mm_structs
*/
#endif /* CONFIG_SMP */
+/* API for registering delay info */
+#ifdef CONFIG_DELAY_ACCT
+
+#define test_delay_flag(tsk,flg) ((tsk)->flags & (flg))
+#define set_delay_flag(tsk,flg) ((tsk)->flags |= (flg))
+#define clear_delay_flag(tsk,flg) ((tsk)->flags &= ~(flg))
+
+#define def_delay_var(var) unsigned long long var
+#define get_delay(tsk,field) ((tsk)->delays.field)
+
+#define start_delay(var) ((var) = sched_clock())
+#define start_delay_set(var,flg) (set_delay_flag(current,flg),(var) = sched_clock())
+
+#define inc_delay(tsk,field) (((tsk)->delays.field)++)
+
+/* Because of hardware timer drift on SMP systems, a task may continue on a
+ * different cpu than the one where start_ts was taken, so there is a
+ * possibility that end_ts < start_ts by some usecs. In that case we ignore
+ * the diff and add nothing to the total.
+ */
+#ifdef CONFIG_SMP
+#define test_ts_integrity(start_ts,end_ts) (likely((end_ts) > (start_ts)))
+#else
+#define test_ts_integrity(start_ts,end_ts) (1)
+#endif
+
+#define add_delay_ts(tsk,field,start_ts,end_ts) \
+ do { if (test_ts_integrity(start_ts,end_ts)) (tsk)->delays.field += ((end_ts)-(start_ts)); } while (0)
+
+#define add_delay_clear(tsk,field,start_ts,flg) \
+ do { \
+ unsigned long long now = sched_clock();\
+ add_delay_ts(tsk,field,start_ts,now); \
+ clear_delay_flag(tsk,flg); \
+ } while (0)
+
+static inline void add_io_delay(unsigned long long dstart)
+{
+ struct task_struct * tsk = current;
+ unsigned long long now = sched_clock();
+ unsigned long long val;
+
+ if (test_ts_integrity(dstart,now))
+ val = now - dstart;
+ else
+ val = 0;
+ if (test_delay_flag(tsk,PF_MEMIO)) {
+ tsk->delays.mem_iowait_total += val;
+ tsk->delays.num_memwaits++;
+ } else {
+ tsk->delays.iowait_total += val;
+ tsk->delays.num_iowaits++;
+ }
+ clear_delay_flag(tsk,PF_IOWAIT);
+}
+
+static inline void init_delays(struct task_struct *tsk)
+{
+ memset((void*)&tsk->delays,0,sizeof(tsk->delays));
+}
+
+#else
+
+#define test_delay_flag(tsk,flg) (0)
+#define set_delay_flag(tsk,flg) do { } while (0)
+#define clear_delay_flag(tsk,flg) do { } while (0)
+
+#define def_delay_var(var)
+#define get_delay(tsk,field) (0)
+
+#define start_delay(var) do { } while (0)
+#define start_delay_set(var,flg) do { } while (0)
+
+#define inc_delay(tsk,field) do { } while (0)
+#define add_delay_ts(tsk,field,start_ts,now) do { } while (0)
+#define add_delay_clear(tsk,field,start_ts,flg) do { } while (0)
+#define add_io_delay(dstart) do { } while (0)
+#define init_delays(tsk) do { } while (0)
+#endif
+
+
+
#ifdef HAVE_ARCH_PICK_MMAP_LAYOUT
extern void arch_pick_mmap_layout(struct mm_struct *mm);
#else
KERN_UNKNOWN_NMI_PANIC=66, /* int: unknown nmi panic flag */
KERN_BOOTLOADER_TYPE=67, /* int: boot loader type */
KERN_RANDOMIZE=68, /* int: randomize virtual address space */
- KERN_VSHELPER=69, /* string: path to vshelper policy agent */
+ KERN_SETUID_DUMPABLE=69, /* int: behaviour of dumps for setuid core */
+ KERN_DUMP=70, /* dir: dump parameters */
+ KERN_VSHELPER=71, /* string: path to vshelper policy agent */
};
/* FIFO of established children */
struct open_request *accept_queue;
struct open_request *accept_queue_tail;
-
unsigned int keepalive_time; /* time before keep alive takes place */
unsigned int keepalive_intvl; /* time interval between keep alive probes */
int linger2;
typedef __kernel_gid32_t gid_t;
typedef __kernel_uid16_t uid16_t;
typedef __kernel_gid16_t gid16_t;
+
+/* The following two typedefs are for vserver */
typedef unsigned int xid_t;
typedef unsigned int nid_t;
atomic_add(amount, &vxi->limit.rcur[res]);
}
-
/* process and file limits */
#define vx_nproc_inc(p) \
#define vx_openfd_avail(n) \
vx_cres_avail(current->vx_info, n, VLIMIT_OPENFD)
-
/* socket limits */
#define vx_sock_inc(s) \
#define MAX_PRIO_BIAS 20
#define MIN_PRIO_BIAS -20
-#ifdef CONFIG_VSERVER_ACB_SCHED
-
-#define VX_INVALID_TICKS -1000000
-#define IS_BEST_EFFORT(vxi) (vx_info_flags(vxi, VXF_SCHED_SHARE, 0))
-
-int vx_tokens_avail(struct vx_info *vxi);
-void vx_consume_token(struct vx_info *vxi);
-void vx_scheduler_tick(void);
-void vx_advance_best_effort_ticks(int ticks);
-void vx_advance_guaranteed_ticks(int ticks);
-
-#else
static inline int vx_tokens_avail(struct vx_info *vxi)
{
atomic_dec(&vxi->sched.tokens);
}
-#endif /* CONFIG_VSERVER_ACB_SCHED */
-
static inline int vx_need_resched(struct task_struct *p)
{
#ifdef CONFIG_VSERVER_HARDCPU
#define VXF_SCHED_HARD 0x00000100
#define VXF_SCHED_PRIO 0x00000200
#define VXF_SCHED_PAUSE 0x00000400
-#define VXF_SCHED_SHARE 0x00000800
-
+
#define VXF_VIRT_MEM 0x00010000
#define VXF_VIRT_UPTIME 0x00020000
#define VXF_VIRT_CPU 0x00040000
#ifndef _VX_INODE_H
#define _VX_INODE_H
-
#define IATTR_XID 0x01000000
#define IATTR_ADMIN 0x00000001
#define vx_hide_check(c,m) (((m) & IATTR_HIDE) ? vx_check(c,m) : 1)
+extern int vc_get_iattr_v0(uint32_t, void __user *);
+extern int vc_set_iattr_v0(uint32_t, void __user *);
+
+extern int vc_get_iattr(uint32_t, void __user *);
+extern int vc_set_iattr(uint32_t, void __user *);
+
extern int vc_iattr_ioctl(struct dentry *de,
unsigned int cmd,
unsigned long arg);
-
#endif /* __KERNEL__ */
/* inode ioctls */
#define FIOC_GETXFLG _IOR('x', 5, long)
#define FIOC_SETXFLG _IOW('x', 6, long)
-
#define FIOC_GETIATTR _IOR('x', 7, long)
#define FIOC_SETIATTR _IOR('x', 8, long)
uint64_t unused[5]; /* cacheline ? */
};
-#ifdef CONFIG_VSERVER_ACB_SCHED
-enum {
-// Different scheduling classes
- SCH_GUARANTEE = 0,
- SCH_BEST_EFFORT = 1,
- SCH_NUM_CLASSES = 2,
-// States
- SCH_UNINITIALIZED,
- SCH_INITIALIZED,
-};
-#endif
-
/* context sub struct */
struct _vx_sched {
-#ifdef CONFIG_VSERVER_ACB_SCHED
- uint64_t ticks[SCH_NUM_CLASSES];
- uint64_t last_ticks[SCH_NUM_CLASSES];
- int state[SCH_NUM_CLASSES];
-#endif
atomic_t tokens; /* number of CPU tokens */
spinlock_t tokens_lock; /* lock for token bucket */
#ifndef __LINUX_NET_AFUNIX_H
#define __LINUX_NET_AFUNIX_H
+
+#include <linux/vs_base.h>
+
extern void unix_inflight(struct file *fp);
extern void unix_notinflight(struct file *fp);
extern void unix_gc(void);
return tcp_win_from_space(sk->sk_rcvbuf);
}
+struct tcp_listen_opt
+{
+ u8 max_qlen_log; /* log_2 of maximal queued SYNs */
+ int qlen;
+ int qlen_young;
+ int clock_hand;
+ u32 hash_rnd;
+ struct open_request *syn_table[TCP_SYNQ_HSIZE];
+};
+
static inline void tcp_acceptq_queue(struct sock *sk, struct open_request *req,
struct sock *child)
{
req->dl_next = NULL;
}
-struct tcp_listen_opt
-{
- u8 max_qlen_log; /* log_2 of maximal queued SYNs */
- int qlen;
- int qlen_young;
- int clock_hand;
- u32 hash_rnd;
- struct open_request *syn_table[TCP_SYNQ_HSIZE];
-};
static inline void
tcp_synq_removed(struct sock *sk, struct open_request *req)
--- /dev/null
+#ifndef __SOUND_SNDMAGIC_H
+#define __SOUND_SNDMAGIC_H
+
+/*
+ * Magic allocation, deallocation, check
+ * Copyright (c) 2000 by Abramo Bagnara <abramo@alsa-project.org>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+
+#ifdef CONFIG_SND_DEBUG_MEMORY
+
+void *_snd_magic_kcalloc(unsigned long magic, size_t size, int flags);
+void *_snd_magic_kmalloc(unsigned long magic, size_t size, int flags);
+
+/**
+ * snd_magic_kmalloc - allocate a record with a magic-prefix
+ * @type: the type to allocate a record (like xxx_t)
+ * @extra: the extra size to allocate in bytes
+ * @flags: the allocation condition (GFP_XXX)
+ *
+ * Allocates a record of the given type with the extra space and
+ * returns its pointer. The allocated record has a secret magic-key
+ * to be checked via snd_magic_cast() for safe casts.
+ *
+ * The allocated pointer must be released via snd_magic_kfree().
+ *
+ * The "struct xxx" style cannot be used as the type argument
+ * because the magic-key constant is generated from the type-name
+ * string.
+ */
+#define snd_magic_kmalloc(type, extra, flags) \
+ (type *) _snd_magic_kmalloc(type##_magic, sizeof(type) + extra, flags)
+/**
+ * snd_magic_kcalloc - allocate a record with a magic-prefix and initialize
+ * @type: the type to allocate a record (like xxx_t)
+ * @extra: the extra size to allocate in bytes
+ * @flags: the allocation condition (GFP_XXX)
+ *
+ * Works like snd_magic_kmalloc() but this clears the area with zero
+ * automatically.
+ */
+#define snd_magic_kcalloc(type, extra, flags) \
+ (type *) _snd_magic_kcalloc(type##_magic, sizeof(type) + extra, flags)
+
+/**
+ * snd_magic_kfree - release the allocated area
+ * @ptr: the pointer allocated via snd_magic_kmalloc() or snd_magic_kcalloc()
+ *
+ * Releases the memory area allocated via snd_magic_kmalloc() or
+ * snd_magic_kcalloc() function.
+ */
+void snd_magic_kfree(void *ptr);
+
+static inline unsigned long _snd_magic_value(void *obj)
+{
+ return obj == NULL ? (unsigned long)-1 : *(((unsigned long *)obj) - 1);
+}
+
+static inline int _snd_magic_bad(void *obj, unsigned long magic)
+{
+ return _snd_magic_value(obj) != magic;
+}
+
+#define snd_magic_cast1(t, expr, cmd) snd_magic_cast(t, expr, cmd)
+
+/**
+ * snd_magic_cast - check and cast the magic-allocated pointer
+ * @type: the type of record to cast
+ * @ptr: the magic-allocated pointer
+ * @action...: the action to take if the check fails
+ *
+ * This macro provides a safe cast for the given type, which was
+ * allocated via snd_magic_kmalloc() or snd_magic_kcalloc().
+ * If the pointer is invalid, i.e. the cast-type doesn't match,
+ * the action arguments are called with a debug message.
+ */
+#define snd_magic_cast(type, ptr, action...) \
+ (type *) ({\
+ void *__ptr = ptr;\
+ unsigned long __magic = _snd_magic_value(__ptr);\
+ if (__magic != type##_magic) {\
+ snd_printk("bad MAGIC (0x%lx)\n", __magic);\
+ action;\
+ }\
+ __ptr;\
+})
+
+#define snd_device_t_magic 0xa15a00ff
+#define snd_pcm_t_magic 0xa15a0101
+#define snd_pcm_file_t_magic 0xa15a0102
+#define snd_pcm_substream_t_magic 0xa15a0103
+#define snd_pcm_proc_private_t_magic 0xa15a0104
+#define snd_pcm_oss_file_t_magic 0xa15a0105
+#define snd_mixer_oss_t_magic 0xa15a0106
+// #define snd_pcm_sgbuf_t_magic 0xa15a0107
+
+#define snd_info_private_data_t_magic 0xa15a0201
+#define snd_info_entry_t_magic 0xa15a0202
+#define snd_ctl_file_t_magic 0xa15a0301
+#define snd_kcontrol_t_magic 0xa15a0302
+#define snd_rawmidi_t_magic 0xa15a0401
+#define snd_rawmidi_file_t_magic 0xa15a0402
+#define snd_virmidi_t_magic 0xa15a0403
+#define snd_virmidi_dev_t_magic 0xa15a0404
+#define snd_timer_t_magic 0xa15a0501
+#define snd_timer_user_t_magic 0xa15a0502
+#define snd_hwdep_t_magic 0xa15a0601
+#define snd_seq_device_t_magic 0xa15a0701
+
+#define es18xx_t_magic 0xa15a1101
+#define trident_t_magic 0xa15a1201
+#define es1938_t_magic 0xa15a1301
+#define cs46xx_t_magic 0xa15a1401
+#define cs46xx_pcm_t_magic 0xa15a1402
+#define ensoniq_t_magic 0xa15a1501
+#define sonicvibes_t_magic 0xa15a1601
+#define mpu401_t_magic 0xa15a1701
+#define fm801_t_magic 0xa15a1801
+#define ac97_t_magic 0xa15a1901
+#define ac97_bus_t_magic 0xa15a1902
+#define ak4531_t_magic 0xa15a1a01
+#define snd_uart16550_t_magic 0xa15a1b01
+#define emu10k1_t_magic 0xa15a1c01
+#define emu10k1_pcm_t_magic 0xa15a1c02
+#define emu10k1_midi_t_magic 0xa15a1c03
+#define snd_gus_card_t_magic 0xa15a1d01
+#define gus_pcm_private_t_magic 0xa15a1d02
+#define gus_proc_private_t_magic 0xa15a1d03
+#define tea6330t_t_magic 0xa15a1e01
+#define ad1848_t_magic 0xa15a1f01
+#define cs4231_t_magic 0xa15a2001
+#define es1688_t_magic 0xa15a2101
+#define opti93x_t_magic 0xa15a2201
+#define emu8000_t_magic 0xa15a2301
+#define emu8000_proc_private_t_magic 0xa15a2302
+#define snd_emux_t_magic 0xa15a2303
+#define snd_emux_port_t_magic 0xa15a2304
+#define sb_t_magic 0xa15a2401
+#define snd_sb_csp_t_magic 0xa15a2402
+#define snd_card_dummy_t_magic 0xa15a2501
+#define snd_card_dummy_pcm_t_magic 0xa15a2502
+#define opl3_t_magic 0xa15a2601
+#define opl4_t_magic 0xa15a2602
+#define snd_seq_dummy_port_t_magic 0xa15a2701
+#define ice1712_t_magic 0xa15a2801
+#define ad1816a_t_magic 0xa15a2901
+#define intel8x0_t_magic 0xa15a2a01
+#define es1968_t_magic 0xa15a2b01
+#define esschan_t_magic 0xa15a2b02
+#define via82xx_t_magic 0xa15a2c01
+#define pdplus_t_magic 0xa15a2d01
+#define cmipci_t_magic 0xa15a2e01
+#define ymfpci_t_magic 0xa15a2f01
+#define ymfpci_pcm_t_magic 0xa15a2f02
+#define cs4281_t_magic 0xa15a3001
+#define snd_i2c_bus_t_magic 0xa15a3101
+#define snd_i2c_device_t_magic 0xa15a3102
+#define cs8427_t_magic 0xa15a3111
+#define m3_t_magic 0xa15a3201
+#define m3_dma_t_magic 0xa15a3202
+#define nm256_t_magic 0xa15a3301
+#define nm256_dma_t_magic 0xa15a3302
+#define sam9407_t_magic 0xa15a3401
+#define pmac_t_magic 0xa15a3501
+#define ali_t_magic 0xa15a3601
+#define mtpav_t_magic 0xa15a3701
+#define mtpav_port_t_magic 0xa15a3702
+#define korg1212_t_magic 0xa15a3800
+#define opl3sa2_t_magic 0xa15a3900
+#define serialmidi_t_magic 0xa15a3a00
+#define sa11xx_uda1341_t_magic 0xa15a3b00
+#define uda1341_t_magic 0xa15a3c00
+#define l3_client_t_magic 0xa15a3d00
+#define snd_usb_audio_t_magic 0xa15a3e01
+#define usb_mixer_elem_info_t_magic 0xa15a3e02
+#define snd_usb_stream_t_magic 0xa15a3e03
+#define snd_usb_midi_t_magic 0xa15a3f01
+#define snd_usb_midi_out_endpoint_t_magic 0xa15a3f02
+#define snd_usb_midi_in_endpoint_t_magic 0xa15a3f03
+#define ak4117_t_magic 0xa15a4000
+#define psic_t_magic 0xa15a4100
+#define vx_core_t_magic 0xa15a4110
+#define vx_pipe_t_magic 0xa15a4112
+#define azf3328_t_magic 0xa15a4200
+#define snd_card_harmony_t_magic 0xa15a4300
+#define bt87x_t_magic 0xa15a4400
+#define pdacf_t_magic 0xa15a4500
+#define vortex_t_magic 0xa15a4601
+#define atiixp_t_magic 0xa15a4701
+#define amd7930_t_magic 0xa15a4801
+
+#else
+
+#define snd_magic_kcalloc(type, extra, flags) (type *) snd_kcalloc(sizeof(type) + extra, flags)
+#define snd_magic_kmalloc(type, extra, flags) (type *) kmalloc(sizeof(type) + extra, flags)
+#define snd_magic_cast(type, ptr, retval) (type *) ptr
+#define snd_magic_cast1(type, ptr, retval) snd_magic_cast(type, ptr, retval)
+#define snd_magic_kfree kfree
+
+#endif
+
+#endif /* __SOUND_SNDMAGIC_H */
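The debug allocator above can be exercised outside the kernel. Below is a minimal userspace sketch of the same magic-prefix technique; magic_alloc(), magic_bad(), magic_free() and widget_t_magic are illustrative names, not ALSA API. Each allocation is prefixed with a type-specific word, and casts are validated by checking that word, mirroring _snd_magic_value() and _snd_magic_bad().

```c
/* Userspace sketch of the CONFIG_SND_DEBUG_MEMORY magic-prefix idea.
 * magic_alloc()/magic_bad()/magic_free() are illustrative names, not
 * ALSA API; widget_t_magic is a hypothetical type constant. */
#include <stdlib.h>
#include <string.h>

#define widget_t_magic 0xa15aff01UL	/* hypothetical, follows the 0xa15aXXXX pattern */

/* allocate `size` bytes, zeroed, with a hidden magic word in front */
static void *magic_alloc(unsigned long magic, size_t size)
{
	unsigned long *p = malloc(sizeof(unsigned long) + size);

	if (!p)
		return NULL;
	*p = magic;			/* the secret key lives just before the object */
	memset(p + 1, 0, size);
	return p + 1;			/* the caller only ever sees the object */
}

/* mirrors _snd_magic_value(): NULL can never match a valid magic */
static unsigned long magic_value(void *obj)
{
	return obj == NULL ? (unsigned long)-1 : *(((unsigned long *)obj) - 1);
}

/* mirrors _snd_magic_bad(): non-zero means the cast would be unsafe */
static int magic_bad(void *obj, unsigned long magic)
{
	return magic_value(obj) != magic;
}

/* release an object obtained from magic_alloc() */
static void magic_free(void *obj)
{
	if (obj)
		free(((unsigned long *)obj) - 1);
}
```

A pointer allocated with widget_t_magic passes magic_bad() only for that constant; any other magic, or NULL, reports a bad cast, which is exactly what snd_magic_cast() checks before printing "bad MAGIC".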
environments which can tolerate a "non-standard" kernel.
Only use this if you really know what you are doing.
+config DELAY_ACCT
+ bool "Enable delay accounting (EXPERIMENTAL)"
+ help
+ In addition to counting how often delays occur, the total delay in
+ nanoseconds is also recorded. CPU delays are reported as cpu-wait
+ and cpu-run. I/O delays are recorded for memory and regular I/O.
+ The information is accessible through /proc/<pid>/delay.
+
config KALLSYMS
bool "Load all symbols for debugging/kksymoops" if EMBEDDED
default y
mounts-$(CONFIG_BLK_DEV_INITRD) += do_mounts_initrd.o
mounts-$(CONFIG_BLK_DEV_MD) += do_mounts_md.o
+extra-$(subst m,y,$(CONFIG_CRASH_DUMP)) += kerntypes.o
+CFLAGS_kerntypes.o := -gstabs
+
# files to be removed upon make clean
clean-files := ../include/linux/compile.h
--- /dev/null
+/*
+ * kerntypes.c
+ *
+ * Copyright (C) 2000 Tom Morano (tjm@sgi.com) and
+ * Matt D. Robinson (yakker@alacritech.com)
+ *
+ * Dummy module that includes headers for all kernel types of interest.
+ * The kernel type information is used by the lcrash utility when
+ * analyzing system crash dumps or the live system. Using the type
+ * information for the running system, rather than kernel header files,
+ * makes for a more flexible and robust analysis tool.
+ *
+ * This source code is released under version 2 of the GNU GPL.
+ */
+
+#include <linux/compile.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/config.h>
+#include <linux/utsname.h>
+#include <linux/dump.h>
+
+#ifdef LINUX_COMPILE_VERSION_ID_TYPE
+/* Define version type for version validation of dump and kerntypes */
+LINUX_COMPILE_VERSION_ID_TYPE;
+#endif
+
+void
+kerntypes_dummy(void)
+{
+}
* firmware files.
*/
populate_rootfs();
-
do_basic_setup();
-
sched_init_smp();
/*
#include <linux/uts.h>
#include <linux/utsname.h>
#include <linux/version.h>
+#include <linux/stringify.h>
#define version(a) Version_ ## a
#define version_string(a) version(a)
"Linux version " UTS_RELEASE " (" LINUX_COMPILE_BY "@"
LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION "\n";
+const char *LINUX_COMPILE_VERSION_ID = __stringify(LINUX_COMPILE_VERSION_ID);
+LINUX_COMPILE_VERSION_ID_TYPE;
+
const char vx_linux_banner[] =
"Linux version %s (" LINUX_COMPILE_BY "@"
LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") %s\n";
-
if (inode)
atomic_inc(&inode->i_count);
- err = vfs_unlink(dentry->d_parent->d_inode, dentry, NULL);
+ err = vfs_unlink(dentry->d_parent->d_inode, dentry);
out_err:
dput(dentry);
#include <linux/list.h>
#include <linux/security.h>
#include <linux/sched.h>
+#include <linux/vs_base.h>
#include <linux/syscalls.h>
#include <linux/audit.h>
#include <asm/current.h>
#include <linux/time.h>
#include <linux/smp_lock.h>
#include <linux/security.h>
+#include <linux/vs_base.h>
#include <linux/syscalls.h>
#include <linux/audit.h>
#include <asm/uaccess.h>
#include <linux/security.h>
#include <linux/rcupdate.h>
#include <linux/workqueue.h>
+#include <linux/vs_base.h>
+
#include <asm/unistd.h>
rcupdate.o intermodule.o extable.o params.o posix-timers.o \
kthread.o wait.o kfifo.o sys_ni.o posix-cpu-timers.o dump.o
+# mod-subdirs := vserver
+
+subdir-y += vserver
+obj-y += vserver/vserver.o
+
subdir-y += vserver
obj-y += vserver/vserver.o
#include <linux/mm.h>
#include <linux/module.h>
#include <linux/security.h>
+#include <linux/vs_cvirt.h>
#include <linux/syscalls.h>
#include <linux/vs_cvirt.h>
#include <asm/uaccess.h>
while (set) {
if (set & 1) {
struct file * file = xchg(&files->fd[i], NULL);
- if (file)
+ if (file)
filp_close(file, files);
vx_openfd_dec(i);
}
flag = 1;
if (!unlikely(options & WCONTINUED))
continue;
+
retval = wait_task_continued(
p, (options & WNOWAIT),
infop, stat_addr, ru);
goto fork_out;
p->tux_info = NULL;
+ p->vx_info = NULL;
+ set_vx_info(&p->vx_info, current->vx_info);
+ p->nx_info = NULL;
+ set_nx_info(&p->nx_info, current->nx_info);
+
+ /* check vserver memory */
+ if (p->mm && !(clone_flags & CLONE_VM)) {
+ if (vx_vmpages_avail(p->mm, p->mm->total_vm))
+ vx_pages_add(p->mm->mm_vx_info, RLIMIT_AS, p->mm->total_vm);
+ else
+ goto bad_fork_free;
+ }
+ if (p->mm && vx_flags(VXF_FORK_RSS, 0)) {
+ if (!vx_rsspages_avail(p->mm, get_mm_counter(p->mm, rss)))
+ goto bad_fork_cleanup_vm;
+ }
+
init_vx_info(&p->vx_info, current->vx_info);
init_nx_info(&p->nx_info, current->nx_info);
__get_cpu_var(process_counts)++;
}
+ p->ioprio = current->ioprio;
nr_threads++;
total_forks++;
}
p = copy_process(clone_flags, stack_start, regs, stack_size, parent_tidptr, child_tidptr, pid);
+
/*
* Do this prior waking up the new thread - the thread pointer
* might get invalid after that point, if the thread exits quickly.
int tainted;
unsigned int crashed;
int crash_dump_on;
+void (*dump_function_ptr)(const char *, const struct pt_regs *) = 0;
EXPORT_SYMBOL(panic_timeout);
+EXPORT_SYMBOL(dump_function_ptr);
struct notifier_block *panic_notifier_list;
/* If we have crashed, perform a kexec reboot, for dump write-out */
crash_machine_kexec();
+ notifier_call_chain(&panic_notifier_list, 0, buf);
+
#ifdef CONFIG_SMP
smp_send_stop();
#endif
* We can't use the "normal" timers since we just panicked..
*/
printk(KERN_EMERG "Rebooting in %d seconds..",panic_timeout);
+#ifdef CONFIG_KEXEC
+ {
+ struct kimage *image;
+ image = xchg(&kexec_image, 0);
+ if (image) {
+ printk(KERN_EMERG "by starting a new kernel ..\n");
+ mdelay(panic_timeout*1000);
+ machine_kexec(image);
+ }
+ }
+#endif
for (i = 0; i < panic_timeout*1000; ) {
touch_nmi_watchdog();
i += panic_blink(i);
--- /dev/null
+/*
+ * kernel/power/pmdisk.c - Suspend-to-disk implementation
+ *
+ * This STD implementation is initially derived from swsusp (suspend-to-swap).
+ * The original copyright on that was:
+ *
+ * Copyright (C) 1998-2001 Gabor Kuti <seasons@fornax.hu>
+ * Copyright (C) 1998,2001,2002 Pavel Machek <pavel@suse.cz>
+ *
+ * The additional parts are:
+ *
+ * Copyright (C) 2003 Patrick Mochel
+ * Copyright (C) 2003 Open Source Development Lab
+ *
+ * This file is released under the GPLv2.
+ *
+ * For more information, please see the text files in Documentation/power/
+ *
+ */
+
+#undef DEBUG
+
+#include <linux/mm.h>
+#include <linux/bio.h>
+#include <linux/suspend.h>
+#include <linux/version.h>
+#include <linux/reboot.h>
+#include <linux/device.h>
+#include <linux/swapops.h>
+#include <linux/bootmem.h>
+#include <linux/utsname.h>
+
+#include <asm/mmu_context.h>
+
+#include "power.h"
+
+
+extern asmlinkage int pmdisk_arch_suspend(int resume);
+
+#define __ADDRESS(x) ((unsigned long) phys_to_virt(x))
+#define ADDRESS(x) __ADDRESS((x) << PAGE_SHIFT)
+#define ADDRESS2(x) __ADDRESS(__pa(x)) /* Needed for x86-64 where some pages are in memory twice */
+
+/* References to section boundaries */
+extern char __nosave_begin, __nosave_end;
+
+extern int is_head_of_free_region(struct page *);
+
+/* Variables to be preserved over suspend */
+static int pagedir_order_check;
+static int nr_copy_pages_check;
+
+/* For resume= kernel option */
+static char resume_file[256] = CONFIG_PM_DISK_PARTITION;
+
+static dev_t resume_device;
+/* Local variables that should not be affected by save */
+unsigned int pmdisk_pages __nosavedata = 0;
+
+/* The suspend pagedir is allocated before the final copy, and therefore
+ it must be freed after resume.
+
+ Warning: this is evil. There are actually two pagedirs at the time of
+ resume. One is "pagedir_save", an empty frame allocated at suspend
+ time, which must be freed. The second is "pagedir_nosave", allocated
+ at resume time, which travels through memory so as not to collide
+ with anything.
+ */
+suspend_pagedir_t *pm_pagedir_nosave __nosavedata = NULL;
+static suspend_pagedir_t *pagedir_save;
+static int pagedir_order __nosavedata = 0;
+
+
+struct pmdisk_info {
+ struct new_utsname uts;
+ u32 version_code;
+ unsigned long num_physpages;
+ int cpus;
+ unsigned long image_pages;
+ unsigned long pagedir_pages;
+ swp_entry_t pagedir[768];
+} __attribute__((aligned(PAGE_SIZE))) pmdisk_info;
+
+
+
+#define PMDISK_SIG "pmdisk-swap1"
+
+struct pmdisk_header {
+ char reserved[PAGE_SIZE - 20 - sizeof(swp_entry_t)];
+ swp_entry_t pmdisk_info;
+ char orig_sig[10];
+ char sig[10];
+} __attribute__((packed, aligned(PAGE_SIZE))) pmdisk_header;
+
+/*
+ * XXX: We try to keep some more pages free so that I/O operations succeed
+ * without paging. Might this be more?
+ */
+#define PAGES_FOR_IO 512
+
+
+/*
+ * Saving part...
+ */
+
+
+/* We memorize in swapfile_used what swap devices are used for suspension */
+#define SWAPFILE_UNUSED 0
+#define SWAPFILE_SUSPEND 1 /* This is the suspending device */
+#define SWAPFILE_IGNORED 2 /* Those are other swap devices ignored for suspension */
+
+static unsigned short swapfile_used[MAX_SWAPFILES];
+static unsigned short root_swap;
+
+
+static int mark_swapfiles(swp_entry_t prev)
+{
+ int error;
+
+ rw_swap_page_sync(READ,
+ swp_entry(root_swap, 0),
+ virt_to_page((unsigned long)&pmdisk_header));
+ if (!memcmp("SWAP-SPACE",pmdisk_header.sig,10) ||
+ !memcmp("SWAPSPACE2",pmdisk_header.sig,10)) {
+ memcpy(pmdisk_header.orig_sig,pmdisk_header.sig,10);
+ memcpy(pmdisk_header.sig,PMDISK_SIG,10);
+ pmdisk_header.pmdisk_info = prev;
+ error = rw_swap_page_sync(WRITE,
+ swp_entry(root_swap, 0),
+ virt_to_page((unsigned long)
+ &pmdisk_header));
+ } else {
+ pr_debug("pmdisk: Partition is not swap space.\n");
+ error = -ENODEV;
+ }
+ return error;
+}
+
+static int read_swapfiles(void) /* This is called before saving image */
+{
+ int i, len;
+
+ len=strlen(resume_file);
+ root_swap = 0xFFFF;
+
+ swap_list_lock();
+ for(i=0; i<MAX_SWAPFILES; i++) {
+ if (swap_info[i].flags == 0) {
+ swapfile_used[i]=SWAPFILE_UNUSED;
+ } else {
+ if(!len) {
+ pr_debug("pmdisk: Default resume partition not set.\n");
+ if(root_swap == 0xFFFF) {
+ swapfile_used[i] = SWAPFILE_SUSPEND;
+ root_swap = i;
+ } else
+ swapfile_used[i] = SWAPFILE_IGNORED;
+ } else {
+ /* we ignore all swap devices that are not the resume_file */
+ if (1) {
+// FIXME if(resume_device == swap_info[i].swap_device) {
+ swapfile_used[i] = SWAPFILE_SUSPEND;
+ root_swap = i;
+ } else
+ swapfile_used[i] = SWAPFILE_IGNORED;
+ }
+ }
+ }
+ swap_list_unlock();
+ return (root_swap != 0xffff) ? 0 : -ENODEV;
+}
+
+
+/* This is called after saving the image, so any modification
+ will be lost after resume... and that's what we want. */
+static void lock_swapdevices(void)
+{
+ int i;
+
+ swap_list_lock();
+ for(i = 0; i< MAX_SWAPFILES; i++)
+ if(swapfile_used[i] == SWAPFILE_IGNORED) {
+ swap_info[i].flags ^= 0xFF; /* we make the device unusable. A new call to
+ lock_swapdevices can unlock the devices. */
+ }
+ swap_list_unlock();
+}
+
+
+
+/**
+ * write_swap_page - Write one page to a fresh swap location.
+ * @addr: Address we're writing.
+ * @loc: Place to store the entry we used.
+ *
+ * Allocate a new swap entry and 'sync' it. Note we discard -EIO
+ * errors. That is an artifact left over from swsusp. It did not
+ * check the return of rw_swap_page_sync() at all, since most pages
+ * written back to swap would return -EIO.
+ * This is a partial improvement, since we will at least return other
+ * errors, though we need to eventually fix the damn code.
+ */
+
+static int write_swap_page(unsigned long addr, swp_entry_t * loc)
+{
+ swp_entry_t entry;
+ int error = 0;
+
+ entry = get_swap_page();
+ if (swp_offset(entry) &&
+ swapfile_used[swp_type(entry)] == SWAPFILE_SUSPEND) {
+ error = rw_swap_page_sync(WRITE, entry,
+ virt_to_page(addr));
+ if (error == -EIO)
+ error = 0;
+ if (!error)
+ *loc = entry;
+ } else
+ error = -ENOSPC;
+ return error;
+}
+
+
+/**
+ * free_data - Free the swap entries used by the saved image.
+ *
+ * Walk the list of used swap entries and free each one.
+ */
+
+static void free_data(void)
+{
+ swp_entry_t entry;
+ int i;
+
+ for (i = 0; i < pmdisk_pages; i++) {
+ entry = (pm_pagedir_nosave + i)->swap_address;
+ if (entry.val)
+ swap_free(entry);
+ else
+ break;
+ (pm_pagedir_nosave + i)->swap_address = (swp_entry_t){0};
+ }
+}
+
+
+/**
+ * write_data - Write saved image to swap.
+ *
+ * Walk the list of pages in the image and sync each one to swap.
+ */
+
+static int write_data(void)
+{
+ int error = 0;
+ int i;
+
+ printk( "Writing data to swap (%d pages): ", pmdisk_pages );
+ for (i = 0; i < pmdisk_pages && !error; i++) {
+ if (!(i%100))
+ printk( "." );
+ error = write_swap_page((pm_pagedir_nosave+i)->address,
+ &((pm_pagedir_nosave+i)->swap_address));
+ }
+ printk(" %d Pages done.\n",i);
+ return error;
+}
+
+
+/**
+ * free_pagedir - Free pages used by the page directory.
+ */
+
+static void free_pagedir_entries(void)
+{
+ int num = pmdisk_info.pagedir_pages;
+ int i;
+
+ for (i = 0; i < num; i++)
+ swap_free(pmdisk_info.pagedir[i]);
+}
+
+
+/**
+ * write_pagedir - Write the array of pages holding the page directory.
+ */
+
+static int write_pagedir(void)
+{
+ unsigned long addr = (unsigned long)pm_pagedir_nosave;
+ int error = 0;
+ int n = SUSPEND_PD_PAGES(pmdisk_pages);
+ int i;
+
+ pmdisk_info.pagedir_pages = n;
+ printk( "Writing pagedir (%d pages)\n", n);
+ for (i = 0; i < n && !error; i++, addr += PAGE_SIZE)
+ error = write_swap_page(addr,&pmdisk_info.pagedir[i]);
+ return error;
+}
+
+
+#ifdef DEBUG
+static void dump_pmdisk_info(void)
+{
+ printk(" pmdisk: Version: %u\n",pmdisk_info.version_code);
+ printk(" pmdisk: Num Pages: %ld\n",pmdisk_info.num_physpages);
+ printk(" pmdisk: UTS Sys: %s\n",pmdisk_info.uts.sysname);
+ printk(" pmdisk: UTS Node: %s\n",pmdisk_info.uts.nodename);
+ printk(" pmdisk: UTS Release: %s\n",pmdisk_info.uts.release);
+ printk(" pmdisk: UTS Version: %s\n",pmdisk_info.uts.version);
+ printk(" pmdisk: UTS Machine: %s\n",pmdisk_info.uts.machine);
+ printk(" pmdisk: UTS Domain: %s\n",pmdisk_info.uts.domainname);
+ printk(" pmdisk: CPUs: %d\n",pmdisk_info.cpus);
+ printk(" pmdisk: Image: %ld Pages\n",pmdisk_info.image_pages);
+ printk(" pmdisk: Pagedir: %ld Pages\n",pmdisk_info.pagedir_pages);
+}
+#else
+static void dump_pmdisk_info(void)
+{
+
+}
+#endif
+
+static void init_header(void)
+{
+ memset(&pmdisk_info,0,sizeof(pmdisk_info));
+ pmdisk_info.version_code = LINUX_VERSION_CODE;
+ pmdisk_info.num_physpages = num_physpages;
+ memcpy(&pmdisk_info.uts,&system_utsname,sizeof(system_utsname));
+
+ pmdisk_info.cpus = num_online_cpus();
+ pmdisk_info.image_pages = pmdisk_pages;
+}
+
+/**
+ * write_header - Fill and write the suspend header.
+ * @entry: Where to store the swap entry of the header page.
+ *
+ * Allocate a swap page, fill in the header, and write it out.
+ * On exit, @entry contains the swap location of the header, which
+ * mark_swapfiles() then records on disk.
+ */
+
+static int write_header(swp_entry_t * entry)
+{
+ dump_pmdisk_info();
+ return write_swap_page((unsigned long)&pmdisk_info,entry);
+}
+
+
+
+/**
+ * write_suspend_image - Write entire image and metadata.
+ *
+ */
+
+static int write_suspend_image(void)
+{
+ int error;
+ swp_entry_t prev = { 0 };
+
+ init_header();
+
+ if ((error = write_data()))
+ goto FreeData;
+
+ if ((error = write_pagedir()))
+ goto FreePagedir;
+
+ if ((error = write_header(&prev)))
+ goto FreePagedir;
+
+ error = mark_swapfiles(prev);
+ Done:
+ return error;
+ FreePagedir:
+ free_pagedir_entries();
+ FreeData:
+ free_data();
+ goto Done;
+}
+
+
+
+/**
+ * saveable - Determine whether a page should be cloned or not.
+ * @pfn: The page
+ *
+ * We save a page if it's Reserved, and not in the range of pages
+ * statically defined as 'unsaveable', or if it isn't reserved, and
+ * isn't part of a free chunk of pages.
+ * If it is part of a free chunk, we update @pfn to point to the last
+ * page of the chunk.
+ */
+
+static int saveable(unsigned long * pfn)
+{
+ struct page * page = pfn_to_page(*pfn);
+
+ if (PageNosave(page))
+ return 0;
+
+ if (!PageReserved(page)) {
+ int chunk_size;
+
+ if ((chunk_size = is_head_of_free_region(page))) {
+ *pfn += chunk_size - 1;
+ return 0;
+ }
+ } else if (PageReserved(page)) {
+ /* Just copy whole code segment.
+ * Hopefully it is not that big.
+ */
+ if ((ADDRESS(*pfn) >= (unsigned long) ADDRESS2(&__nosave_begin)) &&
+ (ADDRESS(*pfn) < (unsigned long) ADDRESS2(&__nosave_end))) {
+ pr_debug("[nosave %lx]\n", ADDRESS(*pfn));
+ return 0;
+ }
+ /* Hmm, perhaps copying all reserved pages is not
+ * entirely healthy, as they may contain
+ * critical BIOS data?
+ */
+ }
+ return 1;
+}
+
+
+
+/**
+ * count_pages - Determine size of page directory.
+ *
+ * Iterate over all the pages in the system and tally the number
+ * we need to clone.
+ */
+
+static void count_pages(void)
+{
+ unsigned long pfn;
+ int n = 0;
+
+ for (pfn = 0; pfn < max_pfn; pfn++) {
+ if (saveable(&pfn))
+ n++;
+ }
+ pmdisk_pages = n;
+}
+
+
+/**
+ * copy_pages - Atomically snapshot memory.
+ *
+ * Iterate over all the pages in the system and copy each one
+ * into its corresponding location in the pagedir.
+ * We rely on the fact that the number of pages we are
+ * snapshotting has not changed since we counted them.
+ */
+
+static void copy_pages(void)
+{
+ struct pbe * p = pagedir_save;
+ unsigned long pfn;
+ int n = 0;
+
+ for (pfn = 0; pfn < max_pfn; pfn++) {
+ if (saveable(&pfn)) {
+ n++;
+ p->orig_address = ADDRESS(pfn);
+ copy_page((void *) p->address,
+ (void *) p->orig_address);
+ p++;
+ }
+ }
+ BUG_ON(n != pmdisk_pages);
+}
+
+
+/**
+ * free_image_pages - Free each page allocated for snapshot.
+ */
+
+static void free_image_pages(void)
+{
+ struct pbe * p;
+ int i;
+
+ for (i = 0, p = pagedir_save; i < pmdisk_pages; i++, p++) {
+ ClearPageNosave(virt_to_page(p->address));
+ free_page(p->address);
+ }
+}
+
+
+/**
+ * free_pagedir - Free the page directory.
+ */
+
+static void free_pagedir(void)
+{
+ free_image_pages();
+ free_pages((unsigned long)pagedir_save, pagedir_order);
+}
+
+
+static void calc_order(void)
+{
+ int diff;
+ int order;
+
+ order = get_bitmask_order(SUSPEND_PD_PAGES(pmdisk_pages));
+ pmdisk_pages += 1 << order;
+ do {
+ diff = get_bitmask_order(SUSPEND_PD_PAGES(pmdisk_pages)) - order;
+ if (diff) {
+ order += diff;
+ pmdisk_pages += 1 << diff;
+ }
+ } while(diff);
+ pagedir_order = order;
+}
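calc_order() above chases a fixed point: the page directory itself occupies pages, those pages must also be saved, and saving them can grow the directory again, so it iterates until the allocation order stabilizes. A userspace sketch of the same iteration follows; PBES_PER_PAGE and the bitmask-order helper are assumptions standing in for SUSPEND_PD_PAGES() and get_bitmask_order().

```c
/* Userspace sketch of the pagedir-order fixed point in calc_order().
 * PBES_PER_PAGE is a hypothetical count of pbe entries per directory
 * page; PD_PAGES() mirrors SUSPEND_PD_PAGES(). */
#include <stddef.h>

#define PBES_PER_PAGE 128
#define PD_PAGES(n) (((n) + PBES_PER_PAGE - 1) / PBES_PER_PAGE)

/* smallest order such that (1 << order) covers `pages` pages,
 * an assumption standing in for get_bitmask_order() */
static int bitmask_order(unsigned long pages)
{
	int order = 0;

	while ((1UL << order) < pages)
		order++;
	return order;
}

/* Returns the allocation order for the page directory and grows
 * *pages to include the directory pages themselves, iterating
 * because adding directory pages can enlarge the directory. */
static int calc_pagedir_order(unsigned long *pages)
{
	int order = bitmask_order(PD_PAGES(*pages));
	int diff;

	*pages += 1UL << order;
	do {
		diff = bitmask_order(PD_PAGES(*pages)) - order;
		if (diff) {
			order += diff;
			*pages += 1UL << diff;
		}
	} while (diff);
	return order;
}
```

For 1000 image pages and 128 entries per page, eight directory pages are needed, so an order-3 block is allocated and the total grows to 1008 pages, which still fits in eight directory pages, so the loop terminates on the first pass.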
+
+
+/**
+ * alloc_pagedir - Allocate the page directory.
+ *
+ * First, determine exactly how many contiguous pages we need,
+ * allocate them, and then mark each one 'unsaveable'.
+ */
+
+static int alloc_pagedir(void)
+{
+ calc_order();
+ pagedir_save = (suspend_pagedir_t *)__get_free_pages(GFP_ATOMIC | __GFP_COLD,
+ pagedir_order);
+ if(!pagedir_save)
+ return -ENOMEM;
+ memset(pagedir_save,0,(1 << pagedir_order) * PAGE_SIZE);
+ pm_pagedir_nosave = pagedir_save;
+ return 0;
+}
+
+
+/**
+ * alloc_image_pages - Allocate pages for the snapshot.
+ *
+ */
+
+static int alloc_image_pages(void)
+{
+ struct pbe * p;
+ int i;
+
+ for (i = 0, p = pagedir_save; i < pmdisk_pages; i++, p++) {
+ p->address = get_zeroed_page(GFP_ATOMIC | __GFP_COLD);
+ if(!p->address)
+ goto Error;
+ SetPageNosave(virt_to_page(p->address));
+ }
+ return 0;
+ Error:
+ do {
+ if (p->address)
+ free_page(p->address);
+ p->address = 0;
+ } while (p-- > pagedir_save);
+ return -ENOMEM;
+}
+
+
+/**
+ * enough_free_mem - Make sure we have enough free memory to snapshot.
+ *
+ * Returns TRUE or FALSE after checking the number of available
+ * free pages.
+ */
+
+static int enough_free_mem(void)
+{
+ if(nr_free_pages() < (pmdisk_pages + PAGES_FOR_IO)) {
+ pr_debug("pmdisk: Not enough free pages: Have %d\n",
+ nr_free_pages());
+ return 0;
+ }
+ return 1;
+}
+
+
+/**
+ * enough_swap - Make sure we have enough swap to save the image.
+ *
+ * Returns TRUE or FALSE after checking the total amount of swap
+ * space available.
+ *
+ * FIXME: si_swapinfo(&i) returns information about all swap devices.
+ * We should only consider resume_device.
+ */
+
+static int enough_swap(void)
+{
+ struct sysinfo i;
+
+ si_swapinfo(&i);
+ if (i.freeswap < (pmdisk_pages + PAGES_FOR_IO)) {
+ pr_debug("pmdisk: Not enough swap. Need %ld\n",i.freeswap);
+ return 0;
+ }
+ return 1;
+}
+
+
+/**
+ * pmdisk_suspend - Atomically snapshot the system.
+ *
+ * This must be called with interrupts disabled, to prevent the
+ * system changing at all from underneath us.
+ *
+ * To do this, we count the number of pages in the system that we
+ * need to save; make sure we have enough memory and swap to clone
+ * the pages and save them in swap, allocate the space to hold them,
+ * and then snapshot them all.
+ */
+
+int pmdisk_suspend(void)
+{
+ int error = 0;
+
+ if ((error = read_swapfiles()))
+ return error;
+
+ drain_local_pages();
+
+ pm_pagedir_nosave = NULL;
+ pr_debug("pmdisk: Counting pages to copy.\n" );
+ count_pages();
+
+ pr_debug("pmdisk: (pages needed: %d + %d free: %d)\n",
+ pmdisk_pages,PAGES_FOR_IO,nr_free_pages());
+
+ if (!enough_free_mem())
+ return -ENOMEM;
+
+ if (!enough_swap())
+ return -ENOSPC;
+
+ if ((error = alloc_pagedir())) {
+ pr_debug("pmdisk: Allocating pagedir failed.\n");
+ return error;
+ }
+ if ((error = alloc_image_pages())) {
+ pr_debug("pmdisk: Allocating image pages failed.\n");
+ free_pagedir();
+ return error;
+ }
+
+ nr_copy_pages_check = pmdisk_pages;
+ pagedir_order_check = pagedir_order;
+
+ /* During allocation of the suspend pagedir, new cold pages may appear.
+ * Kill them
+ */
+ drain_local_pages();
+
+ /* copy */
+ copy_pages();
+
+ /*
+ * End of critical section. From now on, we can write to memory,
+ * but we should not touch disk. This specially means we must _not_
+ * touch swap space! Except we must write out our image of course.
+ */
+
+ pr_debug("pmdisk: %d pages copied\n", pmdisk_pages );
+ return 0;
+}
+
+
+/**
+ * suspend_save_image - Prepare and write saved image to swap.
+ *
+ * IRQs are re-enabled here so we can resume devices and safely write
+ * to the swap devices. We disable them again before we leave.
+ *
+ * The second lock_swapdevices() will unlock ignored swap devices since
+ * writing is finished.
+ * It is important _NOT_ to unmount filesystems at this point. We want
+ * them synced (in case something goes wrong), but we DO NOT want to
+ * mark the filesystems clean: they are not. (And it does not matter;
+ * if we resume correctly, we'll mark the system clean anyway.)
+ */
+
+static int suspend_save_image(void)
+{
+ int error;
+ device_resume();
+ lock_swapdevices();
+ error = write_suspend_image();
+ lock_swapdevices();
+ return error;
+}
+
+/*
+ * Magic happens here
+ */
+
+int pmdisk_resume(void)
+{
+ BUG_ON (nr_copy_pages_check != pmdisk_pages);
+ BUG_ON (pagedir_order_check != pagedir_order);
+
+ /* Even mappings of "global" things (vmalloc) need to be fixed */
+ __flush_tlb_global();
+ return 0;
+}
+
+/* pmdisk_arch_suspend() is implemented in arch/?/power/pmdisk.S,
+ and basically does:
+
+ if (!resume) {
+ save_processor_state();
+ SAVE_REGISTERS
+ return pmdisk_suspend();
+ }
+ GO_TO_SWAPPER_PAGE_TABLES
+ COPY_PAGES_BACK
+ RESTORE_REGISTERS
+ restore_processor_state();
+ return pmdisk_resume();
+
+ */
+
+
+/* More restore stuff */
+
+#define does_collide(addr) does_collide_order(pm_pagedir_nosave, addr, 0)
+
+/*
+ * Returns true if the given address/order collides with any orig_address.
+ */
+static int __init does_collide_order(suspend_pagedir_t *pagedir,
+ unsigned long addr, int order)
+{
+ int i;
+ unsigned long addre = addr + (PAGE_SIZE<<order);
+
+ for(i=0; i < pmdisk_pages; i++)
+ if((pagedir+i)->orig_address >= addr &&
+ (pagedir+i)->orig_address < addre)
+ return 1;
+
+ return 0;
+}
+
+/*
+ * Check that the pagedir and the pages it points to won't collide with
+ * the pages that the loaded image will later be restored to.
+ */
+static int __init check_pagedir(void)
+{
+ int i;
+
+ for(i=0; i < pmdisk_pages; i++) {
+ unsigned long addr;
+
+ do {
+ addr = get_zeroed_page(GFP_ATOMIC);
+ if(!addr)
+ return -ENOMEM;
+ } while (does_collide(addr));
+
+ (pm_pagedir_nosave+i)->address = addr;
+ }
+ return 0;
+}
+
+static int __init relocate_pagedir(void)
+{
+ /*
+ * We have to avoid recursion (so as not to overflow the kernel stack),
+ * and that's why the code looks pretty cryptic.
+ */
+ suspend_pagedir_t *old_pagedir = pm_pagedir_nosave;
+ void **eaten_memory = NULL;
+ void **c = eaten_memory, *m, *f;
+ int err;
+
+ pr_debug("pmdisk: Relocating pagedir\n");
+
+ if(!does_collide_order(old_pagedir, (unsigned long)old_pagedir, pagedir_order)) {
+ pr_debug("pmdisk: Relocation not necessary\n");
+ return 0;
+ }
+
+ err = -ENOMEM;
+ while ((m = (void *) __get_free_pages(GFP_ATOMIC, pagedir_order)) != NULL) {
+ if (!does_collide_order(old_pagedir, (unsigned long)m,
+ pagedir_order)) {
+ pm_pagedir_nosave =
+ memcpy(m, old_pagedir,
+ PAGE_SIZE << pagedir_order);
+ err = 0;
+ break;
+ }
+ eaten_memory = m;
+ printk(".");
+ *eaten_memory = c;
+ c = eaten_memory;
+ }
+
+ c = eaten_memory;
+ while(c) {
+ printk(":");
+ f = c;
+ c = *c;
+ free_pages((unsigned long)f, pagedir_order);
+ }
+ printk("|\n");
+ return err;
+}
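The allocate-until-it-fits loop in relocate_pagedir() chains every rejected block through its own first word, so the rejects can be freed later without extra bookkeeping memory or recursion. A userspace sketch of just that trick (malloc stands in for __get_free_pages; the `sketch_` names are invented):

```c
#include <stdlib.h>

/* Allocate blocks until the caller's predicate accepts one; rejected
 * blocks are threaded into a list through their own storage, then all
 * freed. Returns how many blocks were "eaten" along the way. */
size_t sketch_eat_and_free(int (*reject)(void *), int max_tries)
{
	void **chain = NULL;
	size_t eaten = 0;
	int i;

	for (i = 0; i < max_tries; i++) {
		void **m = malloc(64);
		if (!m)
			break;
		if (!reject(m)) {	/* usable block found */
			free(m);	/* (a real caller would keep it) */
			break;
		}
		*m = chain;		/* link the reject through itself */
		chain = m;
		eaten++;
	}

	while (chain) {			/* unwind, freeing as we go */
		void **next = *chain;
		free(chain);
		chain = next;
	}
	return eaten;
}

/* Test helper: reject the first two allocations, accept the third. */
int sketch_reject_first_two(void *p)
{
	static int calls;
	(void)p;
	return calls++ < 2;
}
```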
+
+
+static struct block_device * resume_bdev;
+
+
+/**
+ * Using bio to read from swap.
+ * This code requires a bit more work than just using buffer heads,
+ * but it is the recommended way for 2.5/2.6.
+ * The following are used to signal the beginning and end of I/O. Bios
+ * finish asynchronously, while we want them to happen synchronously.
+ * A simple atomic_t and a wait loop take care of this problem.
+ */
+
+static atomic_t io_done = ATOMIC_INIT(0);
+
+static void start_io(void)
+{
+ atomic_set(&io_done,1);
+}
+
+static int end_io(struct bio * bio, unsigned int num, int err)
+{
+ atomic_set(&io_done,0);
+ return 0;
+}
+
+static void wait_io(void)
+{
+ while(atomic_read(&io_done))
+ io_schedule();
+}
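The start_io()/end_io()/wait_io() trio above implements a synchronous wait on an asynchronous completion with nothing but an atomic flag. A userspace sketch of the same pattern (names assumed; the completion is invoked directly for simplicity, where the kernel's runs from the bio layer, and io_schedule() is replaced by a spin):

```c
#include <stdatomic.h>

atomic_int sketch_io_done;

void sketch_start_io(void) { atomic_store(&sketch_io_done, 1); }
void sketch_end_io(void)   { atomic_store(&sketch_io_done, 0); }

/* Raise the flag, fire the completion, then wait for the flag to drop. */
int sketch_submit_and_wait(void (*complete)(void))
{
	sketch_start_io();
	complete();			/* models the bio end_io callback */
	while (atomic_load(&sketch_io_done))
		;			/* wait_io(); the kernel would io_schedule() */
	return 0;
}
```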
+
+
+/**
+ * submit - submit BIO request.
+ * @rw: READ or WRITE.
+ * @page_off: physical offset of page.
+ * @page: page we're reading or writing.
+ *
+ * Straight from the textbook - allocate and initialize the bio.
+ * If we're writing, make sure the page is marked as dirty.
+ * Then submit it and wait.
+ */
+
+static int submit(int rw, pgoff_t page_off, void * page)
+{
+ int error = 0;
+ struct bio * bio;
+
+ bio = bio_alloc(GFP_ATOMIC,1);
+ if (!bio)
+ return -ENOMEM;
+ bio->bi_sector = page_off * (PAGE_SIZE >> 9);
+ bio_get(bio);
+ bio->bi_bdev = resume_bdev;
+ bio->bi_end_io = end_io;
+
+ if (bio_add_page(bio, virt_to_page(page), PAGE_SIZE, 0) < PAGE_SIZE) {
+ printk("pmdisk: ERROR: adding page to bio at %lu\n", page_off);
+ error = -EFAULT;
+ goto Done;
+ }
+
+ if (rw == WRITE)
+ bio_set_pages_dirty(bio);
+ start_io();
+ submit_bio(rw | (1 << BIO_RW_SYNC), bio);
+ wait_io();
+ Done:
+ bio_put(bio);
+ return error;
+}
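The bi_sector arithmetic in submit() maps a page-sized swap slot to its starting 512-byte sector. A small sketch of just that conversion (function name assumed for illustration):

```c
/* Byte-to-sector conversion as used for bio->bi_sector in submit():
 * a page-sized swap slot at index page_off starts at sector
 * page_off * (page_size >> 9), since a sector is 512 bytes. */
unsigned long sketch_page_to_sector(unsigned long page_off,
				    unsigned long page_size)
{
	return page_off * (page_size >> 9);
}
```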
+
+static int
+read_page(pgoff_t page_off, void * page)
+{
+ return submit(READ,page_off,page);
+}
+
+static int
+write_page(pgoff_t page_off, void * page)
+{
+ return submit(WRITE,page_off,page);
+}
+
+
+extern dev_t __init name_to_dev_t(const char *line);
+
+
+static int __init check_sig(void)
+{
+ int error;
+
+ memset(&pmdisk_header,0,sizeof(pmdisk_header));
+ if ((error = read_page(0,&pmdisk_header)))
+ return error;
+ if (!memcmp(PMDISK_SIG,pmdisk_header.sig,10)) {
+ memcpy(pmdisk_header.sig,pmdisk_header.orig_sig,10);
+
+ /*
+ * Reset swap signature now.
+ */
+ error = write_page(0,&pmdisk_header);
+ } else {
+ printk(KERN_ERR "pmdisk: Invalid partition type.\n");
+ return -EINVAL;
+ }
+ if (!error)
+ pr_debug("pmdisk: Signature found, resuming\n");
+ return error;
+}
+
+
+/*
+ * Sanity-check whether this image makes sense with this kernel/swap context.
+ * This is not foolproof, but it is better than nothing.
+ */
+
+static const char * __init sanity_check(void)
+{
+ dump_pmdisk_info();
+ if(pmdisk_info.version_code != LINUX_VERSION_CODE)
+ return "kernel version";
+ if(pmdisk_info.num_physpages != num_physpages)
+ return "memory size";
+ if (strcmp(pmdisk_info.uts.sysname,system_utsname.sysname))
+ return "system type";
+ if (strcmp(pmdisk_info.uts.release,system_utsname.release))
+ return "kernel release";
+ if (strcmp(pmdisk_info.uts.version,system_utsname.version))
+ return "version";
+ if (strcmp(pmdisk_info.uts.machine,system_utsname.machine))
+ return "machine";
+ if(pmdisk_info.cpus != num_online_cpus())
+ return "number of cpus";
+ return NULL;
+}
+
+
+static int __init check_header(void)
+{
+ const char * reason = NULL;
+ int error;
+
+ init_header();
+
+ if ((error = read_page(swp_offset(pmdisk_header.pmdisk_info),
+ &pmdisk_info)))
+ return error;
+
+ /* Is this same machine? */
+ if ((reason = sanity_check())) {
+ printk(KERN_ERR "pmdisk: Resume mismatch: %s\n",reason);
+ return -EPERM;
+ }
+ pmdisk_pages = pmdisk_info.image_pages;
+ return error;
+}
+
+
+static int __init read_pagedir(void)
+{
+ unsigned long addr;
+ int i, n = pmdisk_info.pagedir_pages;
+ int error = 0;
+
+ pagedir_order = get_bitmask_order(n);
+
+ addr =__get_free_pages(GFP_ATOMIC, pagedir_order);
+ if (!addr)
+ return -ENOMEM;
+ pm_pagedir_nosave = (struct pbe *)addr;
+
+ pr_debug("pmdisk: Reading pagedir (%d Pages)\n",n);
+
+ for (i = 0; i < n && !error; i++, addr += PAGE_SIZE) {
+ unsigned long offset = swp_offset(pmdisk_info.pagedir[i]);
+ if (offset)
+ error = read_page(offset, (void *)addr);
+ else
+ error = -EFAULT;
+ }
+ if (error)
+ free_pages((unsigned long)pm_pagedir_nosave,pagedir_order);
+ return error;
+}
+
+
+/**
+ * read_image_data - Read image pages from swap.
+ *
+ * There is no need to check for overlaps here; check_pagedir()
+ * has already done that.
+ */
+
+static int __init read_image_data(void)
+{
+ struct pbe * p;
+ int error = 0;
+ int i;
+
+ printk("Reading image data (%d pages): ", pmdisk_pages);
+ for(i = 0, p = pm_pagedir_nosave; i < pmdisk_pages && !error; i++, p++) {
+ if (!(i%100))
+ printk(".");
+ error = read_page(swp_offset(p->swap_address),
+ (void *)p->address);
+ }
+ printk(" %d done.\n",i);
+ return error;
+}
+
+
+static int __init read_suspend_image(void)
+{
+ int error = 0;
+
+ if ((error = check_sig()))
+ return error;
+ if ((error = check_header()))
+ return error;
+ if ((error = read_pagedir()))
+ return error;
+ if ((error = relocate_pagedir()))
+ goto FreePagedir;
+ if ((error = check_pagedir()))
+ goto FreePagedir;
+ if ((error = read_image_data()))
+ goto FreePagedir;
+ Done:
+ return error;
+ FreePagedir:
+ free_pages((unsigned long)pm_pagedir_nosave,pagedir_order);
+ goto Done;
+}
+
+/**
+ * pmdisk_save - Snapshot memory
+ */
+
+int pmdisk_save(void)
+{
+ int error;
+
+#if defined (CONFIG_HIGHMEM) || defined (CONFIG_DISCONTIGMEM)
+ pr_debug("pmdisk: not supported with high- or discontig-mem.\n");
+ return -EPERM;
+#endif
+ if ((error = arch_prepare_suspend()))
+ return error;
+ local_irq_disable();
+ save_processor_state();
+ error = pmdisk_arch_suspend(0);
+ restore_processor_state();
+ local_irq_enable();
+ return error;
+}
+
+
+/**
+ * pmdisk_write - Write saved memory image to swap.
+ *
+ * pmdisk_arch_suspend(0) returns after system is resumed.
+ *
+ * pmdisk_arch_suspend() copies all "used" memory to "free" memory,
+ * then resumes all device drivers, and writes memory to disk
+ * using the normal kernel mechanisms.
+ */
+
+int pmdisk_write(void)
+{
+ return suspend_save_image();
+}
+
+
+/**
+ * pmdisk_read - Read saved image from swap.
+ */
+
+int __init pmdisk_read(void)
+{
+ int error;
+
+ if (!strlen(resume_file))
+ return -ENOENT;
+
+ resume_device = name_to_dev_t(resume_file);
+ pr_debug("pmdisk: Resume From Partition: %s\n", resume_file);
+
+ resume_bdev = open_by_devnum(resume_device, FMODE_READ);
+ if (!IS_ERR(resume_bdev)) {
+ set_blocksize(resume_bdev, PAGE_SIZE);
+ error = read_suspend_image();
+ blkdev_put(resume_bdev);
+ } else
+ error = PTR_ERR(resume_bdev);
+
+ if (!error)
+ pr_debug("pmdisk: Reading resume file was successful\n");
+ else
+ pr_debug("pmdisk: Error %d resuming\n", error);
+ return error;
+}
+
+
+/**
+ * pmdisk_restore - Replace running kernel with saved image.
+ */
+
+int __init pmdisk_restore(void)
+{
+ int error;
+ local_irq_disable();
+ save_processor_state();
+ error = pmdisk_arch_suspend(1);
+ restore_processor_state();
+ local_irq_enable();
+ return error;
+}
+
+
+/**
+ * pmdisk_free - Free memory allocated to hold snapshot.
+ */
+
+int pmdisk_free(void)
+{
+ pr_debug("Freeing previously allocated pagedir\n");
+ free_pagedir();
+ return 0;
+}
+
+static int __init pmdisk_setup(char *str)
+{
+ if (strlen(str)) {
+ if (!strcmp(str,"off"))
+ resume_file[0] = '\0';
+ else
+ strncpy(resume_file, str, 255);
+ } else
+ resume_file[0] = '\0';
+ return 1;
+}
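pmdisk_setup() treats an empty value or "off" as disabling resume, and otherwise copies the partition name into the fixed buffer. A userspace sketch of that parsing (buffer size and names are assumptions; note the explicit termination, which strncpy alone does not guarantee):

```c
#include <string.h>

/* Parse a pmdisk=-style value into resume_file, which holds cap bytes. */
void sketch_parse_resume(const char *str, char *resume_file, size_t cap)
{
	if (!*str || !strcmp(str, "off")) {
		resume_file[0] = '\0';	/* empty or "off" disables resume */
		return;
	}
	strncpy(resume_file, str, cap - 1);
	resume_file[cap - 1] = '\0';	/* strncpy may not terminate */
}
```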
+
+__setup("pmdisk=", pmdisk_setup);
+
#include <linux/smp.h>
#include <linux/security.h>
#include <linux/bootmem.h>
+#include <linux/vs_base.h>
#include <linux/syscalls.h>
#include <linux/vserver/cvirt.h>
rq->timestamp_last_tick = now;
-#if defined(CONFIG_VSERVER_HARDCPU) && defined(CONFIG_VSERVER_ACB_SCHED)
- vx_scheduler_tick();
-#endif
-
if (p == rq->idle) {
if (wake_priority_sleeper(rq))
goto out;
struct vx_info *vxi;
#ifdef CONFIG_VSERVER_HARDCPU
int maxidle = -HZ;
-# ifdef CONFIG_VSERVER_ACB_SCHED
- int min_guarantee_ticks = VX_INVALID_TICKS;
- int min_best_effort_ticks = VX_INVALID_TICKS;
-# endif
#endif
int cpu, idx;
}
#ifdef CONFIG_VSERVER_HARDCPU
-# ifdef CONFIG_VSERVER_ACB_SCHED
-drain_hold_queue:
-# endif
if (!list_empty(&rq->hold_queue)) {
struct list_head *l, *n;
int ret;
}
if ((ret < 0) && (maxidle < ret))
maxidle = ret;
-# ifdef CONFIG_VSERVER_ACB_SCHED
- if (ret < 0) {
- if (IS_BEST_EFFORT(vxi)) {
- if (min_best_effort_ticks < ret)
- min_best_effort_ticks = ret;
- } else {
- if (min_guarantee_ticks < ret)
- min_guarantee_ticks = ret;
- }
- }
-# endif
}
}
rq->idle_tokens = -maxidle;
int ret = vx_tokens_recalc(vxi);
if (unlikely(ret <= 0)) {
- if (ret) {
- if ((rq->idle_tokens > -ret))
- rq->idle_tokens = -ret;
-# ifdef CONFIG_VSERVER_ACB_SCHED
- if (IS_BEST_EFFORT(vxi)) {
- if (min_best_effort_ticks < ret)
- min_best_effort_ticks = ret;
- } else {
- if (min_guarantee_ticks < ret)
- min_guarantee_ticks = ret;
- }
-# endif
- }
+ if (ret && (rq->idle_tokens > -ret))
+ rq->idle_tokens = -ret;
vx_hold_task(vxi, next, rq);
goto pick_next;
}
}
next->activated = 0;
switch_tasks:
-#if defined(CONFIG_VSERVER_HARDCPU) && defined(CONFIG_VSERVER_ACB_SCHED)
- if (next == rq->idle && !list_empty(&rq->hold_queue)) {
- if (min_best_effort_ticks != VX_INVALID_TICKS) {
- vx_advance_best_effort_ticks(-min_best_effort_ticks);
- goto drain_hold_queue;
- }
- if (min_guarantee_ticks != VX_INVALID_TICKS) {
- vx_advance_guaranteed_ticks(-min_guarantee_ticks);
- goto drain_hold_queue;
- }
- }
-#endif
if (next == rq->idle)
schedstat_inc(rq, sched_goidle);
prefetch(next);
EXPORT_SYMBOL(block_all_signals);
EXPORT_SYMBOL(unblock_all_signals);
-
/*
* System call entry points.
*/
out_unlock:
read_unlock(&tasklist_lock);
- return retval;
+ key_fsgid_changed(current);
+ return 0;
}
long vs_reboot(unsigned int, void *);
}
+
/*
* Unprivileged users may change the real gid to the effective gid
* or vice versa. (BSD-style)
current->fsgid = new_egid;
current->egid = new_egid;
current->gid = new_rgid;
+
key_fsgid_changed(current);
return 0;
}
return -EPERM;
key_fsgid_changed(current);
+
return 0;
}
#include <linux/jiffies.h>
#include <linux/posix-timers.h>
#include <linux/cpu.h>
+#include <linux/vs_cvirt.h>
+#include <linux/vserver/sched.h>
#include <linux/syscalls.h>
#include <linux/delay.h>
#include <linux/diskdump.h>
val.procs = nr_threads;
} while (read_seqretry(&xtime_lock, seq));
+/* if (vx_flags(VXF_VIRT_CPU, 0))
+ vx_vsi_cpu(val);
+*/
si_meminfo(&val);
si_swapinfo(&val);
This might improve interactivity and latency, but
will also marginally increase scheduling overhead.
-config VSERVER_ACB_SCHED
- bool "Guaranteed/fair share scheduler"
- depends on VSERVER_HARDCPU
- default n
- help
- Andy Bavier's experimental scheduler
-
choice
prompt "Persistent Inode Context Tagging"
default INOXID_UGID24
return 0;
}
-
/*
* argv [0] = vshelper_path;
* argv [1] = action: "netup", "netdown"
#include <asm/errno.h>
#include <asm/uaccess.h>
-#ifdef CONFIG_VSERVER_ACB_SCHED
-
-#define TICK_SCALE 1000
-#define TICKS_PER_TOKEN(vxi) \
- ((vxi->sched.interval * TICK_SCALE) / vxi->sched.fill_rate)
-#define CLASS(vxi) \
- (IS_BEST_EFFORT(vxi) ? SCH_BEST_EFFORT : SCH_GUARANTEE)
-#define GLOBAL_TICKS(vxi) \
- (IS_BEST_EFFORT(vxi) ? vx_best_effort_ticks : vx_guaranteed_ticks)
-
-uint64_t vx_guaranteed_ticks = 0;
-uint64_t vx_best_effort_ticks = 0;
-
-void vx_tokens_set(struct vx_info *vxi, int tokens) {
- int class = CLASS(vxi);
-
- vxi->sched.ticks[class] = GLOBAL_TICKS(vxi);
- vxi->sched.ticks[class] -= tokens * TICKS_PER_TOKEN(vxi);
-}
-
-void vx_scheduler_tick(void) {
- vx_guaranteed_ticks += TICK_SCALE;
- vx_best_effort_ticks += TICK_SCALE;
-}
-
-void vx_advance_best_effort_ticks(int ticks) {
- vx_best_effort_ticks += TICK_SCALE * ticks;
-}
-
-void vx_advance_guaranteed_ticks(int ticks) {
- vx_guaranteed_ticks += TICK_SCALE * ticks;
-}
-
-int vx_tokens_avail(struct vx_info *vxi)
-{
- uint64_t diff;
- int tokens;
- long rem;
- int class = CLASS(vxi);
-
- if (vxi->sched.state[class] == SCH_UNINITIALIZED) {
- /* Set the "real" token count */
- tokens = atomic_read(&vxi->sched.tokens);
- vx_tokens_set(vxi, tokens);
- vxi->sched.state[class] = SCH_INITIALIZED;
- goto out;
- }
-
- if (vxi->sched.last_ticks[class] == GLOBAL_TICKS(vxi)) {
- tokens = atomic_read(&vxi->sched.tokens);
- goto out;
- }
-
- /* Use of fixed-point arithmetic in these calculations leads to
- * some limitations. These should be made explicit.
- */
- diff = GLOBAL_TICKS(vxi) - vxi->sched.ticks[class];
- tokens = div_long_long_rem(diff, TICKS_PER_TOKEN(vxi), &rem);
-
- if (tokens > vxi->sched.tokens_max) {
- vx_tokens_set(vxi, vxi->sched.tokens_max);
- tokens = vxi->sched.tokens_max;
- }
-
- atomic_set(&vxi->sched.tokens, tokens);
-
-out:
- vxi->sched.last_ticks[class] = GLOBAL_TICKS(vxi);
- return tokens;
-}
-
-void vx_consume_token(struct vx_info *vxi)
-{
- int class = CLASS(vxi);
-
- vxi->sched.ticks[class] += TICKS_PER_TOKEN(vxi);
-}
-
-/*
- * recalculate the context's scheduling tokens
- *
- * ret > 0 : number of tokens available
- * ret = 0 : context is paused
- * ret < 0 : number of jiffies until new tokens arrive
- *
- */
-int vx_tokens_recalc(struct vx_info *vxi)
-{
- long delta, tokens;
-
- if (vx_info_flags(vxi, VXF_SCHED_PAUSE, 0))
- /* we are paused */
- return 0;
-
- tokens = vx_tokens_avail(vxi);
- if (tokens <= 0)
- vxi->vx_state |= VXS_ONHOLD;
- if (tokens < vxi->sched.tokens_min) {
- delta = tokens - vxi->sched.tokens_min;
- /* enough tokens will be available in */
- return (delta * vxi->sched.interval) / vxi->sched.fill_rate;
- }
-
- /* we have some tokens left */
- if (vx_info_state(vxi, VXS_ONHOLD) &&
- (tokens >= vxi->sched.tokens_min))
- vxi->vx_state &= ~VXS_ONHOLD;
- if (vx_info_state(vxi, VXS_ONHOLD))
- tokens -= vxi->sched.tokens_min;
-
- return tokens;
-}
-
-#else
/*
* recalculate the context's scheduling tokens
return tokens;
}
-#endif /* CONFIG_VSERVER_ACB_SCHED */
-
/*
* effective_prio - return the priority that is based on the static
* priority but is modified by bonuses/penalties.
if (vxi->sched.tokens_min > vxi->sched.tokens_max)
vxi->sched.tokens_min = vxi->sched.tokens_max;
-#ifdef CONFIG_VSERVER_ACB_SCHED
- vx_tokens_set(vxi, atomic_read(&vxi->sched.tokens));
-#endif
-
spin_unlock(&vxi->sched.tokens_lock);
put_vx_info(vxi);
return 0;
if (vxi->sched.priority_bias < MIN_PRIO_BIAS)
vxi->sched.priority_bias = MIN_PRIO_BIAS;
-#ifdef CONFIG_VSERVER_ACB_SCHED
- vx_tokens_set(vxi, atomic_read(&vxi->sched.tokens));
-#endif
-
spin_unlock(&vxi->sched.tokens_lock);
put_vx_info(vxi);
return 0;
sched->jiffies = jiffies;
sched->tokens_lock = SPIN_LOCK_UNLOCKED;
-#ifdef CONFIG_VSERVER_ACB_SCHED
- /* We can't set the "real" token count here because we don't have
- * access to the vx_info struct. Do it later... */
- for (i = 0; i < SCH_NUM_CLASSES; i++) {
- sched->state[i] = SCH_UNINITIALIZED;
- }
-#endif
-
atomic_set(&sched->tokens, HZ >> 2);
sched->cpus_allowed = CPU_MASK_ALL;
sched->priority_bias = 0;
--- /dev/null
+/* inffixed.h -- table for decoding fixed codes
+ * Generated automatically by the maketree.c program
+ */
+
+/* WARNING: this file should *not* be used by applications. It is
+ part of the implementation of the compression library and is
+ subject to change. Applications should only use zlib.h.
+ */
+
+static uInt fixed_bl = 9;
+static uInt fixed_bd = 5;
+static inflate_huft fixed_tl[] = {
+ {{{96,7}},256}, {{{0,8}},80}, {{{0,8}},16}, {{{84,8}},115},
+ {{{82,7}},31}, {{{0,8}},112}, {{{0,8}},48}, {{{0,9}},192},
+ {{{80,7}},10}, {{{0,8}},96}, {{{0,8}},32}, {{{0,9}},160},
+ {{{0,8}},0}, {{{0,8}},128}, {{{0,8}},64}, {{{0,9}},224},
+ {{{80,7}},6}, {{{0,8}},88}, {{{0,8}},24}, {{{0,9}},144},
+ {{{83,7}},59}, {{{0,8}},120}, {{{0,8}},56}, {{{0,9}},208},
+ {{{81,7}},17}, {{{0,8}},104}, {{{0,8}},40}, {{{0,9}},176},
+ {{{0,8}},8}, {{{0,8}},136}, {{{0,8}},72}, {{{0,9}},240},
+ {{{80,7}},4}, {{{0,8}},84}, {{{0,8}},20}, {{{85,8}},227},
+ {{{83,7}},43}, {{{0,8}},116}, {{{0,8}},52}, {{{0,9}},200},
+ {{{81,7}},13}, {{{0,8}},100}, {{{0,8}},36}, {{{0,9}},168},
+ {{{0,8}},4}, {{{0,8}},132}, {{{0,8}},68}, {{{0,9}},232},
+ {{{80,7}},8}, {{{0,8}},92}, {{{0,8}},28}, {{{0,9}},152},
+ {{{84,7}},83}, {{{0,8}},124}, {{{0,8}},60}, {{{0,9}},216},
+ {{{82,7}},23}, {{{0,8}},108}, {{{0,8}},44}, {{{0,9}},184},
+ {{{0,8}},12}, {{{0,8}},140}, {{{0,8}},76}, {{{0,9}},248},
+ {{{80,7}},3}, {{{0,8}},82}, {{{0,8}},18}, {{{85,8}},163},
+ {{{83,7}},35}, {{{0,8}},114}, {{{0,8}},50}, {{{0,9}},196},
+ {{{81,7}},11}, {{{0,8}},98}, {{{0,8}},34}, {{{0,9}},164},
+ {{{0,8}},2}, {{{0,8}},130}, {{{0,8}},66}, {{{0,9}},228},
+ {{{80,7}},7}, {{{0,8}},90}, {{{0,8}},26}, {{{0,9}},148},
+ {{{84,7}},67}, {{{0,8}},122}, {{{0,8}},58}, {{{0,9}},212},
+ {{{82,7}},19}, {{{0,8}},106}, {{{0,8}},42}, {{{0,9}},180},
+ {{{0,8}},10}, {{{0,8}},138}, {{{0,8}},74}, {{{0,9}},244},
+ {{{80,7}},5}, {{{0,8}},86}, {{{0,8}},22}, {{{192,8}},0},
+ {{{83,7}},51}, {{{0,8}},118}, {{{0,8}},54}, {{{0,9}},204},
+ {{{81,7}},15}, {{{0,8}},102}, {{{0,8}},38}, {{{0,9}},172},
+ {{{0,8}},6}, {{{0,8}},134}, {{{0,8}},70}, {{{0,9}},236},
+ {{{80,7}},9}, {{{0,8}},94}, {{{0,8}},30}, {{{0,9}},156},
+ {{{84,7}},99}, {{{0,8}},126}, {{{0,8}},62}, {{{0,9}},220},
+ {{{82,7}},27}, {{{0,8}},110}, {{{0,8}},46}, {{{0,9}},188},
+ {{{0,8}},14}, {{{0,8}},142}, {{{0,8}},78}, {{{0,9}},252},
+ {{{96,7}},256}, {{{0,8}},81}, {{{0,8}},17}, {{{85,8}},131},
+ {{{82,7}},31}, {{{0,8}},113}, {{{0,8}},49}, {{{0,9}},194},
+ {{{80,7}},10}, {{{0,8}},97}, {{{0,8}},33}, {{{0,9}},162},
+ {{{0,8}},1}, {{{0,8}},129}, {{{0,8}},65}, {{{0,9}},226},
+ {{{80,7}},6}, {{{0,8}},89}, {{{0,8}},25}, {{{0,9}},146},
+ {{{83,7}},59}, {{{0,8}},121}, {{{0,8}},57}, {{{0,9}},210},
+ {{{81,7}},17}, {{{0,8}},105}, {{{0,8}},41}, {{{0,9}},178},
+ {{{0,8}},9}, {{{0,8}},137}, {{{0,8}},73}, {{{0,9}},242},
+ {{{80,7}},4}, {{{0,8}},85}, {{{0,8}},21}, {{{80,8}},258},
+ {{{83,7}},43}, {{{0,8}},117}, {{{0,8}},53}, {{{0,9}},202},
+ {{{81,7}},13}, {{{0,8}},101}, {{{0,8}},37}, {{{0,9}},170},
+ {{{0,8}},5}, {{{0,8}},133}, {{{0,8}},69}, {{{0,9}},234},
+ {{{80,7}},8}, {{{0,8}},93}, {{{0,8}},29}, {{{0,9}},154},
+ {{{84,7}},83}, {{{0,8}},125}, {{{0,8}},61}, {{{0,9}},218},
+ {{{82,7}},23}, {{{0,8}},109}, {{{0,8}},45}, {{{0,9}},186},
+ {{{0,8}},13}, {{{0,8}},141}, {{{0,8}},77}, {{{0,9}},250},
+ {{{80,7}},3}, {{{0,8}},83}, {{{0,8}},19}, {{{85,8}},195},
+ {{{83,7}},35}, {{{0,8}},115}, {{{0,8}},51}, {{{0,9}},198},
+ {{{81,7}},11}, {{{0,8}},99}, {{{0,8}},35}, {{{0,9}},166},
+ {{{0,8}},3}, {{{0,8}},131}, {{{0,8}},67}, {{{0,9}},230},
+ {{{80,7}},7}, {{{0,8}},91}, {{{0,8}},27}, {{{0,9}},150},
+ {{{84,7}},67}, {{{0,8}},123}, {{{0,8}},59}, {{{0,9}},214},
+ {{{82,7}},19}, {{{0,8}},107}, {{{0,8}},43}, {{{0,9}},182},
+ {{{0,8}},11}, {{{0,8}},139}, {{{0,8}},75}, {{{0,9}},246},
+ {{{80,7}},5}, {{{0,8}},87}, {{{0,8}},23}, {{{192,8}},0},
+ {{{83,7}},51}, {{{0,8}},119}, {{{0,8}},55}, {{{0,9}},206},
+ {{{81,7}},15}, {{{0,8}},103}, {{{0,8}},39}, {{{0,9}},174},
+ {{{0,8}},7}, {{{0,8}},135}, {{{0,8}},71}, {{{0,9}},238},
+ {{{80,7}},9}, {{{0,8}},95}, {{{0,8}},31}, {{{0,9}},158},
+ {{{84,7}},99}, {{{0,8}},127}, {{{0,8}},63}, {{{0,9}},222},
+ {{{82,7}},27}, {{{0,8}},111}, {{{0,8}},47}, {{{0,9}},190},
+ {{{0,8}},15}, {{{0,8}},143}, {{{0,8}},79}, {{{0,9}},254},
+ {{{96,7}},256}, {{{0,8}},80}, {{{0,8}},16}, {{{84,8}},115},
+ {{{82,7}},31}, {{{0,8}},112}, {{{0,8}},48}, {{{0,9}},193},
+ {{{80,7}},10}, {{{0,8}},96}, {{{0,8}},32}, {{{0,9}},161},
+ {{{0,8}},0}, {{{0,8}},128}, {{{0,8}},64}, {{{0,9}},225},
+ {{{80,7}},6}, {{{0,8}},88}, {{{0,8}},24}, {{{0,9}},145},
+ {{{83,7}},59}, {{{0,8}},120}, {{{0,8}},56}, {{{0,9}},209},
+ {{{81,7}},17}, {{{0,8}},104}, {{{0,8}},40}, {{{0,9}},177},
+ {{{0,8}},8}, {{{0,8}},136}, {{{0,8}},72}, {{{0,9}},241},
+ {{{80,7}},4}, {{{0,8}},84}, {{{0,8}},20}, {{{85,8}},227},
+ {{{83,7}},43}, {{{0,8}},116}, {{{0,8}},52}, {{{0,9}},201},
+ {{{81,7}},13}, {{{0,8}},100}, {{{0,8}},36}, {{{0,9}},169},
+ {{{0,8}},4}, {{{0,8}},132}, {{{0,8}},68}, {{{0,9}},233},
+ {{{80,7}},8}, {{{0,8}},92}, {{{0,8}},28}, {{{0,9}},153},
+ {{{84,7}},83}, {{{0,8}},124}, {{{0,8}},60}, {{{0,9}},217},
+ {{{82,7}},23}, {{{0,8}},108}, {{{0,8}},44}, {{{0,9}},185},
+ {{{0,8}},12}, {{{0,8}},140}, {{{0,8}},76}, {{{0,9}},249},
+ {{{80,7}},3}, {{{0,8}},82}, {{{0,8}},18}, {{{85,8}},163},
+ {{{83,7}},35}, {{{0,8}},114}, {{{0,8}},50}, {{{0,9}},197},
+ {{{81,7}},11}, {{{0,8}},98}, {{{0,8}},34}, {{{0,9}},165},
+ {{{0,8}},2}, {{{0,8}},130}, {{{0,8}},66}, {{{0,9}},229},
+ {{{80,7}},7}, {{{0,8}},90}, {{{0,8}},26}, {{{0,9}},149},
+ {{{84,7}},67}, {{{0,8}},122}, {{{0,8}},58}, {{{0,9}},213},
+ {{{82,7}},19}, {{{0,8}},106}, {{{0,8}},42}, {{{0,9}},181},
+ {{{0,8}},10}, {{{0,8}},138}, {{{0,8}},74}, {{{0,9}},245},
+ {{{80,7}},5}, {{{0,8}},86}, {{{0,8}},22}, {{{192,8}},0},
+ {{{83,7}},51}, {{{0,8}},118}, {{{0,8}},54}, {{{0,9}},205},
+ {{{81,7}},15}, {{{0,8}},102}, {{{0,8}},38}, {{{0,9}},173},
+ {{{0,8}},6}, {{{0,8}},134}, {{{0,8}},70}, {{{0,9}},237},
+ {{{80,7}},9}, {{{0,8}},94}, {{{0,8}},30}, {{{0,9}},157},
+ {{{84,7}},99}, {{{0,8}},126}, {{{0,8}},62}, {{{0,9}},221},
+ {{{82,7}},27}, {{{0,8}},110}, {{{0,8}},46}, {{{0,9}},189},
+ {{{0,8}},14}, {{{0,8}},142}, {{{0,8}},78}, {{{0,9}},253},
+ {{{96,7}},256}, {{{0,8}},81}, {{{0,8}},17}, {{{85,8}},131},
+ {{{82,7}},31}, {{{0,8}},113}, {{{0,8}},49}, {{{0,9}},195},
+ {{{80,7}},10}, {{{0,8}},97}, {{{0,8}},33}, {{{0,9}},163},
+ {{{0,8}},1}, {{{0,8}},129}, {{{0,8}},65}, {{{0,9}},227},
+ {{{80,7}},6}, {{{0,8}},89}, {{{0,8}},25}, {{{0,9}},147},
+ {{{83,7}},59}, {{{0,8}},121}, {{{0,8}},57}, {{{0,9}},211},
+ {{{81,7}},17}, {{{0,8}},105}, {{{0,8}},41}, {{{0,9}},179},
+ {{{0,8}},9}, {{{0,8}},137}, {{{0,8}},73}, {{{0,9}},243},
+ {{{80,7}},4}, {{{0,8}},85}, {{{0,8}},21}, {{{80,8}},258},
+ {{{83,7}},43}, {{{0,8}},117}, {{{0,8}},53}, {{{0,9}},203},
+ {{{81,7}},13}, {{{0,8}},101}, {{{0,8}},37}, {{{0,9}},171},
+ {{{0,8}},5}, {{{0,8}},133}, {{{0,8}},69}, {{{0,9}},235},
+ {{{80,7}},8}, {{{0,8}},93}, {{{0,8}},29}, {{{0,9}},155},
+ {{{84,7}},83}, {{{0,8}},125}, {{{0,8}},61}, {{{0,9}},219},
+ {{{82,7}},23}, {{{0,8}},109}, {{{0,8}},45}, {{{0,9}},187},
+ {{{0,8}},13}, {{{0,8}},141}, {{{0,8}},77}, {{{0,9}},251},
+ {{{80,7}},3}, {{{0,8}},83}, {{{0,8}},19}, {{{85,8}},195},
+ {{{83,7}},35}, {{{0,8}},115}, {{{0,8}},51}, {{{0,9}},199},
+ {{{81,7}},11}, {{{0,8}},99}, {{{0,8}},35}, {{{0,9}},167},
+ {{{0,8}},3}, {{{0,8}},131}, {{{0,8}},67}, {{{0,9}},231},
+ {{{80,7}},7}, {{{0,8}},91}, {{{0,8}},27}, {{{0,9}},151},
+ {{{84,7}},67}, {{{0,8}},123}, {{{0,8}},59}, {{{0,9}},215},
+ {{{82,7}},19}, {{{0,8}},107}, {{{0,8}},43}, {{{0,9}},183},
+ {{{0,8}},11}, {{{0,8}},139}, {{{0,8}},75}, {{{0,9}},247},
+ {{{80,7}},5}, {{{0,8}},87}, {{{0,8}},23}, {{{192,8}},0},
+ {{{83,7}},51}, {{{0,8}},119}, {{{0,8}},55}, {{{0,9}},207},
+ {{{81,7}},15}, {{{0,8}},103}, {{{0,8}},39}, {{{0,9}},175},
+ {{{0,8}},7}, {{{0,8}},135}, {{{0,8}},71}, {{{0,9}},239},
+ {{{80,7}},9}, {{{0,8}},95}, {{{0,8}},31}, {{{0,9}},159},
+ {{{84,7}},99}, {{{0,8}},127}, {{{0,8}},63}, {{{0,9}},223},
+ {{{82,7}},27}, {{{0,8}},111}, {{{0,8}},47}, {{{0,9}},191},
+ {{{0,8}},15}, {{{0,8}},143}, {{{0,8}},79}, {{{0,9}},255}
+ };
+static inflate_huft fixed_td[] = {
+ {{{80,5}},1}, {{{87,5}},257}, {{{83,5}},17}, {{{91,5}},4097},
+ {{{81,5}},5}, {{{89,5}},1025}, {{{85,5}},65}, {{{93,5}},16385},
+ {{{80,5}},3}, {{{88,5}},513}, {{{84,5}},33}, {{{92,5}},8193},
+ {{{82,5}},9}, {{{90,5}},2049}, {{{86,5}},129}, {{{192,5}},24577},
+ {{{80,5}},2}, {{{87,5}},385}, {{{83,5}},25}, {{{91,5}},6145},
+ {{{81,5}},7}, {{{89,5}},1537}, {{{85,5}},97}, {{{93,5}},24577},
+ {{{80,5}},4}, {{{88,5}},769}, {{{84,5}},49}, {{{92,5}},12289},
+ {{{82,5}},13}, {{{90,5}},3073}, {{{86,5}},193}, {{{192,5}},24577}
+ };
*/
unsigned long max_low_pfn;
unsigned long min_low_pfn;
+EXPORT_SYMBOL(min_low_pfn);
unsigned long max_pfn;
/*
* If we have booted due to a crash, max_pfn will be a very low value. We need
#include <linux/swapops.h>
#include <linux/rmap.h>
#include <linux/module.h>
+#include <linux/vs_memory.h>
#include <linux/syscalls.h>
#include <linux/vs_memory.h>
pgd = pgd_offset(mm, addr);
spin_lock(&mm->page_table_lock);
+
+ if (!vx_rsspages_avail(mm, 1))
+ goto err_unlock;
if (!vx_rsspages_avail(mm, 1))
goto err_unlock;
return VM_FAULT_SIGBUS;
if (new_page == NOPAGE_OOM)
return VM_FAULT_OOM;
+ if (!vx_rsspages_avail(mm, 1))
+ return VM_FAULT_OOM;
/*
* Should we do an early C-O-W break?
if (!may_expand_vm(mm, len >> PAGE_SHIFT))
return -ENOMEM;
+ /* check context space, maybe only Private writable mapping? */
+ if (!vx_vmpages_avail(mm, len >> PAGE_SHIFT))
+ return -ENOMEM;
+
if (accountable && (!(flags & MAP_NORESERVE) ||
sysctl_overcommit_memory == OVERCOMMIT_NEVER)) {
if (vm_flags & VM_SHARED) {
return -ENOMEM;
}
+ if (!vx_vmpages_avail(vma->vm_mm, grow))
+ return -ENOMEM;
+
/*
* Overcommit.. This must be the final test, as it will
* update security statistics.
if (mm->map_count > sysctl_max_map_count)
return -ENOMEM;
- if (security_vm_enough_memory(len >> PAGE_SHIFT))
+ if (security_vm_enough_memory(len >> PAGE_SHIFT) ||
+ !vx_vmpages_avail(mm, len >> PAGE_SHIFT))
return -ENOMEM;
flags = VM_DATA_DEFAULT_FLAGS | VM_ACCOUNT | mm->def_flags;
}
asmlinkage long
-sys_mprotect(unsigned long start, size_t len, unsigned long prot)
+do_mprotect(struct mm_struct *mm, unsigned long start, size_t len,
+ unsigned long prot)
{
unsigned long vm_flags, nstart, end, tmp, reqprot;
struct vm_area_struct *vma, *prev;
vm_flags = calc_vm_prot_bits(prot);
- down_write(¤t->mm->mmap_sem);
+ down_write(&mm->mmap_sem);
- vma = find_vma_prev(current->mm, start, &prev);
+ vma = find_vma_prev(mm, start, &prev);
error = -ENOMEM;
if (!vma)
goto out;
}
}
out:
- up_write(¤t->mm->mmap_sem);
+ up_write(&mm->mmap_sem);
return error;
}
+
+asmlinkage long sys_mprotect(unsigned long start, size_t len, unsigned long prot)
+{
+ return(do_mprotect(current->mm, start, len, prot));
+}
vma->vm_next->vm_flags |= VM_ACCOUNT;
}
+ vx_vmpages_add(mm, new_len >> PAGE_SHIFT);
__vm_stat_account(mm, vma->vm_flags, vma->vm_file, new_len>>PAGE_SHIFT);
if (vm_flags & VM_LOCKED) {
vx_vmlocked_add(mm, new_len >> PAGE_SHIFT);
goto out;
}
+ /* check context space, maybe only Private writable mapping? */
+ if (!vx_vmpages_avail(current->mm, (new_len - old_len) >> PAGE_SHIFT))
+ goto out;
+
if (vma->vm_flags & VM_ACCOUNT) {
charged = (new_len - old_len) >> PAGE_SHIFT;
if (security_vm_enough_memory(charged))
EXPORT_SYMBOL(totalram_pages);
EXPORT_SYMBOL(nr_swap_pages);
+#ifdef CONFIG_CRASH_DUMP
+/* This symbol has to be exported so that modules can use the 'for_each_pgdat' macro. */
+EXPORT_SYMBOL(pgdat_list);
+#endif
+
/*
* Used by page_zone() to look up the address of the struct zone whose
* id is encoded in the upper bits of page->flags
tainted |= TAINT_BAD_PAGE;
}
-#ifndef CONFIG_HUGETLB_PAGE
+#if !defined(CONFIG_HUGETLB_PAGE) && !defined(CONFIG_CRASH_DUMP)
#define prep_compound_page(page, order) do { } while (0)
#define destroy_compound_page(page, order) do { } while (0)
#else
--- /dev/null
+/*
+ * Copyright (C) 2002 Jeff Dike (jdike@karaya.com)
+ * Licensed under the GPL
+ */
+
+#include "linux/mm.h"
+#include "linux/init.h"
+#include "linux/proc_fs.h"
+#include "linux/proc_mm.h"
+#include "linux/file.h"
+#include "asm/uaccess.h"
+#include "asm/mmu_context.h"
+
+static struct file_operations proc_mm_fops;
+
+struct mm_struct *proc_mm_get_mm(int fd)
+{
+ struct mm_struct *ret = ERR_PTR(-EBADF);
+ struct file *file;
+
+ file = fget(fd);
+ if (!file)
+ goto out;
+
+ ret = ERR_PTR(-EINVAL);
+ if(file->f_op != &proc_mm_fops)
+ goto out_fput;
+
+ ret = file->private_data;
+ out_fput:
+ fput(file);
+ out:
+ return(ret);
+}
+
+extern long do_mmap2(struct mm_struct *mm, unsigned long addr,
+ unsigned long len, unsigned long prot,
+ unsigned long flags, unsigned long fd,
+ unsigned long pgoff);
+
+static ssize_t write_proc_mm(struct file *file, const char *buffer,
+ size_t count, loff_t *ppos)
+{
+ struct mm_struct *mm = file->private_data;
+ struct proc_mm_op req;
+ int n, ret;
+
+ if(count > sizeof(req))
+ return(-EINVAL);
+
+ n = copy_from_user(&req, buffer, count);
+ if(n != 0)
+ return(-EFAULT);
+
+ ret = count;
+ switch(req.op){
+ case MM_MMAP: {
+ struct mm_mmap *map = &req.u.mmap;
+
+ ret = do_mmap2(mm, map->addr, map->len, map->prot,
+ map->flags, map->fd, map->offset >> PAGE_SHIFT);
+ if((ret & ~PAGE_MASK) == 0)
+ ret = count;
+
+ break;
+ }
+ case MM_MUNMAP: {
+ struct mm_munmap *unmap = &req.u.munmap;
+
+ down_write(&mm->mmap_sem);
+ ret = do_munmap(mm, unmap->addr, unmap->len);
+ up_write(&mm->mmap_sem);
+
+ if(ret == 0)
+ ret = count;
+ break;
+ }
+ case MM_MPROTECT: {
+ struct mm_mprotect *protect = &req.u.mprotect;
+
+ ret = do_mprotect(mm, protect->addr, protect->len,
+ protect->prot);
+ if(ret == 0)
+ ret = count;
+ break;
+ }
+
+ case MM_COPY_SEGMENTS: {
+ struct mm_struct *from = proc_mm_get_mm(req.u.copy_segments);
+
+ if(IS_ERR(from)){
+ ret = PTR_ERR(from);
+ break;
+ }
+
+ mm_copy_segments(from, mm);
+ break;
+ }
+ default:
+ ret = -EINVAL;
+ break;
+ }
+
+ return(ret);
+}
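write_proc_mm() follows a common character-device idiom: copy a fixed-size request struct from the caller's buffer, then switch on its op code. A sketch of the idiom (the struct layout and op codes here are invented, not the real proc_mm ABI; memcpy stands in for copy_from_user()):

```c
#include <string.h>

enum { SK_MMAP = 1, SK_MUNMAP = 2 };

struct sk_req {
	int op;
	unsigned long addr, len;
};

/* Reject oversized writes, copy what was given, dispatch on the op. */
int sketch_dispatch(const void *buf, size_t count)
{
	struct sk_req req;

	if (count > sizeof(req))
		return -1;		/* -EINVAL in the kernel code */
	memset(&req, 0, sizeof(req));
	memcpy(&req, buf, count);	/* stands in for copy_from_user() */

	switch (req.op) {
	case SK_MMAP:
	case SK_MUNMAP:
		return 0;
	default:
		return -1;		/* unknown op */
	}
}
```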
+
+static int open_proc_mm(struct inode *inode, struct file *file)
+{
+ struct mm_struct *mm = mm_alloc();
+ int ret;
+
+ ret = -ENOMEM;
+ if(mm == NULL)
+ goto out_mem;
+
+ ret = init_new_context(current, mm);
+ if(ret)
+ goto out_free;
+
+ spin_lock(&mmlist_lock);
+ list_add(&mm->mmlist, ¤t->mm->mmlist);
+ mmlist_nr++;
+ spin_unlock(&mmlist_lock);
+
+ file->private_data = mm;
+
+ return(0);
+
+ out_free:
+ mmput(mm);
+ out_mem:
+ return(ret);
+}
+
+static int release_proc_mm(struct inode *inode, struct file *file)
+{
+ struct mm_struct *mm = file->private_data;
+
+ mmput(mm);
+ return(0);
+}
+
+static struct file_operations proc_mm_fops = {
+ .open = open_proc_mm,
+ .release = release_proc_mm,
+ .write = write_proc_mm,
+};
+
+static int make_proc_mm(void)
+{
+ struct proc_dir_entry *ent;
+
+ ent = create_proc_entry("mm", 0222, &proc_root);
+ if(ent == NULL){
+ printk("make_proc_mm: Failed to register /proc/mm\n");
+ return(0);
+ }
+ ent->proc_fops = &proc_mm_fops;
+
+ return(0);
+}
+
+__initcall(make_proc_mm);
+
+/*
+ * Overrides for Emacs so that we follow Linus's tabbing style.
+ * Emacs will notice this stuff at the end of the file and automatically
+ * adjust the settings for this buffer only. This must remain at the end
+ * of the file.
+ * ---------------------------------------------------------------------------
+ * Local variables:
+ * c-file-style: "linux"
+ * End:
+ */
*/
int rotate_reclaimable_page(struct page *page)
{
- struct zone *zone;
+ struct zone *zone = page_zone(page);
unsigned long flags;
if (PageLocked(page))
if (!PageLRU(page))
return 1;
- zone = page_zone(page);
spin_lock_irqsave(&zone->lru_lock, flags);
if (PageLRU(page) && !PageActive(page)) {
list_del(&page->lru);
--- /dev/null
+/*
+ * linux/mm/usercopy.c
+ *
+ * (C) Copyright 2003 Ingo Molnar
+ *
+ * Generic implementation of all the user-VM access functions, without
+ * relying on being able to access the VM directly.
+ */
+
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/pagemap.h>
+#include <linux/smp_lock.h>
+#include <linux/ptrace.h>
+#include <linux/interrupt.h>
+
+#include <asm/pgtable.h>
+#include <asm/uaccess.h>
+#include <asm/atomic_kmap.h>
+
+/*
+ * Get kernel address of the user page and pin it.
+ */
+static inline struct page *pin_page(unsigned long addr, int write,
+ unsigned long *pfn)
+{
+ struct mm_struct *mm = current->mm ? : &init_mm;
+ struct page *page = NULL;
+ int ret;
+
+ /*
+ * Do a quick atomic lookup first - this is the fastpath.
+ */
+retry:
+ page = follow_page_pfn(mm, addr, write, pfn);
+ if (likely(page != NULL)) {
+ if (!PageReserved(page))
+ get_page(page);
+ return page;
+ }
+ if (*pfn)
+ return NULL;
+ /*
+ * No luck - bad address or need to fault in the page:
+ */
+
+ /* Release the lock so get_user_pages can sleep */
+ spin_unlock(&mm->page_table_lock);
+
+ /*
+ * In the context of filemap_copy_from_user(), we are not allowed
+ * to sleep. We must fail this usercopy attempt and allow
+ * filemap_copy_from_user() to recover: drop its atomic kmap and use
+ * a sleeping kmap instead.
+ */
+ if (in_atomic()) {
+ spin_lock(&mm->page_table_lock);
+ return NULL;
+ }
+
+ down_read(&mm->mmap_sem);
+ ret = get_user_pages(current, mm, addr, 1, write, 0, NULL, NULL);
+ up_read(&mm->mmap_sem);
+ spin_lock(&mm->page_table_lock);
+
+ if (ret <= 0)
+ return NULL;
+
+ /*
+ * Go try the follow_page again.
+ */
+ goto retry;
+}
+
+static inline void unpin_page(struct page *page)
+{
+ put_page(page);
+}
+
+/*
+ * Access another process' address space.
+ * Source/target buffer must be kernel space,
+ * Do not walk the page table directly, use get_user_pages
+ */
+static int rw_vm(unsigned long addr, void *buf, int len, int write)
+{
+ struct mm_struct *mm = current->mm ? : &init_mm;
+
+ if (!len)
+ return 0;
+
+ spin_lock(&mm->page_table_lock);
+
+	/* ignore errors, just check how much was successfully transferred */
+ while (len) {
+ struct page *page = NULL;
+ unsigned long pfn = 0;
+ int bytes, offset;
+ void *maddr;
+
+ page = pin_page(addr, write, &pfn);
+ if (!page && !pfn)
+ break;
+
+ bytes = len;
+ offset = addr & (PAGE_SIZE-1);
+ if (bytes > PAGE_SIZE-offset)
+ bytes = PAGE_SIZE-offset;
+
+ if (page)
+ maddr = kmap_atomic(page, KM_USER_COPY);
+ else
+ maddr = kmap_atomic_nocache_pfn(pfn, KM_USER_COPY);
+
+#define HANDLE_TYPE(type) \
+ case sizeof(type): *(type *)(maddr+offset) = *(type *)(buf); break;
+
+ if (write) {
+ switch (bytes) {
+ HANDLE_TYPE(char);
+ HANDLE_TYPE(int);
+ HANDLE_TYPE(long long);
+ default:
+ memcpy(maddr + offset, buf, bytes);
+ }
+ } else {
+#undef HANDLE_TYPE
+#define HANDLE_TYPE(type) \
+ case sizeof(type): *(type *)(buf) = *(type *)(maddr+offset); break;
+ switch (bytes) {
+ HANDLE_TYPE(char);
+ HANDLE_TYPE(int);
+ HANDLE_TYPE(long long);
+ default:
+ memcpy(buf, maddr + offset, bytes);
+ }
+#undef HANDLE_TYPE
+ }
+ kunmap_atomic(maddr, KM_USER_COPY);
+ if (page)
+ unpin_page(page);
+ len -= bytes;
+ buf += bytes;
+ addr += bytes;
+ }
+ spin_unlock(&mm->page_table_lock);
+
+ return len;
+}
+
+static int str_vm(unsigned long addr, void *buf0, int len, int copy)
+{
+ struct mm_struct *mm = current->mm ? : &init_mm;
+ struct page *page;
+ void *buf = buf0;
+
+ if (!len)
+ return len;
+
+ spin_lock(&mm->page_table_lock);
+
+	/* ignore errors, just check how much was successfully transferred */
+ while (len) {
+ int bytes, offset, left, copied;
+ unsigned long pfn = 0;
+ char *maddr;
+
+ page = pin_page(addr, copy == 2, &pfn);
+ if (!page && !pfn) {
+ spin_unlock(&mm->page_table_lock);
+ return -EFAULT;
+ }
+ bytes = len;
+ offset = addr & (PAGE_SIZE-1);
+ if (bytes > PAGE_SIZE-offset)
+ bytes = PAGE_SIZE-offset;
+
+ if (page)
+ maddr = kmap_atomic(page, KM_USER_COPY);
+ else
+ maddr = kmap_atomic_nocache_pfn(pfn, KM_USER_COPY);
+ if (copy == 2) {
+ memset(maddr + offset, 0, bytes);
+ copied = bytes;
+ left = 0;
+ } else if (copy == 1) {
+ left = strncpy_count(buf, maddr + offset, bytes);
+ copied = bytes - left;
+ } else {
+ copied = strnlen(maddr + offset, bytes);
+ left = bytes - copied;
+ }
+ BUG_ON(bytes < 0 || copied < 0);
+ kunmap_atomic(maddr, KM_USER_COPY);
+ if (page)
+ unpin_page(page);
+ len -= copied;
+ buf += copied;
+ addr += copied;
+ if (left)
+ break;
+ }
+ spin_unlock(&mm->page_table_lock);
+
+ return len;
+}
+
+/*
+ * Copies memory from userspace (ptr) into kernelspace (val).
+ *
+ * returns # of bytes not copied.
+ */
+int get_user_size(unsigned int size, void *val, const void *ptr)
+{
+ int ret;
+
+ if (unlikely(segment_eq(get_fs(), KERNEL_DS)))
+ ret = __direct_copy_from_user(val, ptr, size);
+ else
+ ret = rw_vm((unsigned long)ptr, val, size, 0);
+ if (ret)
+ /*
+ * Zero the rest:
+ */
+ memset(val + size - ret, 0, ret);
+ return ret;
+}
+
+/*
+ * Copies memory from kernelspace (val) into userspace (ptr).
+ *
+ * returns # of bytes not copied.
+ */
+int put_user_size(unsigned int size, const void *val, void *ptr)
+{
+ if (unlikely(segment_eq(get_fs(), KERNEL_DS)))
+ return __direct_copy_to_user(ptr, val, size);
+ else
+ return rw_vm((unsigned long)ptr, (void *)val, size, 1);
+}
+
+int copy_str_fromuser_size(unsigned int size, void *val, const void *ptr)
+{
+ int copied, left;
+
+ if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
+ left = strncpy_count(val, ptr, size);
+ copied = size - left;
+ BUG_ON(copied < 0);
+
+ return copied;
+ }
+ left = str_vm((unsigned long)ptr, val, size, 1);
+ if (left < 0)
+ return left;
+ copied = size - left;
+ BUG_ON(copied < 0);
+
+ return copied;
+}
+
+int strlen_fromuser_size(unsigned int size, const void *ptr)
+{
+ int copied, left;
+
+ if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
+ copied = strnlen(ptr, size) + 1;
+ BUG_ON(copied < 0);
+
+ return copied;
+ }
+ left = str_vm((unsigned long)ptr, NULL, size, 0);
+ if (left < 0)
+ return 0;
+ copied = size - left + 1;
+ BUG_ON(copied < 0);
+
+ return copied;
+}
+
+int zero_user_size(unsigned int size, void *ptr)
+{
+ int left;
+
+ if (unlikely(segment_eq(get_fs(), KERNEL_DS))) {
+ memset(ptr, 0, size);
+ return 0;
+ }
+ left = str_vm((unsigned long)ptr, NULL, size, 2);
+ if (left < 0)
+ return size;
+ return left;
+}
+
+EXPORT_SYMBOL(get_user_size);
+EXPORT_SYMBOL(put_user_size);
+EXPORT_SYMBOL(zero_user_size);
+EXPORT_SYMBOL(copy_str_fromuser_size);
+EXPORT_SYMBOL(strlen_fromuser_size);
#include <linux/swapops.h>
+
/* possible outcome of pageout() */
typedef enum {
/* failed to write page out, page is locked */
LIST_HEAD(page_list);
struct pagevec pvec;
int max_scan = sc->nr_to_scan;
+ struct list_head *inactive_list = &zone->inactive_list;
+ struct list_head *active_list = &zone->active_list;
pagevec_init(&pvec, 1);
if (TestSetPageLRU(page))
BUG();
list_del(&page->lru);
- if (PageActive(page))
- add_page_to_active_list(zone, page);
- else
- add_page_to_inactive_list(zone, page);
+ if (PageActive(page)) {
+ zone->nr_active++;
+ list_add(&page->lru, active_list);
+ } else {
+ zone->nr_inactive++;
+ list_add(&page->lru, inactive_list);
+ }
if (!pagevec_add(&pvec, page)) {
spin_unlock_irq(&zone->lru_lock);
__pagevec_release(&pvec);
long mapped_ratio;
long distress;
long swap_tendency;
+ struct list_head *active_list = &zone->active_list;
+ struct list_head *inactive_list = &zone->inactive_list;
lru_add_drain();
spin_lock_irq(&zone->lru_lock);
BUG();
if (!TestClearPageActive(page))
BUG();
- list_move(&page->lru, &zone->inactive_list);
+ list_move(&page->lru, inactive_list);
pgmoved++;
if (!pagevec_add(&pvec, page)) {
zone->nr_inactive += pgmoved;
if (TestSetPageLRU(page))
BUG();
BUG_ON(!PageActive(page));
- list_move(&page->lru, &zone->active_list);
+ list_move(&page->lru, active_list);
pgmoved++;
if (!pagevec_add(&pvec, page)) {
zone->nr_active += pgmoved;
shrink_slab(sc.nr_scanned, GFP_KERNEL, lru_pages);
sc.nr_reclaimed += reclaim_state->reclaimed_slab;
total_reclaimed += sc.nr_reclaimed;
- total_scanned += sc.nr_scanned;
+ total_scanned += sc.nr_scanned;
if (zone->all_unreclaimable)
continue;
if (zone->pages_scanned >= (zone->nr_active +
#endif /* CONFIG_NET_RADIO */
#include <linux/vs_network.h>
#include <asm/current.h>
/* This define, if set, will randomly drop a packet when congestion
* is more than moderate. It helps fairness in the multi-interface
EXPORT_SYMBOL(dev_set_allmulti);
EXPORT_SYMBOL(dev_set_promiscuity);
EXPORT_SYMBOL(dev_change_flags);
+EXPORT_SYMBOL(dev_change_name);
EXPORT_SYMBOL(dev_set_mtu);
EXPORT_SYMBOL(dev_set_mac_address);
EXPORT_SYMBOL(free_netdev);
for (opti = 0; opti < (ip->ihl - sizeof(struct iphdr) / 4); opti++)
printk(" O=0x%8.8X", *opt++);
- printk(" MARK=%lu (0x%lx)",
+ printk(" MARK=%lu (0x%lu)",
(long unsigned int)skb->nfmark,
(long unsigned int)skb->nfmark);
printk("\n");
subsys_initcall(proto_init);
#endif /* PROC_FS */
-
EXPORT_SYMBOL(sk_alloc);
EXPORT_SYMBOL(sk_free);
EXPORT_SYMBOL(sk_send_sigurg);
if (inet->opt)
kfree(inet->opt);
+
+#warning MEF: figure out whether the following vserver net code is required by PlanetLab
+#if 0
+ vx_sock_dec(sk);
+ clr_vx_info(&sk->sk_vx_info);
+ sk->sk_xid = -1;
+ clr_nx_info(&sk->sk_nx_info);
+ sk->sk_nid = -1;
+#endif
+
dst_release(sk->sk_dst_cache);
#ifdef INET_REFCNT_DEBUG
atomic_dec(&inet_sock_nr);
sk->sk_protocol = protocol;
sk->sk_backlog_rcv = sk->sk_prot->backlog_rcv;
+#warning MEF: figure out whether the following vserver net code is required by PlanetLab
+#if 0
+ set_vx_info(&sk->sk_vx_info, current->vx_info);
+ sk->sk_xid = vx_current_xid();
+ vx_sock_inc(sk);
+ set_nx_info(&sk->sk_nx_info, current->nx_info);
+ sk->sk_nid = nx_current_nid();
+#endif
+
inet->uc_ttl = -1;
inet->mc_loop = 1;
inet->mc_ttl = 1;
!(current->flags & PF_EXITING))
timeout = sk->sk_lingertime;
sock->sk = NULL;
+
+#warning MEF: figure out whether the following vserver net code is required by PlanetLab
+#if 0
+ vx_sock_dec(sk);
+ clr_vx_info(&sk->sk_vx_info);
+ //sk->sk_xid = -1;
+ clr_nx_info(&sk->sk_nx_info);
+ //sk->sk_nid = -1;
+#endif
sk->sk_prot->close(sk, timeout);
}
return 0;
--- /dev/null
+/*
+ * INET An implementation of the TCP/IP protocol suite for the LINUX
+ * operating system. INET is implemented using the BSD Socket
+ * interface as the means of communication with the user level.
+ *
+ * Dumb Network Address Translation.
+ *
+ * Version: $Id: ip_nat_dumb.c,v 1.11 2000/12/13 18:31:48 davem Exp $
+ *
+ * Authors: Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Fixes:
+ * Rani Assaf : A zero checksum is a special case
+ * only in UDP
+ * Rani Assaf : Added ICMP messages rewriting
+ * Rani Assaf : Repaired wrong changes, made by ANK.
+ *
+ *
+ * NOTE: It is just a working model of real NAT.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/skbuff.h>
+#include <linux/ip.h>
+#include <linux/icmp.h>
+#include <linux/netdevice.h>
+#include <net/sock.h>
+#include <net/ip.h>
+#include <net/icmp.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <net/checksum.h>
+#include <linux/route.h>
+#include <net/route.h>
+#include <net/ip_fib.h>
+
+
+int
+ip_do_nat(struct sk_buff *skb)
+{
+ struct rtable *rt = (struct rtable*)skb->dst;
+ struct iphdr *iph = skb->nh.iph;
+ u32 odaddr = iph->daddr;
+ u32 osaddr = iph->saddr;
+ u16 check;
+
+ IPCB(skb)->flags |= IPSKB_TRANSLATED;
+
+ /* Rewrite IP header */
+ iph->daddr = rt->rt_dst_map;
+ iph->saddr = rt->rt_src_map;
+ iph->check = 0;
+ iph->check = ip_fast_csum((unsigned char *)iph, iph->ihl);
+
+ /* If it is the first fragment, rewrite protocol headers */
+
+ if (!(iph->frag_off & htons(IP_OFFSET))) {
+ u16 *cksum;
+
+ switch(iph->protocol) {
+ case IPPROTO_TCP:
+ cksum = (u16*)&((struct tcphdr*)(((char*)iph) + (iph->ihl<<2)))->check;
+ if ((u8*)(cksum+1) > skb->tail)
+ goto truncated;
+ check = *cksum;
+ if (skb->ip_summed != CHECKSUM_HW)
+ check = ~check;
+ check = csum_tcpudp_magic(iph->saddr, iph->daddr, 0, 0, check);
+ check = csum_tcpudp_magic(~osaddr, ~odaddr, 0, 0, ~check);
+ if (skb->ip_summed == CHECKSUM_HW)
+ check = ~check;
+ *cksum = check;
+ break;
+ case IPPROTO_UDP:
+ cksum = (u16*)&((struct udphdr*)(((char*)iph) + (iph->ihl<<2)))->check;
+ if ((u8*)(cksum+1) > skb->tail)
+ goto truncated;
+ if ((check = *cksum) != 0) {
+ check = csum_tcpudp_magic(iph->saddr, iph->daddr, 0, 0, ~check);
+ check = csum_tcpudp_magic(~osaddr, ~odaddr, 0, 0, ~check);
+ *cksum = check ? : 0xFFFF;
+ }
+ break;
+ case IPPROTO_ICMP:
+ {
+ struct icmphdr *icmph = (struct icmphdr*)((char*)iph + (iph->ihl<<2));
+ struct iphdr *ciph;
+ u32 idaddr, isaddr;
+ int updated;
+
+ if ((icmph->type != ICMP_DEST_UNREACH) &&
+ (icmph->type != ICMP_TIME_EXCEEDED) &&
+ (icmph->type != ICMP_PARAMETERPROB))
+ break;
+
+ ciph = (struct iphdr *) (icmph + 1);
+
+ if ((u8*)(ciph+1) > skb->tail)
+ goto truncated;
+
+ isaddr = ciph->saddr;
+ idaddr = ciph->daddr;
+ updated = 0;
+
+ if (rt->rt_flags&RTCF_DNAT && ciph->saddr == odaddr) {
+ ciph->saddr = iph->daddr;
+ updated = 1;
+ }
+ if (rt->rt_flags&RTCF_SNAT) {
+ if (ciph->daddr != osaddr) {
+ struct fib_result res;
+ unsigned flags = 0;
+ struct flowi fl = {
+ .iif = skb->dev->ifindex,
+ .nl_u =
+ { .ip4_u =
+ { .daddr = ciph->saddr,
+ .saddr = ciph->daddr,
+#ifdef CONFIG_IP_ROUTE_TOS
+ .tos = RT_TOS(ciph->tos)
+#endif
+ } },
+ .proto = ciph->protocol };
+
+ /* Use fib_lookup() until we get our own
+ * hash table of NATed hosts -- Rani
+ */
+ if (fib_lookup(&fl, &res) == 0) {
+ if (res.r) {
+ ciph->daddr = fib_rules_policy(ciph->daddr, &res, &flags);
+ if (ciph->daddr != idaddr)
+ updated = 1;
+ }
+ fib_res_put(&res);
+ }
+ } else {
+ ciph->daddr = iph->saddr;
+ updated = 1;
+ }
+ }
+ if (updated) {
+ cksum = &icmph->checksum;
+ /* Using tcpudp primitive. Why not? */
+ check = csum_tcpudp_magic(ciph->saddr, ciph->daddr, 0, 0, ~(*cksum));
+ *cksum = csum_tcpudp_magic(~isaddr, ~idaddr, 0, 0, ~check);
+ }
+ break;
+ }
+ default:
+ break;
+ }
+ }
+ return NET_RX_SUCCESS;
+
+truncated:
+ /* should be return NET_RX_BAD; */
+ return -EINVAL;
+}
#ifdef CONFIG_NETFILTER_DEBUG
nf_debug_ip_loopback_xmit(newskb);
#endif
+ nf_reset(newskb);
netif_rx(newskb);
return 0;
}
nf_debug_ip_finish_output2(skb);
#endif /*CONFIG_NETFILTER_DEBUG*/
+#ifdef CONFIG_BRIDGE_NETFILTER
+ /* bridge-netfilter defers calling some IP hooks to the bridge layer
+ * and still needs the conntrack reference.
+ */
+ if (skb->nf_bridge == NULL)
+#endif
+ nf_reset(skb);
+
if (hh) {
int hh_alen;
Allows altering the ARP packet payload: source and destination
hardware and network addresses.
+# Backwards compatibility modules: only if you don't build in the others.
+config IP_NF_COMPAT_IPCHAINS
+ tristate "ipchains (2.2-style) support"
+ depends on IP_NF_CONNTRACK!=y && IP_NF_IPTABLES!=y
+ help
+ This option places ipchains (with masquerading and redirection
+ support) back into the kernel, using the new netfilter
+ infrastructure. It is not recommended for new installations (see
+ `Packet filtering'). With this enabled, you should be able to use
+ the ipchains tool exactly as in 2.2 kernels.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
+config IP_NF_COMPAT_IPFWADM
+ tristate "ipfwadm (2.0-style) support"
+ depends on IP_NF_CONNTRACK!=y && IP_NF_IPTABLES!=y && IP_NF_COMPAT_IPCHAINS!=y
+ help
+ This option places ipfwadm (with masquerading and redirection
+ support) back into the kernel, using the new netfilter
+ infrastructure. It is not recommended for new installations (see
+ `Packet filtering'). With this enabled, you should be able to use
+ the ipfwadm tool exactly as in 2.0 kernels.
+
+ To compile it as a module, choose M here. If unsure, say N.
+
+config IP_NF_TARGET_NOTRACK
+ tristate 'NOTRACK target support'
+ depends on IP_NF_RAW
+ depends on IP_NF_CONNTRACK
+ help
+	  The NOTRACK target allows rules to specify which packets should
+	  *not* enter the conntrack/NAT subsystem, with all the consequences
+	  (no ICMP error tracking, no protocol helpers for the selected
+	  packets).
+
+ If you want to compile it as a module, say M here and read
+ <file:Documentation/modules.txt>. If unsure, say `N'.
+
+config IP_NF_RAW
+ tristate 'raw table support (required for NOTRACK/TRACE)'
+ depends on IP_NF_IPTABLES
+ help
+ This option adds a `raw' table to iptables. This table is the very
+ first in the netfilter framework and hooks in at the PREROUTING
+ and OUTPUT chains.
+
+ If you want to compile it as a module, say M here and read
+ <file:Documentation/modules.txt>. If unsure, say `N'.
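As a usage sketch (not part of this patch), rules like the following would exercise the raw table and the NOTRACK target once both options are enabled; exact syntax depends on the iptables userspace version:

```shell
# The raw table's PREROUTING/OUTPUT chains run before connection
# tracking; packets matched by NOTRACK skip conntrack/NAT entirely.
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK
```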
+
+config IP_NF_MATCH_ADDRTYPE
+ tristate 'address type match support'
+ depends on IP_NF_IPTABLES
+ help
+ This option allows you to match what routing thinks of an address,
+ eg. UNICAST, LOCAL, BROADCAST, ...
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
+config IP_NF_MATCH_REALM
+ tristate 'realm match support'
+ depends on IP_NF_IPTABLES
+ select NET_CLS_ROUTE
+ help
+ This option adds a `realm' match, which allows you to use the realm
+	  key from the routing subsystem inside iptables.
+
+ This match pretty much resembles the CONFIG_NET_CLS_ROUTE4 option
+ in tc world.
+
+ If you want to compile it as a module, say M here and read
+ Documentation/modules.txt. If unsure, say `N'.
+
+config IP_NF_CT_ACCT
+ bool "Connection tracking flow accounting"
+ depends on IP_NF_CONNTRACK
+
config IP_NF_CT_PROTO_GRE
tristate ' GRE protocol support'
depends on IP_NF_CONNTRACK
help
- This module adds generic support for connection tracking and NAT of
- the GRE protocol (RFC1701, RFC2784). Please note that this will
- only work with GRE connections using the key field of the GRE
- header.
+ This module adds generic support for connection tracking and NAT of the
+ GRE protocol (RFC1701, RFC2784). Please note that this will only work
+ with GRE connections using the key field of the GRE header.
You will need GRE support to enable PPTP support.
tristate 'PPTP protocol support'
depends on IP_NF_CT_PROTO_GRE
help
- This module adds support for PPTP (Point to Point Tunnelling
- Protocol, RFC2637) conncection tracking and NAT.
+ This module adds support for PPTP (Point to Point Tunnelling Protocol,
+	  RFC2637) connection tracking and NAT.
- If you are running PPTP sessions over a stateful firewall or NAT
- box, you may want to enable this feature.
+ If you are running PPTP sessions over a stateful firewall or NAT box,
+ you may want to enable this feature.
Please note that not all PPTP modes of operation are supported yet.
- For more info, read top of the file
- net/ipv4/netfilter/ip_conntrack_pptp.c
+ For more info, read top of the file net/ipv4/netfilter/ip_conntrack_pptp.c
If you want to compile it as a module, say M here and read
Documentation/modules.txt. If unsure, say `N'.
# connection tracking
obj-$(CONFIG_IP_NF_CONNTRACK) += ip_conntrack.o
-# SCTP protocol connection tracking
+# connection tracking protocol helpers
obj-$(CONFIG_IP_NF_CT_PROTO_GRE) += ip_conntrack_proto_gre.o
# NAT protocol helpers
obj-$(CONFIG_IP_NF_NAT_PROTO_GRE) += ip_nat_proto_gre.o
+# SCTP protocol connection tracking
obj-$(CONFIG_IP_NF_CT_PROTO_SCTP) += ip_conntrack_proto_sctp.o
# connection tracking helpers
-obj-$(CONFIG_IP_NF_PPTP) += ip_conntrack_pptp.o
obj-$(CONFIG_IP_NF_AMANDA) += ip_conntrack_amanda.o
obj-$(CONFIG_IP_NF_TFTP) += ip_conntrack_tftp.o
obj-$(CONFIG_IP_NF_FTP) += ip_conntrack_ftp.o
obj-$(CONFIG_IP_NF_IRC) += ip_conntrack_irc.o
+obj-$(CONFIG_IP_NF_PPTP) += ip_conntrack_pptp.o
# NAT helpers
-obj-$(CONFIG_IP_NF_NAT_PPTP) += ip_nat_pptp.o
obj-$(CONFIG_IP_NF_NAT_AMANDA) += ip_nat_amanda.o
obj-$(CONFIG_IP_NF_NAT_TFTP) += ip_nat_tftp.o
obj-$(CONFIG_IP_NF_NAT_FTP) += ip_nat_ftp.o
obj-$(CONFIG_IP_NF_NAT_IRC) += ip_nat_irc.o
+obj-$(CONFIG_IP_NF_NAT_PPTP) += ip_nat_pptp.o
# generic IP tables
obj-$(CONFIG_IP_NF_IPTABLES) += ip_tables.o
tuple->src.ip = iph->saddr;
tuple->dst.ip = iph->daddr;
tuple->dst.protonum = iph->protocol;
+ tuple->src.u.all = tuple->dst.u.all = 0;
tuple->dst.dir = IP_CT_DIR_ORIGINAL;
return protocol->pkt_to_tuple(skb, dataoff, tuple);
inverse->dst.protonum = orig->dst.protonum;
inverse->dst.dir = !orig->dst.dir;
+ inverse->src.u.all = inverse->dst.u.all = 0;
+
return protocol->invert_tuple(inverse, orig);
}
/* If an expectation for this connection is found, it gets delete from
* global list then returned. */
-struct ip_conntrack_expect *
-__ip_conntrack_exp_find(const struct ip_conntrack_tuple *tuple)
+static struct ip_conntrack_expect *
+find_expectation(const struct ip_conntrack_tuple *tuple)
{
struct ip_conntrack_expect *i;
&& ip_ct_tuple_equal(tuple, &i->tuple);
}
-struct ip_conntrack_tuple_hash *
+static struct ip_conntrack_tuple_hash *
__ip_conntrack_find(const struct ip_conntrack_tuple *tuple,
const struct ip_conntrack *ignored_conntrack)
{
conntrack->timeout.function = death_by_timeout;
WRITE_LOCK(&ip_conntrack_lock);
- exp = __ip_conntrack_exp_find(tuple);
+ exp = find_expectation(tuple);
if (exp) {
DEBUGP("conntrack: expectation arrives ct=%p exp=%p\n",
/*
- * ip_conntrack_pptp.c - Version 3.0
+ * ip_conntrack_pptp.c - Version 2.0
*
* Connection tracking support for PPTP (Point to Point Tunneling Protocol).
* PPTP is a a protocol for creating virtual private networks.
* GRE is defined in RFC 1701 and RFC 1702. Documentation of
* PPTP can be found in RFC 2637
*
- * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2000-2003 by Harald Welte <laforge@gnumonks.org>
*
* Development of this code funded by Astaro AG (http://www.astaro.com/)
*
* - We blindly assume that control connections are always
* established in PNS->PAC direction. This is a violation
* of RFFC2673
- * - We can only support one single call within each session
*
- * TODO:
+ * TODO: - finish support for multiple calls within one session
+ * (needs expect reservations in newnat)
* - testing of incoming PPTP calls
*
* Changes:
* 2004-10-22 - Version 2.0
* - merge Mandrake's 2.6.x port with recent 2.6.x API changes
* - fix lots of linear skb assumptions from Mandrake's port
- * 2005-06-10 - Version 2.1
- * - use ip_conntrack_expect_free() instead of kfree() on the
- * expect's (which are from the slab for quite some time)
- * 2005-06-10 - Version 3.0
- * - port helper to post-2.6.11 API changes,
- * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/)
*
*/
#include <net/tcp.h>
#include <linux/netfilter_ipv4/lockhelp.h>
-#include <linux/netfilter_ipv4/ip_conntrack.h>
-#include <linux/netfilter_ipv4/ip_conntrack_core.h>
#include <linux/netfilter_ipv4/ip_conntrack_helper.h>
#include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h>
#include <linux/netfilter_ipv4/ip_conntrack_pptp.h>
-#define IP_CT_PPTP_VERSION "3.0"
+#define IP_CT_PPTP_VERSION "2.0"
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
DECLARE_LOCK(ip_pptp_lock);
-int
-(*ip_nat_pptp_hook_outbound)(struct sk_buff **pskb,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq);
-
-int
-(*ip_nat_pptp_hook_inbound)(struct sk_buff **pskb,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq);
-
-int
-(*ip_nat_pptp_hook_exp_gre)(struct ip_conntrack_expect *expect_orig,
- struct ip_conntrack_expect *expect_reply);
-
-void
-(*ip_nat_pptp_hook_expectfn)(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp);
-
#if 0
#include "ip_conntrack_pptp_priv.h"
#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args)
#define PPTP_GRE_TIMEOUT (10 MINS)
#define PPTP_GRE_STREAM_TIMEOUT (5 DAYS)
-static void pptp_expectfn(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp)
+static int pptp_expectfn(struct ip_conntrack *ct)
{
- DEBUGP("increasing timeouts\n");
+ struct ip_conntrack *master;
+ struct ip_conntrack_expect *exp;
+ DEBUGP("increasing timeouts\n");
/* increase timeout of GRE data channel conntrack entry */
ct->proto.gre.timeout = PPTP_GRE_TIMEOUT;
ct->proto.gre.stream_timeout = PPTP_GRE_STREAM_TIMEOUT;
- /* Can you see how rusty this code is, compared with the pre-2.6.11
- * one? That's what happened to my shiny newnat of 2002 ;( -HW */
-
- if (!ip_nat_pptp_hook_expectfn) {
- struct ip_conntrack_tuple inv_t;
- struct ip_conntrack_expect *exp_other;
-
- /* obviously this tuple inversion only works until you do NAT */
- invert_tuplepr(&inv_t, &exp->tuple);
- DEBUGP("trying to unexpect other dir: ");
- DUMP_TUPLE(&inv_t);
-
- exp_other = __ip_conntrack_exp_find(&inv_t);
- if (exp_other) {
- /* delete other expectation. */
- DEBUGP("found\n");
- ip_conntrack_unexpect_related(exp_other);
- } else {
- DEBUGP("not found\n");
- }
- } else {
- /* we need more than simple inversion */
- ip_nat_pptp_hook_expectfn(ct, exp);
+ master = master_ct(ct);
+ if (!master) {
+ DEBUGP(" no master!!!\n");
+ return 0;
}
-}
-
-static int timeout_ct_or_exp(const struct ip_conntrack_tuple *t)
-{
- struct ip_conntrack_tuple_hash *h;
- struct ip_conntrack_expect *exp;
- DEBUGP("trying to timeout ct or exp for tuple ");
- DUMP_TUPLE(t);
+ exp = ct->master;
+ if (!exp) {
+ DEBUGP("no expectation!!\n");
+ return 0;
+ }
- h = __ip_conntrack_find(t, NULL);
- if (h) {
- struct ip_conntrack *sibling = tuplehash_to_ctrack(h);
- DEBUGP("setting timeout of conntrack %p to 0\n", sibling);
- sibling->proto.gre.timeout = 0;
- sibling->proto.gre.stream_timeout = 0;
- /* refresh_acct will not modify counters if skb == NULL */
- ip_ct_refresh_acct(sibling, 0, NULL, 0);
- return 1;
+ DEBUGP("completing tuples with ct info\n");
+ /* we can do this, since we're unconfirmed */
+ if (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u.gre.key ==
+ htonl(master->help.ct_pptp_info.pac_call_id)) {
+ /* assume PNS->PAC */
+ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.gre.key =
+ htonl(master->help.ct_pptp_info.pns_call_id);
+ ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.gre.key =
+ htonl(master->help.ct_pptp_info.pns_call_id);
} else {
- exp = __ip_conntrack_exp_find(t);
- if (exp) {
- DEBUGP("unexpect_related of expect %p\n", exp);
- ip_conntrack_unexpect_related(exp);
- return 1;
+ /* assume PAC->PNS */
+ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.gre.key =
+ htonl(master->help.ct_pptp_info.pac_call_id);
+ ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.gre.key =
+ htonl(master->help.ct_pptp_info.pac_call_id);
+ }
+
+ /* delete other expectation */
+ if (exp->expected_list.next != &exp->expected_list) {
+ struct ip_conntrack_expect *other_exp;
+ struct list_head *cur_item, *next;
+
+ for (cur_item = master->sibling_list.next;
+ cur_item != &master->sibling_list; cur_item = next) {
+ next = cur_item->next;
+ other_exp = list_entry(cur_item,
+ struct ip_conntrack_expect,
+ expected_list);
+ /* remove only if occurred at same sequence number */
+ if (other_exp != exp && other_exp->seq == exp->seq) {
+ DEBUGP("unexpecting other direction\n");
+ ip_ct_gre_keymap_destroy(other_exp);
+ ip_conntrack_unexpect_related(other_exp);
+ }
}
}
return 0;
}
-
/* timeout GRE data connections */
static int pptp_timeout_related(struct ip_conntrack *ct)
{
- struct ip_conntrack_tuple t;
- int ret;
-
- /* Since ct->sibling_list has literally rusted away in 2.6.11,
- * we now need another way to find out about our sibling
- * contrack and expects... -HW */
-
- /* try original (pns->pac) tuple */
- memcpy(&t, &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple, sizeof(t));
- t.dst.protonum = IPPROTO_GRE;
- t.src.u.gre.key = htons(ct->help.ct_pptp_info.pns_call_id);
- t.dst.u.gre.key = htons(ct->help.ct_pptp_info.pac_call_id);
+ struct list_head *cur_item, *next;
+ struct ip_conntrack_expect *exp;
- ret = timeout_ct_or_exp(&t);
+ /* FIXME: do we have to lock something ? */
+ for (cur_item = ct->sibling_list.next;
+ cur_item != &ct->sibling_list; cur_item = next) {
+ next = cur_item->next;
+ exp = list_entry(cur_item, struct ip_conntrack_expect,
+ expected_list);
- /* try reply (pac->pns) tuple */
- memcpy(&t, &ct->tuplehash[IP_CT_DIR_REPLY].tuple, sizeof(t));
- t.dst.protonum = IPPROTO_GRE;
- t.src.u.gre.key = htons(ct->help.ct_pptp_info.pac_call_id);
- t.dst.u.gre.key = htons(ct->help.ct_pptp_info.pns_call_id);
+ ip_ct_gre_keymap_destroy(exp);
+ if (!exp->sibling) {
+ ip_conntrack_unexpect_related(exp);
+ continue;
+ }
- ret += timeout_ct_or_exp(&t);
+ DEBUGP("setting timeout of conntrack %p to 0\n",
+ exp->sibling);
+ exp->sibling->proto.gre.timeout = 0;
+ exp->sibling->proto.gre.stream_timeout = 0;
+ /* refresh_acct will not modify counters if skb == NULL */
+ ip_ct_refresh_acct(exp->sibling, 0, NULL, 0);
+ }
- return ret;
+ return 0;
}
/* expect GRE connections (PNS->PAC and PAC->PNS direction) */
struct ip_conntrack_tuple exp_tuples[] = {
/* tuple in original direction, PNS->PAC */
{ .src = { .ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip,
- .u = { .gre = { .key = peer_callid } }
+ .u = { .gre = { .key = htonl(ntohs(peer_callid)) } }
},
.dst = { .ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip,
- .u = { .gre = { .key = callid } },
+ .u = { .gre = { .key = htonl(ntohs(callid)) } },
.protonum = IPPROTO_GRE
},
},
/* tuple in reply direction, PAC->PNS */
{ .src = { .ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip,
- .u = { .gre = { .key = callid } }
+ .u = { .gre = { .key = htonl(ntohs(callid)) } }
},
.dst = { .ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip,
- .u = { .gre = { .key = peer_callid } },
+ .u = { .gre = { .key = htonl(ntohs(peer_callid)) } },
.protonum = IPPROTO_GRE
},
}
- };
-
- struct ip_conntrack_expect *exp_orig, *exp_reply;
+ }, *exp_tuple;
- exp_orig = ip_conntrack_expect_alloc();
- if (exp_orig == NULL)
- return 1;
+ for (exp_tuple = exp_tuples; exp_tuple < &exp_tuples[2]; exp_tuple++) {
+ struct ip_conntrack_expect *exp;
- exp_reply = ip_conntrack_expect_alloc();
- if (exp_reply == NULL) {
- ip_conntrack_expect_free(exp_orig);
- return 1;
- }
-
- memcpy(&exp_orig->tuple, &exp_tuples[0], sizeof(exp_orig->tuple));
-
- exp_orig->mask.src.ip = 0xffffffff;
- exp_orig->mask.src.u.all = 0;
- exp_orig->mask.dst.u.all = 0;
- exp_orig->mask.dst.u.gre.key = 0xffff;
- exp_orig->mask.dst.ip = 0xffffffff;
- exp_orig->mask.dst.protonum = 0xff;
-
- exp_orig->master = master;
- exp_orig->expectfn = pptp_expectfn;
-
- exp_orig->dir = IP_CT_DIR_ORIGINAL;
-
- /* both expectations are identical apart from tuple */
- memcpy(exp_reply, exp_orig, sizeof(*exp_reply));
- memcpy(&exp_reply->tuple, &exp_tuples[1], sizeof(exp_reply->tuple));
-
- exp_reply->dir = !exp_orig->dir;
-
- if (ip_nat_pptp_hook_exp_gre)
- return ip_nat_pptp_hook_exp_gre(exp_orig, exp_reply);
- else {
-
- DEBUGP("calling expect_related PNS->PAC");
- DUMP_TUPLE(&exp_orig->tuple);
-
- if (ip_conntrack_expect_related(exp_orig) != 0) {
- ip_conntrack_expect_free(exp_orig);
- ip_conntrack_expect_free(exp_reply);
- DEBUGP("cannot expect_related()\n");
+ exp = ip_conntrack_expect_alloc();
+ if (exp == NULL)
return 1;
- }
- DEBUGP("calling expect_related PAC->PNS");
- DUMP_TUPLE(&exp_reply->tuple);
+ memcpy(&exp->tuple, exp_tuple, sizeof(exp->tuple));
- if (ip_conntrack_expect_related(exp_reply) != 0) {
- ip_conntrack_unexpect_related(exp_orig);
- ip_conntrack_expect_free(exp_reply);
- DEBUGP("cannot expect_related()\n");
- return 1;
- }
+ exp->mask.src.ip = 0xffffffff;
+ exp->mask.src.u.all = 0;
+ exp->mask.dst.u.all = 0;
+ exp->mask.dst.u.gre.key = 0xffffffff;
+ exp->mask.dst.ip = 0xffffffff;
+ exp->mask.dst.protonum = 0xffff;
+
+ exp->seq = seq;
+ exp->expectfn = pptp_expectfn;
+
+ exp->help.exp_pptp_info.pac_call_id = ntohs(callid);
+ exp->help.exp_pptp_info.pns_call_id = ntohs(peer_callid);
+ DEBUGP("calling expect_related ");
+ DUMP_TUPLE_RAW(&exp->tuple);
+
/* Add GRE keymap entries */
- if (ip_ct_gre_keymap_add(master, &exp_reply->tuple, 0) != 0) {
- ip_conntrack_unexpect_related(exp_orig);
- ip_conntrack_unexpect_related(exp_reply);
- DEBUGP("cannot keymap_add() exp\n");
+ if (ip_ct_gre_keymap_add(exp, &exp->tuple, 0) != 0) {
+ kfree(exp);
return 1;
}
- invert_tuplepr(&inv_tuple, &exp_reply->tuple);
- if (ip_ct_gre_keymap_add(master, &inv_tuple, 1) != 0) {
- ip_conntrack_unexpect_related(exp_orig);
- ip_conntrack_unexpect_related(exp_reply);
- ip_ct_gre_keymap_destroy(master);
- DEBUGP("cannot keymap_add() exp_inv\n");
+ invert_tuplepr(&inv_tuple, &exp->tuple);
+ if (ip_ct_gre_keymap_add(exp, &inv_tuple, 1) != 0) {
+ ip_ct_gre_keymap_destroy(exp);
+ kfree(exp);
return 1;
}
+ if (ip_conntrack_expect_related(exp, master) != 0) {
+ ip_ct_gre_keymap_destroy(exp);
+ kfree(exp);
+ DEBUGP("cannot expect_related()\n");
+ return 1;
+ }
}
return 0;
}
static inline int
-pptp_inbound_pkt(struct sk_buff **pskb,
+pptp_inbound_pkt(struct sk_buff *skb,
struct tcphdr *tcph,
unsigned int ctlhoff,
size_t datalen,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo)
+ struct ip_conntrack *ct)
{
struct PptpControlHeader _ctlh, *ctlh;
unsigned int reqlen;
u_int16_t msg, *cid, *pcid;
u_int32_t seq;
- ctlh = skb_header_pointer(*pskb, ctlhoff, sizeof(_ctlh), &_ctlh);
+ ctlh = skb_header_pointer(skb, ctlhoff, sizeof(_ctlh), &_ctlh);
if (unlikely(!ctlh)) {
DEBUGP("error during skb_header_pointer\n");
return NF_ACCEPT;
}
reqlen = datalen - sizeof(struct pptp_pkt_hdr) - sizeof(_ctlh);
- pptpReq = skb_header_pointer(*pskb, ctlhoff+sizeof(_ctlh),
+ pptpReq = skb_header_pointer(skb, ctlhoff+sizeof(struct pptp_pkt_hdr),
reqlen, &_pptpReq);
if (unlikely(!pptpReq)) {
DEBUGP("error during skb_header_pointer\n");
break;
}
-
- if (ip_nat_pptp_hook_inbound)
- return ip_nat_pptp_hook_inbound(pskb, ct, ctinfo, ctlh,
- pptpReq);
-
return NF_ACCEPT;
}
static inline int
-pptp_outbound_pkt(struct sk_buff **pskb,
+pptp_outbound_pkt(struct sk_buff *skb,
struct tcphdr *tcph,
unsigned int ctlhoff,
size_t datalen,
- struct ip_conntrack *ct,
- enum ip_conntrack_info ctinfo)
+ struct ip_conntrack *ct)
{
struct PptpControlHeader _ctlh, *ctlh;
unsigned int reqlen;
struct ip_ct_pptp_master *info = &ct->help.ct_pptp_info;
u_int16_t msg, *cid, *pcid;
- ctlh = skb_header_pointer(*pskb, ctlhoff, sizeof(_ctlh), &_ctlh);
+ ctlh = skb_header_pointer(skb, ctlhoff, sizeof(_ctlh), &_ctlh);
if (!ctlh)
return NF_ACCEPT;
reqlen = datalen - sizeof(struct pptp_pkt_hdr) - sizeof(_ctlh);
- pptpReq = skb_header_pointer(*pskb, ctlhoff+sizeof(_ctlh), reqlen,
+ pptpReq = skb_header_pointer(skb, ctlhoff+sizeof(_ctlh), reqlen,
&_pptpReq);
if (!pptpReq)
return NF_ACCEPT;
case PPTP_OUT_CALL_REQUEST:
if (reqlen < sizeof(_pptpReq.ocreq)) {
DEBUGP("%s: short packet\n", strMName[msg]);
- /* FIXME: break; */
+ break;
}
/* client initiating connection to server */
/* unknown: no need to create GRE masq table entry */
break;
}
-
- if (ip_nat_pptp_hook_outbound)
- return ip_nat_pptp_hook_outbound(pskb, ct, ctinfo, ctlh,
- pptpReq);
return NF_ACCEPT;
}
/* track caller id inside control connection, call expect_related */
static int
-conntrack_pptp_help(struct sk_buff **pskb,
+conntrack_pptp_help(struct sk_buff *skb,
struct ip_conntrack *ct, enum ip_conntrack_info ctinfo)
{
struct pptp_pkt_hdr _pptph, *pptph;
struct tcphdr _tcph, *tcph;
- u_int32_t tcplen = (*pskb)->len - (*pskb)->nh.iph->ihl * 4;
+ u_int32_t tcplen = skb->len - skb->nh.iph->ihl * 4;
u_int32_t datalen;
void *datalimit;
int dir = CTINFO2DIR(ctinfo);
return NF_ACCEPT;
}
- nexthdr_off = (*pskb)->nh.iph->ihl*4;
- tcph = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4, sizeof(_tcph),
+ nexthdr_off = skb->nh.iph->ihl*4;
+ tcph = skb_header_pointer(skb, skb->nh.iph->ihl*4, sizeof(_tcph),
&_tcph);
if (!tcph)
return NF_ACCEPT;
datalen = tcplen - tcph->doff * 4;
/* checksum invalid? */
- if (tcp_v4_check(tcph, tcplen, (*pskb)->nh.iph->saddr,
- (*pskb)->nh.iph->daddr,
- csum_partial((char *) tcph, tcplen, 0))) {
- DEBUGP(" bad csum\n");
+ if (tcp_v4_check(tcph, tcplen, skb->nh.iph->saddr, skb->nh.iph->daddr,
+ csum_partial((char *) tcph, tcplen, 0))) {
+ printk(KERN_NOTICE __FILE__ ": bad csum\n");
/* W2K PPTP server sends TCP packets with wrong checksum :(( */
//return NF_ACCEPT;
}
}
nexthdr_off += tcph->doff*4;
- pptph = skb_header_pointer(*pskb, (*pskb)->nh.iph->ihl*4 + tcph->doff*4,
+ pptph = skb_header_pointer(skb, skb->nh.iph->ihl*4 + tcph->doff*4,
sizeof(_pptph), &_pptph);
if (!pptph) {
DEBUGP("no full PPTP header, can't track\n");
* established from PNS->PAC. However, RFC makes no guarantee */
if (dir == IP_CT_DIR_ORIGINAL)
/* client -> server (PNS -> PAC) */
- ret = pptp_outbound_pkt(pskb, tcph, nexthdr_off, datalen, ct,
- ctinfo);
+ ret = pptp_outbound_pkt(skb, tcph, nexthdr_off, datalen, ct);
else
/* server -> client (PAC -> PNS) */
- ret = pptp_inbound_pkt(pskb, tcph, nexthdr_off, datalen, ct,
- ctinfo);
+ ret = pptp_inbound_pkt(skb, tcph, nexthdr_off, datalen, ct);
DEBUGP("sstate: %d->%d, cstate: %d->%d\n",
oldsstate, info->sstate, oldcstate, info->cstate);
UNLOCK_BH(&ip_pptp_lock);
static struct ip_conntrack_helper pptp = {
.list = { NULL, NULL },
.name = "pptp",
+ .flags = IP_CT_HELPER_F_REUSE_EXPECT,
.me = THIS_MODULE,
.max_expected = 2,
- .timeout = 5 * 60,
+ .timeout = 0,
.tuple = { .src = { .ip = 0,
.u = { .tcp = { .port =
__constant_htons(PPTP_CONTROL_PORT) } }
},
.dst = { .ip = 0,
.u = { .all = 0 },
- .protonum = 0xff
+ .protonum = 0xffff
}
},
.help = conntrack_pptp_help
{
int retcode;
- DEBUGP(" registering helper\n");
+ DEBUGP(__FILE__ ": registering helper\n");
if ((retcode = ip_conntrack_helper_register(&pptp))) {
printk(KERN_ERR "Unable to register conntrack application "
"helper for pptp: %d\n", retcode);
module_exit(fini);
EXPORT_SYMBOL(ip_pptp_lock);
-EXPORT_SYMBOL(ip_nat_pptp_hook_outbound);
-EXPORT_SYMBOL(ip_nat_pptp_hook_inbound);
-EXPORT_SYMBOL(ip_nat_pptp_hook_exp_gre);
-EXPORT_SYMBOL(ip_nat_pptp_hook_expectfn);
/*
- * ip_conntrack_proto_gre.c - Version 3.0
+ * ip_conntrack_proto_gre.c - Version 2.0
*
* Connection tracking protocol helper module for GRE.
*
*
* Documentation about PPTP can be found in RFC 2637
*
- * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2000-2004 by Harald Welte <laforge@gnumonks.org>
*
* Development of this code funded by Astaro AG (http://www.astaro.com/)
*
#if 0
#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, __FUNCTION__, ## args)
#define DUMP_TUPLE_GRE(x) printk("%u.%u.%u.%u:0x%x -> %u.%u.%u.%u:0x%x\n", \
- NIPQUAD((x)->src.ip), ntohs((x)->src.u.gre.key), \
- NIPQUAD((x)->dst.ip), ntohs((x)->dst.u.gre.key))
+ NIPQUAD((x)->src.ip), ntohl((x)->src.u.gre.key), \
+ NIPQUAD((x)->dst.ip), ntohl((x)->dst.u.gre.key))
#else
#define DEBUGP(x, args...)
#define DUMP_TUPLE_GRE(x)
static u_int32_t gre_keymap_lookup(struct ip_conntrack_tuple *t)
{
struct ip_ct_gre_keymap *km;
- u_int32_t key = 0;
+ u_int32_t key;
READ_LOCK(&ip_ct_gre_lock);
km = LIST_FIND(&gre_keymap_list, gre_key_cmpfn,
struct ip_ct_gre_keymap *, t);
- if (km)
- key = km->tuple.src.u.gre.key;
+ if (!km) {
+ READ_UNLOCK(&ip_ct_gre_lock);
+ return 0;
+ }
+
+ key = km->tuple.src.u.gre.key;
READ_UNLOCK(&ip_ct_gre_lock);
-
- DEBUGP("lookup src key 0x%x up key for ", key);
- DUMP_TUPLE_GRE(t);
return key;
}
-/* add a single keymap entry, associate with specified master ct */
-int
-ip_ct_gre_keymap_add(struct ip_conntrack *ct,
- struct ip_conntrack_tuple *t, int reply)
+/* add a single keymap entry, associate with specified expect */
+int ip_ct_gre_keymap_add(struct ip_conntrack_expect *exp,
+ struct ip_conntrack_tuple *t, int reply)
{
- struct ip_ct_gre_keymap *km, *old;
-
- if (!ct->helper || strcmp(ct->helper->name, "pptp")) {
- DEBUGP("refusing to add GRE keymap to non-pptp session\n");
- return -1;
- }
+ struct ip_ct_gre_keymap *km;
km = kmalloc(sizeof(*km), GFP_ATOMIC);
if (!km)
memcpy(&km->tuple, t, sizeof(*t));
- if (!reply) {
- if (ct->help.ct_pptp_info.keymap_orig) {
- kfree(km);
-
- /* check whether it's a retransmission */
- old = LIST_FIND(&gre_keymap_list, gre_key_cmpfn,
- struct ip_ct_gre_keymap *, t);
- if (old == ct->help.ct_pptp_info.keymap_orig) {
- DEBUGP("retransmission\n");
- return 0;
- }
-
- DEBUGP("trying to override keymap_orig for ct %p\n",
- ct);
- return -2;
- }
- ct->help.ct_pptp_info.keymap_orig = km;
- } else {
- if (ct->help.ct_pptp_info.keymap_reply) {
- kfree(km);
-
- /* check whether it's a retransmission */
- old = LIST_FIND(&gre_keymap_list, gre_key_cmpfn,
- struct ip_ct_gre_keymap *, t);
- if (old == ct->help.ct_pptp_info.keymap_reply) {
- DEBUGP("retransmission\n");
- return 0;
- }
-
- DEBUGP("trying to override keymap_reply for ct %p\n",
- ct);
- return -2;
- }
- ct->help.ct_pptp_info.keymap_reply = km;
- }
+ if (!reply)
+ exp->proto.gre.keymap_orig = km;
+ else
+ exp->proto.gre.keymap_reply = km;
DEBUGP("adding new entry %p: ", km);
DUMP_TUPLE_GRE(&km->tuple);
return 0;
}
-/* destroy the keymap entries associated with specified master ct */
-void ip_ct_gre_keymap_destroy(struct ip_conntrack *ct)
+/* change the tuple of a keymap entry (used by nat helper) */
+void ip_ct_gre_keymap_change(struct ip_ct_gre_keymap *km,
+ struct ip_conntrack_tuple *t)
{
- DEBUGP("entering for ct %p\n", ct);
+	if (!km) {
+		printk(KERN_WARNING
+		       "NULL GRE conntrack keymap change requested\n");
+		return;
+	}
+
+ DEBUGP("changing entry %p to: ", km);
+ DUMP_TUPLE_GRE(t);
- if (!ct->helper || strcmp(ct->helper->name, "pptp")) {
- DEBUGP("refusing to destroy GRE keymap to non-pptp session\n");
- return;
- }
+ WRITE_LOCK(&ip_ct_gre_lock);
+ memcpy(&km->tuple, t, sizeof(km->tuple));
+ WRITE_UNLOCK(&ip_ct_gre_lock);
+}
+/* destroy the keymap entries associated with specified expect */
+void ip_ct_gre_keymap_destroy(struct ip_conntrack_expect *exp)
+{
+ DEBUGP("entering for exp %p\n", exp);
WRITE_LOCK(&ip_ct_gre_lock);
- if (ct->help.ct_pptp_info.keymap_orig) {
- DEBUGP("removing %p from list\n",
- ct->help.ct_pptp_info.keymap_orig);
- list_del(&ct->help.ct_pptp_info.keymap_orig->list);
- kfree(ct->help.ct_pptp_info.keymap_orig);
- ct->help.ct_pptp_info.keymap_orig = NULL;
+ if (exp->proto.gre.keymap_orig) {
+ DEBUGP("removing %p from list\n", exp->proto.gre.keymap_orig);
+ list_del(&exp->proto.gre.keymap_orig->list);
+ kfree(exp->proto.gre.keymap_orig);
+ exp->proto.gre.keymap_orig = NULL;
}
- if (ct->help.ct_pptp_info.keymap_reply) {
- DEBUGP("removing %p from list\n",
- ct->help.ct_pptp_info.keymap_reply);
- list_del(&ct->help.ct_pptp_info.keymap_reply->list);
- kfree(ct->help.ct_pptp_info.keymap_reply);
- ct->help.ct_pptp_info.keymap_reply = NULL;
+ if (exp->proto.gre.keymap_reply) {
+ DEBUGP("removing %p from list\n", exp->proto.gre.keymap_reply);
+ list_del(&exp->proto.gre.keymap_reply->list);
+ kfree(exp->proto.gre.keymap_reply);
+ exp->proto.gre.keymap_reply = NULL;
}
WRITE_UNLOCK(&ip_ct_gre_lock);
}
DEBUGP("GRE_VERSION_PPTP but unknown proto\n");
return 0;
}
- tuple->dst.u.gre.key = pgrehdr->call_id;
+ tuple->dst.u.gre.key = htonl(ntohs(pgrehdr->call_id));
break;
default:
tuple->src.u.gre.key = srckey;
#if 0
- DEBUGP("found src key %x for tuple ", ntohs(srckey));
+ DEBUGP("found src key %x for tuple ", ntohl(srckey));
DUMP_TUPLE_GRE(tuple);
#endif
}
/* print gre part of tuple */
-static int gre_print_tuple(struct seq_file *s,
- const struct ip_conntrack_tuple *tuple)
+static unsigned int gre_print_tuple(char *buffer,
+ const struct ip_conntrack_tuple *tuple)
{
- return seq_printf(s, "srckey=0x%x dstkey=0x%x ",
- ntohs(tuple->src.u.gre.key),
- ntohs(tuple->dst.u.gre.key));
+ return sprintf(buffer, "srckey=0x%x dstkey=0x%x ",
+ ntohl(tuple->src.u.gre.key),
+ ntohl(tuple->dst.u.gre.key));
}
/* print private data for conntrack */
-static int gre_print_conntrack(struct seq_file *s,
- const struct ip_conntrack *ct)
+static unsigned int gre_print_conntrack(char *buffer,
+ const struct ip_conntrack *ct)
{
- return seq_printf(s, "timeout=%u, stream_timeout=%u ",
- (ct->proto.gre.timeout / HZ),
- (ct->proto.gre.stream_timeout / HZ));
+ return sprintf(buffer, "timeout=%u, stream_timeout=%u ",
+ (ct->proto.gre.timeout / HZ),
+ (ct->proto.gre.stream_timeout / HZ));
}
/* Returns verdict for packet, and may modify conntrack */
* and is about to be deleted from memory */
static void gre_destroy(struct ip_conntrack *ct)
{
- struct ip_conntrack *master = ct->master;
+ struct ip_conntrack_expect *master = ct->master;
+
DEBUGP(" entering\n");
- if (!master)
- DEBUGP("no master !?!\n");
- else
- ip_ct_gre_keymap_destroy(master);
+ if (!master) {
+ DEBUGP("no master exp for ct %p\n", ct);
+ return;
+ }
+
+ ip_ct_gre_keymap_destroy(master);
}
/* protocol helper struct */
.packet = gre_packet,
.new = gre_new,
.destroy = gre_destroy,
+ .exp_matches_pkt = NULL,
.me = THIS_MODULE
};
}
EXPORT_SYMBOL(ip_ct_gre_keymap_add);
+EXPORT_SYMBOL(ip_ct_gre_keymap_change);
EXPORT_SYMBOL(ip_ct_gre_keymap_destroy);
module_init(init);
const struct net_device *out,
int (*okfn)(struct sk_buff *))
{
-#if !defined(CONFIG_IP_NF_NAT) && !defined(CONFIG_IP_NF_NAT_MODULE)
- /* Previously seen (loopback)? Ignore. Do this before
- fragment check. */
- if ((*pskb)->nfct)
- return NF_ACCEPT;
-#endif
-
/* Gather fragments. */
if ((*pskb)->nh.iph->frag_off & htons(IP_MF|IP_OFFSET)) {
*pskb = ip_ct_gather_frags(*pskb,
EXPORT_SYMBOL(ip_conntrack_hash);
EXPORT_SYMBOL(ip_conntrack_untracked);
EXPORT_SYMBOL_GPL(ip_conntrack_find_get);
-EXPORT_SYMBOL_GPL(__ip_conntrack_find);
-EXPORT_SYMBOL_GPL(__ip_conntrack_exp_find);
EXPORT_SYMBOL_GPL(ip_conntrack_put);
#ifdef CONFIG_IP_NF_NAT_NEEDED
EXPORT_SYMBOL(ip_conntrack_tcp_update);
/*
- * ip_nat_pptp.c - Version 3.0
+ * ip_nat_pptp.c - Version 2.0
*
* NAT support for PPTP (Point to Point Tunneling Protocol).
 * PPTP is a protocol for creating virtual private networks.
* GRE is defined in RFC 1701 and RFC 1702. Documentation of
* PPTP can be found in RFC 2637
*
- * (C) 2000-2005 by Harald Welte <laforge@gnumonks.org>
+ * (C) 2000-2004 by Harald Welte <laforge@gnumonks.org>
*
* Development of this code funded by Astaro AG (http://www.astaro.com/)
*
- * TODO: - NAT to a unique tuple, not to TCP source port
+ * TODO: - Support for multiple calls within one session
+ * (needs netfilter newnat code)
+ * - NAT to a unique tuple, not to TCP source port
* (needs netfilter tuple reservation)
*
* Changes:
* TCP header is mangled (Philip Craig <philipc@snapgear.com>)
* 2004-10-22 - Version 2.0
* - kernel 2.6.x version
- * 2005-06-10 - Version 3.0
- * - kernel >= 2.6.11 version,
- * funded by Oxcoda NetBox Blue (http://www.netboxblue.com/)
*
*/
#include <linux/ip.h>
#include <linux/tcp.h>
#include <net/tcp.h>
-
#include <linux/netfilter_ipv4/ip_nat.h>
#include <linux/netfilter_ipv4/ip_nat_rule.h>
#include <linux/netfilter_ipv4/ip_nat_helper.h>
#include <linux/netfilter_ipv4/ip_nat_pptp.h>
-#include <linux/netfilter_ipv4/ip_conntrack_core.h>
#include <linux/netfilter_ipv4/ip_conntrack_helper.h>
#include <linux/netfilter_ipv4/ip_conntrack_proto_gre.h>
#include <linux/netfilter_ipv4/ip_conntrack_pptp.h>
-#define IP_NAT_PPTP_VERSION "3.0"
+#define IP_NAT_PPTP_VERSION "2.0"
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Harald Welte <laforge@gnumonks.org>");
MODULE_DESCRIPTION("Netfilter NAT helper module for PPTP");
-#if 1
+#if 0
#include "ip_conntrack_pptp_priv.h"
-#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \
- __FUNCTION__, ## args)
+#define DEBUGP(format, args...) printk(KERN_DEBUG __FILE__ ":" __FUNCTION__ \
+ ": " format, ## args)
#else
#define DEBUGP(format, args...)
#endif
-static void pptp_nat_expected(struct ip_conntrack *ct,
- struct ip_conntrack_expect *exp)
+static unsigned int
+pptp_nat_expected(struct sk_buff **pskb,
+ unsigned int hooknum,
+ struct ip_conntrack *ct,
+ struct ip_nat_info *info)
{
- struct ip_conntrack *master = ct->master;
- struct ip_conntrack_expect *other_exp;
- struct ip_conntrack_tuple t;
+ struct ip_conntrack *master = master_ct(ct);
+ struct ip_nat_multi_range mr;
struct ip_ct_pptp_master *ct_pptp_info;
struct ip_nat_pptp *nat_pptp_info;
+ u_int32_t newip, newcid;
+ int ret;
+
+ IP_NF_ASSERT(info);
+ IP_NF_ASSERT(master);
+ IP_NF_ASSERT(!(info->initialized & (1 << HOOK2MANIP(hooknum))));
+
+ DEBUGP("we have a connection!\n");
+ LOCK_BH(&ip_pptp_lock);
ct_pptp_info = &master->help.ct_pptp_info;
nat_pptp_info = &master->nat.help.nat_pptp_info;
- /* And here goes the grand finale of corrosion... */
-
- if (exp->dir == IP_CT_DIR_ORIGINAL) {
- DEBUGP("we are PNS->PAC\n");
- /* therefore, build tuple for PAC->PNS */
- t.src.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip;
- t.src.u.gre.key = htons(master->help.ct_pptp_info.pac_call_id);
- t.dst.ip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
- t.dst.u.gre.key = htons(master->help.ct_pptp_info.pns_call_id);
- t.dst.protonum = IPPROTO_GRE;
+ /* need to alter GRE tuple because conntrack expectfn() used 'wrong'
+ * (unmanipulated) values */
+ if (HOOK2MANIP(hooknum) == IP_NAT_MANIP_DST) {
+ DEBUGP("completing tuples with NAT info \n");
+ /* we can do this, since we're unconfirmed */
+ if (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u.gre.key ==
+ htonl(ct_pptp_info->pac_call_id)) {
+ /* assume PNS->PAC */
+ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.gre.key =
+ htonl(nat_pptp_info->pns_call_id);
+ ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.gre.key =
+ htonl(nat_pptp_info->pns_call_id);
+ newip = master->tuplehash[IP_CT_DIR_REPLY].tuple.src.ip;
+ newcid = htonl(nat_pptp_info->pac_call_id);
+ } else {
+ /* assume PAC->PNS */
+ ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.u.gre.key =
+ htonl(nat_pptp_info->pac_call_id);
+ ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.u.gre.key =
+ htonl(nat_pptp_info->pac_call_id);
+ newip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
+ newcid = htonl(nat_pptp_info->pns_call_id);
+ }
} else {
- DEBUGP("we are PAC->PNS\n");
- /* build tuple for PNS->PAC */
- t.src.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.src.ip;
- t.src.u.gre.key =
- htons(master->nat.help.nat_pptp_info.pns_call_id);
- t.dst.ip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
- t.dst.u.gre.key =
- htons(master->nat.help.nat_pptp_info.pac_call_id);
- t.dst.protonum = IPPROTO_GRE;
+ if (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.u.gre.key ==
+ htonl(ct_pptp_info->pac_call_id)) {
+ /* assume PNS->PAC */
+ newip = master->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
+ newcid = htonl(ct_pptp_info->pns_call_id);
+ }
+ else {
+ /* assume PAC->PNS */
+ newip = master->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip;
+ newcid = htonl(ct_pptp_info->pac_call_id);
+ }
}
- DEBUGP("trying to unexpect other dir: ");
- DUMP_TUPLE(&t);
- other_exp = __ip_conntrack_exp_find(&t);
- if (other_exp) {
- ip_conntrack_unexpect_related(other_exp);
- DEBUGP("success\n");
- } else {
- DEBUGP("not found!\n");
- }
+ mr.rangesize = 1;
+ mr.range[0].flags = IP_NAT_RANGE_MAP_IPS | IP_NAT_RANGE_PROTO_SPECIFIED;
+ mr.range[0].min_ip = mr.range[0].max_ip = newip;
+ mr.range[0].min = mr.range[0].max =
+		((union ip_conntrack_manip_proto){ newcid });
+ DEBUGP("change ip to %u.%u.%u.%u\n",
+ NIPQUAD(newip));
+ DEBUGP("change key to 0x%x\n", ntohl(newcid));
+ ret = ip_nat_setup_info(ct, &mr, hooknum);
+
+ UNLOCK_BH(&ip_pptp_lock);
+
+ return ret;
- ip_nat_follow_master(ct, exp);
}
/* outbound packets == from PNS to PAC */
-static int
+static inline unsigned int
pptp_outbound_pkt(struct sk_buff **pskb,
struct ip_conntrack *ct,
enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq)
+ struct ip_conntrack_expect *exp)
{
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct tcphdr *tcph = (void *) iph + iph->ihl*4;
+ struct pptp_pkt_hdr *pptph = (struct pptp_pkt_hdr *)
+ ((void *)tcph + tcph->doff*4);
+
+ struct PptpControlHeader *ctlh;
+ union pptp_ctrl_union *pptpReq;
struct ip_ct_pptp_master *ct_pptp_info = &ct->help.ct_pptp_info;
struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info;
u_int16_t msg, *cid = NULL, new_callid;
+ /* FIXME: size checks !!! */
+ ctlh = (struct PptpControlHeader *) ((void *) pptph + sizeof(*pptph));
+ pptpReq = (void *) ((void *) ctlh + sizeof(*ctlh));
+
new_callid = htons(ct_pptp_info->pns_call_id);
switch (msg = ntohs(ctlh->messageType)) {
return NF_ACCEPT;
}
- /* only OUT_CALL_REQUEST, IN_CALL_REPLY, CALL_CLEAR_REQUEST pass
- * down to here */
-
IP_NF_ASSERT(cid);
DEBUGP("altering call id from 0x%04x to 0x%04x\n",
ntohs(*cid), ntohs(new_callid));
/* mangle packet */
- if (ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
- (void *)cid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)),
- sizeof(new_callid),
- (char *)&new_callid,
- sizeof(new_callid)) == 0)
- return NF_DROP;
+ ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, (void *)cid - (void *)pptph,
+ sizeof(new_callid), (char *)&new_callid,
+ sizeof(new_callid));
return NF_ACCEPT;
}
-static int
-pptp_exp_gre(struct ip_conntrack_expect *expect_orig,
- struct ip_conntrack_expect *expect_reply)
-{
- struct ip_ct_pptp_master *ct_pptp_info =
- &expect_orig->master->help.ct_pptp_info;
- struct ip_nat_pptp *nat_pptp_info =
- &expect_orig->master->nat.help.nat_pptp_info;
-
- struct ip_conntrack *ct = expect_orig->master;
-
- struct ip_conntrack_tuple inv_t;
- struct ip_conntrack_tuple *orig_t, *reply_t;
-
- /* save original PAC call ID in nat_info */
- nat_pptp_info->pac_call_id = ct_pptp_info->pac_call_id;
-
- /* alter expectation */
- orig_t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
- reply_t = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
-
- /* alter expectation for PNS->PAC direction */
- invert_tuplepr(&inv_t, &expect_orig->tuple);
- expect_orig->saved_proto.gre.key = htons(nat_pptp_info->pac_call_id);
- expect_orig->tuple.src.u.gre.key = htons(nat_pptp_info->pns_call_id);
- expect_orig->tuple.dst.u.gre.key = htons(ct_pptp_info->pac_call_id);
- inv_t.src.ip = reply_t->src.ip;
- inv_t.dst.ip = reply_t->dst.ip;
- inv_t.src.u.gre.key = htons(nat_pptp_info->pac_call_id);
- inv_t.dst.u.gre.key = htons(ct_pptp_info->pns_call_id);
-
- if (!ip_conntrack_expect_related(expect_orig)) {
- DEBUGP("successfully registered expect\n");
- } else {
- DEBUGP("can't expect_related(expect_orig)\n");
- ip_conntrack_expect_free(expect_orig);
- return 1;
- }
-
- /* alter expectation for PAC->PNS direction */
- invert_tuplepr(&inv_t, &expect_reply->tuple);
- expect_reply->saved_proto.gre.key = htons(nat_pptp_info->pns_call_id);
- expect_reply->tuple.src.u.gre.key = htons(nat_pptp_info->pac_call_id);
- expect_reply->tuple.dst.u.gre.key = htons(ct_pptp_info->pns_call_id);
- inv_t.src.ip = orig_t->src.ip;
- inv_t.dst.ip = orig_t->dst.ip;
- inv_t.src.u.gre.key = htons(nat_pptp_info->pns_call_id);
- inv_t.dst.u.gre.key = htons(ct_pptp_info->pac_call_id);
-
- if (!ip_conntrack_expect_related(expect_reply)) {
- DEBUGP("successfully registered expect\n");
- } else {
- DEBUGP("can't expect_related(expect_reply)\n");
- ip_conntrack_unexpect_related(expect_orig);
- ip_conntrack_expect_free(expect_reply);
- return 1;
- }
-
- if (ip_ct_gre_keymap_add(ct, &expect_reply->tuple, 0) < 0) {
- DEBUGP("can't register original keymap\n");
- ip_conntrack_unexpect_related(expect_orig);
- ip_conntrack_unexpect_related(expect_reply);
- return 1;
- }
-
- if (ip_ct_gre_keymap_add(ct, &inv_t, 1) < 0) {
- DEBUGP("can't register reply keymap\n");
- ip_conntrack_unexpect_related(expect_orig);
- ip_conntrack_unexpect_related(expect_reply);
- ip_ct_gre_keymap_destroy(ct);
- return 1;
- }
-
- return 0;
-}
-
/* inbound packets == from PAC to PNS */
-static int
+static inline unsigned int
pptp_inbound_pkt(struct sk_buff **pskb,
struct ip_conntrack *ct,
enum ip_conntrack_info ctinfo,
- struct PptpControlHeader *ctlh,
- union pptp_ctrl_union *pptpReq)
+ struct ip_conntrack_expect *oldexp)
{
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct tcphdr *tcph = (void *) iph + iph->ihl*4;
+ struct pptp_pkt_hdr *pptph = (struct pptp_pkt_hdr *)
+ ((void *)tcph + tcph->doff*4);
+
+ struct PptpControlHeader *ctlh;
+ union pptp_ctrl_union *pptpReq;
+ struct ip_ct_pptp_master *ct_pptp_info = &ct->help.ct_pptp_info;
struct ip_nat_pptp *nat_pptp_info = &ct->nat.help.nat_pptp_info;
+
u_int16_t msg, new_cid = 0, new_pcid, *pcid = NULL, *cid = NULL;
+ u_int32_t old_dst_ip;
+
+ struct ip_conntrack_tuple t, inv_t;
+ struct ip_conntrack_tuple *orig_t, *reply_t;
- int ret = NF_ACCEPT, rv;
+ /* FIXME: size checks !!! */
+ ctlh = (struct PptpControlHeader *) ((void *) pptph + sizeof(*pptph));
+ pptpReq = (void *) ((void *) ctlh + sizeof(*ctlh));
new_pcid = htons(nat_pptp_info->pns_call_id);
case PPTP_OUT_CALL_REPLY:
pcid = &pptpReq->ocack.peersCallID;
cid = &pptpReq->ocack.callID;
+ if (!oldexp) {
+ DEBUGP("outcall but no expectation\n");
+ break;
+ }
+ old_dst_ip = oldexp->tuple.dst.ip;
+ t = oldexp->tuple;
+ invert_tuplepr(&inv_t, &t);
+
+ /* save original PAC call ID in nat_info */
+ nat_pptp_info->pac_call_id = ct_pptp_info->pac_call_id;
+
+ /* alter expectation */
+ orig_t = &ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple;
+ reply_t = &ct->tuplehash[IP_CT_DIR_REPLY].tuple;
+ if (t.src.ip == orig_t->src.ip && t.dst.ip == orig_t->dst.ip) {
+ /* expectation for PNS->PAC direction */
+ t.src.u.gre.key = htonl(nat_pptp_info->pns_call_id);
+ t.dst.u.gre.key = htonl(ct_pptp_info->pac_call_id);
+ inv_t.src.ip = reply_t->src.ip;
+ inv_t.dst.ip = reply_t->dst.ip;
+ inv_t.src.u.gre.key = htonl(nat_pptp_info->pac_call_id);
+ inv_t.dst.u.gre.key = htonl(ct_pptp_info->pns_call_id);
+ } else {
+ /* expectation for PAC->PNS direction */
+ t.src.u.gre.key = htonl(nat_pptp_info->pac_call_id);
+ t.dst.u.gre.key = htonl(ct_pptp_info->pns_call_id);
+ inv_t.src.ip = orig_t->src.ip;
+ inv_t.dst.ip = orig_t->dst.ip;
+ inv_t.src.u.gre.key = htonl(nat_pptp_info->pns_call_id);
+ inv_t.dst.u.gre.key = htonl(ct_pptp_info->pac_call_id);
+ }
+
+ if (!ip_conntrack_change_expect(oldexp, &t)) {
+ DEBUGP("successfully changed expect\n");
+ } else {
+ DEBUGP("can't change expect\n");
+ }
+ ip_ct_gre_keymap_change(oldexp->proto.gre.keymap_orig, &t);
+ ip_ct_gre_keymap_change(oldexp->proto.gre.keymap_reply, &inv_t);
break;
case PPTP_IN_CALL_CONNECT:
pcid = &pptpReq->iccon.peersCallID;
+ if (!oldexp)
+ break;
+ old_dst_ip = oldexp->tuple.dst.ip;
+ t = oldexp->tuple;
+
+ /* alter expectation, no need for callID */
+ if (t.dst.ip == ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.ip) {
+ /* expectation for PNS->PAC direction */
+ t.src.ip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
+ } else {
+ /* expectation for PAC->PNS direction */
+ t.dst.ip = ct->tuplehash[IP_CT_DIR_REPLY].tuple.dst.ip;
+ }
+
+ if (!ip_conntrack_change_expect(oldexp, &t)) {
+ DEBUGP("successfully changed expect\n");
+ } else {
+ DEBUGP("can't change expect\n");
+ }
break;
case PPTP_IN_CALL_REQUEST:
/* only need to nat in case PAC is behind NAT box */
return NF_ACCEPT;
}
- /* only OUT_CALL_REPLY, IN_CALL_CONNECT, IN_CALL_REQUEST,
- * WAN_ERROR_NOTIFY, CALL_DISCONNECT_NOTIFY pass down here */
-
/* mangle packet */
IP_NF_ASSERT(pcid);
DEBUGP("altering peer call id from 0x%04x to 0x%04x\n",
ntohs(*pcid), ntohs(new_pcid));
-
- rv = ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
- (void *)pcid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)),
- sizeof(new_pcid), (char *)&new_pcid,
- sizeof(new_pcid));
- if (rv != NF_ACCEPT)
- return rv;
+ ip_nat_mangle_tcp_packet(pskb, ct, ctinfo, (void *)pcid - (void *)pptph,
+ sizeof(new_pcid), (char *)&new_pcid,
+ sizeof(new_pcid));
if (new_cid) {
IP_NF_ASSERT(cid);
DEBUGP("altering call id from 0x%04x to 0x%04x\n",
ntohs(*cid), ntohs(new_cid));
- rv = ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
- (void *)cid - ((void *)ctlh - sizeof(struct pptp_pkt_hdr)),
- sizeof(new_cid),
- (char *)&new_cid,
- sizeof(new_cid));
- if (rv != NF_ACCEPT)
- return rv;
+ ip_nat_mangle_tcp_packet(pskb, ct, ctinfo,
+ (void *)cid - (void *)pptph,
+ sizeof(new_cid), (char *)&new_cid,
+ sizeof(new_cid));
}
- /* check for earlier return value of 'switch' above */
- if (ret != NF_ACCEPT)
- return ret;
-
/* great, at least we don't need to resize packets */
return NF_ACCEPT;
}
-static int __init init(void)
+static unsigned int tcp_help(struct ip_conntrack *ct,
+ struct ip_conntrack_expect *exp,
+ struct ip_nat_info *info,
+ enum ip_conntrack_info ctinfo,
+ unsigned int hooknum, struct sk_buff **pskb)
{
- DEBUGP("%s: registering NAT helper\n", __FILE__);
+ struct iphdr *iph = (*pskb)->nh.iph;
+ struct tcphdr *tcph = (void *) iph + iph->ihl*4;
+ unsigned int datalen = (*pskb)->len - iph->ihl*4 - tcph->doff*4;
+ struct pptp_pkt_hdr *pptph;
+
+ int dir;
+
+ DEBUGP("entering\n");
+
+ /* Only mangle things once: DST for original direction
+ and SRC for reply direction. */
+ dir = CTINFO2DIR(ctinfo);
+ if (!((HOOK2MANIP(hooknum) == IP_NAT_MANIP_SRC
+ && dir == IP_CT_DIR_ORIGINAL)
+ || (HOOK2MANIP(hooknum) == IP_NAT_MANIP_DST
+ && dir == IP_CT_DIR_REPLY))) {
+ DEBUGP("Not touching dir %s at hook %s\n",
+ dir == IP_CT_DIR_ORIGINAL ? "ORIG" : "REPLY",
+ hooknum == NF_IP_POST_ROUTING ? "POSTROUTING"
+ : hooknum == NF_IP_PRE_ROUTING ? "PREROUTING"
+ : hooknum == NF_IP_LOCAL_OUT ? "OUTPUT"
+ : hooknum == NF_IP_LOCAL_IN ? "INPUT" : "???");
+ return NF_ACCEPT;
+ }
- BUG_ON(ip_nat_pptp_hook_outbound);
- ip_nat_pptp_hook_outbound = &pptp_outbound_pkt;
+ /* if packet is too small, just skip it */
+ if (datalen < sizeof(struct pptp_pkt_hdr)+
+ sizeof(struct PptpControlHeader)) {
+ DEBUGP("pptp packet too short\n");
+ return NF_ACCEPT;
+ }
- BUG_ON(ip_nat_pptp_hook_inbound);
- ip_nat_pptp_hook_inbound = &pptp_inbound_pkt;
+ pptph = (struct pptp_pkt_hdr *) ((void *)tcph + tcph->doff*4);
- BUG_ON(ip_nat_pptp_hook_exp_gre);
- ip_nat_pptp_hook_exp_gre = &pptp_exp_gre;
+ /* if it's not a control message, we can't handle it */
+ if (ntohs(pptph->packetType) != PPTP_PACKET_CONTROL ||
+ ntohl(pptph->magicCookie) != PPTP_MAGIC_COOKIE) {
+ DEBUGP("not a pptp control packet\n");
+ return NF_ACCEPT;
+ }
- BUG_ON(ip_nat_pptp_hook_expectfn);
- ip_nat_pptp_hook_expectfn = &pptp_nat_expected;
+ LOCK_BH(&ip_pptp_lock);
+
+ if (dir == IP_CT_DIR_ORIGINAL) {
+		/* requests sent by client to server (PNS->PAC) */
+ pptp_outbound_pkt(pskb, ct, ctinfo, exp);
+ } else {
+ /* response from the server to the client (PAC->PNS) */
+ pptp_inbound_pkt(pskb, ct, ctinfo, exp);
+ }
+
+ UNLOCK_BH(&ip_pptp_lock);
+
+ return NF_ACCEPT;
+}
+
+/* nat helper struct for control connection */
+static struct ip_nat_helper pptp_tcp_helper = {
+ .list = { NULL, NULL },
+ .name = "pptp",
+ .flags = IP_NAT_HELPER_F_ALWAYS,
+ .me = THIS_MODULE,
+ .tuple = { .src = { .ip = 0,
+ .u = { .tcp = { .port =
+ __constant_htons(PPTP_CONTROL_PORT) }
+ }
+ },
+ .dst = { .ip = 0,
+ .u = { .all = 0 },
+ .protonum = IPPROTO_TCP
+ }
+ },
+
+ .mask = { .src = { .ip = 0,
+ .u = { .tcp = { .port = 0xFFFF } }
+ },
+ .dst = { .ip = 0,
+ .u = { .all = 0 },
+ .protonum = 0xFFFF
+ }
+ },
+ .help = tcp_help,
+ .expect = pptp_nat_expected
+};
+
+
+static int __init init(void)
+{
+ DEBUGP("%s: registering NAT helper\n", __FILE__);
+ if (ip_nat_helper_register(&pptp_tcp_helper)) {
+ printk(KERN_ERR "Unable to register NAT application helper "
+ "for pptp\n");
+ return -EIO;
+ }
printk("ip_nat_pptp version %s loaded\n", IP_NAT_PPTP_VERSION);
return 0;
static void __exit fini(void)
{
DEBUGP("cleanup_module\n" );
-
- ip_nat_pptp_hook_expectfn = NULL;
- ip_nat_pptp_hook_exp_gre = NULL;
- ip_nat_pptp_hook_inbound = NULL;
- ip_nat_pptp_hook_outbound = NULL;
-
- /* Make sure noone calls it, meanwhile */
- synchronize_net();
-
+ ip_nat_helper_unregister(&pptp_tcp_helper);
printk("ip_nat_pptp version %s unloaded\n", IP_NAT_PPTP_VERSION);
}
MODULE_DESCRIPTION("Netfilter NAT protocol helper module for GRE");
#if 0
-#define DEBUGP(format, args...) printk(KERN_DEBUG "%s:%s: " format, __FILE__, \
- __FUNCTION__, ## args)
+#define DEBUGP(format, args...) printk(KERN_DEBUG __FILE__ ":" __FUNCTION__ \
+ ": " format, ## args)
#else
#define DEBUGP(x, args...)
#endif
const struct ip_conntrack *conntrack)
{
u_int32_t min, i, range_size;
- u_int16_t key = 0, *keyptr;
+ u_int32_t key = 0, *keyptr;
if (maniptype == IP_NAT_MANIP_SRC)
keyptr = &tuple->src.u.gre.key;
/* manipulate a GRE packet according to maniptype */
static int
gre_manip_pkt(struct sk_buff **pskb,
- unsigned int iphdroff,
- const struct ip_conntrack_tuple *tuple,
+ unsigned int hdroff,
+ const struct ip_conntrack_manip *manip,
enum ip_nat_manip_type maniptype)
{
struct gre_hdr *greh;
struct gre_hdr_pptp *pgreh;
- struct iphdr *iph = (struct iphdr *)((*pskb)->data + iphdroff);
- unsigned int hdroff = iphdroff + iph->ihl*4;
- /* pgreh includes two optional 32bit fields which are not required
- * to be there. That's where the magic '8' comes from */
- if (!skb_ip_make_writable(pskb, hdroff + sizeof(*pgreh)-8))
+ if (!skb_ip_make_writable(pskb, hdroff + sizeof(*pgreh)))
return 0;
greh = (void *)(*pskb)->data + hdroff;
/* FIXME: Never tested this code... */
*(gre_csum(greh)) =
ip_nat_cheat_check(~*(gre_key(greh)),
- tuple->dst.u.gre.key,
+ manip->u.gre.key,
*(gre_csum(greh)));
}
- *(gre_key(greh)) = tuple->dst.u.gre.key;
+ *(gre_key(greh)) = manip->u.gre.key;
break;
case GRE_VERSION_PPTP:
DEBUGP("call_id -> 0x%04x\n",
- ntohl(tuple->dst.u.gre.key));
- pgreh->call_id = htons(ntohl(tuple->dst.u.gre.key));
+ ntohl(manip->u.gre.key));
+ pgreh->call_id = htons(ntohl(manip->u.gre.key));
break;
default:
DEBUGP("can't nat unknown GRE version\n");
#include <linux/seq_file.h>
#include <linux/netfilter.h>
#include <linux/netfilter_ipv4.h>
+#include <linux/vs_base.h>
struct hlist_head raw_v4_htable[RAWV4_HTABLE_SIZE];
DEFINE_RWLOCK(raw_v4_lock);
sk_dst_reset(sk);
sk->sk_prot->hash(sk);
-
return 0;
}
write_lock_bh(&tp->syn_wait_lock);
tp->listen_opt = NULL;
write_unlock_bh(&tp->syn_wait_lock);
- tp->accept_queue = tp->accept_queue_tail = NULL;
+
+ tp->accept_queue_tail = NULL;
+ tp->accept_queue = NULL;
if (lopt->qlen) {
for (i = 0; i < TCP_SYNQ_HSIZE; i++) {
/* Find already established connection */
if (!tp->accept_queue) {
long timeo = sock_rcvtimeo(sk, flags & O_NONBLOCK);
-
/* If this is a non blocking socket don't sleep */
error = -EAGAIN;
if (!timeo)
req = tp->accept_queue;
if ((tp->accept_queue = req->dl_next) == NULL)
tp->accept_queue_tail = NULL;
-
- newsk = req->sk;
+ newsk = req->sk;
sk_acceptq_removed(sk);
tcp_openreq_fastfree(req);
BUG_TRAP(newsk->sk_state != TCP_SYN_RECV);
return NULL;
}
+
/*
* Socket option code for TCP.
*/
}
}
break;
-
+
default:
err = -ENOPROTOOPT;
break;
case TCP_QUICKACK:
val = !tp->ack.pingpong;
break;
+
default:
return -ENOPROTOOPT;
};
#include <linux/module.h>
#include <linux/sysctl.h>
#include <linux/workqueue.h>
+#include <linux/vs_limit.h>
+#include <linux/vs_socket.h>
#include <net/tcp.h>
#include <net/inet_common.h>
#include <net/xfrm.h>
unsigned long timeo;
if (req->retrans++ == 0)
- lopt->qlen_young--;
- timeo = min((TCP_TIMEOUT_INIT << req->retrans),
- TCP_RTO_MAX);
+ lopt->qlen_young--;
+ timeo = min((TCP_TIMEOUT_INIT << req->retrans), TCP_RTO_MAX);
req->expires = now + timeo;
reqp = &req->dl_next;
continue;
write_unlock(&tp->syn_wait_lock);
lopt->qlen--;
if (req->retrans == 0)
- lopt->qlen_young--;
+ lopt->qlen_young--;
tcp_openreq_free(req);
continue;
}
if (!ipv6_unicast_destination(skb))
goto drop;
+
/*
* There are no SYN attacks on IPv6, yet...
*/
if (sk_acceptq_is_full(sk) && tcp_synq_young(sk) > 1)
goto drop;
+
req = tcp_openreq_alloc();
if (req == NULL)
goto drop;
#include <linux/bitops.h>
#include <linux/mm.h>
#include <linux/types.h>
+#include <linux/vs_context.h>
+#include <linux/vs_network.h>
+#include <linux/vs_limit.h>
#include <linux/audit.h>
#include <linux/vs_context.h>
#include <linux/vs_network.h>
{
BUG_TRAP(!atomic_read(&sk->sk_rmem_alloc));
BUG_TRAP(!atomic_read(&sk->sk_wmem_alloc));
+ BUG_ON(sk->sk_nx_info);
+ BUG_ON(sk->sk_vx_info);
if (!sock_flag(sk, SOCK_DEAD)) {
printk("Attempt to release alive packet socket: %p\n", sk);
dst_release(skb->dst);
skb->dst = NULL;
- /* drop conntrack reference */
- nf_reset(skb);
-
spkt = (struct sockaddr_pkt*)skb->cb;
skb_push(skb, skb->data-skb->mac.raw);
dst_release(skb->dst);
skb->dst = NULL;
- /* drop conntrack reference */
- nf_reset(skb);
-
spin_lock(&sk->sk_receive_queue.lock);
po->stats.tp_packets++;
__skb_queue_tail(&sk->sk_receive_queue, skb);
}
#endif
+#warning MEF: figure out whether the following vserver net code is required by PlanetLab
+#if 0
+ clr_vx_info(&sk->sk_vx_info);
+ clr_nx_info(&sk->sk_nx_info);
+#endif
+
/*
* Now the socket is dead. No more input will appear.
*/
sk->sk_destruct = packet_sock_destruct;
atomic_inc(&packet_socks_nr);
+#warning MEF: figure out whether the following vserver net code is required by PlanetLab
+#if 0
+ set_vx_info(&sk->sk_vx_info, current->vx_info);
+ sk->sk_xid = vx_current_xid();
+ set_nx_info(&sk->sk_nx_info, current->nx_info);
+ sk->sk_nid = nx_current_nid();
+#endif
+
/*
* Attach a protocol block
*/
mntput(mnt);
}
+ vx_sock_dec(sk);
+ clr_vx_info(&sk->sk_vx_info);
+ clr_nx_info(&sk->sk_nx_info);
sock_put(sk);
/* ---- Socket is dead now and most probably destroyed ---- */
sock_init_data(sock,sk);
+ set_vx_info(&sk->sk_vx_info, current->vx_info);
+ sk->sk_xid = vx_current_xid();
+ vx_sock_inc(sk);
+ set_nx_info(&sk->sk_nx_info, current->nx_info);
+
sk->sk_write_space = unix_write_space;
sk->sk_max_ack_backlog = sysctl_unix_max_dgram_qlen;
sk->sk_destruct = unix_sock_destructor;
*/
mode = S_IFSOCK |
(SOCK_INODE(sock)->i_mode & ~current->fs->umask);
- err = vfs_mknod(nd.dentry->d_inode, dentry, mode, 0, NULL);
+ err = vfs_mknod(nd.dentry->d_inode, dentry, mode, 0);
if (err)
goto out_mknod_dput;
up(&nd.dentry->d_inode->i_sem);
# These are the kernels that are built IF the architecture allows it.
%define buildup 1
-%define buildsmp 1
-%define builduml 1
-%define buildxen 1
-%define builddoc 1
+%define buildsmp 0
+%define builduml 0
+%define buildsource 0
+%define builddoc 0
# Versions of various parts
#
%define nptl_conflicts SysVinit < 2.84-13, pam < 0.75-48, vixie-cron < 3.0.1-73, privoxy < 3.0.0-8, spamassassin < 2.44-4.8.x, cups < 1.1.17-13
-#
-# The ld.so.conf.d file we install uses syntax older ldconfig's don't grok.
-#
-
-# MEF commented out
-# %define xen_conflicts glibc < 2.3.5-1
-
#
# Packages that need to be installed before the kernel is, because the %post
# scripts use them.
ExclusiveOS: Linux
Provides: kernel = %{version}
Provides: kernel-drm = 4.3.0
-Provides: kernel-%{_target_cpu} = %{rpmversion}-%{release}
Prereq: %{kernel_prereq}
Conflicts: %{kernel_dot_org_conflicts}
Conflicts: %{package_conflicts}
# List the packages used during the kernel build
#
BuildPreReq: module-init-tools, patch >= 2.5.4, bash >= 2.03, sh-utils, tar
-BuildPreReq: bzip2, findutils, gzip, m4, perl, make >= 3.78, gnupg, diffutils
-#BuildRequires: gcc >= 3.4.2, binutils >= 2.12, redhat-rpm-config
+BuildPreReq: bzip2, findutils, gzip, m4, perl, make >= 3.78, gnupg
+#BuildPreReq: kernel-utils >= 1:2.4-12.1.142
BuildRequires: gcc >= 2.96-98, binutils >= 2.12, redhat-rpm-config
BuildConflicts: rhbuildsys(DiskFree) < 500Mb
BuildArchitectures: i686
of the operating system: memory allocation, process allocation, device
input and output, etc.
-%package devel
-Summary: Development package for building kernel modules to match the kernel.
-Group: System Environment/Kernel
-AutoReqProv: no
-Provides: kernel-devel-%{_target_cpu} = %{rpmversion}-%{release}
-Prereq: /usr/sbin/hardlink, /usr/bin/find
-
-%description devel
-This package provides kernel headers and makefiles sufficient to build modules
-against the kernel package.
+%package sourcecode
+Summary: The source code for the Linux kernel.
+Group: Development/System
+Prereq: fileutils
+Requires: make >= 3.78
+Requires: gcc >= 3.2
+Requires: /usr/bin/strip
+# for xconfig and gconfig
+Requires: qt-devel, gtk2-devel, readline-devel, ncurses-devel
+Provides: kernel-source
+Obsoletes: kernel-source <= 2.6.6
+
+%description sourcecode
+The kernel-sourcecode package contains the source code files for the Linux
+kernel. The source files can be used to build a custom kernel that is
+smaller by virtue of only including drivers for your particular hardware, if
+you are so inclined (and you know what you're doing). The customisation
+guide in the documentation describes in detail how to do this. This package
+is neither needed nor usable for building external kernel modules to link
+into the default operating system kernels.
%package doc
Summary: Various documentation bits found in the kernel source.
Group: Documentation
+%if !%{buildsource}
+Obsoletes: kernel-source <= 2.6.6
+Obsoletes: kernel-sourcecode <= 2.6.6
+%endif
%description doc
This package contains documentation files from the kernel
Group: System Environment/Kernel
Provides: kernel = %{version}
Provides: kernel-drm = 4.3.0
-Provides: kernel-%{_target_cpu} = %{rpmversion}-%{release}smp
Prereq: %{kernel_prereq}
Conflicts: %{kernel_dot_org_conflicts}
Conflicts: %{package_conflicts}
Install the kernel-smp package if your machine uses two or more CPUs.
-%package smp-devel
-Summary: Development package for building kernel modules to match the SMP kernel.
-Group: System Environment/Kernel
-Provides: kernel-smp-devel-%{_target_cpu} = %{rpmversion}-%{release}
-Provides: kernel-devel-%{_target_cpu} = %{rpmversion}-%{release}smp
-Provides: kernel-devel = %{rpmversion}-%{release}smp
-AutoReqProv: no
-Prereq: /usr/sbin/hardlink, /usr/bin/find
-
-%description smp-devel
-This package provides kernel headers and makefiles sufficient to build modules
-against the SMP kernel package.
-
-%package xenU
-Summary: The Linux kernel compiled for unprivileged Xen guest VMs
-
-Group: System Environment/Kernel
-Provides: kernel = %{version}
-Provides: kernel-%{_target_cpu} = %{rpmversion}-%{release}xenU
-Prereq: %{kernel_prereq}
-Conflicts: %{kernel_dot_org_conflicts}
-Conflicts: %{package_conflicts}
-Conflicts: %{nptl_conflicts}
-
-# MEF commented out
-# Conflicts: %{xen_conflicts}
-
-# We can't let RPM do the dependencies automatic because it'll then pick up
-# a correct but undesirable perl dependency from the module headers which
-# isn't required for the kernel proper to function
-AutoReqProv: no
-
-%description xenU
-This package includes a version of the Linux kernel which
-runs in Xen unprivileged guest VMs. This should be installed
-both inside the unprivileged guest (for the modules) and in
-the guest0 domain.
-
-%package xenU-devel
-Summary: Development package for building kernel modules to match the kernel.
-Group: System Environment/Kernel
-AutoReqProv: no
-Provides: kernel-xenU-devel-%{_target_cpu} = %{rpmversion}-%{release}
-Provides: kernel-devel-%{_target_cpu} = %{rpmversion}-%{release}xenU
-Provides: kernel-devel = %{rpmversion}-%{release}xenU
-Prereq: /usr/sbin/hardlink, /usr/bin/find
-
-%description xenU-devel
-This package provides kernel headers and makefiles sufficient to build modules
-against the kernel package.
-
%package uml
Summary: The Linux kernel compiled for use in user mode (User Mode Linux).
%description uml
This package includes a user mode version of the Linux kernel.
-%package uml-devel
-Summary: Development package for building kernel modules to match the UML kernel.
-Group: System Environment/Kernel
-Provides: kernel-uml-devel-%{_target_cpu} = %{rpmversion}-%{release}
-Provides: kernel-devel-%{_target_cpu} = %{rpmversion}-%{release}smp
-Provides: kernel-devel = %{rpmversion}-%{release}smp
-AutoReqProv: no
-Prereq: /usr/sbin/hardlink, /usr/bin/find
-
-%description uml-devel
-This package provides kernel headers and makefiles sufficient to build modules
-against the User Mode Linux kernel package.
-
-%package uml-modules
-Summary: The Linux kernel modules compiled for use in user mode (User Mode Linux).
-
-Group: System Environment/Kernel
-
-%description uml-modules
-This package includes a user mode version of the Linux kernel modules.
-
%package vserver
Summary: A placeholder RPM that provides kernel and kernel-drm
Group: System Environment/Kernel
Provides: kernel = %{version}
Provides: kernel-drm = 4.3.0
-Provides: kernel-%{_target_cpu} = %{rpmversion}-%{release}
%description vserver
VServers do not require and cannot use kernels, but some RPMs have
(e.g. tcpdump). This package installs no files but provides the
necessary dependencies to make rpm and yum happy.
-
-
%prep
-if [ ! -d kernel-%{kversion}/vanilla ]; then
-%setup -q -n %{name}-%{version} -c
-rm -f pax_global_header
-mv linux-%{kversion} vanilla
-else
- cd kernel-%{kversion}
-fi
-cd vanilla
+%setup -n linux-%{kversion}
# make sure the kernel has the sublevel we know it has. This looks weird
# but for -pre and -rc versions we need it since we only want to use
%build
BuildKernel() {
- # create a clean copy in BUILD/ (for backward compatibility with
- # other RPMs that bootstrap off of the kernel build)
- cd $RPM_BUILD_DIR
- rm -rf linux-%{kversion}$1
- cp -rl kernel-%{kversion}/vanilla linux-%{kversion}$1
- cd linux-%{kversion}$1
# Pick the right config file for the kernel we're building
- Arch=i386
- Target=%{make_target}
if [ -n "$1" ] ; then
Config=kernel-%{kversion}-%{_target_cpu}-$1-planetlab.config
- DevelDir=/usr/src/kernels/%{KVERREL}-$1-%{_target_cpu}
- DevelLink=/usr/src/kernels/%{KVERREL}$1-%{_target_cpu}
- # override ARCH in the case of UML or Xen
- if [ "$1" = "uml" ] ; then
- Arch=um
- Target=linux
- elif [ "$1" = "xenU" ] ; then
- Arch=xen
- fi
else
Config=kernel-%{kversion}-%{_target_cpu}-planetlab.config
- DevelDir=/usr/src/kernels/%{KVERREL}-%{_target_cpu}
- DevelLink=
fi
KernelVer=%{version}-%{release}$1
# make sure EXTRAVERSION says what we want it to say
perl -p -i -e "s/^EXTRAVERSION.*/EXTRAVERSION = -%{release}$1/" Makefile
+ # override ARCH in the case of UML
+ if [ "$1" = "uml" ] ; then
+ export ARCH=um
+ fi
+
# and now to start the build process
- make -s ARCH=$Arch mrproper
+ make -s mrproper
cp configs/$Config .config
- echo USING ARCH=$Arch
-
- make -s ARCH=$Arch nonint_oldconfig > /dev/null
- make -s ARCH=$Arch include/linux/version.h
+ make -s nonint_oldconfig > /dev/null
+ make -s include/linux/version.h
- make -s ARCH=$Arch %{?_smp_mflags} $Target
- make -s ARCH=$Arch %{?_smp_mflags} modules || exit 1
- make ARCH=$Arch buildcheck
+ make -s %{?_smp_mflags} %{make_target}
+ make -s %{?_smp_mflags} modules || exit 1
+ make buildcheck
# Start installing the results
-%if "%{_enable_debug_packages}" == "1"
mkdir -p $RPM_BUILD_ROOT/usr/lib/debug/boot
-%endif
mkdir -p $RPM_BUILD_ROOT/%{image_install_path}
+ install -m 644 System.map $RPM_BUILD_ROOT/usr/lib/debug/boot/System.map-$KernelVer
+ objdump -t vmlinux | grep ksymtab | cut -f2 | cut -d" " -f2 | cut -c11- | sort -u > exported
+ echo "_stext" >> exported
+ echo "_end" >> exported
+ touch $RPM_BUILD_ROOT/boot/System.map-$KernelVer
+ for i in `cat exported`
+ do
+ grep " $i\$" System.map >> $RPM_BUILD_ROOT/boot/System.map-$KernelVer || :
+ grep "tab_$i\$" System.map >> $RPM_BUILD_ROOT/boot/System.map-$KernelVer || :
+ grep "__crc_$i\$" System.map >> $RPM_BUILD_ROOT/boot/System.map-$KernelVer ||:
+ done
+ rm -f exported
+# install -m 644 init/kerntypes.o $RPM_BUILD_ROOT/boot/Kerntypes-$KernelVer
install -m 644 .config $RPM_BUILD_ROOT/boot/config-$KernelVer
- install -m 644 System.map $RPM_BUILD_ROOT/boot/System.map-$KernelVer
- if [ -f arch/$Arch/boot/bzImage ]; then
- cp arch/$Arch/boot/bzImage $RPM_BUILD_ROOT/%{image_install_path}/vmlinuz-$KernelVer
- fi
- if [ -f arch/$Arch/boot/zImage.stub ]; then
- cp arch/$Arch/boot/zImage.stub $RPM_BUILD_ROOT/%{image_install_path}/zImage.stub-$KernelVer
- fi
- if [ "$1" = "uml" ] ; then
- install -D -m 755 linux $RPM_BUILD_ROOT/%{_bindir}/linux
- fi
+ rm -f System.map
+ cp arch/*/boot/bzImage $RPM_BUILD_ROOT/%{image_install_path}/vmlinuz-$KernelVer
+
mkdir -p $RPM_BUILD_ROOT/lib/modules/$KernelVer
- make -s ARCH=$Arch INSTALL_MOD_PATH=$RPM_BUILD_ROOT modules_install KERNELRELEASE=$KernelVer
+ make -s INSTALL_MOD_PATH=$RPM_BUILD_ROOT modules_install KERNELRELEASE=$KernelVer
# And save the headers/makefiles etc for building modules against
#
mkdir -p $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
(cd $RPM_BUILD_ROOT/lib/modules/$KernelVer ; ln -s build source)
# first copy everything
- cp --parents `find -type f -name "Makefile*" -o -name "Kconfig*"` $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
+ cp --parents `find -type f -name Makefile -o -name "Kconfig*"` $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
cp Module.symvers $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
- if [ "$1" = "uml" ] ; then
- cp --parents -a `find arch/um -name include` $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
- fi
# then drop all but the needed Makefiles/Kconfig files
rm -rf $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/Documentation
rm -rf $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/scripts
cp arch/%{_arch}/kernel/asm-offsets.s $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/arch/%{_arch}/kernel || :
cp .config $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
cp -a scripts $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
- if [ -d arch/%{_arch}/scripts ]; then
- cp -a arch/%{_arch}/scripts $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/arch/%{_arch} || :
- fi
- if [ -f arch/%{_arch}/*lds ]; then
- cp -a arch/%{_arch}/*lds $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/arch/%{_arch}/ || :
- fi
+ cp -a arch/%{_arch}/scripts $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/arch/%{_arch} || :
+ cp -a arch/%{_arch}/*lds $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/arch/%{_arch}/ || :
rm -f $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/scripts/*.o
rm -f $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/scripts/*/*.o
mkdir -p $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/include
cd include
cp -a acpi config linux math-emu media net pcmcia rxrpc scsi sound video asm asm-generic $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/include
-%if %{buildxen}
- cp -a asm-xen $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/include
-%endif
- if [ "$1" = "uml" ] ; then
- cd asm
- cp -a `readlink arch` $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/include
- cd ..
- fi
cp -a `readlink asm` $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/include
# Make sure the Makefile and version.h have a matching timestamp so that
# external modules can be built
#
# save the vmlinux file for kernel debugging into the kernel-debuginfo rpm
#
-%if "%{_enable_debug_packages}" == "1"
mkdir -p $RPM_BUILD_ROOT/usr/lib/debug/lib/modules/$KernelVer
cp vmlinux $RPM_BUILD_ROOT/usr/lib/debug/lib/modules/$KernelVer
-%endif
# mark modules executable so that strip-to-file can strip them
find $RPM_BUILD_ROOT/lib/modules/$KernelVer -name "*.ko" -type f | xargs chmod u+x
+ # detect missing or incorrect license tags
+ for i in `find $RPM_BUILD_ROOT/lib/modules/$KernelVer -name "*.ko" ` ; do echo -n "$i " ; /sbin/modinfo -l $i >> modinfo ; done
+ cat modinfo | grep -v "^GPL" | grep -v "^Dual BSD/GPL" | grep -v "^Dual MPL/GPL" | grep -v "^GPL and additional rights" | grep -v "^GPL v2" && exit 1
+ rm -f modinfo
# remove files that will be auto generated by depmod at rpm -i time
rm -f $RPM_BUILD_ROOT/lib/modules/$KernelVer/modules.*
- # Move the devel headers out of the root file system
- mkdir -p $RPM_BUILD_ROOT/usr/src/kernels
- mv $RPM_BUILD_ROOT/lib/modules/$KernelVer/build $RPM_BUILD_ROOT/$DevelDir
- ln -sf ../../..$DevelDir $RPM_BUILD_ROOT/lib/modules/$KernelVer/build
- [ -z "$DevelLink" ] || ln -sf `basename $DevelDir` $RPM_BUILD_ROOT/$DevelLink
}
###
BuildKernel uml
%endif
-%if %{buildxen}
-BuildKernel xenU
-%endif
-
-
###
### install
###
%install
-cd vanilla
-
-%if %{buildxen}
-mkdir -p $RPM_BUILD_ROOT/etc/ld.so.conf.d
-rm -f $RPM_BUILD_ROOT/etc/ld.so.conf.d/kernelcap-%{KVERREL}.conf
-cat > $RPM_BUILD_ROOT/etc/ld.so.conf.d/kernelcap-%{KVERREL}.conf <<\EOF
-# This directive teaches ldconfig to search in nosegneg subdirectories
-# and cache the DSOs there with extra bit 0 set in their hwcap match
-# fields. In Xen guest kernels, the vDSO tells the dynamic linker to
-# search in nosegneg subdirectories and to match this extra hwcap bit
-# in the ld.so.cache file.
-hwcap 0 nosegneg
-EOF
-chmod 444 $RPM_BUILD_ROOT/etc/ld.so.conf.d/kernelcap-%{KVERREL}.conf
-%endif
+# architectures that don't get kernel-source (i586/i686/athlon) don't need
+# much of an install because the build phase already copied the needed files
%if %{builddoc}
mkdir -p $RPM_BUILD_ROOT/usr/share/doc/kernel-doc-%{kversion}/Documentation
tar cf - Documentation | tar xf - -C $RPM_BUILD_ROOT/usr/share/doc/kernel-doc-%{kversion}
%endif
+%if %{buildsource}
+
+mkdir -p $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}
+chmod -R a+r *
+
+# clean up the source tree so that it is ready for users to build their own
+# kernel
+make -s mrproper
+# copy the source over
+tar cf - . | tar xf - -C $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}
+
+# set the EXTRAVERSION to <version>custom, so that people who follow a kernel building howto
+# don't accidentally overwrite their currently working moduleset and hose
+# their system
+perl -p -i -e "s/^EXTRAVERSION.*/EXTRAVERSION = -%{release}custom/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/Makefile
+
+# some config options may be appropriate for an rpm kernel build but are less so for custom user builds;
+# change those to defaults that are more appropriate for people who build their own kernels.
+perl -p -i -e "s/^CONFIG_DEBUG_INFO.*/# CONFIG_DEBUG_INFO is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+perl -p -i -e "s/^.*CONFIG_DEBUG_PAGEALLOC.*/# CONFIG_DEBUG_PAGEALLOC is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+perl -p -i -e "s/^.*CONFIG_DEBUG_SLAB.*/# CONFIG_DEBUG_SLAB is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+perl -p -i -e "s/^.*CONFIG_DEBUG_SPINLOCK.*/# CONFIG_DEBUG_SPINLOCK is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+perl -p -i -e "s/^.*CONFIG_DEBUG_HIGHMEM.*/# CONFIG_DEBUG_HIGHMEM is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+perl -p -i -e "s/^.*CONFIG_MODULE_SIG.*/# CONFIG_MODULE_SIG is not set/" $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}/configs/*
+
+install -m 644 %{SOURCE10} $RPM_BUILD_ROOT/usr/src/linux-%{KVERREL}
+%endif
+
###
### clean
###
touch $rootdev
fi
fi
-
-[ ! -x /usr/sbin/module_upgrade ] || /usr/sbin/module_upgrade
-#[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --package kernel --mkinitrd --depmod --install %{KVERREL}
-# Older modutils do not support --package option
[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --mkinitrd --depmod --install %{KVERREL}
-
-# remove fake handle
if [ -n "$fake_root_lvm" ]; then
rm -f $rootdev
fi
-
-# make some useful links
-pushd /boot > /dev/null ; {
- ln -sf config-%{KVERREL} config
- ln -sf initrd-%{KVERREL}.img initrd-boot
- ln -sf vmlinuz-%{KVERREL} kernel-boot
-}
-popd > /dev/null
-
-# ask for a reboot
-mkdir -p /etc/planetlab
-touch /etc/planetlab/update-reboot
-
-%post devel
if [ -x /usr/sbin/hardlink ] ; then
-pushd /usr/src/kernels/%{KVERREL}-%{_target_cpu} > /dev/null
-/usr/bin/find . -type f | while read f; do hardlink -c /usr/src/kernels/*FC*/$f $f ; done
+pushd /lib/modules/%{KVERREL}/build > /dev/null ; {
+ cd /lib/modules/%{KVERREL}/build
+ find . -type f | while read f; do hardlink -c /lib/modules/*/build/$f $f ; done
+}
popd > /dev/null
fi
-%post smp
-# trick mkinitrd in case the current environment does not have device mapper
-rootdev=$(awk '/^[ \t]*[^#]/ { if ($2 == "/") { print $1; }}' /etc/fstab)
-if echo $rootdev |grep -q /dev/mapper 2>/dev/null ; then
- if [ ! -f $rootdev ]; then
- fake_root_lvm=1
- mkdir -p $(dirname $rootdev)
- touch $rootdev
- fi
-fi
-
-[ ! -x /usr/sbin/module_upgrade ] || /usr/sbin/module_upgrade
-#[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --package kernel-smp --mkinitrd --depmod --install %{KVERREL}smp
-# Older modutils do not support --package option
-[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --mkinitrd --depmod --install %{KVERREL}smp
-
-# remove fake handle
-if [ -n "$fake_root_lvm" ]; then
- rm -f $rootdev
-fi
-
# make some useful links
pushd /boot > /dev/null ; {
+ ln -sf System.map-%{KVERREL} System.map
+# ln -sf Kerntypes-%{KVERREL} Kerntypes
ln -sf config-%{KVERREL} config
ln -sf initrd-%{KVERREL}.img initrd-boot
ln -sf vmlinuz-%{KVERREL} kernel-boot
mkdir -p /etc/planetlab
touch /etc/planetlab/update-reboot
-%post smp-devel
-if [ -x /usr/sbin/hardlink ] ; then
-pushd /usr/src/kernels/%{KVERREL}-smp-%{_target_cpu} > /dev/null
-/usr/bin/find . -type f | while read f; do hardlink -c /usr/src/kernels/*FC*/$f $f ; done
-popd > /dev/null
-fi
-
-%post xenU
-[ ! -x /usr/sbin/module_upgrade ] || /usr/sbin/module_upgrade
-[ ! -x /sbin/ldconfig ] || /sbin/ldconfig -X
-
-%post xenU-devel
+%post smp
+[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --mkinitrd --depmod --install %{KVERREL}smp
if [ -x /usr/sbin/hardlink ] ; then
-pushd /usr/src/kernels/%{KVERREL}-xenU-%{_target_cpu} > /dev/null
-/usr/bin/find . -type f | while read f; do hardlink -c /usr/src/kernels/*FC*/$f $f ; done
+pushd /lib/modules/%{KVERREL}smp/build > /dev/null ; {
+ cd /lib/modules/%{KVERREL}smp/build
+ find . -type f | while read f; do hardlink -c /lib/modules/*/build/$f $f ; done
+}
popd > /dev/null
fi
-
%preun
/sbin/modprobe loop 2> /dev/null > /dev/null || :
[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --rminitrd --rmmoddep --remove %{KVERREL}
/sbin/modprobe loop 2> /dev/null > /dev/null || :
[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --rminitrd --rmmoddep --remove %{KVERREL}smp
-%preun xenU
-/sbin/modprobe loop 2> /dev/null > /dev/null || :
-[ -x /sbin/new-kernel-pkg ] && /sbin/new-kernel-pkg --rmmoddep --remove %{KVERREL}xenU
-
-
###
### file lists
###
%files
%defattr(-,root,root)
/%{image_install_path}/*-%{KVERREL}
+#/boot/Kerntypes-%{KVERREL}
/boot/System.map-%{KVERREL}
/boot/config-%{KVERREL}
%dir /lib/modules/%{KVERREL}
/lib/modules/%{KVERREL}/kernel
-/lib/modules/%{KVERREL}/build
+%verify(not mtime) /lib/modules/%{KVERREL}/build
/lib/modules/%{KVERREL}/source
-
-%files devel
-%defattr(-,root,root)
-%verify(not mtime) /usr/src/kernels/%{KVERREL}-%{_target_cpu}
%endif
%if %{buildsmp}
%files smp
%defattr(-,root,root)
/%{image_install_path}/*-%{KVERREL}smp
+#/boot/Kerntypes-%{KVERREL}smp
/boot/System.map-%{KVERREL}smp
/boot/config-%{KVERREL}smp
%dir /lib/modules/%{KVERREL}smp
/lib/modules/%{KVERREL}smp/kernel
-/lib/modules/%{KVERREL}smp/build
+%verify(not mtime) /lib/modules/%{KVERREL}smp/build
/lib/modules/%{KVERREL}smp/source
-
-%files smp-devel
-%defattr(-,root,root)
-%verify(not mtime) /usr/src/kernels/%{KVERREL}-smp-%{_target_cpu}
-/usr/src/kernels/%{KVERREL}smp-%{_target_cpu}
%endif
%if %{builduml}
%files uml
%defattr(-,root,root)
-%{_bindir}/linux
-%files uml-devel
-%defattr(-,root,root)
-%verify(not mtime) /usr/src/kernels/%{KVERREL}-uml-%{_target_cpu}
-/usr/src/kernels/%{KVERREL}uml-%{_target_cpu}
-
-%files uml-modules
-%defattr(-,root,root)
-/boot/System.map-%{KVERREL}uml
-/boot/config-%{KVERREL}uml
-%dir /lib/modules/%{KVERREL}uml
-/lib/modules/%{KVERREL}uml/kernel
-%verify(not mtime) /lib/modules/%{KVERREL}uml/build
-/lib/modules/%{KVERREL}uml/source
%endif
-%if %{buildxen}
-%files xenU
-%defattr(-,root,root)
-/%{image_install_path}/*-%{KVERREL}xenU
-/boot/System.map-%{KVERREL}xenU
-/boot/config-%{KVERREL}xenU
-%dir /lib/modules/%{KVERREL}xenU
-/lib/modules/%{KVERREL}xenU/kernel
-%verify(not mtime) /lib/modules/%{KVERREL}xenU/build
-/lib/modules/%{KVERREL}xenU/source
-/etc/ld.so.conf.d/kernelcap-%{KVERREL}.conf
-
-%files xenU-devel
-%defattr(-,root,root)
-%verify(not mtime) /usr/src/kernels/%{KVERREL}-xenU-%{_target_cpu}
-/usr/src/kernels/%{KVERREL}xenU-%{_target_cpu}
-%endif
+# only some architecture builds need kernel-source and kernel-doc
-%files vserver
+%if %{buildsource}
+%files sourcecode
%defattr(-,root,root)
-# no files
-
+/usr/src/linux-%{KVERREL}/
+%endif
-# only some architecture builds need kernel-doc
%if %{builddoc}
%files doc
%defattr(-,root,root)
-%{_datadir}/doc/kernel-doc-%{kversion}/Documentation/*
-%dir %{_datadir}/doc/kernel-doc-%{kversion}/Documentation
-%dir %{_datadir}/doc/kernel-doc-%{kversion}
+/usr/share/doc/kernel-doc-%{kversion}/Documentation/*
%endif
-%changelog
-* Fri Jul 15 2005 Dave Jones <davej@redhat.com>
-- Include a number of patches likely to show up in 2.6.12.3
-
-* Thu Jul 14 2005 Dave Jones <davej@redhat.com>
-- Add Appletouch support.
-
-* Wed Jul 13 2005 David Woodhouse <dwmw2@redhat.com>
-- Audit updates. In particular, don't printk audit messages that
- are passed from userspace when auditing is disabled.
-
-* Tue Jul 12 2005 Dave Jones <davej@redhat.com>
-- Fix up several reports of CD's causing crashes.
-- Make -p port arg of rpc.nfsd work.
-- Work around a usbmon deficiency.
-- Fix connection tracking bug with bridging. (#162438)
-
-* Mon Jul 11 2005 Dave Jones <davej@redhat.com>
-- Fix up locking in piix IDE driver whilst tuning chipset.
-
-* Tue Jul 5 2005 Dave Jones <davej@redhat.com>
-- Fixup ACPI IRQ routing bug that prevented booting for some folks.
-- Reenable ISA I2C drivers for x86-64.
-- Bump requirement on mkinitrd to something newer (#160492)
-
-* Wed Jun 29 2005 Dave Jones <davej@redhat.com>
-- 2.6.12.2
-
-* Mon Jun 27 2005 Dave Jones <davej@redhat.com>
-- Disable multipath caches. (#161168)
-- Reenable AMD756 I2C driver for x86-64. (#159609)
-- Add more IBM r40e BIOS's to the C2/C3 blacklist.
-
-* Thu Jun 23 2005 Dave Jones <davej@redhat.com>
-- Make orinoco driver suck less.
- (Scanning/roaming/ethtool support).
-- Exec-shield randomisation fix.
-- pwc driver warning fix.
-- Prevent potential oops in tux with symlinks. (#160219)
-
-* Wed Jun 22 2005 Dave Jones <davej@redhat.com>
-- 2.6.12.1
- - Clean up subthread exec (CAN-2005-1913)
- - ia64 ptrace + sigrestore_context (CAN-2005-1761)
-
-* Wed Jun 22 2005 David Woodhouse <dwmw2@redhat.com>
-- Update audit support
-
-* Mon Jun 20 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.12
- - Temporarily drop Alans IDE fixes whilst they get redone.
-- Enable userspace queueing of ipv6 packets.
-
-* Tue Jun 7 2005 Dave Jones <davej@redhat.com>
-- Drop recent b44 changes which broke some setups.
-
-* Wed Jun 1 2005 Dave Jones <davej@redhat.com>
-- Fix up ALI IDE regression. (#157175)
-
-* Mon May 30 2005 Dave Jones <davej@redhat.com>
-- Fix up VIA IRQ quirk.
-
-* Sun May 29 2005 Dave Jones <davej@redhat.com>
-- Fix slab corruption in firewire (#158424)
-
-* Fri May 27 2005 Dave Jones <davej@redhat.com>
-- remove non-cleanroom pwc driver compression.
-- Fix unintialised value in single bit error detector. (#158825)
-
-* Wed May 25 2005 Dave Jones <davej@redhat.com>
-- Disable TPM driver, it breaks 8139 driver.
-- Revert to previous version of ipw2x00 drivers.
- The newer ones sadly brought too many problems this close to
- the release. I'll look at updating them again for an update.
-- Update to 2.6.12rc5
- Fix potential local DoS. 1-2 other small fixes.
-- Tweak to fix up some vdso arithmetic.
-- Disable sysenter again for now.
-
-* Wed May 25 2005 David Woodhouse <dwmw2@redhat.com>
-- Turn off CONFIG_ISA on PPC again. It makes some Macs unhappy (#149200)
-- Make Speedtouch DSL modem resync automatically
-
-* Tue May 24 2005 Dave Jones <davej@redhat.com>
-- Update various cpufreq drivers.
-- 2.6.12-rc4-git8
- kobject ordering, tg3 fixes, ppc32 ipic fix,
- ppc64 powermac smp fix. token-ring fixes,
- TCP fix. ipv6 fix.
-- Disable slab debugging.
-
-* Mon May 23 2005 Dave Jones <davej@redhat.com>
-- Add extra id to SATA Sil driver. (#155748)
-- Fix oops on rmmod of lanai & ms558 drivers when no hardware present.
-
-* Mon May 23 2005 Dave Jones <davej@redhat.com>
-- Fix double unlock of spinlock on tulip. (#158522)
-
-* Mon May 23 2005 David Woodhouse <dwmw2@redhat.com>
-- audit updates: log serial # in user messages, escape comm= in syscalls
-
-* Mon May 23 2005 Dave Jones <davej@redhat.com>
-- 2.6.12-rc4-git6
- MMC update, reiserfs fixes, AIO fix.
-- Fix absolute symlink in -devel (#158582)
-- 2.6.12-rc4-git7
- PPC64 & i2c fixes
-- Fix another divide by zero in ipw2100 (#158406)
-- Fix dir ownership in kernel-doc rpm (#158478)
-
-* Sun May 22 2005 Dave Jones <davej@redhat.com>
-- Fix divide by zero in ipw2100 driver. (#158406)
-- 2.6.12-rc4-git5
- More x86-64 updates, Further pktcdvd frobbing,
- yet more dvb updates, x86(64) ioremap fixes,
- ppc updates, IPMI sysfs support (reverted for now due to breakage),
- various SCSI fixes (aix7xxx, spi transport), vmalloc improvements
-
-* Sat May 21 2005 David Woodhouse <dwmw2@redhat.com>
-- Fix oops in avc_audit() (#158377)
-- Include serial numbers in non-syscall audit messages
-
-* Sat May 21 2005 Bill Nottingham <notting@redhat.com>
-- bump ipw2200 conflict
-
-* Sat May 21 2005 Dave Jones <davej@redhat.com> [2.6.11-1.1334_FC4]
-- driver core: restore event order for device_add()
-
-* Sat May 21 2005 David Woodhouse <dwmw2@redhat.com>
-- More audit updates. Including a fix for AVC_USER messages.
-
-* Fri May 20 2005 Dave Jones <davej@redhat.com>
-- 2.6.12-rc4-git4
- networking fixes (netlink, pkt_sched, ipsec, netfilter,
- ip_vs, af_unix, ipv4/6, xfrm). TG3 driver improvements.
-
-* Thu May 19 2005 Dave Jones <davej@redhat.com> [2.6.11-1.1327_FC4]
-- 2.6.12-rc4-git3
- Further fixing to raw driver. More DVB updates,
- driver model updates, power management improvements,
- ext3 fixes.
-- Radeon on thinkpad backlight power-management goodness.
- (Peter Jones owes me two tacos).
-- Fix ieee1394 smp init.
-
-* Thu May 19 2005 Rik van Riel <riel@redhat.com>
-- Xen: disable TLS warning (#156414)
-
-* Thu May 19 2005 David Woodhouse <dwmw2@redhat.com>
-- Update audit patches
-
-* Thu May 19 2005 Dave Jones <davej@redhat.com> [2.6.11-1.1325_FC4]
-- Fix up missing symbols in ipw2200 driver.
-- Reenable debugfs / usbmon. SELinux seems to cope ok now.
- (Needs selinux-targeted-policy >= 1.23.16-1)
-
-* Wed May 18 2005 Dave Jones <davej@redhat.com>
-- Fix up some warnings in the IDE patches.
-- 2.6.12-rc4-git2
- Further pktcdvd fixing, DVB update, Lots of x86-64 updates,
- ptrace fixes, ieee1394 changes, input layer tweaks,
- md layer fixes, PCI hotplug improvements, PCMCIA fixes,
- libata fixes, serial layer, usb core, usbnet, VM fixes,
- SELinux tweaks.
-- Update ipw2100 driver to 1.1.0
-- Update ipw2200 driver to 1.0.4 (#158073)
-
-* Tue May 17 2005 Dave Jones <davej@redhat.com>
-- 2.6.12-rc4-git1
- ARM, ioctl security fixes, mmc driver update,
- ibm_emac & tulip netdriver fixes, serial updates
- ELF loader security fix.
-
-* Mon May 16 2005 Rik van Riel <riel@redhat.com>
-- enable Xen again (not tested yet)
-- fix a typo in the EXPORT_SYMBOL patch
-
-* Sat May 14 2005 Dave Jones <davej@redhat.com>
-- Update E1000 driver from netdev-2.6 tree.
-- Add some missing EXPORT_SYMBOLs.
-
-* Fri May 13 2005 Dave Jones <davej@redhat.com>
-- Bump maximum supported CPUs on x86-64 to 32.
-- Tickle the NMI watchdog when we're doing serial writes.
-- SCSI CAM geometry fix.
-- Slab debug single-bit error improvement.
-
-* Thu May 12 2005 David Woodhouse <dwmw2@redhat.com>
-- Enable CONFIG_ISA on ppc32 to make the RS/6000 user happy.
-- Update audit patches
-
-* Wed May 11 2005 Dave Jones <davej@redhat.com>
-- Add Ingo's patch to detect soft lockups.
-- Thread exits silently via __RESTORE_ALL exception for iret. (#154369)
-
-* Wed May 11 2005 David Woodhouse <dwmw2@redhat.com>
-- Import post-rc4 audit fixes from git, including ppc syscall auditing
-
-* Wed May 11 2005 Dave Jones <davej@redhat.com>
-- Revert NMI watchdog changes.
-
-* Tue May 10 2005 Dave Jones <davej@redhat.com>
-- Enable PNP on x86-64
-
-* Tue May 10 2005 Jeremy Katz <katzj@redhat.com>
-- make other -devel packages provide kernel-devel so they get
- installed instead of upgraded (#155988)
-
-* Mon May 9 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.12-rc4
- | Xen builds are temporarily disabled again.
-- Conflict if old version of ipw firmware is present.
-
-* Fri May 6 2005 Dave Jones <davej@redhat.com>
-- Add PCI ID for new sundance driver. (#156859)
-
-* Thu May 5 2005 David Woodhouse <dwmw2@redhat.com>
-- Import audit fixes from upstream
-
-* Wed May 4 2005 Jeremy Katz <katzj@redhat.com>
-- enable radeonfb and agp on ppc64 to fix X on the G5
-
-* Tue May 3 2005 Dave Jones <davej@redhat.com>
-- Disable usbmon/debugfs again for now until SELinux policy is fixed.
-
-* Mon May 2 2005 David Woodhouse <dwmw2@redhat.com>
-- Make kallsyms include platform-specific symbols
-- Fix might_sleep warning in pbook clock-spreading fix
-
-* Sun May 1 2005 Dave Jones <davej@redhat.com>
-- Fix yesterdays IDE fixes.
-- Blacklist another brainless SCSI scanner. (#155457)
-
-* Sun May 1 2005 David Woodhouse <dwmw2@redhat.com>
-- Fix EHCI port power switching
-
-* Sun May 1 2005 Dave Jones <davej@redhat.com>
-- Enable usbmon & debugfs. (#156489)
-
-* Sat Apr 30 2005 Dave Jones <davej@redhat.com>
-- Numerous IDE layer fixes from Alan Cox.
-- Kill off some stupid messages from the input layer.
-
-* Fri Apr 29 2005 Roland McGrath <roland@redhat.com>
-- Fix the 32bit emulation on x86-64 segfaults.
-
-* Wed Apr 27 2005 Dave Jones <davej@redhat.com>
-- Hopefully fix the random reboots some folks saw on x86-64.
-
-* Wed Apr 27 2005 Jeremy Katz <katzj@redhat.com>
-- fix prereqs for -devel packages
-
-* Wed Apr 27 2005 Rik van Riel <riel@redhat.com>
-- Fix up the vdso stuff so kernel-xen* compile again
-- Import upstream bugfix so xenU domains can be started again
-
-* Tue Apr 26 2005 Dave Jones <davej@redhat.com>
-- Fix up the vdso again, which broke on the last rebase to -rc3
-- Fix the put_user() fix. (#155999)
-
-* Mon Apr 25 2005 Dave Jones <davej@redhat.com>
-- Fix x86-64 put_user()
-- Fix serio oops.
-- Fix ipv6_skip_exthdr() invocation causing OOPS.
-- Fix up some permissions on some /proc files.
-- Support PATA drives on Promise SATA. (#147303)
-
-* Mon Apr 25 2005 Rik van Riel <riel@redhat.com>
-- upgrade to the latest version of xenolinux patches
-- reenable xen (it boots, ship it!)
-
-* Sat Apr 23 2005 David Woodhouse <dwmw2@redhat.com>
-- Enable adt746x and windtunnel thermal modules
-- Disable clock spreading on certain pbooks before sleep
-- Sound support for Mac Mini
-
-* Fri Apr 22 2005 Dave Jones <davej@redhat.com>
-- Reenable i2c-viapro on x86-64.
-
-* Fri Apr 22 2005 Dave Jones <davej@redhat.com>
-- Don't build powernow-k6 on anything other than 586 kernels.
-- Temporarily disable Xen again.
-
-* Wed Apr 20 2005 Dave Jones <davej@redhat.com>
-- 2.6.12rc3
-
-* Wed Apr 20 2005 Dave Jones <davej@redhat.com>
-- Adjust struct dentry 'padding' based on 64bit'ness.
-* Tue Apr 19 2005 Dave Jones <davej@redhat.com>
-- Print stack trace when we panic.
- Might give more clues for some of the wierd panics being seen right now.
-- Blacklist another 'No C2/C3 states' Thinkpad R40e BIOS. (#155236)
-
-* Mon Apr 18 2005 Dave Jones <davej@redhat.com>
-- Make ISDN ICN driver not oops when probed with no hardware present.
-- Add missing MODULE_LICENSE to mac_modes.ko
-
-* Sat Apr 16 2005 Dave Jones <davej@redhat.com>
-- Make some i2c drivers arch dependant.
-- Make multimedia buttons on Dell inspiron 8200 work. (#126148)
-- Add diffutils buildreq (#155121)
-
-* Thu Apr 14 2005 Dave Jones <davej@redhat.com>
-- Build DRM modular. (#154769)
-
-* Wed Apr 13 2005 Rik van Riel <riel@redhat.com>
-- fix up Xen for 2.6.12-rc2
-- drop arch/xen/i386/signal.c, thanks to Roland's vdso patch (yay!)
-- reenable xen compile - this kernel test boots on my system
-
-* Tue Apr 12 2005 Dave Jones <davej@redhat.com>
-- Further vdso work from Roland.
-
-* Mon Apr 11 2005 David Woodhouse <dwmw2@redhat.com>
-- Disable PPC cpufreq/sleep patches which make sleep less reliable
-- Add TIMEOUT to hotplug environment when requesting firmware (#153993)
-
-* Sun Apr 10 2005 Dave Jones <davej@redhat.com>
-- Integrate Roland McGrath's changes to make exec-shield
- and vdso play nicely together.
-
-* Fri Apr 8 2005 Dave Jones <davej@redhat.com>
-- Disable Longhaul driver (again).
-
-* Wed Apr 6 2005 Dave Jones <davej@redhat.com>
-- 2.6.12rc2
- - netdump/netconsole currently broken.
- - Xen temporarily disabled.
-
-* Fri Apr 1 2005 Dave Jones <davej@redhat.com>
-- Make the CFQ elevator the default again.
-
-* Thu Mar 31 2005 Rik van Riel <riel@redhat.com>
-- upgrade to new upstream Xen code, twice
-- for performance reasons, disable CONFIG_DEBUG_PAGEALLOC for FC4t2
-
-* Wed Mar 30 2005 Rik van Riel <riel@redhat.com>
-- fix Xen kernel compilation (pci, page table, put_user, execshield, ...)
-- reenable Xen kernel compilation
-
-* Tue Mar 29 2005 Rik van Riel <riel@redhat.com>
-- apply Xen patches again (they don't compile yet, though)
-- Use uname in kernel-devel directories (#145914)
-- add uname-based kernel-devel provisions (#152357)
-- make sure /usr/share/doc/kernel-doc-%%{kversion} is owned by a
- package, so it will get removed again on uninstall/upgrade (#130667)
-
-* Mon Mar 28 2005 Dave Jones <davej@redhat.com>
-- Don't generate debuginfo files if %%_enable_debug_packages isnt set. (#152268)
+%files vserver
+%defattr(-,root,root)
+# no files
+%changelog
* Sun Mar 27 2005 Dave Jones <davej@redhat.com>
-- 2.6.12rc1-bk2
-- Disable NVidia FB driver for time being, it isn't stable.
-
-* Thu Mar 24 2005 Dave Jones <davej@redhat.com>
-- rebuild
+- Catch up with all recent security issues.
+ - CAN-2005-0210 : dst leak
+ - CAN-2005-0384 : ppp dos
+ - CAN-2005-0531 : Sign handling issues.
+ - CAN-2005-0400 : EXT2 information leak.
+ - CAN-2005-0449 : Remote oops.
+ - CAN-2005-0736 : Epoll overflow
+ - CAN-2005-0749 : ELF loader may kfree wrong memory.
+ - CAN-2005-0750 : Missing range checking in bluetooth
+ - CAN-2005-0767 : drm race in radeon
+ - CAN-2005-0815 : Corrupt isofs images could cause oops.
* Tue Mar 22 2005 Dave Jones <davej@redhat.com>
-- Fix several instances of swapped arguments to memset()
-- 2.6.12rc1-bk1
-
-* Fri Mar 18 2005 Dave Jones <davej@redhat.com>
-- kjournald release race. (#146344)
-- 2.6.12rc1
-
-* Thu Mar 17 2005 Rik van Riel <riel@redhat.com>
-- upgrade to latest upstream Xen code
-
-* Tue Mar 15 2005 Rik van Riel <riel@redhat.com>
-- add Provides: headers for external kernel modules (#149249)
-- move build & source symlinks from kernel-*-devel to kernel-* (#149210)
-- fix xen0 and xenU devel %%post scripts to use /usr/src/kernels (#149210)
-
-* Thu Mar 10 2005 Dave Jones <davej@redhat.com>
-- Reenable advansys driver for x86
-
-* Tue Mar 8 2005 Dave Jones <davej@redhat.com>
-- Change SELinux execute-related permission checking. (#149819)
-
-* Sun Mar 6 2005 Dave Jones <davej@redhat.com>
-- Forward port some FC3 patches that got lost.
-
-* Fri Mar 4 2005 Dave Jones <davej@redhat.com>
-- Fix up ACPI vs keyboard controller problem.
-- Fix up Altivec usage on PPC/PPC64.
-
-* Fri Mar 4 2005 Dave Jones <davej@redhat.com>
-- Finger the programs that try to read from /dev/mem.
-- Improve spinlock debugging a little.
-
-* Thu Mar 3 2005 Dave Jones <davej@redhat.com>
-- Fix up the unresolved symbols problem.
-
-* Thu Mar 3 2005 Rik van Riel <riel@redhat.com>
-- upgrade to new Xen snapshot (requires new xen RPM, too)
-
-* Wed Mar 2 2005 Dave Jones <davej@redhat.com>
-- 2.6.11
-
-* Tue Mar 1 2005 David Woodhouse <dwmw2@redhat.com>
-- Building is nice. Booting would be better. Work around GCC -Os bug which
- which makes the PPC kernel die when extracting its initramfs. (#150020)
-- Update include/linux/compiler-gcc+.h
-
-* Tue Mar 1 2005 Dave Jones <davej@redhat.com>
-- 802.11b/ipw2100/ipw2200 update.
-- 2.6.11-rc5-bk4
-
-* Tue Mar 1 2005 David Woodhouse <dwmw2@redhat.com>
-- Fix ppc/ppc64/ppc64iseries builds for gcc 4.0
-- Fix Xen build too
-
-* Mon Feb 28 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc5-bk3
-- Various compile fixes for building with gcc-4.0
-
-* Sat Feb 26 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc5-bk1
-
-* Fri Feb 25 2005 Dave Jones <davej@redhat.com>
-- Hopefully fix the zillion unresolved symbols. (#149758)
+- Fix swapped parameters to memset in ieee802.11 code.
* Thu Feb 24 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc5
-
-* Wed Feb 23 2005 Rik van Riel <riel@redhat.com>
-- get rid of unknown symbols in kernel-xen0 (#149495)
+- Use old scheme first when probing USB. (#145273)
* Wed Feb 23 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk11
+- Try as you may, there's no escape from crap SCSI hardware. (#149402)
* Mon Feb 21 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk9
-
-* Sat Feb 19 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk7
-
-* Sat Feb 19 2005 Rik van Riel <riel@redhat.com>
-- upgrade to newer Xen code, needs xen-20050218 to run
-
-* Sat Feb 19 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk6
-
-* Fri Feb 18 2005 David Woodhouse <dwmw2@redhat.com>
-- Add SMP kernel for PPC32
-
-* Fri Feb 18 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk5
+- Disable some experimental USB EHCI features.
* Tue Feb 15 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk3
-
-* Mon Feb 14 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk2
-
-* Sun Feb 13 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4-bk1
-
-* Sat Feb 12 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc4
-
-* Fri Feb 11 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk8
-
-* Thu Feb 10 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk7
+- Fix bio leak in md layer.
-* Wed Feb 9 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk6
+* Wed Feb 9 2005 Dave Jones <davej@redhat.com> [2.6.10-1.766_FC3, 2.6.10-1.14_FC2]
+- Backport some exec-shield fixes from devel/ branch.
+- Scan all SCSI LUNs by default.
+ Theoretically, some devices may hang when being probed, though
+ there should be few enough of these that we can blacklist them
+ instead of having to whitelist every other device on the planet.
* Tue Feb 8 2005 Dave Jones <davej@redhat.com>
-- Enable old style and new style USB initialisation.
-- More PPC jiggery pokery hackery.
-- 2.6.11-rc3-bk5
+- Use both old-style and new-style for USB initialisation.
-* Mon Feb 7 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk4
-- Various patches to unbork PPC.
-- Display taint bits on VM error.
+* Mon Feb 7 2005 Dave Jones <davej@redhat.com> [2.6.10-1.762_FC3, 2.6.10-1.13_FC2]
+- Update to 2.6.10-ac12
-* Mon Feb 7 2005 Rik van Riel <riel@redhat.com>
-- upgrade to latest upstream Xen bits, upgrade those to 2.6.11-rc3-bk2
+* Tue Feb 1 2005 Dave Jones <davej@redhat.com> [2.6.10-1.760_FC3, 2.6.10-1.12_FC2]
+- Disable longhaul driver, it causes random hangs. (#140873)
+- Fixup NFSv3 oops when mounting with sec=krb5 (#146703)
-* Sat Feb 5 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk2
+* Mon Jan 31 2005 Dave Jones <davej@redhat.com>
+- Rebase to 2.6.10-ac11
-* Fri Feb 4 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc3-bk1
+* Sat Jan 29 2005 Dave Jones <davej@redhat.com>
+- Reintegrate Tux. (#144812)
-* Wed Feb 2 2005 Dave Jones <davej@redhat.com>
-- Stop the input layer spamming the console. (#146906)
-- 2.6.11-rc3
-
-* Tue Feb 1 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc2-bk10
-- Reenable periodic slab checker.
-
-* Tue Feb 1 2005 Rik van Riel <riel@redhat.com>
-- update to latest xen-unstable source snapshot
-- add agpgart patch from upstream xen tree
-- port Ingo's latest execshield updates to Xen
-
-* Mon Jan 31 2005 Rik van Riel <riel@redhat.com>
-- enable SMP support in xenU kernel, use the xen0 kernel for the
- unprivileged domains if the SMP xenU breaks on your system
-
-* Thu Jan 27 2005 Dave Jones <davej@redhat.com>
-- Drop VM hack that broke in yesterdays rebase.
-
-* Wed Jan 26 2005 Dave Jones <davej@redhat.com>
-- Drop 586-SMP kernels. These are a good candidate for
- fedora-extras when it appears. The number of people
- actually using this variant is likely to be very very small.
-- 2.6.11-rc2-bk4
-
-* Tue Jan 25 2005 Dave Jones <davej@redhat.com>
-- 2.6.11-rc2-bk3
-
-* Sun Jan 23 2005 Dave Jones <davej@redhat.com>
-- Updated periodic slab debug check from Manfred.
-- Enable PAGE_ALLOC debugging again, it should now be fixed.
-- 2.6.11-rc2-bk1
-
-* Fri Jan 21 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.11-rc2
-
-* Fri Jan 21 2005 Rik van Riel <riel@redhat.com>
-- make exec-shield segment limits work inside the xen kernels
-
-* Thu Jan 20 2005 Dave Jones <davej@redhat.com>
-- Rebase to -bk8
-
-* Wed Jan 19 2005 Dave Jones <davej@redhat.com>
-- Re-add diskdump/netdump based on Jeff Moyers patches.
-- Rebase to -bk7
-
-* Tue Jan 18 2005 Jeremy Katz <katzj@redhat.com>
-- fixup xen0 %%post to use new grubby features for multiboot kernels
-- conflict with older mkinitrd for kernel-xen0
+* Thu Jan 20 2005 Dave Jones <davej@redhat.com> [2.6.10-1.753_FC3, 2.6.10-1.11_FC2]
+- Fix x87 fnsave Tag Word emulation when using FXSR (SSE)
+- Add multi-card reader of the day to the whitelist. (#145587)
* Tue Jan 18 2005 Dave Jones <davej@redhat.com>
-- -bk6
+- Reintegrate netdump/netconsole. (#144068)
* Mon Jan 17 2005 Dave Jones <davej@redhat.com>
-- First stab at kernel-devel packages. (David Woodhouse).
-
-* Mon Jan 17 2005 Rik van Riel <riel@redhat.com>
-- apply dmi fix, now xenU boots again
+- Update to 2.6.10-ac10
+- Revert module loader patch that caused lots of invalid parameter problems.
+- Print more debug info when spinlock code triggers a panic.
+- Print tainted information on various mm debug info.
* Fri Jan 14 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.11-bk2
+- Enable advansys scsi module on x86. (#141004)
* Thu Jan 13 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.11-bk1
-
-* Wed Jan 12 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.11rc1
-
-* Tue Jan 11 2005 Rik van Riel <riel@redhat.com>
-- fix Xen compile with -bk14
-
-* Tue Jan 11 2005 Dave Jones <davej@redhat.com>
-- Update to -bk14
-- Print tainted information in slab corruption messages.
+- Reenable CONFIG_PARIDE (#127333)
-* Tue Jan 11 2005 Rik van Riel <riel@redhat.com>
-- merge fix for the Xen TLS segment fixup issue
-
-* Tue Jan 11 2005 Dave Jones <davej@redhat.com>
-- Depend on hardlink, not kernel-utils.
+* Thu Jan 13 2005 Dave Jones <davej@redhat.com> [2.6.10-1.741_FC3, 2.6.10-1.9_FC2]
+- Update to 2.6.10-ac9
+- Fix slab corruption in ACPI video code.
* Mon Jan 10 2005 Dave Jones <davej@redhat.com>
-- Update to -bk13, reinstate GFP_ZERO patch which hopefully
- is now fixed.
- Add another Lexar card reader to the whitelist. (#143600)
- Package asm-m68k for asm-ppc includes. (don't ask). (#144604)
+* Mon Jan 10 2005 Dave Jones <davej@redhat.com> [2.6.10-1.737_FC3, 2.6.10-1.8_FC2]
+- Disable slab debugging.
+
* Sat Jan 8 2005 Dave Jones <davej@redhat.com>
- Periodic slab debug is incompatible with pagealloc debug.
Disable the latter.
+- Update to 2.6.10-ac8
* Fri Jan 7 2005 Dave Jones <davej@redhat.com>
-- Santa came to Notting's house too. (another new card reader)
-- Rebase to 2.6.10-bk10
-
-* Thu Jan 6 2005 Rik van Riel <riel@redhat.com>
-- update to latest xen-unstable tree
-- fix up Xen compile with -bk9, mostly pudding
+- Bump up to -ac7
+- Another new card reader.
* Thu Jan 6 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10-bk9
+- Rebase to 2.6.10-ac5
* Tue Jan 4 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10-bk7
+- Rebase to 2.6.10-ac4
- Add periodic slab debug checker.
-* Sun Jan 2 2005 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10-bk5
+* Mon Jan 3 2005 Dave Jones <davej@redhat.com>
+- Drop patch which meant we needed a newer gcc. (#144035)
+- Rebase to 2.6.10-ac2
+- Enable SL82C104 IDE driver as built-in on PPC64 (#131033)
* Sat Jan 1 2005 Dave Jones <davej@redhat.com>
- Fix probing of vesafb. (#125890)
-- Reenable EDD.
-- Don't assume existance of ~/.gnupg (#142201)
+- Enable PCILynx driver. (#142173)
* Fri Dec 31 2004 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10-bk4
-
-* Thu Dec 30 2004 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10-bk3
+- Drop 4g/4g patch completely.
* Tue Dec 28 2004 Dave Jones <davej@redhat.com>
- Drop bogus ethernet slab cache.
-* Sun Dec 26 2004 Dave Jones <davej@redhat.com>
-- Santa brought a new card reader that needs whitelisting.
-
-* Fri Dec 24 2004 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.10
-
-* Wed Dec 22 2004 Dave Jones <davej@redhat.com>
-- Re-add missing part of the exit() race fix. (#142505, #141896)
-
-* Tue Dec 21 2004 Dave Jones <davej@redhat.com>
-- Fix two silly bugs in the AGP posting fixes.
-
-* Fri Dec 17 2004 Dave Jones <davej@redhat.com>
+* Thu Dec 23 2004 Dave Jones <davej@redhat.com>
- Fix bio error propagation.
- Clear ebp on sysenter return.
- Extra debugging info on OOM kill.
- Fix ext2/3 leak on umount.
- fix missing wakeup in ipc/sem
- Fix another tux corner case bug.
-- NULL out ptrs in airo driver after kfree'ing them.
+
+* Wed Dec 22 2004 Dave Jones <davej@redhat.com>
+- Add another ipod to the unusual usb devices list. (#142779)
+
+* Tue Dec 21 2004 Dave Jones <davej@redhat.com>
+- Fix two silly bugs in the AGP posting fixes.
* Thu Dec 16 2004 Dave Jones <davej@redhat.com>
-- Better version of the PCI Posting fixes for AGPGART.
+- Better version of the PCI Posting fixes for agpgart.
- Add missing cache flush to the AGP code.
-- Drop netdump and common crashdump code.
-
-* Mon Dec 13 2004 Dave Jones <davej@redhat.com>
-- Drop diskdump. Aiming for a better kexec based solution for FC4.
* Sun Dec 12 2004 Dave Jones <davej@redhat.com>
- fix false ECHILD result from wait* with zombie group leader.
* Sat Dec 11 2004 Dave Jones <davej@redhat.com>
- Workaround broken pci posting in AGPGART.
-- Compile 686 kernel tuned for pentium4.
- | Needs benchmarking across various CPUs under
- | various workloads to find out if its worth keeping.
- Make sure VC resizing fits in s16.
* Fri Dec 10 2004 Dave Jones <davej@redhat.com>
- SELinux: Fix avc_node_update oops. (#142353)
- Fix CCISS ioctl return code.
- Make ppc64's pci_alloc_consistent() conform to documentation. (#140047)
+- Disable tiglusb module. (#142102)
+- E1000 64k-alignment fix. (#140047)
+- ID updates for cciss driver.
+- Fix overflows in USB Edgeport-IO driver. (#142258)
+- Fix wrong TASK_SIZE for 32bit processes on x86-64. (#141737)
+- Fix ext2/ext3 xattr/mbcache race. (#138951)
+- Fix bug where __getblk_slow can loop forever when pages are partially mapped. (#140424)
+- Add missing cache flushes in agpgart code.
+
+* Wed Dec 8 2004 Dave Jones <davej@redhat.com>
- Enable EDD
- Enable ETH1394. (#138497)
- Workaround E1000 post-maturely writing back to TX descriptors. (#133261)
- Fix compat fcntl F_GETLK{,64} (#141680)
- blkdev_get_blocks(): handle eof
- Another card reader for the whitelist. (#134094)
-- Disable tiglusb module. (#142102)
-- E1000 64k-alignment fix. (#140047)
-- Disable tiglusb module. (#142102)
-- ID updates for cciss driver.
-- Fix overflows in USB Edgeport-IO driver. (#142258)
-- Fix wrong TASK_SIZE for 32bit processes on x86-64. (#141737)
-- Fix ext2/ext3 xattr/mbcache race. (#138951)
-- Fix bug where __getblk_slow can loop forever when pages are partially mapped. (#140424)
-- Add missing cache flushes in agpgart code.
-
-* Thu Dec 9 2004 Dave Jones <davej@redhat.com>
-- Drop the 4g/4g hugemem kernel completely.
-
-* Wed Dec 8 2004 Rik van Riel <riel@redhat.com>
-- make Xen inherit config options from x86
-
-* Mon Dec 6 2004 Rik van Riel <riel@redhat.com>
-- apparently Xen works better without serial drivers in domain0 (#141497)
-
-* Sun Dec 5 2004 Rik van Riel <riel@redhat.com>
-- Fix up and reenable Xen compile.
-- Fix bug in install part of BuildKernel.
* Sat Dec 4 2004 Dave Jones <davej@redhat.com>
- Enable both old and new megaraid drivers.
- Add yet another card reader to usb scsi whitelist. (#141367)
+- Fix oops in conntrack on rmmod.
* Fri Dec 3 2004 Dave Jones <davej@redhat.com>
-- Sync all patches with latest updates in FC3.
-- Fix up xen0/xenU uninstall.
-- Temporarily disable xen builds.
-
-* Wed Dec 1 2004 Rik van Riel <riel@redhat.com>
-- replace VM hack with the upstream version
-- more Xen bugfixes
-
-* Tue Nov 30 2004 Rik van Riel <riel@redhat.com>
-- upgrade to later Xen sources, with upstream bugfixes
-- export direct_remap_area_pages for Xen
+- Pull in bits of -ac12
+ Should fix the smbfs & visor issues among others.
+
+* Thu Dec 2 2004 Dave Jones <davej@redhat.com>
+- Drop the futex debug patch, it served its purpose.
+- XFRM layer bug fixes
+- ppc64: Convert to using ibm,read-slot-reset-state2 RTAS call
+- ide: Make CSB6 driver support configurations.
+- ide: Handle early EOF on CDs.
+- Fix sx8 device naming in sysfs
+- e100/e1000: return -EINVAL when setting rx-mini or rx-jumbo. (#140793)
+
+* Wed Dec 1 2004 Dave Jones <davej@redhat.com>
+- Disable 4G/4G for i686.
+- Workaround for the E1000 erratum 23 (#140047)
+- Remove bogus futex warning. (#138179)
+- x86_64: Fix lost edge triggered irqs on UP kernel.
+- x86_64: Reenable DRI for MGA.
+- Workaround E1000 post-maturely writing back to TX descriptors (#133261)
+- 3c59x: add EEPROM_RESET for 3c900 Boomerang
+- Fix buffer overrun in arch/x86_64/sys_ia32.c:sys32_ni_syscall()
+- ext3: improves ext3's error logging when we encounter an on-disk corruption.
+- ext3: improves ext3's ability to deal with corruption on-disk
+- ext3: Handle double-delete of indirect blocks.
+- Disable SCB2 flash driver for RHEL4. (#141142)
+
+* Tue Nov 30 2004 Dave Jones <davej@redhat.com>
+- x86_64: add an option to configure oops stack dump
+- x86[64]: display phys_proc_id only when it is initialized
+- x86_64: no TIOCSBRK/TIOCCBRK in ia32 emulation
+- via-rhine: references __init code during resume
+- Add barriers to generic timer code to prevent race. (#128242)
+- ppc64: Add PURR and version data to /proc/ppc64/lparcfg
+- Prevent xtime value becoming incorrect.
+- scsi: return full SCSI status byte in SG_IO
+- Fix show_trace() in irq context with CONFIG_4KSTACKS
+- Adjust alignment of pagevec structure.
+- md: make sure md always uses rdev_dec_pending properly.
+- Make proc_pid_status not dereference dead task structs.
+- sg: Fix oops of sg_cmd_done and sg_release race (#140648)
+- fix bad segment coalescing in blk_recalc_rq_segments()
+- fix missing security_*() check in net/compat.c
+- ia64/x86_64/s390 overlapping vma fix
+- Update Emulex lpfc to 8.0.15
* Mon Nov 29 2004 Dave Jones <davej@redhat.com>
- Add another card reader to whitelist. (#141022)
-
-* Fri Nov 26 2004 Rik van Riel <riel@redhat.com>
-- add Xen kernels for i686, plus various bits and pieces to make them work
-
-* Mon Nov 15 2004 Dave Jones <davej@redhat.com>
-- Rebase to 2.6.9-ac9
+- Fix possible hang in do_wait() (#140042)
+- Fix ps showing wrong ppid. (#132030)
+- Print advice to use -hugemem if >=16GB of memory is detected.
+- Enable ICOM serial driver. (#136150)
+- Enable acpi hotplug driver for IA64.
+- SCSI: fix USB forced remove oops.
+- ia64: add missing sn2 timer mask in time_interpolator code. (#140580)
+- ia64: Fix hang reading /proc/pal/cpu0/tr_info (#139571)
+- ia64: bump number of UARTS. (#139100)
+- Fix ACPI debug level (#141292)
+- Make EDD runtime configurable, and reenable.
+- ppc64: IBM VSCSI driver race fix. (#138725)
+- ppc64: Ensure PPC64 interrupts don't end up hard-disabled. (#139020, #131590)
+- ppc64: Yet more sigsuspend/singlestep fixing. (#140102, #137931)
+- x86-64: Implement ACPI based reset mechanism. (#139104)
+- Backport 2.6.10rc sysfs changes needed for IBM hotplug driver. (#140372)
+- Update Emulex lpfc driver to v8.0.14
+- Optimize away the unconditional write to debug registers on signal delivery path.
+- Fix up scsi_test_unit_ready() to work correctly with CD-ROMs.
+- md: fix two little bugs in raid10
+- Remove incorrect ELF check from module loading. (#140954)
+- Plug leaks in error paths of aic driver.
+- Add refcounting to scsi command allocation.
+- Taint oopses on machine checks, bad_page()'s calls and forced rmmod's.
+- Share Intel cache descriptors between x86 & x86-64.
+- rx checksum support for gige nForce ethernet
+- vm: vm_dirty_ratio initialisation fix
+
+* Mon Nov 29 2004 Soeren Sandmann <sandmann@redhat.com>
+- Build FC-3 kernel in RHEL build root
+
+* Sun Nov 28 2004 Dave Jones <davej@redhat.com>
+- Move 4g/4g kernel into -hugemem.
+
+* Sat Nov 27 2004 Dave Jones <davej@redhat.com>
+- Recognise Shuttle SN85G4 card reader. (#139163)
+
+* Tue Nov 23 2004 Dave Jones <davej@redhat.com>
+- Add futex debug patch.
+
+* Mon Nov 22 2004 Dave Jones <davej@redhat.com>
+- Update -ac patch to 2.6.9-ac11
+- make tulip_stop_rxtx() wait for DMA to fully stop. (#138240)
+- ACPI: Make LEqual less strict about operand types matching.
+- scsi: avoid extra 'put' on devices in __scsi_iterate_device() (#138135)
+- Fix bugs with SOCK_SEQPACKET AF_UNIX sockets
+- Reenable token ring drivers. (#119345)
+- SELinux: Map Unix seqpacket sockets to appropriate security class
+- SELinux: destroy avtab node cache in policy load error path.
+- AF_UNIX: Serialize dgram read using semaphore just like stream.
+- lockd: NLM blocks locks don't sleep
+- NFS lock recovery fixes
+- Add more MODULE_VERSION tags (#136403)
+- Update qlogic driver to 2.6.10rc2 level.
+- cciss: fixes for clustering
+- ieee802.11 update.
+- ipw2100: update to ver 1.0.0
+- ipw2200: update to ver 1.0.0
+- Enable promisc mode on ipw2100
+- 3c59x: reload EEPROM values at rmmod for needy cards
+- ppc64: Prevent sigsuspend stomping on r4 and r5
+- ppc64: Alternative single-step fix.
+- fix for recursive netdump oops on x86_64
+- ia64: Fix IRQ routing fix when booted with maxcpus= (#138236)
+- ia64: search the iommu for the correct size
+- Deal with fraglists correctly on ipv4/ipv6 output
+- Various statm accounting fixes (#139447)
+- Reenable CMM /proc interface for s390 (#137397)
+
+* Fri Nov 19 2004 Dave Jones <davej@redhat.com>
+- e100: fix improper enabling of interrupts. (#139706)
+- autofs4: allow map update recognition
+- Various TCP fixes from 2.6.10rc
+- Various netlink fixes from 2.6.10rc
+- [IPV4]: Do not try to unhash null-netdev nexthops.
+- ppc64: Make NUMA map CPU->node before bringing up the CPU (#128063)
+- ppc64: sched domains / cpu hotplug cleanup. (#128063)
+- ppc64: Add a CPU_DOWN_PREPARE hotplug CPU notifier (#128063)
+- ppc64: Register a cpu hotplug notifier to reinitialize the
+ scheduler domains hierarchy (#128063)
+- ppc64: Introduce CPU_DOWN_FAILED notifier (#128063)
+- ppc64: Make arch_destroy_sched_domains() conditional (#128063)
+- ppc64: Use CPU_DOWN_FAILED notifier in the sched-domains hotplug code (#128063)
+- Various updates to the SCSI midlayer from 2.6.10rc.
+- vlan_dev: return 0 on vlan_dev_change_mtu success. (#139760)
+- Update Emulex lpfc driver to v8013
+- Fix problem with b44 driver and 4g/4g patch. (#118165)
+- Prevent oops when loading aic79xx on machine without hardware. (#125982)
+- Use correct spinlock functions in token ring net code. (#135462)
+- scsi: Add reset ioctl capability to ULDs
+- scsi: update ips driver to 7.10.18
+- Reenable ACPI hotplug driver. (#139976, #140130, #132691)
+
+* Thu Nov 18 2004 Dave Jones <davej@redhat.com>
+- Drop 2.6.9 changes that broke megaraid. (#139723)
+- Update to 2.6.9-ac10, fixing the SATA problems (#139674)
+- Update the OOM-killer tamer to upstream.
+- Implement an RCU scheme for the SELinux AVC
+- Improve on the OOM-killer taming patch.
+- device-mapper: Remove duplicate kfree in dm_register_target error path.
+- Make SHA1 guard against misaligned accesses
+- ASPM workaround for PCIe. (#123360)
+- Hot-plug driver updates due to MSI change (#134290)
+- Workaround for 80332 IOP hot-plug problem (#139041)
+- ExpressCard hot-plug support for ICH6M (#131800)
+- Fix boot crash on VIA systems (noted on x86-64)
+- PPC64: Store correct backtracking info in ppc64 signal frames
+- PPC64: Prevent HVSI from oopsing on hangup (#137912)
+- Fix poor performance b/c of noncacheable mapping in 4g/4g (#130842)
+- Fix PCI-X hotplug issues (#132852, #134290)
+- Re-export force_sig() (#139503)
+- Various fixes for more security issues from latest -ac patch.
+- Fix d_find_alias brokenness (#137791)
+- tg3: Fix fiber hw autoneg bounces (#138738)
+- diskdump: Fix issue with NMI watchdog. (#138041)
+- diskdump: Export disk_dump_state. (#138132)
+- diskdump: Tickle NMI watchdog in diskdump_mdelay() (#138036)
+- diskdump: Fix mem= for x86-64 (#138139)
+- diskdump: Fix missing system_state setting. (#138130)
+- diskdump: Fix diskdump completion message (#138028)
+- Re-add aic host raid support.
+- Take a few more export removal patches from 2.6.10rc
+- SATA: Make AHCI work
+- SATA: Core updates.
+- S390: Fix Incorrect registers in core dumps. (#138206)
+- S390: Fix up lcs device state. (#131167)
+- S390: Fix possible qeth IP registration failure.
+- S390: Support broadcast on z800/z900 HiperSockets
+- S390: Allow FCP port to recover after aborted nameserver request.
+- Flush error in pci_mmcfg_write (#129338)
+- hugetlb_get_unmapped_area fix (#135364, #129525)
+- Fix ia64 cyclone timer on ia64 (#137842, #136684)
+- Fix ipv6 MTU calculation. (#130397)
+- ACPI: Don't display messages about ACPI breakpoints. (#135856)
+- Fix x86_64 copy_user_generic (#135655)
+- lockd: remove hardcoded maximum NLM cookie length
+- Fix SCSI bounce limit
+- Disable polling mode on hotplug controllers in favour of interrupt driven. (#138737)
* Sat Nov 13 2004 Dave Jones <davej@redhat.com>
- Drop some bogus patches.
- re-add and enable the Auditing patch
- switch several cpufreq modules to built in since detecting in userspace
which to use is unpleasant
-
* Thu Jul 03 2003 Arjan van de Ven <arjanv@redhat.com>
- 2.6 start
-
UTS_LEN=64
UTS_TRUNCATE="sed -e s/\(.\{1,$UTS_LEN\}\).*/\1/"
-
+LINUX_COMPILE_VERSION_ID="__linux_compile_version_id__`hostname | tr -c '[0-9A-Za-z\n]' '__'`_`LANG=C date | tr -c '[0-9A-Za-z\n]' '_'`"
# Generate a temporary compile.h
( echo /\* This file is auto generated, version $VERSION \*/
fi
echo \#define LINUX_COMPILER \"`$CC -v 2>&1 | tail -n 1`\"
+ echo \#define LINUX_COMPILE_VERSION_ID $LINUX_COMPILE_VERSION_ID
+ echo \#define LINUX_COMPILE_VERSION_ID_TYPE typedef char* "$LINUX_COMPILE_VERSION_ID""_t"
) > .tmpcompile
# Only replace the real compile.h if the new one is different,
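The hunk above adds a `LINUX_COMPILE_VERSION_ID` macro to the generated compile.h. As a rough standalone sketch (assumptions: POSIX sh, and `hostname`/`date`/`tr` available as in the script), the added lines build a sanitized identifier and emit it like this:

```shell
#!/bin/sh
# Standalone sketch of the LINUX_COMPILE_VERSION_ID construction added above.
# Assumption: run outside the real compile.h generator, just to show the shape
# of the emitted macro. tr -c squashes every character that is not
# alphanumeric (or a newline) in the hostname and date to an underscore,
# so the value can later be used as a C identifier.
LINUX_COMPILE_VERSION_ID="__linux_compile_version_id__`hostname | tr -c '[0-9A-Za-z\n]' '__'`_`LANG=C date | tr -c '[0-9A-Za-z\n]' '_'`"

# Emit the two #define lines the way the hunk does.
echo \#define LINUX_COMPILE_VERSION_ID $LINUX_COMPILE_VERSION_ID
echo \#define LINUX_COMPILE_VERSION_ID_TYPE typedef char* "$LINUX_COMPILE_VERSION_ID""_t"
```

In the hunk itself these same echo lines are written into `.tmpcompile` alongside the existing `LINUX_COMPILER` define, and compile.h is only replaced if the result differs.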