platform-drivers-x86 for v5.12-1

- Microsoft Surface devices System Aggregator Module support
 - SW_TABLET_MODE reporting improvements
 - thinkpad_acpi keyboard language setting support
 - platform / DPTF profile settings support
  - Base / userspace API parts merged from Rafael's acpi-platform branch
  - thinkpad_acpi and ideapad-laptop support through pdx86
 - Remove support for some obsolete Intel MID platforms through merging
   of the shared intel-mid-removal branch
 - Big cleanup of the ideapad-laptop driver
 - Misc. other fixes / new hw support / quirks
 
 The following is an automated git shortlog grouped by driver:
 
 ACPI:
  -  platform-profile: Fix possible deadlock in platform_profile_remove()
  -  platform-profile: Introduce object pointers to callbacks
  -  platform-profile: Drop const qualifier for cur_profile
  -  platform: Add platform profile support
 
 Documentation:
  -  Add documentation for new platform_profile sysfs attribute
 
 Documentation/ABI:
  -  sysfs-platform-ideapad-laptop: conservation_mode attribute
  -  sysfs-platform-ideapad-laptop: update device attribute paths
 
 Kconfig:
  -  add missing selects for ideapad-laptop
 
 MAINTAINERS:
  -  update email address for Henrique de Moraes Holschuh
 
 Merge remote-tracking branch 'intel-speed-select/intel-sst' into review-hans:
  - Merge remote-tracking branch 'intel-speed-select/intel-sst' into review-hans
 
 Merge remote-tracking branch 'linux-pm/acpi-platform' into review-hans:
  - Merge remote-tracking branch 'linux-pm/acpi-platform' into review-hans
 
 Merge tag 'ib-drm-gpio-pdx86-rtc-wdt-v5.12-1' into for-next:
  - Merge tag 'ib-drm-gpio-pdx86-rtc-wdt-v5.12-1' into for-next
 
 Move all dell drivers to their own subdirectory:
  - Move all dell drivers to their own subdirectory
 
 Platform:
  -  OLPC: Constify static struct regulator_ops
  -  OLPC: Specify the enable time
  -  OLPC: Remove dcon_rdev from olpc_ec_priv
  -  OLPC: Fix probe error handling
 
 Revert "platform/x86:
  -  ideapad-laptop: Switch touchpad attribute to be RO"
 
 acer-wmi:
  -  Don't use ACPI_EXCEPTION()
 
 amd-pmc:
  -  put device on error paths
  -  Fix CONFIG_DEBUG_FS check
 
 dell-wmi-sysman:
  -  fix a NULL pointer dereference
 
 docs:
  -  driver-api: Add Surface Aggregator subsystem documentation
 
 drm/gma500:
  -  Get rid of duplicate NULL checks
  -  Convert to use new SCU IPC API
 
 gpio:
  -  msic: Remove driver for deprecated platform
  -  intel-mid: Remove driver for deprecated platform
 
 hp-wmi:
  -  Disable tablet-mode reporting by default
  -  Don't log a warning on HPWMI_RET_UNKNOWN_COMMAND errors
 
 i2c-multi-instantiate:
  -  Don't create platform device for INT3515 ACPI nodes
 
 ideapad-laptop:
  -  add "always on USB charging" control support
  -  add keyboard backlight control support
  -  send notification about touchpad state change to sysfs
  -  fix checkpatch warnings, more consistent style
  -  change 'cfg' debugfs file format
  -  change 'status' debugfs file format
  -  check for touchpad support in _CFG
  -  check for Fn-lock support in HALS
  -  rework is_visible() logic
  -  rework and create new ACPI helpers
  -  group and separate (un)related constants into enums
  -  misc. device attribute changes
  -  always propagate error codes from device attributes' show() callback
  -  convert ACPI helpers to return -EIO in case of failure
  -  use dev_{err,warn} or appropriate variant to display log messages
  -  use msecs_to_jiffies() helper instead of hand-crafted formula
  -  use for_each_set_bit() helper to simplify event processing
  -  use kobj_to_dev()
  -  use device_{add,remove}_group
  -  use sysfs_emit()
  -  add missing call to submodule destructor
  -  sort includes lexicographically
  -  use appropriately typed variable to store the return value of ACPI methods
  -  remove unnecessary NULL checks
  -  remove unnecessary dev_set_drvdata() call
  -  DYTC Platform profile support
  -  Disable touchpad_switch for ELAN0634
 
 intel-vbtn:
  -  Eval VBDL after registering our notifier
  -  Add alternative method to enable switches
  -  Create 2 separate input-devs for buttons and switches
  -  Rework wakeup handling in notify_handler()
  -  Drop HP Stream x360 Convertible PC 11 from allow-list
  -  Support for tablet mode on Dell Inspiron 7352
 
 intel_mid_powerbtn:
  -  Remove driver for deprecated platform
 
 intel_mid_thermal:
  -  Remove driver for deprecated platform
 
 intel_pmt:
  -  Make INTEL_PMT_CLASS non-user-selectable
 
 intel_pmt_crashlog:
  -  Add dependency on MFD_INTEL_PMT
 
 intel_pmt_telemetry:
  -  Add dependency on MFD_INTEL_PMT
 
 intel_scu_ipc:
  -  Increase virtual timeout from 3 to 5 seconds
 
 intel_scu_wdt:
  -  Drop mistakenly added const
  -  Get rid of custom x86 model comparison
  -  Drop SCU notification
  -  Move driver from arch/x86
 
 msi-wmi:
  -  Fix variable 'status' set but not used compiler warning
 
 platform/surface:
  -  aggregator: Fix access of unaligned value
  -  Add Surface Hot-Plug driver
  -  surface3-wmi: Fix variable 'status' set but not used compiler warning
  -  aggregator: Fix braces in if condition with unlikely() macro
  -  aggregator: Fix kernel-doc references
  -  aggregator: fix a kernel-doc markup
  -  aggregator_cdev: Add comments regarding unchecked allocation size
  -  aggregator_cdev: Fix access of uninitialized variables
  -  fix potential integer overflow on shift of a int
  -  Add Surface ACPI Notify driver
  -  Add Surface Aggregator user-space interface
  -  aggregator: Add dedicated bus and device type
  -  aggregator: Add error injection capabilities
  -  aggregator: Add trace points
  -  aggregator: Add event item allocation caching
  -  aggregator: Add control packet allocation caching
  -  Add Surface Aggregator subsystem
  -  SURFACE_PLATFORMS should depend on ACPI
  -  surface_gpe: Fix non-PM_SLEEP build warnings
 
 platform/x86/intel-uncore-freq:
  -  Add Sapphire Rapids server support
 
 rtc:
  -  mrst: Remove driver for deprecated platform
 
 sony-laptop:
  -  Remove unneeded semicolon
 
 thinkpad_acpi:
  -  Replace ifdef CONFIG_ACPI_PLATFORM_PROFILE with depends on
  -  Fix 'warning: no previous prototype for' warnings
  -  Add platform profile support
  -  fixed warning and incorporated review comments
  -  rectify length of title underline
  -  Don't register keyboard_lang unnecessarily
  -  set keyboard language
  -  Add P53/73 firmware to fan_quirk_table for dual fan control
  -  correct palmsensor error checking
 
 tools/power/x86/intel-speed-select:
  -  Update version to 1.8
  -  Add new command to get/set TRL
  -  Add new command turbo-mode
  -  Set higher of cpuinfo_max_freq or base_frequency
  -  Set scaling_max_freq to base_frequency
 
 touchscreen_dmi:
  -  Add info for the Jumper EZpad 7 tablet
  -  Add swap-x-y quirk for Goodix touchscreen on Estar Beauty HD tablet
 
 watchdog:
  -  intel-mid_wdt: Postpone IRQ handler registration till SCU is ready
  -  intel_scu_watchdog: Remove driver for deprecated platform

Merge tag 'platform-drivers-x86-v5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86

Pull x86 platform driver updates from Hans de Goede:
 "Highlights:

   - Microsoft Surface devices System Aggregator Module support

   - SW_TABLET_MODE reporting improvements

   - thinkpad_acpi keyboard language setting support

   - platform / DPTF profile settings support:

      - Base / userspace API parts merged from Rafael's acpi-platform
        branch

      - thinkpad_acpi and ideapad-laptop support through pdx86

   - Remove support for some obsolete Intel MID platforms through
     merging of the shared intel-mid-removal branch

   - Big cleanup of the ideapad-laptop driver

   - Misc other fixes / new hw support / quirks"

* tag 'platform-drivers-x86-v5.12-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86: (99 commits)
  platform/x86: intel_scu_ipc: Increase virtual timeout from 3 to 5 seconds
  platform/surface: aggregator: Fix access of unaligned value
  tools/power/x86/intel-speed-select: Update version to 1.8
  tools/power/x86/intel-speed-select: Add new command to get/set TRL
  tools/power/x86/intel-speed-select: Add new command turbo-mode
  Platform: OLPC: Constify static struct regulator_ops
  platform/surface: Add Surface Hot-Plug driver
  platform/x86: intel_scu_wdt: Drop mistakenly added const
  platform/x86: Kconfig: add missing selects for ideapad-laptop
  platform/x86: acer-wmi: Don't use ACPI_EXCEPTION()
  platform/x86: thinkpad_acpi: Replace ifdef CONFIG_ACPI_PLATFORM_PROFILE with depends on
  platform/x86: thinkpad_acpi: Fix 'warning: no previous prototype for' warnings
  platform/x86: msi-wmi: Fix variable 'status' set but not used compiler warning
  platform/surface: surface3-wmi: Fix variable 'status' set but not used compiler warning
  platform/x86: Move all dell drivers to their own subdirectory
  Documentation/ABI: sysfs-platform-ideapad-laptop: conservation_mode attribute
  Documentation/ABI: sysfs-platform-ideapad-laptop: update device attribute paths
  platform/x86: ideapad-laptop: add "always on USB charging" control support
  platform/x86: ideapad-laptop: add keyboard backlight control support
  platform/x86: ideapad-laptop: send notification about touchpad state change to sysfs
  ...
This commit is contained in:
Linus Torvalds 2021-02-22 08:50:01 -08:00
commit 983e4adae0
108 changed files with 16567 additions and 3474 deletions

@@ -1,11 +1,11 @@
What: /sys/devices/platform/ideapad/camera_power
What: /sys/bus/platform/devices/VPC2004:*/camera_power
Date: Dec 2010
KernelVersion: 2.6.37
Contact: "Ike Panhc <ike.pan@canonical.com>"
Description:
Control the power of camera module. 1 means on, 0 means off.
What: /sys/devices/platform/ideapad/fan_mode
What: /sys/bus/platform/devices/VPC2004:*/fan_mode
Date: June 2012
KernelVersion: 3.6
Contact: "Maxim Mikityanskiy <maxtram95@gmail.com>"
@@ -18,7 +18,7 @@ Description:
* 2 -> Dust Cleaning
* 4 -> Efficient Thermal Dissipation Mode
What: /sys/devices/platform/ideapad/touchpad
What: /sys/bus/platform/devices/VPC2004:*/touchpad
Date: May 2017
KernelVersion: 4.13
Contact: "Ritesh Raj Sarraf <rrs@debian.org>"
@@ -27,7 +27,16 @@ Description:
* 1 -> Switched On
* 0 -> Switched Off
What: /sys/bus/pci/devices/<bdf>/<device>/VPC2004:00/fn_lock
What: /sys/bus/platform/devices/VPC2004:*/conservation_mode
Date: Aug 2017
KernelVersion: 4.14
Contact: platform-driver-x86@vger.kernel.org
Description:
Controls whether the conservation mode is enabled or not.
This feature limits the maximum battery charge percentage to
around 50-60% in order to prolong the lifetime of the battery.
What: /sys/bus/platform/devices/VPC2004:*/fn_lock
Date: May 2018
KernelVersion: 4.18
Contact: "Oleg Keri <ezhi99@gmail.com>"
@@ -41,3 +50,12 @@ Description:
# echo "0" > \
/sys/bus/pci/devices/0000:00:1f.0/PNP0C09:00/VPC2004:00/fn_lock
What: /sys/bus/platform/devices/VPC2004:*/usb_charging
Date: Feb 2021
KernelVersion: 5.12
Contact: platform-driver-x86@vger.kernel.org
Description:
Controls whether the "always on USB charging" feature is
enabled or not. This feature enables charging USB devices
even if the computer is not turned on.

@@ -51,6 +51,7 @@ detailed description):
- UWB enable and disable
- LCD Shadow (PrivacyGuard) enable and disable
- Lap mode sensor
- Setting keyboard language
A compatibility table by model and feature is maintained on the web
site, http://ibm-acpi.sf.net/. I appreciate any success or failure
@@ -1466,6 +1467,30 @@ Sysfs notes
rfkill controller switch "tpacpi_uwb_sw": refer to
Documentation/driver-api/rfkill.rst for details.
Setting keyboard language
-------------------------
sysfs: keyboard_lang
This feature is used to set the keyboard language in the EC firmware (ECFW)
via the ASL interface. On a few ThinkPad models, such as the T580, T590, and
T15 Gen 1, keys like "=", "(" and ")" are not reported correctly when the
keyboard language is set to anything other than "english". This is because
the default keyboard language in the ECFW is "english". Using this sysfs
attribute, the user can set the correct keyboard language in the ECFW so that
these keys work correctly.
Example of command to set keyboard language is mentioned below::
echo jp > /sys/devices/platform/thinkpad_acpi/keyboard_lang
The keyboard-layout codes that can be written to this sysfs attribute are:
be (Belgian), cz (Czech), da (Danish), de (German), en (English),
es (Spanish), et (Estonian), fr (French), fr-ch (French, Switzerland),
hu (Hungarian), it (Italian), jp (Japanese), nl (Dutch), nn (Norwegian),
pl (Polish), pt (Portuguese), sl (Slovenian), sv (Swedish), tr (Turkish)
Adaptive keyboard
-----------------

@@ -99,6 +99,7 @@ available subsections can be seen below.
rfkill
serial/index
sm501
surface_aggregator/index
switchtec
sync_file
vfio-mediated-device

@@ -0,0 +1,38 @@
.. SPDX-License-Identifier: GPL-2.0+
===============================
Client Driver API Documentation
===============================
.. contents::
:depth: 2
Serial Hub Communication
========================
.. kernel-doc:: include/linux/surface_aggregator/serial_hub.h
.. kernel-doc:: drivers/platform/surface/aggregator/ssh_packet_layer.c
:export:
Controller and Core Interface
=============================
.. kernel-doc:: include/linux/surface_aggregator/controller.h
.. kernel-doc:: drivers/platform/surface/aggregator/controller.c
:export:
.. kernel-doc:: drivers/platform/surface/aggregator/core.c
:export:
Client Bus and Client Device API
================================
.. kernel-doc:: include/linux/surface_aggregator/device.h
.. kernel-doc:: drivers/platform/surface/aggregator/bus.c
:export:

@@ -0,0 +1,393 @@
.. SPDX-License-Identifier: GPL-2.0+
.. |ssam_controller| replace:: :c:type:`struct ssam_controller <ssam_controller>`
.. |ssam_device| replace:: :c:type:`struct ssam_device <ssam_device>`
.. |ssam_device_driver| replace:: :c:type:`struct ssam_device_driver <ssam_device_driver>`
.. |ssam_client_bind| replace:: :c:func:`ssam_client_bind`
.. |ssam_client_link| replace:: :c:func:`ssam_client_link`
.. |ssam_get_controller| replace:: :c:func:`ssam_get_controller`
.. |ssam_controller_get| replace:: :c:func:`ssam_controller_get`
.. |ssam_controller_put| replace:: :c:func:`ssam_controller_put`
.. |ssam_device_alloc| replace:: :c:func:`ssam_device_alloc`
.. |ssam_device_add| replace:: :c:func:`ssam_device_add`
.. |ssam_device_remove| replace:: :c:func:`ssam_device_remove`
.. |ssam_device_driver_register| replace:: :c:func:`ssam_device_driver_register`
.. |ssam_device_driver_unregister| replace:: :c:func:`ssam_device_driver_unregister`
.. |module_ssam_device_driver| replace:: :c:func:`module_ssam_device_driver`
.. |SSAM_DEVICE| replace:: :c:func:`SSAM_DEVICE`
.. |ssam_notifier_register| replace:: :c:func:`ssam_notifier_register`
.. |ssam_notifier_unregister| replace:: :c:func:`ssam_notifier_unregister`
.. |ssam_request_sync| replace:: :c:func:`ssam_request_sync`
.. |ssam_event_mask| replace:: :c:type:`enum ssam_event_mask <ssam_event_mask>`
======================
Writing Client Drivers
======================
For the API documentation, refer to:
.. toctree::
:maxdepth: 2
client-api
Overview
========
Client drivers can be set up in two main ways, depending on how the
corresponding device is made available to the system. We specifically
differentiate between devices that are presented to the system via one of
the conventional ways, e.g. as platform devices via ACPI, and devices that
are non-discoverable and instead need to be explicitly provided by some
other mechanism, as discussed further below.
Non-SSAM Client Drivers
=======================
All communication with the SAM EC is handled via the |ssam_controller|
representing that EC to the kernel. Drivers targeting a non-SSAM device (and
thus not being a |ssam_device_driver|) need to explicitly establish a
connection/relation to that controller. This can be done via the
|ssam_client_bind| function. This function returns a reference to the SSAM
controller, but, more importantly, also establishes a device link between
the client device and the controller (this can also be done separately via
|ssam_client_link|). This is important because it, first, guarantees that
the returned controller remains valid for use in the client driver for as
long as this driver is bound to its device, i.e. that the driver gets
unbound before the controller ever becomes invalid, and, second, ensures
correct suspend/resume ordering. This setup should be done in the driver's
probe function, and may be used to defer probing in case the SSAM subsystem
is not ready yet, for example:
.. code-block:: c
static int client_driver_probe(struct platform_device *pdev)
{
struct ssam_controller *ctrl;
ctrl = ssam_client_bind(&pdev->dev);
if (IS_ERR(ctrl))
return PTR_ERR(ctrl) == -ENODEV ? -EPROBE_DEFER : PTR_ERR(ctrl);
// ...
return 0;
}
The controller may be separately obtained via |ssam_get_controller| and its
lifetime be guaranteed via |ssam_controller_get| and |ssam_controller_put|.
Note, however, that none of these functions guarantee that the controller
will not be shut down or suspended. They essentially only operate on the
reference, i.e. they guarantee a bare minimum of accessibility without any
guarantee of practical operability.
Adding SSAM Devices
===================
If a device does not already exist/is not already provided via conventional
means, it should be provided as |ssam_device| via the SSAM client device
hub. New devices can be added to this hub by entering their UID into the
corresponding registry. SSAM devices can also be manually allocated via
|ssam_device_alloc|, subsequently to which they have to be added via
|ssam_device_add| and eventually removed via |ssam_device_remove|. By
default, the parent of the device is set to the controller device provided
for allocation, however this may be changed before the device is added. Note
that, when changing the parent device, care must be taken to ensure that the
controller lifetime and suspend/resume ordering guarantees, provided in the
default setup through the parent-child relation, are preserved. If
necessary, this can be done by use of |ssam_client_link|, as is done for
non-SSAM client drivers and described in more detail above.
A client device must always be removed by the party which added the
respective device before the controller shuts down. Such removal can be
guaranteed by linking the driver providing the SSAM device to the controller
via |ssam_client_link|, causing it to unbind before the controller driver
unbinds. Client devices registered with the controller as parent are
automatically removed when the controller shuts down, but this should not be
relied upon, especially as this does not extend to client devices with a
different parent.
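The allocate/add/remove flow described above can be sketched roughly as
follows. This is a minimal sketch, not a definitive implementation: the UID
values are placeholders that do not correspond to a real SAM device, and
error handling is reduced to the essentials.

```c
/*
 * Sketch: manually providing an SSAM client device via
 * ssam_device_alloc()/ssam_device_add()/ssam_device_remove().
 * The UID below is a placeholder, not a real SAM device.
 */
#include <linux/surface_aggregator/device.h>

static int example_add_device(struct ssam_controller *ctrl,
			      struct ssam_device **out)
{
	struct ssam_device_uid uid = {
		.domain = SSAM_DOMAIN_SERIALHUB,
		.category = SSAM_SSH_TC_BAT,	/* placeholder values */
		.target = 0x01,
		.instance = 0x01,
		.function = 0x00,
	};
	struct ssam_device *sdev;
	int status;

	sdev = ssam_device_alloc(ctrl, uid);
	if (!sdev)
		return -ENOMEM;

	/*
	 * The parent (defaulting to the controller device) may be
	 * changed here, before the device is added; see the caveats
	 * on lifetime and suspend/resume ordering above.
	 */

	status = ssam_device_add(sdev);
	if (status) {
		ssam_device_put(sdev);
		return status;
	}

	*out = sdev;
	return 0;
}

/* Later, before the controller shuts down: ssam_device_remove(sdev); */
```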
SSAM Client Drivers
===================
SSAM client device drivers are, in essence, no different than other device
driver types. They are represented via |ssam_device_driver| and bind to a
|ssam_device| via its UID (:c:type:`struct ssam_device.uid <ssam_device>`)
member and the match table
(:c:type:`struct ssam_device_driver.match_table <ssam_device_driver>`),
which should be set when declaring the driver struct instance. Refer to the
|SSAM_DEVICE| macro documentation for more details on how to define members
of the driver's match table.
The UID for SSAM client devices consists of a ``domain``, a ``category``,
a ``target``, an ``instance``, and a ``function``. The ``domain`` is used
to differentiate between physical SAM devices
(:c:type:`SSAM_DOMAIN_SERIALHUB <ssam_device_domain>`), i.e. devices that can
be accessed via the Surface Serial Hub, and virtual ones
(:c:type:`SSAM_DOMAIN_VIRTUAL <ssam_device_domain>`), such as client-device
hubs, that have no real representation on the SAM EC and are solely used on
the kernel/driver-side. For physical devices, ``category`` represents the
target category, ``target`` the target ID, and ``instance`` the instance ID
used to access the physical SAM device. In addition, ``function`` references
a specific device functionality, but has no meaning to the SAM EC. The
(default) name of a client device is generated based on its UID.
A driver instance can be registered via |ssam_device_driver_register| and
unregistered via |ssam_device_driver_unregister|. For convenience, the
|module_ssam_device_driver| macro may be used to define module init- and
exit-functions registering the driver.
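A skeleton of such a driver might look as follows. This is a hedged sketch:
the match-table entry uses placeholder IDs and assumes a serial-hub shorthand
(``SSAM_SDEV()``) of the |SSAM_DEVICE| macro family; refer to the macro
documentation for the exact member definitions.

```c
/* Sketch of an SSAM client driver skeleton; IDs are placeholders. */
#include <linux/module.h>
#include <linux/surface_aggregator/device.h>

static int example_probe(struct ssam_device *sdev)
{
	/* The controller is available via sdev->ctrl while bound. */
	return 0;
}

static void example_remove(struct ssam_device *sdev)
{
	/* Clean up driver state. */
}

static const struct ssam_device_id example_match[] = {
	{ SSAM_SDEV(BAT, 0x01, 0x01, 0x00) },	/* placeholder UID */
	{ },
};
MODULE_DEVICE_TABLE(ssam, example_match);

static struct ssam_device_driver example_driver = {
	.probe = example_probe,
	.remove = example_remove,
	.match_table = example_match,
	.driver = {
		.name = "ssam_example",
	},
};
module_ssam_device_driver(example_driver);

MODULE_LICENSE("GPL");
```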
The controller associated with a SSAM client device can be found in its
:c:type:`struct ssam_device.ctrl <ssam_device>` member. This reference is
guaranteed to be valid for at least as long as the client driver is bound,
but should also be valid for as long as the client device exists. Note,
however, that access outside of the bound client driver must ensure that the
controller device is not suspended while making any requests or
(un-)registering event notifiers (and thus should generally be avoided). This
is guaranteed when the controller is accessed from inside the bound client
driver.
Making Synchronous Requests
===========================
Synchronous requests are (currently) the main form of host-initiated
communication with the EC. There are a couple of ways to define and execute
such requests; however, most of them boil down to something similar to the
example shown below. This example defines a write-read request, meaning
that the caller provides an argument to the SAM EC and receives a response.
The caller needs to know the (maximum) length of the response payload and
provide a buffer for it.
Care must be taken to ensure that any command payload data passed to the SAM
EC is provided in little-endian format and, similarly, any response payload
data received from it is converted from little-endian to host endianness.
.. code-block:: c
int perform_request(struct ssam_controller *ctrl, u32 arg, u32 *ret)
{
struct ssam_request rqst;
struct ssam_response resp;
int status;
/* Convert request argument to little-endian. */
__le32 arg_le = cpu_to_le32(arg);
__le32 ret_le = cpu_to_le32(0);
/*
* Initialize request specification. Replace this with your values.
* The rqst.payload field may be NULL if rqst.length is zero,
* indicating that the request does not have any argument.
*
* Note: The request parameters used here are not valid, i.e.
* they do not correspond to an actual SAM/EC request.
*/
rqst.target_category = SSAM_SSH_TC_SAM;
rqst.target_id = 0x01;
rqst.command_id = 0x02;
rqst.instance_id = 0x03;
rqst.flags = SSAM_REQUEST_HAS_RESPONSE;
rqst.length = sizeof(arg_le);
rqst.payload = (u8 *)&arg_le;
/* Initialize request response. */
resp.capacity = sizeof(ret_le);
resp.length = 0;
resp.pointer = (u8 *)&ret_le;
/*
* Perform actual request. The response pointer may be null in case
* the request does not have any response. This must be consistent
* with the SSAM_REQUEST_HAS_RESPONSE flag set in the specification
* above.
*/
status = ssam_request_sync(ctrl, &rqst, &resp);
/*
* Alternatively use
*
* ssam_request_sync_onstack(ctrl, &rqst, &resp, sizeof(arg_le));
*
* to perform the request, allocating the message buffer directly
* on the stack as opposed to allocation via kzalloc().
*/
/*
* Convert request response back to native format. Note that in the
* error case, this value is not touched by the SSAM core, i.e.
* 'ret_le' will be zero as specified in its initialization.
*/
*ret = le32_to_cpu(ret_le);
return status;
}
Note that |ssam_request_sync| in its essence is a wrapper over lower-level
request primitives, which may also be used to perform requests. Refer to its
implementation and documentation for more details.
An arguably more user-friendly way of defining such functions is by using
one of the generator macros, for example via:
.. code-block:: c
SSAM_DEFINE_SYNC_REQUEST_W(__ssam_tmp_perf_mode_set, __le32, {
.target_category = SSAM_SSH_TC_TMP,
.target_id = 0x01,
.command_id = 0x03,
.instance_id = 0x00,
});
This example defines a function
.. code-block:: c
int __ssam_tmp_perf_mode_set(struct ssam_controller *ctrl, const __le32 *arg);
executing the specified request, with the controller passed in when calling
said function. In this example, the argument is provided via the ``arg``
pointer. Note that the generated function allocates the message buffer on
the stack. Thus, if the argument provided via the request is large, these
kinds of macros should be avoided. Also note that, in contrast to the
previous non-macro example, this function does not do any endianness
conversion, which has to be handled by the caller. Apart from those
differences the function generated by the macro is similar to the one
provided in the non-macro example above.
The full list of such function-generating macros is
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_N` for requests without return value and
without argument.
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_R` for requests with return value but no
argument.
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_W` for requests without return value but
with argument.
Refer to their respective documentation for more details. For each one of
these macros, a special variant is provided, which targets request types
applicable to multiple instances of the same device type:
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_MD_N`
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_MD_R`
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_MD_W`
These macros differ from the previously mentioned versions in that the
device target and instance IDs are not fixed for the generated function,
but instead have to be provided by the caller of said function.
Additionally, variants for direct use with client devices, i.e.
|ssam_device|, are also provided. These can, for example, be used as
follows:
.. code-block:: c
SSAM_DEFINE_SYNC_REQUEST_CL_R(ssam_bat_get_sta, __le32, {
.target_category = SSAM_SSH_TC_BAT,
.command_id = 0x01,
});
This invocation of the macro defines a function
.. code-block:: c
int ssam_bat_get_sta(struct ssam_device *sdev, __le32 *ret);
executing the specified request, using the device IDs and controller given
in the client device. The full list of such macros for client devices is:
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_CL_N`
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_CL_R`
- :c:func:`SSAM_DEFINE_SYNC_REQUEST_CL_W`
Handling Events
===============
To receive events from the SAM EC, an event notifier must be registered for
the desired event via |ssam_notifier_register|. The notifier must be
unregistered via |ssam_notifier_unregister| once it is not required any
more.
Event notifiers are registered by providing (at minimum) a callback to call
in case an event has been received, the registry specifying how the event
should be enabled, an event ID specifying for which target category and,
optionally and depending on the registry used, for which instance ID events
should be enabled, and finally, flags describing how the EC will send these
events. If the specific registry does not enable events by instance ID, the
instance ID must be set to zero. Additionally, a priority for the respective
notifier may be specified, which determines its order in relation to any
other notifier registered for the same target category.
By default, event notifiers will receive all events for the specific target
category, regardless of the instance ID specified when registering the
notifier. The core may be instructed to only call a notifier if the target
ID or instance ID (or both) of the event match the ones implied by the
notifier IDs (in case of target ID, the target ID of the registry), by
providing an event mask (see |ssam_event_mask|).
In general, the target ID of the registry is also the target ID of the
enabled event (with the notable exception being keyboard input events on the
Surface Laptop 1 and 2, which are enabled via a registry with target ID 1,
but provide events with target ID 2).
A full example for registering an event notifier and handling received
events is provided below:
.. code-block:: c
u32 notifier_callback(struct ssam_event_notifier *nf,
const struct ssam_event *event)
{
int status = ...
/* Handle the event here ... */
/* Convert return value and indicate that we handled the event. */
return ssam_notifier_from_errno(status) | SSAM_NOTIF_HANDLED;
}
int setup_notifier(struct ssam_device *sdev,
struct ssam_event_notifier *nf)
{
/* Set priority wrt. other handlers of same target category. */
nf->base.priority = 1;
/* Set event/notifier callback. */
nf->base.fn = notifier_callback;
/* Specify event registry, i.e. how events get enabled/disabled. */
nf->event.reg = SSAM_EVENT_REGISTRY_KIP;
/* Specify which event to enable/disable */
nf->event.id.target_category = sdev->uid.category;
nf->event.id.instance = sdev->uid.instance;
/*
* Specify for which events the notifier callback gets executed.
* This essentially tells the core if it can skip notifiers that
* don't have target or instance IDs matching those of the event.
*/
nf->event.mask = SSAM_EVENT_MASK_STRICT;
/* Specify event flags. */
nf->event.flags = SSAM_EVENT_SEQUENCED;
return ssam_notifier_register(sdev->ctrl, nf);
}
Multiple event notifiers can be registered for the same event. The event
handler core takes care of enabling and disabling events when notifiers are
registered and unregistered, by keeping track of how many notifiers for a
specific event (combination of registry, event target category, and event
instance ID) are currently registered. This means that a specific event will
be enabled when the first notifier for it is being registered and disabled
when the last notifier for it is being unregistered. Note that the event
flags are therefore only used for the first registered notifier; however,
one should take care that notifiers for a specific event are always
registered with the same flags, as doing otherwise is considered a bug.

@@ -0,0 +1,87 @@
.. SPDX-License-Identifier: GPL-2.0+
.. |u8| replace:: :c:type:`u8 <u8>`
.. |u16| replace:: :c:type:`u16 <u16>`
.. |ssam_cdev_request| replace:: :c:type:`struct ssam_cdev_request <ssam_cdev_request>`
.. |ssam_cdev_request_flags| replace:: :c:type:`enum ssam_cdev_request_flags <ssam_cdev_request_flags>`
==============================
User-Space EC Interface (cdev)
==============================
The ``surface_aggregator_cdev`` module provides a misc-device for the SSAM
controller to allow for a (more or less) direct connection from user-space to
the SAM EC. It is intended to be used for development and debugging, and
therefore should not be used or relied upon in any other way. Note that this
module is not loaded automatically, but instead must be loaded manually.
The provided interface is accessible through the ``/dev/surface/aggregator``
device-file. All functionality of this interface is provided via IOCTLs.
These IOCTLs and their respective input/output parameter structs are defined in
``include/uapi/linux/surface_aggregator/cdev.h``.
A small python library and scripts for accessing this interface can be found
at https://github.com/linux-surface/surface-aggregator-module/tree/master/scripts/ssam.
Controller IOCTLs
=================
The following IOCTLs are provided:
.. flat-table:: Controller IOCTLs
:widths: 1 1 1 1 4
:header-rows: 1
* - Type
- Number
- Direction
- Name
- Description
* - ``0xA5``
- ``1``
- ``WR``
- ``REQUEST``
- Perform synchronous SAM request.
``REQUEST``
-----------
Defined as ``_IOWR(0xA5, 1, struct ssam_cdev_request)``.
Executes a synchronous SAM request. The request specification is passed in
as argument of type |ssam_cdev_request|, which is then written to/modified
by the IOCTL to return status and result of the request.
Request payload data must be allocated separately and is passed in via the
``payload.data`` and ``payload.length`` members. If a response is required,
the response buffer must be allocated by the caller and passed in via the
``response.data`` member. The ``response.length`` member must be set to the
capacity of this buffer, or to zero if no response is required. Upon
completion of the request, the call will write the response to the response
buffer (if its capacity allows it) and overwrite the length field with the
actual size of the response, in bytes.
Additionally, if the request has a response, this must be indicated via the
request flags, as is done with in-kernel requests. Request flags can be set
via the ``flags`` member and the values correspond to the values found in
|ssam_cdev_request_flags|.
Finally, the status of the request itself is returned in the ``status``
member (a negative errno value indicating failure). Note that failure
indication of the IOCTL is separated from failure indication of the request:
The IOCTL returns a negative status code if anything failed during setup of
the request (``-EFAULT``) or if the provided argument or any of its fields
are invalid (``-EINVAL``). In this case, the status value of the request
argument may be set, providing more detail on what went wrong (e.g.
``-ENOMEM`` for out-of-memory), but this value may also be zero. The IOCTL
will return with a zero status code in case the request has been set up,
submitted, and completed (i.e. handed back to user-space) successfully from
inside the IOCTL, but the request ``status`` member may still be negative in
case the actual execution of the request failed after it has been submitted.
A full definition of the argument struct is provided below:
.. kernel-doc:: include/uapi/linux/surface_aggregator/cdev.h
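For illustration, the following Python sketch models the argument struct and
computes the ``REQUEST`` ioctl number. The field layout is an assumption
derived from the description above, not a verbatim copy of the header; the
authoritative definition lives in
``include/uapi/linux/surface_aggregator/cdev.h``.

```python
# Sketch only: struct layout below is an assumption modeled on the text
# above; consult the uapi header for the authoritative definition.
import ctypes

class SsamCdevPayload(ctypes.Structure):
    _fields_ = [
        ("data", ctypes.c_uint64),    # user-space pointer to the buffer
        ("length", ctypes.c_uint16),  # buffer length / capacity in bytes
        ("_pad", ctypes.c_uint8 * 6),
    ]

class SsamCdevRequest(ctypes.Structure):
    _fields_ = [
        ("target_category", ctypes.c_uint8),
        ("target_id", ctypes.c_uint8),
        ("command_id", ctypes.c_uint8),
        ("instance_id", ctypes.c_uint8),
        ("flags", ctypes.c_uint16),   # values from ssam_cdev_request_flags
        ("status", ctypes.c_int16),   # written back by the kernel
        ("payload", SsamCdevPayload),
        ("response", SsamCdevPayload),
    ]

def _iowr(ioc_type, nr, size):
    """Encode an _IOWR() number (asm-generic layout: dir|size|type|nr)."""
    IOC_WRITE, IOC_READ = 1, 2
    return ((IOC_READ | IOC_WRITE) << 30) | (size << 16) | (ioc_type << 8) | nr

SSAM_CDEV_REQUEST = _iowr(0xA5, 1, ctypes.sizeof(SsamCdevRequest))
```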

.. SPDX-License-Identifier: GPL-2.0+
===========================
Client Driver Documentation
===========================
This is the documentation for client drivers themselves. Refer to
:doc:`../client` for documentation on how to write client drivers.
.. toctree::
   :maxdepth: 1

   cdev
   san

.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`

.. SPDX-License-Identifier: GPL-2.0+
.. |san_client_link| replace:: :c:func:`san_client_link`
.. |san_dgpu_notifier_register| replace:: :c:func:`san_dgpu_notifier_register`
.. |san_dgpu_notifier_unregister| replace:: :c:func:`san_dgpu_notifier_unregister`
===================
Surface ACPI Notify
===================
The Surface ACPI Notify (SAN) device provides the bridge between ACPI and
SAM controller. Specifically, ACPI code can execute requests and handle
battery and thermal events via this interface. In addition to this, events
relating to the discrete GPU (dGPU) of the Surface Book 2 can be sent from
ACPI code (note: the Surface Book 3 uses a different method for this). The
only currently known event sent via this interface is a dGPU power-on
notification. While this driver handles the former part internally, it only
relays the dGPU events to any other driver interested via its public API and
does not handle them.
The public interface of this driver is split into two parts: Client
registration and notifier-block registration.
A client to the SAN interface can be linked as consumer to the SAN device
via |san_client_link|. This can be used to ensure that a client
receiving dGPU events does not miss any events due to the SAN interface not
being set up as this forces the client driver to unbind once the SAN driver
is unbound.
Notifier-blocks can be registered by any device for as long as the module is
loaded, regardless of being linked as client or not. Registration is done
with |san_dgpu_notifier_register|. If the notifier is not needed any more, it
should be unregistered via |san_dgpu_notifier_unregister|.
Consult the API documentation below for more details.
API Documentation
=================
.. kernel-doc:: include/linux/surface_acpi_notify.h

.. kernel-doc:: drivers/platform/surface/surface_acpi_notify.c
   :export:

.. SPDX-License-Identifier: GPL-2.0+
=======================================
Surface System Aggregator Module (SSAM)
=======================================
.. toctree::
   :maxdepth: 2

   overview
   client
   clients/index
   ssh
   internal

.. only:: subproject and html

   Indices
   =======

   * :ref:`genindex`

.. SPDX-License-Identifier: GPL-2.0+
==========================
Internal API Documentation
==========================
.. contents::
   :depth: 2
Packet Transport Layer
======================
.. kernel-doc:: drivers/platform/surface/aggregator/ssh_parser.h
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_parser.c
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_msgb.h
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_packet_layer.h
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_packet_layer.c
   :internal:

Request Transport Layer
=======================

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_request_layer.h
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/ssh_request_layer.c
   :internal:

Controller
==========

.. kernel-doc:: drivers/platform/surface/aggregator/controller.h
   :internal:

.. kernel-doc:: drivers/platform/surface/aggregator/controller.c
   :internal:

Client Device Bus
=================

.. kernel-doc:: drivers/platform/surface/aggregator/bus.c
   :internal:

Core
====

.. kernel-doc:: drivers/platform/surface/aggregator/core.c
   :internal:

Trace Helpers
=============

.. kernel-doc:: drivers/platform/surface/aggregator/trace.h

.. SPDX-License-Identifier: GPL-2.0+
.. |ssh_ptl| replace:: :c:type:`struct ssh_ptl <ssh_ptl>`
.. |ssh_ptl_submit| replace:: :c:func:`ssh_ptl_submit`
.. |ssh_ptl_cancel| replace:: :c:func:`ssh_ptl_cancel`
.. |ssh_ptl_shutdown| replace:: :c:func:`ssh_ptl_shutdown`
.. |ssh_ptl_rx_rcvbuf| replace:: :c:func:`ssh_ptl_rx_rcvbuf`
.. |ssh_rtl| replace:: :c:type:`struct ssh_rtl <ssh_rtl>`
.. |ssh_rtl_submit| replace:: :c:func:`ssh_rtl_submit`
.. |ssh_rtl_cancel| replace:: :c:func:`ssh_rtl_cancel`
.. |ssh_rtl_shutdown| replace:: :c:func:`ssh_rtl_shutdown`
.. |ssh_packet| replace:: :c:type:`struct ssh_packet <ssh_packet>`
.. |ssh_packet_get| replace:: :c:func:`ssh_packet_get`
.. |ssh_packet_put| replace:: :c:func:`ssh_packet_put`
.. |ssh_packet_ops| replace:: :c:type:`struct ssh_packet_ops <ssh_packet_ops>`
.. |ssh_packet_base_priority| replace:: :c:type:`enum ssh_packet_base_priority <ssh_packet_base_priority>`
.. |ssh_packet_flags| replace:: :c:type:`enum ssh_packet_flags <ssh_packet_flags>`
.. |SSH_PACKET_PRIORITY| replace:: :c:func:`SSH_PACKET_PRIORITY`
.. |ssh_frame| replace:: :c:type:`struct ssh_frame <ssh_frame>`
.. |ssh_command| replace:: :c:type:`struct ssh_command <ssh_command>`
.. |ssh_request| replace:: :c:type:`struct ssh_request <ssh_request>`
.. |ssh_request_get| replace:: :c:func:`ssh_request_get`
.. |ssh_request_put| replace:: :c:func:`ssh_request_put`
.. |ssh_request_ops| replace:: :c:type:`struct ssh_request_ops <ssh_request_ops>`
.. |ssh_request_init| replace:: :c:func:`ssh_request_init`
.. |ssh_request_flags| replace:: :c:type:`enum ssh_request_flags <ssh_request_flags>`
.. |ssam_controller| replace:: :c:type:`struct ssam_controller <ssam_controller>`
.. |ssam_device| replace:: :c:type:`struct ssam_device <ssam_device>`
.. |ssam_device_driver| replace:: :c:type:`struct ssam_device_driver <ssam_device_driver>`
.. |ssam_client_bind| replace:: :c:func:`ssam_client_bind`
.. |ssam_client_link| replace:: :c:func:`ssam_client_link`
.. |ssam_request_sync| replace:: :c:type:`struct ssam_request_sync <ssam_request_sync>`
.. |ssam_event_registry| replace:: :c:type:`struct ssam_event_registry <ssam_event_registry>`
.. |ssam_event_id| replace:: :c:type:`struct ssam_event_id <ssam_event_id>`
.. |ssam_nf| replace:: :c:type:`struct ssam_nf <ssam_nf>`
.. |ssam_nf_refcount_inc| replace:: :c:func:`ssam_nf_refcount_inc`
.. |ssam_nf_refcount_dec| replace:: :c:func:`ssam_nf_refcount_dec`
.. |ssam_notifier_register| replace:: :c:func:`ssam_notifier_register`
.. |ssam_notifier_unregister| replace:: :c:func:`ssam_notifier_unregister`
.. |ssam_cplt| replace:: :c:type:`struct ssam_cplt <ssam_cplt>`
.. |ssam_event_queue| replace:: :c:type:`struct ssam_event_queue <ssam_event_queue>`
.. |ssam_request_sync_submit| replace:: :c:func:`ssam_request_sync_submit`
=====================
Core Driver Internals
=====================
Architectural overview of the Surface System Aggregator Module (SSAM) core
and Surface Serial Hub (SSH) driver. For the API documentation, refer to:
.. toctree::
   :maxdepth: 2

   internal-api
Overview
========
The SSAM core implementation is structured in layers, somewhat following the
SSH protocol structure:
Lower-level packet transport is implemented in the *packet transport layer
(PTL)*, directly building on top of the serial device (serdev)
infrastructure of the kernel. As the name indicates, this layer deals with
the packet transport logic and handles things like packet validation, packet
acknowledgment (ACKing), packet (retransmission) timeouts, and relaying
packet payloads to higher-level layers.
Above this sits the *request transport layer (RTL)*. This layer is centered
around command-type packet payloads, i.e. requests (sent from host to EC),
responses of the EC to those requests, and events (sent from EC to host).
It, specifically, distinguishes events from request responses, matches
responses to their corresponding requests, and implements request timeouts.
The *controller* layer builds on top of this and essentially decides
how request responses and, especially, events are dealt with. It provides an
event notifier system, handles event activation/deactivation, provides a
workqueue for event and asynchronous request completion, and also manages
the message counters required for building command messages (``SEQ``,
``RQID``). This layer basically provides a fundamental interface to the SAM
EC for use in other kernel drivers.
While the controller layer already provides an interface for other kernel
drivers, the client *bus* extends this interface to provide support for
native SSAM devices, i.e. devices that are not defined in ACPI and not
implemented as platform devices. Via |ssam_device| and |ssam_device_driver|,
it simplifies management of client devices and client drivers.
Refer to :doc:`client` for documentation regarding the client device/driver
API and interface options for other kernel drivers. It is recommended to
familiarize oneself with that chapter and the :doc:`ssh` before continuing
with the architectural overview below.
Packet Transport Layer
======================
The packet transport layer is represented via |ssh_ptl| and is structured
around the following key concepts:
Packets
-------
Packets are the fundamental transmission unit of the SSH protocol. They are
managed by the packet transport layer, which is essentially the lowest layer
of the driver and is built upon by other components of the SSAM core.
Packets to be transmitted by the SSAM core are represented via |ssh_packet|
(in contrast, packets received by the core do not have any specific
structure and are managed entirely via the raw |ssh_frame|).
This structure contains the required fields to manage the packet inside the
transport layer, as well as a reference to the buffer containing the data to
be transmitted (i.e. the message wrapped in |ssh_frame|). Most notably, it
contains an internal reference count, which is used for managing its
lifetime (accessible via |ssh_packet_get| and |ssh_packet_put|). When this
counter reaches zero, the ``release()`` callback provided to the packet via
its |ssh_packet_ops| reference is executed, which may then deallocate the
packet or its enclosing structure (e.g. |ssh_request|).
In addition to the ``release`` callback, the |ssh_packet_ops| reference also
provides a ``complete()`` callback, which is run once the packet has been
completed and provides the status of this completion, i.e. zero on success
or a negative errno value in case of an error. Once the packet has been
submitted to the packet transport layer, the ``complete()`` callback is
always guaranteed to be executed before the ``release()`` callback, i.e. the
packet will always be completed, either successfully, with an error, or due
to cancellation, before it will be released.
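The lifetime rules above can be modeled with a small Python sketch (not
kernel code): ``complete()`` runs at most once, and ``release()`` fires only
when the last reference is dropped.

```python
# Toy model of ssh_packet lifetime: complete() runs at most once
# (cf. SSH_PACKET_SF_COMPLETED_BIT) and always before release(), which
# fires when the reference count reaches zero.
class Packet:
    def __init__(self, ops):
        self.refcount = 1          # the submitter holds the initial reference
        self.completed = False
        self.ops = ops             # dict with 'complete' and 'release' callbacks

    def get(self):
        self.refcount += 1
        return self

    def put(self):
        self.refcount -= 1
        if self.refcount == 0:
            self.ops["release"](self)

    def complete(self, status):
        if not self.completed:     # second completion attempt is ignored
            self.completed = True
            self.ops["complete"](self, status)

log = []
pkt = Packet({
    "complete": lambda p, status: log.append(("complete", status)),
    "release": lambda p: log.append(("release",)),
})
pkt.complete(0)   # completed successfully
pkt.complete(0)   # ignored: only runs once
pkt.put()         # last reference dropped -> release() runs
```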
The state of a packet is managed via its ``state`` flags
(|ssh_packet_flags|), which also contains the packet type. In particular,
the following bits are noteworthy:
* ``SSH_PACKET_SF_LOCKED_BIT``: This bit is set when completion, either
  through error or success, is imminent. It indicates that no further
  references of the packet should be taken and any existing references
  should be dropped as soon as possible. The process setting this bit is
  responsible for removing any references to this packet from the packet
  queue and pending set.

* ``SSH_PACKET_SF_COMPLETED_BIT``: This bit is set by the process running the
  ``complete()`` callback and is used to ensure that this callback only runs
  once.

* ``SSH_PACKET_SF_QUEUED_BIT``: This bit is set when the packet is queued on
  the packet queue and cleared when it is dequeued.

* ``SSH_PACKET_SF_PENDING_BIT``: This bit is set when the packet is added to
  the pending set and cleared when it is removed from it.
Packet Queue
------------
The packet queue is the first of the two fundamental collections in the
packet transport layer. It is a priority queue, with priority of the
respective packets based on the packet type (major) and number of tries
(minor). See |SSH_PACKET_PRIORITY| for more details on the priority value.
All packets to be transmitted by the transport layer must be submitted to
this queue via |ssh_ptl_submit|. Note that this includes control packets
sent by the transport layer itself. Internally, data packets can be
re-submitted to this queue due to timeouts or NAK packets sent by the EC.
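The ordering can be illustrated with a Python sketch of a priority queue
keyed by packet type (major) and number of tries (minor). The base-priority
values and the bit layout below are illustrative assumptions, not the
driver's actual |SSH_PACKET_PRIORITY| definition.

```python
# Illustrative sketch: packet type forms the major priority part, the try
# count the minor part. Concrete values are assumptions for this example.
import heapq

BASE = {"DATA": 0, "ACK": 1, "NAK": 2}  # assumed base priorities

def priority(base, tries):
    return (BASE[base] << 4) | (tries & 0x0F)

queue = []
seq = 0  # tie-breaker preserving insertion order for equal priorities

def submit(name, base, tries=0):
    global seq
    heapq.heappush(queue, (-priority(base, tries), seq, name))
    seq += 1

submit("data-1", "DATA")
submit("ack-1", "ACK")                  # control packet, outranks data
submit("data-2", "DATA", tries=1)       # re-submitted, outranks fresh data
order = [heapq.heappop(queue)[2] for _ in range(len(queue))]
```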
Pending Set
-----------
The pending set is the second of the two fundamental collections in the
packet transport layer. It stores references to packets that have already
been transmitted, but wait for acknowledgment (e.g. the corresponding ACK
packet) by the EC.
Note that a packet may both be pending and queued if it has been
re-submitted due to a packet acknowledgment timeout or NAK. On such a
re-submission, packets are not removed from the pending set.
Transmitter Thread
------------------
The transmitter thread is responsible for most of the actual work regarding
packet transmission. In each iteration, it (waits for and) checks if the
next packet on the queue (if any) can be transmitted and, if so, removes it
from the queue and increments its counter for the number of transmission
attempts, i.e. tries. If the packet is sequenced, i.e. requires an ACK by
the EC, the packet is added to the pending set. Next, the packet's data is
submitted to the serdev subsystem. In case of an error or timeout during
this submission, the packet is completed by the transmitter thread with the
status value of the callback set accordingly. In case the packet is
unsequenced, i.e. does not require an ACK by the EC, the packet is completed
with success on the transmitter thread.
Transmission of sequenced packets is limited by the number of concurrently
pending packets, i.e. a limit on how many packets may be waiting for an ACK
from the EC in parallel. This limit is currently set to one (see :doc:`ssh`
for the reasoning behind this). Control packets (i.e. ACK and NAK) can
always be transmitted.
Receiver Thread
---------------
Any data received from the EC is put into a FIFO buffer for further
processing. This processing happens on the receiver thread. The receiver
thread parses and validates the received message into its |ssh_frame| and
corresponding payload. It prepares and submits the necessary ACK packets
(or, on validation error or invalid data, NAK packets) for the received
messages.
This thread also handles further processing, such as matching ACK messages
to the corresponding pending packet (via sequence ID) and completing it, as
well as initiating re-submission of all currently pending packets on
receival of a NAK message (re-submission in case of a NAK is similar to
re-submission due to timeout, see below for more details on that). Note that
the successful completion of a sequenced packet will always run on the
receiver thread (whereas any failure-indicating completion will run on the
process where the failure occurred).
Any payload data is forwarded via a callback to the next upper layer, i.e.
the request transport layer.
Timeout Reaper
--------------
The packet acknowledgment timeout is a per-packet timeout for sequenced
packets, started when the respective packet begins (re-)transmission (i.e.
this timeout is armed once per transmission attempt on the transmitter
thread). It is used to trigger re-submission or, when the number of tries
has been exceeded, cancellation of the packet in question.
This timeout is handled via a dedicated reaper task, which is essentially a
work item (re-)scheduled to run when the next packet is set to time out. The
work item then checks the set of pending packets for any packets that have
exceeded the timeout and, if there are any remaining packets, re-schedules
itself to the next appropriate point in time.
If a timeout has been detected by the reaper, the packet will either be
re-submitted if it still has some remaining tries left, or completed with
``-ETIMEDOUT`` as status if not. Note that re-submission, both in this case
and when triggered by receival of a NAK, means that the packet is added to
the queue with an incremented number of tries, yielding a higher priority. The
timeout for the packet will be disabled until the next transmission attempt
and the packet remains on the pending set.
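The reaper decision described above can be sketched as a toy Python model
(not kernel code; ``MAX_TRIES`` and the expiry handling are simplified
assumptions).

```python
# Toy model of the packet timeout reaper: on timeout, a packet with tries
# remaining is re-queued (and stays pending); otherwise it is completed
# with -ETIMEDOUT. MAX_TRIES is an illustrative limit.
ETIMEDOUT = 110
MAX_TRIES = 3

def reap(pending, now, queue, completed):
    for pkt in list(pending):
        if now < pkt["expires"]:
            continue                   # not yet timed out
        if pkt["tries"] < MAX_TRIES:
            pkt["tries"] += 1          # raises its queue priority
            pkt["expires"] = None      # re-armed on the next transmission
            queue.append(pkt)          # remains in the pending set, too
        else:
            pending.remove(pkt)
            completed.append((pkt["name"], -ETIMEDOUT))

pending = [{"name": "a", "tries": 1, "expires": 5},
           {"name": "b", "tries": 3, "expires": 5}]
queue, completed = [], []
reap(pending, now=10, queue=queue, completed=completed)
```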
Note that due to transmission and packet acknowledgment timeouts, the packet
transport layer is always guaranteed to make progress, if only through
timing out packets, and will never fully block.
Concurrency and Locking
-----------------------
There are two main locks in the packet transport layer: One guarding access
to the packet queue and one guarding access to the pending set. These
collections may only be accessed and modified under the respective lock. If
access to both collections is needed, the pending lock must be acquired
before the queue lock to avoid deadlocks.
In addition to guarding the collections, after initial packet submission
certain packet fields may only be accessed under one of the locks.
Specifically, the packet priority must only be accessed while holding the
queue lock and the packet timestamp must only be accessed while holding the
pending lock.
Other parts of the packet transport layer are guarded independently. State
flags are managed by atomic bit operations and, if necessary, memory
barriers. Modifications to the timeout reaper work item and expiration date
are guarded by their own lock.
The reference of the packet to the packet transport layer (``ptl``) is
somewhat special. It is either set when the upper layer request is submitted
or, if there is none, when the packet is first submitted. After it is set,
it will not change its value. Functions that may run concurrently with
submission, i.e. cancellation, can not rely on the ``ptl`` reference to be
set. Access to it in these functions is guarded by ``READ_ONCE()``, whereas
setting ``ptl`` is equally guarded with ``WRITE_ONCE()`` for symmetry.
Some packet fields may be read outside of the respective locks guarding
them, specifically priority and state for tracing. In those cases, proper
access is ensured by employing ``WRITE_ONCE()`` and ``READ_ONCE()``. Such
read-only access is only allowed when stale values are not critical.
With respect to the interface for higher layers, packet submission
(|ssh_ptl_submit|), packet cancellation (|ssh_ptl_cancel|), data receival
(|ssh_ptl_rx_rcvbuf|), and layer shutdown (|ssh_ptl_shutdown|) may always be
executed concurrently with respect to each other. Note that packet
submission may not run concurrently with itself for the same packet.
Equally, shutdown and data receival may also not run concurrently with
themselves (but may run concurrently with each other).
Request Transport Layer
=======================
The request transport layer is represented via |ssh_rtl| and builds on top
of the packet transport layer. It deals with requests, i.e. SSH packets sent
by the host containing a |ssh_command| as frame payload. This layer
separates responses to requests from events, which are also sent by the EC
via a |ssh_command| payload. While responses are handled in this layer,
events are relayed to the next upper layer, i.e. the controller layer, via
the corresponding callback. The request transport layer is structured around
the following key concepts:
Request
-------
Requests are packets with a command-type payload, sent from host to EC to
query data from or trigger an action on it (or both simultaneously). They
are represented by |ssh_request|, wrapping the underlying |ssh_packet|
storing its message data (i.e. SSH frame with command payload). Note that
all top-level representations, e.g. |ssam_request_sync|, are built upon this
struct.
As |ssh_request| extends |ssh_packet|, its lifetime is also managed by the
reference counter inside the packet struct (which can be accessed via
|ssh_request_get| and |ssh_request_put|). Once the counter reaches zero, the
``release()`` callback of the |ssh_request_ops| reference of the request is
called.
Requests can have an optional response that is equally sent via an SSH
message with command-type payload (from EC to host). The party constructing
the request must know if a response is expected and mark this in the request
flags provided to |ssh_request_init|, so that the request transport layer
can wait for this response.
Similar to |ssh_packet|, |ssh_request| also has a ``complete()`` callback
provided via its request ops reference and is guaranteed to be completed
before it is released once it has been submitted to the request transport
layer via |ssh_rtl_submit|. For a request without a response, successful
completion will occur once the underlying packet has been successfully
transmitted by the packet transport layer (i.e. from within the packet
completion callback). For a request with response, successful completion
will occur once the response has been received and matched to the request
via its request ID (which happens on the packet layer's data-received
callback running on the receiver thread). If the request is completed with
an error, the status value will be set to the corresponding (negative) errno
value.
The state of a request is again managed via its ``state`` flags
(|ssh_request_flags|), which also encode the request type. In particular,
the following bits are noteworthy:
* ``SSH_REQUEST_SF_LOCKED_BIT``: This bit is set when completion, either
  through error or success, is imminent. It indicates that no further
  references of the request should be taken and any existing references
  should be dropped as soon as possible. The process setting this bit is
  responsible for removing any references to this request from the request
  queue and pending set.

* ``SSH_REQUEST_SF_COMPLETED_BIT``: This bit is set by the process running the
  ``complete()`` callback and is used to ensure that this callback only runs
  once.

* ``SSH_REQUEST_SF_QUEUED_BIT``: This bit is set when the request is queued on
  the request queue and cleared when it is dequeued.

* ``SSH_REQUEST_SF_PENDING_BIT``: This bit is set when the request is added to
  the pending set and cleared when it is removed from it.
Request Queue
-------------
The request queue is the first of the two fundamental collections in the
request transport layer. In contrast to the packet queue of the packet
transport layer, it is not a priority queue and the simple
first-come-first-serve principle applies.
All requests to be transmitted by the request transport layer must be
submitted to this queue via |ssh_rtl_submit|. Once submitted, requests may
not be re-submitted, and will not be re-submitted automatically on timeout.
Instead, the request is completed with a timeout error. If desired, the
caller can create and submit a new request for another try, but it must not
submit the same request again.
Pending Set
-----------
The pending set is the second of the two fundamental collections in the
request transport layer. This collection stores references to all pending
requests, i.e. requests awaiting a response from the EC (similar to what the
pending set of the packet transport layer does for packets).
Transmitter Task
----------------
The transmitter task is scheduled when a new request is available for
transmission. It checks if the next request on the request queue can be
transmitted and, if so, submits its underlying packet to the packet
transport layer. This check ensures that only a limited number of
requests can be pending, i.e. waiting for a response, at the same time. If
the request requires a response, the request is added to the pending set
before its packet is submitted.
Packet Completion Callback
--------------------------
The packet completion callback is executed once the underlying packet of a
request has been completed. In case of an error completion, the
corresponding request is completed with the error value provided in this
callback.
On successful packet completion, further processing depends on the request.
If the request expects a response, it is marked as transmitted and the
request timeout is started. If the request does not expect a response, it is
completed with success.
Data-Received Callback
----------------------
The data received callback notifies the request transport layer of data
being received by the underlying packet transport layer via a data-type
frame. In general, this is expected to be a command-type payload.
If the request ID of the command is one of the request IDs reserved for
events (one to ``SSH_NUM_EVENTS``, inclusively), it is forwarded to the
event callback registered in the request transport layer. If the request ID
indicates a response to a request, the respective request is looked up in
the pending set and, if found and marked as transmitted, completed with
success.
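The dispatch rule can be sketched in Python as follows; the concrete
``SSH_NUM_EVENTS`` value is an assumption for illustration.

```python
# Sketch of the data-received dispatch: RQIDs in 1..SSH_NUM_EVENTS are
# reserved for events and forwarded to the event callback; other RQIDs are
# matched against the pending set and complete the request on success.
SSH_NUM_EVENTS = 34  # assumed value for this illustration

def dispatch(rqid, pending, events, completed):
    if 1 <= rqid <= SSH_NUM_EVENTS:
        events.append(rqid)                 # forward to the event callback
        return
    req = pending.pop(rqid, None)
    if req is not None and req["transmitted"]:
        completed.append((rqid, 0))         # complete with success

pending = {100: {"transmitted": True}}
events, completed = [], []
dispatch(5, pending, events, completed)     # event RQID
dispatch(100, pending, events, completed)   # response to a pending request
```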
Timeout Reaper
--------------
The request-response-timeout is a per-request timeout for requests expecting
a response. It is used to ensure that a request does not wait indefinitely
on a response from the EC and is started after the underlying packet has
been successfully completed.
This timeout is, similar to the packet acknowledgment timeout on the packet
transport layer, handled via a dedicated reaper task. This task is
essentially a work-item (re-)scheduled to run when the next request is set
to time out. The work item then scans the set of pending requests for any
requests that have timed out and completes them with ``-ETIMEDOUT`` as
status. Requests will not be re-submitted automatically. Instead, the issuer
of the request must construct and submit a new request, if so desired.
Note that this timeout, in combination with packet transmission and
acknowledgment timeouts, guarantees that the request layer will always make
progress, even if only through timing out packets, and never fully block.
Concurrency and Locking
-----------------------
Similar to the packet transport layer, there are two main locks in the
request transport layer: One guarding access to the request queue and one
guarding access to the pending set. These collections may only be accessed
and modified under the respective lock.
Other parts of the request transport layer are guarded independently. State
flags are (again) managed by atomic bit operations and, if necessary, memory
barriers. Modifications to the timeout reaper work item and expiration date
are guarded by their own lock.
Some request fields may be read outside of the respective locks guarding
them, specifically the state for tracing. In those cases, proper access is
ensured by employing ``WRITE_ONCE()`` and ``READ_ONCE()``. Such read-only
access is only allowed when stale values are not critical.
With respect to the interface for higher layers, request submission
(|ssh_rtl_submit|), request cancellation (|ssh_rtl_cancel|), and layer
shutdown (|ssh_rtl_shutdown|) may always be executed concurrently with
respect to each other. Note that request submission may not run concurrently
with itself for the same request (and also may only be called once per
request). Equally, shutdown may also not run concurrently with itself.
Controller Layer
================
The controller layer extends on the request transport layer to provide an
easy-to-use interface for client drivers. It is represented by
|ssam_controller| and the SSH driver. While the lower level transport layers
take care of transmitting and handling packets and requests, the controller
layer takes on more of a management role. Specifically, it handles device
initialization, power management, and event handling, including event
delivery and registration via the (event) completion system (|ssam_cplt|).
Event Registration
------------------
In general, an event (or rather a class of events) has to be explicitly
requested by the host before the EC will send it (HID input events seem to
be the exception). This is done via an event-enable request (similarly,
events should be disabled via an event-disable request once no longer
desired).
The specific request used to enable (or disable) an event is given via an
event registry, i.e. the governing authority of this event (so to speak),
represented by |ssam_event_registry|. As parameters to this request, the
target category and, depending on the event registry, instance ID of the
event to be enabled must be provided. This (optional) instance ID must be
zero if the registry does not use it. Together, target category and instance
ID form the event ID, represented by |ssam_event_id|. In short, both event
registry and event ID are required to uniquely identify a respective class
of events.
Note that a further *request ID* parameter must be provided for the
enable-event request. This parameter does not influence the class of events
being enabled, but instead is set as the request ID (RQID) on each event of
this class sent by the EC. It is used to identify events (as a limited
number of request IDs is reserved for use in events only, specifically one
to ``SSH_NUM_EVENTS`` inclusively) and also map events to their specific
class. Currently, the controller always sets this parameter to the target
category specified in |ssam_event_id|.
As multiple client drivers may rely on the same (or overlapping) classes of
events and enable/disable calls are strictly binary (i.e. on/off), the
controller has to manage access to these events. It does so via reference
counting, storing the counter inside an RB-tree based mapping with event
registry and ID as key (there is no known list of valid event registry and
event ID combinations). See |ssam_nf|, |ssam_nf_refcount_inc|, and
|ssam_nf_refcount_dec| for details.
This management is done together with notifier registration (described in
the next section) via the top-level |ssam_notifier_register| and
|ssam_notifier_unregister| functions.
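The reference-counted activation described above can be modeled with a short
Python sketch, where a plain dict stands in for the RB-tree based mapping
used by the driver: the first registration for a (registry, event ID) key
sends an enable request, the last deregistration a disable request.

```python
# Model of reference-counted event activation: enable on 0 -> 1, disable
# on 1 -> 0. A dict keyed by (registry, event ID) stands in for the RB-tree.
refcounts = {}
ec_log = []  # enable/disable requests that would be sent to the EC

def notifier_register(key):
    refcounts[key] = refcounts.get(key, 0) + 1
    if refcounts[key] == 1:
        ec_log.append(("enable", key))      # first user enables the event

def notifier_unregister(key):
    refcounts[key] -= 1
    if refcounts[key] == 0:
        del refcounts[key]
        ec_log.append(("disable", key))     # last user disables the event

key = (("registry", 0x01), ("event", 0x15))  # illustrative key values
notifier_register(key)    # first user -> enable request
notifier_register(key)    # second user -> refcount only
notifier_unregister(key)
notifier_unregister(key)  # last user -> disable request
```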
Event Delivery
--------------
To receive events, a client driver has to register an event notifier via
|ssam_notifier_register|. This increments the reference counter for that
specific class of events (as detailed in the previous section), enables the
class on the EC (if it has not been enabled already), and installs the
provided notifier callback.
Notifier callbacks are stored in lists, with one (RCU) list per target
category (provided via the event ID; NB: there is a fixed known number of
target categories). There is no known association from the combination of
event registry and event ID to the command data (target ID, target category,
command ID, and instance ID) that can be provided by an event class, apart
from target category and instance ID given via the event ID.
Note that due to the way notifiers are (or rather have to be) stored, client
drivers may receive events that they have not requested and need to account
for them. Specifically, they will, by default, receive all events from the
same target category. To simplify dealing with this, filtering of events by
target ID (provided via the event registry) and instance ID (provided via
the event ID) can be requested when registering a notifier. This filtering
is applied when iterating over the notifiers at the time they are executed.
All notifier callbacks are executed on a dedicated workqueue, the so-called
completion workqueue. After an event has been received via the callback
installed in the request layer (running on the receiver thread of the packet
transport layer), it will be put on its respective event queue
(|ssam_event_queue|). From this event queue the completion work item of that
queue (running on the completion workqueue) will pick up the event and
execute the notifier callback. This is done to avoid blocking on the
receiver thread.
There is one event queue per combination of target ID and target category.
This is done to ensure that notifier callbacks are executed in sequence for
events of the same target ID and target category. Callbacks can be executed
in parallel for events with a different combination of target ID and target
category.
Concurrency and Locking
-----------------------
Most of the concurrency related safety guarantees of the controller are
provided by the lower-level request transport layer. In addition to this,
event (un-)registration is guarded by its own lock.
Access to the controller state is guarded by the state lock. This lock is a
read/write semaphore. The reader part can be used to ensure that the state
does not change while functions that depend on it staying the same (e.g.
|ssam_notifier_register|, |ssam_notifier_unregister|,
|ssam_request_sync_submit|, and derivatives) are executed, provided this
guarantee is not already given otherwise (e.g. through |ssam_client_bind| or
|ssam_client_link|). The writer part guards any transitions that will change
the state, i.e. initialization, destruction, suspension, and resumption.
The controller state may be accessed (read-only) outside the state lock for
smoke-testing against invalid API usage (e.g. in |ssam_request_sync_submit|).
Note that such checks are not supposed to (and will not) protect against all
invalid usages, but rather aim to help catch them. In those cases, proper
variable access is ensured by employing ``WRITE_ONCE()`` and ``READ_ONCE()``.
Assuming any preconditions on the state not changing have been satisfied,
all non-initialization and non-shutdown functions may run concurrently with
each other. This includes |ssam_notifier_register|, |ssam_notifier_unregister|,
|ssam_request_sync_submit|, as well as all functions building on top of those.


@@ -0,0 +1,77 @@
.. SPDX-License-Identifier: GPL-2.0+
========
Overview
========
The Surface/System Aggregator Module (SAM, SSAM) is an (arguably *the*)
embedded controller (EC) on Microsoft Surface devices. It was originally
introduced on 4th-generation devices (Surface Pro 4, Surface Book 1), but
its responsibilities and feature set have since been expanded significantly
with subsequent generations.
Features and Integration
========================
Not much is currently known about SAM on 4th generation devices (Surface Pro
4, Surface Book 1), due to the use of a different communication interface
between host and EC (as detailed below). On 5th (Surface Pro 2017, Surface
Book 2, Surface Laptop 1) and later generation devices, SAM is responsible
for providing battery information (both current status and static values,
such as maximum capacity etc.), as well as an assortment of temperature
sensors (e.g. skin temperature) and cooling/performance-mode setting to the
host. On the Surface Book 2, specifically, it additionally provides an
interface for properly handling clipboard detachment (i.e. separating the
display part from the keyboard part of the device), while on the Surface
Laptop 1 and 2 it is required for keyboard HID input. This HID subsystem has been
restructured for 7th generation devices and on those, specifically Surface
Laptop 3 and Surface Book 3, is responsible for all major HID input (i.e.
keyboard and touchpad).
While features have not changed much on a coarse level since the 5th
generation, internal interfaces have undergone some rather large changes. On
5th and 6th generation devices, both battery and temperature information is
exposed to ACPI via a shim driver (referred to as Surface ACPI Notify, or
SAN), translating ACPI generic serial bus write-/read-accesses to SAM
requests. On 7th generation devices, this additional layer is gone and these
devices require a driver hooking directly into the SAM interface. Equally,
on newer generations, fewer devices are declared in ACPI, making them a bit
harder to discover and requiring us to hard-code a sort of device registry.
Due to this, an SSAM bus and subsystem with client devices
(:c:type:`struct ssam_device <ssam_device>`) has been implemented.
Communication
=============
The type of communication interface between host and EC depends on the
generation of the Surface device. On 4th generation devices, host and EC
communicate via HID, specifically using a HID-over-I2C device, whereas on
5th and later generations, communication takes place via a USART serial
device. In accordance with the drivers found on other operating systems, we
refer to the serial device and its driver as Surface Serial Hub (SSH). When
needed, we differentiate between both types of SAM by referring to them as
SAM-over-SSH and SAM-over-HID.
Currently, this subsystem only supports SAM-over-SSH. The SSH communication
interface is described in more detail below. The HID interface has not been
reverse engineered yet and it is, at the moment, unclear how many (and
which) concepts of the SSH interface detailed below can be transferred to
it.
Surface Serial Hub
------------------
As already elaborated above, the Surface Serial Hub (SSH) is the
communication interface for SAM on 5th- and all later-generation Surface
devices. On the highest level, communication can be separated into two main
types: Requests, messages sent from host to EC that may trigger a direct
response from the EC (explicitly associated with the request), and events
(sometimes also referred to as notifications), sent from EC to host without
being a direct response to a previous request. We may also refer to requests
without response as commands. In general, events need to be enabled via one
of multiple dedicated requests before they are sent by the EC.
See :doc:`ssh` for a more technical protocol documentation and
:doc:`internal` for an overview of the internal driver architecture.


@@ -0,0 +1,344 @@
.. SPDX-License-Identifier: GPL-2.0+
.. |u8| replace:: :c:type:`u8 <u8>`
.. |u16| replace:: :c:type:`u16 <u16>`
.. |TYPE| replace:: ``TYPE``
.. |LEN| replace:: ``LEN``
.. |SEQ| replace:: ``SEQ``
.. |SYN| replace:: ``SYN``
.. |NAK| replace:: ``NAK``
.. |ACK| replace:: ``ACK``
.. |DATA| replace:: ``DATA``
.. |DATA_SEQ| replace:: ``DATA_SEQ``
.. |DATA_NSQ| replace:: ``DATA_NSQ``
.. |TC| replace:: ``TC``
.. |TID| replace:: ``TID``
.. |IID| replace:: ``IID``
.. |RQID| replace:: ``RQID``
.. |CID| replace:: ``CID``
===========================
Surface Serial Hub Protocol
===========================
The Surface Serial Hub (SSH) is the central communication interface for the
embedded Surface Aggregator Module controller (SAM or EC), found on newer
Surface generations. We will refer to this protocol and interface as
SAM-over-SSH, as opposed to SAM-over-HID for the older generations.
On Surface devices with SAM-over-SSH, SAM is connected to the host via UART
and defined in ACPI as a device with ID ``MSHW0084``. On these devices,
significant functionality is provided via SAM, including access to battery
and power information and events, thermal read-outs and events, and many
more. For Surface Laptops, keyboard input is handled via HID directed
through SAM; on the Surface Laptop 3 and Surface Book 3 this also includes
touchpad input.
Note that the standard disclaimer for this subsystem also applies to this
document: All of this has been reverse-engineered and may thus be erroneous
and/or incomplete.
All CRCs used in the following are two-byte ``crc_ccitt_false(0xffff, ...)``.
All multi-byte values are little-endian, there is no implicit padding between
values.
SSH Packet Protocol: Definitions
================================
The fundamental communication unit of the SSH protocol is a frame
(:c:type:`struct ssh_frame <ssh_frame>`). A frame consists of the following
fields, packed together and in order:
.. flat-table:: SSH Frame
:widths: 1 1 4
:header-rows: 1
* - Field
- Type
- Description
* - |TYPE|
- |u8|
- Type identifier of the frame.
* - |LEN|
- |u16|
- Length of the payload associated with the frame.
* - |SEQ|
- |u8|
- Sequence ID (see explanation below).
Each frame structure is followed by a CRC over this structure. The CRC over
the frame structure (|TYPE|, |LEN|, and |SEQ| fields) is placed directly
after the frame structure and before the payload. The payload is followed by
its own CRC (over all payload bytes). If the payload is not present (i.e.
the frame has ``LEN=0``), the CRC of the payload is still present and will
evaluate to ``0xffff``. The |LEN| field does not include any of the CRCs; it
equals the number of bytes in between the CRC of the frame and the CRC of the
payload.
Additionally, the following fixed two-byte sequences are used:
.. flat-table:: SSH Byte Sequences
:widths: 1 1 4
:header-rows: 1
* - Name
- Value
- Description
* - |SYN|
- ``[0xAA, 0x55]``
- Synchronization bytes.
A message consists of |SYN|, followed by the frame (|TYPE|, |LEN|, |SEQ| and
CRC) and, if specified in the frame (i.e. ``LEN > 0``), payload bytes,
followed finally, regardless of whether the payload is present, by the
payload CRC. The
messages corresponding to an exchange are, in part, identified by having the
same sequence ID (|SEQ|), stored inside the frame (more on this in the next
section). The sequence ID is a wrapping counter.
A frame can have the following types
(:c:type:`enum ssh_frame_type <ssh_frame_type>`):
.. flat-table:: SSH Frame Types
:widths: 1 1 4
:header-rows: 1
* - Name
- Value
- Short Description
* - |NAK|
- ``0x04``
- Sent on error in previously received message.
* - |ACK|
- ``0x40``
- Sent to acknowledge receipt of a |DATA| frame.
* - |DATA_SEQ|
- ``0x80``
- Sent to transfer data. Sequenced.
* - |DATA_NSQ|
- ``0x00``
- Same as |DATA_SEQ|, but does not need to be ACKed.
Both |NAK|- and |ACK|-type frames are used to control the flow of messages and
thus do not carry a payload. |DATA_SEQ|- and |DATA_NSQ|-type frames on the
other hand must carry a payload. The flow sequence and interaction of
different frame types will be described in more depth in the next section.
SSH Packet Protocol: Flow Sequence
==================================
Each exchange begins with |SYN|, followed by a |DATA_SEQ|- or
|DATA_NSQ|-type frame, followed by its CRC, payload, and payload CRC. In
case of a |DATA_NSQ|-type frame, the exchange is then finished. In case of a
|DATA_SEQ|-type frame, the receiving party has to acknowledge receipt of
the frame by responding with a message containing an |ACK|-type frame with
the same sequence ID as the |DATA| frame. In other words, the sequence ID of
the |ACK| frame specifies the |DATA| frame to be acknowledged. In case of an
error, e.g. an invalid CRC, the receiving party responds with a message
containing a |NAK|-type frame. As the sequence ID of the previous data
frame, for which an error is indicated via the |NAK| frame, cannot be relied
upon, the sequence ID of the |NAK| frame should not be used and is set to
zero. After receipt of a |NAK| frame, the sending party should re-send all
outstanding (non-ACKed) messages.
Sequence IDs are not synchronized between the two parties, meaning that they
are managed independently for each party. Identifying the messages
corresponding to a single exchange thus relies on the sequence ID as well as
the type of the message, and the context. Specifically, the sequence ID is
used to associate an ``ACK`` with its ``DATA_SEQ``-type frame, but not
``DATA_SEQ``- or ``DATA_NSQ``-type frames with other ``DATA``-type frames.
An example exchange might look like this:
::
tx: -- SYN FRAME(D) CRC(F) PAYLOAD CRC(P) -----------------------------
rx: ------------------------------------- SYN FRAME(A) CRC(F) CRC(P) --
where both frames have the same sequence ID (``SEQ``). Here, ``FRAME(D)``
indicates a |DATA_SEQ|-type frame, ``FRAME(A)`` an ``ACK``-type frame,
``CRC(F)`` the CRC over the previous frame, ``CRC(P)`` the CRC over the
previous payload. In case of an error, the exchange would look like this:
::
tx: -- SYN FRAME(D) CRC(F) PAYLOAD CRC(P) -----------------------------
rx: ------------------------------------- SYN FRAME(N) CRC(F) CRC(P) --
upon which the sender should re-send the message. ``FRAME(N)`` indicates a
|NAK|-type frame. Note that the sequence ID of the |NAK|-type frame is fixed
to zero. For |DATA_NSQ|-type frames, both exchanges are the same:
::
tx: -- SYN FRAME(DATA_NSQ) CRC(F) PAYLOAD CRC(P) ----------------------
rx: -------------------------------------------------------------------
Here, an error can be detected, but not corrected or indicated to the
sending party. These exchanges are symmetric, i.e. switching ``rx`` and
``tx`` results again in a valid exchange. Currently, no exchanges longer
than these are known.
Commands: Requests, Responses, and Events
=========================================
Commands are sent as payload inside a data frame. Currently, this is the
only known payload type of |DATA| frames, with a payload-type value of
``0x80`` (:c:type:`SSH_PLD_TYPE_CMD <ssh_payload_type>`).
The command-type payload (:c:type:`struct ssh_command <ssh_command>`)
consists of an eight-byte command structure, followed by optional and
variable-length command data. The length of this optional data is derived
from the frame payload length given in the corresponding frame, i.e. it is
``frame.len - sizeof(struct ssh_command)``. The command struct contains the
following fields, packed together and in order:
.. flat-table:: SSH Command
:widths: 1 1 4
:header-rows: 1
* - Field
- Type
- Description
* - |TYPE|
- |u8|
- Type of the payload. For commands always ``0x80``.
* - |TC|
- |u8|
- Target category.
* - |TID| (out)
- |u8|
- Target ID for outgoing (host to EC) commands.
* - |TID| (in)
- |u8|
- Target ID for incoming (EC to host) commands.
* - |IID|
- |u8|
- Instance ID.
* - |RQID|
- |u16|
- Request ID.
* - |CID|
- |u8|
- Command ID.
The command struct and data, in general, do not contain any failure
detection mechanism (e.g. CRCs); this is done solely on the frame level.
Command-type payloads are used by the host to send commands and requests to
the EC as well as by the EC to send responses and events back to the host.
We differentiate between requests (sent by the host), responses (sent by the
EC in response to a request), and events (sent by the EC without a preceding
request).
Commands and events are uniquely identified by their target category
(``TC``) and command ID (``CID``). The target category specifies a general
category for the command (e.g. system in general, vs. battery and AC, vs.
temperature, and so on), while the command ID specifies the command inside
that category. Only the combination of |TC| + |CID| is unique. Additionally,
commands have an instance ID (``IID``), which is used to differentiate
between different sub-devices. For example, ``TC=3``, ``CID=1`` is a
request to get the temperature on a thermal sensor, where |IID| specifies
the respective sensor. If the instance ID is not used, it should be set to
zero. If instance IDs are used, they, in general, start with a value of one,
whereas zero may be used for instance independent queries, if applicable. A
response to a request should have the same target category, command ID, and
instance ID as the corresponding request.
Responses are matched to their corresponding request via the request ID
(``RQID``) field. This is a 16-bit wrapping counter similar to the sequence
ID on the frames. Note that the sequence ID of the frames for a
request-response pair does not match. Only the request ID has to match.
Frame-protocol-wise, these are two separate exchanges, and may even be
separated, e.g. by an event being sent after the request but before the
response. Not all commands produce a response, and this is not detectable by
|TC| + |CID|. It is the responsibility of the issuing party to wait for a
response (or signal this to the communication framework, as is done in
SAN/ACPI via the ``SNC`` flag).
Events are identified by unique and reserved request IDs. These IDs should
not be used by the host when sending a new request. They are used on the
host to, first, detect events and, second, match them with a registered
event handler. Request IDs for events are chosen by the host and directed to
the EC when setting up and enabling an event source (via the
enable-event-source request). The EC then uses the specified request ID for
events sent from the respective source. Note that an event should still be
identified by its target category, command ID, and, if applicable, instance
ID, as a single event source can send multiple different event types. In
general, however, a single target category should map to a single reserved
event request ID.
Furthermore, requests, responses, and events have an associated target ID
(``TID``). This target ID is split into output (host to EC) and input (EC to
host) fields, with the respective other field (e.g. the output field on incoming
messages) set to zero. Two ``TID`` values are known: Primary (``0x01``) and
secondary (``0x02``). In general, the response to a request should have the
same ``TID`` value; however, the field (output vs. input) should be used in
accordance with the direction in which the response is sent (i.e. on the
input field, as responses are generally sent from the EC to the host).
Note that, even though requests and events should be uniquely identifiable
by target category and command ID alone, the EC may require specific
target ID and instance ID values to accept a command. A command that is
accepted for ``TID=1``, for example, may not be accepted for ``TID=2``
and vice versa.
Limitations and Observations
============================
The protocol can, in theory, handle up to ``U8_MAX`` frames in parallel,
with up to ``U16_MAX`` pending requests (neglecting request IDs reserved for
events). In practice, however, this is more limited. From our testing
(although via a Python and thus user-space program), it seems that the EC
can (mostly) reliably handle up to four requests in parallel at any given
time. With five or more requests in parallel, consistent discarding of
commands (ACKed frame but no command response) has been observed. For five
simultaneous commands, this reproducibly resulted in one command being
dropped and four commands being handled.
However, it has also been noted that, even with three requests in parallel,
occasional frame drops happen. Apart from this, with a limit of three
pending requests, no dropped commands (i.e. command being dropped but frame
carrying command being ACKed) have been observed. In any case, frames (and
possibly also commands) should be re-sent by the host if a certain timeout
is exceeded. This is done by the EC for frames with a timeout of one second,
up to two re-tries (i.e. three transmissions in total). The limit of
re-tries also applies to received NAKs and, in a worst-case scenario, can
lead to entire messages being dropped.
While this also seems to work fine for pending data frames as long as no
transmission failures occur, implementation and handling of these seems to
depend on the assumption that there is only one non-acknowledged data frame.
In particular, the detection of repeated frames relies on the last sequence
ID. This means that, if a frame that has been successfully received by the
EC is sent again, e.g. due to the host not receiving an |ACK|, the EC will
only detect this if the re-sent frame has the sequence ID of the last frame
received by the EC. As an example: Sending two frames with ``SEQ=0`` and ``SEQ=1``
followed by a repetition of ``SEQ=0`` will not detect the second ``SEQ=0``
frame as such, and thus execute the command in this frame each time it has
been received, i.e. twice in this example. Sending ``SEQ=0``, ``SEQ=1`` and
then repeating ``SEQ=1`` will detect the second ``SEQ=1`` as repetition of
the first one and ignore it, thus executing the contained command only once.
In conclusion, this suggests a limit of at most one pending un-ACKed frame
(per party, effectively leading to synchronous communication regarding
frames) and at most three pending commands. The limit to synchronous frame
transfers seems to be consistent with behavior observed on Windows.


@@ -324,6 +324,8 @@ Code Seq# Include File Comments
0xA3 90-9F linux/dtlk.h
0xA4 00-1F uapi/linux/tee.h Generic TEE subsystem
0xA4 00-1F uapi/asm/sgx.h <mailto:linux-sgx@vger.kernel.org>
0xA5 01 linux/surface_aggregator/cdev.h Microsoft Surface Platform System Aggregator
<mailto:luzmaximilian@gmail.com>
0xAA 00-3F linux/uapi/linux/userfaultfd.h
0xAB 00-1F linux/nbd.h
0xAC 00-1F linux/raw.h


@@ -4946,17 +4946,17 @@ M: Matthew Garrett <mjg59@srcf.ucam.org>
M: Pali Rohár <pali@kernel.org>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell-laptop.c
F: drivers/platform/x86/dell/dell-laptop.c
DELL LAPTOP FREEFALL DRIVER
M: Pali Rohár <pali@kernel.org>
S: Maintained
F: drivers/platform/x86/dell-smo8800.c
F: drivers/platform/x86/dell/dell-smo8800.c
DELL LAPTOP RBTN DRIVER
M: Pali Rohár <pali@kernel.org>
S: Maintained
F: drivers/platform/x86/dell-rbtn.*
F: drivers/platform/x86/dell/dell-rbtn.*
DELL LAPTOP SMM DRIVER
M: Pali Rohár <pali@kernel.org>
@@ -4968,26 +4968,26 @@ DELL REMOTE BIOS UPDATE DRIVER
M: Stuart Hayes <stuart.w.hayes@gmail.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell_rbu.c
F: drivers/platform/x86/dell/dell_rbu.c
DELL SMBIOS DRIVER
M: Pali Rohár <pali@kernel.org>
M: Mario Limonciello <mario.limonciello@dell.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell-smbios.*
F: drivers/platform/x86/dell/dell-smbios.*
DELL SMBIOS SMM DRIVER
M: Mario Limonciello <mario.limonciello@dell.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell-smbios-smm.c
F: drivers/platform/x86/dell/dell-smbios-smm.c
DELL SMBIOS WMI DRIVER
M: Mario Limonciello <mario.limonciello@dell.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/dell-smbios-wmi.c
F: drivers/platform/x86/dell/dell-smbios-wmi.c
F: tools/wmi/dell-smbios-example.c
DELL SYSTEMS MANAGEMENT BASE DRIVER (dcdbas)
@@ -4995,12 +4995,12 @@ M: Stuart Hayes <stuart.w.hayes@gmail.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: Documentation/driver-api/dcdbas.rst
F: drivers/platform/x86/dcdbas.*
F: drivers/platform/x86/dell/dcdbas.*
DELL WMI DESCRIPTOR DRIVER
M: Mario Limonciello <mario.limonciello@dell.com>
S: Maintained
F: drivers/platform/x86/dell-wmi-descriptor.c
F: drivers/platform/x86/dell/dell-wmi-descriptor.c
DELL WMI SYSMAN DRIVER
M: Divya Bharathi <divya.bharathi@dell.com>
@@ -5009,13 +5009,13 @@ M: Prasanth Ksr <prasanth.ksr@dell.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: Documentation/ABI/testing/sysfs-class-firmware-attributes
F: drivers/platform/x86/dell-wmi-sysman/
F: drivers/platform/x86/dell/dell-wmi-sysman/
DELL WMI NOTIFICATIONS DRIVER
M: Matthew Garrett <mjg59@srcf.ucam.org>
M: Pali Rohár <pali@kernel.org>
S: Maintained
F: drivers/platform/x86/dell-wmi.c
F: drivers/platform/x86/dell/dell-wmi.c
DELTA ST MEDIA DRIVER
M: Hugues Fruchet <hugues.fruchet@st.com>
@@ -8908,7 +8908,6 @@ L: linux-gpio@vger.kernel.org
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
F: drivers/gpio/gpio-ich.c
F: drivers/gpio/gpio-intel-mid.c
F: drivers/gpio/gpio-merrifield.c
F: drivers/gpio/gpio-ml-ioh.c
F: drivers/gpio/gpio-pch.c
@@ -9080,7 +9079,6 @@ M: Andy Shevchenko <andy@kernel.org>
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/andy/linux-gpio-intel.git
F: drivers/gpio/gpio-*cove.c
F: drivers/gpio/gpio-msic.c
INTEL PMIC MULTIFUNCTION DEVICE DRIVERS
M: Andy Shevchenko <andy@kernel.org>
@@ -11798,12 +11796,31 @@ S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git
F: drivers/platform/surface/
MICROSOFT SURFACE HOT-PLUG DRIVER
M: Maximilian Luz <luzmaximilian@gmail.com>
L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/surface/surface_hotplug.c
MICROSOFT SURFACE PRO 3 BUTTON DRIVER
M: Chen Yu <yu.c.chen@intel.com>
L: platform-driver-x86@vger.kernel.org
S: Supported
F: drivers/platform/surface/surfacepro3_button.c
MICROSOFT SURFACE SYSTEM AGGREGATOR SUBSYSTEM
M: Maximilian Luz <luzmaximilian@gmail.com>
S: Maintained
W: https://github.com/linux-surface/surface-aggregator-module
C: irc://chat.freenode.net/##linux-surface
F: Documentation/driver-api/surface_aggregator/
F: drivers/platform/surface/aggregator/
F: drivers/platform/surface/surface_acpi_notify.c
F: drivers/platform/surface/surface_aggregator_cdev.c
F: include/linux/surface_acpi_notify.h
F: include/linux/surface_aggregator/
F: include/uapi/linux/surface_aggregator/
MICROTEK X6 SCANNER
M: Oliver Neukum <oliver@neukum.org>
S: Maintained
@@ -17656,7 +17673,7 @@ F: drivers/thermal/gov_power_allocator.c
F: include/trace/events/thermal_power_allocator.h
THINKPAD ACPI EXTRAS DRIVER
M: Henrique de Moraes Holschuh <ibm-acpi@hmh.eng.br>
M: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
L: ibm-acpi-devel@lists.sourceforge.net
L: platform-driver-x86@vger.kernel.org
S: Maintained


@@ -30,4 +30,3 @@ obj-$(subst m,y,$(CONFIG_GPIO_PCA953X)) += platform_tca6416.o
obj-$(subst m,y,$(CONFIG_KEYBOARD_GPIO)) += platform_gpio_keys.o
obj-$(subst m,y,$(CONFIG_INTEL_MID_POWER_BUTTON)) += platform_mrfld_power_btn.o
obj-$(subst m,y,$(CONFIG_RTC_DRV_CMOS)) += platform_mrfld_rtc.o
obj-$(subst m,y,$(CONFIG_INTEL_MID_WATCHDOG)) += platform_mrfld_wdt.o


@@ -1253,13 +1253,6 @@ config GPIO_MAX77650
GPIO driver for MAX77650/77651 PMIC from Maxim Semiconductor.
These chips have a single pin that can be configured as GPIO.
config GPIO_MSIC
bool "Intel MSIC mixed signal gpio support"
depends on (X86 || COMPILE_TEST) && MFD_INTEL_MSIC
help
Enable support for GPIO on intel MSIC controllers found in
intel MID devices
config GPIO_PALMAS
bool "TI PALMAS series PMICs GPIO"
depends on MFD_PALMAS
@@ -1455,13 +1448,6 @@ config GPIO_BT8XX
If unsure, say N.
config GPIO_INTEL_MID
bool "Intel MID GPIO support"
depends on X86_INTEL_MID
select GPIOLIB_IRQCHIP
help
Say Y here to support Intel MID GPIO.
config GPIO_MERRIFIELD
tristate "Intel Merrifield GPIO support"
depends on X86_INTEL_MID


@@ -67,7 +67,6 @@ obj-$(CONFIG_GPIO_HISI) += gpio-hisi.o
obj-$(CONFIG_GPIO_HLWD) += gpio-hlwd.o
obj-$(CONFIG_HTC_EGPIO) += gpio-htc-egpio.o
obj-$(CONFIG_GPIO_ICH) += gpio-ich.o
obj-$(CONFIG_GPIO_INTEL_MID) += gpio-intel-mid.o
obj-$(CONFIG_GPIO_IOP) += gpio-iop.o
obj-$(CONFIG_GPIO_IT87) += gpio-it87.o
obj-$(CONFIG_GPIO_IXP4XX) += gpio-ixp4xx.o

View file

@@ -101,7 +101,7 @@ for a few GPIOs. Those should stay where they are.
At the same time it makes sense to get rid of code duplication in existing or
new coming drivers. For example, gpio-ml-ioh should be incorporated into
gpio-pch. In similar way gpio-intel-mid into gpio-pxa.
gpio-pch.
Generic MMIO GPIO


@@ -1,414 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Intel MID GPIO driver
*
* Copyright (c) 2008-2014,2016 Intel Corporation.
*/
/* Supports:
* Moorestown platform Langwell chip.
* Medfield platform Penwell chip.
* Clovertrail platform Cloverview chip.
*/
#include <linux/delay.h>
#include <linux/gpio/driver.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>
#include <linux/stddef.h>
#define INTEL_MID_IRQ_TYPE_EDGE (1 << 0)
#define INTEL_MID_IRQ_TYPE_LEVEL (1 << 1)
/*
* Langwell chip has 64 pins and thus there are 2 32bit registers to control
* each feature, while Penwell chip has 96 pins for each block, and need 3 32bit
* registers to control them, so we only define the order here instead of a
* structure, to get a bit offset for a pin (use GPDR as an example):
*
* nreg = ngpio / 32;
* reg = offset / 32;
* bit = offset % 32;
* reg_addr = reg_base + GPDR * nreg * 4 + reg * 4;
*
* so the bit of reg_addr is to control pin offset's GPDR feature
*/
enum GPIO_REG {
GPLR = 0, /* pin level read-only */
GPDR, /* pin direction */
GPSR, /* pin set */
GPCR, /* pin clear */
GRER, /* rising edge detect */
GFER, /* falling edge detect */
GEDR, /* edge detect result */
GAFR, /* alt function */
};
/* intel_mid gpio driver data */
struct intel_mid_gpio_ddata {
u16 ngpio; /* number of gpio pins */
u32 chip_irq_type; /* chip interrupt type */
};
struct intel_mid_gpio {
struct gpio_chip chip;
void __iomem *reg_base;
spinlock_t lock;
struct pci_dev *pdev;
};
static void __iomem *gpio_reg(struct gpio_chip *chip, unsigned offset,
enum GPIO_REG reg_type)
{
struct intel_mid_gpio *priv = gpiochip_get_data(chip);
unsigned nreg = chip->ngpio / 32;
u8 reg = offset / 32;
return priv->reg_base + reg_type * nreg * 4 + reg * 4;
}
static void __iomem *gpio_reg_2bit(struct gpio_chip *chip, unsigned offset,
enum GPIO_REG reg_type)
{
struct intel_mid_gpio *priv = gpiochip_get_data(chip);
unsigned nreg = chip->ngpio / 32;
u8 reg = offset / 16;
return priv->reg_base + reg_type * nreg * 4 + reg * 4;
}
static int intel_gpio_request(struct gpio_chip *chip, unsigned offset)
{
void __iomem *gafr = gpio_reg_2bit(chip, offset, GAFR);
u32 value = readl(gafr);
int shift = (offset % 16) << 1, af = (value >> shift) & 3;
if (af) {
value &= ~(3 << shift);
writel(value, gafr);
}
return 0;
}
static int intel_gpio_get(struct gpio_chip *chip, unsigned offset)
{
void __iomem *gplr = gpio_reg(chip, offset, GPLR);
return !!(readl(gplr) & BIT(offset % 32));
}
static void intel_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
void __iomem *gpsr, *gpcr;
if (value) {
gpsr = gpio_reg(chip, offset, GPSR);
writel(BIT(offset % 32), gpsr);
} else {
gpcr = gpio_reg(chip, offset, GPCR);
writel(BIT(offset % 32), gpcr);
}
}
static int intel_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
struct intel_mid_gpio *priv = gpiochip_get_data(chip);
void __iomem *gpdr = gpio_reg(chip, offset, GPDR);
u32 value;
unsigned long flags;
if (priv->pdev)
pm_runtime_get(&priv->pdev->dev);
spin_lock_irqsave(&priv->lock, flags);
value = readl(gpdr);
value &= ~BIT(offset % 32);
writel(value, gpdr);
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->pdev)
pm_runtime_put(&priv->pdev->dev);
return 0;
}
static int intel_gpio_direction_output(struct gpio_chip *chip,
unsigned offset, int value)
{
struct intel_mid_gpio *priv = gpiochip_get_data(chip);
void __iomem *gpdr = gpio_reg(chip, offset, GPDR);
unsigned long flags;
intel_gpio_set(chip, offset, value);
if (priv->pdev)
pm_runtime_get(&priv->pdev->dev);
spin_lock_irqsave(&priv->lock, flags);
value = readl(gpdr);
value |= BIT(offset % 32);
writel(value, gpdr);
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->pdev)
pm_runtime_put(&priv->pdev->dev);
return 0;
}
static int intel_mid_irq_type(struct irq_data *d, unsigned type)
{
struct gpio_chip *gc = irq_data_get_irq_chip_data(d);
struct intel_mid_gpio *priv = gpiochip_get_data(gc);
u32 gpio = irqd_to_hwirq(d);
unsigned long flags;
u32 value;
void __iomem *grer = gpio_reg(&priv->chip, gpio, GRER);
void __iomem *gfer = gpio_reg(&priv->chip, gpio, GFER);
if (gpio >= priv->chip.ngpio)
return -EINVAL;
if (priv->pdev)
pm_runtime_get(&priv->pdev->dev);
spin_lock_irqsave(&priv->lock, flags);
if (type & IRQ_TYPE_EDGE_RISING)
value = readl(grer) | BIT(gpio % 32);
else
value = readl(grer) & (~BIT(gpio % 32));
writel(value, grer);
if (type & IRQ_TYPE_EDGE_FALLING)
value = readl(gfer) | BIT(gpio % 32);
else
value = readl(gfer) & (~BIT(gpio % 32));
writel(value, gfer);
spin_unlock_irqrestore(&priv->lock, flags);
if (priv->pdev)
pm_runtime_put(&priv->pdev->dev);
return 0;
}
static void intel_mid_irq_unmask(struct irq_data *d)
{
}
static void intel_mid_irq_mask(struct irq_data *d)
{
}
static struct irq_chip intel_mid_irqchip = {
.name = "INTEL_MID-GPIO",
.irq_mask = intel_mid_irq_mask,
.irq_unmask = intel_mid_irq_unmask,
.irq_set_type = intel_mid_irq_type,
};
static const struct intel_mid_gpio_ddata gpio_lincroft = {
.ngpio = 64,
};
static const struct intel_mid_gpio_ddata gpio_penwell_aon = {
.ngpio = 96,
.chip_irq_type = INTEL_MID_IRQ_TYPE_EDGE,
};
static const struct intel_mid_gpio_ddata gpio_penwell_core = {
.ngpio = 96,
.chip_irq_type = INTEL_MID_IRQ_TYPE_EDGE,
};
static const struct intel_mid_gpio_ddata gpio_cloverview_aon = {
.ngpio = 96,
.chip_irq_type = INTEL_MID_IRQ_TYPE_EDGE | INTEL_MID_IRQ_TYPE_LEVEL,
};
static const struct intel_mid_gpio_ddata gpio_cloverview_core = {
.ngpio = 96,
.chip_irq_type = INTEL_MID_IRQ_TYPE_EDGE,
};
static const struct pci_device_id intel_gpio_ids[] = {
{
/* Lincroft */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x080f),
.driver_data = (kernel_ulong_t)&gpio_lincroft,
},
{
/* Penwell AON */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081f),
.driver_data = (kernel_ulong_t)&gpio_penwell_aon,
},
{
/* Penwell Core */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x081a),
.driver_data = (kernel_ulong_t)&gpio_penwell_core,
},
{
/* Cloverview AON */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x08eb),
.driver_data = (kernel_ulong_t)&gpio_cloverview_aon,
},
{
/* Cloverview Core */
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x08f7),
.driver_data = (kernel_ulong_t)&gpio_cloverview_core,
},
{ }
};
static void intel_mid_irq_handler(struct irq_desc *desc)
{
struct gpio_chip *gc = irq_desc_get_handler_data(desc);
struct intel_mid_gpio *priv = gpiochip_get_data(gc);
struct irq_data *data = irq_desc_get_irq_data(desc);
struct irq_chip *chip = irq_data_get_irq_chip(data);
u32 base, gpio, mask;
unsigned long pending;
void __iomem *gedr;
/* Check the GPIO controller to find out which pin triggered the interrupt */
for (base = 0; base < priv->chip.ngpio; base += 32) {
gedr = gpio_reg(&priv->chip, base, GEDR);
while ((pending = readl(gedr))) {
gpio = __ffs(pending);
mask = BIT(gpio);
/* Clear before handling so we can't lose an edge */
writel(mask, gedr);
generic_handle_irq(irq_find_mapping(gc->irq.domain,
base + gpio));
}
}
chip->irq_eoi(data);
}
static int intel_mid_irq_init_hw(struct gpio_chip *chip)
{
struct intel_mid_gpio *priv = gpiochip_get_data(chip);
void __iomem *reg;
unsigned base;
for (base = 0; base < priv->chip.ngpio; base += 32) {
/* Clear the rising-edge detect register */
reg = gpio_reg(&priv->chip, base, GRER);
writel(0, reg);
/* Clear the falling-edge detect register */
reg = gpio_reg(&priv->chip, base, GFER);
writel(0, reg);
/* Clear the edge detect status register */
reg = gpio_reg(&priv->chip, base, GEDR);
writel(~0, reg);
}
return 0;
}
static int __maybe_unused intel_gpio_runtime_idle(struct device *dev)
{
int err = pm_schedule_suspend(dev, 500);
return err ?: -EBUSY;
}
static const struct dev_pm_ops intel_gpio_pm_ops = {
SET_RUNTIME_PM_OPS(NULL, NULL, intel_gpio_runtime_idle)
};
static int intel_gpio_probe(struct pci_dev *pdev,
const struct pci_device_id *id)
{
void __iomem *base;
struct intel_mid_gpio *priv;
u32 gpio_base;
u32 irq_base;
int retval;
struct gpio_irq_chip *girq;
struct intel_mid_gpio_ddata *ddata =
(struct intel_mid_gpio_ddata *)id->driver_data;
retval = pcim_enable_device(pdev);
if (retval)
return retval;
retval = pcim_iomap_regions(pdev, 1 << 0 | 1 << 1, pci_name(pdev));
if (retval) {
dev_err(&pdev->dev, "I/O memory mapping error\n");
return retval;
}
base = pcim_iomap_table(pdev)[1];
irq_base = readl(base);
gpio_base = readl(sizeof(u32) + base);
/* Release the I/O mapping, since we already got the info from BAR1 */
pcim_iounmap_regions(pdev, 1 << 1);
priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
if (!priv)
return -ENOMEM;
priv->reg_base = pcim_iomap_table(pdev)[0];
priv->chip.label = dev_name(&pdev->dev);
priv->chip.parent = &pdev->dev;
priv->chip.request = intel_gpio_request;
priv->chip.direction_input = intel_gpio_direction_input;
priv->chip.direction_output = intel_gpio_direction_output;
priv->chip.get = intel_gpio_get;
priv->chip.set = intel_gpio_set;
priv->chip.base = gpio_base;
priv->chip.ngpio = ddata->ngpio;
priv->chip.can_sleep = false;
priv->pdev = pdev;
spin_lock_init(&priv->lock);
girq = &priv->chip.irq;
girq->chip = &intel_mid_irqchip;
girq->init_hw = intel_mid_irq_init_hw;
girq->parent_handler = intel_mid_irq_handler;
girq->num_parents = 1;
girq->parents = devm_kcalloc(&pdev->dev, girq->num_parents,
sizeof(*girq->parents),
GFP_KERNEL);
if (!girq->parents)
return -ENOMEM;
girq->parents[0] = pdev->irq;
girq->first = irq_base;
girq->default_type = IRQ_TYPE_NONE;
girq->handler = handle_simple_irq;
pci_set_drvdata(pdev, priv);
retval = devm_gpiochip_add_data(&pdev->dev, &priv->chip, priv);
if (retval) {
dev_err(&pdev->dev, "gpiochip_add error %d\n", retval);
return retval;
}
pm_runtime_put_noidle(&pdev->dev);
pm_runtime_allow(&pdev->dev);
return 0;
}
static struct pci_driver intel_gpio_driver = {
.name = "intel_mid_gpio",
.id_table = intel_gpio_ids,
.probe = intel_gpio_probe,
.driver = {
.pm = &intel_gpio_pm_ops,
},
};
builtin_pci_driver(intel_gpio_driver);


@@ -1,314 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Intel Medfield MSIC GPIO driver
* Copyright (c) 2011, Intel Corporation.
*
* Author: Mathias Nyman <mathias.nyman@linux.intel.com>
* Based on intel_pmic_gpio.c
*/
#include <linux/gpio/driver.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mfd/intel_msic.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
/* the offset for the mapping of global gpio pin to irq */
#define MSIC_GPIO_IRQ_OFFSET 0x100
#define MSIC_GPIO_DIR_IN 0
#define MSIC_GPIO_DIR_OUT BIT(5)
#define MSIC_GPIO_TRIG_FALL BIT(1)
#define MSIC_GPIO_TRIG_RISE BIT(2)
/* masks for msic gpio output GPIOxxxxCTLO registers */
#define MSIC_GPIO_DIR_MASK BIT(5)
#define MSIC_GPIO_DRV_MASK BIT(4)
#define MSIC_GPIO_REN_MASK BIT(3)
#define MSIC_GPIO_RVAL_MASK (BIT(2) | BIT(1))
#define MSIC_GPIO_DOUT_MASK BIT(0)
/* masks for msic gpio input GPIOxxxxCTLI registers */
#define MSIC_GPIO_GLBYP_MASK BIT(5)
#define MSIC_GPIO_DBNC_MASK (BIT(4) | BIT(3))
#define MSIC_GPIO_INTCNT_MASK (BIT(2) | BIT(1))
#define MSIC_GPIO_DIN_MASK BIT(0)
#define MSIC_NUM_GPIO 24
struct msic_gpio {
struct platform_device *pdev;
struct mutex buslock;
struct gpio_chip chip;
int irq;
unsigned irq_base;
unsigned long trig_change_mask;
unsigned trig_type;
};
/*
* MSIC has 24 gpios, 16 low voltage (1.2-1.8v) and 8 high voltage (3v).
* Both the high and low voltage gpios are divided in two banks.
* GPIOs are numbered with GPIO0LV0 as gpio_base in the following order:
* GPIO0LV0..GPIO0LV7: low voltage, bank 0, gpio_base
* GPIO1LV0..GPIO1LV7: low voltage, bank 1, gpio_base + 8
* GPIO0HV0..GPIO0HV3: high voltage, bank 0, gpio_base + 16
* GPIO1HV0..GPIO1HV3: high voltage, bank 1, gpio_base + 20
*/
static int msic_gpio_to_ireg(unsigned offset)
{
if (offset >= MSIC_NUM_GPIO)
return -EINVAL;
if (offset < 8)
return INTEL_MSIC_GPIO0LV0CTLI - offset;
if (offset < 16)
return INTEL_MSIC_GPIO1LV0CTLI - offset + 8;
if (offset < 20)
return INTEL_MSIC_GPIO0HV0CTLI - offset + 16;
return INTEL_MSIC_GPIO1HV0CTLI - offset + 20;
}
static int msic_gpio_to_oreg(unsigned offset)
{
if (offset >= MSIC_NUM_GPIO)
return -EINVAL;
if (offset < 8)
return INTEL_MSIC_GPIO0LV0CTLO - offset;
if (offset < 16)
return INTEL_MSIC_GPIO1LV0CTLO - offset + 8;
if (offset < 20)
return INTEL_MSIC_GPIO0HV0CTLO - offset + 16;
return INTEL_MSIC_GPIO1HV0CTLO - offset + 20;
}
static int msic_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
int reg;
reg = msic_gpio_to_oreg(offset);
if (reg < 0)
return reg;
return intel_msic_reg_update(reg, MSIC_GPIO_DIR_IN, MSIC_GPIO_DIR_MASK);
}
static int msic_gpio_direction_output(struct gpio_chip *chip,
unsigned offset, int value)
{
int reg;
unsigned mask;
value = (!!value) | MSIC_GPIO_DIR_OUT;
mask = MSIC_GPIO_DIR_MASK | MSIC_GPIO_DOUT_MASK;
reg = msic_gpio_to_oreg(offset);
if (reg < 0)
return reg;
return intel_msic_reg_update(reg, value, mask);
}
static int msic_gpio_get(struct gpio_chip *chip, unsigned offset)
{
u8 r;
int ret;
int reg;
reg = msic_gpio_to_ireg(offset);
if (reg < 0)
return reg;
ret = intel_msic_reg_read(reg, &r);
if (ret < 0)
return ret;
return !!(r & MSIC_GPIO_DIN_MASK);
}
static void msic_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
int reg;
reg = msic_gpio_to_oreg(offset);
if (reg < 0)
return;
intel_msic_reg_update(reg, !!value, MSIC_GPIO_DOUT_MASK);
}
/*
* This is called from genirq with mg->buslock locked and
* irq_desc->lock held. We can not access the scu bus here, so we
* store the change and update in the bus_sync_unlock() function below
*/
static int msic_irq_type(struct irq_data *data, unsigned type)
{
struct msic_gpio *mg = irq_data_get_irq_chip_data(data);
u32 gpio = data->irq - mg->irq_base;
if (gpio >= mg->chip.ngpio)
return -EINVAL;
/* mark for which gpio the trigger changed, protected by buslock */
mg->trig_change_mask |= (1 << gpio);
mg->trig_type = type;
return 0;
}
static int msic_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
{
struct msic_gpio *mg = gpiochip_get_data(chip);
return mg->irq_base + offset;
}
static void msic_bus_lock(struct irq_data *data)
{
struct msic_gpio *mg = irq_data_get_irq_chip_data(data);
mutex_lock(&mg->buslock);
}
static void msic_bus_sync_unlock(struct irq_data *data)
{
struct msic_gpio *mg = irq_data_get_irq_chip_data(data);
int offset;
int reg;
u8 trig = 0;
/* We can only get one change at a time as the buslock covers the
entire transaction. The irq_desc->lock is dropped before we are
called but that is fine */
if (mg->trig_change_mask) {
offset = __ffs(mg->trig_change_mask);
reg = msic_gpio_to_ireg(offset);
if (reg < 0)
goto out;
if (mg->trig_type & IRQ_TYPE_EDGE_RISING)
trig |= MSIC_GPIO_TRIG_RISE;
if (mg->trig_type & IRQ_TYPE_EDGE_FALLING)
trig |= MSIC_GPIO_TRIG_FALL;
intel_msic_reg_update(reg, trig, MSIC_GPIO_INTCNT_MASK);
mg->trig_change_mask = 0;
}
out:
mutex_unlock(&mg->buslock);
}
/* Firmware does all the masking and unmasking for us, no masking here. */
static void msic_irq_unmask(struct irq_data *data) { }
static void msic_irq_mask(struct irq_data *data) { }
static struct irq_chip msic_irqchip = {
.name = "MSIC-GPIO",
.irq_mask = msic_irq_mask,
.irq_unmask = msic_irq_unmask,
.irq_set_type = msic_irq_type,
.irq_bus_lock = msic_bus_lock,
.irq_bus_sync_unlock = msic_bus_sync_unlock,
};
static void msic_gpio_irq_handler(struct irq_desc *desc)
{
struct irq_data *data = irq_desc_get_irq_data(desc);
struct msic_gpio *mg = irq_data_get_irq_handler_data(data);
struct irq_chip *chip = irq_data_get_irq_chip(data);
struct intel_msic *msic = pdev_to_intel_msic(mg->pdev);
unsigned long pending;
int i;
int bitnr;
u8 pin;
for (i = 0; i < (mg->chip.ngpio / BITS_PER_BYTE); i++) {
intel_msic_irq_read(msic, INTEL_MSIC_GPIO0LVIRQ + i, &pin);
pending = pin;
for_each_set_bit(bitnr, &pending, BITS_PER_BYTE)
generic_handle_irq(mg->irq_base + i * BITS_PER_BYTE + bitnr);
}
chip->irq_eoi(data);
}
static int platform_msic_gpio_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct intel_msic_gpio_pdata *pdata = dev_get_platdata(dev);
struct msic_gpio *mg;
int irq = platform_get_irq(pdev, 0);
int retval;
int i;
if (irq < 0) {
dev_err(dev, "no IRQ line: %d\n", irq);
return irq;
}
if (!pdata || !pdata->gpio_base) {
dev_err(dev, "incorrect or missing platform data\n");
return -EINVAL;
}
mg = kzalloc(sizeof(*mg), GFP_KERNEL);
if (!mg)
return -ENOMEM;
dev_set_drvdata(dev, mg);
mg->pdev = pdev;
mg->irq = irq;
mg->irq_base = pdata->gpio_base + MSIC_GPIO_IRQ_OFFSET;
mg->chip.label = "msic_gpio";
mg->chip.direction_input = msic_gpio_direction_input;
mg->chip.direction_output = msic_gpio_direction_output;
mg->chip.get = msic_gpio_get;
mg->chip.set = msic_gpio_set;
mg->chip.to_irq = msic_gpio_to_irq;
mg->chip.base = pdata->gpio_base;
mg->chip.ngpio = MSIC_NUM_GPIO;
mg->chip.can_sleep = true;
mg->chip.parent = dev;
mutex_init(&mg->buslock);
retval = gpiochip_add_data(&mg->chip, mg);
if (retval) {
dev_err(dev, "Adding MSIC gpio chip failed\n");
goto err;
}
for (i = 0; i < mg->chip.ngpio; i++) {
irq_set_chip_data(i + mg->irq_base, mg);
irq_set_chip_and_handler(i + mg->irq_base,
&msic_irqchip,
handle_simple_irq);
}
irq_set_chained_handler_and_data(mg->irq, msic_gpio_irq_handler, mg);
return 0;
err:
kfree(mg);
return retval;
}
static struct platform_driver platform_msic_gpio_driver = {
.driver = {
.name = "msic_gpio",
},
.probe = platform_msic_gpio_probe,
};
static int __init platform_msic_gpio_init(void)
{
return platform_driver_register(&platform_msic_gpio_driver);
}
subsys_initcall(platform_msic_gpio_init);


@@ -10,9 +10,6 @@
#include <linux/dmi.h>
#include <linux/module.h>
#include <asm/intel-mid.h>
#include <asm/intel_scu_ipc.h>
#include <drm/drm.h>
#include "intel_bios.h"


@@ -386,6 +386,8 @@ struct psb_ops;
#define PSB_NUM_PIPE 3
struct intel_scu_ipc_dev;
struct drm_psb_private {
struct drm_device *dev;
struct pci_dev *aux_pdev; /* Currently only used by mrst */
@@ -525,6 +527,7 @@ struct drm_psb_private {
* Used for modifying backlight from
* xrandr -- consider removing and using HAL instead
*/
struct intel_scu_ipc_dev *scu;
struct backlight_device *backlight_device;
struct drm_property *backlight_property;
bool backlight_enabled;


@@ -37,7 +37,6 @@ struct olpc_ec_priv {
struct mutex cmd_lock;
/* DCON regulator */
struct regulator_dev *dcon_rdev;
bool dcon_enabled;
/* Pending EC commands */
@@ -387,24 +386,26 @@ static int dcon_regulator_is_enabled(struct regulator_dev *rdev)
return ec->dcon_enabled ? 1 : 0;
}
-static struct regulator_ops dcon_regulator_ops = {
+static const struct regulator_ops dcon_regulator_ops = {
.enable = dcon_regulator_enable,
.disable = dcon_regulator_disable,
.is_enabled = dcon_regulator_is_enabled,
};
 static const struct regulator_desc dcon_desc = {
 	.name = "dcon",
 	.id = 0,
 	.ops = &dcon_regulator_ops,
 	.type = REGULATOR_VOLTAGE,
 	.owner = THIS_MODULE,
+	.enable_time = 25000,
 };
static int olpc_ec_probe(struct platform_device *pdev)
{
struct olpc_ec_priv *ec;
struct regulator_config config = { };
struct regulator_dev *regulator;
int err;
if (!ec_driver)
@@ -426,26 +427,26 @@ static int olpc_ec_probe(struct platform_device *pdev)
/* get the EC revision */
err = olpc_ec_cmd(EC_FIRMWARE_REV, NULL, 0, &ec->version, 1);
-	if (err) {
-		ec_priv = NULL;
-		kfree(ec);
-		return err;
-	}
+	if (err)
+		goto error;
config.dev = pdev->dev.parent;
config.driver_data = ec;
ec->dcon_enabled = true;
-	ec->dcon_rdev = devm_regulator_register(&pdev->dev, &dcon_desc,
-						&config);
-	if (IS_ERR(ec->dcon_rdev)) {
+	regulator = devm_regulator_register(&pdev->dev, &dcon_desc, &config);
+	if (IS_ERR(regulator)) {
 		dev_err(&pdev->dev, "failed to register DCON regulator\n");
-		err = PTR_ERR(ec->dcon_rdev);
-		kfree(ec);
-		return err;
+		err = PTR_ERR(regulator);
+		goto error;
 	}
ec->dbgfs_dir = olpc_ec_setup_debugfs();
return 0;
error:
ec_priv = NULL;
kfree(ec);
return err;
}


@@ -41,6 +41,42 @@ config SURFACE_3_POWER_OPREGION
This driver provides support for ACPI operation
region of the Surface 3 battery platform driver.
config SURFACE_ACPI_NOTIFY
tristate "Surface ACPI Notify Driver"
depends on SURFACE_AGGREGATOR
help
Surface ACPI Notify (SAN) driver for Microsoft Surface devices.
This driver provides support for the ACPI interface (called SAN) of
the Surface System Aggregator Module (SSAM) EC. This interface is used
on 5th- and 6th-generation Microsoft Surface devices (including
Surface Pro 5 and 6, Surface Book 2, Surface Laptops 1 and 2, and in
reduced functionality on the Surface Laptop 3) to execute SSAM
requests directly from ACPI code, as well as receive SSAM events and
turn them into ACPI notifications. It essentially acts as a
translation layer between the SSAM controller and ACPI.
Specifically, this driver may be needed for battery status reporting,
thermal sensor access, and real-time clock information, depending on
the Surface device in question.
config SURFACE_AGGREGATOR_CDEV
tristate "Surface System Aggregator Module User-Space Interface"
depends on SURFACE_AGGREGATOR
help
Provides a misc-device interface to the Surface System Aggregator
Module (SSAM) controller.
This option provides a module (called surface_aggregator_cdev), that,
when loaded, will add a client device (and its respective driver) to
the SSAM controller. Said client device manages a misc-device
interface (/dev/surface/aggregator), which can be used by user-space
tools to directly communicate with the SSAM EC by sending requests and
receiving the corresponding responses.
The provided interface is intended for debugging and development only,
and should not be used otherwise.
config SURFACE_GPE
tristate "Surface GPE/Lid Support Driver"
depends on DMI
@@ -50,10 +86,31 @@ config SURFACE_GPE
accordingly. It is required on those devices to allow wake-ups from
suspend by opening the lid.
config SURFACE_HOTPLUG
tristate "Surface Hot-Plug Driver"
depends on GPIOLIB
help
Driver for out-of-band hot-plug event signaling on Microsoft Surface
devices with hot-pluggable PCIe cards.
This driver is used on Surface Book (2 and 3) devices with a
hot-pluggable discrete GPU (dGPU). When not in use, the dGPU on those
devices can enter D3cold, which prevents in-band (standard) PCIe
hot-plug signaling. Thus, without this driver, detaching the base
containing the dGPU will not correctly update the state of the
corresponding PCIe device if it is in D3cold. This driver adds support
for out-of-band hot-plug notifications, ensuring that the device state
is properly updated even when the device in question is in D3cold.
Select M or Y here, if you want to (fully) support hot-plugging of
dGPU devices on the Surface Book 2 and/or 3 during D3cold.
config SURFACE_PRO3_BUTTON
tristate "Power/home/volume buttons driver for Microsoft Surface Pro 3/4 tablet"
depends on INPUT
help
This driver handles the power/home/volume buttons on the Microsoft Surface Pro 3/4 tablet.
source "drivers/platform/surface/aggregator/Kconfig"
endif # SURFACE_PLATFORMS


@@ -7,5 +7,9 @@
obj-$(CONFIG_SURFACE3_WMI) += surface3-wmi.o
obj-$(CONFIG_SURFACE_3_BUTTON) += surface3_button.o
obj-$(CONFIG_SURFACE_3_POWER_OPREGION) += surface3_power.o
obj-$(CONFIG_SURFACE_ACPI_NOTIFY) += surface_acpi_notify.o
obj-$(CONFIG_SURFACE_AGGREGATOR) += aggregator/
obj-$(CONFIG_SURFACE_AGGREGATOR_CDEV) += surface_aggregator_cdev.o
obj-$(CONFIG_SURFACE_GPE) += surface_gpe.o
obj-$(CONFIG_SURFACE_HOTPLUG) += surface_hotplug.o
obj-$(CONFIG_SURFACE_PRO3_BUTTON) += surfacepro3_button.o


@@ -0,0 +1,68 @@
# SPDX-License-Identifier: GPL-2.0+
# Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
menuconfig SURFACE_AGGREGATOR
tristate "Microsoft Surface System Aggregator Module Subsystem and Drivers"
depends on SERIAL_DEV_BUS
select CRC_CCITT
help
The Surface System Aggregator Module (Surface SAM or SSAM) is an
embedded controller (EC) found on 5th- and later-generation Microsoft
Surface devices (i.e. Surface Pro 5, Surface Book 2, Surface Laptop,
and newer, with the exception of Surface Go series devices).
Depending on the device in question, this EC provides varying
functionality, including:
- EC access from ACPI via Surface ACPI Notify (5th- and 6th-generation)
- battery status information (all devices)
- thermal sensor access (all devices)
- performance mode / cooling mode control (all devices)
- clipboard detachment system control (Surface Book 2 and 3)
- HID / keyboard input (Surface Laptops, Surface Book 3)
This option controls whether the Surface SAM subsystem core will be
built. This includes a driver for the Surface Serial Hub (SSH), which
is the device responsible for the communication with the EC, and a
basic kernel interface exposing the EC functionality to other client
drivers, i.e. allowing them to make requests to the EC and receive
events from it. Selecting this option alone will not provide any
client drivers and therefore no functionality beyond the in-kernel
interface. Said functionality is the responsibility of the respective
client drivers.
Note: While 4th-generation Surface devices also make use of a SAM EC,
due to a difference in the communication interface of the controller,
only 5th and later generations are currently supported. Specifically,
devices using SAM-over-SSH are supported, whereas devices using
SAM-over-HID, which is used on the 4th generation, are currently not
supported.
Choose m if you want to build the SAM subsystem core and SSH driver as
a module, y if you want to build them into the kernel, and n if you
don't want them at all.
config SURFACE_AGGREGATOR_BUS
bool "Surface System Aggregator Module Bus"
depends on SURFACE_AGGREGATOR
default y
help
Expands the Surface System Aggregator Module (SSAM) core driver by
providing a dedicated bus and client-device type.
This bus and device type are intended to provide and simplify support
for non-platform and non-ACPI SSAM devices, i.e. SSAM devices that are
not auto-detectable via the conventional means (e.g. ACPI).
config SURFACE_AGGREGATOR_ERROR_INJECTION
bool "Surface System Aggregator Module Error Injection Capabilities"
depends on SURFACE_AGGREGATOR
depends on FUNCTION_ERROR_INJECTION
help
Provides error-injection capabilities for the Surface System
Aggregator Module subsystem and Surface Serial Hub driver.
Specifically, exports error injection hooks to be used with the
kernel's function error injection capabilities to simulate underlying
transport and communication problems, such as invalid data sent to or
received from the EC, dropped data, and communication timeouts.
Intended for development and debugging.


@@ -0,0 +1,17 @@
# SPDX-License-Identifier: GPL-2.0+
# Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
# For include/trace/define_trace.h to include trace.h
CFLAGS_core.o = -I$(src)
obj-$(CONFIG_SURFACE_AGGREGATOR) += surface_aggregator.o
surface_aggregator-objs := core.o
surface_aggregator-objs += ssh_parser.o
surface_aggregator-objs += ssh_packet_layer.o
surface_aggregator-objs += ssh_request_layer.o
surface_aggregator-objs += controller.o
ifeq ($(CONFIG_SURFACE_AGGREGATOR_BUS),y)
surface_aggregator-objs += bus.o
endif


@@ -0,0 +1,415 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Surface System Aggregator Module bus and device integration.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <linux/device.h>
#include <linux/slab.h>
#include <linux/surface_aggregator/controller.h>
#include <linux/surface_aggregator/device.h>
#include "bus.h"
#include "controller.h"
static ssize_t modalias_show(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct ssam_device *sdev = to_ssam_device(dev);
return sysfs_emit(buf, "ssam:d%02Xc%02Xt%02Xi%02Xf%02X\n",
sdev->uid.domain, sdev->uid.category, sdev->uid.target,
sdev->uid.instance, sdev->uid.function);
}
static DEVICE_ATTR_RO(modalias);
static struct attribute *ssam_device_attrs[] = {
&dev_attr_modalias.attr,
NULL,
};
ATTRIBUTE_GROUPS(ssam_device);
static int ssam_device_uevent(struct device *dev, struct kobj_uevent_env *env)
{
struct ssam_device *sdev = to_ssam_device(dev);
return add_uevent_var(env, "MODALIAS=ssam:d%02Xc%02Xt%02Xi%02Xf%02X",
sdev->uid.domain, sdev->uid.category,
sdev->uid.target, sdev->uid.instance,
sdev->uid.function);
}
static void ssam_device_release(struct device *dev)
{
struct ssam_device *sdev = to_ssam_device(dev);
ssam_controller_put(sdev->ctrl);
kfree(sdev);
}
const struct device_type ssam_device_type = {
.name = "surface_aggregator_device",
.groups = ssam_device_groups,
.uevent = ssam_device_uevent,
.release = ssam_device_release,
};
EXPORT_SYMBOL_GPL(ssam_device_type);
/**
* ssam_device_alloc() - Allocate and initialize a SSAM client device.
* @ctrl: The controller under which the device should be added.
* @uid: The UID of the device to be added.
*
* Allocates and initializes a new client device. The parent of the device
* will be set to the controller device and the name will be set based on the
* UID. Note that the device still has to be added via ssam_device_add().
* Refer to that function for more details.
*
* Return: Returns the newly allocated and initialized SSAM client device, or
* %NULL if it could not be allocated.
*/
struct ssam_device *ssam_device_alloc(struct ssam_controller *ctrl,
struct ssam_device_uid uid)
{
struct ssam_device *sdev;
sdev = kzalloc(sizeof(*sdev), GFP_KERNEL);
if (!sdev)
return NULL;
device_initialize(&sdev->dev);
sdev->dev.bus = &ssam_bus_type;
sdev->dev.type = &ssam_device_type;
sdev->dev.parent = ssam_controller_device(ctrl);
sdev->ctrl = ssam_controller_get(ctrl);
sdev->uid = uid;
dev_set_name(&sdev->dev, "%02x:%02x:%02x:%02x:%02x",
sdev->uid.domain, sdev->uid.category, sdev->uid.target,
sdev->uid.instance, sdev->uid.function);
return sdev;
}
EXPORT_SYMBOL_GPL(ssam_device_alloc);
/**
* ssam_device_add() - Add a SSAM client device.
* @sdev: The SSAM client device to be added.
*
* Added client devices must be guaranteed to always have a valid and active
* controller. Thus, this function will fail with %-ENODEV if the controller
* of the device has not been initialized yet, has been suspended, or has been
* shut down.
*
* The caller of this function should ensure that the corresponding call to
* ssam_device_remove() is issued before the controller is shut down. If the
* added device is a direct child of the controller device (default), it will
* be automatically removed when the controller is shut down.
*
* By default, the controller device will become the parent of the newly
* created client device. The parent may be changed before ssam_device_add is
* called, but care must be taken that a) the correct suspend/resume ordering
* is guaranteed and b) the client device does not outlive the controller,
* i.e. that the device is removed before the controller is being shut down.
* In case these guarantees have to be manually enforced, please refer to the
* ssam_client_link() and ssam_client_bind() functions, which are intended to
* set up device-links for this purpose.
*
* Return: Returns zero on success, a negative error code on failure.
*/
int ssam_device_add(struct ssam_device *sdev)
{
int status;
/*
* Ensure that we can only add new devices to a controller if it has
* been started and is not going away soon. This works in combination
* with ssam_controller_remove_clients to ensure driver presence for the
* controller device, i.e. it ensures that the controller (sdev->ctrl)
* is always valid and can be used for requests as long as the client
* device we add here is registered as child under it. This essentially
* guarantees that the client driver can always expect the preconditions
* for functions like ssam_request_sync (controller has to be started
* and is not suspended) to hold and thus does not have to check for
* them.
*
* Note that for this to work, the controller has to be a parent device.
* If it is not a direct parent, care has to be taken that the device is
* removed via ssam_device_remove(), as device_unregister does not
* remove child devices recursively.
*/
ssam_controller_statelock(sdev->ctrl);
if (sdev->ctrl->state != SSAM_CONTROLLER_STARTED) {
ssam_controller_stateunlock(sdev->ctrl);
return -ENODEV;
}
status = device_add(&sdev->dev);
ssam_controller_stateunlock(sdev->ctrl);
return status;
}
EXPORT_SYMBOL_GPL(ssam_device_add);
/**
* ssam_device_remove() - Remove a SSAM client device.
* @sdev: The device to remove.
*
* Removes and unregisters the provided SSAM client device.
*/
void ssam_device_remove(struct ssam_device *sdev)
{
device_unregister(&sdev->dev);
}
EXPORT_SYMBOL_GPL(ssam_device_remove);
/**
* ssam_device_id_compatible() - Check if a device ID matches a UID.
* @id: The device ID as potential match.
* @uid: The device UID matching against.
*
* Check if the given ID is a match for the given UID, i.e. if a device with
* the provided UID is compatible to the given ID following the match rules
* described in its &ssam_device_id.match_flags member.
*
* Return: Returns %true if the given UID is compatible to the match rule
* described by the given ID, %false otherwise.
*/
static bool ssam_device_id_compatible(const struct ssam_device_id *id,
struct ssam_device_uid uid)
{
if (id->domain != uid.domain || id->category != uid.category)
return false;
if ((id->match_flags & SSAM_MATCH_TARGET) && id->target != uid.target)
return false;
if ((id->match_flags & SSAM_MATCH_INSTANCE) && id->instance != uid.instance)
return false;
if ((id->match_flags & SSAM_MATCH_FUNCTION) && id->function != uid.function)
return false;
return true;
}
/**
* ssam_device_id_is_null() - Check if a device ID is null.
* @id: The device ID to check.
*
* Check if a given device ID is null, i.e. all zeros. Used to check for the
* end of ``MODULE_DEVICE_TABLE(ssam, ...)`` or similar lists.
*
* Return: Returns %true if the given ID represents a null ID, %false
* otherwise.
*/
static bool ssam_device_id_is_null(const struct ssam_device_id *id)
{
return id->match_flags == 0 &&
id->domain == 0 &&
id->category == 0 &&
id->target == 0 &&
id->instance == 0 &&
id->function == 0 &&
id->driver_data == 0;
}
/**
* ssam_device_id_match() - Find the matching ID table entry for the given UID.
* @table: The table to search in.
* @uid: The UID to be matched against the individual table entries.
*
* Find the first match for the provided device UID in the provided ID table
* and return it. Returns %NULL if no match could be found.
*/
const struct ssam_device_id *ssam_device_id_match(const struct ssam_device_id *table,
const struct ssam_device_uid uid)
{
const struct ssam_device_id *id;
for (id = table; !ssam_device_id_is_null(id); ++id)
if (ssam_device_id_compatible(id, uid))
return id;
return NULL;
}
EXPORT_SYMBOL_GPL(ssam_device_id_match);
/**
* ssam_device_get_match() - Find and return the ID matching the device in the
* ID table of the bound driver.
* @dev: The device for which to get the matching ID table entry.
*
* Find the first match for the UID of the device in the ID table of the
* currently bound driver and return it. Returns %NULL if the device does not
* have a driver bound to it, the driver does not have a match_table (i.e. it
* is %NULL), or there is no match in the driver's match_table.
*
* This function essentially calls ssam_device_id_match() with the ID table of
* the bound device driver and the UID of the device.
*
* Return: Returns the first match for the UID of the device in the device
* driver's match table, or %NULL if no such match could be found.
*/
const struct ssam_device_id *ssam_device_get_match(const struct ssam_device *dev)
{
const struct ssam_device_driver *sdrv;
sdrv = to_ssam_device_driver(dev->dev.driver);
if (!sdrv)
return NULL;
if (!sdrv->match_table)
return NULL;
return ssam_device_id_match(sdrv->match_table, dev->uid);
}
EXPORT_SYMBOL_GPL(ssam_device_get_match);
/**
* ssam_device_get_match_data() - Find the ID matching the device in the
* ID table of the bound driver and return its ``driver_data`` member.
* @dev: The device for which to get the match data.
*
* Find the first match for the UID of the device in the ID table of the
* corresponding driver and return its driver_data. Returns %NULL if the
* device does not have a driver bound to it, the driver does not have
* match_table (i.e. it is %NULL), there is no match in the driver's
* match_table, or the match does not have any driver_data.
*
* This function essentially calls ssam_device_get_match() and, if any match
* could be found, returns its ``struct ssam_device_id.driver_data`` member.
*
* Return: Returns the driver data associated with the first match for the UID
* of the device in the device driver's match table, or %NULL if no such match
* could be found.
*/
const void *ssam_device_get_match_data(const struct ssam_device *dev)
{
const struct ssam_device_id *id;
id = ssam_device_get_match(dev);
if (!id)
return NULL;
return (const void *)id->driver_data;
}
EXPORT_SYMBOL_GPL(ssam_device_get_match_data);
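/*
 * Usage sketch (illustrative only, not part of this file): a client driver
 * can fetch per-model data via ssam_device_get_match_data() in its probe
 * callback. The sample_* names below are hypothetical.
 *
 *	static int sample_probe(struct ssam_device *sdev)
 *	{
 *		const struct sample_model_data *data;
 *
 *		data = ssam_device_get_match_data(sdev);
 *		if (!data)
 *			return -ENODEV;
 *
 *		// use data->... for model-specific setup
 *		return 0;
 *	}
 */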
static int ssam_bus_match(struct device *dev, struct device_driver *drv)
{
struct ssam_device_driver *sdrv = to_ssam_device_driver(drv);
struct ssam_device *sdev = to_ssam_device(dev);
if (!is_ssam_device(dev))
return 0;
return !!ssam_device_id_match(sdrv->match_table, sdev->uid);
}
static int ssam_bus_probe(struct device *dev)
{
return to_ssam_device_driver(dev->driver)
->probe(to_ssam_device(dev));
}
static int ssam_bus_remove(struct device *dev)
{
struct ssam_device_driver *sdrv = to_ssam_device_driver(dev->driver);
if (sdrv->remove)
sdrv->remove(to_ssam_device(dev));
return 0;
}
struct bus_type ssam_bus_type = {
.name = "surface_aggregator",
.match = ssam_bus_match,
.probe = ssam_bus_probe,
.remove = ssam_bus_remove,
};
EXPORT_SYMBOL_GPL(ssam_bus_type);
/**
* __ssam_device_driver_register() - Register a SSAM client device driver.
* @sdrv: The driver to register.
* @owner: The module owning the provided driver.
*
* Please refer to the ssam_device_driver_register() macro for the normal way
* to register a driver from inside its owning module.
*/
int __ssam_device_driver_register(struct ssam_device_driver *sdrv,
struct module *owner)
{
sdrv->driver.owner = owner;
sdrv->driver.bus = &ssam_bus_type;
/* force drivers to async probe so I/O is possible in probe */
sdrv->driver.probe_type = PROBE_PREFER_ASYNCHRONOUS;
return driver_register(&sdrv->driver);
}
EXPORT_SYMBOL_GPL(__ssam_device_driver_register);
/**
* ssam_device_driver_unregister() - Unregister a SSAM device driver.
* @sdrv: The driver to unregister.
*/
void ssam_device_driver_unregister(struct ssam_device_driver *sdrv)
{
driver_unregister(&sdrv->driver);
}
EXPORT_SYMBOL_GPL(ssam_device_driver_unregister);
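/*
 * Usage sketch (illustrative only, not part of this file): registering a
 * client driver on this bus. The match-table entry fields mirror the ID
 * fields compared by the matching code above; all names and values here
 * are hypothetical.
 *
 *	static const struct ssam_device_id sample_match[] = {
 *		{ .category = 0x15, .target = 0x02,
 *		  .instance = 0x00, .function = 0x00 },
 *		{ },	// all-zero terminating entry
 *	};
 *
 *	static struct ssam_device_driver sample_driver = {
 *		.probe = sample_probe,
 *		.remove = sample_remove,
 *		.match_table = sample_match,
 *		.driver = {
 *			.name = "sample_ssam_client",
 *		},
 *	};
 *	ssam_device_driver_register(&sample_driver);
 */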
static int ssam_remove_device(struct device *dev, void *_data)
{
struct ssam_device *sdev = to_ssam_device(dev);
if (is_ssam_device(dev))
ssam_device_remove(sdev);
return 0;
}
/**
* ssam_controller_remove_clients() - Remove SSAM client devices registered as
* direct children under the given controller.
* @ctrl: The controller to remove all direct clients for.
*
* Remove all SSAM client devices registered as direct children under the
* given controller. Note that this only accounts for direct children of the
* controller device. This does not take care of any client devices where the
* parent device has been manually set before calling ssam_device_add(). Refer
* to ssam_device_add()/ssam_device_remove() for more details on those cases.
*
* To avoid new devices being added in parallel to this call, the main
* controller lock (not statelock) must be held during this (and if
* necessary, any subsequent deinitialization) call.
*/
void ssam_controller_remove_clients(struct ssam_controller *ctrl)
{
struct device *dev;
dev = ssam_controller_device(ctrl);
device_for_each_child_reverse(dev, NULL, ssam_remove_device);
}
/**
* ssam_bus_register() - Register and set-up the SSAM client device bus.
*/
int ssam_bus_register(void)
{
return bus_register(&ssam_bus_type);
}
/**
* ssam_bus_unregister() - Unregister the SSAM client device bus.
*/
void ssam_bus_unregister(void)
{
return bus_unregister(&ssam_bus_type);
}


@@ -0,0 +1,27 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Surface System Aggregator Module bus and device integration.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_BUS_H
#define _SURFACE_AGGREGATOR_BUS_H
#include <linux/surface_aggregator/controller.h>
#ifdef CONFIG_SURFACE_AGGREGATOR_BUS
void ssam_controller_remove_clients(struct ssam_controller *ctrl);
int ssam_bus_register(void);
void ssam_bus_unregister(void);
#else /* CONFIG_SURFACE_AGGREGATOR_BUS */
static inline void ssam_controller_remove_clients(struct ssam_controller *ctrl) {}
static inline int ssam_bus_register(void) { return 0; }
static inline void ssam_bus_unregister(void) {}
#endif /* CONFIG_SURFACE_AGGREGATOR_BUS */
#endif /* _SURFACE_AGGREGATOR_BUS_H */



@@ -0,0 +1,285 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Main SSAM/SSH controller structure and functionality.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_CONTROLLER_H
#define _SURFACE_AGGREGATOR_CONTROLLER_H
#include <linux/kref.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/rbtree.h>
#include <linux/rwsem.h>
#include <linux/serdev.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>
#include <linux/types.h>
#include <linux/workqueue.h>
#include <linux/surface_aggregator/controller.h>
#include <linux/surface_aggregator/serial_hub.h>
#include "ssh_request_layer.h"
/* -- Safe counters. -------------------------------------------------------- */
/**
* struct ssh_seq_counter - Safe counter for SSH sequence IDs.
* @value: The current counter value.
*/
struct ssh_seq_counter {
u8 value;
};
/**
* struct ssh_rqid_counter - Safe counter for SSH request IDs.
* @value: The current counter value.
*/
struct ssh_rqid_counter {
u16 value;
};
/* -- Event/notification system. -------------------------------------------- */
/**
* struct ssam_nf_head - Notifier head for SSAM events.
* @srcu: The SRCU struct for synchronization.
* @head: List-head for notifier blocks registered under this head.
*/
struct ssam_nf_head {
struct srcu_struct srcu;
struct list_head head;
};
/**
* struct ssam_nf - Notifier callback- and activation-registry for SSAM events.
* @lock: Lock guarding (de-)registration of notifier blocks. Note: This
* lock does not need to be held for notifier calls, only
* registration and deregistration.
* @refcount: The root of the RB-tree used for reference-counting enabled
* events/notifications.
* @head: The list of notifier heads for event/notification callbacks.
*/
struct ssam_nf {
struct mutex lock;
struct rb_root refcount;
struct ssam_nf_head head[SSH_NUM_EVENTS];
};
/* -- Event/async request completion system. -------------------------------- */
struct ssam_cplt;
/**
* struct ssam_event_item - Struct for event queuing and completion.
* @node: The node in the queue.
* @rqid: The request ID of the event.
* @ops: Instance specific functions.
* @ops.free: Callback for freeing this event item.
* @event: Actual event data.
*/
struct ssam_event_item {
struct list_head node;
u16 rqid;
struct {
void (*free)(struct ssam_event_item *event);
} ops;
struct ssam_event event; /* must be last */
};
/**
* struct ssam_event_queue - Queue for completing received events.
* @cplt: Reference to the completion system on which this queue is active.
* @lock: The lock for any operation on the queue.
* @head: The list-head of the queue.
* @work: The &struct work_struct performing completion work for this queue.
*/
struct ssam_event_queue {
struct ssam_cplt *cplt;
spinlock_t lock;
struct list_head head;
struct work_struct work;
};
/**
* struct ssam_event_target - Set of queues for a single SSH target ID.
* @queue: The array of queues, one queue per event ID.
*/
struct ssam_event_target {
struct ssam_event_queue queue[SSH_NUM_EVENTS];
};
/**
* struct ssam_cplt - SSAM event/async request completion system.
* @dev: The device with which this system is associated. Only used
* for logging.
* @wq: The &struct workqueue_struct on which all completion work
* items are queued.
* @event: Event completion management.
* @event.target: Array of &struct ssam_event_target, one for each target.
* @event.notif: Notifier callbacks and event activation reference counting.
*/
struct ssam_cplt {
struct device *dev;
struct workqueue_struct *wq;
struct {
struct ssam_event_target target[SSH_NUM_TARGETS];
struct ssam_nf notif;
} event;
};
/* -- Main SSAM device structures. ------------------------------------------ */
/**
* enum ssam_controller_state - State values for &struct ssam_controller.
* @SSAM_CONTROLLER_UNINITIALIZED:
* The controller has not been initialized yet or has been deinitialized.
* @SSAM_CONTROLLER_INITIALIZED:
* The controller is initialized, but has not been started yet.
* @SSAM_CONTROLLER_STARTED:
* The controller has been started and is ready to use.
* @SSAM_CONTROLLER_STOPPED:
* The controller has been stopped.
* @SSAM_CONTROLLER_SUSPENDED:
* The controller has been suspended.
*/
enum ssam_controller_state {
SSAM_CONTROLLER_UNINITIALIZED,
SSAM_CONTROLLER_INITIALIZED,
SSAM_CONTROLLER_STARTED,
SSAM_CONTROLLER_STOPPED,
SSAM_CONTROLLER_SUSPENDED,
};
/**
* struct ssam_controller_caps - Controller device capabilities.
* @ssh_power_profile: SSH power profile.
* @ssh_buffer_size: SSH driver UART buffer size.
* @screen_on_sleep_idle_timeout: SAM UART screen-on sleep idle timeout.
* @screen_off_sleep_idle_timeout: SAM UART screen-off sleep idle timeout.
* @d3_closes_handle: SAM closes UART handle in D3.
*
* Controller and SSH device capabilities found in ACPI.
*/
struct ssam_controller_caps {
u32 ssh_power_profile;
u32 ssh_buffer_size;
u32 screen_on_sleep_idle_timeout;
u32 screen_off_sleep_idle_timeout;
u32 d3_closes_handle:1;
};
/**
* struct ssam_controller - SSAM controller device.
* @kref: Reference count of the controller.
* @lock: Main lock for the controller, used to guard state changes.
* @state: Controller state.
* @rtl: Request transport layer for SSH I/O.
* @cplt: Completion system for SSH/SSAM events and asynchronous requests.
* @counter: Safe SSH message ID counters.
* @counter.seq: Sequence ID counter.
* @counter.rqid: Request ID counter.
* @irq: Wakeup IRQ resources.
* @irq.num: The wakeup IRQ number.
* @irq.wakeup_enabled: Whether wakeup by IRQ is enabled during suspend.
* @caps: The controller device capabilities.
*/
struct ssam_controller {
struct kref kref;
struct rw_semaphore lock;
enum ssam_controller_state state;
struct ssh_rtl rtl;
struct ssam_cplt cplt;
struct {
struct ssh_seq_counter seq;
struct ssh_rqid_counter rqid;
} counter;
struct {
int num;
bool wakeup_enabled;
} irq;
struct ssam_controller_caps caps;
};
#define to_ssam_controller(ptr, member) \
container_of(ptr, struct ssam_controller, member)
#define ssam_dbg(ctrl, fmt, ...) rtl_dbg(&(ctrl)->rtl, fmt, ##__VA_ARGS__)
#define ssam_info(ctrl, fmt, ...) rtl_info(&(ctrl)->rtl, fmt, ##__VA_ARGS__)
#define ssam_warn(ctrl, fmt, ...) rtl_warn(&(ctrl)->rtl, fmt, ##__VA_ARGS__)
#define ssam_err(ctrl, fmt, ...) rtl_err(&(ctrl)->rtl, fmt, ##__VA_ARGS__)
/**
* ssam_controller_receive_buf() - Provide input-data to the controller.
* @ctrl: The controller.
* @buf: The input buffer.
* @n: The number of bytes in the input buffer.
*
* Provide input data to be evaluated by the controller, which has been
* received via the lower-level transport.
*
* Return: Returns the number of bytes consumed, or, if the packet transport
* layer of the controller has been shut down, %-ESHUTDOWN.
*/
static inline
int ssam_controller_receive_buf(struct ssam_controller *ctrl,
const unsigned char *buf, size_t n)
{
return ssh_ptl_rx_rcvbuf(&ctrl->rtl.ptl, buf, n);
}
/**
* ssam_controller_write_wakeup() - Notify the controller that the underlying
* device has space available for data to be written.
* @ctrl: The controller.
*/
static inline void ssam_controller_write_wakeup(struct ssam_controller *ctrl)
{
ssh_ptl_tx_wakeup_transfer(&ctrl->rtl.ptl);
}
int ssam_controller_init(struct ssam_controller *ctrl, struct serdev_device *s);
int ssam_controller_start(struct ssam_controller *ctrl);
void ssam_controller_shutdown(struct ssam_controller *ctrl);
void ssam_controller_destroy(struct ssam_controller *ctrl);
int ssam_notifier_disable_registered(struct ssam_controller *ctrl);
void ssam_notifier_restore_registered(struct ssam_controller *ctrl);
int ssam_irq_setup(struct ssam_controller *ctrl);
void ssam_irq_free(struct ssam_controller *ctrl);
int ssam_irq_arm_for_wakeup(struct ssam_controller *ctrl);
void ssam_irq_disarm_wakeup(struct ssam_controller *ctrl);
void ssam_controller_lock(struct ssam_controller *c);
void ssam_controller_unlock(struct ssam_controller *c);
int ssam_get_firmware_version(struct ssam_controller *ctrl, u32 *version);
int ssam_ctrl_notif_display_off(struct ssam_controller *ctrl);
int ssam_ctrl_notif_display_on(struct ssam_controller *ctrl);
int ssam_ctrl_notif_d0_exit(struct ssam_controller *ctrl);
int ssam_ctrl_notif_d0_entry(struct ssam_controller *ctrl);
int ssam_controller_suspend(struct ssam_controller *ctrl);
int ssam_controller_resume(struct ssam_controller *ctrl);
int ssam_event_item_cache_init(void);
void ssam_event_item_cache_destroy(void);
#endif /* _SURFACE_AGGREGATOR_CONTROLLER_H */


@@ -0,0 +1,839 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Surface Serial Hub (SSH) driver for communication with the Surface/System
* Aggregator Module (SSAM/SAM).
*
* Provides access to a SAM-over-SSH connected EC via a controller device.
* Handles communication via requests as well as enabling, disabling, and
* relaying of events.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <linux/acpi.h>
#include <linux/atomic.h>
#include <linux/completion.h>
#include <linux/gpio/consumer.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/module.h>
#include <linux/pm.h>
#include <linux/serdev.h>
#include <linux/sysfs.h>
#include <linux/surface_aggregator/controller.h>
#include "bus.h"
#include "controller.h"
#define CREATE_TRACE_POINTS
#include "trace.h"
/* -- Static controller reference. ------------------------------------------ */
/*
* Main controller reference. The corresponding lock must be held while
* accessing (reading/writing) the reference.
*/
static struct ssam_controller *__ssam_controller;
static DEFINE_SPINLOCK(__ssam_controller_lock);
/**
* ssam_get_controller() - Get reference to SSAM controller.
*
* Returns a reference to the SSAM controller of the system or %NULL if there
* is none, it hasn't been set up yet, or it has already been unregistered.
* This function automatically increments the reference count of the
* controller, thus the calling party must ensure that ssam_controller_put()
* is called when it doesn't need the controller any more.
*/
struct ssam_controller *ssam_get_controller(void)
{
struct ssam_controller *ctrl;
spin_lock(&__ssam_controller_lock);
ctrl = __ssam_controller;
if (!ctrl)
goto out;
if (WARN_ON(!kref_get_unless_zero(&ctrl->kref)))
ctrl = NULL;
out:
spin_unlock(&__ssam_controller_lock);
return ctrl;
}
EXPORT_SYMBOL_GPL(ssam_get_controller);
/**
* ssam_try_set_controller() - Try to set the main controller reference.
* @ctrl: The controller to which the reference should point.
*
* Set the main controller reference to the given pointer if the reference
* hasn't been set already.
*
* Return: Returns zero on success or %-EEXIST if the reference has already
* been set.
*/
static int ssam_try_set_controller(struct ssam_controller *ctrl)
{
int status = 0;
spin_lock(&__ssam_controller_lock);
if (!__ssam_controller)
__ssam_controller = ctrl;
else
status = -EEXIST;
spin_unlock(&__ssam_controller_lock);
return status;
}
/**
* ssam_clear_controller() - Remove/clear the main controller reference.
*
* Clears the main controller reference, i.e. sets it to %NULL. This function
* should be called before the controller is shut down.
*/
static void ssam_clear_controller(void)
{
spin_lock(&__ssam_controller_lock);
__ssam_controller = NULL;
spin_unlock(&__ssam_controller_lock);
}
/**
* ssam_client_link() - Link an arbitrary client device to the controller.
* @c: The controller to link to.
* @client: The client device.
*
* Link an arbitrary client device to the controller by creating a device link
* between it as consumer and the controller device as provider. This function
* can be used for non-SSAM devices (or SSAM devices not registered as child
* under the controller) to guarantee that the controller is valid for as long
* as the driver of the client device is bound, and that proper suspend and
* resume ordering is guaranteed.
*
* The device link does not have to be destructed manually. It is removed
* automatically once the driver of the client device unbinds.
*
* Return: Returns zero on success, %-ENODEV if the controller is not ready or
* going to be removed soon, or %-ENOMEM if the device link could not be
* created for other reasons.
*/
int ssam_client_link(struct ssam_controller *c, struct device *client)
{
const u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;
struct device_link *link;
struct device *ctrldev;
ssam_controller_statelock(c);
if (c->state != SSAM_CONTROLLER_STARTED) {
ssam_controller_stateunlock(c);
return -ENODEV;
}
ctrldev = ssam_controller_device(c);
if (!ctrldev) {
ssam_controller_stateunlock(c);
return -ENODEV;
}
link = device_link_add(client, ctrldev, flags);
if (!link) {
ssam_controller_stateunlock(c);
return -ENOMEM;
}
/*
* Return -ENODEV if supplier driver is on its way to be removed. In
* this case, the controller won't be around for much longer and the
* device link is not going to save us any more, as unbinding is
* already in progress.
*/
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) {
ssam_controller_stateunlock(c);
return -ENODEV;
}
ssam_controller_stateunlock(c);
return 0;
}
EXPORT_SYMBOL_GPL(ssam_client_link);
/**
* ssam_client_bind() - Bind an arbitrary client device to the controller.
* @client: The client device.
*
* Link an arbitrary client device to the controller by creating a device link
* between it as consumer and the main controller device as provider. This
* function can be used for non-SSAM devices to guarantee that the controller
* returned by this function is valid for as long as the driver of the client
* device is bound, and that proper suspend and resume ordering is guaranteed.
*
* This function does essentially the same as ssam_client_link(), except that
* it first fetches the main controller reference, then creates the link, and
* finally returns this reference. Note that this function does not increment
* the reference counter of the controller, as, due to the link, the
* controller lifetime is assured as long as the driver of the client device
* is bound.
*
* It is not valid to use the controller reference obtained by this method
* outside of the driver bound to the client device at the time of calling
* this function, without first incrementing the reference count of the
* controller via ssam_controller_get(). Even after doing this, care must be
* taken that requests are only submitted and notifiers are only
* (un-)registered when the controller is active and not suspended. In other
* words: The device link only lives as long as the client driver is bound and
* any guarantees enforced by this link (e.g. active controller state) can
* only be relied upon as long as this link exists and may need to be enforced
* in other ways afterwards.
*
* The created device link does not have to be destructed manually. It is
* removed automatically once the driver of the client device unbinds.
*
* Return: Returns the controller on success, an error pointer with %-ENODEV
* if the controller is not present, not ready or going to be removed soon, or
* %-ENOMEM if the device link could not be created for other reasons.
*/
struct ssam_controller *ssam_client_bind(struct device *client)
{
struct ssam_controller *c;
int status;
c = ssam_get_controller();
if (!c)
return ERR_PTR(-ENODEV);
status = ssam_client_link(c, client);
/*
* Note that we can drop our controller reference in both success and
* failure cases: On success, we have bound the controller lifetime
* inherently to the client driver lifetime, i.e. the controller is
* now guaranteed to outlive the client driver. On failure, we're not
* going to use the controller any more.
*/
ssam_controller_put(c);
return status >= 0 ? c : ERR_PTR(status);
}
EXPORT_SYMBOL_GPL(ssam_client_bind);
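/*
 * Usage sketch (illustrative only, not part of this file): a non-SSAM
 * driver, e.g. a platform driver, can tie itself to the controller via
 * ssam_client_bind() in its probe callback. Names are hypothetical;
 * translating -ENODEV into -EPROBE_DEFER lets probing retry once the
 * controller has come up.
 *
 *	static int sample_platform_probe(struct platform_device *pdev)
 *	{
 *		struct ssam_controller *ctrl;
 *
 *		ctrl = ssam_client_bind(&pdev->dev);
 *		if (IS_ERR(ctrl))
 *			return PTR_ERR(ctrl) == -ENODEV ?
 *				-EPROBE_DEFER : PTR_ERR(ctrl);
 *
 *		// ctrl stays valid while this driver is bound
 *		return 0;
 *	}
 */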
/* -- Glue layer (serdev_device -> ssam_controller). ------------------------ */
static int ssam_receive_buf(struct serdev_device *dev, const unsigned char *buf,
size_t n)
{
struct ssam_controller *ctrl;
ctrl = serdev_device_get_drvdata(dev);
return ssam_controller_receive_buf(ctrl, buf, n);
}
static void ssam_write_wakeup(struct serdev_device *dev)
{
ssam_controller_write_wakeup(serdev_device_get_drvdata(dev));
}
static const struct serdev_device_ops ssam_serdev_ops = {
.receive_buf = ssam_receive_buf,
.write_wakeup = ssam_write_wakeup,
};
/* -- SysFS and misc. ------------------------------------------------------- */
static int ssam_log_firmware_version(struct ssam_controller *ctrl)
{
u32 version, a, b, c;
int status;
status = ssam_get_firmware_version(ctrl, &version);
if (status)
return status;
a = (version >> 24) & 0xff;
b = ((version >> 8) & 0xffff);
c = version & 0xff;
ssam_info(ctrl, "SAM firmware version: %u.%u.%u\n", a, b, c);
return 0;
}
static ssize_t firmware_version_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct ssam_controller *ctrl = dev_get_drvdata(dev);
u32 version, a, b, c;
int status;
status = ssam_get_firmware_version(ctrl, &version);
if (status < 0)
return status;
a = (version >> 24) & 0xff;
b = ((version >> 8) & 0xffff);
c = version & 0xff;
return sysfs_emit(buf, "%u.%u.%u\n", a, b, c);
}
static DEVICE_ATTR_RO(firmware_version);
static struct attribute *ssam_sam_attrs[] = {
&dev_attr_firmware_version.attr,
NULL
};
static const struct attribute_group ssam_sam_group = {
.name = "sam",
.attrs = ssam_sam_attrs,
};
/* -- ACPI based device setup. ---------------------------------------------- */
static acpi_status ssam_serdev_setup_via_acpi_crs(struct acpi_resource *rsc,
void *ctx)
{
struct serdev_device *serdev = ctx;
struct acpi_resource_common_serialbus *serial;
struct acpi_resource_uart_serialbus *uart;
bool flow_control;
int status = 0;
if (rsc->type != ACPI_RESOURCE_TYPE_SERIAL_BUS)
return AE_OK;
serial = &rsc->data.common_serial_bus;
if (serial->type != ACPI_RESOURCE_SERIAL_TYPE_UART)
return AE_OK;
uart = &rsc->data.uart_serial_bus;
/* Set up serdev device. */
serdev_device_set_baudrate(serdev, uart->default_baud_rate);
/* serdev currently only supports RTSCTS flow control. */
if (uart->flow_control & (~((u8)ACPI_UART_FLOW_CONTROL_HW))) {
dev_warn(&serdev->dev, "setup: unsupported flow control (value: %#04x)\n",
uart->flow_control);
}
/* Set RTSCTS flow control. */
flow_control = uart->flow_control & ACPI_UART_FLOW_CONTROL_HW;
serdev_device_set_flow_control(serdev, flow_control);
/* serdev currently only supports EVEN/ODD parity. */
switch (uart->parity) {
case ACPI_UART_PARITY_NONE:
status = serdev_device_set_parity(serdev, SERDEV_PARITY_NONE);
break;
case ACPI_UART_PARITY_EVEN:
status = serdev_device_set_parity(serdev, SERDEV_PARITY_EVEN);
break;
case ACPI_UART_PARITY_ODD:
status = serdev_device_set_parity(serdev, SERDEV_PARITY_ODD);
break;
default:
dev_warn(&serdev->dev, "setup: unsupported parity (value: %#04x)\n",
uart->parity);
break;
}
if (status) {
dev_err(&serdev->dev, "setup: failed to set parity (value: %#04x, error: %d)\n",
uart->parity, status);
return AE_ERROR;
}
/* We've found the resource and are done. */
return AE_CTRL_TERMINATE;
}
static acpi_status ssam_serdev_setup_via_acpi(acpi_handle handle,
struct serdev_device *serdev)
{
return acpi_walk_resources(handle, METHOD_NAME__CRS,
ssam_serdev_setup_via_acpi_crs, serdev);
}
/* -- Power management. ----------------------------------------------------- */
static void ssam_serial_hub_shutdown(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* Try to disable notifiers, signal display-off and D0-exit, ignore any
* errors.
*
* Note: It has not been established yet if this is actually
* necessary/useful for shutdown.
*/
status = ssam_notifier_disable_registered(c);
if (status) {
ssam_err(c, "pm: failed to disable notifiers for shutdown: %d\n",
status);
}
status = ssam_ctrl_notif_display_off(c);
if (status)
ssam_err(c, "pm: display-off notification failed: %d\n", status);
status = ssam_ctrl_notif_d0_exit(c);
if (status)
ssam_err(c, "pm: D0-exit notification failed: %d\n", status);
}
#ifdef CONFIG_PM_SLEEP
static int ssam_serial_hub_pm_prepare(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* Try to signal display-off. This will quiesce events.
*
* Note: Signaling display-off/display-on should normally be done from
* some sort of display state notifier. As that is not available,
* signal it here.
*/
status = ssam_ctrl_notif_display_off(c);
if (status)
ssam_err(c, "pm: display-off notification failed: %d\n", status);
return status;
}
static void ssam_serial_hub_pm_complete(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* Try to signal display-on. This will restore events.
*
* Note: Signaling display-off/display-on should normally be done from
* some sort of display state notifier. As that is not available,
* signal it here.
*/
status = ssam_ctrl_notif_display_on(c);
if (status)
ssam_err(c, "pm: display-on notification failed: %d\n", status);
}
static int ssam_serial_hub_pm_suspend(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* Try to signal D0-exit, enable IRQ wakeup if specified. Abort on
* error.
*/
status = ssam_ctrl_notif_d0_exit(c);
if (status) {
ssam_err(c, "pm: D0-exit notification failed: %d\n", status);
goto err_notif;
}
status = ssam_irq_arm_for_wakeup(c);
if (status)
goto err_irq;
WARN_ON(ssam_controller_suspend(c));
return 0;
err_irq:
ssam_ctrl_notif_d0_entry(c);
err_notif:
ssam_ctrl_notif_display_on(c);
return status;
}
static int ssam_serial_hub_pm_resume(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
WARN_ON(ssam_controller_resume(c));
/*
* Try to disable IRQ wakeup (if specified) and signal D0-entry. In
* case of errors, log them and try to restore normal operation state
* as far as possible.
*
* Note: Signaling display-off/display-on should normally be done from
* some sort of display state notifier. As that is not available,
* signal it here.
*/
ssam_irq_disarm_wakeup(c);
status = ssam_ctrl_notif_d0_entry(c);
if (status)
ssam_err(c, "pm: D0-entry notification failed: %d\n", status);
return 0;
}
static int ssam_serial_hub_pm_freeze(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* During hibernation image creation, we only have to ensure that the
* EC doesn't send us any events. This is done via the display-off
* and D0-exit notifications. Note that this sets up the wakeup IRQ
* on the EC side, however, we have disabled it by default on our side
* and won't enable it here.
*
* See ssam_serial_hub_poweroff() for more details on the hibernation
* process.
*/
status = ssam_ctrl_notif_d0_exit(c);
if (status) {
ssam_err(c, "pm: D0-exit notification failed: %d\n", status);
ssam_ctrl_notif_display_on(c);
return status;
}
WARN_ON(ssam_controller_suspend(c));
return 0;
}
static int ssam_serial_hub_pm_thaw(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
WARN_ON(ssam_controller_resume(c));
status = ssam_ctrl_notif_d0_entry(c);
if (status)
ssam_err(c, "pm: D0-exit notification failed: %d\n", status);
return status;
}
static int ssam_serial_hub_pm_poweroff(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* When entering hibernation and powering off the system, the EC, at
* least on some models, may disable events. Without us taking care of
* that, this leads to events not being enabled/restored when the
* system resumes from hibernation, resulting in SAM-HID subsystem devices
* (i.e. keyboard, touchpad) not working, AC-plug/AC-unplug events being
* gone, etc.
*
* To avoid these issues, we disable all registered events here (this is
* likely not actually required) and restore them during the driver's PM
* restore callback.
*
* Wakeup from the EC interrupt is not supported during hibernation,
* so don't arm the IRQ here.
*/
status = ssam_notifier_disable_registered(c);
if (status) {
ssam_err(c, "pm: failed to disable notifiers for hibernation: %d\n",
status);
return status;
}
status = ssam_ctrl_notif_d0_exit(c);
if (status) {
ssam_err(c, "pm: D0-exit notification failed: %d\n", status);
ssam_notifier_restore_registered(c);
return status;
}
WARN_ON(ssam_controller_suspend(c));
return 0;
}
static int ssam_serial_hub_pm_restore(struct device *dev)
{
struct ssam_controller *c = dev_get_drvdata(dev);
int status;
/*
* Ignore but log errors, try to restore state as much as possible in
* case of failures. See ssam_serial_hub_poweroff() for more details on
* the hibernation process.
*/
WARN_ON(ssam_controller_resume(c));
status = ssam_ctrl_notif_d0_entry(c);
if (status)
ssam_err(c, "pm: D0-entry notification failed: %d\n", status);
ssam_notifier_restore_registered(c);
return 0;
}
static const struct dev_pm_ops ssam_serial_hub_pm_ops = {
.prepare = ssam_serial_hub_pm_prepare,
.complete = ssam_serial_hub_pm_complete,
.suspend = ssam_serial_hub_pm_suspend,
.resume = ssam_serial_hub_pm_resume,
.freeze = ssam_serial_hub_pm_freeze,
.thaw = ssam_serial_hub_pm_thaw,
.poweroff = ssam_serial_hub_pm_poweroff,
.restore = ssam_serial_hub_pm_restore,
};
#else /* CONFIG_PM_SLEEP */
static const struct dev_pm_ops ssam_serial_hub_pm_ops = { };
#endif /* CONFIG_PM_SLEEP */
/* -- Device/driver setup. -------------------------------------------------- */
static const struct acpi_gpio_params gpio_ssam_wakeup_int = { 0, 0, false };
static const struct acpi_gpio_params gpio_ssam_wakeup = { 1, 0, false };
static const struct acpi_gpio_mapping ssam_acpi_gpios[] = {
{ "ssam_wakeup-int-gpio", &gpio_ssam_wakeup_int, 1 },
{ "ssam_wakeup-gpio", &gpio_ssam_wakeup, 1 },
{ },
};
static int ssam_serial_hub_probe(struct serdev_device *serdev)
{
struct ssam_controller *ctrl;
acpi_handle *ssh = ACPI_HANDLE(&serdev->dev);
acpi_status astatus;
int status;
if (gpiod_count(&serdev->dev, NULL) < 0)
return -ENODEV;
status = devm_acpi_dev_add_driver_gpios(&serdev->dev, ssam_acpi_gpios);
if (status)
return status;
/* Allocate controller. */
ctrl = kzalloc(sizeof(*ctrl), GFP_KERNEL);
if (!ctrl)
return -ENOMEM;
/* Initialize controller. */
status = ssam_controller_init(ctrl, serdev);
if (status)
goto err_ctrl_init;
ssam_controller_lock(ctrl);
/* Set up serdev device. */
serdev_device_set_drvdata(serdev, ctrl);
serdev_device_set_client_ops(serdev, &ssam_serdev_ops);
status = serdev_device_open(serdev);
if (status)
goto err_devopen;
astatus = ssam_serdev_setup_via_acpi(ssh, serdev);
if (ACPI_FAILURE(astatus)) {
status = -ENXIO;
goto err_devinit;
}
/* Start controller. */
status = ssam_controller_start(ctrl);
if (status)
goto err_devinit;
ssam_controller_unlock(ctrl);
/*
* Initial SAM requests: Log version and notify default/init power
* states.
*/
status = ssam_log_firmware_version(ctrl);
if (status)
goto err_initrq;
status = ssam_ctrl_notif_d0_entry(ctrl);
if (status)
goto err_initrq;
status = ssam_ctrl_notif_display_on(ctrl);
if (status)
goto err_initrq;
status = sysfs_create_group(&serdev->dev.kobj, &ssam_sam_group);
if (status)
goto err_initrq;
/* Set up IRQ. */
status = ssam_irq_setup(ctrl);
if (status)
goto err_irq;
/* Finally, set main controller reference. */
status = ssam_try_set_controller(ctrl);
if (WARN_ON(status)) /* Currently, we're the only provider. */
goto err_mainref;
/*
* TODO: The EC can wake up the system via the associated GPIO interrupt
* in multiple situations. One of which is the remaining battery
* capacity falling below a certain threshold. Normally, we should
* use the device_init_wakeup function, however, the EC also seems
* to have other reasons for waking up the system and it seems
* that Windows has additional checks whether the system should be
* resumed. In short, this causes some spurious unwanted wake-ups.
* For now let's thus default power/wakeup to false.
*/
device_set_wakeup_capable(&serdev->dev, true);
acpi_walk_dep_device_list(ssh);
return 0;
err_mainref:
ssam_irq_free(ctrl);
err_irq:
sysfs_remove_group(&serdev->dev.kobj, &ssam_sam_group);
err_initrq:
ssam_controller_lock(ctrl);
ssam_controller_shutdown(ctrl);
err_devinit:
serdev_device_close(serdev);
err_devopen:
ssam_controller_destroy(ctrl);
ssam_controller_unlock(ctrl);
err_ctrl_init:
kfree(ctrl);
return status;
}
static void ssam_serial_hub_remove(struct serdev_device *serdev)
{
struct ssam_controller *ctrl = serdev_device_get_drvdata(serdev);
int status;
/* Clear static reference so that no one else can get a new one. */
ssam_clear_controller();
/* Disable and free IRQ. */
ssam_irq_free(ctrl);
sysfs_remove_group(&serdev->dev.kobj, &ssam_sam_group);
ssam_controller_lock(ctrl);
/* Remove all client devices. */
ssam_controller_remove_clients(ctrl);
/* Act as if suspending to silence events. */
status = ssam_ctrl_notif_display_off(ctrl);
if (status) {
dev_err(&serdev->dev, "display-off notification failed: %d\n",
status);
}
status = ssam_ctrl_notif_d0_exit(ctrl);
if (status) {
dev_err(&serdev->dev, "D0-exit notification failed: %d\n",
status);
}
/* Shut down controller and remove serdev device reference from it. */
ssam_controller_shutdown(ctrl);
/* Shut down actual transport. */
serdev_device_wait_until_sent(serdev, 0);
serdev_device_close(serdev);
/* Drop our controller reference. */
ssam_controller_unlock(ctrl);
ssam_controller_put(ctrl);
device_set_wakeup_capable(&serdev->dev, false);
}
static const struct acpi_device_id ssam_serial_hub_match[] = {
{ "MSHW0084", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, ssam_serial_hub_match);
static struct serdev_device_driver ssam_serial_hub = {
.probe = ssam_serial_hub_probe,
.remove = ssam_serial_hub_remove,
.driver = {
.name = "surface_serial_hub",
.acpi_match_table = ssam_serial_hub_match,
.pm = &ssam_serial_hub_pm_ops,
.shutdown = ssam_serial_hub_shutdown,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
/* -- Module setup. --------------------------------------------------------- */
static int __init ssam_core_init(void)
{
int status;
status = ssam_bus_register();
if (status)
goto err_bus;
status = ssh_ctrl_packet_cache_init();
if (status)
goto err_cpkg;
status = ssam_event_item_cache_init();
if (status)
goto err_evitem;
status = serdev_device_driver_register(&ssam_serial_hub);
if (status)
goto err_register;
return 0;
err_register:
ssam_event_item_cache_destroy();
err_evitem:
ssh_ctrl_packet_cache_destroy();
err_cpkg:
ssam_bus_unregister();
err_bus:
return status;
}
module_init(ssam_core_init);
static void __exit ssam_core_exit(void)
{
serdev_device_driver_unregister(&ssam_serial_hub);
ssam_event_item_cache_destroy();
ssh_ctrl_packet_cache_destroy();
ssam_bus_unregister();
}
module_exit(ssam_core_exit);
MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>");
MODULE_DESCRIPTION("Subsystem and Surface Serial Hub driver for Surface System Aggregator Module");
MODULE_LICENSE("GPL");

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* SSH message builder functions.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_SSH_MSGB_H
#define _SURFACE_AGGREGATOR_SSH_MSGB_H
#include <asm/unaligned.h>
#include <linux/types.h>
#include <linux/surface_aggregator/controller.h>
#include <linux/surface_aggregator/serial_hub.h>
/**
* struct msgbuf - Buffer struct to construct SSH messages.
* @begin: Pointer to the beginning of the allocated buffer space.
* @end: Pointer to the end (one past last element) of the allocated buffer
* space.
* @ptr: Pointer to the first free element in the buffer.
*/
struct msgbuf {
u8 *begin;
u8 *end;
u8 *ptr;
};
/**
* msgb_init() - Initialize the given message buffer struct.
* @msgb: The buffer struct to initialize
* @ptr: Pointer to the underlying memory by which the buffer will be backed.
* @cap: Size of the underlying memory.
*
* Initialize the given message buffer struct using the provided memory as
* backing.
*/
static inline void msgb_init(struct msgbuf *msgb, u8 *ptr, size_t cap)
{
msgb->begin = ptr;
msgb->end = ptr + cap;
msgb->ptr = ptr;
}
/**
* msgb_bytes_used() - Return the current number of bytes used in the buffer.
* @msgb: The message buffer.
*/
static inline size_t msgb_bytes_used(const struct msgbuf *msgb)
{
return msgb->ptr - msgb->begin;
}
static inline void __msgb_push_u8(struct msgbuf *msgb, u8 value)
{
*msgb->ptr = value;
msgb->ptr += sizeof(u8);
}
static inline void __msgb_push_u16(struct msgbuf *msgb, u16 value)
{
put_unaligned_le16(value, msgb->ptr);
msgb->ptr += sizeof(u16);
}
/**
* msgb_push_u16() - Push a u16 value to the buffer.
* @msgb: The message buffer.
* @value: The value to push to the buffer.
*/
static inline void msgb_push_u16(struct msgbuf *msgb, u16 value)
{
if (WARN_ON(msgb->ptr + sizeof(u16) > msgb->end))
return;
__msgb_push_u16(msgb, value);
}
/**
* msgb_push_syn() - Push SSH SYN bytes to the buffer.
* @msgb: The message buffer.
*/
static inline void msgb_push_syn(struct msgbuf *msgb)
{
msgb_push_u16(msgb, SSH_MSG_SYN);
}
/**
* msgb_push_buf() - Push raw data to the buffer.
* @msgb: The message buffer.
* @buf: The data to push to the buffer.
* @len: The length of the data to push to the buffer.
*/
static inline void msgb_push_buf(struct msgbuf *msgb, const u8 *buf, size_t len)
{
msgb->ptr = memcpy(msgb->ptr, buf, len) + len;
}
/**
* msgb_push_crc() - Compute CRC and push it to the buffer.
* @msgb: The message buffer.
* @buf: The data for which the CRC should be computed.
* @len: The length of the data for which the CRC should be computed.
*/
static inline void msgb_push_crc(struct msgbuf *msgb, const u8 *buf, size_t len)
{
msgb_push_u16(msgb, ssh_crc(buf, len));
}
/**
* msgb_push_frame() - Push a SSH message frame header to the buffer.
* @msgb: The message buffer
* @ty: The type of the frame.
* @len: The length of the payload of the frame.
* @seq: The sequence ID of the frame/packet.
*/
static inline void msgb_push_frame(struct msgbuf *msgb, u8 ty, u16 len, u8 seq)
{
u8 *const begin = msgb->ptr;
if (WARN_ON(msgb->ptr + sizeof(struct ssh_frame) > msgb->end))
return;
__msgb_push_u8(msgb, ty); /* Frame type. */
__msgb_push_u16(msgb, len); /* Frame payload length. */
__msgb_push_u8(msgb, seq); /* Frame sequence ID. */
msgb_push_crc(msgb, begin, msgb->ptr - begin);
}
/**
* msgb_push_ack() - Push a SSH ACK frame to the buffer.
* @msgb: The message buffer
* @seq: The sequence ID of the frame/packet to be ACKed.
*/
static inline void msgb_push_ack(struct msgbuf *msgb, u8 seq)
{
/* SYN. */
msgb_push_syn(msgb);
/* ACK-type frame + CRC. */
msgb_push_frame(msgb, SSH_FRAME_TYPE_ACK, 0x00, seq);
/* Payload CRC (ACK-type frames do not have a payload). */
msgb_push_crc(msgb, msgb->ptr, 0);
}
/**
* msgb_push_nak() - Push a SSH NAK frame to the buffer.
* @msgb: The message buffer
*/
static inline void msgb_push_nak(struct msgbuf *msgb)
{
/* SYN. */
msgb_push_syn(msgb);
/* NAK-type frame + CRC. */
msgb_push_frame(msgb, SSH_FRAME_TYPE_NAK, 0x00, 0x00);
/* Payload CRC (NAK-type frames do not have a payload). */
msgb_push_crc(msgb, msgb->ptr, 0);
}
/**
* msgb_push_cmd() - Push a SSH command frame with payload to the buffer.
* @msgb: The message buffer.
* @seq: The sequence ID (SEQ) of the frame/packet.
* @rqid: The request ID (RQID) of the request contained in the frame.
* @rqst: The request to wrap in the frame.
*/
static inline void msgb_push_cmd(struct msgbuf *msgb, u8 seq, u16 rqid,
const struct ssam_request *rqst)
{
const u8 type = SSH_FRAME_TYPE_DATA_SEQ;
u8 *cmd;
/* SYN. */
msgb_push_syn(msgb);
/* Command frame + CRC. */
msgb_push_frame(msgb, type, sizeof(struct ssh_command) + rqst->length, seq);
/* Frame payload: Command struct + payload. */
if (WARN_ON(msgb->ptr + sizeof(struct ssh_command) > msgb->end))
return;
cmd = msgb->ptr;
__msgb_push_u8(msgb, SSH_PLD_TYPE_CMD); /* Payload type. */
__msgb_push_u8(msgb, rqst->target_category); /* Target category. */
__msgb_push_u8(msgb, rqst->target_id); /* Target ID (out). */
__msgb_push_u8(msgb, 0x00); /* Target ID (in). */
__msgb_push_u8(msgb, rqst->instance_id); /* Instance ID. */
__msgb_push_u16(msgb, rqid); /* Request ID. */
__msgb_push_u8(msgb, rqst->command_id); /* Command ID. */
/* Command payload. */
msgb_push_buf(msgb, rqst->payload, rqst->length);
/* CRC for command struct + payload. */
msgb_push_crc(msgb, cmd, msgb->ptr - cmd);
}
#endif /* _SURFACE_AGGREGATOR_SSH_MSGB_H */

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* SSH packet transport layer.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_SSH_PACKET_LAYER_H
#define _SURFACE_AGGREGATOR_SSH_PACKET_LAYER_H
#include <linux/atomic.h>
#include <linux/kfifo.h>
#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/serdev.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/wait.h>
#include <linux/workqueue.h>
#include <linux/surface_aggregator/serial_hub.h>
#include "ssh_parser.h"
/**
* enum ssh_ptl_state_flags - State-flags for &struct ssh_ptl.
*
* @SSH_PTL_SF_SHUTDOWN_BIT:
* Indicates that the packet transport layer has been shut down or is
* being shut down and should not accept any new packets/data.
*/
enum ssh_ptl_state_flags {
SSH_PTL_SF_SHUTDOWN_BIT,
};
/**
* struct ssh_ptl_ops - Callback operations for packet transport layer.
* @data_received: Function called when a data packet has been received. Both
* the packet layer on which the packet has been received and
* the packet's payload data are provided to this function.
*/
struct ssh_ptl_ops {
void (*data_received)(struct ssh_ptl *p, const struct ssam_span *data);
};
/**
* struct ssh_ptl - SSH packet transport layer.
* @serdev: Serial device providing the underlying data transport.
* @state: State(-flags) of the transport layer.
* @queue: Packet submission queue.
* @queue.lock: Lock for modifying the packet submission queue.
* @queue.head: List-head of the packet submission queue.
* @pending: Set/list of pending packets.
* @pending.lock: Lock for modifying the pending set.
* @pending.head: List-head of the pending set/list.
* @pending.count: Number of currently pending packets.
* @tx: Transmitter subsystem.
* @tx.running: Flag indicating (desired) transmitter thread state.
* @tx.thread: Transmitter thread.
* @tx.thread_cplt_tx: Completion for transmitter thread waiting on transfer.
* @tx.thread_cplt_pkt: Completion for transmitter thread waiting on packets.
* @tx.packet_wq: Waitqueue-head for packet transmit completion.
* @rx: Receiver subsystem.
* @rx.thread: Receiver thread.
* @rx.wq: Waitqueue-head for receiver thread.
* @rx.fifo: Buffer for receiving data/pushing data to receiver thread.
* @rx.buf: Buffer for evaluating data on receiver thread.
* @rx.blocked: List of recent/blocked sequence IDs to detect retransmission.
* @rx.blocked.seqs: Array of blocked sequence IDs.
* @rx.blocked.offset: Offset indicating where a new ID should be inserted.
* @rtx_timeout: Retransmission timeout subsystem.
* @rtx_timeout.lock: Lock for modifying the retransmission timeout reaper.
* @rtx_timeout.timeout: Timeout interval for retransmission.
* @rtx_timeout.expires: Time specifying when the reaper work is next scheduled.
* @rtx_timeout.reaper: Work performing timeout checks and subsequent actions.
* @ops: Packet layer operations.
*/
struct ssh_ptl {
struct serdev_device *serdev;
unsigned long state;
struct {
spinlock_t lock;
struct list_head head;
} queue;
struct {
spinlock_t lock;
struct list_head head;
atomic_t count;
} pending;
struct {
atomic_t running;
struct task_struct *thread;
struct completion thread_cplt_tx;
struct completion thread_cplt_pkt;
struct wait_queue_head packet_wq;
} tx;
struct {
struct task_struct *thread;
struct wait_queue_head wq;
struct kfifo fifo;
struct sshp_buf buf;
struct {
u16 seqs[8];
u16 offset;
} blocked;
} rx;
struct {
spinlock_t lock;
ktime_t timeout;
ktime_t expires;
struct delayed_work reaper;
} rtx_timeout;
struct ssh_ptl_ops ops;
};
#define __ssam_prcond(func, p, fmt, ...) \
do { \
typeof(p) __p = (p); \
\
if (__p) \
func(__p, fmt, ##__VA_ARGS__); \
} while (0)
#define ptl_dbg(p, fmt, ...) dev_dbg(&(p)->serdev->dev, fmt, ##__VA_ARGS__)
#define ptl_info(p, fmt, ...) dev_info(&(p)->serdev->dev, fmt, ##__VA_ARGS__)
#define ptl_warn(p, fmt, ...) dev_warn(&(p)->serdev->dev, fmt, ##__VA_ARGS__)
#define ptl_err(p, fmt, ...) dev_err(&(p)->serdev->dev, fmt, ##__VA_ARGS__)
#define ptl_dbg_cond(p, fmt, ...) __ssam_prcond(ptl_dbg, p, fmt, ##__VA_ARGS__)
#define to_ssh_ptl(ptr, member) \
container_of(ptr, struct ssh_ptl, member)
int ssh_ptl_init(struct ssh_ptl *ptl, struct serdev_device *serdev,
struct ssh_ptl_ops *ops);
void ssh_ptl_destroy(struct ssh_ptl *ptl);
/**
* ssh_ptl_get_device() - Get device associated with packet transport layer.
* @ptl: The packet transport layer.
*
* Return: Returns the device that the given packet transport layer builds
* upon.
*/
static inline struct device *ssh_ptl_get_device(struct ssh_ptl *ptl)
{
return ptl->serdev ? &ptl->serdev->dev : NULL;
}
int ssh_ptl_tx_start(struct ssh_ptl *ptl);
int ssh_ptl_tx_stop(struct ssh_ptl *ptl);
int ssh_ptl_rx_start(struct ssh_ptl *ptl);
int ssh_ptl_rx_stop(struct ssh_ptl *ptl);
void ssh_ptl_shutdown(struct ssh_ptl *ptl);
int ssh_ptl_submit(struct ssh_ptl *ptl, struct ssh_packet *p);
void ssh_ptl_cancel(struct ssh_packet *p);
int ssh_ptl_rx_rcvbuf(struct ssh_ptl *ptl, const u8 *buf, size_t n);
/**
* ssh_ptl_tx_wakeup_transfer() - Wake up packet transmitter thread for
* transfer.
* @ptl: The packet transport layer.
*
* Wakes up the packet transmitter thread, notifying it that the underlying
* transport has more space for data to be transmitted. If the packet
* transport layer has been shut down, calls to this function will be ignored.
*/
static inline void ssh_ptl_tx_wakeup_transfer(struct ssh_ptl *ptl)
{
if (test_bit(SSH_PTL_SF_SHUTDOWN_BIT, &ptl->state))
return;
complete(&ptl->tx.thread_cplt_tx);
}
void ssh_packet_init(struct ssh_packet *packet, unsigned long type,
u8 priority, const struct ssh_packet_ops *ops);
int ssh_ctrl_packet_cache_init(void);
void ssh_ctrl_packet_cache_destroy(void);
#endif /* _SURFACE_AGGREGATOR_SSH_PACKET_LAYER_H */

// SPDX-License-Identifier: GPL-2.0+
/*
* SSH message parser.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <asm/unaligned.h>
#include <linux/compiler.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/surface_aggregator/serial_hub.h>
#include "ssh_parser.h"
/**
* sshp_validate_crc() - Validate a CRC in raw message data.
* @src: The span of data over which the CRC should be computed.
* @crc: The pointer to the expected u16 CRC value.
*
* Computes the CRC of the provided data span (@src), compares it to the CRC
* stored at the given address (@crc), and returns the result of this
* comparison, i.e. %true if equal. This function is intended to run on raw
* input/message data.
*
* Return: Returns %true if the computed CRC matches the stored CRC, %false
* otherwise.
*/
static bool sshp_validate_crc(const struct ssam_span *src, const u8 *crc)
{
u16 actual = ssh_crc(src->ptr, src->len);
u16 expected = get_unaligned_le16(crc);
return actual == expected;
}
/**
* sshp_starts_with_syn() - Check if the given data starts with SSH SYN bytes.
* @src: The data span to check the start of.
*/
static bool sshp_starts_with_syn(const struct ssam_span *src)
{
return src->len >= 2 && get_unaligned_le16(src->ptr) == SSH_MSG_SYN;
}
/**
* sshp_find_syn() - Find SSH SYN bytes in the given data span.
* @src: The data span to search in.
* @rem: The span (output) indicating the remaining data, starting with SSH
* SYN bytes, if found.
*
* Search for SSH SYN bytes in the given source span. If found, set the @rem
* span to the remaining data, starting with the first SYN bytes and capped by
* the source span length, and return %true. This function does not copy any
* data, but rather only sets pointers to the respective start addresses and
* length values.
*
* If no SSH SYN bytes could be found, set the @rem span to the zero-length
* span at the end of the source span and return %false.
*
* If partial SSH SYN bytes could be found at the end of the source span, set
* the @rem span to cover these partial SYN bytes, capped by the end of the
* source span, and return %false. This function should then be re-run once
* more data is available.
*
* Return: Returns %true if a complete SSH SYN sequence could be found,
* %false otherwise.
*/
bool sshp_find_syn(const struct ssam_span *src, struct ssam_span *rem)
{
size_t i;
for (i = 0; i < src->len - 1; i++) {
if (likely(get_unaligned_le16(src->ptr + i) == SSH_MSG_SYN)) {
rem->ptr = src->ptr + i;
rem->len = src->len - i;
return true;
}
}
if (unlikely(src->ptr[src->len - 1] == (SSH_MSG_SYN & 0xff))) {
rem->ptr = src->ptr + src->len - 1;
rem->len = 1;
return false;
}
rem->ptr = src->ptr + src->len;
rem->len = 0;
return false;
}
/**
* sshp_parse_frame() - Parse SSH frame.
* @dev: The device used for logging.
* @source: The source to parse from.
* @frame: The parsed frame (output).
* @payload: The parsed payload (output).
* @maxlen: The maximum supported message length.
*
* Parses and validates a SSH frame, including its payload, from the given
* source. Sets the provided @frame pointer to the start of the frame and
* writes the limits of the frame payload to the provided @payload span
* pointer.
*
* This function does not copy any data, but rather only validates the message
* data and sets pointers (and length values) to indicate the respective parts.
*
* If no complete SSH frame could be found, the frame pointer will be set to
* the %NULL pointer and the payload span will be set to the null span (start
* pointer %NULL, size zero).
*
* Return: Returns zero on success or if the frame is incomplete, %-ENOMSG if
* the start of the message is invalid, %-EBADMSG if any (frame-header or
* payload) CRC is invalid, or %-EMSGSIZE if the SSH message is bigger than
* the maximum message length specified in the @maxlen parameter.
*/
int sshp_parse_frame(const struct device *dev, const struct ssam_span *source,
struct ssh_frame **frame, struct ssam_span *payload,
size_t maxlen)
{
struct ssam_span sf;
struct ssam_span sp;
/* Initialize output. */
*frame = NULL;
payload->ptr = NULL;
payload->len = 0;
if (!sshp_starts_with_syn(source)) {
dev_warn(dev, "rx: parser: invalid start of frame\n");
return -ENOMSG;
}
/* Check for minimum packet length. */
if (unlikely(source->len < SSH_MESSAGE_LENGTH(0))) {
dev_dbg(dev, "rx: parser: not enough data for frame\n");
return 0;
}
/* Pin down frame. */
sf.ptr = source->ptr + sizeof(u16);
sf.len = sizeof(struct ssh_frame);
/* Validate frame CRC. */
if (unlikely(!sshp_validate_crc(&sf, sf.ptr + sf.len))) {
dev_warn(dev, "rx: parser: invalid frame CRC\n");
return -EBADMSG;
}
/* Ensure packet does not exceed maximum length. */
sp.len = get_unaligned_le16(&((struct ssh_frame *)sf.ptr)->len);
if (unlikely(SSH_MESSAGE_LENGTH(sp.len) > maxlen)) {
dev_warn(dev, "rx: parser: frame too large: %llu bytes\n",
SSH_MESSAGE_LENGTH(sp.len));
return -EMSGSIZE;
}
/* Pin down payload. */
sp.ptr = sf.ptr + sf.len + sizeof(u16);
/* Check for frame + payload length. */
if (source->len < SSH_MESSAGE_LENGTH(sp.len)) {
dev_dbg(dev, "rx: parser: not enough data for payload\n");
return 0;
}
/* Validate payload CRC. */
if (unlikely(!sshp_validate_crc(&sp, sp.ptr + sp.len))) {
dev_warn(dev, "rx: parser: invalid payload CRC\n");
return -EBADMSG;
}
*frame = (struct ssh_frame *)sf.ptr;
*payload = sp;
dev_dbg(dev, "rx: parser: valid frame found (type: %#04x, len: %u)\n",
(*frame)->type, (*frame)->len);
return 0;
}
/**
* sshp_parse_command() - Parse SSH command frame payload.
* @dev: The device used for logging.
* @source: The source to parse from.
* @command: The parsed command (output).
* @command_data: The parsed command data/payload (output).
*
* Parses and validates a SSH command frame payload. Sets the @command pointer
* to the command header and the @command_data span to the command data (i.e.
* payload of the command). This will result in a zero-length span if the
* command does not have any associated data/payload. This function does not
* check the frame-payload-type field, which should be checked by the caller
* before calling this function.
*
* The @source parameter should be the complete frame payload, e.g. returned
* by the sshp_parse_frame() function.
*
* This function does not copy any data, but rather only validates the frame
* payload data and sets pointers (and length values) to indicate the
* respective parts.
*
* Return: Returns zero on success or %-ENOMSG if @source does not represent a
* valid command-type frame payload, i.e. is too short.
*/
int sshp_parse_command(const struct device *dev, const struct ssam_span *source,
struct ssh_command **command,
struct ssam_span *command_data)
{
/* Check for minimum length. */
if (unlikely(source->len < sizeof(struct ssh_command))) {
*command = NULL;
command_data->ptr = NULL;
command_data->len = 0;
dev_err(dev, "rx: parser: command payload is too short\n");
return -ENOMSG;
}
*command = (struct ssh_command *)source->ptr;
command_data->ptr = source->ptr + sizeof(struct ssh_command);
command_data->len = source->len - sizeof(struct ssh_command);
dev_dbg(dev, "rx: parser: valid command found (tc: %#04x, cid: %#04x)\n",
(*command)->tc, (*command)->cid);
return 0;
}

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* SSH message parser.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_SSH_PARSER_H
#define _SURFACE_AGGREGATOR_SSH_PARSER_H
#include <linux/device.h>
#include <linux/kfifo.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/surface_aggregator/serial_hub.h>
/**
* struct sshp_buf - Parser buffer for SSH messages.
* @ptr: Pointer to the beginning of the buffer.
* @len: Number of bytes used in the buffer.
* @cap: Maximum capacity of the buffer.
*/
struct sshp_buf {
u8 *ptr;
size_t len;
size_t cap;
};
/**
* sshp_buf_init() - Initialize a SSH parser buffer.
* @buf: The buffer to initialize.
* @ptr: The memory backing the buffer.
* @cap: The length of the memory backing the buffer, i.e. its capacity.
*
* Initializes the buffer with the given memory as backing and set its used
* length to zero.
*/
static inline void sshp_buf_init(struct sshp_buf *buf, u8 *ptr, size_t cap)
{
buf->ptr = ptr;
buf->len = 0;
buf->cap = cap;
}
/**
* sshp_buf_alloc() - Allocate and initialize a SSH parser buffer.
* @buf: The buffer to initialize/allocate to.
* @cap: The desired capacity of the buffer.
* @flags: The flags used for allocating the memory.
*
* Allocates @cap bytes and initializes the provided buffer struct with the
* allocated memory.
*
* Return: Returns zero on success and %-ENOMEM if allocation failed.
*/
static inline int sshp_buf_alloc(struct sshp_buf *buf, size_t cap, gfp_t flags)
{
u8 *ptr;
ptr = kzalloc(cap, flags);
if (!ptr)
return -ENOMEM;
sshp_buf_init(buf, ptr, cap);
return 0;
}
/**
* sshp_buf_free() - Free a SSH parser buffer.
* @buf: The buffer to free.
*
* Frees a SSH parser buffer by freeing the memory backing it and then
* resetting its pointer to %NULL and length and capacity to zero. Intended to
* free a buffer previously allocated with sshp_buf_alloc().
*/
static inline void sshp_buf_free(struct sshp_buf *buf)
{
kfree(buf->ptr);
buf->ptr = NULL;
buf->len = 0;
buf->cap = 0;
}
/**
* sshp_buf_drop() - Drop data from the beginning of the buffer.
* @buf: The buffer to drop data from.
* @n: The number of bytes to drop.
*
* Drops the first @n bytes from the buffer. Re-aligns any remaining data to
* the beginning of the buffer.
*/
static inline void sshp_buf_drop(struct sshp_buf *buf, size_t n)
{
memmove(buf->ptr, buf->ptr + n, buf->len - n);
buf->len -= n;
}
/**
* sshp_buf_read_from_fifo() - Transfer data from a fifo to the buffer.
* @buf: The buffer to write the data into.
* @fifo: The fifo to read the data from.
*
* Transfers the data contained in the fifo to the buffer, removing it from
* the fifo. This function will try to transfer as much data as possible,
* limited either by the remaining space in the buffer or by the number of
* bytes available in the fifo.
*
* Return: Returns the number of bytes transferred.
*/
static inline size_t sshp_buf_read_from_fifo(struct sshp_buf *buf,
struct kfifo *fifo)
{
size_t n;
n = kfifo_out(fifo, buf->ptr + buf->len, buf->cap - buf->len);
buf->len += n;
return n;
}
/**
* sshp_buf_span_from() - Initialize a span from the given buffer and offset.
* @buf: The buffer to create the span from.
* @offset: The offset in the buffer at which the span should start.
* @span: The span to initialize (output).
*
* Initializes the provided span to point to the memory at the given offset in
* the buffer, with the length of the span being capped by the number of bytes
* used in the buffer after the offset (i.e. bytes remaining after the
* offset).
*
* Warning: This function does not validate that @offset is less than or equal
* to the number of bytes used in the buffer or the buffer capacity. This must
* be guaranteed by the caller.
*/
static inline void sshp_buf_span_from(struct sshp_buf *buf, size_t offset,
struct ssam_span *span)
{
span->ptr = buf->ptr + offset;
span->len = buf->len - offset;
}
bool sshp_find_syn(const struct ssam_span *src, struct ssam_span *rem);
int sshp_parse_frame(const struct device *dev, const struct ssam_span *source,
struct ssh_frame **frame, struct ssam_span *payload,
size_t maxlen);
int sshp_parse_command(const struct device *dev, const struct ssam_span *source,
struct ssh_command **command,
struct ssam_span *command_data);
#endif /* _SURFACE_AGGREGATOR_SSH_PARSER_H */

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* SSH request transport layer.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _SURFACE_AGGREGATOR_SSH_REQUEST_LAYER_H
#define _SURFACE_AGGREGATOR_SSH_REQUEST_LAYER_H
#include <linux/atomic.h>
#include <linux/ktime.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>
#include <linux/surface_aggregator/serial_hub.h>
#include <linux/surface_aggregator/controller.h>
#include "ssh_packet_layer.h"
/**
* enum ssh_rtl_state_flags - State-flags for &struct ssh_rtl.
*
* @SSH_RTL_SF_SHUTDOWN_BIT:
* Indicates that the request transport layer has been shut down or is
* being shut down and should not accept any new requests.
*/
enum ssh_rtl_state_flags {
SSH_RTL_SF_SHUTDOWN_BIT,
};
/**
* struct ssh_rtl_ops - Callback operations for request transport layer.
* @handle_event: Function called when a SSH event has been received. The
* specified function takes the request layer, received command
* struct, and corresponding payload as arguments. If the event
* has no payload, the payload span is empty (not %NULL).
*/
struct ssh_rtl_ops {
void (*handle_event)(struct ssh_rtl *rtl, const struct ssh_command *cmd,
const struct ssam_span *data);
};
/**
* struct ssh_rtl - SSH request transport layer.
* @ptl: Underlying packet transport layer.
* @state: State(-flags) of the transport layer.
* @queue: Request submission queue.
* @queue.lock: Lock for modifying the request submission queue.
* @queue.head: List-head of the request submission queue.
* @pending: Set/list of pending requests.
* @pending.lock: Lock for modifying the request set.
* @pending.head: List-head of the pending set/list.
* @pending.count: Number of currently pending requests.
* @tx: Transmitter subsystem.
* @tx.work: Transmitter work item.
* @rtx_timeout: Retransmission timeout subsystem.
* @rtx_timeout.lock: Lock for modifying the retransmission timeout reaper.
* @rtx_timeout.timeout: Timeout interval for retransmission.
* @rtx_timeout.expires: Time specifying when the reaper work is next scheduled.
* @rtx_timeout.reaper: Work performing timeout checks and subsequent actions.
* @ops: Request layer operations.
*/
struct ssh_rtl {
struct ssh_ptl ptl;
unsigned long state;
struct {
spinlock_t lock;
struct list_head head;
} queue;
struct {
spinlock_t lock;
struct list_head head;
atomic_t count;
} pending;
struct {
struct work_struct work;
} tx;
struct {
spinlock_t lock;
ktime_t timeout;
ktime_t expires;
struct delayed_work reaper;
} rtx_timeout;
struct ssh_rtl_ops ops;
};
#define rtl_dbg(r, fmt, ...) ptl_dbg(&(r)->ptl, fmt, ##__VA_ARGS__)
#define rtl_info(r, fmt, ...) ptl_info(&(r)->ptl, fmt, ##__VA_ARGS__)
#define rtl_warn(r, fmt, ...) ptl_warn(&(r)->ptl, fmt, ##__VA_ARGS__)
#define rtl_err(r, fmt, ...) ptl_err(&(r)->ptl, fmt, ##__VA_ARGS__)
#define rtl_dbg_cond(r, fmt, ...) __ssam_prcond(rtl_dbg, r, fmt, ##__VA_ARGS__)
#define to_ssh_rtl(ptr, member) \
container_of(ptr, struct ssh_rtl, member)
/**
* ssh_rtl_get_device() - Get device associated with request transport layer.
* @rtl: The request transport layer.
*
* Return: Returns the device that the given request transport layer builds
* upon.
*/
static inline struct device *ssh_rtl_get_device(struct ssh_rtl *rtl)
{
return ssh_ptl_get_device(&rtl->ptl);
}
/**
* ssh_request_rtl() - Get request transport layer associated with request.
* @rqst: The request to get the request transport layer reference for.
*
* Return: Returns the &struct ssh_rtl associated with the given SSH request.
*/
static inline struct ssh_rtl *ssh_request_rtl(struct ssh_request *rqst)
{
struct ssh_ptl *ptl;
ptl = READ_ONCE(rqst->packet.ptl);
return likely(ptl) ? to_ssh_rtl(ptl, ptl) : NULL;
}
int ssh_rtl_submit(struct ssh_rtl *rtl, struct ssh_request *rqst);
bool ssh_rtl_cancel(struct ssh_request *rqst, bool pending);
int ssh_rtl_init(struct ssh_rtl *rtl, struct serdev_device *serdev,
const struct ssh_rtl_ops *ops);
int ssh_rtl_start(struct ssh_rtl *rtl);
int ssh_rtl_flush(struct ssh_rtl *rtl, unsigned long timeout);
void ssh_rtl_shutdown(struct ssh_rtl *rtl);
void ssh_rtl_destroy(struct ssh_rtl *rtl);
int ssh_request_init(struct ssh_request *rqst, enum ssam_request_flags flags,
const struct ssh_request_ops *ops);
#endif /* _SURFACE_AGGREGATOR_SSH_REQUEST_LAYER_H */

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Trace points for SSAM/SSH.
*
* Copyright (C) 2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#undef TRACE_SYSTEM
#define TRACE_SYSTEM surface_aggregator
#if !defined(_SURFACE_AGGREGATOR_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _SURFACE_AGGREGATOR_TRACE_H
#include <linux/surface_aggregator/serial_hub.h>
#include <asm/unaligned.h>
#include <linux/tracepoint.h>
TRACE_DEFINE_ENUM(SSH_FRAME_TYPE_DATA_SEQ);
TRACE_DEFINE_ENUM(SSH_FRAME_TYPE_DATA_NSQ);
TRACE_DEFINE_ENUM(SSH_FRAME_TYPE_ACK);
TRACE_DEFINE_ENUM(SSH_FRAME_TYPE_NAK);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_LOCKED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_QUEUED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_PENDING_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_TRANSMITTING_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_TRANSMITTED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_ACKED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_CANCELED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_SF_COMPLETED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_TY_FLUSH_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_TY_SEQUENCED_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_TY_BLOCKING_BIT);
TRACE_DEFINE_ENUM(SSH_PACKET_FLAGS_SF_MASK);
TRACE_DEFINE_ENUM(SSH_PACKET_FLAGS_TY_MASK);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_LOCKED_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_QUEUED_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_PENDING_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_TRANSMITTING_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_TRANSMITTED_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_RSPRCVD_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_CANCELED_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_SF_COMPLETED_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_TY_FLUSH_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_TY_HAS_RESPONSE_BIT);
TRACE_DEFINE_ENUM(SSH_REQUEST_FLAGS_SF_MASK);
TRACE_DEFINE_ENUM(SSH_REQUEST_FLAGS_TY_MASK);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_SAM);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_BAT);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_TMP);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_PMC);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_FAN);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_PoM);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_DBG);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_KBD);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_FWU);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_UNI);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_LPC);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_TCL);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_SFL);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_KIP);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_EXT);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_BLD);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_BAS);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_SEN);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_SRQ);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_MCU);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_HID);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_TCH);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_BKL);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_TAM);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_ACC);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_UFI);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_USC);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_PEN);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_VID);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_AUD);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_SMC);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_KPD);
TRACE_DEFINE_ENUM(SSAM_SSH_TC_REG);
#define SSAM_PTR_UID_LEN 9
#define SSAM_U8_FIELD_NOT_APPLICABLE ((u16)-1)
#define SSAM_SEQ_NOT_APPLICABLE ((u16)-1)
#define SSAM_RQID_NOT_APPLICABLE ((u32)-1)
#define SSAM_SSH_TC_NOT_APPLICABLE 0
#ifndef _SURFACE_AGGREGATOR_TRACE_HELPERS
#define _SURFACE_AGGREGATOR_TRACE_HELPERS
/**
* ssam_trace_ptr_uid() - Convert the pointer to a non-pointer UID string.
* @ptr: The pointer to convert.
* @uid_str: A buffer of length SSAM_PTR_UID_LEN where the UID will be stored.
*
* Converts the given pointer into a UID string that is safe to be shared
* with userspace and logs, i.e. doesn't give away the real memory location.
*/
static inline void ssam_trace_ptr_uid(const void *ptr, char *uid_str)
{
char buf[2 * sizeof(void *) + 1];
BUILD_BUG_ON(ARRAY_SIZE(buf) < SSAM_PTR_UID_LEN);
snprintf(buf, ARRAY_SIZE(buf), "%p", ptr);
memcpy(uid_str, &buf[ARRAY_SIZE(buf) - SSAM_PTR_UID_LEN],
SSAM_PTR_UID_LEN);
}
/**
* ssam_trace_get_packet_seq() - Read the packet's sequence ID.
* @p: The packet.
*
* Return: Returns the packet's sequence ID (SEQ) field if present, or
* %SSAM_SEQ_NOT_APPLICABLE if not (e.g. flush packet).
*/
static inline u16 ssam_trace_get_packet_seq(const struct ssh_packet *p)
{
if (!p->data.ptr || p->data.len < SSH_MESSAGE_LENGTH(0))
return SSAM_SEQ_NOT_APPLICABLE;
return p->data.ptr[SSH_MSGOFFSET_FRAME(seq)];
}
/**
* ssam_trace_get_request_id() - Read the packet's request ID.
* @p: The packet.
*
* Return: Returns the packet's request ID (RQID) field if the packet
* represents a request with command data, or %SSAM_RQID_NOT_APPLICABLE if not
* (e.g. flush request, control packet).
*/
static inline u32 ssam_trace_get_request_id(const struct ssh_packet *p)
{
if (!p->data.ptr || p->data.len < SSH_COMMAND_MESSAGE_LENGTH(0))
return SSAM_RQID_NOT_APPLICABLE;
return get_unaligned_le16(&p->data.ptr[SSH_MSGOFFSET_COMMAND(rqid)]);
}
/**
* ssam_trace_get_request_tc() - Read the packet's request target category.
* @p: The packet.
*
* Return: Returns the packet's request target category (TC) field if the
* packet represents a request with command data, or
* %SSAM_SSH_TC_NOT_APPLICABLE if not (e.g. flush request, control packet).
*/
static inline u32 ssam_trace_get_request_tc(const struct ssh_packet *p)
{
if (!p->data.ptr || p->data.len < SSH_COMMAND_MESSAGE_LENGTH(0))
return SSAM_SSH_TC_NOT_APPLICABLE;
return get_unaligned_le16(&p->data.ptr[SSH_MSGOFFSET_COMMAND(tc)]);
}
#endif /* _SURFACE_AGGREGATOR_TRACE_HELPERS */
#define ssam_trace_get_command_field_u8(packet, field) \
((!(packet) || (packet)->data.len < SSH_COMMAND_MESSAGE_LENGTH(0)) \
? SSAM_U8_FIELD_NOT_APPLICABLE \
: (packet)->data.ptr[SSH_MSGOFFSET_COMMAND(field)])
#define ssam_show_generic_u8_field(value) \
__print_symbolic(value, \
{ SSAM_U8_FIELD_NOT_APPLICABLE, "N/A" } \
)
#define ssam_show_frame_type(ty) \
__print_symbolic(ty, \
{ SSH_FRAME_TYPE_DATA_SEQ, "DSEQ" }, \
{ SSH_FRAME_TYPE_DATA_NSQ, "DNSQ" }, \
{ SSH_FRAME_TYPE_ACK, "ACK" }, \
{ SSH_FRAME_TYPE_NAK, "NAK" } \
)
#define ssam_show_packet_type(type) \
__print_flags((type) & SSH_PACKET_FLAGS_TY_MASK, "", \
{ BIT(SSH_PACKET_TY_FLUSH_BIT), "F" }, \
{ BIT(SSH_PACKET_TY_SEQUENCED_BIT), "S" }, \
{ BIT(SSH_PACKET_TY_BLOCKING_BIT), "B" } \
)
#define ssam_show_packet_state(state) \
__print_flags((state) & SSH_PACKET_FLAGS_SF_MASK, "", \
{ BIT(SSH_PACKET_SF_LOCKED_BIT), "L" }, \
{ BIT(SSH_PACKET_SF_QUEUED_BIT), "Q" }, \
{ BIT(SSH_PACKET_SF_PENDING_BIT), "P" }, \
{ BIT(SSH_PACKET_SF_TRANSMITTING_BIT), "S" }, \
{ BIT(SSH_PACKET_SF_TRANSMITTED_BIT), "T" }, \
{ BIT(SSH_PACKET_SF_ACKED_BIT), "A" }, \
{ BIT(SSH_PACKET_SF_CANCELED_BIT), "C" }, \
{ BIT(SSH_PACKET_SF_COMPLETED_BIT), "F" } \
)
#define ssam_show_packet_seq(seq) \
__print_symbolic(seq, \
{ SSAM_SEQ_NOT_APPLICABLE, "N/A" } \
)
#define ssam_show_request_type(flags) \
__print_flags((flags) & SSH_REQUEST_FLAGS_TY_MASK, "", \
{ BIT(SSH_REQUEST_TY_FLUSH_BIT), "F" }, \
{ BIT(SSH_REQUEST_TY_HAS_RESPONSE_BIT), "R" } \
)
#define ssam_show_request_state(flags) \
__print_flags((flags) & SSH_REQUEST_FLAGS_SF_MASK, "", \
{ BIT(SSH_REQUEST_SF_LOCKED_BIT), "L" }, \
{ BIT(SSH_REQUEST_SF_QUEUED_BIT), "Q" }, \
{ BIT(SSH_REQUEST_SF_PENDING_BIT), "P" }, \
{ BIT(SSH_REQUEST_SF_TRANSMITTING_BIT), "S" }, \
{ BIT(SSH_REQUEST_SF_TRANSMITTED_BIT), "T" }, \
{ BIT(SSH_REQUEST_SF_RSPRCVD_BIT), "A" }, \
{ BIT(SSH_REQUEST_SF_CANCELED_BIT), "C" }, \
{ BIT(SSH_REQUEST_SF_COMPLETED_BIT), "F" } \
)
#define ssam_show_request_id(rqid) \
__print_symbolic(rqid, \
{ SSAM_RQID_NOT_APPLICABLE, "N/A" } \
)
#define ssam_show_ssh_tc(tc) \
__print_symbolic(tc, \
{ SSAM_SSH_TC_NOT_APPLICABLE, "N/A" }, \
{ SSAM_SSH_TC_SAM, "SAM" }, \
{ SSAM_SSH_TC_BAT, "BAT" }, \
{ SSAM_SSH_TC_TMP, "TMP" }, \
{ SSAM_SSH_TC_PMC, "PMC" }, \
{ SSAM_SSH_TC_FAN, "FAN" }, \
{ SSAM_SSH_TC_PoM, "PoM" }, \
{ SSAM_SSH_TC_DBG, "DBG" }, \
{ SSAM_SSH_TC_KBD, "KBD" }, \
{ SSAM_SSH_TC_FWU, "FWU" }, \
{ SSAM_SSH_TC_UNI, "UNI" }, \
{ SSAM_SSH_TC_LPC, "LPC" }, \
{ SSAM_SSH_TC_TCL, "TCL" }, \
{ SSAM_SSH_TC_SFL, "SFL" }, \
{ SSAM_SSH_TC_KIP, "KIP" }, \
{ SSAM_SSH_TC_EXT, "EXT" }, \
{ SSAM_SSH_TC_BLD, "BLD" }, \
{ SSAM_SSH_TC_BAS, "BAS" }, \
{ SSAM_SSH_TC_SEN, "SEN" }, \
{ SSAM_SSH_TC_SRQ, "SRQ" }, \
{ SSAM_SSH_TC_MCU, "MCU" }, \
{ SSAM_SSH_TC_HID, "HID" }, \
{ SSAM_SSH_TC_TCH, "TCH" }, \
{ SSAM_SSH_TC_BKL, "BKL" }, \
{ SSAM_SSH_TC_TAM, "TAM" }, \
{ SSAM_SSH_TC_ACC, "ACC" }, \
{ SSAM_SSH_TC_UFI, "UFI" }, \
{ SSAM_SSH_TC_USC, "USC" }, \
{ SSAM_SSH_TC_PEN, "PEN" }, \
{ SSAM_SSH_TC_VID, "VID" }, \
{ SSAM_SSH_TC_AUD, "AUD" }, \
{ SSAM_SSH_TC_SMC, "SMC" }, \
{ SSAM_SSH_TC_KPD, "KPD" }, \
{ SSAM_SSH_TC_REG, "REG" } \
)
DECLARE_EVENT_CLASS(ssam_frame_class,
TP_PROTO(const struct ssh_frame *frame),
TP_ARGS(frame),
TP_STRUCT__entry(
__field(u8, type)
__field(u8, seq)
__field(u16, len)
),
TP_fast_assign(
__entry->type = frame->type;
__entry->seq = frame->seq;
__entry->len = get_unaligned_le16(&frame->len);
),
TP_printk("ty=%s, seq=%#04x, len=%u",
ssam_show_frame_type(__entry->type),
__entry->seq,
__entry->len
)
);
#define DEFINE_SSAM_FRAME_EVENT(name) \
DEFINE_EVENT(ssam_frame_class, ssam_##name, \
TP_PROTO(const struct ssh_frame *frame), \
TP_ARGS(frame) \
)
DECLARE_EVENT_CLASS(ssam_command_class,
TP_PROTO(const struct ssh_command *cmd, u16 len),
TP_ARGS(cmd, len),
TP_STRUCT__entry(
__field(u16, rqid)
__field(u16, len)
__field(u8, tc)
__field(u8, cid)
__field(u8, iid)
),
TP_fast_assign(
__entry->rqid = get_unaligned_le16(&cmd->rqid);
__entry->tc = cmd->tc;
__entry->cid = cmd->cid;
__entry->iid = cmd->iid;
__entry->len = len;
),
TP_printk("rqid=%#06x, tc=%s, cid=%#04x, iid=%#04x, len=%u",
__entry->rqid,
ssam_show_ssh_tc(__entry->tc),
__entry->cid,
__entry->iid,
__entry->len
)
);
#define DEFINE_SSAM_COMMAND_EVENT(name) \
DEFINE_EVENT(ssam_command_class, ssam_##name, \
TP_PROTO(const struct ssh_command *cmd, u16 len), \
TP_ARGS(cmd, len) \
)
DECLARE_EVENT_CLASS(ssam_packet_class,
TP_PROTO(const struct ssh_packet *packet),
TP_ARGS(packet),
TP_STRUCT__entry(
__field(unsigned long, state)
__array(char, uid, SSAM_PTR_UID_LEN)
__field(u8, priority)
__field(u16, length)
__field(u16, seq)
),
TP_fast_assign(
__entry->state = READ_ONCE(packet->state);
ssam_trace_ptr_uid(packet, __entry->uid);
__entry->priority = READ_ONCE(packet->priority);
__entry->length = packet->data.len;
__entry->seq = ssam_trace_get_packet_seq(packet);
),
TP_printk("uid=%s, seq=%s, ty=%s, pri=%#04x, len=%u, sta=%s",
__entry->uid,
ssam_show_packet_seq(__entry->seq),
ssam_show_packet_type(__entry->state),
__entry->priority,
__entry->length,
ssam_show_packet_state(__entry->state)
)
);
#define DEFINE_SSAM_PACKET_EVENT(name) \
DEFINE_EVENT(ssam_packet_class, ssam_##name, \
TP_PROTO(const struct ssh_packet *packet), \
TP_ARGS(packet) \
)
DECLARE_EVENT_CLASS(ssam_packet_status_class,
TP_PROTO(const struct ssh_packet *packet, int status),
TP_ARGS(packet, status),
TP_STRUCT__entry(
__field(unsigned long, state)
__field(int, status)
__array(char, uid, SSAM_PTR_UID_LEN)
__field(u8, priority)
__field(u16, length)
__field(u16, seq)
),
TP_fast_assign(
__entry->state = READ_ONCE(packet->state);
__entry->status = status;
ssam_trace_ptr_uid(packet, __entry->uid);
__entry->priority = READ_ONCE(packet->priority);
__entry->length = packet->data.len;
__entry->seq = ssam_trace_get_packet_seq(packet);
),
TP_printk("uid=%s, seq=%s, ty=%s, pri=%#04x, len=%u, sta=%s, status=%d",
__entry->uid,
ssam_show_packet_seq(__entry->seq),
ssam_show_packet_type(__entry->state),
__entry->priority,
__entry->length,
ssam_show_packet_state(__entry->state),
__entry->status
)
);
#define DEFINE_SSAM_PACKET_STATUS_EVENT(name) \
DEFINE_EVENT(ssam_packet_status_class, ssam_##name, \
TP_PROTO(const struct ssh_packet *packet, int status), \
TP_ARGS(packet, status) \
)
DECLARE_EVENT_CLASS(ssam_request_class,
TP_PROTO(const struct ssh_request *request),
TP_ARGS(request),
TP_STRUCT__entry(
__field(unsigned long, state)
__field(u32, rqid)
__array(char, uid, SSAM_PTR_UID_LEN)
__field(u8, tc)
__field(u16, cid)
__field(u16, iid)
),
TP_fast_assign(
const struct ssh_packet *p = &request->packet;
/* Use packet for UID so we can match requests to packets. */
__entry->state = READ_ONCE(request->state);
__entry->rqid = ssam_trace_get_request_id(p);
ssam_trace_ptr_uid(p, __entry->uid);
__entry->tc = ssam_trace_get_request_tc(p);
__entry->cid = ssam_trace_get_command_field_u8(p, cid);
__entry->iid = ssam_trace_get_command_field_u8(p, iid);
),
TP_printk("uid=%s, rqid=%s, ty=%s, sta=%s, tc=%s, cid=%s, iid=%s",
__entry->uid,
ssam_show_request_id(__entry->rqid),
ssam_show_request_type(__entry->state),
ssam_show_request_state(__entry->state),
ssam_show_ssh_tc(__entry->tc),
ssam_show_generic_u8_field(__entry->cid),
ssam_show_generic_u8_field(__entry->iid)
)
);
#define DEFINE_SSAM_REQUEST_EVENT(name) \
DEFINE_EVENT(ssam_request_class, ssam_##name, \
TP_PROTO(const struct ssh_request *request), \
TP_ARGS(request) \
)
DECLARE_EVENT_CLASS(ssam_request_status_class,
TP_PROTO(const struct ssh_request *request, int status),
TP_ARGS(request, status),
TP_STRUCT__entry(
__field(unsigned long, state)
__field(u32, rqid)
__field(int, status)
__array(char, uid, SSAM_PTR_UID_LEN)
__field(u8, tc)
__field(u16, cid)
__field(u16, iid)
),
TP_fast_assign(
const struct ssh_packet *p = &request->packet;
/* Use packet for UID so we can match requests to packets. */
__entry->state = READ_ONCE(request->state);
__entry->rqid = ssam_trace_get_request_id(p);
__entry->status = status;
ssam_trace_ptr_uid(p, __entry->uid);
__entry->tc = ssam_trace_get_request_tc(p);
__entry->cid = ssam_trace_get_command_field_u8(p, cid);
__entry->iid = ssam_trace_get_command_field_u8(p, iid);
),
TP_printk("uid=%s, rqid=%s, ty=%s, sta=%s, tc=%s, cid=%s, iid=%s, status=%d",
__entry->uid,
ssam_show_request_id(__entry->rqid),
ssam_show_request_type(__entry->state),
ssam_show_request_state(__entry->state),
ssam_show_ssh_tc(__entry->tc),
ssam_show_generic_u8_field(__entry->cid),
ssam_show_generic_u8_field(__entry->iid),
__entry->status
)
);
#define DEFINE_SSAM_REQUEST_STATUS_EVENT(name) \
DEFINE_EVENT(ssam_request_status_class, ssam_##name, \
TP_PROTO(const struct ssh_request *request, int status),\
TP_ARGS(request, status) \
)
DECLARE_EVENT_CLASS(ssam_alloc_class,
TP_PROTO(void *ptr, size_t len),
TP_ARGS(ptr, len),
TP_STRUCT__entry(
__field(size_t, len)
__array(char, uid, SSAM_PTR_UID_LEN)
),
TP_fast_assign(
__entry->len = len;
ssam_trace_ptr_uid(ptr, __entry->uid);
),
TP_printk("uid=%s, len=%zu", __entry->uid, __entry->len)
);
#define DEFINE_SSAM_ALLOC_EVENT(name) \
DEFINE_EVENT(ssam_alloc_class, ssam_##name, \
TP_PROTO(void *ptr, size_t len), \
TP_ARGS(ptr, len) \
)
DECLARE_EVENT_CLASS(ssam_free_class,
TP_PROTO(void *ptr),
TP_ARGS(ptr),
TP_STRUCT__entry(
__array(char, uid, SSAM_PTR_UID_LEN)
),
TP_fast_assign(
ssam_trace_ptr_uid(ptr, __entry->uid);
),
TP_printk("uid=%s", __entry->uid)
);
#define DEFINE_SSAM_FREE_EVENT(name) \
DEFINE_EVENT(ssam_free_class, ssam_##name, \
TP_PROTO(void *ptr), \
TP_ARGS(ptr) \
)
DECLARE_EVENT_CLASS(ssam_pending_class,
TP_PROTO(unsigned int pending),
TP_ARGS(pending),
TP_STRUCT__entry(
__field(unsigned int, pending)
),
TP_fast_assign(
__entry->pending = pending;
),
TP_printk("pending=%u", __entry->pending)
);
#define DEFINE_SSAM_PENDING_EVENT(name) \
DEFINE_EVENT(ssam_pending_class, ssam_##name, \
TP_PROTO(unsigned int pending), \
TP_ARGS(pending) \
)
DECLARE_EVENT_CLASS(ssam_data_class,
TP_PROTO(size_t length),
TP_ARGS(length),
TP_STRUCT__entry(
__field(size_t, length)
),
TP_fast_assign(
__entry->length = length;
),
TP_printk("length=%zu", __entry->length)
);
#define DEFINE_SSAM_DATA_EVENT(name) \
DEFINE_EVENT(ssam_data_class, ssam_##name, \
TP_PROTO(size_t length), \
TP_ARGS(length) \
)
DEFINE_SSAM_FRAME_EVENT(rx_frame_received);
DEFINE_SSAM_COMMAND_EVENT(rx_response_received);
DEFINE_SSAM_COMMAND_EVENT(rx_event_received);
DEFINE_SSAM_PACKET_EVENT(packet_release);
DEFINE_SSAM_PACKET_EVENT(packet_submit);
DEFINE_SSAM_PACKET_EVENT(packet_resubmit);
DEFINE_SSAM_PACKET_EVENT(packet_timeout);
DEFINE_SSAM_PACKET_EVENT(packet_cancel);
DEFINE_SSAM_PACKET_STATUS_EVENT(packet_complete);
DEFINE_SSAM_PENDING_EVENT(ptl_timeout_reap);
DEFINE_SSAM_REQUEST_EVENT(request_submit);
DEFINE_SSAM_REQUEST_EVENT(request_timeout);
DEFINE_SSAM_REQUEST_EVENT(request_cancel);
DEFINE_SSAM_REQUEST_STATUS_EVENT(request_complete);
DEFINE_SSAM_PENDING_EVENT(rtl_timeout_reap);
DEFINE_SSAM_PACKET_EVENT(ei_tx_drop_ack_packet);
DEFINE_SSAM_PACKET_EVENT(ei_tx_drop_nak_packet);
DEFINE_SSAM_PACKET_EVENT(ei_tx_drop_dsq_packet);
DEFINE_SSAM_PACKET_STATUS_EVENT(ei_tx_fail_write);
DEFINE_SSAM_PACKET_EVENT(ei_tx_corrupt_data);
DEFINE_SSAM_DATA_EVENT(ei_rx_corrupt_syn);
DEFINE_SSAM_FRAME_EVENT(ei_rx_corrupt_data);
DEFINE_SSAM_REQUEST_EVENT(ei_rx_drop_response);
DEFINE_SSAM_ALLOC_EVENT(ctrl_packet_alloc);
DEFINE_SSAM_FREE_EVENT(ctrl_packet_free);
DEFINE_SSAM_ALLOC_EVENT(event_item_alloc);
DEFINE_SSAM_FREE_EVENT(event_item_free);
#endif /* _SURFACE_AGGREGATOR_TRACE_H */
/* This part must be outside protection */
#undef TRACE_INCLUDE_PATH
#undef TRACE_INCLUDE_FILE
#define TRACE_INCLUDE_PATH .
#define TRACE_INCLUDE_FILE trace
#include <trace/define_trace.h>


@@ -57,12 +57,16 @@ static DEFINE_MUTEX(s3_wmi_lock);
static int s3_wmi_query_block(const char *guid, int instance, int *ret)
{
struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
union acpi_object *obj = NULL;
acpi_status status;
int error = 0;
mutex_lock(&s3_wmi_lock);
status = wmi_query_block(guid, instance, &output);
if (ACPI_FAILURE(status)) {
error = -EIO;
goto out_free_unlock;
}
obj = output.pointer;


@@ -0,0 +1,886 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Driver for the Surface ACPI Notify (SAN) interface/shim.
*
* Translates communication from ACPI to Surface System Aggregator Module
* (SSAM/SAM) requests and back, specifically SAM-over-SSH. Translates SSAM
* events back to ACPI notifications. Allows handling of discrete GPU
* notifications sent from ACPI via the SAN interface by providing them to any
* registered external driver.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <asm/unaligned.h>
#include <linux/acpi.h>
#include <linux/delay.h>
#include <linux/jiffies.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/notifier.h>
#include <linux/platform_device.h>
#include <linux/rwsem.h>
#include <linux/surface_aggregator/controller.h>
#include <linux/surface_acpi_notify.h>
struct san_data {
struct device *dev;
struct ssam_controller *ctrl;
struct acpi_connection_info info;
struct ssam_event_notifier nf_bat;
struct ssam_event_notifier nf_tmp;
};
#define to_san_data(ptr, member) \
container_of(ptr, struct san_data, member)
/* -- dGPU notifier interface. ---------------------------------------------- */
struct san_rqsg_if {
struct rw_semaphore lock;
struct device *dev;
struct blocking_notifier_head nh;
};
static struct san_rqsg_if san_rqsg_if = {
.lock = __RWSEM_INITIALIZER(san_rqsg_if.lock),
.dev = NULL,
.nh = BLOCKING_NOTIFIER_INIT(san_rqsg_if.nh),
};
static int san_set_rqsg_interface_device(struct device *dev)
{
int status = 0;
down_write(&san_rqsg_if.lock);
if (!san_rqsg_if.dev && dev)
san_rqsg_if.dev = dev;
else
status = -EBUSY;
up_write(&san_rqsg_if.lock);
return status;
}
/**
* san_client_link() - Link client as consumer to SAN device.
* @client: The client to link.
*
* Sets up a device link between the provided client device as consumer and
* the SAN device as provider. This function can be used to ensure that the
* SAN interface has been set up and will be set up for as long as the driver
* of the client device is bound. This guarantees that, during that time, all
* dGPU events will be received by any registered notifier.
*
* The link will be automatically removed once the client device's driver is
* unbound.
*
* Return: Returns zero on success, %-ENXIO if the SAN interface has not been
* set up yet, and %-ENOMEM if device link creation failed.
*/
int san_client_link(struct device *client)
{
const u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_CONSUMER;
struct device_link *link;
down_read(&san_rqsg_if.lock);
if (!san_rqsg_if.dev) {
up_read(&san_rqsg_if.lock);
return -ENXIO;
}
link = device_link_add(client, san_rqsg_if.dev, flags);
if (!link) {
up_read(&san_rqsg_if.lock);
return -ENOMEM;
}
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND) {
up_read(&san_rqsg_if.lock);
return -ENXIO;
}
up_read(&san_rqsg_if.lock);
return 0;
}
EXPORT_SYMBOL_GPL(san_client_link);
/**
* san_dgpu_notifier_register() - Register a SAN dGPU notifier.
* @nb: The notifier-block to register.
*
* Registers a SAN dGPU notifier, receiving any new SAN dGPU events sent from
* ACPI. The registered notifier will be called with &struct san_dgpu_event
* as notifier data and the command ID of that event as notifier action.
*/
int san_dgpu_notifier_register(struct notifier_block *nb)
{
return blocking_notifier_chain_register(&san_rqsg_if.nh, nb);
}
EXPORT_SYMBOL_GPL(san_dgpu_notifier_register);
/**
* san_dgpu_notifier_unregister() - Unregister a SAN dGPU notifier.
* @nb: The notifier-block to unregister.
*/
int san_dgpu_notifier_unregister(struct notifier_block *nb)
{
return blocking_notifier_chain_unregister(&san_rqsg_if.nh, nb);
}
EXPORT_SYMBOL_GPL(san_dgpu_notifier_unregister);
static int san_dgpu_notifier_call(struct san_dgpu_event *evt)
{
int ret;
ret = blocking_notifier_call_chain(&san_rqsg_if.nh, evt->command, evt);
return notifier_to_errno(ret);
}
/* -- ACPI _DSM event relay. ------------------------------------------------ */
#define SAN_DSM_REVISION 0
/* 93b666c5-70c6-469f-a215-3d487c91ab3c */
static const guid_t SAN_DSM_UUID =
GUID_INIT(0x93b666c5, 0x70c6, 0x469f, 0xa2, 0x15, 0x3d,
0x48, 0x7c, 0x91, 0xab, 0x3c);
enum san_dsm_event_fn {
SAN_DSM_EVENT_FN_BAT1_STAT = 0x03,
SAN_DSM_EVENT_FN_BAT1_INFO = 0x04,
SAN_DSM_EVENT_FN_ADP1_STAT = 0x05,
SAN_DSM_EVENT_FN_ADP1_INFO = 0x06,
SAN_DSM_EVENT_FN_BAT2_STAT = 0x07,
SAN_DSM_EVENT_FN_BAT2_INFO = 0x08,
SAN_DSM_EVENT_FN_THERMAL = 0x09,
SAN_DSM_EVENT_FN_DPTF = 0x0a,
};
enum sam_event_cid_bat {
SAM_EVENT_CID_BAT_BIX = 0x15,
SAM_EVENT_CID_BAT_BST = 0x16,
SAM_EVENT_CID_BAT_ADP = 0x17,
SAM_EVENT_CID_BAT_PROT = 0x18,
SAM_EVENT_CID_BAT_DPTF = 0x4f,
};
enum sam_event_cid_tmp {
SAM_EVENT_CID_TMP_TRIP = 0x0b,
};
struct san_event_work {
struct delayed_work work;
struct device *dev;
struct ssam_event event; /* must be last */
};
static int san_acpi_notify_event(struct device *dev, u64 func,
union acpi_object *param)
{
acpi_handle san = ACPI_HANDLE(dev);
union acpi_object *obj;
int status = 0;
if (!acpi_check_dsm(san, &SAN_DSM_UUID, SAN_DSM_REVISION, BIT_ULL(func)))
return 0;
dev_dbg(dev, "notify event %#04llx\n", func);
obj = acpi_evaluate_dsm_typed(san, &SAN_DSM_UUID, SAN_DSM_REVISION,
func, param, ACPI_TYPE_BUFFER);
if (!obj)
return -EFAULT;
if (obj->buffer.length != 1 || obj->buffer.pointer[0] != 0) {
dev_err(dev, "got unexpected result from _DSM\n");
status = -EPROTO;
}
ACPI_FREE(obj);
return status;
}
static int san_evt_bat_adp(struct device *dev, const struct ssam_event *event)
{
int status;
status = san_acpi_notify_event(dev, SAN_DSM_EVENT_FN_ADP1_STAT, NULL);
if (status)
return status;
/*
* Ensure that the battery states get updated correctly. When the
* battery is fully charged and an adapter is plugged in, it sometimes
* is not updated correctly, instead showing it as charging.
* Explicitly trigger battery updates to fix this.
*/
status = san_acpi_notify_event(dev, SAN_DSM_EVENT_FN_BAT1_STAT, NULL);
if (status)
return status;
return san_acpi_notify_event(dev, SAN_DSM_EVENT_FN_BAT2_STAT, NULL);
}
static int san_evt_bat_bix(struct device *dev, const struct ssam_event *event)
{
enum san_dsm_event_fn fn;
if (event->instance_id == 0x02)
fn = SAN_DSM_EVENT_FN_BAT2_INFO;
else
fn = SAN_DSM_EVENT_FN_BAT1_INFO;
return san_acpi_notify_event(dev, fn, NULL);
}
static int san_evt_bat_bst(struct device *dev, const struct ssam_event *event)
{
enum san_dsm_event_fn fn;
if (event->instance_id == 0x02)
fn = SAN_DSM_EVENT_FN_BAT2_STAT;
else
fn = SAN_DSM_EVENT_FN_BAT1_STAT;
return san_acpi_notify_event(dev, fn, NULL);
}
static int san_evt_bat_dptf(struct device *dev, const struct ssam_event *event)
{
union acpi_object payload;
/*
* The Surface ACPI expects a buffer and not a package. It specifically
* checks for ObjectType (Arg3) == 0x03. This will cause a warning in
* acpica/nsarguments.c, but that warning can be safely ignored.
*/
payload.type = ACPI_TYPE_BUFFER;
payload.buffer.length = event->length;
payload.buffer.pointer = (u8 *)&event->data[0];
return san_acpi_notify_event(dev, SAN_DSM_EVENT_FN_DPTF, &payload);
}
static unsigned long san_evt_bat_delay(u8 cid)
{
switch (cid) {
case SAM_EVENT_CID_BAT_ADP:
/*
* Wait for battery state to update before signaling adapter
* change.
*/
return msecs_to_jiffies(5000);
case SAM_EVENT_CID_BAT_BST:
/* Ensure we do not miss anything important due to caching. */
return msecs_to_jiffies(2000);
default:
return 0;
}
}
static bool san_evt_bat(const struct ssam_event *event, struct device *dev)
{
int status;
switch (event->command_id) {
case SAM_EVENT_CID_BAT_BIX:
status = san_evt_bat_bix(dev, event);
break;
case SAM_EVENT_CID_BAT_BST:
status = san_evt_bat_bst(dev, event);
break;
case SAM_EVENT_CID_BAT_ADP:
status = san_evt_bat_adp(dev, event);
break;
case SAM_EVENT_CID_BAT_PROT:
/*
* TODO: Implement support for battery protection status change
* event.
*/
return true;
case SAM_EVENT_CID_BAT_DPTF:
status = san_evt_bat_dptf(dev, event);
break;
default:
return false;
}
if (status) {
dev_err(dev, "error handling power event (cid = %#04x)\n",
event->command_id);
}
return true;
}
static void san_evt_bat_workfn(struct work_struct *work)
{
struct san_event_work *ev;
ev = container_of(work, struct san_event_work, work.work);
san_evt_bat(&ev->event, ev->dev);
kfree(ev);
}
static u32 san_evt_bat_nf(struct ssam_event_notifier *nf,
const struct ssam_event *event)
{
struct san_data *d = to_san_data(nf, nf_bat);
struct san_event_work *work;
unsigned long delay = san_evt_bat_delay(event->command_id);
if (delay == 0)
return san_evt_bat(event, d->dev) ? SSAM_NOTIF_HANDLED : 0;
work = kzalloc(sizeof(*work) + event->length, GFP_KERNEL);
if (!work)
return ssam_notifier_from_errno(-ENOMEM);
INIT_DELAYED_WORK(&work->work, san_evt_bat_workfn);
work->dev = d->dev;
memcpy(&work->event, event, sizeof(struct ssam_event) + event->length);
schedule_delayed_work(&work->work, delay);
return SSAM_NOTIF_HANDLED;
}
static int san_evt_tmp_trip(struct device *dev, const struct ssam_event *event)
{
union acpi_object param;
/*
* The Surface ACPI expects an integer and not a package. This will
* cause a warning in acpica/nsarguments.c, but that warning can be
* safely ignored.
*/
param.type = ACPI_TYPE_INTEGER;
param.integer.value = event->instance_id;
return san_acpi_notify_event(dev, SAN_DSM_EVENT_FN_THERMAL, &param);
}
static bool san_evt_tmp(const struct ssam_event *event, struct device *dev)
{
int status;
switch (event->command_id) {
case SAM_EVENT_CID_TMP_TRIP:
status = san_evt_tmp_trip(dev, event);
break;
default:
return false;
}
if (status) {
dev_err(dev, "error handling thermal event (cid = %#04x)\n",
event->command_id);
}
return true;
}
static u32 san_evt_tmp_nf(struct ssam_event_notifier *nf,
const struct ssam_event *event)
{
struct san_data *d = to_san_data(nf, nf_tmp);
return san_evt_tmp(event, d->dev) ? SSAM_NOTIF_HANDLED : 0;
}
/* -- ACPI GSB OperationRegion handler -------------------------------------- */
struct gsb_data_in {
u8 cv;
} __packed;
struct gsb_data_rqsx {
u8 cv; /* Command value (san_gsb_request_cv). */
u8 tc; /* Target category. */
u8 tid; /* Target ID. */
u8 iid; /* Instance ID. */
u8 snc; /* Expect-response-flag. */
u8 cid; /* Command ID. */
u16 cdl; /* Payload length. */
u8 pld[]; /* Payload. */
} __packed;
struct gsb_data_etwl {
u8 cv; /* Command value (should be 0x02). */
u8 etw3; /* Unknown. */
u8 etw4; /* Unknown. */
u8 msg[]; /* Error message (ASCIIZ). */
} __packed;
struct gsb_data_out {
u8 status; /* _SSH communication status. */
u8 len; /* _SSH payload length. */
u8 pld[]; /* _SSH payload. */
} __packed;
union gsb_buffer_data {
struct gsb_data_in in; /* Common input. */
struct gsb_data_rqsx rqsx; /* RQSX input. */
struct gsb_data_etwl etwl; /* ETWL input. */
struct gsb_data_out out; /* Output. */
};
struct gsb_buffer {
u8 status; /* GSB AttribRawProcess status. */
u8 len; /* GSB AttribRawProcess length. */
union gsb_buffer_data data;
} __packed;
#define SAN_GSB_MAX_RQSX_PAYLOAD (U8_MAX - 2 - sizeof(struct gsb_data_rqsx))
#define SAN_GSB_MAX_RESPONSE (U8_MAX - 2 - sizeof(struct gsb_data_out))
#define SAN_GSB_COMMAND 0
enum san_gsb_request_cv {
SAN_GSB_REQUEST_CV_RQST = 0x01,
SAN_GSB_REQUEST_CV_ETWL = 0x02,
SAN_GSB_REQUEST_CV_RQSG = 0x03,
};
#define SAN_REQUEST_NUM_TRIES 5
static acpi_status san_etwl(struct san_data *d, struct gsb_buffer *b)
{
struct gsb_data_etwl *etwl = &b->data.etwl;
if (b->len < sizeof(struct gsb_data_etwl)) {
dev_err(d->dev, "invalid ETWL package (len = %d)\n", b->len);
return AE_OK;
}
dev_err(d->dev, "ETWL(%#04x, %#04x): %.*s\n", etwl->etw3, etwl->etw4,
(unsigned int)(b->len - sizeof(struct gsb_data_etwl)),
(char *)etwl->msg);
/* Indicate success. */
b->status = 0x00;
b->len = 0x00;
return AE_OK;
}
static
struct gsb_data_rqsx *san_validate_rqsx(struct device *dev, const char *type,
struct gsb_buffer *b)
{
struct gsb_data_rqsx *rqsx = &b->data.rqsx;
if (b->len < sizeof(struct gsb_data_rqsx)) {
dev_err(dev, "invalid %s package (len = %d)\n", type, b->len);
return NULL;
}
if (get_unaligned(&rqsx->cdl) != b->len - sizeof(struct gsb_data_rqsx)) {
dev_err(dev, "bogus %s package (len = %d, cdl = %d)\n",
type, b->len, get_unaligned(&rqsx->cdl));
return NULL;
}
if (get_unaligned(&rqsx->cdl) > SAN_GSB_MAX_RQSX_PAYLOAD) {
dev_err(dev, "payload for %s package too large (cdl = %d)\n",
type, get_unaligned(&rqsx->cdl));
return NULL;
}
return rqsx;
}
static void gsb_rqsx_response_error(struct gsb_buffer *gsb, int status)
{
gsb->status = 0x00;
gsb->len = 0x02;
gsb->data.out.status = (u8)(-status);
gsb->data.out.len = 0x00;
}
static void gsb_rqsx_response_success(struct gsb_buffer *gsb, u8 *ptr, size_t len)
{
gsb->status = 0x00;
gsb->len = len + 2;
gsb->data.out.status = 0x00;
gsb->data.out.len = len;
if (len)
memcpy(&gsb->data.out.pld[0], ptr, len);
}
static acpi_status san_rqst_fixup_suspended(struct san_data *d,
struct ssam_request *rqst,
struct gsb_buffer *gsb)
{
if (rqst->target_category == SSAM_SSH_TC_BAS && rqst->command_id == 0x0D) {
u8 base_state = 1;
/* Base state quirk:
* The base state may be queried from ACPI when the EC is still
* suspended. In this case it will return '-EPERM'. This query
* will only be triggered from the ACPI lid GPE interrupt, thus
* we are either in laptop or studio mode (base status 0x01 or
* 0x02). Furthermore, we will only get here if the device (and
* EC) have been suspended.
*
* We now assume that the device is in laptop mode (0x01). This
* has the drawback that it will wake the device when unfolding
* it in studio mode, but it also allows us to avoid actively
* waiting for the EC to wake up, which may incur a notable
* delay.
*/
dev_dbg(d->dev, "rqst: fixup: base-state quirk\n");
gsb_rqsx_response_success(gsb, &base_state, sizeof(base_state));
return AE_OK;
}
gsb_rqsx_response_error(gsb, -ENXIO);
return AE_OK;
}
static acpi_status san_rqst(struct san_data *d, struct gsb_buffer *buffer)
{
u8 rspbuf[SAN_GSB_MAX_RESPONSE];
struct gsb_data_rqsx *gsb_rqst;
struct ssam_request rqst;
struct ssam_response rsp;
int status = 0;
gsb_rqst = san_validate_rqsx(d->dev, "RQST", buffer);
if (!gsb_rqst)
return AE_OK;
rqst.target_category = gsb_rqst->tc;
rqst.target_id = gsb_rqst->tid;
rqst.command_id = gsb_rqst->cid;
rqst.instance_id = gsb_rqst->iid;
rqst.flags = gsb_rqst->snc ? SSAM_REQUEST_HAS_RESPONSE : 0;
rqst.length = get_unaligned(&gsb_rqst->cdl);
rqst.payload = &gsb_rqst->pld[0];
rsp.capacity = ARRAY_SIZE(rspbuf);
rsp.length = 0;
rsp.pointer = &rspbuf[0];
/* Handle suspended device. */
if (d->dev->power.is_suspended) {
dev_warn(d->dev, "rqst: device is suspended, not executing\n");
return san_rqst_fixup_suspended(d, &rqst, buffer);
}
status = __ssam_retry(ssam_request_sync_onstack, SAN_REQUEST_NUM_TRIES,
d->ctrl, &rqst, &rsp, SAN_GSB_MAX_RQSX_PAYLOAD);
if (!status) {
gsb_rqsx_response_success(buffer, rsp.pointer, rsp.length);
} else {
dev_err(d->dev, "rqst: failed with error %d\n", status);
gsb_rqsx_response_error(buffer, status);
}
return AE_OK;
}
static acpi_status san_rqsg(struct san_data *d, struct gsb_buffer *buffer)
{
struct gsb_data_rqsx *gsb_rqsg;
struct san_dgpu_event evt;
int status;
gsb_rqsg = san_validate_rqsx(d->dev, "RQSG", buffer);
if (!gsb_rqsg)
return AE_OK;
evt.category = gsb_rqsg->tc;
evt.target = gsb_rqsg->tid;
evt.command = gsb_rqsg->cid;
evt.instance = gsb_rqsg->iid;
evt.length = get_unaligned(&gsb_rqsg->cdl);
evt.payload = &gsb_rqsg->pld[0];
status = san_dgpu_notifier_call(&evt);
if (!status) {
gsb_rqsx_response_success(buffer, NULL, 0);
} else {
dev_err(d->dev, "rqsg: failed with error %d\n", status);
gsb_rqsx_response_error(buffer, status);
}
return AE_OK;
}
static acpi_status san_opreg_handler(u32 function, acpi_physical_address command,
u32 bits, u64 *value64, void *opreg_context,
void *region_context)
{
struct san_data *d = to_san_data(opreg_context, info);
struct gsb_buffer *buffer = (struct gsb_buffer *)value64;
int accessor_type = (function & 0xFFFF0000) >> 16;
if (command != SAN_GSB_COMMAND) {
dev_warn(d->dev, "unsupported command: %#04llx\n", command);
return AE_OK;
}
if (accessor_type != ACPI_GSB_ACCESS_ATTRIB_RAW_PROCESS) {
dev_err(d->dev, "invalid access type: %#04x\n", accessor_type);
return AE_OK;
}
/* Buffer must at least contain the command-value. */
if (buffer->len == 0) {
dev_err(d->dev, "request-package too small\n");
return AE_OK;
}
switch (buffer->data.in.cv) {
case SAN_GSB_REQUEST_CV_RQST:
return san_rqst(d, buffer);
case SAN_GSB_REQUEST_CV_ETWL:
return san_etwl(d, buffer);
case SAN_GSB_REQUEST_CV_RQSG:
return san_rqsg(d, buffer);
default:
dev_warn(d->dev, "unsupported SAN0 request (cv: %#04x)\n",
buffer->data.in.cv);
return AE_OK;
}
}
/* -- Driver setup. --------------------------------------------------------- */
static int san_events_register(struct platform_device *pdev)
{
struct san_data *d = platform_get_drvdata(pdev);
int status;
d->nf_bat.base.priority = 1;
d->nf_bat.base.fn = san_evt_bat_nf;
d->nf_bat.event.reg = SSAM_EVENT_REGISTRY_SAM;
d->nf_bat.event.id.target_category = SSAM_SSH_TC_BAT;
d->nf_bat.event.id.instance = 0;
d->nf_bat.event.mask = SSAM_EVENT_MASK_TARGET;
d->nf_bat.event.flags = SSAM_EVENT_SEQUENCED;
d->nf_tmp.base.priority = 1;
d->nf_tmp.base.fn = san_evt_tmp_nf;
d->nf_tmp.event.reg = SSAM_EVENT_REGISTRY_SAM;
d->nf_tmp.event.id.target_category = SSAM_SSH_TC_TMP;
d->nf_tmp.event.id.instance = 0;
d->nf_tmp.event.mask = SSAM_EVENT_MASK_TARGET;
d->nf_tmp.event.flags = SSAM_EVENT_SEQUENCED;
status = ssam_notifier_register(d->ctrl, &d->nf_bat);
if (status)
return status;
status = ssam_notifier_register(d->ctrl, &d->nf_tmp);
if (status)
ssam_notifier_unregister(d->ctrl, &d->nf_bat);
return status;
}
static void san_events_unregister(struct platform_device *pdev)
{
struct san_data *d = platform_get_drvdata(pdev);
ssam_notifier_unregister(d->ctrl, &d->nf_bat);
ssam_notifier_unregister(d->ctrl, &d->nf_tmp);
}
#define san_consumer_printk(level, dev, handle, fmt, ...) \
do { \
char *path = "<error getting consumer path>"; \
struct acpi_buffer buffer = { \
.length = ACPI_ALLOCATE_BUFFER, \
.pointer = NULL, \
}; \
\
if (ACPI_SUCCESS(acpi_get_name(handle, ACPI_FULL_PATHNAME, &buffer))) \
path = buffer.pointer; \
\
dev_##level(dev, "[%s]: " fmt, path, ##__VA_ARGS__); \
kfree(buffer.pointer); \
} while (0)
#define san_consumer_dbg(dev, handle, fmt, ...) \
san_consumer_printk(dbg, dev, handle, fmt, ##__VA_ARGS__)
#define san_consumer_warn(dev, handle, fmt, ...) \
san_consumer_printk(warn, dev, handle, fmt, ##__VA_ARGS__)
static bool is_san_consumer(struct platform_device *pdev, acpi_handle handle)
{
struct acpi_handle_list dep_devices;
acpi_handle supplier = ACPI_HANDLE(&pdev->dev);
acpi_status status;
int i;
if (!acpi_has_method(handle, "_DEP"))
return false;
status = acpi_evaluate_reference(handle, "_DEP", NULL, &dep_devices);
if (ACPI_FAILURE(status)) {
san_consumer_dbg(&pdev->dev, handle, "failed to evaluate _DEP\n");
return false;
}
for (i = 0; i < dep_devices.count; i++) {
if (dep_devices.handles[i] == supplier)
return true;
}
return false;
}
static acpi_status san_consumer_setup(acpi_handle handle, u32 lvl,
void *context, void **rv)
{
const u32 flags = DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER;
struct platform_device *pdev = context;
struct acpi_device *adev;
struct device_link *link;
if (!is_san_consumer(pdev, handle))
return AE_OK;
/* Ignore ACPI devices that are not present. */
if (acpi_bus_get_device(handle, &adev) != 0)
return AE_OK;
san_consumer_dbg(&pdev->dev, handle, "creating device link\n");
/* Try to set up device links, ignore but log errors. */
link = device_link_add(&adev->dev, &pdev->dev, flags);
if (!link) {
san_consumer_warn(&pdev->dev, handle, "failed to create device link\n");
return AE_OK;
}
return AE_OK;
}
static int san_consumer_links_setup(struct platform_device *pdev)
{
acpi_status status;
status = acpi_walk_namespace(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, san_consumer_setup, NULL,
pdev, NULL);
return status ? -EFAULT : 0;
}
static int san_probe(struct platform_device *pdev)
{
acpi_handle san = ACPI_HANDLE(&pdev->dev);
struct ssam_controller *ctrl;
struct san_data *data;
acpi_status astatus;
int status;
ctrl = ssam_client_bind(&pdev->dev);
if (IS_ERR(ctrl))
return PTR_ERR(ctrl) == -ENODEV ? -EPROBE_DEFER : PTR_ERR(ctrl);
status = san_consumer_links_setup(pdev);
if (status)
return status;
data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
if (!data)
return -ENOMEM;
data->dev = &pdev->dev;
data->ctrl = ctrl;
platform_set_drvdata(pdev, data);
astatus = acpi_install_address_space_handler(san, ACPI_ADR_SPACE_GSBUS,
&san_opreg_handler, NULL,
&data->info);
if (ACPI_FAILURE(astatus))
return -ENXIO;
status = san_events_register(pdev);
if (status)
goto err_enable_events;
status = san_set_rqsg_interface_device(&pdev->dev);
if (status)
goto err_install_dev;
acpi_walk_dep_device_list(san);
return 0;
err_install_dev:
san_events_unregister(pdev);
err_enable_events:
acpi_remove_address_space_handler(san, ACPI_ADR_SPACE_GSBUS,
&san_opreg_handler);
return status;
}
static int san_remove(struct platform_device *pdev)
{
acpi_handle san = ACPI_HANDLE(&pdev->dev);
san_set_rqsg_interface_device(NULL);
acpi_remove_address_space_handler(san, ACPI_ADR_SPACE_GSBUS,
&san_opreg_handler);
san_events_unregister(pdev);
/*
* We have unregistered our event sources. Now we need to ensure that
* all delayed works they may have spawned are run to completion.
*/
flush_scheduled_work();
return 0;
}
static const struct acpi_device_id san_match[] = {
{ "MSHW0091" },
{ },
};
MODULE_DEVICE_TABLE(acpi, san_match);
static struct platform_driver surface_acpi_notify = {
.probe = san_probe,
.remove = san_remove,
.driver = {
.name = "surface_acpi_notify",
.acpi_match_table = san_match,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
module_platform_driver(surface_acpi_notify);
MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>");
MODULE_DESCRIPTION("Surface ACPI Notify driver for Surface System Aggregator Module");
MODULE_LICENSE("GPL");


@@ -0,0 +1,322 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Provides user-space access to the SSAM EC via the /dev/surface/aggregator
* misc device. Intended for debugging and development.
*
* Copyright (C) 2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/miscdevice.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/rwsem.h>
#include <linux/slab.h>
#include <linux/uaccess.h>
#include <linux/surface_aggregator/cdev.h>
#include <linux/surface_aggregator/controller.h>
#define SSAM_CDEV_DEVICE_NAME "surface_aggregator_cdev"
struct ssam_cdev {
struct kref kref;
struct rw_semaphore lock;
struct ssam_controller *ctrl;
struct miscdevice mdev;
};
static void __ssam_cdev_release(struct kref *kref)
{
kfree(container_of(kref, struct ssam_cdev, kref));
}
static struct ssam_cdev *ssam_cdev_get(struct ssam_cdev *cdev)
{
if (cdev)
kref_get(&cdev->kref);
return cdev;
}
static void ssam_cdev_put(struct ssam_cdev *cdev)
{
if (cdev)
kref_put(&cdev->kref, __ssam_cdev_release);
}
static int ssam_cdev_device_open(struct inode *inode, struct file *filp)
{
struct miscdevice *mdev = filp->private_data;
struct ssam_cdev *cdev = container_of(mdev, struct ssam_cdev, mdev);
filp->private_data = ssam_cdev_get(cdev);
return stream_open(inode, filp);
}
static int ssam_cdev_device_release(struct inode *inode, struct file *filp)
{
ssam_cdev_put(filp->private_data);
return 0;
}
static long ssam_cdev_request(struct ssam_cdev *cdev, unsigned long arg)
{
struct ssam_cdev_request __user *r;
struct ssam_cdev_request rqst;
struct ssam_request spec = {};
struct ssam_response rsp = {};
const void __user *plddata;
void __user *rspdata;
int status = 0, ret = 0, tmp;
r = (struct ssam_cdev_request __user *)arg;
ret = copy_struct_from_user(&rqst, sizeof(rqst), r, sizeof(*r));
if (ret)
goto out;
plddata = u64_to_user_ptr(rqst.payload.data);
rspdata = u64_to_user_ptr(rqst.response.data);
/* Setup basic request fields. */
spec.target_category = rqst.target_category;
spec.target_id = rqst.target_id;
spec.command_id = rqst.command_id;
spec.instance_id = rqst.instance_id;
spec.flags = 0;
spec.length = rqst.payload.length;
spec.payload = NULL;
if (rqst.flags & SSAM_CDEV_REQUEST_HAS_RESPONSE)
spec.flags |= SSAM_REQUEST_HAS_RESPONSE;
if (rqst.flags & SSAM_CDEV_REQUEST_UNSEQUENCED)
spec.flags |= SSAM_REQUEST_UNSEQUENCED;
rsp.capacity = rqst.response.length;
rsp.length = 0;
rsp.pointer = NULL;
/* Get request payload from user-space. */
if (spec.length) {
if (!plddata) {
ret = -EINVAL;
goto out;
}
/*
* Note: spec.length is limited to U16_MAX bytes via struct
* ssam_cdev_request. This is slightly larger than the
* theoretical maximum (SSH_COMMAND_MAX_PAYLOAD_SIZE) of the
* underlying protocol (note that nothing remotely this size
* should ever be allocated in any normal case). This size is
* validated later in ssam_request_sync(), for allocation the
* bound imposed by u16 should be enough.
*/
spec.payload = kzalloc(spec.length, GFP_KERNEL);
if (!spec.payload) {
ret = -ENOMEM;
goto out;
}
if (copy_from_user((void *)spec.payload, plddata, spec.length)) {
ret = -EFAULT;
goto out;
}
}
/* Allocate response buffer. */
if (rsp.capacity) {
if (!rspdata) {
ret = -EINVAL;
goto out;
}
/*
* Note: rsp.capacity is limited to U16_MAX bytes via struct
* ssam_cdev_request. This is slightly larger than the
* theoretical maximum (SSH_COMMAND_MAX_PAYLOAD_SIZE) of the
* underlying protocol (note that nothing remotely this size
* should ever be allocated in any normal case). In later use,
* this capacity does not have to be strictly bounded, as it
* is only used as an output buffer to be written to. For
* allocation the bound imposed by u16 should be enough.
*/
rsp.pointer = kzalloc(rsp.capacity, GFP_KERNEL);
if (!rsp.pointer) {
ret = -ENOMEM;
goto out;
}
}
/* Perform request. */
status = ssam_request_sync(cdev->ctrl, &spec, &rsp);
if (status)
goto out;
/* Copy response to user-space. */
if (rsp.length && copy_to_user(rspdata, rsp.pointer, rsp.length))
ret = -EFAULT;
out:
/* Always try to set response-length and status. */
tmp = put_user(rsp.length, &r->response.length);
if (tmp)
ret = tmp;
tmp = put_user(status, &r->status);
if (tmp)
ret = tmp;
/* Cleanup. */
kfree(spec.payload);
kfree(rsp.pointer);
return ret;
}
static long __ssam_cdev_device_ioctl(struct ssam_cdev *cdev, unsigned int cmd,
unsigned long arg)
{
switch (cmd) {
case SSAM_CDEV_REQUEST:
return ssam_cdev_request(cdev, arg);
default:
return -ENOTTY;
}
}
static long ssam_cdev_device_ioctl(struct file *file, unsigned int cmd,
unsigned long arg)
{
struct ssam_cdev *cdev = file->private_data;
long status;
/* Ensure that controller is valid for as long as we need it. */
if (down_read_killable(&cdev->lock))
return -ERESTARTSYS;
if (!cdev->ctrl) {
up_read(&cdev->lock);
return -ENODEV;
}
status = __ssam_cdev_device_ioctl(cdev, cmd, arg);
up_read(&cdev->lock);
return status;
}
static const struct file_operations ssam_controller_fops = {
.owner = THIS_MODULE,
.open = ssam_cdev_device_open,
.release = ssam_cdev_device_release,
.unlocked_ioctl = ssam_cdev_device_ioctl,
.compat_ioctl = ssam_cdev_device_ioctl,
.llseek = noop_llseek,
};
static int ssam_dbg_device_probe(struct platform_device *pdev)
{
struct ssam_controller *ctrl;
struct ssam_cdev *cdev;
int status;
ctrl = ssam_client_bind(&pdev->dev);
if (IS_ERR(ctrl))
return PTR_ERR(ctrl) == -ENODEV ? -EPROBE_DEFER : PTR_ERR(ctrl);
cdev = kzalloc(sizeof(*cdev), GFP_KERNEL);
if (!cdev)
return -ENOMEM;
kref_init(&cdev->kref);
init_rwsem(&cdev->lock);
cdev->ctrl = ctrl;
cdev->mdev.parent = &pdev->dev;
cdev->mdev.minor = MISC_DYNAMIC_MINOR;
cdev->mdev.name = "surface_aggregator";
cdev->mdev.nodename = "surface/aggregator";
cdev->mdev.fops = &ssam_controller_fops;
status = misc_register(&cdev->mdev);
if (status) {
kfree(cdev);
return status;
}
platform_set_drvdata(pdev, cdev);
return 0;
}
static int ssam_dbg_device_remove(struct platform_device *pdev)
{
struct ssam_cdev *cdev = platform_get_drvdata(pdev);
misc_deregister(&cdev->mdev);
/*
* The controller is only guaranteed to be valid for as long as the
* driver is bound. Remove controller so that any lingering open files
* cannot access it any more after we're gone.
*/
down_write(&cdev->lock);
cdev->ctrl = NULL;
up_write(&cdev->lock);
ssam_cdev_put(cdev);
return 0;
}
static struct platform_device *ssam_cdev_device;
static struct platform_driver ssam_cdev_driver = {
.probe = ssam_dbg_device_probe,
.remove = ssam_dbg_device_remove,
.driver = {
.name = SSAM_CDEV_DEVICE_NAME,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
static int __init ssam_debug_init(void)
{
int status;
ssam_cdev_device = platform_device_alloc(SSAM_CDEV_DEVICE_NAME,
PLATFORM_DEVID_NONE);
if (!ssam_cdev_device)
return -ENOMEM;
status = platform_device_add(ssam_cdev_device);
if (status)
goto err_device;
status = platform_driver_register(&ssam_cdev_driver);
if (status)
goto err_driver;
return 0;
err_driver:
platform_device_del(ssam_cdev_device);
err_device:
platform_device_put(ssam_cdev_device);
return status;
}
module_init(ssam_debug_init);
static void __exit ssam_debug_exit(void)
{
platform_driver_unregister(&ssam_cdev_driver);
platform_device_unregister(ssam_cdev_device);
}
module_exit(ssam_debug_exit);
MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>");
MODULE_DESCRIPTION("User-space interface for Surface System Aggregator Module");
MODULE_LICENSE("GPL");


@@ -0,0 +1,282 @@
// SPDX-License-Identifier: GPL-2.0+
/*
* Surface Book (2 and later) hot-plug driver.
*
* Surface Book devices (can) have a hot-pluggable discrete GPU (dGPU). This
* driver is responsible for out-of-band hot-plug event signaling on these
* devices. It is specifically required when the hot-plug device is in D3cold
* and can thus not generate PCIe hot-plug events itself.
*
* Event signaling is handled via ACPI, which will generate the appropriate
* device-check notifications to be picked up by the PCIe hot-plug driver.
*
* Copyright (C) 2019-2021 Maximilian Luz <luzmaximilian@gmail.com>
*/
#include <linux/acpi.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/platform_device.h>
static const struct acpi_gpio_params shps_base_presence_int = { 0, 0, false };
static const struct acpi_gpio_params shps_base_presence = { 1, 0, false };
static const struct acpi_gpio_params shps_device_power_int = { 2, 0, false };
static const struct acpi_gpio_params shps_device_power = { 3, 0, false };
static const struct acpi_gpio_params shps_device_presence_int = { 4, 0, false };
static const struct acpi_gpio_params shps_device_presence = { 5, 0, false };
static const struct acpi_gpio_mapping shps_acpi_gpios[] = {
{ "base_presence-int-gpio", &shps_base_presence_int, 1 },
{ "base_presence-gpio", &shps_base_presence, 1 },
{ "device_power-int-gpio", &shps_device_power_int, 1 },
{ "device_power-gpio", &shps_device_power, 1 },
{ "device_presence-int-gpio", &shps_device_presence_int, 1 },
{ "device_presence-gpio", &shps_device_presence, 1 },
{ },
};
/* 5515a847-ed55-4b27-8352-cd320e10360a */
static const guid_t shps_dsm_guid =
GUID_INIT(0x5515a847, 0xed55, 0x4b27, 0x83, 0x52, 0xcd, 0x32, 0x0e, 0x10, 0x36, 0x0a);
#define SHPS_DSM_REVISION 1
enum shps_dsm_fn {
SHPS_DSM_FN_PCI_NUM_ENTRIES = 0x01,
SHPS_DSM_FN_PCI_GET_ENTRIES = 0x02,
SHPS_DSM_FN_IRQ_BASE_PRESENCE = 0x03,
SHPS_DSM_FN_IRQ_DEVICE_POWER = 0x04,
SHPS_DSM_FN_IRQ_DEVICE_PRESENCE = 0x05,
};
enum shps_irq_type {
/* NOTE: Must be in order of enum shps_dsm_fn above. */
SHPS_IRQ_TYPE_BASE_PRESENCE = 0,
SHPS_IRQ_TYPE_DEVICE_POWER = 1,
SHPS_IRQ_TYPE_DEVICE_PRESENCE = 2,
SHPS_NUM_IRQS,
};
static const char *const shps_gpio_names[] = {
[SHPS_IRQ_TYPE_BASE_PRESENCE] = "base_presence",
[SHPS_IRQ_TYPE_DEVICE_POWER] = "device_power",
[SHPS_IRQ_TYPE_DEVICE_PRESENCE] = "device_presence",
};
struct shps_device {
struct mutex lock[SHPS_NUM_IRQS]; /* Protects update in shps_dsm_notify_irq() */
struct gpio_desc *gpio[SHPS_NUM_IRQS];
unsigned int irq[SHPS_NUM_IRQS];
};
#define SHPS_IRQ_NOT_PRESENT ((unsigned int)-1)
static enum shps_dsm_fn shps_dsm_fn_for_irq(enum shps_irq_type type)
{
return SHPS_DSM_FN_IRQ_BASE_PRESENCE + type;
}
static void shps_dsm_notify_irq(struct platform_device *pdev, enum shps_irq_type type)
{
struct shps_device *sdev = platform_get_drvdata(pdev);
acpi_handle handle = ACPI_HANDLE(&pdev->dev);
union acpi_object *result;
union acpi_object param;
int value;
mutex_lock(&sdev->lock[type]);
value = gpiod_get_value_cansleep(sdev->gpio[type]);
if (value < 0) {
mutex_unlock(&sdev->lock[type]);
dev_err(&pdev->dev, "failed to get gpio: %d (irq=%d)\n", type, value);
return;
}
dev_dbg(&pdev->dev, "IRQ notification via DSM (irq=%d, value=%d)\n", type, value);
param.type = ACPI_TYPE_INTEGER;
param.integer.value = value;
result = acpi_evaluate_dsm(handle, &shps_dsm_guid, SHPS_DSM_REVISION,
shps_dsm_fn_for_irq(type), &param);
if (!result) {
dev_err(&pdev->dev, "IRQ notification via DSM failed (irq=%d, gpio=%d)\n",
type, value);
} else if (result->type != ACPI_TYPE_BUFFER) {
dev_err(&pdev->dev,
"IRQ notification via DSM failed: unexpected result type (irq=%d, gpio=%d)\n",
type, value);
} else if (result->buffer.length != 1 || result->buffer.pointer[0] != 0) {
dev_err(&pdev->dev,
"IRQ notification via DSM failed: unexpected result value (irq=%d, gpio=%d)\n",
type, value);
}
mutex_unlock(&sdev->lock[type]);
if (result)
ACPI_FREE(result);
}
static irqreturn_t shps_handle_irq(int irq, void *data)
{
struct platform_device *pdev = data;
struct shps_device *sdev = platform_get_drvdata(pdev);
int type;
/* Figure out which IRQ we're handling. */
for (type = 0; type < SHPS_NUM_IRQS; type++)
if (irq == sdev->irq[type])
break;
/* We should have found our interrupt, if not: this is a bug. */
if (WARN(type >= SHPS_NUM_IRQS, "invalid IRQ number: %d\n", irq))
return IRQ_HANDLED;
/* Forward interrupt to ACPI via DSM. */
shps_dsm_notify_irq(pdev, type);
return IRQ_HANDLED;
}
static int shps_setup_irq(struct platform_device *pdev, enum shps_irq_type type)
{
unsigned long flags = IRQF_ONESHOT | IRQF_TRIGGER_FALLING | IRQF_TRIGGER_RISING;
struct shps_device *sdev = platform_get_drvdata(pdev);
struct gpio_desc *gpiod;
acpi_handle handle = ACPI_HANDLE(&pdev->dev);
const char *irq_name;
const int dsm = shps_dsm_fn_for_irq(type);
int status, irq;
/*
* Only set up interrupts that we actually need: The Surface Book 3
* does not have a DSM for base presence, so don't set up an interrupt
* for that.
*/
if (!acpi_check_dsm(handle, &shps_dsm_guid, SHPS_DSM_REVISION, BIT(dsm))) {
dev_dbg(&pdev->dev, "IRQ notification via DSM not present (irq=%d)\n", type);
return 0;
}
gpiod = devm_gpiod_get(&pdev->dev, shps_gpio_names[type], GPIOD_ASIS);
if (IS_ERR(gpiod))
return PTR_ERR(gpiod);
irq = gpiod_to_irq(gpiod);
if (irq < 0)
return irq;
irq_name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "shps-irq-%d", type);
if (!irq_name)
return -ENOMEM;
status = devm_request_threaded_irq(&pdev->dev, irq, NULL, shps_handle_irq,
flags, irq_name, pdev);
if (status)
return status;
dev_dbg(&pdev->dev, "set up irq %d as type %d\n", irq, type);
sdev->gpio[type] = gpiod;
sdev->irq[type] = irq;
return 0;
}
static int surface_hotplug_remove(struct platform_device *pdev)
{
struct shps_device *sdev = platform_get_drvdata(pdev);
int i;
/* Ensure that IRQs have been fully handled and won't trigger any more. */
for (i = 0; i < SHPS_NUM_IRQS; i++) {
if (sdev->irq[i] != SHPS_IRQ_NOT_PRESENT)
disable_irq(sdev->irq[i]);
mutex_destroy(&sdev->lock[i]);
}
return 0;
}
static int surface_hotplug_probe(struct platform_device *pdev)
{
struct shps_device *sdev;
int status, i;
/*
* The MSHW0153 device is also present on the Surface Laptop 3,
* however that doesn't have a hot-pluggable PCIe device. It also
* doesn't have any GPIO interrupts/pins under the MSHW0153, so filter
* it out here.
*/
if (gpiod_count(&pdev->dev, NULL) < 0)
return -ENODEV;
status = devm_acpi_dev_add_driver_gpios(&pdev->dev, shps_acpi_gpios);
if (status)
return status;
sdev = devm_kzalloc(&pdev->dev, sizeof(*sdev), GFP_KERNEL);
if (!sdev)
return -ENOMEM;
platform_set_drvdata(pdev, sdev);
/*
* Initialize IRQs so that we can safely call surface_hotplug_remove()
* on errors.
*/
for (i = 0; i < SHPS_NUM_IRQS; i++)
sdev->irq[i] = SHPS_IRQ_NOT_PRESENT;
/* Set up IRQs. */
for (i = 0; i < SHPS_NUM_IRQS; i++) {
mutex_init(&sdev->lock[i]);
status = shps_setup_irq(pdev, i);
if (status) {
dev_err(&pdev->dev, "failed to set up IRQ %d: %d\n", i, status);
goto err;
}
}
/* Ensure everything is up-to-date. */
for (i = 0; i < SHPS_NUM_IRQS; i++)
if (sdev->irq[i] != SHPS_IRQ_NOT_PRESENT)
shps_dsm_notify_irq(pdev, i);
return 0;
err:
surface_hotplug_remove(pdev);
return status;
}
static const struct acpi_device_id surface_hotplug_acpi_match[] = {
{ "MSHW0153", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, surface_hotplug_acpi_match);
static struct platform_driver surface_hotplug_driver = {
.probe = surface_hotplug_probe,
.remove = surface_hotplug_remove,
.driver = {
.name = "surface_hotplug",
.acpi_match_table = surface_hotplug_acpi_match,
.probe_type = PROBE_PREFER_ASYNCHRONOUS,
},
};
module_platform_driver(surface_hotplug_driver);
MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>");
MODULE_DESCRIPTION("Surface Hot-Plug Signaling Driver for Surface Book Devices");
MODULE_LICENSE("GPL");


@@ -49,18 +49,6 @@ config WMI_BMOF
To compile this driver as a module, choose M here: the module will
be called wmi-bmof.
config ALIENWARE_WMI
tristate "Alienware Special feature control"
depends on ACPI
depends on LEDS_CLASS
depends on NEW_LEDS
depends on ACPI_WMI
help
This is a driver for controlling Alienware BIOS driven
features. It exposes an interface for controlling the AlienFX
zones on Alienware machines that don't contain a dedicated AlienFX
USB MCU such as the X51 and X51-R2.
config HUAWEI_WMI
tristate "Huawei WMI laptop extras driver"
depends on ACPI_BATTERY
@@ -327,169 +315,7 @@ config EEEPC_WMI
If you have an ACPI-WMI compatible Eee PC laptop (>= 1000), say Y or M
here.
config DCDBAS
tristate "Dell Systems Management Base Driver"
depends on X86
help
The Dell Systems Management Base Driver provides a sysfs interface
for systems management software to perform System Management
Interrupts (SMIs) and Host Control Actions (system power cycle or
power off after OS shutdown) on certain Dell systems.
See <file:Documentation/driver-api/dcdbas.rst> for more details on the driver
and the Dell systems on which Dell systems management software makes
use of this driver.
Say Y or M here to enable the driver for use by Dell systems
management software such as Dell OpenManage.
#
# The DELL_SMBIOS driver depends on ACPI_WMI and/or DCDBAS if those
# backends are selected. The "depends" line prevents a configuration
# where DELL_SMBIOS=y while either of those dependencies =m.
#
config DELL_SMBIOS
tristate "Dell SMBIOS driver"
depends on DCDBAS || DCDBAS=n
depends on ACPI_WMI || ACPI_WMI=n
help
This provides support for the Dell SMBIOS calling interface.
If you have a Dell computer you should enable this option.
Be sure to select at least one backend for it to work properly.
config DELL_SMBIOS_WMI
bool "Dell SMBIOS driver WMI backend"
default y
depends on ACPI_WMI
select DELL_WMI_DESCRIPTOR
depends on DELL_SMBIOS
help
This provides an implementation for the Dell SMBIOS calling interface
communicated over ACPI-WMI.
If you have a Dell computer from >2007 you should say Y here.
If you aren't sure and this module doesn't work for your computer
it just won't load.
config DELL_SMBIOS_SMM
bool "Dell SMBIOS driver SMM backend"
default y
depends on DCDBAS
depends on DELL_SMBIOS
help
This provides an implementation for the Dell SMBIOS calling interface
communicated over SMI/SMM.
If you have a Dell computer from <=2017 you should say Y here.
If you aren't sure and this module doesn't work for your computer
it just won't load.
config DELL_LAPTOP
tristate "Dell Laptop Extras"
depends on DMI
depends on BACKLIGHT_CLASS_DEVICE
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on RFKILL || RFKILL = n
depends on SERIO_I8042
depends on DELL_SMBIOS
select POWER_SUPPLY
select LEDS_CLASS
select NEW_LEDS
select LEDS_TRIGGERS
select LEDS_TRIGGER_AUDIO
help
This driver adds support for rfkill and backlight control to Dell
laptops (except for some models covered by the Compal driver).
config DELL_RBTN
tristate "Dell Airplane Mode Switch driver"
depends on ACPI
depends on INPUT
depends on RFKILL
help
Say Y here if you want to support the Dell Airplane Mode Switch ACPI
device on Dell laptops. The ACPI device is sometimes named DELLABCE or
DELRBTN.
This driver registers an rfkill device or an input hotkey device,
depending on the hardware type (HW switch slider or keyboard toggle
button). For rfkill devices it receives HW switch events and sets the
correct hard rfkill state.
To compile this driver as a module, choose M here: the module will
be called dell-rbtn.
config DELL_RBU
tristate "BIOS update support for DELL systems via sysfs"
depends on X86
select FW_LOADER
select FW_LOADER_USER_HELPER
help
Say m if you want to have the option of updating the BIOS for your
DELL system. Note you need a Dell OpenManage or Dell Update package (DUP)
supporting application to communicate with the BIOS regarding the new
image for the image update to take effect.
See <file:Documentation/admin-guide/dell_rbu.rst> for more details on the driver.
config DELL_SMO8800
tristate "Dell Latitude freefall driver (ACPI SMO88XX)"
depends on ACPI
help
Say Y here if you want to support SMO88XX freefall devices
on Dell Latitude laptops.
To compile this driver as a module, choose M here: the module will
be called dell-smo8800.
config DELL_WMI
tristate "Dell WMI notifications"
depends on ACPI_WMI
depends on DMI
depends on INPUT
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on DELL_SMBIOS
select DELL_WMI_DESCRIPTOR
select INPUT_SPARSEKMAP
help
Say Y here if you want to support WMI-based hotkeys on Dell laptops.
To compile this driver as a module, choose M here: the module will
be called dell-wmi.
config DELL_WMI_SYSMAN
tristate "Dell WMI-based Systems management driver"
depends on ACPI_WMI
depends on DMI
select NLS
help
This driver allows changing BIOS settings on many Dell machines from
2018 and newer without the use of any additional software.
To compile this driver as a module, choose M here: the module will
be called dell-wmi-sysman.
config DELL_WMI_DESCRIPTOR
tristate
depends on ACPI_WMI
config DELL_WMI_AIO
tristate "WMI Hotkeys for Dell All-In-One series"
depends on ACPI_WMI
depends on INPUT
select INPUT_SPARSEKMAP
help
Say Y here if you want to support WMI-based hotkeys on Dell
All-In-One machines.
To compile this driver as a module, choose M here: the module will
be called dell-wmi-aio.
config DELL_WMI_LED
tristate "External LED on Dell Business Netbooks"
depends on LEDS_CLASS
depends on ACPI_WMI
help
This adds support for the Latitude 2100 and similar
notebooks that have an external LED.
source "drivers/platform/x86/dell/Kconfig"
config AMILO_RFKILL
tristate "Fujitsu-Siemens Amilo rfkill support"
@@ -624,7 +450,10 @@ config IDEAPAD_LAPTOP
depends on BACKLIGHT_CLASS_DEVICE
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on ACPI_WMI || ACPI_WMI = n
depends on ACPI_PLATFORM_PROFILE
select INPUT_SPARSEKMAP
select NEW_LEDS
select LEDS_CLASS
help
This is a driver for Lenovo IdeaPad netbooks, providing support for the
rfkill switch, hotkeys, fan control and backlight control.
@@ -655,6 +484,7 @@ config THINKPAD_ACPI
depends on RFKILL || RFKILL = n
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on BACKLIGHT_CLASS_DEVICE
depends on ACPI_PLATFORM_PROFILE
select HWMON
select NVRAM
select NEW_LEDS
@@ -1327,21 +1157,6 @@ config INTEL_CHTDC_TI_PWRBTN
To compile this driver as a module, choose M here: the module
will be called intel_chtdc_ti_pwrbtn.
config INTEL_MFLD_THERMAL
tristate "Thermal driver for Intel Medfield platform"
depends on MFD_INTEL_MSIC && THERMAL
help
Say Y here to enable thermal driver support for the Intel Medfield
platform.
config INTEL_MID_POWER_BUTTON
tristate "power button driver for Intel MID platforms"
depends on INTEL_SCU && INPUT
help
This driver handles the power button on the Intel MID platforms.
If unsure, say N.
config INTEL_MRFLD_PWRBTN
tristate "Intel Merrifield Basin Cove power button driver"
depends on INTEL_SOC_PMIC_MRFLD
@@ -1369,7 +1184,7 @@ config INTEL_PMC_CORE
- MPHY/PLL gating status (Sunrisepoint PCH only)
config INTEL_PMT_CLASS
tristate "Intel Platform Monitoring Technology (PMT) Class driver"
tristate
help
The Intel Platform Monitoring Technology (PMT) class driver provides
the basic sysfs interface and file hierarchy used by PMT devices.
@@ -1382,6 +1197,7 @@ config INTEL_PMT_CLASS
config INTEL_PMT_TELEMETRY
tristate "Intel Platform Monitoring Technology (PMT) Telemetry driver"
depends on MFD_INTEL_PMT
select INTEL_PMT_CLASS
help
The Intel Platform Monitoring Technology (PMT) Telemetry driver provides
@ -1393,6 +1209,7 @@ config INTEL_PMT_TELEMETRY
config INTEL_PMT_CRASHLOG
tristate "Intel Platform Monitoring Technology (PMT) Crashlog driver"
depends on MFD_INTEL_PMT
select INTEL_PMT_CLASS
help
The Intel Platform Monitoring Technology (PMT) crashlog driver provides
@@ -1439,6 +1256,14 @@ config INTEL_SCU_PLATFORM
and SCU (sometimes called PMC as well). The driver currently
supports Intel Elkhart Lake and compatible platforms.
config INTEL_SCU_WDT
bool
default INTEL_SCU_PCI
depends on INTEL_MID_WATCHDOG
help
This is platform-specific code to instantiate the watchdog device
on ACPI-based Intel MID platforms.
config INTEL_SCU_IPC_UTIL
tristate "Intel SCU IPC utility driver"
depends on INTEL_SCU


@@ -9,7 +9,6 @@ obj-$(CONFIG_ACPI_WMI) += wmi.o
obj-$(CONFIG_WMI_BMOF) += wmi-bmof.o
# WMI drivers
obj-$(CONFIG_ALIENWARE_WMI) += alienware-wmi.o
obj-$(CONFIG_HUAWEI_WMI) += huawei-wmi.o
obj-$(CONFIG_INTEL_WMI_SBL_FW_UPDATE) += intel-wmi-sbl-fw-update.o
obj-$(CONFIG_INTEL_WMI_THUNDERBOLT) += intel-wmi-thunderbolt.o
@@ -37,20 +36,7 @@ obj-$(CONFIG_EEEPC_LAPTOP) += eeepc-laptop.o
obj-$(CONFIG_EEEPC_WMI) += eeepc-wmi.o
# Dell
obj-$(CONFIG_DCDBAS) += dcdbas.o
obj-$(CONFIG_DELL_SMBIOS) += dell-smbios.o
dell-smbios-objs := dell-smbios-base.o
dell-smbios-$(CONFIG_DELL_SMBIOS_WMI) += dell-smbios-wmi.o
dell-smbios-$(CONFIG_DELL_SMBIOS_SMM) += dell-smbios-smm.o
obj-$(CONFIG_DELL_LAPTOP) += dell-laptop.o
obj-$(CONFIG_DELL_RBTN) += dell-rbtn.o
obj-$(CONFIG_DELL_RBU) += dell_rbu.o
obj-$(CONFIG_DELL_SMO8800) += dell-smo8800.o
obj-$(CONFIG_DELL_WMI) += dell-wmi.o
obj-$(CONFIG_DELL_WMI_DESCRIPTOR) += dell-wmi-descriptor.o
obj-$(CONFIG_DELL_WMI_AIO) += dell-wmi-aio.o
obj-$(CONFIG_DELL_WMI_LED) += dell-wmi-led.o
obj-$(CONFIG_DELL_WMI_SYSMAN) += dell-wmi-sysman/
obj-$(CONFIG_X86_PLATFORM_DRIVERS_DELL) += dell/
# Fujitsu
obj-$(CONFIG_AMILO_RFKILL) += amilo-rfkill.o
@@ -137,8 +123,6 @@ obj-$(CONFIG_INTEL_UNCORE_FREQ_CONTROL) += intel-uncore-frequency.o
# Intel PMIC / PMC / P-Unit devices
obj-$(CONFIG_INTEL_BXTWC_PMIC_TMU) += intel_bxtwc_tmu.o
obj-$(CONFIG_INTEL_CHTDC_TI_PWRBTN) += intel_chtdc_ti_pwrbtn.o
obj-$(CONFIG_INTEL_MFLD_THERMAL) += intel_mid_thermal.o
obj-$(CONFIG_INTEL_MID_POWER_BUTTON) += intel_mid_powerbtn.o
obj-$(CONFIG_INTEL_MRFLD_PWRBTN) += intel_mrfld_pwrbtn.o
obj-$(CONFIG_INTEL_PMC_CORE) += intel_pmc_core.o intel_pmc_core_pltdrv.o
obj-$(CONFIG_INTEL_PMT_CLASS) += intel_pmt_class.o
@@ -148,6 +132,7 @@ obj-$(CONFIG_INTEL_PUNIT_IPC) += intel_punit_ipc.o
obj-$(CONFIG_INTEL_SCU_IPC) += intel_scu_ipc.o
obj-$(CONFIG_INTEL_SCU_PCI) += intel_scu_pcidrv.o
obj-$(CONFIG_INTEL_SCU_PLATFORM) += intel_scu_pltdrv.o
obj-$(CONFIG_INTEL_SCU_WDT) += intel_scu_wdt.o
obj-$(CONFIG_INTEL_SCU_IPC_UTIL) += intel_scu_ipcutil.o
obj-$(CONFIG_INTEL_TELEMETRY) += intel_telemetry_core.o \
intel_telemetry_pltdrv.o \


@@ -30,7 +30,6 @@
#include <linux/input/sparse-keymap.h>
#include <acpi/video.h>
ACPI_MODULE_NAME(KBUILD_MODNAME);
MODULE_AUTHOR("Carlos Corbacho");
MODULE_DESCRIPTION("Acer Laptop WMI Extras Driver");
MODULE_LICENSE("GPL");
@@ -1605,7 +1604,8 @@ static void acer_kbd_dock_get_initial_state(void)
status = wmi_evaluate_method(WMID_GUID3, 0, 0x2, &input_buf, &output_buf);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status, "Error getting keyboard-dock initial status"));
pr_err("Error getting keyboard-dock initial status: %s\n",
acpi_format_exception(status));
return;
}


@@ -210,31 +210,39 @@ static int amd_pmc_probe(struct platform_device *pdev)
dev->dev = &pdev->dev;
rdev = pci_get_domain_bus_and_slot(0, 0, PCI_DEVFN(0, 0));
if (!rdev || !pci_match_id(pmc_pci_ids, rdev))
if (!rdev || !pci_match_id(pmc_pci_ids, rdev)) {
pci_dev_put(rdev);
return -ENODEV;
}
dev->cpu_id = rdev->device;
err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_LO);
if (err) {
dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
pci_dev_put(rdev);
return pcibios_err_to_errno(err);
}
err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
if (err)
if (err) {
pci_dev_put(rdev);
return pcibios_err_to_errno(err);
}
base_addr_lo = val & AMD_PMC_BASE_ADDR_HI_MASK;
err = pci_write_config_dword(rdev, AMD_PMC_SMU_INDEX_ADDRESS, AMD_PMC_BASE_ADDR_HI);
if (err) {
dev_err(dev->dev, "error writing to 0x%x\n", AMD_PMC_SMU_INDEX_ADDRESS);
pci_dev_put(rdev);
return pcibios_err_to_errno(err);
}
err = pci_read_config_dword(rdev, AMD_PMC_SMU_INDEX_DATA, &val);
if (err)
if (err) {
pci_dev_put(rdev);
return pcibios_err_to_errno(err);
}
base_addr_hi = val & AMD_PMC_BASE_ADDR_LO_MASK;
pci_dev_put(rdev);


@@ -0,0 +1,207 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# Dell X86 Platform Specific Drivers
#
menuconfig X86_PLATFORM_DRIVERS_DELL
bool "Dell X86 Platform Specific Device Drivers"
default n
depends on X86_PLATFORM_DEVICES
help
Say Y here to get to see options for device drivers for various
Dell x86 platforms, including vendor-specific laptop extension drivers.
This option alone does not add any kernel code.
If you say N, all options in this submenu will be skipped and disabled.
if X86_PLATFORM_DRIVERS_DELL
config ALIENWARE_WMI
tristate "Alienware Special feature control"
default m
depends on ACPI
depends on LEDS_CLASS
depends on NEW_LEDS
depends on ACPI_WMI
help
This is a driver for controlling Alienware BIOS driven
features. It exposes an interface for controlling the AlienFX
zones on Alienware machines that don't contain a dedicated AlienFX
USB MCU such as the X51 and X51-R2.
config DCDBAS
tristate "Dell Systems Management Base Driver"
default m
depends on X86
help
The Dell Systems Management Base Driver provides a sysfs interface
for systems management software to perform System Management
Interrupts (SMIs) and Host Control Actions (system power cycle or
power off after OS shutdown) on certain Dell systems.
See <file:Documentation/driver-api/dcdbas.rst> for more details on the driver
and the Dell systems on which Dell systems management software makes
use of this driver.
Say Y or M here to enable the driver for use by Dell systems
management software such as Dell OpenManage.
config DELL_LAPTOP
tristate "Dell Laptop Extras"
default m
depends on DMI
depends on BACKLIGHT_CLASS_DEVICE
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on RFKILL || RFKILL = n
depends on SERIO_I8042
depends on DELL_SMBIOS
select POWER_SUPPLY
select LEDS_CLASS
select NEW_LEDS
select LEDS_TRIGGERS
select LEDS_TRIGGER_AUDIO
help
This driver adds support for rfkill and backlight control to Dell
laptops (except for some models covered by the Compal driver).
config DELL_RBU
tristate "BIOS update support for DELL systems via sysfs"
default m
depends on X86
select FW_LOADER
select FW_LOADER_USER_HELPER
help
Say m if you want to have the option of updating the BIOS on your
Dell system. Note that you need a supporting application, such as Dell
OpenManage or a Dell Update Package (DUP), to communicate the new
image to the BIOS for the update to take effect.
See <file:Documentation/admin-guide/dell_rbu.rst> for more details on the driver.
config DELL_RBTN
tristate "Dell Airplane Mode Switch driver"
default m
depends on ACPI
depends on INPUT
depends on RFKILL
help
Say Y here if you want to support the Dell Airplane Mode Switch ACPI
device on Dell laptops. It is sometimes named DELLABCE or DELRBTN.
This driver registers an rfkill device or an input hotkey device
depending on the hardware type (hw switch slider or keyboard toggle
button). For rfkill devices it receives HW switch events and sets the
correct hard rfkill state.
To compile this driver as a module, choose M here: the module will
be called dell-rbtn.
#
# The DELL_SMBIOS driver depends on ACPI_WMI and/or DCDBAS if those
# backends are selected. The "depends" line prevents a configuration
# where DELL_SMBIOS=y while either of those dependencies =m.
#
config DELL_SMBIOS
tristate "Dell SMBIOS driver"
default m
depends on DCDBAS || DCDBAS=n
depends on ACPI_WMI || ACPI_WMI=n
help
This provides support for the Dell SMBIOS calling interface.
If you have a Dell computer you should enable this option.
Be sure to select at least one backend for it to work properly.
config DELL_SMBIOS_WMI
bool "Dell SMBIOS driver WMI backend"
default y
depends on ACPI_WMI
select DELL_WMI_DESCRIPTOR
depends on DELL_SMBIOS
help
This provides an implementation for the Dell SMBIOS calling interface
communicated over ACPI-WMI.
If you have a Dell computer from >2007 you should say Y here.
If you aren't sure and this module doesn't work for your computer
it just won't load.
config DELL_SMBIOS_SMM
bool "Dell SMBIOS driver SMM backend"
default y
depends on DCDBAS
depends on DELL_SMBIOS
help
This provides an implementation for the Dell SMBIOS calling interface
communicated over SMI/SMM.
If you have a Dell computer from <=2017 you should say Y here.
If you aren't sure and this module doesn't work for your computer
it just won't load.
config DELL_SMO8800
tristate "Dell Latitude freefall driver (ACPI SMO88XX)"
default m
depends on ACPI
help
Say Y here if you want to support SMO88XX freefall devices
on Dell Latitude laptops.
To compile this driver as a module, choose M here: the module will
be called dell-smo8800.
config DELL_WMI
tristate "Dell WMI notifications"
default m
depends on ACPI_WMI
depends on DMI
depends on INPUT
depends on ACPI_VIDEO || ACPI_VIDEO = n
depends on DELL_SMBIOS
select DELL_WMI_DESCRIPTOR
select INPUT_SPARSEKMAP
help
Say Y here if you want to support WMI-based hotkeys on Dell laptops.
To compile this driver as a module, choose M here: the module will
be called dell-wmi.
config DELL_WMI_AIO
tristate "WMI Hotkeys for Dell All-In-One series"
default m
depends on ACPI_WMI
depends on INPUT
select INPUT_SPARSEKMAP
help
Say Y here if you want to support WMI-based hotkeys on Dell
All-In-One machines.
To compile this driver as a module, choose M here: the module will
be called dell-wmi-aio.
config DELL_WMI_DESCRIPTOR
tristate
default m
depends on ACPI_WMI
config DELL_WMI_LED
tristate "External LED on Dell Business Netbooks"
default m
depends on LEDS_CLASS
depends on ACPI_WMI
help
This adds support for the Latitude 2100 and similar
notebooks that have an external LED.
config DELL_WMI_SYSMAN
tristate "Dell WMI-based Systems management driver"
default m
depends on ACPI_WMI
depends on DMI
select NLS
help
This driver allows changing BIOS settings on many Dell machines from
2018 and newer without the use of any additional software.
To compile this driver as a module, choose M here: the module will
be called dell-wmi-sysman.
endif # X86_PLATFORM_DRIVERS_DELL


@@ -0,0 +1,21 @@
# SPDX-License-Identifier: GPL-2.0
#
# Makefile for linux/drivers/platform/x86/dell
# Dell x86 Platform-Specific Drivers
#
obj-$(CONFIG_ALIENWARE_WMI) += alienware-wmi.o
obj-$(CONFIG_DCDBAS) += dcdbas.o
obj-$(CONFIG_DELL_LAPTOP) += dell-laptop.o
obj-$(CONFIG_DELL_RBTN) += dell-rbtn.o
obj-$(CONFIG_DELL_RBU) += dell_rbu.o
obj-$(CONFIG_DELL_SMBIOS) += dell-smbios.o
dell-smbios-objs := dell-smbios-base.o
dell-smbios-$(CONFIG_DELL_SMBIOS_WMI) += dell-smbios-wmi.o
dell-smbios-$(CONFIG_DELL_SMBIOS_SMM) += dell-smbios-smm.o
obj-$(CONFIG_DELL_SMO8800) += dell-smo8800.o
obj-$(CONFIG_DELL_WMI) += dell-wmi.o
obj-$(CONFIG_DELL_WMI_AIO) += dell-wmi-aio.o
obj-$(CONFIG_DELL_WMI_DESCRIPTOR) += dell-wmi-descriptor.o
obj-$(CONFIG_DELL_WMI_LED) += dell-wmi-led.o
obj-$(CONFIG_DELL_WMI_SYSMAN) += dell-wmi-sysman/


@@ -377,6 +377,7 @@ static const struct x86_cpu_id intel_uncore_cpu_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(SKYLAKE_X, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, NULL),
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, NULL),
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, NULL),
{}
};


@@ -44,6 +44,7 @@ static const struct key_entry intel_vbtn_keymap[] = {
{ KE_IGNORE, 0xC7, { KEY_VOLUMEDOWN } }, /* volume-down key release */
{ KE_KEY, 0xC8, { KEY_ROTATE_LOCK_TOGGLE } }, /* rotate-lock key press */
{ KE_KEY, 0xC9, { KEY_ROTATE_LOCK_TOGGLE } }, /* rotate-lock key release */
{ KE_END }
};
static const struct key_entry intel_vbtn_switchmap[] = {
@@ -51,14 +52,15 @@ static const struct key_entry intel_vbtn_switchmap[] = {
{ KE_SW, 0xCB, { .sw = { SW_DOCK, 0 } } }, /* Undocked */
{ KE_SW, 0xCC, { .sw = { SW_TABLET_MODE, 1 } } }, /* Tablet */
{ KE_SW, 0xCD, { .sw = { SW_TABLET_MODE, 0 } } }, /* Laptop */
{ KE_END }
};
#define KEYMAP_LEN \
(ARRAY_SIZE(intel_vbtn_keymap) + ARRAY_SIZE(intel_vbtn_switchmap) + 1)
struct intel_vbtn_priv {
struct key_entry keymap[KEYMAP_LEN];
struct input_dev *input_dev;
struct input_dev *buttons_dev;
struct input_dev *switches_dev;
bool has_buttons;
bool has_switches;
bool wakeup_mode;
@@ -77,48 +79,62 @@ static void detect_tablet_mode(struct platform_device *device)
return;
m = !(vgbs & VGBS_TABLET_MODE_FLAGS);
input_report_switch(priv->input_dev, SW_TABLET_MODE, m);
input_report_switch(priv->switches_dev, SW_TABLET_MODE, m);
m = (vgbs & VGBS_DOCK_MODE_FLAG) ? 1 : 0;
input_report_switch(priv->input_dev, SW_DOCK, m);
input_report_switch(priv->switches_dev, SW_DOCK, m);
}
/*
* Note this unconditionally creates the 2 input_dev-s and sets up
* the sparse-keymaps. Only the registration is conditional on
* has_buttons / has_switches. This is done so that the notify
* handler can always call sparse_keymap_entry_from_scancode()
* on the input_dev-s to determine the event type.
*/
static int intel_vbtn_input_setup(struct platform_device *device)
{
struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev);
int ret, keymap_len = 0;
int ret;
if (priv->has_buttons) {
memcpy(&priv->keymap[keymap_len], intel_vbtn_keymap,
ARRAY_SIZE(intel_vbtn_keymap) *
sizeof(struct key_entry));
keymap_len += ARRAY_SIZE(intel_vbtn_keymap);
}
if (priv->has_switches) {
memcpy(&priv->keymap[keymap_len], intel_vbtn_switchmap,
ARRAY_SIZE(intel_vbtn_switchmap) *
sizeof(struct key_entry));
keymap_len += ARRAY_SIZE(intel_vbtn_switchmap);
}
priv->keymap[keymap_len].type = KE_END;
priv->input_dev = devm_input_allocate_device(&device->dev);
if (!priv->input_dev)
priv->buttons_dev = devm_input_allocate_device(&device->dev);
if (!priv->buttons_dev)
return -ENOMEM;
ret = sparse_keymap_setup(priv->input_dev, priv->keymap, NULL);
ret = sparse_keymap_setup(priv->buttons_dev, intel_vbtn_keymap, NULL);
if (ret)
return ret;
priv->input_dev->dev.parent = &device->dev;
priv->input_dev->name = "Intel Virtual Button driver";
priv->input_dev->id.bustype = BUS_HOST;
priv->buttons_dev->dev.parent = &device->dev;
priv->buttons_dev->name = "Intel Virtual Buttons";
priv->buttons_dev->id.bustype = BUS_HOST;
if (priv->has_switches)
if (priv->has_buttons) {
ret = input_register_device(priv->buttons_dev);
if (ret)
return ret;
}
priv->switches_dev = devm_input_allocate_device(&device->dev);
if (!priv->switches_dev)
return -ENOMEM;
ret = sparse_keymap_setup(priv->switches_dev, intel_vbtn_switchmap, NULL);
if (ret)
return ret;
priv->switches_dev->dev.parent = &device->dev;
priv->switches_dev->name = "Intel Virtual Switches";
priv->switches_dev->id.bustype = BUS_HOST;
if (priv->has_switches) {
detect_tablet_mode(device);
return input_register_device(priv->input_dev);
ret = input_register_device(priv->switches_dev);
if (ret)
return ret;
}
return 0;
}
static void notify_handler(acpi_handle handle, u32 event, void *context)
@@ -127,48 +143,50 @@ static void notify_handler(acpi_handle handle, u32 event, void *context)
struct intel_vbtn_priv *priv = dev_get_drvdata(&device->dev);
unsigned int val = !(event & 1); /* Even=press, Odd=release */
const struct key_entry *ke, *ke_rel;
struct input_dev *input_dev;
bool autorelease;
int ret;
if (priv->wakeup_mode) {
ke = sparse_keymap_entry_from_scancode(priv->input_dev, event);
if (ke) {
pm_wakeup_hard_event(&device->dev);
/*
* Switch events like tablet mode will wake the device
* and report the new switch position to the input
* subsystem.
*/
if (ke->type == KE_SW)
sparse_keymap_report_event(priv->input_dev,
event,
val,
0);
if ((ke = sparse_keymap_entry_from_scancode(priv->buttons_dev, event))) {
if (!priv->has_buttons) {
dev_warn(&device->dev, "Warning: received a button event on a device without buttons, please report this.\n");
return;
}
goto out_unknown;
input_dev = priv->buttons_dev;
} else if ((ke = sparse_keymap_entry_from_scancode(priv->switches_dev, event))) {
if (!priv->has_switches) {
dev_info(&device->dev, "Registering Intel Virtual Switches input-dev after receiving a switch event\n");
ret = input_register_device(priv->switches_dev);
if (ret)
return;
priv->has_switches = true;
}
input_dev = priv->switches_dev;
} else {
dev_dbg(&device->dev, "unknown event index 0x%x\n", event);
return;
}
if (priv->wakeup_mode) {
pm_wakeup_hard_event(&device->dev);
/*
* Skip reporting an evdev event for button wake events,
* mirroring how the drivers/acpi/button.c code skips this too.
*/
if (ke->type == KE_KEY)
return;
}
/*
* Even press events are autorelease if there is no corresponding odd
* release event, or if the odd event is KE_IGNORE.
*/
ke_rel = sparse_keymap_entry_from_scancode(priv->input_dev, event | 1);
ke_rel = sparse_keymap_entry_from_scancode(input_dev, event | 1);
autorelease = val && (!ke_rel || ke_rel->type == KE_IGNORE);
if (sparse_keymap_report_event(priv->input_dev, event, val, autorelease))
return;
out_unknown:
dev_dbg(&device->dev, "unknown event index 0x%x\n", event);
}
static bool intel_vbtn_has_buttons(acpi_handle handle)
{
acpi_status status;
status = acpi_evaluate_object(handle, "VBDL", NULL, NULL);
return ACPI_SUCCESS(status);
sparse_keymap_report_event(input_dev, event, val, autorelease);
}
/*
@@ -245,7 +263,7 @@ static int intel_vbtn_probe(struct platform_device *device)
acpi_status status;
int err;
has_buttons = intel_vbtn_has_buttons(handle);
has_buttons = acpi_has_method(handle, "VBDL");
has_switches = intel_vbtn_has_switches(handle);
if (!has_buttons && !has_switches) {
@@ -274,6 +292,12 @@ static int intel_vbtn_probe(struct platform_device *device)
if (ACPI_FAILURE(status))
return -EBUSY;
if (has_buttons) {
status = acpi_evaluate_object(handle, "VBDL", NULL, NULL);
if (ACPI_FAILURE(status))
dev_err(&device->dev, "Error VBDL failed with ACPI status %d\n", status);
}
device_init_wakeup(&device->dev, true);
/*
* In order for system wakeup to work, the EC GPE has to be marked as


@@ -1,233 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Power button driver for Intel MID platforms.
*
* Copyright (C) 2010,2017 Intel Corp
*
* Author: Hong Liu <hong.liu@intel.com>
* Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
*/
#include <linux/input.h>
#include <linux/interrupt.h>
#include <linux/mfd/intel_msic.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/pm_wakeirq.h>
#include <linux/slab.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include <asm/intel_scu_ipc.h>
#define DRIVER_NAME "msic_power_btn"
#define MSIC_PB_LEVEL (1 << 3) /* 1 - release, 0 - press */
/*
* The MSIC ti_datasheet document defines that the 1st bit of reg 0x21 is
* used to mask the power button interrupt
*/
#define MSIC_PWRBTNM (1 << 0)
/* Intel Tangier */
#define BCOVE_PB_LEVEL (1 << 4) /* 1 - release, 0 - press */
/* Basin Cove PMIC */
#define BCOVE_PBIRQ 0x02
#define BCOVE_IRQLVL1MSK 0x0c
#define BCOVE_PBIRQMASK 0x0d
#define BCOVE_PBSTATUS 0x27
struct mid_pb_ddata {
struct device *dev;
int irq;
struct input_dev *input;
unsigned short mirqlvl1_addr;
unsigned short pbstat_addr;
u8 pbstat_mask;
struct intel_scu_ipc_dev *scu;
int (*setup)(struct mid_pb_ddata *ddata);
};
static int mid_pbstat(struct mid_pb_ddata *ddata, int *value)
{
struct input_dev *input = ddata->input;
int ret;
u8 pbstat;
ret = intel_scu_ipc_dev_ioread8(ddata->scu, ddata->pbstat_addr,
&pbstat);
if (ret)
return ret;
dev_dbg(input->dev.parent, "PB_INT status= %d\n", pbstat);
*value = !(pbstat & ddata->pbstat_mask);
return 0;
}
static int mid_irq_ack(struct mid_pb_ddata *ddata)
{
return intel_scu_ipc_dev_update(ddata->scu, ddata->mirqlvl1_addr, 0,
MSIC_PWRBTNM);
}
static int mrfld_setup(struct mid_pb_ddata *ddata)
{
/* Unmask the PBIRQ and MPBIRQ on Tangier */
intel_scu_ipc_dev_update(ddata->scu, BCOVE_PBIRQ, 0, MSIC_PWRBTNM);
intel_scu_ipc_dev_update(ddata->scu, BCOVE_PBIRQMASK, 0, MSIC_PWRBTNM);
return 0;
}
static irqreturn_t mid_pb_isr(int irq, void *dev_id)
{
struct mid_pb_ddata *ddata = dev_id;
struct input_dev *input = ddata->input;
int value = 0;
int ret;
ret = mid_pbstat(ddata, &value);
if (ret < 0) {
dev_err(input->dev.parent,
"Read error %d while reading MSIC_PB_STATUS\n", ret);
} else {
input_event(input, EV_KEY, KEY_POWER, value);
input_sync(input);
}
mid_irq_ack(ddata);
return IRQ_HANDLED;
}
static const struct mid_pb_ddata mfld_ddata = {
.mirqlvl1_addr = INTEL_MSIC_IRQLVL1MSK,
.pbstat_addr = INTEL_MSIC_PBSTATUS,
.pbstat_mask = MSIC_PB_LEVEL,
};
static const struct mid_pb_ddata mrfld_ddata = {
.mirqlvl1_addr = BCOVE_IRQLVL1MSK,
.pbstat_addr = BCOVE_PBSTATUS,
.pbstat_mask = BCOVE_PB_LEVEL,
.setup = mrfld_setup,
};
static const struct x86_cpu_id mid_pb_cpu_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(ATOM_SALTWELL_MID, &mfld_ddata),
X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, &mrfld_ddata),
{}
};
static int mid_pb_probe(struct platform_device *pdev)
{
const struct x86_cpu_id *id;
struct mid_pb_ddata *ddata;
struct input_dev *input;
int irq = platform_get_irq(pdev, 0);
int error;
id = x86_match_cpu(mid_pb_cpu_ids);
if (!id)
return -ENODEV;
if (irq < 0) {
dev_err(&pdev->dev, "Failed to get IRQ: %d\n", irq);
return irq;
}
input = devm_input_allocate_device(&pdev->dev);
if (!input)
return -ENOMEM;
input->name = pdev->name;
input->phys = "power-button/input0";
input->id.bustype = BUS_HOST;
input->dev.parent = &pdev->dev;
input_set_capability(input, EV_KEY, KEY_POWER);
ddata = devm_kmemdup(&pdev->dev, (void *)id->driver_data,
sizeof(*ddata), GFP_KERNEL);
if (!ddata)
return -ENOMEM;
ddata->dev = &pdev->dev;
ddata->irq = irq;
ddata->input = input;
if (ddata->setup) {
error = ddata->setup(ddata);
if (error)
return error;
}
ddata->scu = devm_intel_scu_ipc_dev_get(&pdev->dev);
if (!ddata->scu)
return -EPROBE_DEFER;
error = devm_request_threaded_irq(&pdev->dev, irq, NULL, mid_pb_isr,
IRQF_ONESHOT, DRIVER_NAME, ddata);
if (error) {
dev_err(&pdev->dev,
"Unable to request irq %d for MID power button\n", irq);
return error;
}
error = input_register_device(input);
if (error) {
dev_err(&pdev->dev,
"Unable to register input dev, error %d\n", error);
return error;
}
platform_set_drvdata(pdev, ddata);
/*
* SCU firmware might send power button interrupts to IA core before
* kernel boots and doesn't get EOI from IA core. The first bit of
* MSIC reg 0x21 is kept masked, and SCU firmware doesn't send new
* power interrupt to Android kernel. Unmask the bit when probing
* power button in kernel.
* There is a very narrow race between irq handler and power button
* initialization. The race happens rarely. So we needn't worry
* about it.
*/
error = mid_irq_ack(ddata);
if (error) {
dev_err(&pdev->dev,
"Unable to clear power button interrupt, error: %d\n",
error);
return error;
}
device_init_wakeup(&pdev->dev, true);
dev_pm_set_wake_irq(&pdev->dev, irq);
return 0;
}
static int mid_pb_remove(struct platform_device *pdev)
{
dev_pm_clear_wake_irq(&pdev->dev);
device_init_wakeup(&pdev->dev, false);
return 0;
}
static struct platform_driver mid_pb_driver = {
.driver = {
.name = DRIVER_NAME,
},
.probe = mid_pb_probe,
.remove = mid_pb_remove,
};
module_platform_driver(mid_pb_driver);
MODULE_AUTHOR("Hong Liu <hong.liu@intel.com>");
MODULE_DESCRIPTION("Intel MID Power Button Driver");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:" DRIVER_NAME);


@@ -1,560 +0,0 @@
// SPDX-License-Identifier: GPL-2.0
/*
* Intel MID platform thermal driver
*
* Copyright (C) 2011 Intel Corporation
*
* Author: Durgadoss R <durgadoss.r@intel.com>
*/
#define pr_fmt(fmt) "intel_mid_thermal: " fmt
#include <linux/device.h>
#include <linux/err.h>
#include <linux/mfd/intel_msic.h>
#include <linux/module.h>
#include <linux/param.h>
#include <linux/platform_device.h>
#include <linux/pm.h>
#include <linux/slab.h>
#include <linux/thermal.h>
/* Number of thermal sensors */
#define MSIC_THERMAL_SENSORS 4
/* ADC1 - thermal registers */
#define MSIC_ADC_ENBL 0x10
#define MSIC_ADC_START 0x08
#define MSIC_ADCTHERM_ENBL 0x04
#define MSIC_ADCRRDATA_ENBL 0x05
#define MSIC_CHANL_MASK_VAL 0x0F
#define MSIC_STOPBIT_MASK 16
#define MSIC_ADCTHERM_MASK 4
/* Number of ADC channels */
#define ADC_CHANLS_MAX 15
#define ADC_LOOP_MAX (ADC_CHANLS_MAX - MSIC_THERMAL_SENSORS)
/* ADC channel code values */
#define SKIN_SENSOR0_CODE 0x08
#define SKIN_SENSOR1_CODE 0x09
#define SYS_SENSOR_CODE 0x0A
#define MSIC_DIE_SENSOR_CODE 0x03
#define SKIN_THERM_SENSOR0 0
#define SKIN_THERM_SENSOR1 1
#define SYS_THERM_SENSOR2 2
#define MSIC_DIE_THERM_SENSOR3 3
/* ADC code range */
#define ADC_MAX 977
#define ADC_MIN 162
#define ADC_VAL0C 887
#define ADC_VAL20C 720
#define ADC_VAL40C 508
#define ADC_VAL60C 315
/* ADC base addresses */
#define ADC_CHNL_START_ADDR INTEL_MSIC_ADC1ADDR0 /* increments by 1 */
#define ADC_DATA_START_ADDR INTEL_MSIC_ADC1SNS0H /* increments by 2 */
/* MSIC die attributes */
#define MSIC_DIE_ADC_MIN 488
#define MSIC_DIE_ADC_MAX 1004
/* This holds the address of the first free ADC channel,
* among the 15 channels
*/
static int channel_index;
struct platform_info {
struct platform_device *pdev;
struct thermal_zone_device *tzd[MSIC_THERMAL_SENSORS];
};
struct thermal_device_info {
unsigned int chnl_addr;
int direct;
/* This holds the current temperature in millidegree celsius */
long curr_temp;
};
/**
* to_msic_die_temp - converts adc_val to msic_die temperature
* @adc_val: ADC value to be converted
*
* Can sleep
*/
static int to_msic_die_temp(uint16_t adc_val)
{
return (368 * (adc_val) / 1000) - 220;
}
/**
* is_valid_adc - checks whether the adc code is within the defined range
* @min: minimum value for the sensor
* @max: maximum value for the sensor
*
* Can sleep
*/
static int is_valid_adc(uint16_t adc_val, uint16_t min, uint16_t max)
{
return (adc_val >= min) && (adc_val <= max);
}
/**
* adc_to_temp - converts the ADC code to temperature in C
* @direct: true if this channel is a direct index
* @adc_val: the adc_val that needs to be converted
* @tp: temperature return value
*
* Linear approximation is used to convert the skin ADC value into a
* temperature. This technique is used to avoid a very long look-up table
* for mapping ADC values to temperatures.
* The ADC code vs. sensor temperature curve is split into five parts
* to achieve a very close approximation with less than
* 0.5C of error.
*/
static int adc_to_temp(int direct, uint16_t adc_val, int *tp)
{
int temp;
/* Direct conversion for die temperature */
if (direct) {
if (is_valid_adc(adc_val, MSIC_DIE_ADC_MIN, MSIC_DIE_ADC_MAX)) {
*tp = to_msic_die_temp(adc_val) * 1000;
return 0;
}
return -ERANGE;
}
if (!is_valid_adc(adc_val, ADC_MIN, ADC_MAX))
return -ERANGE;
/* Linear approximation for skin temperature */
if (adc_val > ADC_VAL0C)
temp = 177 - (adc_val/5);
else if ((adc_val <= ADC_VAL0C) && (adc_val > ADC_VAL20C))
temp = 111 - (adc_val/8);
else if ((adc_val <= ADC_VAL20C) && (adc_val > ADC_VAL40C))
temp = 92 - (adc_val/10);
else if ((adc_val <= ADC_VAL40C) && (adc_val > ADC_VAL60C))
temp = 91 - (adc_val/10);
else
temp = 112 - (adc_val/6);
/* Convert temperature in celsius to milli degree celsius */
*tp = temp * 1000;
return 0;
}
/**
* mid_read_temp - read sensors for temperature
* @temp: holds the current temperature for the sensor after reading
*
* reads the adc_code from the channel and converts it to real
* temperature. The converted value is stored in temp.
*
* Can sleep
*/
static int mid_read_temp(struct thermal_zone_device *tzd, int *temp)
{
struct thermal_device_info *td_info = tzd->devdata;
uint16_t adc_val, addr;
uint8_t data = 0;
int ret;
int curr_temp;
addr = td_info->chnl_addr;
/* Enable the msic for conversion before reading */
ret = intel_msic_reg_write(INTEL_MSIC_ADC1CNTL3, MSIC_ADCRRDATA_ENBL);
if (ret)
return ret;
/* Re-toggle the RRDATARD bit (temporary workaround) */
ret = intel_msic_reg_write(INTEL_MSIC_ADC1CNTL3, MSIC_ADCTHERM_ENBL);
if (ret)
return ret;
/* Read the higher bits of data */
ret = intel_msic_reg_read(addr, &data);
if (ret)
return ret;
/* Shift bits to accommodate the lower two data bits */
adc_val = (data << 2);
addr++;
ret = intel_msic_reg_read(addr, &data);/* Read lower bits */
if (ret)
return ret;
/* Adding lower two bits to the higher bits */
data &= 03;
adc_val += data;
/* Convert ADC value to temperature */
ret = adc_to_temp(td_info->direct, adc_val, &curr_temp);
if (ret == 0)
*temp = td_info->curr_temp = curr_temp;
return ret;
}
/**
* configure_adc - enables/disables the ADC for conversion
* @val: zero: disables the ADC non-zero:enables the ADC
*
* Enable/Disable the ADC depending on the argument
*
* Can sleep
*/
static int configure_adc(int val)
{
int ret;
uint8_t data;
ret = intel_msic_reg_read(INTEL_MSIC_ADC1CNTL1, &data);
if (ret)
return ret;
if (val) {
/* Enable and start the ADC */
data |= (MSIC_ADC_ENBL | MSIC_ADC_START);
} else {
/* Just stop the ADC */
data &= (~MSIC_ADC_START);
}
return intel_msic_reg_write(INTEL_MSIC_ADC1CNTL1, data);
}
/**
* set_up_therm_channel - enable thermal channel for conversion
* @base_addr: index of free msic ADC channel
*
* Enable all the three channels for conversion
*
* Can sleep
*/
static int set_up_therm_channel(u16 base_addr)
{
int ret;
/* Enable all the sensor channels */
ret = intel_msic_reg_write(base_addr, SKIN_SENSOR0_CODE);
if (ret)
return ret;
ret = intel_msic_reg_write(base_addr + 1, SKIN_SENSOR1_CODE);
if (ret)
return ret;
ret = intel_msic_reg_write(base_addr + 2, SYS_SENSOR_CODE);
if (ret)
return ret;
/* Since this is the last channel, set the stop bit
* to 1 by ORing the DIE_SENSOR_CODE with 0x10 */
ret = intel_msic_reg_write(base_addr + 3,
(MSIC_DIE_SENSOR_CODE | 0x10));
if (ret)
return ret;
/* Enable ADC and start it */
return configure_adc(1);
}
/**
* reset_stopbit - sets the stop bit to 0 on the given channel
* @addr: address of the channel
*
* Can sleep
*/
static int reset_stopbit(uint16_t addr)
{
int ret;
uint8_t data;
ret = intel_msic_reg_read(addr, &data);
if (ret)
return ret;
/* Set the stop bit to zero */
return intel_msic_reg_write(addr, (data & 0xEF));
}
/**
* find_free_channel - finds an empty channel for conversion
*
* If the ADC is not enabled then start using 0th channel
* itself. Otherwise find an empty channel by looking for a
* channel in which the stopbit is set to 1. Returns the index
* of the first free channel on success or an error code.
*
* Context: can sleep
*
* FIXME: Ultimately the channel allocator will move into the intel_scu_ipc
* code.
*/
static int find_free_channel(void)
{
int ret;
int i;
uint8_t data;
/* check whether ADC is enabled */
ret = intel_msic_reg_read(INTEL_MSIC_ADC1CNTL1, &data);
if (ret)
return ret;
if ((data & MSIC_ADC_ENBL) == 0)
return 0;
/* ADC is already enabled; Looking for an empty channel */
for (i = 0; i < ADC_CHANLS_MAX; i++) {
ret = intel_msic_reg_read(ADC_CHNL_START_ADDR + i, &data);
if (ret)
return ret;
if (data & MSIC_STOPBIT_MASK) {
ret = i;
break;
}
}
return (ret > ADC_LOOP_MAX) ? (-EINVAL) : ret;
}
/**
* mid_initialize_adc - initializing the ADC
* @dev: our device structure
*
* Initialize the ADC for reading thermistor values. Can sleep.
*/
static int mid_initialize_adc(struct device *dev)
{
u8 data;
u16 base_addr;
int ret;
/*
* Ensure that adctherm is disabled before we
* initialize the ADC
*/
ret = intel_msic_reg_read(INTEL_MSIC_ADC1CNTL3, &data);
if (ret)
return ret;
data &= ~MSIC_ADCTHERM_MASK;
ret = intel_msic_reg_write(INTEL_MSIC_ADC1CNTL3, data);
if (ret)
return ret;
/* Index of the first channel in which the stop bit is set */
channel_index = find_free_channel();
if (channel_index < 0) {
dev_err(dev, "No free ADC channels");
return channel_index;
}
base_addr = ADC_CHNL_START_ADDR + channel_index;
if (!(channel_index == 0 || channel_index == ADC_LOOP_MAX)) {
/* Reset stop bit for channels other than 0 and 12 */
ret = reset_stopbit(base_addr);
if (ret)
return ret;
/* Index of the first free channel */
base_addr++;
channel_index++;
}
ret = set_up_therm_channel(base_addr);
if (ret) {
dev_err(dev, "unable to enable ADC");
return ret;
}
dev_dbg(dev, "ADC initialization successful");
return ret;
}
/**
* initialize_sensor - sets default temp and timer ranges
* @index: index of the sensor
*
* Context: can sleep
*/
static struct thermal_device_info *initialize_sensor(int index)
{
struct thermal_device_info *td_info =
kzalloc(sizeof(struct thermal_device_info), GFP_KERNEL);
if (!td_info)
return NULL;
/* Set the base addr of the channel for this sensor */
td_info->chnl_addr = ADC_DATA_START_ADDR + 2 * (channel_index + index);
/* Sensor 3 is direct conversion */
if (index == 3)
td_info->direct = 1;
return td_info;
}
#ifdef CONFIG_PM_SLEEP
/**
* mid_thermal_resume - resume routine
* @dev: device structure
*
* mid thermal resume: re-initializes the adc. Can sleep.
*/
static int mid_thermal_resume(struct device *dev)
{
return mid_initialize_adc(dev);
}
/**
* mid_thermal_suspend - suspend routine
* @dev: device structure
*
* mid thermal suspend implements the suspend functionality
* by stopping the ADC. Can sleep.
*/
static int mid_thermal_suspend(struct device *dev)
{
/*
* This just stops the ADC and does not disable it.
* temporary workaround until we have a generic ADC driver.
* If 0 is passed, it disables the ADC.
*/
return configure_adc(0);
}
#endif
static SIMPLE_DEV_PM_OPS(mid_thermal_pm,
mid_thermal_suspend, mid_thermal_resume);
/**
* read_curr_temp - reads the current temperature and stores in temp
* @temp: holds the current temperature value after reading
*
* Can sleep
*/
static int read_curr_temp(struct thermal_zone_device *tzd, int *temp)
{
WARN_ON(tzd == NULL);
return mid_read_temp(tzd, temp);
}
/* Can't be const */
static struct thermal_zone_device_ops tzd_ops = {
.get_temp = read_curr_temp,
};
/**
* mid_thermal_probe - mfld thermal initialize
* @pdev: platform device structure
*
* mid thermal probe initializes the hardware and registers
* all the sensors with the generic thermal framework. Can sleep.
*/
static int mid_thermal_probe(struct platform_device *pdev)
{
static char *name[MSIC_THERMAL_SENSORS] = {
"skin0", "skin1", "sys", "msicdie"
};
int ret;
int i;
struct platform_info *pinfo;
pinfo = devm_kzalloc(&pdev->dev, sizeof(struct platform_info),
GFP_KERNEL);
if (!pinfo)
return -ENOMEM;
/* Initializing the hardware */
ret = mid_initialize_adc(&pdev->dev);
if (ret) {
dev_err(&pdev->dev, "ADC init failed");
return ret;
}
/* Register each sensor with the generic thermal framework*/
for (i = 0; i < MSIC_THERMAL_SENSORS; i++) {
struct thermal_device_info *td_info = initialize_sensor(i);
if (!td_info) {
ret = -ENOMEM;
goto err;
}
pinfo->tzd[i] = thermal_zone_device_register(name[i],
0, 0, td_info, &tzd_ops, NULL, 0, 0);
if (IS_ERR(pinfo->tzd[i])) {
kfree(td_info);
ret = PTR_ERR(pinfo->tzd[i]);
goto err;
}
ret = thermal_zone_device_enable(pinfo->tzd[i]);
if (ret) {
kfree(td_info);
thermal_zone_device_unregister(pinfo->tzd[i]);
goto err;
}
}
pinfo->pdev = pdev;
platform_set_drvdata(pdev, pinfo);
return 0;
err:
while (--i >= 0) {
kfree(pinfo->tzd[i]->devdata);
thermal_zone_device_unregister(pinfo->tzd[i]);
}
configure_adc(0);
return ret;
}
/**
* mid_thermal_remove - mfld thermal finalize
* @pdev: platform device structure
*
* MFLD thermal remove unregisters all the sensors from the generic
* thermal framework. Can sleep.
*/
static int mid_thermal_remove(struct platform_device *pdev)
{
int i;
struct platform_info *pinfo = platform_get_drvdata(pdev);
for (i = 0; i < MSIC_THERMAL_SENSORS; i++) {
kfree(pinfo->tzd[i]->devdata);
thermal_zone_device_unregister(pinfo->tzd[i]);
}
/* Stop the ADC */
return configure_adc(0);
}
#define DRIVER_NAME "msic_thermal"
static const struct platform_device_id therm_id_table[] = {
{ DRIVER_NAME, 1 },
{ }
};
MODULE_DEVICE_TABLE(platform, therm_id_table);
static struct platform_driver mid_thermal_driver = {
.driver = {
.name = DRIVER_NAME,
.pm = &mid_thermal_pm,
},
.probe = mid_thermal_probe,
.remove = mid_thermal_remove,
.id_table = therm_id_table,
};
module_platform_driver(mid_thermal_driver);
MODULE_AUTHOR("Durgadoss R <durgadoss.r@intel.com>");
MODULE_DESCRIPTION("Intel Medfield Platform Thermal Driver");
MODULE_LICENSE("GPL v2");


@@ -75,7 +75,7 @@ struct intel_scu_ipc_dev {
#define IPC_READ_BUFFER 0x90
/* Timeout in jiffies */
#define IPC_TIMEOUT (3 * HZ)
#define IPC_TIMEOUT (5 * HZ)
static struct intel_scu_ipc_dev *ipcdev; /* Only one for now */
static DEFINE_MUTEX(ipclock); /* lock used to prevent multiple call to SCU */


@@ -11,8 +11,9 @@
#include <linux/platform_device.h>
#include <linux/platform_data/intel-mid_wdt.h>
#include <asm/cpu_device_id.h>
#include <asm/intel-family.h>
#include <asm/intel-mid.h>
#include <asm/intel_scu_ipc.h>
#include <asm/io_apic.h>
#include <asm/hw_irq.h>
@@ -49,34 +50,26 @@ static struct intel_mid_wdt_pdata tangier_pdata = {
.probe = tangier_probe,
};
static int wdt_scu_status_change(struct notifier_block *nb,
unsigned long code, void *data)
{
if (code == SCU_DOWN) {
platform_device_unregister(&wdt_dev);
return 0;
}
return platform_device_register(&wdt_dev);
}
static struct notifier_block wdt_scu_notifier = {
.notifier_call = wdt_scu_status_change,
static const struct x86_cpu_id intel_mid_cpu_ids[] = {
X86_MATCH_INTEL_FAM6_MODEL(ATOM_SILVERMONT_MID, &tangier_pdata),
{}
};
static int __init register_mid_wdt(void)
{
if (intel_mid_identify_cpu() != INTEL_MID_CPU_CHIP_TANGIER)
const struct x86_cpu_id *id;
id = x86_match_cpu(intel_mid_cpu_ids);
if (!id)
return -ENODEV;
wdt_dev.dev.platform_data = &tangier_pdata;
/*
* We need to be sure that the SCU IPC is ready before watchdog device
* can be registered:
*/
intel_scu_notifier_add(&wdt_scu_notifier);
return 0;
wdt_dev.dev.platform_data = (struct intel_mid_wdt_pdata *)id->driver_data;
return platform_device_register(&wdt_dev);
}
arch_initcall(register_mid_wdt);
static void __exit unregister_mid_wdt(void)
{
platform_device_unregister(&wdt_dev);
}
__exitcall(unregister_mid_wdt);


@@ -96,6 +96,8 @@ static int msi_wmi_query_block(int instance, int *ret)
struct acpi_buffer output = { ACPI_ALLOCATE_BUFFER, NULL };
status = wmi_query_block(MSIWMI_BIOS_GUID, instance, &output);
if (ACPI_FAILURE(status))
return -EIO;
obj = output.pointer;


@@ -66,6 +66,7 @@
#include <linux/acpi.h>
#include <linux/pci.h>
#include <linux/power_supply.h>
#include <linux/platform_profile.h>
#include <sound/core.h>
#include <sound/control.h>
#include <sound/initval.h>
@@ -9855,16 +9856,27 @@ static bool has_lapsensor;
static bool palm_state;
static bool lap_state;
static int lapsensor_get(bool *present, bool *state)
static int dytc_command(int command, int *output)
{
acpi_handle dytc_handle;
int output;
if (ACPI_FAILURE(acpi_get_handle(hkey_handle, "DYTC", &dytc_handle))) {
/* Platform doesn't support DYTC */
return -ENODEV;
}
if (!acpi_evalf(dytc_handle, output, NULL, "dd", command))
return -EIO;
return 0;
}
static int lapsensor_get(bool *present, bool *state)
{
int output, err;
*present = false;
if (ACPI_FAILURE(acpi_get_handle(hkey_handle, "DYTC", &dytc_handle)))
return -ENODEV;
if (!acpi_evalf(dytc_handle, &output, NULL, "dd", DYTC_CMD_GET))
return -EIO;
err = dytc_command(DYTC_CMD_GET, &output);
if (err)
return err;
*present = true; /* If we get this far, we have lapmode support */
*state = output & BIT(DYTC_GET_LAPMODE_BIT) ? true : false;
@@ -9983,6 +9995,434 @@ static struct ibm_struct proxsensor_driver_data = {
.exit = proxsensor_exit,
};
/*************************************************************************
* DYTC Platform Profile interface
*/
#define DYTC_CMD_QUERY 0 /* To get DYTC status - enable/revision */
#define DYTC_CMD_SET 1 /* To enable/disable IC function mode */
#define DYTC_CMD_RESET 0x1ff /* To reset back to default */
#define DYTC_QUERY_ENABLE_BIT 8 /* Bit 8 - 0 = disabled, 1 = enabled */
#define DYTC_QUERY_SUBREV_BIT 16 /* Bits 16 - 27 - sub revision */
#define DYTC_QUERY_REV_BIT 28 /* Bits 28 - 31 - revision */
#define DYTC_GET_FUNCTION_BIT 8 /* Bits 8-11 - function setting */
#define DYTC_GET_MODE_BIT 12 /* Bits 12-15 - mode setting */
#define DYTC_SET_FUNCTION_BIT 12 /* Bits 12-15 - function setting */
#define DYTC_SET_MODE_BIT 16 /* Bits 16-19 - mode setting */
#define DYTC_SET_VALID_BIT 20 /* Bit 20 - 1 = on, 0 = off */
#define DYTC_FUNCTION_STD 0 /* Function = 0, standard mode */
#define DYTC_FUNCTION_CQL 1 /* Function = 1, lap mode */
#define DYTC_FUNCTION_MMC 11 /* Function = 11, desk mode */
#define DYTC_MODE_PERFORM 2 /* High power mode aka performance */
#define DYTC_MODE_LOWPOWER 3 /* Low power mode */
#define DYTC_MODE_BALANCE 0xF /* Default mode aka balanced */
#define DYTC_SET_COMMAND(function, mode, on) \
(DYTC_CMD_SET | (function) << DYTC_SET_FUNCTION_BIT | \
(mode) << DYTC_SET_MODE_BIT | \
(on) << DYTC_SET_VALID_BIT)
#define DYTC_DISABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_BALANCE, 0)
#define DYTC_ENABLE_CQL DYTC_SET_COMMAND(DYTC_FUNCTION_CQL, DYTC_MODE_BALANCE, 1)
static bool dytc_profile_available;
static enum platform_profile_option dytc_current_profile;
static atomic_t dytc_ignore_event = ATOMIC_INIT(0);
static DEFINE_MUTEX(dytc_mutex);
static int convert_dytc_to_profile(int dytcmode, enum platform_profile_option *profile)
{
switch (dytcmode) {
case DYTC_MODE_LOWPOWER:
*profile = PLATFORM_PROFILE_LOW_POWER;
break;
case DYTC_MODE_BALANCE:
*profile = PLATFORM_PROFILE_BALANCED;
break;
case DYTC_MODE_PERFORM:
*profile = PLATFORM_PROFILE_PERFORMANCE;
break;
default: /* Unknown mode */
return -EINVAL;
}
return 0;
}
static int convert_profile_to_dytc(enum platform_profile_option profile, int *perfmode)
{
switch (profile) {
case PLATFORM_PROFILE_LOW_POWER:
*perfmode = DYTC_MODE_LOWPOWER;
break;
case PLATFORM_PROFILE_BALANCED:
*perfmode = DYTC_MODE_BALANCE;
break;
case PLATFORM_PROFILE_PERFORMANCE:
*perfmode = DYTC_MODE_PERFORM;
break;
default: /* Unknown profile */
return -EOPNOTSUPP;
}
return 0;
}
/*
* dytc_profile_get: Function to register with platform_profile
* handler. Returns current platform profile.
*/
static int dytc_profile_get(struct platform_profile_handler *pprof,
enum platform_profile_option *profile)
{
*profile = dytc_current_profile;
return 0;
}
/*
* Helper function - check if we are in CQL mode and if we are
* - disable CQL,
* - run the command
* - enable CQL
* If not in CQL mode, just run the command
*/
static int dytc_cql_command(int command, int *output)
{
int err, cmd_err, dummy;
int cur_funcmode;
/* Determine if we are in CQL mode. This alters the commands we do */
err = dytc_command(DYTC_CMD_GET, output);
if (err)
return err;
cur_funcmode = (*output >> DYTC_GET_FUNCTION_BIT) & 0xF;
/* Check if we're OK to return immediately */
if ((command == DYTC_CMD_GET) && (cur_funcmode != DYTC_FUNCTION_CQL))
return 0;
if (cur_funcmode == DYTC_FUNCTION_CQL) {
atomic_inc(&dytc_ignore_event);
err = dytc_command(DYTC_DISABLE_CQL, &dummy);
if (err)
return err;
}
cmd_err = dytc_command(command, output);
/* Check return condition after we've restored CQL state */
if (cur_funcmode == DYTC_FUNCTION_CQL) {
err = dytc_command(DYTC_ENABLE_CQL, &dummy);
if (err)
return err;
}
return cmd_err;
}
/*
* dytc_profile_set: Function to register with platform_profile
* handler. Sets current platform profile.
*/
static int dytc_profile_set(struct platform_profile_handler *pprof,
enum platform_profile_option profile)
{
int output;
int err;
if (!dytc_profile_available)
return -ENODEV;
err = mutex_lock_interruptible(&dytc_mutex);
if (err)
return err;
if (profile == PLATFORM_PROFILE_BALANCED) {
/* To get back to balanced mode we just issue a reset command */
err = dytc_command(DYTC_CMD_RESET, &output);
if (err)
goto unlock;
} else {
int perfmode;
err = convert_profile_to_dytc(profile, &perfmode);
if (err)
goto unlock;
/* Determine if we are in CQL mode. This alters the commands we do */
err = dytc_cql_command(DYTC_SET_COMMAND(DYTC_FUNCTION_MMC, perfmode, 1), &output);
if (err)
goto unlock;
}
/* Success - update current profile */
dytc_current_profile = profile;
unlock:
mutex_unlock(&dytc_mutex);
return err;
}
static void dytc_profile_refresh(void)
{
enum platform_profile_option profile;
int output, err;
int perfmode;
mutex_lock(&dytc_mutex);
err = dytc_cql_command(DYTC_CMD_GET, &output);
mutex_unlock(&dytc_mutex);
if (err)
return;
perfmode = (output >> DYTC_GET_MODE_BIT) & 0xF;
convert_dytc_to_profile(perfmode, &profile);
if (profile != dytc_current_profile) {
dytc_current_profile = profile;
platform_profile_notify();
}
}
static struct platform_profile_handler dytc_profile = {
.profile_get = dytc_profile_get,
.profile_set = dytc_profile_set,
};
static int tpacpi_dytc_profile_init(struct ibm_init_struct *iibm)
{
int err, output;
/* Setup supported modes */
set_bit(PLATFORM_PROFILE_LOW_POWER, dytc_profile.choices);
set_bit(PLATFORM_PROFILE_BALANCED, dytc_profile.choices);
set_bit(PLATFORM_PROFILE_PERFORMANCE, dytc_profile.choices);
dytc_profile_available = false;
err = dytc_command(DYTC_CMD_QUERY, &output);
/*
* If support isn't available (ENODEV) then don't return an error
* and don't create the sysfs group
*/
if (err == -ENODEV)
return 0;
/* For all other errors we can flag the failure */
if (err)
return err;
/* Check DYTC is enabled and supports mode setting */
if (output & BIT(DYTC_QUERY_ENABLE_BIT)) {
/* Only DYTC v5.0 and later has this feature. */
int dytc_version;
dytc_version = (output >> DYTC_QUERY_REV_BIT) & 0xF;
if (dytc_version >= 5) {
dbg_printk(TPACPI_DBG_INIT,
"DYTC version %d: thermal mode available\n", dytc_version);
/* Create platform_profile structure and register */
err = platform_profile_register(&dytc_profile);
/*
* If for some reason platform_profiles aren't enabled
* don't quit terminally.
*/
if (err)
return 0;
dytc_profile_available = true;
/* Ensure initial values are correct */
dytc_profile_refresh();
}
}
return 0;
}
static void dytc_profile_exit(void)
{
if (dytc_profile_available) {
dytc_profile_available = false;
platform_profile_remove();
}
}
static struct ibm_struct dytc_profile_driver_data = {
.name = "dytc-profile",
.exit = dytc_profile_exit,
};
/*************************************************************************
* Keyboard language interface
*/
struct keyboard_lang_data {
const char *lang_str;
int lang_code;
};
static const struct keyboard_lang_data keyboard_lang_data[] = {
{"be", 0x080c},
{"cz", 0x0405},
{"da", 0x0406},
{"de", 0x0c07},
{"en", 0x0000},
{"es", 0x2c0a},
{"et", 0x0425},
{"fr", 0x040c},
{"fr-ch", 0x100c},
{"hu", 0x040e},
{"it", 0x0410},
{"jp", 0x0411},
{"nl", 0x0413},
{"nn", 0x0414},
{"pl", 0x0415},
{"pt", 0x0816},
{"sl", 0x041b},
{"sv", 0x081d},
{"tr", 0x041f},
};
static int set_keyboard_lang_command(int command)
{
acpi_handle sskl_handle;
int output;
if (ACPI_FAILURE(acpi_get_handle(hkey_handle, "SSKL", &sskl_handle))) {
/* Platform doesn't support SSKL */
return -ENODEV;
}
if (!acpi_evalf(sskl_handle, &output, NULL, "dd", command))
return -EIO;
return 0;
}
static int get_keyboard_lang(int *output)
{
acpi_handle gskl_handle;
int kbd_lang;
if (ACPI_FAILURE(acpi_get_handle(hkey_handle, "GSKL", &gskl_handle))) {
/* Platform doesn't support GSKL */
return -ENODEV;
}
if (!acpi_evalf(gskl_handle, &kbd_lang, NULL, "dd", 0x02000000))
return -EIO;
/*
* METHOD_ERR gets returned on devices where there are no special (e.g. '=',
* '(' and ')') keys which use layout dependent key-press emulation.
*/
if (kbd_lang & METHOD_ERR)
return -ENODEV;
*output = kbd_lang;
return 0;
}
/* sysfs keyboard language entry */
static ssize_t keyboard_lang_show(struct device *dev,
struct device_attribute *attr,
char *buf)
{
int output, err, i, len = 0;
err = get_keyboard_lang(&output);
if (err)
return err;
for (i = 0; i < ARRAY_SIZE(keyboard_lang_data); i++) {
if (i)
len += sysfs_emit_at(buf, len, "%s", " ");
if (output == keyboard_lang_data[i].lang_code) {
len += sysfs_emit_at(buf, len, "[%s]", keyboard_lang_data[i].lang_str);
} else {
len += sysfs_emit_at(buf, len, "%s", keyboard_lang_data[i].lang_str);
}
}
len += sysfs_emit_at(buf, len, "\n");
return len;
}
static ssize_t keyboard_lang_store(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
{
int err, i;
bool lang_found = false;
int lang_code = 0;
for (i = 0; i < ARRAY_SIZE(keyboard_lang_data); i++) {
if (sysfs_streq(buf, keyboard_lang_data[i].lang_str)) {
lang_code = keyboard_lang_data[i].lang_code;
lang_found = true;
break;
}
}
if (lang_found) {
lang_code = lang_code | 1 << 24;
/* Set language code */
err = set_keyboard_lang_command(lang_code);
if (err)
return err;
} else {
dev_err(&tpacpi_pdev->dev, "Unknown keyboard language. Ignoring\n");
return -EINVAL;
}
tpacpi_disclose_usertask(attr->attr.name,
"keyboard language is set to %s\n", buf);
sysfs_notify(&tpacpi_pdev->dev.kobj, NULL, "keyboard_lang");
return count;
}
static DEVICE_ATTR_RW(keyboard_lang);
static struct attribute *kbdlang_attributes[] = {
&dev_attr_keyboard_lang.attr,
NULL
};
static const struct attribute_group kbdlang_attr_group = {
.attrs = kbdlang_attributes,
};
static int tpacpi_kbdlang_init(struct ibm_init_struct *iibm)
{
int err, output;
err = get_keyboard_lang(&output);
/*
* If support isn't available (ENODEV) then don't return an error
* just don't create the sysfs group.
*/
if (err == -ENODEV)
return 0;
if (err)
return err;
/* Platform supports this feature - create the sysfs file */
return sysfs_create_group(&tpacpi_pdev->dev.kobj, &kbdlang_attr_group);
}
static void kbdlang_exit(void)
{
sysfs_remove_group(&tpacpi_pdev->dev.kobj, &kbdlang_attr_group);
}
static struct ibm_struct kbdlang_driver_data = {
.name = "kbdlang",
.exit = kbdlang_exit,
};
/****************************************************************************
****************************************************************************
*
@@ -10031,8 +10471,12 @@ static void tpacpi_driver_event(const unsigned int hkey_event)
mutex_unlock(&kbdlight_mutex);
}
if (hkey_event == TP_HKEY_EV_THM_CSM_COMPLETED)
if (hkey_event == TP_HKEY_EV_THM_CSM_COMPLETED) {
lapsensor_refresh();
/* If we are already accessing DYTC then skip dytc update */
if (!atomic_add_unless(&dytc_ignore_event, -1, 0))
dytc_profile_refresh();
}
}
static void hotkey_driver_event(const unsigned int scancode)
@@ -10475,6 +10919,14 @@ static struct ibm_init_struct ibms_init[] __initdata = {
.init = tpacpi_proxsensor_init,
.data = &proxsensor_driver_data,
},
{
.init = tpacpi_dytc_profile_init,
.data = &dytc_profile_driver_data,
},
{
.init = tpacpi_kbdlang_init,
.data = &kbdlang_driver_data,
},
};
static int __init set_ibm_param(const char *val, const struct kernel_param *kp)


@@ -382,6 +382,23 @@ static const struct ts_dmi_data jumper_ezpad_6_m4_data = {
.properties = jumper_ezpad_6_m4_props,
};
static const struct property_entry jumper_ezpad_7_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 4),
PROPERTY_ENTRY_U32("touchscreen-min-y", 10),
PROPERTY_ENTRY_U32("touchscreen-size-x", 2044),
PROPERTY_ENTRY_U32("touchscreen-size-y", 1526),
PROPERTY_ENTRY_BOOL("touchscreen-swapped-x-y"),
PROPERTY_ENTRY_STRING("firmware-name", "gsl3680-jumper-ezpad-7.fw"),
PROPERTY_ENTRY_U32("silead,max-fingers", 10),
PROPERTY_ENTRY_BOOL("silead,stuck-controller-bug"),
{ }
};
static const struct ts_dmi_data jumper_ezpad_7_data = {
.acpi_name = "MSSL1680:00",
.properties = jumper_ezpad_7_props,
};
static const struct property_entry jumper_ezpad_mini3_props[] = {
PROPERTY_ENTRY_U32("touchscreen-min-x", 23),
PROPERTY_ENTRY_U32("touchscreen-min-y", 16),
@@ -1034,6 +1051,16 @@ const struct dmi_system_id touchscreen_dmi_table[] = {
DMI_MATCH(DMI_BIOS_VERSION, "Jumper8.S106x"),
},
},
{
/* Jumper EZpad 7 */
.driver_data = (void *)&jumper_ezpad_7_data,
.matches = {
DMI_MATCH(DMI_SYS_VENDOR, "Jumper"),
DMI_MATCH(DMI_PRODUCT_NAME, "EZpad"),
/* Jumper12x.WJ2012.bsBKRCP05 with the version dropped */
DMI_MATCH(DMI_BIOS_VERSION, "Jumper12x.WJ2012.bsBKRCP"),
},
},
{
/* Jumper EZpad mini3 */
.driver_data = (void *)&jumper_ezpad_mini3_data,


@@ -973,18 +973,6 @@ config RTC_DRV_ALPHA
Direct support for the real-time clock found on every Alpha
system, specifically MC146818 compatibles. If in doubt, say Y.
config RTC_DRV_VRTC
tristate "Virtual RTC for Intel MID platforms"
depends on X86_INTEL_MID
default y if X86_INTEL_MID
help
Say "yes" here to get direct support for the real time clock
found on Moorestown platforms. The VRTC is an emulated RTC that
derives its clock source from a real RTC in the PMIC. The MC146818
style programming interface is mostly conserved, but any
updates are done via IPC calls to the system controller FW.
config RTC_DRV_DS1216
tristate "Dallas DS1216"
depends on SNI_RM


@@ -174,7 +174,6 @@ obj-$(CONFIG_RTC_DRV_TWL4030) += rtc-twl.o
obj-$(CONFIG_RTC_DRV_TX4939) += rtc-tx4939.o
obj-$(CONFIG_RTC_DRV_V3020) += rtc-v3020.o
obj-$(CONFIG_RTC_DRV_VR41XX) += rtc-vr41xx.o
obj-$(CONFIG_RTC_DRV_VRTC) += rtc-mrst.o
obj-$(CONFIG_RTC_DRV_VT8500) += rtc-vt8500.o
obj-$(CONFIG_RTC_DRV_WILCO_EC) += rtc-wilco-ec.o
obj-$(CONFIG_RTC_DRV_WM831X) += rtc-wm831x.o


@@ -1,521 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* rtc-mrst.c: Driver for Moorestown virtual RTC
*
* (C) Copyright 2009 Intel Corporation
* Author: Jacob Pan (jacob.jun.pan@intel.com)
* Feng Tang (feng.tang@intel.com)
*
* Note:
* VRTC is emulated by system controller firmware, the real HW
* RTC is located in the PMIC device. SCU FW shadows PMIC RTC
* in a memory mapped IO space that is visible to the host IA
* processor.
*
* This driver is based upon drivers/rtc/rtc-cmos.c
*/
/*
* Note:
* * vRTC only supports binary mode and 24H mode
* * vRTC only support PIE and AIE, no UIE, and its PIE only happens
* at 23:59:59pm everyday, no support for adjustable frequency
* * Alarm function is also limited to hr/min/sec.
*/
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/kernel.h>
#include <linux/mc146818rtc.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/sfi.h>
#include <asm/intel_scu_ipc.h>
#include <asm/intel-mid.h>
#include <asm/intel_mid_vrtc.h>
struct mrst_rtc {
struct rtc_device *rtc;
struct device *dev;
int irq;
u8 enabled_wake;
u8 suspend_ctrl;
};
static const char driver_name[] = "rtc_mrst";
#define RTC_IRQMASK (RTC_PF | RTC_AF)
static inline int is_intr(u8 rtc_intr)
{
if (!(rtc_intr & RTC_IRQF))
return 0;
return rtc_intr & RTC_IRQMASK;
}
static inline unsigned char vrtc_is_updating(void)
{
unsigned char uip;
unsigned long flags;
spin_lock_irqsave(&rtc_lock, flags);
uip = (vrtc_cmos_read(RTC_FREQ_SELECT) & RTC_UIP);
spin_unlock_irqrestore(&rtc_lock, flags);
return uip;
}
/*
* rtc_time's year contains the increment over 1900, but vRTC's YEAR
* register can't be programmed to value larger than 0x64, so vRTC
* driver chose to use 1972 (1970 is UNIX time start point) as the base,
* and does the translation at read/write time.
*
* Why not just use 1970 as the offset? It's because using 1972 will
* make it consistent in leap year setting for both vrtc and low-level
* physical rtc devices. Then why not use 1960 as the offset? If we use
* 1960, for a device's first use, its YEAR register is 0 and the system
* year will be parsed as 1960 which is not a valid UNIX time and will
* cause many applications to fail mysteriously.
*/
static int mrst_read_time(struct device *dev, struct rtc_time *time)
{
unsigned long flags;
if (vrtc_is_updating())
msleep(20);
spin_lock_irqsave(&rtc_lock, flags);
time->tm_sec = vrtc_cmos_read(RTC_SECONDS);
time->tm_min = vrtc_cmos_read(RTC_MINUTES);
time->tm_hour = vrtc_cmos_read(RTC_HOURS);
time->tm_mday = vrtc_cmos_read(RTC_DAY_OF_MONTH);
time->tm_mon = vrtc_cmos_read(RTC_MONTH);
time->tm_year = vrtc_cmos_read(RTC_YEAR);
spin_unlock_irqrestore(&rtc_lock, flags);
/* Adjust for the 1972/1900 */
time->tm_year += 72;
time->tm_mon--;
return 0;
}
static int mrst_set_time(struct device *dev, struct rtc_time *time)
{
int ret;
unsigned long flags;
unsigned char mon, day, hrs, min, sec;
unsigned int yrs;
yrs = time->tm_year;
mon = time->tm_mon + 1; /* tm_mon starts at zero */
day = time->tm_mday;
hrs = time->tm_hour;
min = time->tm_min;
sec = time->tm_sec;
if (yrs < 72 || yrs > 172)
return -EINVAL;
yrs -= 72;
spin_lock_irqsave(&rtc_lock, flags);
vrtc_cmos_write(yrs, RTC_YEAR);
vrtc_cmos_write(mon, RTC_MONTH);
vrtc_cmos_write(day, RTC_DAY_OF_MONTH);
vrtc_cmos_write(hrs, RTC_HOURS);
vrtc_cmos_write(min, RTC_MINUTES);
vrtc_cmos_write(sec, RTC_SECONDS);
spin_unlock_irqrestore(&rtc_lock, flags);
ret = intel_scu_ipc_simple_command(IPCMSG_VRTC, IPC_CMD_VRTC_SETTIME);
return ret;
}
static int mrst_read_alarm(struct device *dev, struct rtc_wkalrm *t)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned char rtc_control;
if (mrst->irq <= 0)
return -EIO;
/* vRTC only supports binary mode */
spin_lock_irq(&rtc_lock);
t->time.tm_sec = vrtc_cmos_read(RTC_SECONDS_ALARM);
t->time.tm_min = vrtc_cmos_read(RTC_MINUTES_ALARM);
t->time.tm_hour = vrtc_cmos_read(RTC_HOURS_ALARM);
rtc_control = vrtc_cmos_read(RTC_CONTROL);
spin_unlock_irq(&rtc_lock);
t->enabled = !!(rtc_control & RTC_AIE);
t->pending = 0;
return 0;
}
static void mrst_checkintr(struct mrst_rtc *mrst, unsigned char rtc_control)
{
unsigned char rtc_intr;
/*
* NOTE after changing RTC_xIE bits we always read INTR_FLAGS;
* allegedly some older rtcs need that to handle irqs properly
*/
rtc_intr = vrtc_cmos_read(RTC_INTR_FLAGS);
rtc_intr &= (rtc_control & RTC_IRQMASK) | RTC_IRQF;
if (is_intr(rtc_intr))
rtc_update_irq(mrst->rtc, 1, rtc_intr);
}
static void mrst_irq_enable(struct mrst_rtc *mrst, unsigned char mask)
{
unsigned char rtc_control;
/*
* Flush any pending IRQ status, notably for update irqs,
* before we enable new IRQs
*/
rtc_control = vrtc_cmos_read(RTC_CONTROL);
mrst_checkintr(mrst, rtc_control);
rtc_control |= mask;
vrtc_cmos_write(rtc_control, RTC_CONTROL);
mrst_checkintr(mrst, rtc_control);
}
static void mrst_irq_disable(struct mrst_rtc *mrst, unsigned char mask)
{
unsigned char rtc_control;
rtc_control = vrtc_cmos_read(RTC_CONTROL);
rtc_control &= ~mask;
vrtc_cmos_write(rtc_control, RTC_CONTROL);
mrst_checkintr(mrst, rtc_control);
}
static int mrst_set_alarm(struct device *dev, struct rtc_wkalrm *t)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned char hrs, min, sec;
int ret = 0;
if (!mrst->irq)
return -EIO;
hrs = t->time.tm_hour;
min = t->time.tm_min;
sec = t->time.tm_sec;
spin_lock_irq(&rtc_lock);
/* Next rtc irq must not be from previous alarm setting */
mrst_irq_disable(mrst, RTC_AIE);
/* Update alarm */
vrtc_cmos_write(hrs, RTC_HOURS_ALARM);
vrtc_cmos_write(min, RTC_MINUTES_ALARM);
vrtc_cmos_write(sec, RTC_SECONDS_ALARM);
spin_unlock_irq(&rtc_lock);
ret = intel_scu_ipc_simple_command(IPCMSG_VRTC, IPC_CMD_VRTC_SETALARM);
if (ret)
return ret;
spin_lock_irq(&rtc_lock);
if (t->enabled)
mrst_irq_enable(mrst, RTC_AIE);
spin_unlock_irq(&rtc_lock);
return 0;
}
/* Currently, the vRTC doesn't support UIE ON/OFF */
static int mrst_rtc_alarm_irq_enable(struct device *dev, unsigned int enabled)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned long flags;
spin_lock_irqsave(&rtc_lock, flags);
if (enabled)
mrst_irq_enable(mrst, RTC_AIE);
else
mrst_irq_disable(mrst, RTC_AIE);
spin_unlock_irqrestore(&rtc_lock, flags);
return 0;
}
#if IS_ENABLED(CONFIG_RTC_INTF_PROC)
static int mrst_procfs(struct device *dev, struct seq_file *seq)
{
unsigned char rtc_control;
spin_lock_irq(&rtc_lock);
rtc_control = vrtc_cmos_read(RTC_CONTROL);
spin_unlock_irq(&rtc_lock);
seq_printf(seq,
"periodic_IRQ\t: %s\n"
"alarm\t\t: %s\n"
"BCD\t\t: no\n"
"periodic_freq\t: daily (not adjustable)\n",
(rtc_control & RTC_PIE) ? "on" : "off",
(rtc_control & RTC_AIE) ? "on" : "off");
return 0;
}
#else
#define mrst_procfs NULL
#endif
static const struct rtc_class_ops mrst_rtc_ops = {
.read_time = mrst_read_time,
.set_time = mrst_set_time,
.read_alarm = mrst_read_alarm,
.set_alarm = mrst_set_alarm,
.proc = mrst_procfs,
.alarm_irq_enable = mrst_rtc_alarm_irq_enable,
};
static struct mrst_rtc mrst_rtc;
/*
* When vRTC IRQ is captured by SCU FW, FW will clear the AIE bit in
* Reg B, so no need for this driver to clear it
*/
static irqreturn_t mrst_rtc_irq(int irq, void *p)
{
u8 irqstat;
spin_lock(&rtc_lock);
/* This read will clear all IRQ flags inside Reg C */
irqstat = vrtc_cmos_read(RTC_INTR_FLAGS);
spin_unlock(&rtc_lock);
irqstat &= RTC_IRQMASK | RTC_IRQF;
if (is_intr(irqstat)) {
rtc_update_irq(p, 1, irqstat);
return IRQ_HANDLED;
}
return IRQ_NONE;
}
static int vrtc_mrst_do_probe(struct device *dev, struct resource *iomem,
int rtc_irq)
{
int retval = 0;
unsigned char rtc_control;
/* There can be only one ... */
if (mrst_rtc.dev)
return -EBUSY;
if (!iomem)
return -ENODEV;
iomem = devm_request_mem_region(dev, iomem->start, resource_size(iomem),
driver_name);
if (!iomem) {
dev_dbg(dev, "i/o mem already in use.\n");
return -EBUSY;
}
mrst_rtc.irq = rtc_irq;
mrst_rtc.dev = dev;
dev_set_drvdata(dev, &mrst_rtc);
mrst_rtc.rtc = devm_rtc_allocate_device(dev);
if (IS_ERR(mrst_rtc.rtc))
return PTR_ERR(mrst_rtc.rtc);
mrst_rtc.rtc->ops = &mrst_rtc_ops;
rename_region(iomem, dev_name(&mrst_rtc.rtc->dev));
spin_lock_irq(&rtc_lock);
mrst_irq_disable(&mrst_rtc, RTC_PIE | RTC_AIE);
rtc_control = vrtc_cmos_read(RTC_CONTROL);
spin_unlock_irq(&rtc_lock);
if (!(rtc_control & RTC_24H) || (rtc_control & (RTC_DM_BINARY)))
dev_dbg(dev, "TODO: support more than 24-hr BCD mode\n");
if (rtc_irq) {
retval = devm_request_irq(dev, rtc_irq, mrst_rtc_irq,
0, dev_name(&mrst_rtc.rtc->dev),
mrst_rtc.rtc);
if (retval < 0) {
dev_dbg(dev, "IRQ %d is already in use, err %d\n",
rtc_irq, retval);
goto cleanup0;
}
}
retval = devm_rtc_register_device(mrst_rtc.rtc);
if (retval)
goto cleanup0;
dev_dbg(dev, "initialised\n");
return 0;
cleanup0:
mrst_rtc.dev = NULL;
dev_err(dev, "rtc-mrst: unable to initialise\n");
return retval;
}
static void rtc_mrst_do_shutdown(void)
{
spin_lock_irq(&rtc_lock);
mrst_irq_disable(&mrst_rtc, RTC_IRQMASK);
spin_unlock_irq(&rtc_lock);
}
static void rtc_mrst_do_remove(struct device *dev)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
rtc_mrst_do_shutdown();
mrst->rtc = NULL;
mrst->dev = NULL;
}
#ifdef CONFIG_PM_SLEEP
static int mrst_suspend(struct device *dev)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned char tmp;
/* Only the alarm might be a wakeup event source */
spin_lock_irq(&rtc_lock);
mrst->suspend_ctrl = tmp = vrtc_cmos_read(RTC_CONTROL);
if (tmp & (RTC_PIE | RTC_AIE)) {
unsigned char mask;
if (device_may_wakeup(dev))
mask = RTC_IRQMASK & ~RTC_AIE;
else
mask = RTC_IRQMASK;
tmp &= ~mask;
vrtc_cmos_write(tmp, RTC_CONTROL);
mrst_checkintr(mrst, tmp);
}
spin_unlock_irq(&rtc_lock);
if (tmp & RTC_AIE) {
mrst->enabled_wake = 1;
enable_irq_wake(mrst->irq);
}
dev_dbg(&mrst_rtc.rtc->dev, "suspend%s, ctrl %02x\n",
(tmp & RTC_AIE) ? ", alarm may wake" : "",
tmp);
return 0;
}
/*
* We want RTC alarms to wake us from the deep power saving state
*/
static inline int mrst_poweroff(struct device *dev)
{
return mrst_suspend(dev);
}
static int mrst_resume(struct device *dev)
{
struct mrst_rtc *mrst = dev_get_drvdata(dev);
unsigned char tmp = mrst->suspend_ctrl;
/* Re-enable any irqs previously active */
if (tmp & RTC_IRQMASK) {
unsigned char mask;
if (mrst->enabled_wake) {
disable_irq_wake(mrst->irq);
mrst->enabled_wake = 0;
}
spin_lock_irq(&rtc_lock);
do {
vrtc_cmos_write(tmp, RTC_CONTROL);
mask = vrtc_cmos_read(RTC_INTR_FLAGS);
mask &= (tmp & RTC_IRQMASK) | RTC_IRQF;
if (!is_intr(mask))
break;
rtc_update_irq(mrst->rtc, 1, mask);
tmp &= ~RTC_AIE;
} while (mask & RTC_AIE);
spin_unlock_irq(&rtc_lock);
}
dev_dbg(&mrst_rtc.rtc->dev, "resume, ctrl %02x\n", tmp);
return 0;
}
static SIMPLE_DEV_PM_OPS(mrst_pm_ops, mrst_suspend, mrst_resume);
#define MRST_PM_OPS (&mrst_pm_ops)
#else
#define MRST_PM_OPS NULL
static inline int mrst_poweroff(struct device *dev)
{
return -ENOSYS;
}
#endif
static int vrtc_mrst_platform_probe(struct platform_device *pdev)
{
return vrtc_mrst_do_probe(&pdev->dev,
platform_get_resource(pdev, IORESOURCE_MEM, 0),
platform_get_irq(pdev, 0));
}
static int vrtc_mrst_platform_remove(struct platform_device *pdev)
{
rtc_mrst_do_remove(&pdev->dev);
return 0;
}
static void vrtc_mrst_platform_shutdown(struct platform_device *pdev)
{
if (system_state == SYSTEM_POWER_OFF && !mrst_poweroff(&pdev->dev))
return;
rtc_mrst_do_shutdown();
}
MODULE_ALIAS("platform:vrtc_mrst");
static struct platform_driver vrtc_mrst_platform_driver = {
.probe = vrtc_mrst_platform_probe,
.remove = vrtc_mrst_platform_remove,
.shutdown = vrtc_mrst_platform_shutdown,
.driver = {
.name = driver_name,
.pm = MRST_PM_OPS,
}
};
module_platform_driver(vrtc_mrst_platform_driver);
MODULE_AUTHOR("Jacob Pan; Feng Tang");
MODULE_DESCRIPTION("Driver for Moorestown virtual RTC");
MODULE_LICENSE("GPL");


@@ -1219,15 +1219,6 @@ config IE6XX_WDT
To compile this driver as a module, choose M here: the
module will be called ie6xx_wdt.
config INTEL_SCU_WATCHDOG
bool "Intel SCU Watchdog for Mobile Platforms"
depends on X86_INTEL_MID
help
Hardware driver for the watchdog timer built into the Intel SCU
for Intel Mobile Platforms.
To compile this driver as a module, choose M here.
config INTEL_MID_WATCHDOG
tristate "Intel MID Watchdog Timer"
depends on X86_INTEL_MID


@@ -140,7 +140,6 @@ obj-$(CONFIG_W83877F_WDT) += w83877f_wdt.o
obj-$(CONFIG_W83977F_WDT) += w83977f_wdt.o
obj-$(CONFIG_MACHZ_WDT) += machzwd.o
obj-$(CONFIG_SBC_EPX_C3_WATCHDOG) += sbc_epx_c3.o
obj-$(CONFIG_INTEL_SCU_WATCHDOG) += intel_scu_watchdog.o
obj-$(CONFIG_INTEL_MID_WATCHDOG) += intel-mid_wdt.o
obj-$(CONFIG_INTEL_MEI_WDT) += mei_wdt.o
obj-$(CONFIG_NI903X_WDT) += ni903x_wdt.o


@@ -154,6 +154,10 @@ static int mid_wdt_probe(struct platform_device *pdev)
watchdog_set_nowayout(wdt_dev, WATCHDOG_NOWAYOUT);
watchdog_set_drvdata(wdt_dev, mid);
mid->scu = devm_intel_scu_ipc_dev_get(dev);
if (!mid->scu)
return -EPROBE_DEFER;
ret = devm_request_irq(dev, pdata->irq, mid_wdt_irq,
IRQF_SHARED | IRQF_NO_SUSPEND, "watchdog",
wdt_dev);
@@ -162,10 +166,6 @@ static int mid_wdt_probe(struct platform_device *pdev)
return ret;
}
mid->scu = devm_intel_scu_ipc_dev_get(dev);
if (!mid->scu)
return -EPROBE_DEFER;
/*
* The firmware followed by U-Boot leaves the watchdog running
* with the default threshold which may vary. When we get here


@@ -1,533 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* Intel_SCU 0.2: An Intel SCU IOH Based Watchdog Device
* for Intel part #(s):
* - AF82MP20 PCH
*
* Copyright (C) 2009-2010 Intel Corporation. All rights reserved.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/compiler.h>
#include <linux/kernel.h>
#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/miscdevice.h>
#include <linux/watchdog.h>
#include <linux/fs.h>
#include <linux/notifier.h>
#include <linux/reboot.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <linux/io.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/sfi.h>
#include <asm/irq.h>
#include <linux/atomic.h>
#include <asm/intel_scu_ipc.h>
#include <asm/apb_timer.h>
#include <asm/intel-mid.h>
#include "intel_scu_watchdog.h"
/* Bounds number of times we will retry loading time count */
/* This retry is a work around for a silicon bug. */
#define MAX_RETRY 16
#define IPC_SET_WATCHDOG_TIMER 0xF8
static int timer_margin = DEFAULT_SOFT_TO_HARD_MARGIN;
module_param(timer_margin, int, 0);
MODULE_PARM_DESC(timer_margin,
"Watchdog timer margin"
"Time between interrupt and resetting the system"
"The range is from 1 to 160"
"This is the time for all keep alives to arrive");
static int timer_set = DEFAULT_TIME;
module_param(timer_set, int, 0);
MODULE_PARM_DESC(timer_set,
"Default Watchdog timer setting"
"Complete cycle time"
"The range is from 1 to 170"
"This is the time for all keep alives to arrive");
/* After watchdog device is closed, check force_boot. If:
* force_boot == 0, then force boot on next watchdog interrupt after close,
* force_boot == 1, then force boot immediately when device is closed.
*/
static int force_boot;
module_param(force_boot, int, 0);
MODULE_PARM_DESC(force_boot,
"A value of 1 means that the driver will reboot"
"the system immediately if the /dev/watchdog device is closed"
"A value of 0 means that when /dev/watchdog device is closed"
"the watchdog timer will be refreshed for one more interval"
"of length: timer_set. At the end of this interval, the"
"watchdog timer will reset the system."
);
/* there is only one device in the system now; this can be made into
* an array in the future if we have more than one device */
static struct intel_scu_watchdog_dev watchdog_device;
/* Forces restart, if force_reboot is set */
static void watchdog_fire(void)
{
if (force_boot) {
pr_crit("Initiating system reboot\n");
emergency_restart();
pr_crit("Reboot didn't ?????\n");
}
else {
pr_crit("Immediate Reboot Disabled\n");
pr_crit("System will reset when watchdog timer times out!\n");
}
}
static int check_timer_margin(int new_margin)
{
if ((new_margin < MIN_TIME_CYCLE) ||
(new_margin > MAX_TIME - timer_set)) {
pr_debug("value of new_margin %d is out of the range %d to %d\n",
new_margin, MIN_TIME_CYCLE, MAX_TIME - timer_set);
return -EINVAL;
}
return 0;
}
/*
* IPC operations
*/
static int watchdog_set_ipc(int soft_threshold, int threshold)
{
u32 *ipc_wbuf;
u8 cbuf[16] = { '\0' };
int ipc_ret = 0;
ipc_wbuf = (u32 *)&cbuf;
ipc_wbuf[0] = soft_threshold;
ipc_wbuf[1] = threshold;
ipc_ret = intel_scu_ipc_command(
IPC_SET_WATCHDOG_TIMER,
0,
ipc_wbuf,
2,
NULL,
0);
if (ipc_ret != 0)
pr_err("Error setting SCU watchdog timer: %x\n", ipc_ret);
return ipc_ret;
};
/*
* Intel_SCU operations
*/
/* timer interrupt handler */
static irqreturn_t watchdog_timer_interrupt(int irq, void *dev_id)
{
int int_status;
int_status = ioread32(watchdog_device.timer_interrupt_status_addr);
pr_debug("irq, int_status: %x\n", int_status);
if (int_status != 0)
return IRQ_NONE;
/* has the timer been started? If not, then this is spurious */
if (watchdog_device.timer_started == 0) {
pr_debug("spurious interrupt received\n");
return IRQ_HANDLED;
}
/* temporarily disable the timer */
iowrite32(0x00000002, watchdog_device.timer_control_addr);
/* set the timer to the threshold */
iowrite32(watchdog_device.threshold,
watchdog_device.timer_load_count_addr);
/* allow the timer to run */
iowrite32(0x00000003, watchdog_device.timer_control_addr);
return IRQ_HANDLED;
}
static int intel_scu_keepalive(void)
{
/* read eoi register - clears interrupt */
ioread32(watchdog_device.timer_clear_interrupt_addr);
/* temporarily disable the timer */
iowrite32(0x00000002, watchdog_device.timer_control_addr);
/* set the timer to the soft_threshold */
iowrite32(watchdog_device.soft_threshold,
watchdog_device.timer_load_count_addr);
/* allow the timer to run */
iowrite32(0x00000003, watchdog_device.timer_control_addr);
return 0;
}
static int intel_scu_stop(void)
{
iowrite32(0, watchdog_device.timer_control_addr);
return 0;
}
static int intel_scu_set_heartbeat(u32 t)
{
int ipc_ret;
int retry_count;
u32 soft_value;
u32 hw_value;
watchdog_device.timer_set = t;
watchdog_device.threshold =
timer_margin * watchdog_device.timer_tbl_ptr->freq_hz;
watchdog_device.soft_threshold =
(watchdog_device.timer_set - timer_margin)
* watchdog_device.timer_tbl_ptr->freq_hz;
pr_debug("set_heartbeat: timer freq is %d\n",
watchdog_device.timer_tbl_ptr->freq_hz);
pr_debug("set_heartbeat: timer_set is %x (hex)\n",
watchdog_device.timer_set);
pr_debug("set_heartbeat: timer_margin is %x (hex)\n", timer_margin);
pr_debug("set_heartbeat: threshold is %x (hex)\n",
watchdog_device.threshold);
pr_debug("set_heartbeat: soft_threshold is %x (hex)\n",
watchdog_device.soft_threshold);
/* Adjust thresholds by FREQ_ADJUSTMENT factor, to make the */
/* watchdog timing come out right. */
watchdog_device.threshold =
watchdog_device.threshold / FREQ_ADJUSTMENT;
watchdog_device.soft_threshold =
watchdog_device.soft_threshold / FREQ_ADJUSTMENT;
/* temporarily disable the timer */
iowrite32(0x00000002, watchdog_device.timer_control_addr);
/* send the threshold and soft_threshold via IPC to the processor */
ipc_ret = watchdog_set_ipc(watchdog_device.soft_threshold,
watchdog_device.threshold);
if (ipc_ret != 0) {
/* Make sure the watchdog timer is stopped */
intel_scu_stop();
return ipc_ret;
}
/* Soft Threshold set loop. Early versions of silicon did */
/* not always set this count correctly. This loop checks */
/* the value and retries if it was not set correctly. */
retry_count = 0;
soft_value = watchdog_device.soft_threshold & 0xFFFF0000;
do {
/* Make sure timer is stopped */
intel_scu_stop();
if (MAX_RETRY < retry_count++) {
/* Unable to set timer value */
pr_err("Unable to set timer\n");
return -ENODEV;
}
/* set the timer to the soft threshold */
iowrite32(watchdog_device.soft_threshold,
watchdog_device.timer_load_count_addr);
/* read count value before starting timer */
ioread32(watchdog_device.timer_load_count_addr);
/* Start the timer */
iowrite32(0x00000003, watchdog_device.timer_control_addr);
/* read the value the time loaded into its count reg */
hw_value = ioread32(watchdog_device.timer_load_count_addr);
hw_value = hw_value & 0xFFFF0000;
} while (soft_value != hw_value);
watchdog_device.timer_started = 1;
return 0;
}
/*
* /dev/watchdog handling
*/
static int intel_scu_open(struct inode *inode, struct file *file)
{
/* Set flag to indicate that watchdog device is open */
if (test_and_set_bit(0, &watchdog_device.driver_open))
return -EBUSY;
/* Check for reopen of driver. Reopens are not allowed */
if (watchdog_device.driver_closed)
return -EPERM;
return stream_open(inode, file);
}
static int intel_scu_release(struct inode *inode, struct file *file)
{
/*
* This watchdog should not be closed, after the timer
* is started with the WDIPC_SETTIMEOUT ioctl
* If force_boot is set watchdog_fire() will cause an
* immediate reset. If force_boot is not set, the watchdog
* timer is refreshed for one more interval. At the end
* of that interval, the watchdog timer will reset the system.
*/
if (!test_and_clear_bit(0, &watchdog_device.driver_open)) {
pr_debug("intel_scu_release, without open\n");
return -ENOTTY;
}
if (!watchdog_device.timer_started) {
/* Just close, since timer has not been started */
pr_debug("closed, without starting timer\n");
return 0;
}
pr_crit("Unexpected close of /dev/watchdog!\n");
/* Since the timer was started, prevent future reopens */
watchdog_device.driver_closed = 1;
/* Refresh the timer for one more interval */
intel_scu_keepalive();
/* Reboot system (if force_boot is set) */
watchdog_fire();
/* We should only reach this point if force_boot is not set */
return 0;
}
static ssize_t intel_scu_write(struct file *file,
char const *data,
size_t len,
loff_t *ppos)
{
if (watchdog_device.timer_started)
/* Watchdog already started, keep it alive */
intel_scu_keepalive();
else
/* Start watchdog with timer value set by init */
intel_scu_set_heartbeat(watchdog_device.timer_set);
return len;
}
static long intel_scu_ioctl(struct file *file,
unsigned int cmd,
unsigned long arg)
{
void __user *argp = (void __user *)arg;
u32 __user *p = argp;
u32 new_margin;
static const struct watchdog_info ident = {
.options = WDIOF_SETTIMEOUT
| WDIOF_KEEPALIVEPING,
.firmware_version = 0, /* @todo Get from SCU via
ipc_get_scu_fw_version()? */
.identity = "Intel_SCU IOH Watchdog" /* len < 32 */
};
switch (cmd) {
case WDIOC_GETSUPPORT:
return copy_to_user(argp,
&ident,
sizeof(ident)) ? -EFAULT : 0;
case WDIOC_GETSTATUS:
case WDIOC_GETBOOTSTATUS:
return put_user(0, p);
case WDIOC_KEEPALIVE:
intel_scu_keepalive();
return 0;
case WDIOC_SETTIMEOUT:
if (get_user(new_margin, p))
return -EFAULT;
if (check_timer_margin(new_margin))
return -EINVAL;
if (intel_scu_set_heartbeat(new_margin))
return -EINVAL;
return 0;
case WDIOC_GETTIMEOUT:
return put_user(watchdog_device.soft_threshold, p);
default:
return -ENOTTY;
}
}
/*
* Notifier for system down
*/
static int intel_scu_notify_sys(struct notifier_block *this,
unsigned long code,
void *another_unused)
{
if (code == SYS_DOWN || code == SYS_HALT)
/* Turn off the watchdog timer. */
intel_scu_stop();
return NOTIFY_DONE;
}
/*
* Kernel Interfaces
*/
static const struct file_operations intel_scu_fops = {
.owner = THIS_MODULE,
.llseek = no_llseek,
.write = intel_scu_write,
.unlocked_ioctl = intel_scu_ioctl,
.compat_ioctl = compat_ptr_ioctl,
.open = intel_scu_open,
.release = intel_scu_release,
};
static int __init intel_scu_watchdog_init(void)
{
int ret;
u32 __iomem *tmp_addr;
/*
* We don't really need to check this as the SFI timer get will fail
* but if we do so we can exit with a clearer reason and no noise.
*
* If it isn't an intel MID device then it doesn't have this watchdog
*/
if (!intel_mid_identify_cpu())
return -ENODEV;
/* Check boot parameters to verify that their initial values */
/* are in range. */
/* Check value of timer_set boot parameter */
if ((timer_set < MIN_TIME_CYCLE) ||
(timer_set > MAX_TIME - MIN_TIME_CYCLE)) {
pr_err("value of timer_set %x (hex) is out of range from %x to %x (hex)\n",
timer_set, MIN_TIME_CYCLE, MAX_TIME - MIN_TIME_CYCLE);
return -EINVAL;
}
/* Check value of timer_margin boot parameter */
if (check_timer_margin(timer_margin))
return -EINVAL;
watchdog_device.timer_tbl_ptr = sfi_get_mtmr(sfi_mtimer_num-1);
if (watchdog_device.timer_tbl_ptr == NULL) {
pr_debug("timer is not available\n");
return -ENODEV;
}
/* make sure the timer exists */
if (watchdog_device.timer_tbl_ptr->phys_addr == 0) {
pr_debug("timer %d does not have valid physical memory\n",
sfi_mtimer_num);
return -ENODEV;
}
if (watchdog_device.timer_tbl_ptr->irq == 0) {
pr_debug("timer %d invalid irq\n", sfi_mtimer_num);
return -ENODEV;
}
tmp_addr = ioremap(watchdog_device.timer_tbl_ptr->phys_addr,
20);
if (tmp_addr == NULL) {
pr_debug("timer unable to ioremap\n");
return -ENOMEM;
}
watchdog_device.timer_load_count_addr = tmp_addr++;
watchdog_device.timer_current_value_addr = tmp_addr++;
watchdog_device.timer_control_addr = tmp_addr++;
watchdog_device.timer_clear_interrupt_addr = tmp_addr++;
watchdog_device.timer_interrupt_status_addr = tmp_addr++;
/* Set the default time values in device structure */
watchdog_device.timer_set = timer_set;
watchdog_device.threshold =
timer_margin * watchdog_device.timer_tbl_ptr->freq_hz;
watchdog_device.soft_threshold =
(watchdog_device.timer_set - timer_margin)
* watchdog_device.timer_tbl_ptr->freq_hz;
watchdog_device.intel_scu_notifier.notifier_call =
intel_scu_notify_sys;
ret = register_reboot_notifier(&watchdog_device.intel_scu_notifier);
if (ret) {
pr_err("cannot register notifier %d)\n", ret);
goto register_reboot_error;
}
watchdog_device.miscdev.minor = WATCHDOG_MINOR;
watchdog_device.miscdev.name = "watchdog";
watchdog_device.miscdev.fops = &intel_scu_fops;
ret = misc_register(&watchdog_device.miscdev);
if (ret) {
pr_err("cannot register miscdev %d err =%d\n",
WATCHDOG_MINOR, ret);
goto misc_register_error;
}
ret = request_irq((unsigned int)watchdog_device.timer_tbl_ptr->irq,
watchdog_timer_interrupt,
IRQF_SHARED, "watchdog",
&watchdog_device.timer_load_count_addr);
if (ret) {
pr_err("error requesting irq %d\n", ret);
goto request_irq_error;
}
/* Make sure timer is disabled before returning */
intel_scu_stop();
return 0;
/* error cleanup */
request_irq_error:
misc_deregister(&watchdog_device.miscdev);
misc_register_error:
unregister_reboot_notifier(&watchdog_device.intel_scu_notifier);
register_reboot_error:
intel_scu_stop();
iounmap(watchdog_device.timer_load_count_addr);
return ret;
}
late_initcall(intel_scu_watchdog_init);


@@ -1,50 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Intel_SCU 0.2: An Intel SCU IOH Based Watchdog Device
* for Intel part #(s):
* - AF82MP20 PCH
*
* Copyright (C) 2009-2010 Intel Corporation. All rights reserved.
*/
#ifndef __INTEL_SCU_WATCHDOG_H
#define __INTEL_SCU_WATCHDOG_H
#define WDT_VER "0.3"
/* minimum time between interrupts */
#define MIN_TIME_CYCLE 1
/* Time from warning to reboot is 2 seconds */
#define DEFAULT_SOFT_TO_HARD_MARGIN 2
#define MAX_TIME 170
#define DEFAULT_TIME 5
#define MAX_SOFT_TO_HARD_MARGIN (MAX_TIME-MIN_TIME_CYCLE)
/* Ajustment to clock tick frequency to make timing come out right */
#define FREQ_ADJUSTMENT 8
struct intel_scu_watchdog_dev {
ulong driver_open;
ulong driver_closed;
u32 timer_started;
u32 timer_set;
u32 threshold;
u32 soft_threshold;
u32 __iomem *timer_load_count_addr;
u32 __iomem *timer_current_value_addr;
u32 __iomem *timer_control_addr;
u32 __iomem *timer_clear_interrupt_addr;
u32 __iomem *timer_interrupt_status_addr;
struct sfi_timer_table_entry *timer_tbl_ptr;
struct notifier_block intel_scu_notifier;
struct miscdevice miscdev;
};
extern int sfi_mtimer_num;
/* extern struct sfi_timer_table_entry *sfi_get_mtmr(int hint); */
#endif /* __INTEL_SCU_WATCHDOG_H */


@@ -846,4 +846,22 @@ struct auxiliary_device_id {
kernel_ulong_t driver_data;
};
/* Surface System Aggregator Module */
#define SSAM_MATCH_TARGET 0x1
#define SSAM_MATCH_INSTANCE 0x2
#define SSAM_MATCH_FUNCTION 0x4
struct ssam_device_id {
__u8 match_flags;
__u8 domain;
__u8 category;
__u8 target;
__u8 instance;
__u8 function;
kernel_ulong_t driver_data;
};
#endif /* LINUX_MOD_DEVICETABLE_H */


@@ -31,7 +31,7 @@
#if IS_ENABLED(CONFIG_SONY_LAPTOP)
int sony_pic_camera_command(int command, u8 value);
#else
-static inline int sony_pic_camera_command(int command, u8 value) { return 0; };
+static inline int sony_pic_camera_command(int command, u8 value) { return 0; }
#endif
#endif /* __KERNEL__ */


@@ -0,0 +1,39 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Interface for Surface ACPI Notify (SAN) driver.
*
* Provides access to discrete GPU notifications sent from ACPI via the SAN
* driver, which are not handled by this driver directly.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _LINUX_SURFACE_ACPI_NOTIFY_H
#define _LINUX_SURFACE_ACPI_NOTIFY_H
#include <linux/notifier.h>
#include <linux/types.h>
/**
* struct san_dgpu_event - Discrete GPU ACPI event.
* @category: Category of the event.
* @target: Target ID of the event source.
* @command: Command ID of the event.
* @instance: Instance ID of the event source.
* @length: Length of the event's payload data (in bytes).
* @payload: Pointer to the event's payload data.
*/
struct san_dgpu_event {
u8 category;
u8 target;
u8 command;
u8 instance;
u16 length;
u8 *payload;
};
int san_client_link(struct device *client);
int san_dgpu_notifier_register(struct notifier_block *nb);
int san_dgpu_notifier_unregister(struct notifier_block *nb);
#endif /* _LINUX_SURFACE_ACPI_NOTIFY_H */
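The `san_dgpu_notifier_register()`/`_unregister()` pair above follows the kernel's notifier-chain pattern: consumers hang callbacks on a chain and each event is offered to every registered callback. A rough standalone sketch of that pattern (names invented here, not the SAN or notifier API):

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of a notifier chain: callbacks linked into a list, every
 * event offered to each registered callback in turn. */
struct notifier {
	int (*call)(struct notifier *nb, void *event);
	struct notifier *next;
};

static struct notifier *chain;

static void notifier_register(struct notifier *nb)
{
	nb->next = chain;   /* newest callback goes to the front */
	chain = nb;
}

static int call_chain(void *event)
{
	struct notifier *nb;
	int ret = 0;

	for (nb = chain; nb; nb = nb->next)
		ret = nb->call(nb, event);
	return ret;
}

static int seen_category;

/* Example consumer, analogous to a dGPU event handler. */
static int on_dgpu_event(struct notifier *nb, void *event)
{
	(void)nb;
	seen_category = *(int *)event;
	return 0;
}
```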


@@ -0,0 +1,824 @@
/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Surface System Aggregator Module (SSAM) controller interface.
*
* Main communication interface for the SSAM EC. Provides a controller
* managing access and communication to and from the SSAM EC, as well as main
* communication structures and definitions.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _LINUX_SURFACE_AGGREGATOR_CONTROLLER_H
#define _LINUX_SURFACE_AGGREGATOR_CONTROLLER_H
#include <linux/completion.h>
#include <linux/device.h>
#include <linux/types.h>
#include <linux/surface_aggregator/serial_hub.h>
/* -- Main data types and definitions --------------------------------------- */
/**
* enum ssam_event_flags - Flags for enabling/disabling SAM events
* @SSAM_EVENT_SEQUENCED: The event will be sent via a sequenced data frame.
*/
enum ssam_event_flags {
SSAM_EVENT_SEQUENCED = BIT(0),
};
/**
* struct ssam_event - SAM event sent from the EC to the host.
* @target_category: Target category of the event source. See &enum ssam_ssh_tc.
* @target_id: Target ID of the event source.
* @command_id: Command ID of the event.
* @instance_id: Instance ID of the event source.
* @length: Length of the event payload in bytes.
* @data: Event payload data.
*/
struct ssam_event {
u8 target_category;
u8 target_id;
u8 command_id;
u8 instance_id;
u16 length;
u8 data[];
};
/**
* enum ssam_request_flags - Flags for SAM requests.
*
* @SSAM_REQUEST_HAS_RESPONSE:
* Specifies that the request expects a response. If not set, the request
* will be directly completed after its underlying packet has been
* transmitted. If set, the request transport system waits for a response
* of the request.
*
* @SSAM_REQUEST_UNSEQUENCED:
* Specifies that the request should be transmitted via an unsequenced
* packet. If set, the request must not have a response, meaning that this
* flag and the %SSAM_REQUEST_HAS_RESPONSE flag are mutually exclusive.
*/
enum ssam_request_flags {
SSAM_REQUEST_HAS_RESPONSE = BIT(0),
SSAM_REQUEST_UNSEQUENCED = BIT(1),
};
/**
* struct ssam_request - SAM request description.
* @target_category: Category of the request's target. See &enum ssam_ssh_tc.
* @target_id: ID of the request's target.
* @command_id: Command ID of the request.
* @instance_id: Instance ID of the request's target.
* @flags: Flags for the request. See &enum ssam_request_flags.
* @length: Length of the request payload in bytes.
* @payload: Request payload data.
*
* This struct fully describes a SAM request with payload. It is intended to
* help set up the actual transport struct, e.g. &struct ssam_request_sync,
* and specifically its raw message data via ssam_request_write_data().
*/
struct ssam_request {
u8 target_category;
u8 target_id;
u8 command_id;
u8 instance_id;
u16 flags;
u16 length;
const u8 *payload;
};
/**
* struct ssam_response - Response buffer for SAM request.
* @capacity: Capacity of the buffer, in bytes.
* @length: Length of the actual data stored in the memory pointed to by
* @pointer, in bytes. Set by the transport system.
* @pointer: Pointer to the buffer's memory, storing the response payload data.
*/
struct ssam_response {
size_t capacity;
size_t length;
u8 *pointer;
};
struct ssam_controller;
struct ssam_controller *ssam_get_controller(void);
struct ssam_controller *ssam_client_bind(struct device *client);
int ssam_client_link(struct ssam_controller *ctrl, struct device *client);
struct device *ssam_controller_device(struct ssam_controller *c);
struct ssam_controller *ssam_controller_get(struct ssam_controller *c);
void ssam_controller_put(struct ssam_controller *c);
void ssam_controller_statelock(struct ssam_controller *c);
void ssam_controller_stateunlock(struct ssam_controller *c);
ssize_t ssam_request_write_data(struct ssam_span *buf,
struct ssam_controller *ctrl,
const struct ssam_request *spec);
/* -- Synchronous request interface. ---------------------------------------- */
/**
* struct ssam_request_sync - Synchronous SAM request struct.
* @base: Underlying SSH request.
* @comp: Completion used to signal full completion of the request. After the
* request has been submitted, this struct may only be modified or
* deallocated after the completion has been signaled.
* @resp: Buffer to store the response.
* @status: Status of the request, set after the base request has been
* completed or has failed.
*/
struct ssam_request_sync {
struct ssh_request base;
struct completion comp;
struct ssam_response *resp;
int status;
};
int ssam_request_sync_alloc(size_t payload_len, gfp_t flags,
struct ssam_request_sync **rqst,
struct ssam_span *buffer);
void ssam_request_sync_free(struct ssam_request_sync *rqst);
int ssam_request_sync_init(struct ssam_request_sync *rqst,
enum ssam_request_flags flags);
/**
* ssam_request_sync_set_data - Set message data of a synchronous request.
* @rqst: The request.
* @ptr: Pointer to the request message data.
* @len: Length of the request message data.
*
* Set the request message data of a synchronous request. The provided buffer
* needs to live until the request has been completed.
*/
static inline void ssam_request_sync_set_data(struct ssam_request_sync *rqst,
u8 *ptr, size_t len)
{
ssh_request_set_data(&rqst->base, ptr, len);
}
/**
* ssam_request_sync_set_resp - Set response buffer of a synchronous request.
* @rqst: The request.
* @resp: The response buffer.
*
* Sets the response buffer of a synchronous request. This buffer will store
* the response of the request after it has been completed. May be %NULL if no
* response is expected.
*/
static inline void ssam_request_sync_set_resp(struct ssam_request_sync *rqst,
struct ssam_response *resp)
{
rqst->resp = resp;
}
int ssam_request_sync_submit(struct ssam_controller *ctrl,
struct ssam_request_sync *rqst);
/**
* ssam_request_sync_wait - Wait for completion of a synchronous request.
* @rqst: The request to wait for.
*
* Wait for completion and release of a synchronous request. After this
* function terminates, the request is guaranteed to have left the transport
* system. After successful submission of a request, this function must be
* called before accessing the response of the request, freeing the request,
* or freeing any of the buffers associated with the request.
*
* This function must not be called if the request has not been submitted yet
* and may lead to a deadlock/infinite wait if a subsequent request submission
* fails in that case, due to the completion never triggering.
*
* Return: Returns the status of the given request, which is set on completion
* of the packet. This value is zero on success and negative on failure.
*/
static inline int ssam_request_sync_wait(struct ssam_request_sync *rqst)
{
wait_for_completion(&rqst->comp);
return rqst->status;
}
int ssam_request_sync(struct ssam_controller *ctrl,
const struct ssam_request *spec,
struct ssam_response *rsp);
int ssam_request_sync_with_buffer(struct ssam_controller *ctrl,
const struct ssam_request *spec,
struct ssam_response *rsp,
struct ssam_span *buf);
/**
* ssam_request_sync_onstack - Execute a synchronous request on the stack.
* @ctrl: The controller via which the request is submitted.
* @rqst: The request specification.
* @rsp: The response buffer.
* @payload_len: The (maximum) request payload length.
*
* Allocates a synchronous request with specified payload length on the stack,
* fully initializes it via the provided request specification, submits it,
* and finally waits for its completion before returning its status. This
* helper macro essentially allocates the request message buffer on the stack
* and then calls ssam_request_sync_with_buffer().
*
* Note: The @payload_len parameter specifies the maximum payload length, used
* for buffer allocation. The actual payload length may be smaller.
*
* Return: Returns the status of the request or any failure during setup, i.e.
* zero on success and a negative value on failure.
*/
#define ssam_request_sync_onstack(ctrl, rqst, rsp, payload_len) \
({ \
u8 __data[SSH_COMMAND_MESSAGE_LENGTH(payload_len)]; \
struct ssam_span __buf = { &__data[0], ARRAY_SIZE(__data) }; \
\
ssam_request_sync_with_buffer(ctrl, rqst, rsp, &__buf); \
})
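The macro above relies on a GNU C statement expression to give the message buffer automatic storage that spans the wrapped call. A standalone sketch of the same trick, with `MSG_OVERHEAD` and `fill_message()` made up to stand in for the real SSH message layout:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MSG_OVERHEAD 8   /* invented header size, not the real SSH framing */

struct span {
	unsigned char *ptr;
	size_t len;
};

/* Stand-in for ssam_request_sync_with_buffer(): consumes the caller's
 * buffer and reports how many bytes it used. */
static int fill_message(struct span *buf, size_t payload_len)
{
	memset(buf->ptr, 0, MSG_OVERHEAD + payload_len);
	return (int)(MSG_OVERHEAD + payload_len);
}

/* On-stack variant: the statement expression declares the buffer, wraps it
 * in a span, and evaluates to the wrapped call's return value. */
#define request_onstack(payload_len)                                   \
({                                                                     \
	unsigned char __data[MSG_OVERHEAD + (payload_len)];            \
	struct span __buf = { __data, sizeof(__data) };                \
                                                                       \
	fill_message(&__buf, (payload_len));                           \
})
```

The buffer lives exactly as long as the statement expression, so the caller never manages an allocation; the trade-off is that large payloads on the stack are a bad idea, which is why the real macro documents `payload_len` as a small, maximum length.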
/**
* __ssam_retry - Retry request in case of I/O errors or timeouts.
* @request: The request function to execute. Must return an integer.
* @n: Number of tries.
* @args: Arguments for the request function.
*
* Executes the given request function, i.e. calls @request. In case the
* request returns %-EREMOTEIO (indicates I/O error) or %-ETIMEDOUT (request
* or underlying packet timed out), @request will be re-executed again, up to
* @n times in total.
*
* Return: Returns the return value of the last execution of @request.
*/
#define __ssam_retry(request, n, args...) \
({ \
int __i, __s = 0; \
\
for (__i = (n); __i > 0; __i--) { \
__s = request(args); \
if (__s != -ETIMEDOUT && __s != -EREMOTEIO) \
break; \
} \
__s; \
})
/**
* ssam_retry - Retry request in case of I/O errors or timeouts up to three
* times in total.
* @request: The request function to execute. Must return an integer.
* @args: Arguments for the request function.
*
* Executes the given request function, i.e. calls @request. In case the
* request returns %-EREMOTEIO (indicates I/O error) or %-ETIMEDOUT (request
* or underlying packet timed out), @request will be re-executed again, up to
* three times in total.
*
* See __ssam_retry() for a more generic macro for this purpose.
*
* Return: Returns the return value of the last execution of @request.
*/
#define ssam_retry(request, args...) \
__ssam_retry(request, 3, args)
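Outside the kernel, the retry pattern from `__ssam_retry()` can be reproduced almost verbatim; `flaky_request()` below is a mock that times out twice before succeeding (note that `EREMOTEIO` is Linux-specific):

```c
#include <assert.h>
#include <errno.h>

/* Plain-C rendition of the __ssam_retry() pattern: re-invoke the request
 * while it keeps failing with -ETIMEDOUT or -EREMOTEIO, up to n times in
 * total, and hand back the last status. */
#define retry_request(request, n, ...)                        \
({                                                            \
	int __i, __s = 0;                                     \
                                                              \
	for (__i = (n); __i > 0; __i--) {                     \
		__s = request(__VA_ARGS__);                   \
		if (__s != -ETIMEDOUT && __s != -EREMOTEIO)   \
			break;                                \
	}                                                     \
	__s;                                                  \
})

static int attempts;

/* Mock transport call: two timeouts, then success. */
static int flaky_request(int value)
{
	attempts++;
	return attempts < 3 ? -ETIMEDOUT : value;
}
```

Any non-timeout, non-I/O-error status (including success and other failures like `-EINVAL`) breaks out immediately, so only transient transport errors burn retries.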
/**
* struct ssam_request_spec - Blue-print specification of SAM request.
* @target_category: Category of the request's target. See &enum ssam_ssh_tc.
* @target_id: ID of the request's target.
* @command_id: Command ID of the request.
* @instance_id: Instance ID of the request's target.
* @flags: Flags for the request. See &enum ssam_request_flags.
*
* Blue-print specification for a SAM request. This struct describes the
* unique static parameters of a request (i.e. type) without specifying any of
* its instance-specific data (e.g. payload). It is intended to be used as base
* for defining simple request functions via the
* ``SSAM_DEFINE_SYNC_REQUEST_x()`` family of macros.
*/
struct ssam_request_spec {
u8 target_category;
u8 target_id;
u8 command_id;
u8 instance_id;
u8 flags;
};
/**
* struct ssam_request_spec_md - Blue-print specification for multi-device SAM
* request.
* @target_category: Category of the request's target. See &enum ssam_ssh_tc.
* @command_id: Command ID of the request.
* @flags: Flags for the request. See &enum ssam_request_flags.
*
* Blue-print specification for a multi-device SAM request, i.e. a request
* that is applicable to multiple device instances, described by their
* individual target and instance IDs. This struct describes the unique static
* parameters of a request (i.e. type) without specifying any of its
* instance-specific data (e.g. payload) and without specifying any of its
* device specific IDs (i.e. target and instance ID). It is intended to be
* used as base for defining simple multi-device request functions via the
* ``SSAM_DEFINE_SYNC_REQUEST_MD_x()`` and ``SSAM_DEFINE_SYNC_REQUEST_CL_x()``
* families of macros.
*/
struct ssam_request_spec_md {
u8 target_category;
u8 command_id;
u8 flags;
};
/**
* SSAM_DEFINE_SYNC_REQUEST_N() - Define synchronous SAM request function
* with neither argument nor return value.
* @name: Name of the generated function.
* @spec: Specification (&struct ssam_request_spec) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request having neither argument nor return value. The
* generated function takes care of setting up the request struct and buffer
* allocation, as well as execution of the request itself, returning once the
* request has been fully completed. The required transport buffer will be
* allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl)``, returning the status of the request, which is zero on success and
* negative on failure. The ``ctrl`` parameter is the controller via which the
* request is being sent.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_N(name, spec...) \
int name(struct ssam_controller *ctrl) \
{ \
struct ssam_request_spec s = (struct ssam_request_spec)spec; \
struct ssam_request rqst; \
\
rqst.target_category = s.target_category; \
rqst.target_id = s.target_id; \
rqst.command_id = s.command_id; \
rqst.instance_id = s.instance_id; \
rqst.flags = s.flags; \
rqst.length = 0; \
rqst.payload = NULL; \
\
return ssam_request_sync_onstack(ctrl, &rqst, NULL, 0); \
}
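The blue-print approach boils down to expanding a designated-initializer spec into a complete function at the definition site. A compact userspace sketch, with `send_request()` and the ID values invented for illustration (they are not real SAM assignments):

```c
#include <assert.h>

struct req_spec {
	unsigned char tc, tid, cid, iid;
};

static struct req_spec last_sent;

/* Stand-in for the transport: just record what would go on the wire. */
static int send_request(struct req_spec r)
{
	last_sent = r;
	return 0;
}

/* Expand a static request description into a ready-to-call function,
 * mirroring the SSAM_DEFINE_SYNC_REQUEST_N() shape. */
#define DEFINE_REQUEST_N(name, ...)                           \
static int name(void)                                         \
{                                                             \
	struct req_spec s = (struct req_spec)__VA_ARGS__;     \
	return send_request(s);                               \
}

/* The request type is fixed once, at definition time. */
DEFINE_REQUEST_N(sam_example_request, {
	.tc  = 0x01,   /* made-up IDs for the sketch */
	.tid = 0x01,
	.cid = 0x15,
	.iid = 0x00,
})
```

Because the macro argument is a brace-enclosed initializer, the comma-separated fields pass through `__VA_ARGS__` intact and become a compound literal, so each generated function carries its request type with zero per-call setup.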
/**
* SSAM_DEFINE_SYNC_REQUEST_W() - Define synchronous SAM request function with
* argument.
* @name: Name of the generated function.
* @atype: Type of the request's argument.
* @spec: Specification (&struct ssam_request_spec) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking an argument of type @atype and having no
* return value. The generated function takes care of setting up the request
* struct, buffer allocation, as well as execution of the request itself,
* returning once the request has been fully completed. The required transport
* buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl, const atype *arg)``, returning the status of the request, which is
* zero on success and negative on failure. The ``ctrl`` parameter is the
* controller via which the request is sent. The request argument is specified
* via the ``arg`` pointer.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_W(name, atype, spec...) \
int name(struct ssam_controller *ctrl, const atype *arg) \
{ \
struct ssam_request_spec s = (struct ssam_request_spec)spec; \
struct ssam_request rqst; \
\
rqst.target_category = s.target_category; \
rqst.target_id = s.target_id; \
rqst.command_id = s.command_id; \
rqst.instance_id = s.instance_id; \
rqst.flags = s.flags; \
rqst.length = sizeof(atype); \
rqst.payload = (u8 *)arg; \
\
return ssam_request_sync_onstack(ctrl, &rqst, NULL, \
sizeof(atype)); \
}
/**
* SSAM_DEFINE_SYNC_REQUEST_R() - Define synchronous SAM request function with
* return value.
* @name: Name of the generated function.
* @rtype: Type of the request's return value.
* @spec: Specification (&struct ssam_request_spec) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking no argument but having a return value of
* type @rtype. The generated function takes care of setting up the request
* and response structs, buffer allocation, as well as execution of the
* request itself, returning once the request has been fully completed. The
* required transport buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl, rtype *ret)``, returning the status of the request, which is zero on
* success and negative on failure. The ``ctrl`` parameter is the controller
* via which the request is sent. The request's return value is written to the
* memory pointed to by the ``ret`` parameter.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_R(name, rtype, spec...) \
int name(struct ssam_controller *ctrl, rtype *ret) \
{ \
struct ssam_request_spec s = (struct ssam_request_spec)spec; \
struct ssam_request rqst; \
struct ssam_response rsp; \
int status; \
\
rqst.target_category = s.target_category; \
rqst.target_id = s.target_id; \
rqst.command_id = s.command_id; \
rqst.instance_id = s.instance_id; \
rqst.flags = s.flags | SSAM_REQUEST_HAS_RESPONSE; \
rqst.length = 0; \
rqst.payload = NULL; \
\
rsp.capacity = sizeof(rtype); \
rsp.length = 0; \
rsp.pointer = (u8 *)ret; \
\
status = ssam_request_sync_onstack(ctrl, &rqst, &rsp, 0); \
if (status) \
return status; \
\
if (rsp.length != sizeof(rtype)) { \
struct device *dev = ssam_controller_device(ctrl); \
dev_err(dev, \
"rqst: invalid response length, expected %zu, got %zu (tc: %#04x, cid: %#04x)", \
sizeof(rtype), rsp.length, rqst.target_category,\
rqst.command_id); \
return -EIO; \
} \
\
return 0; \
}
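For illustration, a sketch of how a driver might use this macro. The function name and all ID values below are invented for the example; only the macro shape follows the definition above:

```c
/*
 * Hypothetical usage sketch: generate ssam_bat_get_sta(), reading a
 * little-endian 32-bit status word. All IDs below are illustrative.
 */
SSAM_DEFINE_SYNC_REQUEST_R(ssam_bat_get_sta, __le32, {
	.target_category = SSAM_SSH_TC_BAT,
	.target_id       = 0x01,
	.command_id      = 0x01,
	.instance_id     = 0x00,
});

/*
 * The macro expands to:
 *
 *	int ssam_bat_get_sta(struct ssam_controller *ctrl, __le32 *ret);
 *
 * which callers invoke like any other function returning a status code.
 */
```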
/**
* SSAM_DEFINE_SYNC_REQUEST_MD_N() - Define synchronous multi-device SAM
* request function with neither argument nor return value.
* @name: Name of the generated function.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request having neither argument nor return value.
* Device-specifying parameters are not hard-coded, but instead must be provided to
* the function. The generated function takes care of setting up the request
* struct, buffer allocation, as well as execution of the request itself,
* returning once the request has been fully completed. The required transport
* buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl, u8 tid, u8 iid)``, returning the status of the request, which is
* zero on success and negative on failure. The ``ctrl`` parameter is the
* controller via which the request is sent, ``tid`` the target ID for the
* request, and ``iid`` the instance ID.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_MD_N(name, spec...) \
int name(struct ssam_controller *ctrl, u8 tid, u8 iid) \
{ \
struct ssam_request_spec_md s = (struct ssam_request_spec_md)spec; \
struct ssam_request rqst; \
\
rqst.target_category = s.target_category; \
rqst.target_id = tid; \
rqst.command_id = s.command_id; \
rqst.instance_id = iid; \
rqst.flags = s.flags; \
rqst.length = 0; \
rqst.payload = NULL; \
\
return ssam_request_sync_onstack(ctrl, &rqst, NULL, 0); \
}
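A corresponding sketch for the multi-device variant, where target and instance IDs move from the specification into the call. Again, the function name, category, and command ID are invented for the example:

```c
/*
 * Hypothetical example: target/instance IDs become call-time parameters.
 * Category and command ID are illustrative.
 */
SSAM_DEFINE_SYNC_REQUEST_MD_N(ssam_bas_latch_open, {
	.target_category = SSAM_SSH_TC_BAS,
	.command_id      = 0x06,
});

/*
 * Expands to:
 *
 *	int ssam_bas_latch_open(struct ssam_controller *ctrl, u8 tid, u8 iid);
 */
```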
/**
* SSAM_DEFINE_SYNC_REQUEST_MD_W() - Define synchronous multi-device SAM
* request function with argument.
* @name: Name of the generated function.
* @atype: Type of the request's argument.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking an argument of type @atype and having no
* return value. Device-specifying parameters are not hard-coded, but instead
* must be provided to the function. The generated function takes care of
* setting up the request struct, buffer allocation, as well as execution of
* the request itself, returning once the request has been fully completed.
* The required transport buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl, u8 tid, u8 iid, const atype *arg)``, returning the status of the
* request, which is zero on success and negative on failure. The ``ctrl``
* parameter is the controller via which the request is sent, ``tid`` the
* target ID for the request, and ``iid`` the instance ID. The request argument
* is specified via the ``arg`` pointer.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_MD_W(name, atype, spec...) \
int name(struct ssam_controller *ctrl, u8 tid, u8 iid, const atype *arg)\
{ \
struct ssam_request_spec_md s = (struct ssam_request_spec_md)spec; \
struct ssam_request rqst; \
\
rqst.target_category = s.target_category; \
rqst.target_id = tid; \
rqst.command_id = s.command_id; \
rqst.instance_id = iid; \
rqst.flags = s.flags; \
rqst.length = sizeof(atype); \
rqst.payload = (u8 *)arg; \
\
return ssam_request_sync_onstack(ctrl, &rqst, NULL, \
sizeof(atype)); \
}
/**
* SSAM_DEFINE_SYNC_REQUEST_MD_R() - Define synchronous multi-device SAM
* request function with return value.
* @name: Name of the generated function.
* @rtype: Type of the request's return value.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking no argument but having a return value of
* type @rtype. Device-specifying parameters are not hard-coded, but instead
* must be provided to the function. The generated function takes care of
* setting up the request and response structs, buffer allocation, as well as
* execution of the request itself, returning once the request has been fully
* completed. The required transport buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_controller
* *ctrl, u8 tid, u8 iid, rtype *ret)``, returning the status of the request,
* which is zero on success and negative on failure. The ``ctrl`` parameter is
* the controller via which the request is sent, ``tid`` the target ID for the
* request, and ``iid`` the instance ID. The request's return value is written
* to the memory pointed to by the ``ret`` parameter.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_MD_R(name, rtype, spec...) \
int name(struct ssam_controller *ctrl, u8 tid, u8 iid, rtype *ret) \
{ \
struct ssam_request_spec_md s = (struct ssam_request_spec_md)spec; \
struct ssam_request rqst; \
struct ssam_response rsp; \
int status; \
\
rqst.target_category = s.target_category; \
rqst.target_id = tid; \
rqst.command_id = s.command_id; \
rqst.instance_id = iid; \
rqst.flags = s.flags | SSAM_REQUEST_HAS_RESPONSE; \
rqst.length = 0; \
rqst.payload = NULL; \
\
rsp.capacity = sizeof(rtype); \
rsp.length = 0; \
rsp.pointer = (u8 *)ret; \
\
status = ssam_request_sync_onstack(ctrl, &rqst, &rsp, 0); \
if (status) \
return status; \
\
if (rsp.length != sizeof(rtype)) { \
struct device *dev = ssam_controller_device(ctrl); \
dev_err(dev, \
"rqst: invalid response length, expected %zu, got %zu (tc: %#04x, cid: %#04x)", \
sizeof(rtype), rsp.length, rqst.target_category,\
rqst.command_id); \
return -EIO; \
} \
\
return 0; \
}
/* -- Event notifier/callbacks. --------------------------------------------- */
#define SSAM_NOTIF_STATE_SHIFT 2
#define SSAM_NOTIF_STATE_MASK ((1 << SSAM_NOTIF_STATE_SHIFT) - 1)
/**
* enum ssam_notif_flags - Flags used in return values from SSAM notifier
* callback functions.
*
* @SSAM_NOTIF_HANDLED:
* Indicates that the notification has been handled. This flag should be
* set by the handler if the handler can act/has acted upon the event
* provided to it. This flag should not be set if the handler is not a
* primary handler intended for the provided event.
*
* If this flag has not been set by any handler after the notifier chain
* has been traversed, a warning will be emitted, stating that the event
* has not been handled.
*
* @SSAM_NOTIF_STOP:
* Indicates that the notifier traversal should stop. If this flag is
* returned from a notifier callback, notifier chain traversal will
* immediately stop and any remaining notifiers will not be called. This
* flag is automatically set when ssam_notifier_from_errno() is called
* with a negative error value.
*/
enum ssam_notif_flags {
SSAM_NOTIF_HANDLED = BIT(0),
SSAM_NOTIF_STOP = BIT(1),
};
struct ssam_event_notifier;
typedef u32 (*ssam_notifier_fn_t)(struct ssam_event_notifier *nf,
const struct ssam_event *event);
/**
* struct ssam_notifier_block - Base notifier block for SSAM event
* notifications.
* @node: The node for the list of notifiers.
* @fn: The callback function of this notifier. This function takes the
* respective notifier block and event as input and should return
* a notifier value, which can either be obtained from the flags
* provided in &enum ssam_notif_flags, converted from a standard
* error value via ssam_notifier_from_errno(), or a combination of
* both (e.g. ``ssam_notifier_from_errno(e) | SSAM_NOTIF_HANDLED``).
* @priority: Priority value determining the order in which notifier callbacks
* will be called. A higher value means higher priority, i.e. the
* associated callback will be executed earlier than other (lower
* priority) callbacks.
*/
struct ssam_notifier_block {
struct list_head node;
ssam_notifier_fn_t fn;
int priority;
};
/**
* ssam_notifier_from_errno() - Convert standard error value to notifier
* return code.
* @err: The error code to convert, must be negative (in case of failure) or
* zero (in case of success).
*
* Return: Returns the notifier return value obtained by converting the
* specified @err value. In case @err is negative, the %SSAM_NOTIF_STOP flag
* will be set, causing notifier call chain traversal to abort.
*/
static inline u32 ssam_notifier_from_errno(int err)
{
if (WARN_ON(err > 0) || err == 0)
return 0;
else
return ((-err) << SSAM_NOTIF_STATE_SHIFT) | SSAM_NOTIF_STOP;
}
/**
* ssam_notifier_to_errno() - Convert notifier return code to standard error
* value.
* @ret: The notifier return value to convert.
*
* Return: Returns the negative error value encoded in @ret or zero if @ret
* indicates success.
*/
static inline int ssam_notifier_to_errno(u32 ret)
{
return -(ret >> SSAM_NOTIF_STATE_SHIFT);
}
/* -- Event/notification registry. ------------------------------------------ */
/**
* struct ssam_event_registry - Registry specification used for enabling events.
* @target_category: Target category for the event registry requests.
* @target_id: Target ID for the event registry requests.
* @cid_enable: Command ID for the event-enable request.
* @cid_disable: Command ID for the event-disable request.
*
* This struct describes a SAM event registry via the minimal collection of
* SAM IDs specifying the requests to use for enabling and disabling an event.
* The individual event to be enabled/disabled itself is specified via &struct
* ssam_event_id.
*/
struct ssam_event_registry {
u8 target_category;
u8 target_id;
u8 cid_enable;
u8 cid_disable;
};
/**
* struct ssam_event_id - Unique event ID used for enabling events.
* @target_category: Target category of the event source.
* @instance: Instance ID of the event source.
*
* This struct specifies the event to be enabled/disabled via an externally
* provided registry. It does not specify the registry to be used itself, this
* is done via &struct ssam_event_registry.
*/
struct ssam_event_id {
u8 target_category;
u8 instance;
};
/**
* enum ssam_event_mask - Flags specifying how events are matched to notifiers.
*
* @SSAM_EVENT_MASK_NONE:
* Run the callback for any event with matching target category. Do not
* do any additional filtering.
*
* @SSAM_EVENT_MASK_TARGET:
* In addition to filtering by target category, only execute the notifier
* callback for events with a target ID matching that of the registry
* used for enabling/disabling the event.
*
* @SSAM_EVENT_MASK_INSTANCE:
* In addition to filtering by target category, only execute the notifier
* callback for events with an instance ID matching the one used when
* enabling the event.
*
* @SSAM_EVENT_MASK_STRICT:
* Do all the filtering above.
*/
enum ssam_event_mask {
SSAM_EVENT_MASK_TARGET = BIT(0),
SSAM_EVENT_MASK_INSTANCE = BIT(1),
SSAM_EVENT_MASK_NONE = 0,
SSAM_EVENT_MASK_STRICT =
SSAM_EVENT_MASK_TARGET
| SSAM_EVENT_MASK_INSTANCE,
};
/**
* SSAM_EVENT_REGISTRY() - Define a new event registry.
* @tc: Target category for the event registry requests.
* @tid: Target ID for the event registry requests.
* @cid_en: Command ID for the event-enable request.
* @cid_dis: Command ID for the event-disable request.
*
* Return: Returns the &struct ssam_event_registry specified by the given
* parameters.
*/
#define SSAM_EVENT_REGISTRY(tc, tid, cid_en, cid_dis) \
((struct ssam_event_registry) { \
.target_category = (tc), \
.target_id = (tid), \
.cid_enable = (cid_en), \
.cid_disable = (cid_dis), \
})
#define SSAM_EVENT_REGISTRY_SAM \
SSAM_EVENT_REGISTRY(SSAM_SSH_TC_SAM, 0x01, 0x0b, 0x0c)
#define SSAM_EVENT_REGISTRY_KIP \
SSAM_EVENT_REGISTRY(SSAM_SSH_TC_KIP, 0x02, 0x27, 0x28)
#define SSAM_EVENT_REGISTRY_REG \
SSAM_EVENT_REGISTRY(SSAM_SSH_TC_REG, 0x02, 0x01, 0x02)
/**
* struct ssam_event_notifier - Notifier block for SSAM events.
* @base: The base notifier block with callback function and priority.
* @event: The event for which this block will receive notifications.
* @event.reg: Registry via which the event will be enabled/disabled.
* @event.id: ID specifying the event.
* @event.mask: Flags determining how events are matched to the notifier.
* @event.flags: Flags used for enabling the event.
*/
struct ssam_event_notifier {
struct ssam_notifier_block base;
struct {
struct ssam_event_registry reg;
struct ssam_event_id id;
enum ssam_event_mask mask;
u8 flags;
} event;
};
int ssam_notifier_register(struct ssam_controller *ctrl,
struct ssam_event_notifier *n);
int ssam_notifier_unregister(struct ssam_controller *ctrl,
struct ssam_event_notifier *n);
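Putting the notifier pieces together, a sketch of a complete event notifier and its registration. The callback body, priority, instance ID, and flags are invented for the example; the registry and mask constants are the ones defined above:

```c
/*
 * Illustrative notifier setup: receive keyboard-interface (KIP) events
 * via the KIP registry. All values here are chosen for illustration.
 */
static u32 example_notif_fn(struct ssam_event_notifier *nf,
			    const struct ssam_event *event)
{
	int status = 0;	/* ... process the event, possibly failing ... */

	/* Mark the event as handled, propagating any processing error. */
	return ssam_notifier_from_errno(status) | SSAM_NOTIF_HANDLED;
}

static struct ssam_event_notifier example_notif = {
	.base.fn = example_notif_fn,
	.base.priority = 1,
	.event.reg = SSAM_EVENT_REGISTRY_KIP,
	.event.id.target_category = SSAM_SSH_TC_KIP,
	.event.id.instance = 0x00,
	.event.mask = SSAM_EVENT_MASK_STRICT,
	.event.flags = SSAM_EVENT_SEQUENCED,
};

/* Enable the event and hook up the callback via:
 *	status = ssam_notifier_register(ctrl, &example_notif);
 */
```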
#endif /* _LINUX_SURFACE_AGGREGATOR_CONTROLLER_H */

/* SPDX-License-Identifier: GPL-2.0+ */
/*
* Surface System Aggregator Module (SSAM) bus and client-device subsystem.
*
* Main interface for the surface-aggregator bus, surface-aggregator client
* devices, and respective drivers building on top of the SSAM controller.
* Provides support for non-platform/non-ACPI SSAM clients via dedicated
* subsystem.
*
* Copyright (C) 2019-2020 Maximilian Luz <luzmaximilian@gmail.com>
*/
#ifndef _LINUX_SURFACE_AGGREGATOR_DEVICE_H
#define _LINUX_SURFACE_AGGREGATOR_DEVICE_H
#include <linux/device.h>
#include <linux/mod_devicetable.h>
#include <linux/types.h>
#include <linux/surface_aggregator/controller.h>
/* -- Surface System Aggregator Module bus. --------------------------------- */
/**
* enum ssam_device_domain - SAM device domain.
* @SSAM_DOMAIN_VIRTUAL: Virtual device.
* @SSAM_DOMAIN_SERIALHUB: Physical device connected via Surface Serial Hub.
*/
enum ssam_device_domain {
SSAM_DOMAIN_VIRTUAL = 0x00,
SSAM_DOMAIN_SERIALHUB = 0x01,
};
/**
* enum ssam_virtual_tc - Target categories for the virtual SAM domain.
* @SSAM_VIRTUAL_TC_HUB: Device hub category.
*/
enum ssam_virtual_tc {
SSAM_VIRTUAL_TC_HUB = 0x00,
};
/**
* struct ssam_device_uid - Unique identifier for SSAM device.
* @domain: Domain of the device.
* @category: Target category of the device.
* @target: Target ID of the device.
* @instance: Instance ID of the device.
* @function: Sub-function of the device. This field can be used to split a
* single SAM device into multiple virtual subdevices to separate
* different functionality of that device and allow one driver per
* such functionality.
*/
struct ssam_device_uid {
u8 domain;
u8 category;
u8 target;
u8 instance;
u8 function;
};
/*
* Special values for device matching.
*
* These values are intended to be used with SSAM_DEVICE(), SSAM_VDEV(), and
* SSAM_SDEV() exclusively. Specifically, they are used to initialize the
* match_flags member of the device ID structure. Do not use them directly
* with struct ssam_device_id or struct ssam_device_uid.
*/
#define SSAM_ANY_TID 0xffff
#define SSAM_ANY_IID 0xffff
#define SSAM_ANY_FUN 0xffff
/**
* SSAM_DEVICE() - Initialize a &struct ssam_device_id with the given
* parameters.
* @d: Domain of the device.
* @cat: Target category of the device.
* @tid: Target ID of the device.
* @iid: Instance ID of the device.
* @fun: Sub-function of the device.
*
* Initializes a &struct ssam_device_id with the given parameters. See &struct
* ssam_device_uid for details regarding the parameters. The special values
* %SSAM_ANY_TID, %SSAM_ANY_IID, and %SSAM_ANY_FUN can be used to specify that
* matching should ignore target ID, instance ID, and/or sub-function,
* respectively. This macro initializes the ``match_flags`` field based on the
* given parameters.
*
* Note: The parameters @d and @cat must be valid &u8 values, the parameters
* @tid, @iid, and @fun must be either valid &u8 values or %SSAM_ANY_TID,
* %SSAM_ANY_IID, or %SSAM_ANY_FUN, respectively. Other non-&u8 values are not
* allowed.
*/
#define SSAM_DEVICE(d, cat, tid, iid, fun) \
.match_flags = (((tid) != SSAM_ANY_TID) ? SSAM_MATCH_TARGET : 0) \
| (((iid) != SSAM_ANY_IID) ? SSAM_MATCH_INSTANCE : 0) \
| (((fun) != SSAM_ANY_FUN) ? SSAM_MATCH_FUNCTION : 0), \
.domain = d, \
.category = cat, \
.target = ((tid) != SSAM_ANY_TID) ? (tid) : 0, \
.instance = ((iid) != SSAM_ANY_IID) ? (iid) : 0, \
.function = ((fun) != SSAM_ANY_FUN) ? (fun) : 0 \
/**
* SSAM_VDEV() - Initialize a &struct ssam_device_id as virtual device with
* the given parameters.
* @cat: Target category of the device.
* @tid: Target ID of the device.
* @iid: Instance ID of the device.
* @fun: Sub-function of the device.
*
* Initializes a &struct ssam_device_id with the given parameters in the
* virtual domain. See &struct ssam_device_uid for details regarding the
* parameters. The special values %SSAM_ANY_TID, %SSAM_ANY_IID, and
* %SSAM_ANY_FUN can be used to specify that matching should ignore target ID,
* instance ID, and/or sub-function, respectively. This macro initializes the
* ``match_flags`` field based on the given parameters.
*
* Note: The parameter @cat must be a valid &u8 value, the parameters @tid,
* @iid, and @fun must be either valid &u8 values or %SSAM_ANY_TID,
* %SSAM_ANY_IID, or %SSAM_ANY_FUN, respectively. Other non-&u8 values are not
* allowed.
*/
#define SSAM_VDEV(cat, tid, iid, fun) \
SSAM_DEVICE(SSAM_DOMAIN_VIRTUAL, SSAM_VIRTUAL_TC_##cat, tid, iid, fun)
/**
* SSAM_SDEV() - Initialize a &struct ssam_device_id as physical SSH device
* with the given parameters.
* @cat: Target category of the device.
* @tid: Target ID of the device.
* @iid: Instance ID of the device.
* @fun: Sub-function of the device.
*
* Initializes a &struct ssam_device_id with the given parameters in the SSH
* domain. See &struct ssam_device_uid for details regarding the parameters.
* The special values %SSAM_ANY_TID, %SSAM_ANY_IID, and %SSAM_ANY_FUN can be
* used to specify that matching should ignore target ID, instance ID, and/or
* sub-function, respectively. This macro initializes the ``match_flags``
* field based on the given parameters.
*
* Note: The parameter @cat must be a valid &u8 value, the parameters @tid,
* @iid, and @fun must be either valid &u8 values or %SSAM_ANY_TID,
* %SSAM_ANY_IID, or %SSAM_ANY_FUN, respectively. Other non-&u8 values are not
* allowed.
*/
#define SSAM_SDEV(cat, tid, iid, fun) \
SSAM_DEVICE(SSAM_DOMAIN_SERIALHUB, SSAM_SSH_TC_##cat, tid, iid, fun)
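As a usage sketch, a driver's match table built with SSAM_SDEV(). The category and instance values are invented for the example; the special SSAM_ANY_* values request wildcard matching as described above:

```c
/*
 * Illustrative match table: bind to HID devices on any target, with
 * instance IDs 1 and 2 and any sub-function (values invented).
 */
static const struct ssam_device_id example_match_table[] = {
	{ SSAM_SDEV(HID, SSAM_ANY_TID, 0x01, SSAM_ANY_FUN) },
	{ SSAM_SDEV(HID, SSAM_ANY_TID, 0x02, SSAM_ANY_FUN) },
	{ },
};
MODULE_DEVICE_TABLE(ssam, example_match_table);
```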
/**
* struct ssam_device - SSAM client device.
* @dev: Driver model representation of the device.
* @ctrl: SSAM controller managing this device.
* @uid: UID identifying the device.
*/
struct ssam_device {
struct device dev;
struct ssam_controller *ctrl;
struct ssam_device_uid uid;
};
/**
* struct ssam_device_driver - SSAM client device driver.
* @driver: Base driver model structure.
* @match_table: Match table specifying which devices the driver should bind to.
* @probe: Called when the driver is being bound to a device.
* @remove: Called when the driver is being unbound from the device.
*/
struct ssam_device_driver {
struct device_driver driver;
const struct ssam_device_id *match_table;
int (*probe)(struct ssam_device *sdev);
void (*remove)(struct ssam_device *sdev);
};
extern struct bus_type ssam_bus_type;
extern const struct device_type ssam_device_type;
/**
* is_ssam_device() - Check if the given device is a SSAM client device.
* @d: The device to test the type of.
*
* Return: Returns %true if the specified device is of type &struct
* ssam_device, i.e. the device type points to %ssam_device_type, and %false
* otherwise.
*/
static inline bool is_ssam_device(struct device *d)
{
return d->type == &ssam_device_type;
}
/**
* to_ssam_device() - Casts the given device to a SSAM client device.
* @d: The device to cast.
*
* Casts the given &struct device to a &struct ssam_device. The caller has to
* ensure that the given device is actually enclosed in a &struct ssam_device,
* e.g. by calling is_ssam_device().
*
* Return: Returns a pointer to the &struct ssam_device wrapping the given
* device @d.
*/
static inline struct ssam_device *to_ssam_device(struct device *d)
{
return container_of(d, struct ssam_device, dev);
}
/**
* to_ssam_device_driver() - Casts the given device driver to a SSAM client
* device driver.
* @d: The driver to cast.
*
* Casts the given &struct device_driver to a &struct ssam_device_driver. The
* caller has to ensure that the given driver is actually enclosed in a
* &struct ssam_device_driver.
*
* Return: Returns the pointer to the &struct ssam_device_driver wrapping the
* given device driver @d.
*/
static inline
struct ssam_device_driver *to_ssam_device_driver(struct device_driver *d)
{
return container_of(d, struct ssam_device_driver, driver);
}
const struct ssam_device_id *ssam_device_id_match(const struct ssam_device_id *table,
const struct ssam_device_uid uid);
const struct ssam_device_id *ssam_device_get_match(const struct ssam_device *dev);
const void *ssam_device_get_match_data(const struct ssam_device *dev);
struct ssam_device *ssam_device_alloc(struct ssam_controller *ctrl,
struct ssam_device_uid uid);
int ssam_device_add(struct ssam_device *sdev);
void ssam_device_remove(struct ssam_device *sdev);
/**
* ssam_device_get() - Increment reference count of SSAM client device.
* @sdev: The device to increment the reference count of.
*
* Increments the reference count of the given SSAM client device by
* incrementing the reference count of the enclosed &struct device via
* get_device().
*
* See ssam_device_put() for the counter-part of this function.
*
* Return: Returns the device provided as input.
*/
static inline struct ssam_device *ssam_device_get(struct ssam_device *sdev)
{
return sdev ? to_ssam_device(get_device(&sdev->dev)) : NULL;
}
/**
* ssam_device_put() - Decrement reference count of SSAM client device.
* @sdev: The device to decrement the reference count of.
*
* Decrements the reference count of the given SSAM client device by
* decrementing the reference count of the enclosed &struct device via
* put_device().
*
* See ssam_device_get() for the counter-part of this function.
*/
static inline void ssam_device_put(struct ssam_device *sdev)
{
if (sdev)
put_device(&sdev->dev);
}
/**
* ssam_device_get_drvdata() - Get driver-data of SSAM client device.
* @sdev: The device to get the driver-data from.
*
* Return: Returns the driver-data of the given device, previously set via
* ssam_device_set_drvdata().
*/
static inline void *ssam_device_get_drvdata(struct ssam_device *sdev)
{
return dev_get_drvdata(&sdev->dev);
}
/**
* ssam_device_set_drvdata() - Set driver-data of SSAM client device.
* @sdev: The device to set the driver-data of.
* @data: The data to set the device's driver-data pointer to.
*/
static inline void ssam_device_set_drvdata(struct ssam_device *sdev, void *data)
{
dev_set_drvdata(&sdev->dev, data);
}
int __ssam_device_driver_register(struct ssam_device_driver *d, struct module *o);
void ssam_device_driver_unregister(struct ssam_device_driver *d);
/**
* ssam_device_driver_register() - Register a SSAM client device driver.
* @drv: The driver to register.
*/
#define ssam_device_driver_register(drv) \
__ssam_device_driver_register(drv, THIS_MODULE)
/**
* module_ssam_device_driver() - Helper macro for SSAM device driver
* registration.
* @drv: The driver managed by this module.
*
* Helper macro to register a SSAM device driver via module_init() and
* module_exit(). This macro may only be used once per module and replaces the
* aforementioned definitions.
*/
#define module_ssam_device_driver(drv) \
module_driver(drv, ssam_device_driver_register, \
ssam_device_driver_unregister)
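A minimal, hypothetical client-driver skeleton showing how the pieces above fit together (all names, including the match table, are invented for the example):

```c
/* Minimal illustrative client-driver skeleton (all names invented). */
static int example_probe(struct ssam_device *sdev)
{
	/* sdev->ctrl can be used to issue requests from here on. */
	return 0;
}

static void example_remove(struct ssam_device *sdev)
{
	/* Release resources acquired in probe. */
}

static struct ssam_device_driver example_driver = {
	.probe = example_probe,
	.remove = example_remove,
	.match_table = example_match_table,	/* hypothetical ID table */
	.driver = {
		.name = "example_ssam_driver",
	},
};
module_ssam_device_driver(example_driver);
```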
/* -- Helpers for client-device requests. ----------------------------------- */
/**
* SSAM_DEFINE_SYNC_REQUEST_CL_N() - Define synchronous client-device SAM
* request function with neither argument nor return value.
* @name: Name of the generated function.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request having neither argument nor return value.
* Device-specifying parameters are not hard-coded; instead, they are derived
* from the UID of the client device supplied when the function is called.
* The generated function takes care of setting up the request struct, buffer
* allocation, as well as execution of the request itself, returning once the
* request has been fully completed. The required transport buffer will be
* allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_device *sdev)``,
* returning the status of the request, which is zero on success and negative
* on failure. The ``sdev`` parameter specifies both the target device of the
* request and by association the controller via which the request is sent.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_CL_N(name, spec...) \
SSAM_DEFINE_SYNC_REQUEST_MD_N(__raw_##name, spec) \
int name(struct ssam_device *sdev) \
{ \
return __raw_##name(sdev->ctrl, sdev->uid.target, \
sdev->uid.instance); \
}
/**
* SSAM_DEFINE_SYNC_REQUEST_CL_W() - Define synchronous client-device SAM
* request function with argument.
* @name: Name of the generated function.
* @atype: Type of the request's argument.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking an argument of type @atype and having no
* return value. Device-specifying parameters are not hard-coded; instead,
* they are derived from the UID of the client device supplied when the
* function is called. The generated function takes care of setting up the
* request struct, buffer allocation, as well as execution of the request
* itself, returning once the request has been fully completed. The required
* transport buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_device *sdev,
* const atype *arg)``, returning the status of the request, which is zero on
* success and negative on failure. The ``sdev`` parameter specifies both the
* target device of the request and by association the controller via which
* the request is sent. The request's argument is specified via the ``arg``
* pointer.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_CL_W(name, atype, spec...) \
SSAM_DEFINE_SYNC_REQUEST_MD_W(__raw_##name, atype, spec) \
int name(struct ssam_device *sdev, const atype *arg) \
{ \
return __raw_##name(sdev->ctrl, sdev->uid.target, \
sdev->uid.instance, arg); \
}
/**
* SSAM_DEFINE_SYNC_REQUEST_CL_R() - Define synchronous client-device SAM
* request function with return value.
* @name: Name of the generated function.
* @rtype: Type of the request's return value.
* @spec: Specification (&struct ssam_request_spec_md) defining the request.
*
* Defines a function executing the synchronous SAM request specified by
* @spec, with the request taking no argument but having a return value of
* type @rtype. Device-specifying parameters are not hard-coded; instead,
* they are derived from the UID of the client device supplied when the
* function is called. The generated function takes care of setting up the
* request struct, buffer allocation, as well as execution of the request
* itself, returning once the request has been fully completed. The required
* transport buffer will be allocated on the stack.
*
* The generated function is defined as ``int name(struct ssam_device *sdev,
* rtype *ret)``, returning the status of the request, which is zero on
* success and negative on failure. The ``sdev`` parameter specifies both the
* target device of the request and by association the controller via which
* the request is sent. The request's return value is written to the memory
* pointed to by the ``ret`` parameter.
*
* Refer to ssam_request_sync_onstack() for more details on the behavior of
* the generated function.
*/
#define SSAM_DEFINE_SYNC_REQUEST_CL_R(name, rtype, spec...) \
SSAM_DEFINE_SYNC_REQUEST_MD_R(__raw_##name, rtype, spec) \
int name(struct ssam_device *sdev, rtype *ret) \
{ \
return __raw_##name(sdev->ctrl, sdev->uid.target, \
sdev->uid.instance, ret); \
}
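A usage sketch for the client-device variant, where the target and instance IDs are taken from the device's UID rather than passed explicitly. The function name, category, and command ID are invented for the example:

```c
/*
 * Hypothetical example: a getter whose target/instance IDs come from the
 * client device's UID. Category and command ID are illustrative.
 */
SSAM_DEFINE_SYNC_REQUEST_CL_R(ssam_tmp_get_temp, __le16, {
	.target_category = SSAM_SSH_TC_TMP,
	.command_id      = 0x01,
});

/*
 * Expands to:
 *
 *	int ssam_tmp_get_temp(struct ssam_device *sdev, __le16 *ret);
 *
 * with tid/iid filled in from sdev->uid at call time.
 */
```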
#endif /* _LINUX_SURFACE_AGGREGATOR_DEVICE_H */
