Commit 2485b867 authored by Kenji Kaneshige, committed by Jesse Barnes

PCI: ignore bit0 of _OSC return code



Currently, acpi_run_osc() checks all the bits in the _OSC result code (the
first DWORD in the capabilities buffer) to detect error conditions. But
bit 0, which doesn't indicate any error, must be ignored.

Bit 0 is used as the query flag at _OSC invocation time. Some platforms
clear it during _OSC evaluation, but others don't. On the latter
platforms, the current acpi_run_osc() mis-detects an error whenever _OSC
is evaluated with the query flag set, because it doesn't ignore bit 0.
Because of this, __acpi_query_osc() always fails on such platforms.
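
For illustration, a minimal sketch of the idea behind the fix (this is not
the literal patch hunk; the OSC_QUERY_ENABLE name follows the kernel's
_OSC definitions): mask off bit 0 before interpreting the first returned
DWORD as error bits.

    #include <stdint.h>

    /* Bit 0 of the _OSC return DWORD is the query flag, not an error bit.
     * The name mirrors the kernel's OSC_QUERY_ENABLE definition. */
    #define OSC_QUERY_ENABLE	(1U << 0)

    /*
     * Return only the error bits from the first DWORD of the _OSC
     * capabilities buffer, ignoring the query flag in bit 0 so that a
     * platform which leaves the flag set is not mis-detected as an error.
     */
    static inline uint32_t osc_return_errors(uint32_t first_dword)
    {
    	return first_dword & ~OSC_QUERY_ENABLE;
    }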

This is the cause of the problem where pci_osc_control_set() has not
worked since commit 4e39432f, which changed pci_osc_control_set() to use
__acpi_query_osc().

Tested-by: Tomasz Czernecki <czernecki@gmail.com>
Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
parent f21f237c