win: Crash handler server

This replaces the registration server, and adds dispatch to a delegate
on crash requests.

(As you are already aware) we went around in circles trying to come up
with a slightly-too-fancy threading design. All of them seemed to have
problems with out-of-order events and orderly shutdown, so I've gone
back to something not-too-fancy.

Two named pipe instances (that clients connect to) are created. These
are used only for registration (which should take <1ms), so 2 should be
sufficient to avoid any waits. When a client registers, we duplicate an
event to it, which is used to signal when it wants a dump taken. The
server registers threadpool waits on that event, and also on the
process handle (which will be signalled when the client process exits).
These requests (in particular the taking of the dump) are serviced on
the threadpool, which avoids us needing to manage those threads, but
still allows parallelism in taking dumps. On process termination, we
use an I/O completion port to post a message back to the main thread to
request cleanup. This complexity is necessary so that we can unregister
the threadpool waits without being on the threadpool, which we need to
do synchronously so that we can be sure that no further callbacks will
execute (and still expect to have the client data around).

In a follow-up, I will re-add support for DumpWithoutCrashing -- I
don't think it will be too difficult now that we have an orderly way to
clean up client records in the server.

R=cpu@chromium.org, mark@chromium.org, jschuh@chromium.org
BUG=crashpad:1,crashpad:45

Review URL: https://codereview.chromium.org/1301853002 .
2015-09-03 11:06:17 -07:00
// Copyright 2015 The Crashpad Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include "util/win/registration_protocol_win.h"

#include <windows.h>

#include <aclapi.h>
#include <sddl.h>
#include <stddef.h>

#include "base/cxx17_backports.h"
#include "base/logging.h"
#include "util/win/exception_handler_server.h"
#include "util/win/loader_lock.h"
#include "util/win/scoped_handle.h"
#include "util/win/scoped_local_alloc.h"

namespace crashpad {

namespace {

void* GetSecurityDescriptorWithUser(const wchar_t* sddl_string, size_t* size) {
  if (size)
    *size = 0;

  PSECURITY_DESCRIPTOR base_sec_desc;
  if (!ConvertStringSecurityDescriptorToSecurityDescriptor(
          sddl_string, SDDL_REVISION_1, &base_sec_desc, nullptr)) {
    PLOG(ERROR) << "ConvertStringSecurityDescriptorToSecurityDescriptor";
    return nullptr;
  }

  ScopedLocalAlloc base_sec_desc_owner(base_sec_desc);

  EXPLICIT_ACCESS access;
  wchar_t username[] = L"CURRENT_USER";
  BuildExplicitAccessWithName(
      &access, username, GENERIC_ALL, GRANT_ACCESS, NO_INHERITANCE);

  PSECURITY_DESCRIPTOR user_sec_desc;
  ULONG user_sec_desc_size;
  DWORD error = BuildSecurityDescriptor(nullptr,
                                        nullptr,
                                        1,
                                        &access,
                                        0,
                                        nullptr,
                                        base_sec_desc,
                                        &user_sec_desc_size,
                                        &user_sec_desc);
  if (error != ERROR_SUCCESS) {
    SetLastError(error);
    PLOG(ERROR) << "BuildSecurityDescriptor";
    return nullptr;
  }

  // Match the null-check above; callers may pass a null |size|.
  if (size)
    *size = user_sec_desc_size;
  return user_sec_desc;
}

}  // namespace
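GetSecurityDescriptorWithUser() follows a defensive out-parameter convention: `*size` is zeroed before anything can fail, and the real length is written only on success, so every early return leaves the caller with a consistent (nullptr, 0) pair. A minimal portable sketch of that contract, with `BuildBlob` as a hypothetical stand-in rather than a Crashpad API:

```cpp
#include <cstddef>
#include <cstdlib>
#include <cstring>

// Hypothetical stand-in mirroring GetSecurityDescriptorWithUser()'s contract:
// returns a heap buffer the caller must free, reports its length via |size|,
// and guarantees |*size| == 0 on every failure path.
void* BuildBlob(bool succeed, size_t* size) {
  if (size)
    *size = 0;

  if (!succeed)
    return nullptr;  // |*size| is already 0; caller sees (nullptr, 0).

  static const char payload[] = "descriptor-bytes";
  void* blob = std::malloc(sizeof(payload));
  if (!blob)
    return nullptr;
  std::memcpy(blob, payload, sizeof(payload));
  if (size)
    *size = sizeof(payload);
  return blob;
}
```

Zeroing up front rather than on each failure path means a new early return added later cannot leak a stale length to the caller.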

win: Address failure-to-start-handler case for async startup

Second follow-up to https://chromium-review.googlesource.com/c/400015/

The ideal would be that if we fail to start the handler, then we don't
end up passing through our unhandled exception filter at all.

In the case of the non-initial client (i.e. renderers) we can do this by
not setting our UnhandledExceptionFilter until after we know we've
connected successfully (because those connections are synchronous from
its point of view). We also change WaitForNamedPipe in the connection
message to block forever, so as long as the precreated pipe exists,
they'll wait to connect. After the initial client has passed the server
side of that pipe to the handler, the handler has the only handle to it.
So, if the handler has disappeared for whatever reason, pipe-connecting
clients will fail with FILE_NOT_FOUND, and will not stick around in the
connection loop. This means non-initial clients do not need additional
logic to avoid getting stuck in our UnhandledExceptionFilter.

For the initial client, it would be ideal to avoid passing through our
UEF too, but none of the 3 options are great:
1. Block until we find out if we started, and then install the filter.
   We don't want to do that, because we don't want to wait.
2. Restore the old filter if it turns out we failed to start. We can't
   do that because Chrome disables ::SetUnhandledExceptionFilter()
   immediately after StartHandler/SetHandlerIPCPipe returns.
3. Don't install our filter until we've successfully started. We don't
   want to do that because we'd miss early crashes, negating the benefit
   of deferred startup.

So, we do need to pass through our UnhandledExceptionFilter. I don't
want more Win32 API calls during the vulnerable filter function. So, at
any point during async startup where there's a failure, set a global
atomic that allows the filter function to abort without trying to signal
a handler that's known to not exist.

One further improvement we might want to look at is unexpected
termination of the handler (as opposed to a failure to start), which
would still result in a useless Sleep(60s). This isn't new behaviour,
but now we have a clear thing to do if we detect the handler is gone.

(Also a missing DWORD/size_t cast for the _x64 bots.)

R=mark@chromium.org
BUG=chromium:567850,chromium:656800

Change-Id: I5be831ca39bd8b2e5c962b9647c8bd469e2be878
Reviewed-on: https://chromium-review.googlesource.com/400985
Reviewed-by: Mark Mentovai <mark@chromium.org>
2016-11-02 14:24:21 -07:00

bool SendToCrashHandlerServer(const std::wstring& pipe_name,
                              const ClientToServerMessage& message,
                              ServerToClientMessage* response) {
  // Retry CreateFile() in a loop. If the handler isn’t actively waiting in
  // ConnectNamedPipe() on a pipe instance because it’s busy doing something
  // else, CreateFile() will fail with ERROR_PIPE_BUSY. WaitNamedPipe() waits
  // until a pipe instance is ready, but there’s no way to wait for this
  // condition and atomically open the client side of the pipe in a single
  // operation. CallNamedPipe() implements similar retry logic to this, also in
  // user-mode code.
  //
  // This loop is only intended to retry on ERROR_PIPE_BUSY. Notably, if the
  // handler is so lazy that it hasn’t even called CreateNamedPipe() yet,
  // CreateFile() will fail with ERROR_FILE_NOT_FOUND, and this function is
  // expected to fail without retrying anything. If the handler is started at
  // around the same time as its client, something external to this code must be
  // done to guarantee correct ordering. When the client starts the handler
  // itself, CrashpadClient::StartHandler() provides this synchronization.
  for (;;) {
    ScopedFileHANDLE pipe(
        CreateFile(pipe_name.c_str(),
                   GENERIC_READ | GENERIC_WRITE,
                   0,
                   nullptr,
                   OPEN_EXISTING,
                   SECURITY_SQOS_PRESENT | SECURITY_IDENTIFICATION,
                   nullptr));
    if (!pipe.is_valid()) {
      if (GetLastError() != ERROR_PIPE_BUSY) {
        PLOG(ERROR) << "CreateFile";
        return false;
      }

      if (!WaitNamedPipe(pipe_name.c_str(), NMPWAIT_WAIT_FOREVER)) {
        PLOG(ERROR) << "WaitNamedPipe";
        return false;
      }

      continue;
    }

    DWORD mode = PIPE_READMODE_MESSAGE;
    if (!SetNamedPipeHandleState(pipe.get(), &mode, nullptr, nullptr)) {
      PLOG(ERROR) << "SetNamedPipeHandleState";
      return false;
    }

    DWORD bytes_read = 0;
    BOOL result = TransactNamedPipe(
        pipe.get(),
        // This is [in], but is incorrectly declared non-const.
        const_cast<ClientToServerMessage*>(&message),
        sizeof(message),
        response,
        sizeof(*response),
        &bytes_read,
        nullptr);
    if (!result) {
      PLOG(ERROR) << "TransactNamedPipe";
sufficient to avoid any waits. When a client registers, we duplicate
an event to it, which is used to signal when it wants a dump taken.
The server registers threadpool waits on that event, and also on the
process handle (which will be signalled when the client process exits).
These requests (in particular the taking of the dump) are serviced
on the threadpool, which avoids us needing to manage those threads,
but still allows parallelism in taking dumps. On process termination,
we use an IO Completion Port to post a message back to the main thread
to request cleanup. This complexity is necessary so that we can
unregister the threadpool waits without being on the threadpool, which
we need to do synchronously so that we can be sure that no further
callbacks will execute (and expect to have the client data around
still).
In a followup, I will readd support for DumpWithoutCrashing -- I don't
think it will be too difficult now that we have an orderly way to
clean up client records in the server.
R=cpu@chromium.org, mark@chromium.org, jschuh@chromium.org
BUG=crashpad:1,crashpad:45
Review URL: https://codereview.chromium.org/1301853002 .
2015-09-03 11:06:17 -07:00
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
if (bytes_read != sizeof(*response)) {
|
2015-11-10 16:43:13 -05:00
|
|
|
|
LOG(ERROR) << "TransactNamedPipe: expected " << sizeof(*response)
|
|
|
|
|
<< ", observed " << bytes_read;
|
win: Crash handler server
This replaces the registration server, and adds dispatch to a delegate
on crash requests.
(As you are already aware) we went around in circles on trying to come
up with a slightly-too-fancy threading design. All of them seemed to
have problems when it comes to out of order events, and orderly
shutdown, so I've gone back to something not-too-fancy.
Two named pipe instances (that clients connect to) are created. These
are used only for registration (which should take <1ms), so 2 should be
sufficient to avoid any waits. When a client registers, we duplicate
an event to it, which is used to signal when it wants a dump taken.
The server registers threadpool waits on that event, and also on the
process handle (which will be signalled when the client process exits).
These requests (in particular the taking of the dump) are serviced
on the threadpool, which avoids us needing to manage those threads,
but still allows parallelism in taking dumps. On process termination,
we use an IO Completion Port to post a message back to the main thread
to request cleanup. This complexity is necessary so that we can
unregister the threadpool waits without being on the threadpool, which
we need to do synchronously so that we can be sure that no further
callbacks will execute (and expect to have the client data around
still).
In a followup, I will readd support for DumpWithoutCrashing -- I don't
think it will be too difficult now that we have an orderly way to
clean up client records in the server.
R=cpu@chromium.org, mark@chromium.org, jschuh@chromium.org
BUG=crashpad:1,crashpad:45
Review URL: https://codereview.chromium.org/1301853002 .
2015-09-03 11:06:17 -07:00
|
|
|
|
return false;
|
|
|
|
|
}
|
|
|
|
|
return true;
|
|
|
|
|
}
|
|
|
|
|
}

HANDLE CreateNamedPipeInstance(const std::wstring& pipe_name,
                               bool first_instance) {
  SECURITY_ATTRIBUTES security_attributes;
  SECURITY_ATTRIBUTES* security_attributes_pointer = nullptr;

  if (first_instance) {
    // Pre-Vista does not have integrity levels.
    const DWORD version = GetVersion();
    const DWORD major_version = LOBYTE(LOWORD(version));
    const bool is_vista_or_later = major_version >= 6;
    if (is_vista_or_later) {
      memset(&security_attributes, 0, sizeof(security_attributes));
      security_attributes.nLength = sizeof(SECURITY_ATTRIBUTES);
      security_attributes.lpSecurityDescriptor =
          const_cast<void*>(GetSecurityDescriptorForNamedPipeInstance(nullptr));
      security_attributes.bInheritHandle = TRUE;
      security_attributes_pointer = &security_attributes;
    }
  }

  return CreateNamedPipe(
      pipe_name.c_str(),
      PIPE_ACCESS_DUPLEX | (first_instance ? FILE_FLAG_FIRST_PIPE_INSTANCE : 0),
      PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
      ExceptionHandlerServer::kPipeInstances,
      512,
      512,
      0,
      security_attributes_pointer);
}

const void* GetFallbackSecurityDescriptorForNamedPipeInstance(size_t* size) {
  // Mandatory Label, no ACE flags, no ObjectType, integrity level untrusted is
  // "S:(ML;;;;;S-1-16-0)". This static security descriptor is used as a
  // fallback if GetSecurityDescriptorWithUser fails, to avoid losing crashes
  // from non-AppContainer sandboxed applications.

#pragma pack(push, 1)
  static constexpr struct SecurityDescriptorBlob {
    // See https://msdn.microsoft.com/library/cc230366.aspx.
    SECURITY_DESCRIPTOR_RELATIVE sd_rel;
    struct {
      ACL acl;
      struct {
        // This is equivalent to SYSTEM_MANDATORY_LABEL_ACE, but there's no
        // DWORD offset to the SID, instead it's inline.
        ACE_HEADER header;
        ACCESS_MASK mask;
        SID sid;
      } ace[1];
    } sacl;
  } kSecDescBlob = {
      // sd_rel.
      {
          SECURITY_DESCRIPTOR_REVISION1,           // Revision.
          0x00,                                    // Sbz1.
          SE_SELF_RELATIVE | SE_SACL_PRESENT,      // Control.
          0,                                       // OffsetOwner.
          0,                                       // OffsetGroup.
          offsetof(SecurityDescriptorBlob, sacl),  // OffsetSacl.
          0,                                       // OffsetDacl.
      },

      // sacl.
      {
          // acl.
          {
              ACL_REVISION,               // AclRevision.
              0,                          // Sbz1.
              sizeof(kSecDescBlob.sacl),  // AclSize.
              static_cast<WORD>(
                  base::size(kSecDescBlob.sacl.ace)),  // AceCount.
              0,                                       // Sbz2.
          },

          // ace[0].
          {
              {
                  // header.
                  {
                      SYSTEM_MANDATORY_LABEL_ACE_TYPE,   // AceType.
                      0,                                 // AceFlags.
                      sizeof(kSecDescBlob.sacl.ace[0]),  // AceSize.
                  },

                  // mask.
                  0,

                  // sid.
                  {
                      SID_REVISION,  // Revision.
                      // SubAuthorityCount.
                      static_cast<BYTE>(base::size(
                          kSecDescBlob.sacl.ace[0].sid.SubAuthority)),
                      // IdentifierAuthority.
                      {SECURITY_MANDATORY_LABEL_AUTHORITY},
                      {SECURITY_MANDATORY_UNTRUSTED_RID},  // SubAuthority.
                  },
              },
          },
      },
  };
#pragma pack(pop)

  if (size)
    *size = sizeof(kSecDescBlob);
  return reinterpret_cast<const void*>(&kSecDescBlob);
}

const void* GetSecurityDescriptorForNamedPipeInstance(size_t* size) {
  CHECK(!IsThreadInLoaderLock());

  // Get a security descriptor which grants the current user and SYSTEM full
  // access to the named pipe. Also grant AppContainer RW access through the ALL
  // APPLICATION PACKAGES SID (S-1-15-2-1). Finally add an Untrusted Mandatory
  // Label for non-AppContainer sandboxed users.
  static size_t sd_size;
  static void* sec_desc = GetSecurityDescriptorWithUser(
      L"D:(A;;GA;;;SY)(A;;GWGR;;;S-1-15-2-1)S:(ML;;;;;S-1-16-0)", &sd_size);

  if (!sec_desc)
    return GetFallbackSecurityDescriptorForNamedPipeInstance(size);

  if (size)
    *size = sd_size;
  return sec_desc;
}

} // namespace crashpad