feat: add gperftools

parent 719fecd4bc
commit 0b9103e276

1458  3party/gperftools/CMakeLists.txt (new file)
      (file diff suppressed because it is too large)

 202  3party/gperftools/README (new file)

@@ -0,0 +1,202 @@
gperftools
----------
(originally Google Performance Tools)

The fastest malloc we've seen; works particularly well with threads
and STL.  Also: thread-friendly heap-checker, heap-profiler, and
cpu-profiler.


OVERVIEW
---------

gperftools is a collection of a high-performance multi-threaded
malloc() implementation, plus some pretty nifty performance analysis
tools.

gperftools is distributed under the terms of the BSD License.  Join our
mailing list at gperftools@googlegroups.com for updates:
https://groups.google.com/forum/#!forum/gperftools

gperftools was the original home of the pprof program.  Note, though,
that the original pprof (which is still included with gperftools) is
now deprecated in favor of the Go version at
https://github.com/google/pprof


TCMALLOC
--------
Just link in -ltcmalloc or -ltcmalloc_minimal to get the advantages of
tcmalloc -- a replacement for malloc and new.  See below for some
environment variables you can use with tcmalloc, as well.

tcmalloc functionality is available on all systems we've tested; see
INSTALL for more details.  See README_windows.txt for instructions on
using tcmalloc on Windows.

NOTE: When compiling programs with gcc that you plan to link with
libtcmalloc, it's safest to pass in the flags

 -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc -fno-builtin-free

when compiling.  gcc makes some optimizations assuming it is using its
own, built-in malloc; that assumption obviously isn't true with
tcmalloc.  In practice, we haven't seen any problems with this, but
the expected risk is highest for users who register their own malloc
hooks with tcmalloc (using gperftools/malloc_hook.h).  The risk is
lowest for folks who use tcmalloc_minimal (or, of course, who pass in
the above flags :-) ).
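
For example, a full compile line with these flags might look like this
(myapp.c here is just a stand-in for your own source file):

   $ gcc -fno-builtin-malloc -fno-builtin-calloc -fno-builtin-realloc \
         -fno-builtin-free -o myapp myapp.c -ltcmalloc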


HEAP PROFILER
-------------
See docs/heapprofile.html for information about how to use tcmalloc's
heap profiler and analyze its output.

As a quick-start, do the following after installing this package:

1) Link your executable with -ltcmalloc
2) Run your executable with the HEAPPROFILE environment var set:
     $ HEAPPROFILE=/tmp/heapprof <path/to/binary> [binary args]
3) Run pprof to analyze the heap usage
     $ pprof <path/to/binary> /tmp/heapprof.0045.heap  # run 'ls' to see options
     $ pprof --gv <path/to/binary> /tmp/heapprof.0045.heap

You can also use LD_PRELOAD to heap-profile an executable that you
didn't compile.

There are other environment variables, besides HEAPPROFILE, you can
set to adjust the heap-profiler behavior; cf. "ENVIRONMENT VARIABLES"
below.

The heap profiler is available on all unix-based systems we've tested;
see INSTALL for more details.  It is not currently available on Windows.
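
Besides the environment variable, heap profiling can also be started
and stopped from code via gperftools/heap-profiler.h; a minimal sketch
(the profile prefix and region boundaries are yours to choose):

   #include <gperftools/heap-profiler.h>

   int main() {
     HeapProfilerStart("/tmp/heapprof");  // same role as HEAPPROFILE
     /* ... allocations to profile ... */
     HeapProfilerDump("checkpoint");      // write an intermediate profile
     HeapProfilerStop();
     return 0;
   }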


HEAP CHECKER
------------

Please note that as of gperftools-2.11 this is deprecated.  You should
consider asan and other sanitizers instead.

See docs/heap_checker.html for information about how to use tcmalloc's
heap checker.

In order to catch all heap leaks, tcmalloc must be linked *last* into
your executable.  The heap checker may mischaracterize some memory
accesses in libraries listed after it on the link line.  For instance,
it may report these libraries as leaking memory when they're not.
(See the source code for more details.)

Here's a quick-start; do the following after installing this package:

1) Link your executable with -ltcmalloc
2) Run your executable with the HEAPCHECK environment var set:
     $ HEAPCHECK=1 <path/to/binary> [binary args]

Other values for HEAPCHECK: normal (equivalent to "1"), strict, draconian

You can also use LD_PRELOAD to heap-check an executable that you
didn't compile.

The heap checker is only available on Linux at this time; see INSTALL
for more details.
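
For example, to heap-check a prebuilt binary via LD_PRELOAD (the
library path and ./myapp below are assumptions; adjust for your
install):

   $ LD_PRELOAD=/usr/lib/libtcmalloc.so HEAPCHECK=normal ./myapp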


CPU PROFILER
------------
See docs/cpuprofile.html for information about how to use the CPU
profiler and analyze its output.

As a quick-start, do the following after installing this package:

1) Link your executable with -lprofiler
2) Run your executable with the CPUPROFILE environment var set:
     $ CPUPROFILE=/tmp/prof.out <path/to/binary> [binary args]
3) Run pprof to analyze the CPU usage
     $ pprof <path/to/binary> /tmp/prof.out          # -pg-like text output
     $ pprof --gv <path/to/binary> /tmp/prof.out     # really cool graphical output

There are other environment variables, besides CPUPROFILE, you can set
to adjust the cpu-profiler behavior; cf. "ENVIRONMENT VARIABLES" below.

The CPU profiler is available on all unix-based systems we've tested;
see INSTALL for more details.  It is not currently available on Windows.

NOTE:  CPU profiling doesn't work after fork (unless you immediately
       do an exec()-like call afterwards).  Furthermore, if you do
       fork, and the child calls exit(), it may corrupt the profile
       data.  You can use _exit() to work around this.  We hope to have
       a fix for both problems in the next release of perftools
       (hopefully perftools 1.2).
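
The profiler can also be controlled from code via
gperftools/profiler.h; a minimal sketch (the output path and the
profiled region are yours to choose):

   #include <gperftools/profiler.h>

   int main() {
     ProfilerStart("/tmp/prof.out");  // same role as CPUPROFILE
     /* ... code to profile ... */
     ProfilerStop();
     return 0;
   }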


EVERYTHING IN ONE
-----------------
If you want the CPU profiler, heap profiler, and heap leak-checker to
all be available for your application, you can do:
   gcc -o myapp ... -lprofiler -ltcmalloc

However, if you have a reason to use the static versions of the
library, this two-library linking won't work:
   gcc -o myapp ... /usr/lib/libprofiler.a /usr/lib/libtcmalloc.a  # errors!

Instead, use the special libtcmalloc_and_profiler library, which we
make for just this purpose:
   gcc -o myapp ... /usr/lib/libtcmalloc_and_profiler.a


CONFIGURATION OPTIONS
---------------------
For advanced users, there are several flags you can pass to
'./configure' that tweak tcmalloc performance.  (These are in addition
to the environment variables you can set at runtime to affect
tcmalloc, described below.)  See the INSTALL file for details.
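
One such flag, for example, builds just the minimal malloc without the
profilers (see INSTALL for the full list):

   $ ./configure --enable-minimal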


ENVIRONMENT VARIABLES
---------------------
The cpu profiler, heap checker, and heap profiler will lie dormant,
using no memory or CPU, until you turn them on.  (Thus, there's no
harm in linking -lprofiler into every application, and also -ltcmalloc
assuming you're ok using the non-libc malloc library.)

The easiest way to turn them on is by setting the appropriate
environment variables.  We have several variables that let you
enable/disable features as well as tweak parameters.

Here are some of the most important variables:

HEAPPROFILE=<pre>  -- turns on heap profiling and dumps data using this prefix
HEAPCHECK=<type>   -- turns on heap checking with strictness 'type'
CPUPROFILE=<file>  -- turns on cpu profiling and dumps data to this file.
PROFILESELECTED=1  -- if set, cpu-profiler will only profile regions of code
                      surrounded with ProfilerEnable()/ProfilerDisable().
CPUPROFILE_FREQUENCY=x -- how many interrupts/second the cpu-profiler samples.

PERFTOOLS_VERBOSE=<level> -- the higher level, the more messages malloc emits
MALLOCSTATS=<level>       -- prints memory-use stats at program-exit

For a full list of variables, see the documentation pages:
   docs/cpuprofile.html
   docs/heapprofile.html
   docs/heap_checker.html

See also the TCMALLOC_STACKTRACE_METHOD_VERBOSE and
TCMALLOC_STACKTRACE_METHOD environment variables, briefly documented in
our INSTALL file and on our wiki page at:
https://github.com/gperftools/gperftools/wiki/gperftools'-stacktrace-capturing-methods-and-their-issues
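
For example, several of these can be combined on one run (./myapp is a
stand-in for your own binary):

   $ HEAPPROFILE=/tmp/hp CPUPROFILE=/tmp/cp MALLOCSTATS=1 ./myapp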


COMPILING ON NON-LINUX SYSTEMS
------------------------------

Perftools was developed and tested on x86, aarch64 and riscv Linux
systems, and it works in its full generality only on those systems.

However, we've successfully ported much of the tcmalloc library to
FreeBSD, Solaris x86 (not tested recently though), and Mac OS X
(aarch64; x86 and ppc have not been tested recently); and we've ported
the basic functionality in tcmalloc_minimal to Windows.  See INSTALL
for details.  See README_windows.txt for details on the Windows port.


---
Originally written: 17 May 2011
Last refreshed: 10 Aug 2023

22  3party/gperftools/cmake/DefineTargetVariables.cmake (new file)

@@ -0,0 +1,22 @@
if(NOT COMMAND check_cxx_source_compiles)
  include(CheckCXXSourceCompiles)
endif()

macro(define_target_variables)
  check_cxx_source_compiles("int main() { return __i386__; }" i386)
  check_cxx_source_compiles("int main() { return __x86_64__; }" x86_64)
  check_cxx_source_compiles("int main() { return __s390__; }" s390)
  if(APPLE)
    check_cxx_source_compiles("int main() { return __arm64__; }" ARM)
    check_cxx_source_compiles("int main() { return __ppc64__; }" PPC64)
    check_cxx_source_compiles("int main() { return __ppc__; }" PPC)
  else()
    check_cxx_source_compiles("int main() { return __arm__; }" ARM)
    check_cxx_source_compiles("int main() { return __PPC64__; }" PPC64)
    check_cxx_source_compiles("int main() { return __PPC__; }" PPC)
  endif()
  check_cxx_source_compiles("int main() { return __FreeBSD__; }" FreeBSD)
  check_cxx_source_compiles("int main() { return __MINGW__; }" MINGW)
  check_cxx_source_compiles("int main() { return __linux; }" LINUX)
  check_cxx_source_compiles("int main() { return __APPLE__; }" OSX)
endmacro()
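
A hypothetical usage sketch for this macro (the variable names come
from the check_cxx_source_compiles() calls above; not part of the
commit):

   include(cmake/DefineTargetVariables.cmake)
   define_target_variables()
   if(x86_64)
     message(STATUS "configuring for x86-64")
   endif()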

275  3party/gperftools/cmake/config.h.in (new file)

@@ -0,0 +1,275 @@
/* Sometimes we accidentally #include this config.h instead of the one
   in .. -- this is particularly true for msys/mingw, which uses the
   unix config.h but also runs code in the windows directory.
   */
#ifdef __MINGW32__
#include "../config.h"
#define GOOGLE_PERFTOOLS_WINDOWS_CONFIG_H_
#endif

#ifndef GOOGLE_PERFTOOLS_WINDOWS_CONFIG_H_
#define GOOGLE_PERFTOOLS_WINDOWS_CONFIG_H_
/* used by tcmalloc.h */
#define GPERFTOOLS_CONFIG_H_

/* Enable aggressive decommit by default */
#cmakedefine ENABLE_AGGRESSIVE_DECOMMIT_BY_DEFAULT

/* Build new/delete operators for overaligned types */
#cmakedefine ENABLE_ALIGNED_NEW_DELETE

/* Build runtime detection for sized delete */
#cmakedefine ENABLE_DYNAMIC_SIZED_DELETE

/* Report large allocation */
#cmakedefine ENABLE_LARGE_ALLOC_REPORT

/* Build sized deletion operators */
#cmakedefine ENABLE_SIZED_DELETE

/* Define to 1 if you have the <asm/ptrace.h> header file. */
#cmakedefine HAVE_ASM_PTRACE_H

/* Define to 1 if you have the <cygwin/signal.h> header file. */
#cmakedefine HAVE_CYGWIN_SIGNAL_H

/* Define to 1 if you have the declaration of `backtrace', and to 0 if you
   don't. */
#cmakedefine01 HAVE_DECL_BACKTRACE

/* Define to 1 if you have the declaration of `cfree', and to 0 if you don't.
   */
#cmakedefine01 HAVE_DECL_CFREE

/* Define to 1 if you have the declaration of `memalign', and to 0 if you
   don't. */
#cmakedefine01 HAVE_DECL_MEMALIGN

/* Define to 1 if you have the declaration of `nanosleep', and to 0 if you
   don't. */
#cmakedefine01 HAVE_DECL_NANOSLEEP

/* Define to 1 if you have the declaration of `posix_memalign', and to 0 if
   you don't. */
#cmakedefine01 HAVE_DECL_POSIX_MEMALIGN

/* Define to 1 if you have the declaration of `pvalloc', and to 0 if you
   don't. */
#cmakedefine01 HAVE_DECL_PVALLOC

/* Define to 1 if you have the declaration of `sleep', and to 0 if you don't.
   */
#cmakedefine01 HAVE_DECL_SLEEP

/* Define to 1 if you have the declaration of `valloc', and to 0 if you don't.
   */
#cmakedefine01 HAVE_DECL_VALLOC

/* Define to 1 if you have the <execinfo.h> header file. */
#cmakedefine HAVE_EXECINFO_H

/* Define to 1 if you have the <fcntl.h> header file. */
#cmakedefine HAVE_FCNTL_H

/* Define to 1 if you have the <features.h> header file. */
#cmakedefine HAVE_FEATURES_H

/* Define to 1 if you have the `fork' function. */
#cmakedefine HAVE_FORK

/* Define to 1 if you have the `geteuid' function. */
#cmakedefine HAVE_GETEUID

/* Define to 1 if you have the <glob.h> header file. */
#cmakedefine HAVE_GLOB_H

/* Define to 1 if you have the <grp.h> header file. */
#cmakedefine HAVE_GRP_H

/* Define to 1 if you have the <libunwind.h> header file. */
#cmakedefine01 HAVE_LIBUNWIND_H

#cmakedefine USE_LIBUNWIND

/* Define if this is Linux that has SIGEV_THREAD_ID */
#cmakedefine01 HAVE_LINUX_SIGEV_THREAD_ID

/* Define to 1 if you have the <malloc.h> header file. */
#cmakedefine HAVE_MALLOC_H

/* Define to 1 if you have the <malloc/malloc.h> header file. */
#cmakedefine HAVE_MALLOC_MALLOC_H

/* Define to 1 if you have a working `mmap' system call. */
#cmakedefine HAVE_MMAP

/* Define to 1 if you have the <poll.h> header file. */
#cmakedefine HAVE_POLL_H

/* define if libc has program_invocation_name */
#cmakedefine HAVE_PROGRAM_INVOCATION_NAME

/* Define if you have POSIX threads libraries and header files. */
#cmakedefine HAVE_PTHREAD

/* defined to 1 if pthread symbols are exposed even without including
   pthread.h */
#cmakedefine HAVE_PTHREAD_DESPITE_ASKING_FOR

/* Define to 1 if you have the <pwd.h> header file. */
#cmakedefine HAVE_PWD_H

/* Define to 1 if you have the `sbrk' function. */
#cmakedefine HAVE_SBRK

/* Define to 1 if you have the <sched.h> header file. */
#cmakedefine HAVE_SCHED_H

/* Define to 1 if the system has the type `struct mallinfo'. */
#cmakedefine HAVE_STRUCT_MALLINFO

/* Define to 1 if the system has the type `struct mallinfo2'. */
#cmakedefine HAVE_STRUCT_MALLINFO2

/* Define to 1 if you have the <sys/cdefs.h> header file. */
#cmakedefine HAVE_SYS_CDEFS_H

/* Define to 1 if you have the <sys/malloc.h> header file. */
#cmakedefine HAVE_SYS_MALLOC_H

/* Define to 1 if you have the <sys/resource.h> header file. */
#cmakedefine HAVE_SYS_RESOURCE_H

/* Define to 1 if you have the <sys/socket.h> header file. */
#cmakedefine HAVE_SYS_SOCKET_H

/* Define to 1 if you have the <sys/syscall.h> header file. */
#cmakedefine01 HAVE_SYS_SYSCALL_H

/* Define to 1 if you have the <sys/types.h> header file. */
#cmakedefine HAVE_SYS_TYPES_H

/* Define to 1 if you have the <sys/ucontext.h> header file. */
#cmakedefine01 HAVE_SYS_UCONTEXT_H

/* Define to 1 if you have the <sys/wait.h> header file. */
#cmakedefine HAVE_SYS_WAIT_H

/* Define to 1 if compiler supports __thread */
#cmakedefine HAVE_TLS

/* Define to 1 if you have the <ucontext.h> header file. */
#cmakedefine01 HAVE_UCONTEXT_H

/* Define to 1 if you have the <unistd.h> header file. */
#cmakedefine HAVE_UNISTD_H

/* Whether <unwind.h> contains _Unwind_Backtrace */
#cmakedefine HAVE_UNWIND_BACKTRACE

/* Define to 1 if you have the <unwind.h> header file. */
#cmakedefine HAVE_UNWIND_H

/* define if your compiler has __attribute__ */
#cmakedefine HAVE___ATTRIBUTE__

/* define if your compiler supports alignment of functions */
#cmakedefine HAVE___ATTRIBUTE__ALIGNED_FN

/* Define to 1 if compiler supports __environ */
#cmakedefine HAVE___ENVIRON

/* Define to 1 if you have the `__sbrk' function. */
#cmakedefine01 HAVE___SBRK

/* prefix where we look for installed files */
#cmakedefine INSTALL_PREFIX

/* Define to the sub-directory where libtool stores uninstalled libraries. */
#cmakedefine LT_OBJDIR

/* Name of package */
#define PACKAGE "@PROJECT_NAME@"

/* Define to the address where bug reports for this package should be sent. */
#define PACKAGE_BUGREPORT "gperftools@googlegroups.com"

/* Define to the full name of this package. */
#define PACKAGE_NAME "@PROJECT_NAME@"

/* Define to the full name and version of this package. */
#define PACKAGE_STRING "@PROJECT_NAME@ @PROJECT_VERSION@"

/* Define to the one symbol short name of this package. */
#define PACKAGE_TARNAME "@PROJECT_NAME@"

/* Define to the home page for this package. */
#cmakedefine PACKAGE_URL

/* Define to the version of this package. */
#define PACKAGE_VERSION "@PROJECT_VERSION@"

/* Always the empty-string on non-windows systems. On windows, should be
   "__declspec(dllexport)". This way, when we compile the dll, we export our
   functions/classes. It's safe to define this here because config.h is only
   used internally, to compile the DLL, and every DLL source file #includes
   "config.h" before anything else. */
#ifndef WIN32
#cmakedefine WIN32
#endif
#if defined(WIN32)
#ifndef PERFTOOLS_DLL_DECL
# define PERFTOOLS_IS_A_DLL  1
# define PERFTOOLS_DLL_DECL  __declspec(dllexport)
# define PERFTOOLS_DLL_DECL_FOR_UNITTESTS  __declspec(dllimport)
#endif
#else
#ifndef PERFTOOLS_DLL_DECL
# define PERFTOOLS_DLL_DECL
# define PERFTOOLS_DLL_DECL_FOR_UNITTESTS
#endif
#endif

/* if libgcc stacktrace method should be default */
#cmakedefine PREFER_LIBGCC_UNWINDER

/* Mark the systems where we know it's bad if pthreads runs too
   early before main (before threads are initialized, presumably). */
#ifdef __FreeBSD__
#define PTHREADS_CRASHES_IF_RUN_TOO_EARLY 1
#endif

/* Define 8 bytes of allocation alignment for tcmalloc */
#cmakedefine TCMALLOC_ALIGN_8BYTES

/* Define internal page size for tcmalloc as number of left bitshift */
#cmakedefine TCMALLOC_PAGE_SIZE_SHIFT @TCMALLOC_PAGE_SIZE_SHIFT@

/* Version number of package */
#define VERSION @PROJECT_VERSION@

/* C99 says: define this to get the PRI... macros from stdint.h */
#ifndef __STDC_FORMAT_MACROS
# define __STDC_FORMAT_MACROS 1
#endif

// ---------------------------------------------------------------------
// Extra stuff not found in config.h.in
#if defined(WIN32)

// This must be defined before the windows.h is included.  We need at
// least 0x0400 for mutex.h to have access to TryLock, and at least
// 0x0501 for patch_functions.cc to have access to GetModuleHandleEx.
// (This latter is an optimization we could take out if need be.)
#ifndef _WIN32_WINNT
# define _WIN32_WINNT 0x0501
#endif

// We want to make sure not to ever try to #include heap-checker.h
#define NO_HEAP_CHECK 1

// TODO(csilvers): include windows/port.h in every relevant source file instead?
#include "windows/port.h"

#endif
#endif  /* GOOGLE_PERFTOOLS_WINDOWS_CONFIG_H_ */
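
The #cmakedefine and @...@ placeholders above are resolved by CMake's
configure_file(); a minimal sketch of how this template would be
instantiated (the destination path is an assumption):

   configure_file(cmake/config.h.in ${CMAKE_CURRENT_BINARY_DIR}/config.h)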

14  3party/gperftools/cmake/pkgconfig.pc (new file)

@@ -0,0 +1,14 @@
prefix=@CMAKE_INSTALL_PREFIX@
exec_prefix=${prefix}
libdir=${prefix}/@CMAKE_INSTALL_LIBDIR@
includedir=${prefix}/@CMAKE_INSTALL_INCLUDEDIR@

Name: @CMAKE_PROJECT_NAME@
Version: @CMAKE_PROJECT_VERSION@
Description: @CMAKE_PROJECT_DESCRIPTION@
URL: @CMAKE_PROJECT_HOMEPAGE_URL@
Requires:
Libs: -L${libdir} -l@NAME@
Libs.private:@PTHREAD_FLAGS@
Cflags: -I${includedir}
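
Once installed (assuming the template is instantiated as, say,
libtcmalloc.pc), this file lets clients compile and link via
pkg-config:

   $ gcc myapp.c $(pkg-config --cflags --libs libtcmalloc)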

166  3party/gperftools/cmake/tcmalloc.h.in (new file)

@@ -0,0 +1,166 @@
// -*- Mode: C; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2003, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat <opensource@google.com>
 *         .h file by Craig Silverstein <opensource@google.com>
 */

#ifndef TCMALLOC_TCMALLOC_H_
#define TCMALLOC_TCMALLOC_H_

#include <stddef.h>                     /* for size_t */
#ifdef __cplusplus
#include <new>                          /* for std::nothrow_t, std::align_val_t */
#endif

/* Define the version number so folks can check against it */
#define TC_VERSION_MAJOR  @PROJECT_VERSION_MAJOR@
#define TC_VERSION_MINOR  @PROJECT_VERSION_MINOR@
#define TC_VERSION_PATCH  ".@PROJECT_VERSION_PATCH@"
#define TC_VERSION_STRING "gperftools @PROJECT_VERSION@"

/* For struct mallinfo, if it's defined. */
#if @HAVE_STRUCT_MALLINFO@ || @HAVE_STRUCT_MALLINFO2@
# include <malloc.h>
#endif

#ifndef PERFTOOLS_NOTHROW

#if __cplusplus >= 201103L
#define PERFTOOLS_NOTHROW noexcept
#elif defined(__cplusplus)
#define PERFTOOLS_NOTHROW throw()
#else
# ifdef __GNUC__
#  define PERFTOOLS_NOTHROW __attribute__((__nothrow__))
# else
#  define PERFTOOLS_NOTHROW
# endif
#endif

#endif

#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#  define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#  define PERFTOOLS_DLL_DECL
# endif
#endif

#ifdef __cplusplus
extern "C" {
#endif
  /*
   * Returns a human-readable version string.  If major, minor,
   * and/or patch are not NULL, they are set to the major version,
   * minor version, and patch-code (a string, usually "").
   */
  PERFTOOLS_DLL_DECL const char* tc_version(int* major, int* minor,
                                            const char** patch) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void* tc_malloc(size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_malloc_skip_new_handler(size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_free(void* ptr) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_free_sized(void *ptr, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_realloc(void* ptr, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_calloc(size_t nmemb, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_cfree(void* ptr) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void* tc_memalign(size_t __alignment,
                                       size_t __size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL int tc_posix_memalign(void** ptr,
                                           size_t align, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_valloc(size_t __size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_pvalloc(size_t __size) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void tc_malloc_stats(void) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL int tc_mallopt(int cmd, int value) PERFTOOLS_NOTHROW;
#if @HAVE_STRUCT_MALLINFO@
  PERFTOOLS_DLL_DECL struct mallinfo tc_mallinfo(void) PERFTOOLS_NOTHROW;
#endif
#if @HAVE_STRUCT_MALLINFO2@
  PERFTOOLS_DLL_DECL struct mallinfo2 tc_mallinfo2(void) PERFTOOLS_NOTHROW;
#endif

  /*
   * This is an alias for MallocExtension::instance()->GetAllocatedSize().
   * It is equivalent to
   *    OS X: malloc_size()
   *    glibc: malloc_usable_size()
   *    Windows: _msize()
   */
  PERFTOOLS_DLL_DECL size_t tc_malloc_size(void* ptr) PERFTOOLS_NOTHROW;

#ifdef __cplusplus
  PERFTOOLS_DLL_DECL int tc_set_new_mode(int flag) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_new(size_t size);
  PERFTOOLS_DLL_DECL void* tc_new_nothrow(size_t size,
                                          const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete(void* p) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_sized(void* p, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_nothrow(void* p,
                                            const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_newarray(size_t size);
  PERFTOOLS_DLL_DECL void* tc_newarray_nothrow(size_t size,
                                               const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray(void* p) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_sized(void* p, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_nothrow(void* p,
                                                 const std::nothrow_t&) PERFTOOLS_NOTHROW;

#if @HAVE_STD_ALIGN_VAL_T@ && __cplusplus >= 201703L
  PERFTOOLS_DLL_DECL void* tc_new_aligned(size_t size, std::align_val_t al);
  PERFTOOLS_DLL_DECL void* tc_new_aligned_nothrow(size_t size, std::align_val_t al,
                                                  const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_aligned(void* p, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_sized_aligned(void* p, size_t size, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_aligned_nothrow(void* p, std::align_val_t al,
                                                    const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_newarray_aligned(size_t size, std::align_val_t al);
  PERFTOOLS_DLL_DECL void* tc_newarray_aligned_nothrow(size_t size, std::align_val_t al,
                                                       const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_aligned(void* p, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_sized_aligned(void* p, size_t size, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_aligned_nothrow(void* p, std::align_val_t al,
                                                         const std::nothrow_t&) PERFTOOLS_NOTHROW;
#endif
}
#endif

/* We're only un-defining for public */
#if !defined(GPERFTOOLS_CONFIG_H_)

#undef PERFTOOLS_NOTHROW

#endif /* GPERFTOOLS_CONFIG_H_ */

#endif  /* #ifndef TCMALLOC_TCMALLOC_H_ */
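
A minimal sketch of calling this C API directly, using only the
declarations above (the include path assumes an installed copy of the
generated header):

   #include <stdio.h>
   #include <gperftools/tcmalloc.h>

   int main(void) {
     int major, minor;
     const char* patch;
     printf("%s\n", tc_version(&major, &minor, &patch));
     void* p = tc_malloc(64);   /* drop-in for malloc(64) */
     tc_free(p);
     return 0;
   }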

417  3party/gperftools/src/addressmap-inl.h (new file)

@@ -0,0 +1,417 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//
// A fast map from addresses to values.  Assumes that addresses are
// clustered.  The main use is intended to be for heap-profiling.
// May be too memory-hungry for other uses.
//
// We use a user-defined allocator/de-allocator so that we can use
// this data structure during heap-profiling.
//
// IMPLEMENTATION DETAIL:
//
// Some default definitions/parameters:
//  * Block      -- aligned 128-byte region of the address space
//  * Cluster    -- aligned 1-MB region of the address space
//  * Block-ID   -- block-number within a cluster
//  * Cluster-ID -- Starting address of cluster divided by cluster size
//
// We use a three-level map to represent the state:
//  1. A hash-table maps from a cluster-ID to the data for that cluster.
//  2. For each non-empty cluster we keep an array indexed by
//     block-ID that points to the first entry in the linked-list
//     for the block.
//  3. At the bottom, we keep a singly-linked list of all
//     entries in a block (for non-empty blocks).
//
//    hash table
//  +-------------+
//  | id->cluster |---> ...
//  |     ...     |
//  | id->cluster |--->  Cluster
//  +-------------+     +-------+    Data for one block
//                      |  nil  |   +------------------------------------+
//                      |  ----+---|->[addr/value]-->[addr/value]-->...  |
//                      |  nil  |   +------------------------------------+
//                      |  ----+--> ...
//                      |  nil  |
//                      |  ...  |
//                      +-------+
//
// Note that we require zero-bytes of overhead for completely empty
// clusters.  The minimum space requirement for a cluster is the size
// of the hash-table entry plus a pointer value for each block in
// the cluster.  Empty blocks impose no extra space requirement.
//
// The cost of a lookup is:
//      a. A hash-table lookup to find the cluster
//      b. An array access in the cluster structure
//      c. A traversal over the linked-list for a block

#ifndef BASE_ADDRESSMAP_INL_H_
#define BASE_ADDRESSMAP_INL_H_

#include "config.h"
#include <stddef.h>
#include <string.h>
#include <stdint.h>     // to get uint16_t (ISO naming madness)
#include <inttypes.h>   // another place uint16_t might be defined

// This class is thread-unsafe -- that is, instances of this class can
// not be accessed concurrently by multiple threads -- because the
// callback function for Iterate() may mutate contained values.  If the
// callback functions you pass do not mutate their Value* argument,
// AddressMap can be treated as thread-compatible -- that is, it's
// safe for multiple threads to call "const" methods on this class,
// but not safe for one thread to call const methods on this class
// while another thread is calling non-const methods on the class.
template <class Value>
class AddressMap {
 public:
  typedef void* (*Allocator)(size_t size);
  typedef void  (*DeAllocator)(void* ptr);
  typedef const void* Key;

  // Create an AddressMap that uses the specified allocator/deallocator.
  // The allocator/deallocator should behave like malloc/free.
  // For instance, the allocator does not need to return initialized memory.
  AddressMap(Allocator alloc, DeAllocator dealloc);
  ~AddressMap();

  // If the map contains an entry for "key", return it.  Else return NULL.
  inline const Value* Find(Key key) const;
  inline Value* FindMutable(Key key);

  // Insert <key,value> into the map.  Any old value associated
  // with key is forgotten.
  void Insert(Key key, Value value);

  // Remove any entry for key in the map.  If an entry was found
  // and removed, stores the associated value in "*removed_value"
  // and returns true.  Else returns false.
  bool FindAndRemove(Key key, Value* removed_value);

  // Similar to Find but we assume that keys are addresses of non-overlapping
  // memory ranges whose sizes are given by size_func.
  // If the map contains a range into which "key" points
  // (at its start or inside of it, but not at the end),
  // return the address of the associated value
  // and store its key in "*res_key".
  // Else return NULL.
  // max_size specifies largest range size possibly in existence now.
  typedef size_t (*ValueSizeFunc)(const Value& v);
  const Value* FindInside(ValueSizeFunc size_func, size_t max_size,
                          Key key, Key* res_key);

  // Iterate over the address map calling 'callback'
  // for all stored key-value pairs and passing 'arg' to it.
  // We don't use full Closure/Callback machinery not to add
  // unnecessary dependencies to this class with low-level uses.
  template<class Type>
  inline void Iterate(void (*callback)(Key, Value*, Type), Type arg) const;

 private:
  typedef uintptr_t Number;

  // The implementation assumes that addresses inserted into the map
  // will be clustered.  We take advantage of this fact by splitting
  // up the address-space into blocks and using a linked-list entry
  // for each block.

  // Size of each block.  There is one linked-list for each block, so
  // do not make the block-size too big.  Otherwise, a lot of time
  // will be spent traversing linked lists.
  static const int kBlockBits = 7;
  static const int kBlockSize = 1 << kBlockBits;

  // Entry kept in per-block linked-list
  struct Entry {
    Entry* next;
    Key    key;
    Value  value;
  };

  // We further group a sequence of consecutive blocks into a cluster.
  // The data for a cluster is represented as a dense array of
  // linked-lists, one list per contained block.
  static const int kClusterBits = 13;
  static const Number kClusterSize = 1 << (kBlockBits + kClusterBits);
  static const int kClusterBlocks = 1 << kClusterBits;

  // We use a simple chaining hash-table to represent the clusters.
  struct Cluster {
    Cluster* next;                      // Next cluster in hash table chain
    Number   id;                        // Cluster ID
    Entry*   blocks[kClusterBlocks];    // Per-block linked-lists
  };

  // Number of hash-table entries.  With the block-size/cluster-size
  // defined above, each cluster covers 1 MB, so a 4K-entry
  // hash-table will give an average hash-chain length of 1 for 4GB of
  // in-use memory.
  static const int kHashBits = 12;
  static const int kHashSize = 1 << 12;

  // Number of entry objects allocated at a time
  static const int ALLOC_COUNT = 64;

  Cluster** hashtable_;                 // The hash-table
  Entry*    free_;                      // Free list of unused Entry objects

  // Multiplicative hash function:
  // The value "kHashMultiplier" is the bottom 32 bits of
  //    int((sqrt(5)-1)/2 * 2^32)
  // This is a good multiplier as suggested in CLR, Knuth.  The hash
  // value is taken to be the top "k" bits of the bottom 32 bits
  // of the multiplied value.
  static const uint32_t kHashMultiplier = 2654435769u;
  static int HashInt(Number x) {
    // Multiply by a constant and take the top bits of the result.
    const uint32_t m = static_cast<uint32_t>(x) * kHashMultiplier;
    return static_cast<int>(m >> (32 - kHashBits));
  }

  // Find cluster object for specified address.  If not found
  // and "create" is true, create the object.  If not found
  // and "create" is false, return NULL.
  //
  // This method is bitwise-const if create is false.
  Cluster* FindCluster(Number address, bool create) {
    // Look in hashtable
    const Number cluster_id = address >> (kBlockBits + kClusterBits);
    const int h = HashInt(cluster_id);
    for (Cluster* c = hashtable_[h]; c != NULL; c = c->next) {
      if (c->id == cluster_id) {
        return c;
      }
    }

    // Create cluster if necessary
    if (create) {
      Cluster* c = New<Cluster>(1);
      c->id = cluster_id;
      c->next = hashtable_[h];
      hashtable_[h] = c;
      return c;
    }
    return NULL;
  }

  // Return the block ID for an address within its cluster
  static int BlockID(Number address) {
    return (address >> kBlockBits) & (kClusterBlocks - 1);
  }

  //--------------------------------------------------------------
  // Memory management -- we keep all objects we allocate linked
  // together in a singly linked list so we can get rid of them
  // when we are all done.  Furthermore, we allow the client to
  // pass in custom memory allocator/deallocator routines.
  //--------------------------------------------------------------
  struct Object {
    Object* next;
    // The real data starts here
  };

  Allocator   alloc_;                   // The allocator
  DeAllocator dealloc_;                 // The deallocator
  Object*     allocated_;               // List of allocated objects

  // Allocates a zeroed array of T with length "num".  Also inserts
  // the allocated block into a linked list so it can be deallocated
  // when we are all done.
  template <class T> T* New(int num) {
    void* ptr = (*alloc_)(sizeof(Object) + num*sizeof(T));
    memset(ptr, 0, sizeof(Object) + num*sizeof(T));
    Object* obj = reinterpret_cast<Object*>(ptr);
    obj->next = allocated_;
    allocated_ = obj;
    return reinterpret_cast<T*>(reinterpret_cast<Object*>(ptr) + 1);
  }
};

// More implementation details follow:

template <class Value>
AddressMap<Value>::AddressMap(Allocator alloc, DeAllocator dealloc)
  : free_(NULL),
    alloc_(alloc),
    dealloc_(dealloc),
    allocated_(NULL) {
  hashtable_ = New<Cluster*>(kHashSize);
}

template <class Value>
AddressMap<Value>::~AddressMap() {
  // De-allocate all of the objects we allocated
  for (Object* obj = allocated_; obj != NULL; /**/) {
    Object* next = obj->next;
    (*dealloc_)(obj);
    obj = next;
  }
}

template <class Value>
inline const Value* AddressMap<Value>::Find(Key key) const {
  return const_cast<AddressMap*>(this)->FindMutable(key);
}

template <class Value>
inline Value* AddressMap<Value>::FindMutable(Key key) {
  const Number num = reinterpret_cast<Number>(key);
  const Cluster* const c = FindCluster(num, false/*do not create*/);
  if (c != NULL) {
    for (Entry* e = c->blocks[BlockID(num)]; e != NULL; e = e->next) {
      if (e->key == key) {
        return &e->value;
      }
    }
  }
  return NULL;
}

template <class Value>
void AddressMap<Value>::Insert(Key key, Value value) {
  const Number num = reinterpret_cast<Number>(key);
  Cluster* const c = FindCluster(num, true/*create*/);

  // Look in linked-list for this block
  const int block = BlockID(num);
  for (Entry* e = c->blocks[block]; e != NULL; e = e->next) {
    if (e->key == key) {
      e->value = value;
      return;
    }
  }

  // Create entry
  if (free_ == NULL) {
    // Allocate a new batch of entries and add to free-list
    Entry* array = New<Entry>(ALLOC_COUNT);
    for (int i = 0; i < ALLOC_COUNT-1; i++) {
      array[i].next = &array[i+1];
    }
    array[ALLOC_COUNT-1].next = free_;
    free_ = &array[0];
  }
  Entry* e = free_;
  free_ = e->next;
  e->key = key;
  e->value = value;
  e->next = c->blocks[block];
  c->blocks[block] = e;
}

template <class Value>
bool AddressMap<Value>::FindAndRemove(Key key, Value* removed_value) {
  const Number num = reinterpret_cast<Number>(key);
  Cluster* const c = FindCluster(num, false/*do not create*/);
  if (c != NULL) {
    for (Entry** p = &c->blocks[BlockID(num)]; *p != NULL; p = &(*p)->next) {
      Entry* e = *p;
      if (e->key == key) {
        *removed_value = e->value;
        *p = e->next;         // Remove e from linked-list
        e->next = free_;      // Add e to free-list
        free_ = e;
        return true;
      }
    }
  }
  return false;
}

template <class Value>
const Value* AddressMap<Value>::FindInside(ValueSizeFunc size_func,
                                           size_t max_size,
                                           Key key,
                                           Key* res_key) {
  const Number key_num = reinterpret_cast<Number>(key);
  Number num = key_num;  // we'll move this to move back through the clusters
  while (1) {
    const Cluster* c = FindCluster(num, false/*do not create*/);
    if (c != NULL) {
      while (1) {
        const int block = BlockID(num);
        bool had_smaller_key = false;
        for (const Entry* e = c->blocks[block]; e != NULL; e = e->next) {
          const Number e_num = reinterpret_cast<Number>(e->key);
          if (e_num <= key_num) {
            if (e_num == key_num  ||      // to handle 0-sized ranges
                key_num < e_num + (*size_func)(e->value)) {
              *res_key = e->key;
              return &e->value;
            }
            had_smaller_key = true;
          }
        }
        if (had_smaller_key) return NULL;  // got a range before 'key'
                                           // and it did not contain 'key'
        if (block == 0) break;
        // try address-wise previous block
        num |= kBlockSize - 1;  // start at the last addr of prev block
        num -= kBlockSize;
        if (key_num - num > max_size) return NULL;
      }
    }
    if (num < kClusterSize) return NULL;  // first cluster
    // go to address-wise previous cluster to try
    num |= kClusterSize - 1;  // start at the last block of previous cluster
    num -= kClusterSize;
    if (key_num - num > max_size) return NULL;
    // Having max_size to limit the search is crucial: else
    // we have to traverse a lot of empty clusters (or blocks).
    // We can avoid needing max_size if we put clusters into
    // a search tree, but performance suffers considerably
    // if we use this approach by using std::set.
  }
}

template <class Value>
template <class Type>
inline void AddressMap<Value>::Iterate(void (*callback)(Key, Value*, Type),
                                       Type arg) const {
  // We could optimize this by traversing only non-empty clusters and/or blocks
  // but it does not speed up heap-checker noticeably.
  for (int h = 0; h < kHashSize; ++h) {
    for (const Cluster* c = hashtable_[h]; c != NULL; c = c->next) {
      for (int b = 0; b < kClusterBlocks; ++b) {
        for (Entry* e = c->blocks[b]; e != NULL; e = e->next) {
          callback(e->key, &e->value, arg);
        }
      }
    }
  }
}

#endif  // BASE_ADDRESSMAP_INL_H_
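
A hypothetical usage sketch for AddressMap (not part of the commit),
with malloc/free standing in for the custom allocator/deallocator:

   #include "addressmap-inl.h"
   #include <stdio.h>
   #include <stdlib.h>

   int main() {
     AddressMap<int> map(&malloc, &free);  // matches Allocator/DeAllocator
     int slot = 0;                         // any address can serve as a key
     map.Insert(&slot, 42);
     const int* v = map.Find(&slot);       // points at the stored 42
     if (v) printf("found %d\n", *v);
     int removed;
     map.FindAndRemove(&slot, &removed);   // removed == 42
     return 0;
   }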

439  3party/gperftools/src/base/basictypes.h (new file)

@@ -0,0 +1,439 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#ifndef _BASICTYPES_H_
#define _BASICTYPES_H_

#include <config.h>
#include <string.h>       // for memcpy()
#include <inttypes.h>     // gets us PRId64, etc

// To use this in an autoconf setting, make sure you run the following
// autoconf macros:
//    AC_HEADER_STDC              /* for stdint_h and inttypes_h */
//    AC_CHECK_TYPES([__int64])   /* defined in some windows platforms */

#include <stdint.h>             // to get uint16_t (ISO naming madness)
#include <sys/types.h>          // our last best hope for uint16_t

// Standard typedefs
// All Google code is compiled with -funsigned-char to make "char"
// unsigned.  Google code therefore doesn't need a "uchar" type.
// TODO(csilvers): how do we make sure unsigned-char works on non-gcc systems?
typedef signed char         schar;
typedef int8_t              int8;
typedef int16_t             int16;
typedef int32_t             int32;
typedef int64_t             int64;

// NOTE: unsigned types are DANGEROUS in loops and other arithmetical
// places.  Use the signed types unless your variable represents a bit
// pattern (eg a hash value) or you really need the extra bit.  Do NOT
// use 'unsigned' to express "this value should always be positive";
// use assertions for this.

typedef uint8_t            uint8;
typedef uint16_t           uint16;
typedef uint32_t           uint32;
typedef uint64_t           uint64;

const uint16 kuint16max = (   (uint16) 0xFFFF);
const uint32 kuint32max = (   (uint32) 0xFFFFFFFF);
const uint64 kuint64max = ( (((uint64) kuint32max) << 32) | kuint32max );

const  int8  kint8max   = (   (  int8) 0x7F);
const  int16 kint16max  = (   ( int16) 0x7FFF);
const  int32 kint32max  = (   ( int32) 0x7FFFFFFF);
const  int64 kint64max  = ( ((( int64) kint32max) << 32) | kuint32max );

const  int8  kint8min   = (   (  int8) 0x80);
const  int16 kint16min  = (   ( int16) 0x8000);
const  int32 kint32min  = (   ( int32) 0x80000000);
const  int64 kint64min  = ( (((uint64) kint32min) << 32) | 0 );

// Define the "portable" printf and scanf macros, if they're not
// already there (via the inttypes.h we #included above, hopefully).
// Mostly it's old systems that don't support inttypes.h, so we assume
// they're 32 bit.
#ifndef PRIx64
#define PRIx64 "llx"
#endif
#ifndef SCNx64
#define SCNx64 "llx"
#endif
#ifndef PRId64
#define PRId64 "lld"
#endif
#ifndef SCNd64
#define SCNd64 "lld"
#endif
#ifndef PRIu64
#define PRIu64 "llu"
#endif
#ifndef PRIxPTR
#define PRIxPTR "lx"
#endif

// Also allow for printing of a pthread_t.
#define GPRIuPTHREAD "lu"
#define GPRIxPTHREAD "lx"
#if defined(__CYGWIN__) || defined(__CYGWIN32__) || defined(__APPLE__) || defined(__FreeBSD__)
#define PRINTABLE_PTHREAD(pthreadt) reinterpret_cast<uintptr_t>(pthreadt)
#elif defined(__QNXNTO__)
#define PRINTABLE_PTHREAD(pthreadt) static_cast<intptr_t>(pthreadt)
#else
#define PRINTABLE_PTHREAD(pthreadt) pthreadt
#endif

#if defined(__GNUC__)
#define PREDICT_TRUE(x) __builtin_expect(!!(x), 1)
#define PREDICT_FALSE(x) __builtin_expect(!!(x), 0)
#else
#define PREDICT_TRUE(x) (x)
#define PREDICT_FALSE(x) (x)
#endif

// A macro to disallow the evil copy constructor and operator= functions
// This should be used in the private: declarations for a class
#define DISALLOW_EVIL_CONSTRUCTORS(TypeName)    \
  TypeName(const TypeName&);                    \
  void operator=(const TypeName&)

// An alternate name that leaves out the moral judgment... :-)
#define DISALLOW_COPY_AND_ASSIGN(TypeName) DISALLOW_EVIL_CONSTRUCTORS(TypeName)

// The COMPILE_ASSERT macro can be used to verify that a compile time
// expression is true.  For example, you could use it to verify the
// size of a static array:
//
//   COMPILE_ASSERT(sizeof(num_content_type_names) == sizeof(int),
//                  content_type_names_incorrect_size);
//
// or to make sure a struct is smaller than a certain size:
//
//   COMPILE_ASSERT(sizeof(foo) < 128, foo_too_large);
//
// The second argument to the macro is the name of the variable.  If
// the expression is false, most compilers will issue a warning/error
// containing the name of the variable.
//
// Implementation details of COMPILE_ASSERT:
//
// - COMPILE_ASSERT works by defining an array type that has -1
//   elements (and thus is invalid) when the expression is false.
//
// - The simpler definition
//
//     #define COMPILE_ASSERT(expr, msg) typedef char msg[(expr) ? 1 : -1]
//
//   does not work, as gcc supports variable-length arrays whose sizes
//   are determined at run-time (this is gcc's extension and not part
//   of the C++ standard).  As a result, gcc fails to reject the
//   following code with the simple definition:
//
//     int foo;
//     COMPILE_ASSERT(foo, msg); // not supposed to compile as foo is
//                               // not a compile-time constant.
//
// - By using the type CompileAssert<(bool(expr))>, we ensure that
//   expr is a compile-time constant.  (Template arguments must be
//   determined at compile-time.)
//
// - The outer parentheses in CompileAssert<(bool(expr))> are necessary
//   to work around a bug in gcc 3.4.4 and 4.0.1.  If we had written
//
//     CompileAssert<bool(expr)>
//
//   instead, these compilers will refuse to compile
//
//     COMPILE_ASSERT(5 > 0, some_message);
//
//   (They seem to think the ">" in "5 > 0" marks the end of the
//   template argument list.)
//
// - The array size is (bool(expr) ? 1 : -1), instead of simply
//
//     ((expr) ? 1 : -1).
//
//   This is to avoid running into a bug in MS VC 7.1, which
//   causes ((0.0) ? 1 : -1) to incorrectly evaluate to 1.

template <bool>
struct CompileAssert {
};

#ifdef HAVE___ATTRIBUTE__
# define ATTRIBUTE_UNUSED __attribute__((unused))
#else
# define ATTRIBUTE_UNUSED
#endif

#if defined(HAVE___ATTRIBUTE__) && defined(HAVE_TLS)
#define ATTR_INITIAL_EXEC __attribute__ ((tls_model ("initial-exec")))
#else
#define ATTR_INITIAL_EXEC
#endif

#define COMPILE_ASSERT(expr, msg)                               \
  typedef CompileAssert<(bool(expr))> msg[bool(expr) ? 1 : -1] ATTRIBUTE_UNUSED

#define arraysize(a)  (sizeof(a) / sizeof(*(a)))

#define OFFSETOF_MEMBER(strct, field)                                   \
  (reinterpret_cast<char*>(&reinterpret_cast<strct*>(16)->field) -      \
   reinterpret_cast<char*>(16))

// bit_cast<Dest,Source> implements the equivalent of
// "*reinterpret_cast<Dest*>(&source)".
//
// The reinterpret_cast method would produce undefined behavior
// according to ISO C++ specification section 3.10 -15-.
// bit_cast<> calls memcpy() which is blessed by the standard,
// especially by the example in section 3.9.
//
// Fortunately memcpy() is very fast.  In optimized mode, with a
// constant size, gcc 2.95.3, gcc 4.0.1, and msvc 7.1 produce inline
// code with the minimal amount of data movement.  On a 32-bit system,
// memcpy(d,s,4) compiles to one load and one store, and memcpy(d,s,8)
// compiles to two loads and two stores.

template <class Dest, class Source>
inline Dest bit_cast(const Source& source) {
  COMPILE_ASSERT(sizeof(Dest) == sizeof(Source), bitcasting_unequal_sizes);
  Dest dest;
  memcpy(&dest, &source, sizeof(dest));
  return dest;
}

// bit_store<Dest,Source> implements the equivalent of
// "dest = *reinterpret_cast<Dest*>(&source)".
//
// This prevents undefined behavior when the dest pointer is unaligned.
template <class Dest, class Source>
inline void bit_store(Dest *dest, const Source *source) {
  COMPILE_ASSERT(sizeof(Dest) == sizeof(Source), bitcasting_unequal_sizes);
  memcpy(dest, source, sizeof(Dest));
}
|
||||
|
||||
#ifdef HAVE___ATTRIBUTE__
|
||||
# define ATTRIBUTE_WEAK __attribute__((weak))
|
||||
# define ATTRIBUTE_NOINLINE __attribute__((noinline))
|
||||
#else
|
||||
# define ATTRIBUTE_WEAK
|
||||
# define ATTRIBUTE_NOINLINE
|
||||
#endif
|
||||
|
||||
#ifdef _MSC_VER
|
||||
#undef ATTRIBUTE_NOINLINE
|
||||
#define ATTRIBUTE_NOINLINE __declspec(noinline)
|
||||
#endif
|
||||
|
||||
#if defined(HAVE___ATTRIBUTE__) && defined(__ELF__)
|
||||
# define ATTRIBUTE_VISIBILITY_HIDDEN __attribute__((visibility("hidden")))
|
||||
#else
|
||||
# define ATTRIBUTE_VISIBILITY_HIDDEN
|
||||
#endif
|
||||
|
// Section attributes are supported for both ELF and Mach-O, but in
// very different ways.  Here's the API we provide:
// 1) ATTRIBUTE_SECTION: put this with the declaration of all functions
//    you want to be in the same linker section.
// 2) DEFINE_ATTRIBUTE_SECTION_VARS: must be called once per unique
//    name.  You want to make sure this is executed before any
//    DECLARE_ATTRIBUTE_SECTION_VARS; the easiest way is to put them
//    in the same .cc file.  Put this call at the global level.
// 3) INIT_ATTRIBUTE_SECTION_VARS: you can scatter calls to this in
//    multiple places to help ensure execution before any
//    DECLARE_ATTRIBUTE_SECTION_VARS.  You must have at least one
//    DEFINE, but you can have many INITs.  Put each in its own scope.
// 4) DECLARE_ATTRIBUTE_SECTION_VARS: must be called before using
//    ATTRIBUTE_SECTION_START or ATTRIBUTE_SECTION_STOP on a name.
//    Put this call at the global level.
// 5) ATTRIBUTE_SECTION_START/ATTRIBUTE_SECTION_STOP: call this to say
//    where in memory a given section is.  All functions declared with
//    ATTRIBUTE_SECTION are guaranteed to be between START and STOP
//    (a usage sketch follows below).

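// Usage sketch (editor's addition; "google_malloc" and MyHook are
// illustrative names):
//
//   void MyHook() ATTRIBUTE_SECTION(google_malloc);
//   DEFINE_ATTRIBUTE_SECTION_VARS(google_malloc);   // once, in one .cc file
//   ...
//   DECLARE_ATTRIBUTE_SECTION_VARS(google_malloc);
//   void* start = ATTRIBUTE_SECTION_START(google_malloc);
//   void* stop  = ATTRIBUTE_SECTION_STOP(google_malloc);
//   // [start, stop) now brackets every function placed in the section.
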
#if defined(HAVE___ATTRIBUTE__) && defined(__ELF__)
# define ATTRIBUTE_SECTION(name) __attribute__ ((section (#name))) __attribute__((noinline))

  // Weak section declaration to be used as a global declaration
  // for ATTRIBUTE_SECTION_START|STOP(name) to compile and link
  // even without functions with ATTRIBUTE_SECTION(name).
# define DECLARE_ATTRIBUTE_SECTION_VARS(name) \
    extern char __start_##name[] ATTRIBUTE_WEAK; \
    extern char __stop_##name[] ATTRIBUTE_WEAK
# define INIT_ATTRIBUTE_SECTION_VARS(name)    // no-op for ELF
# define DEFINE_ATTRIBUTE_SECTION_VARS(name)  // no-op for ELF

  // Return void* pointers to start/end of a section of code with functions
  // having ATTRIBUTE_SECTION(name), or 0 if no such function exists.
  // One must DECLARE_ATTRIBUTE_SECTION_VARS(name) for this to compile and link.
# define ATTRIBUTE_SECTION_START(name) (reinterpret_cast<void*>(__start_##name))
# define ATTRIBUTE_SECTION_STOP(name) (reinterpret_cast<void*>(__stop_##name))
# define HAVE_ATTRIBUTE_SECTION_START 1

#elif defined(HAVE___ATTRIBUTE__) && defined(__MACH__)
# define ATTRIBUTE_SECTION(name) __attribute__ ((section ("__TEXT, " #name))) __attribute__((noinline))

#include <mach-o/getsect.h>
#include <mach-o/dyld.h>
class AssignAttributeStartEnd {
 public:
  AssignAttributeStartEnd(const char* name, char** pstart, char** pend) {
    // Find out which dynamic library, if any, this section name is defined in.
    for (int i = _dyld_image_count() - 1; i >= 0; --i) {
      const mach_header* hdr = _dyld_get_image_header(i);
#ifdef MH_MAGIC_64
      if (hdr->magic == MH_MAGIC_64) {
        uint64_t len;
        *pstart = getsectdatafromheader_64((mach_header_64*)hdr,
                                           "__TEXT", name, &len);
        if (*pstart) {   // NULL if not defined in this dynamic library
          *pstart += _dyld_get_image_vmaddr_slide(i);   // correct for reloc
          *pend = *pstart + len;
          return;
        }
      }
#endif
      if (hdr->magic == MH_MAGIC) {
        uint32_t len;
        *pstart = getsectdatafromheader(hdr, "__TEXT", name, &len);
        if (*pstart) {   // NULL if not defined in this dynamic library
          *pstart += _dyld_get_image_vmaddr_slide(i);   // correct for reloc
          *pend = *pstart + len;
          return;
        }
      }
    }

    // If we get here, not defined in a dll at all.  See if defined statically.
    unsigned long len;    // don't ask me why this type isn't uint32_t too...
    *pstart = getsectdata("__TEXT", name, &len);
    *pend = *pstart + len;
  }
};

#define DECLARE_ATTRIBUTE_SECTION_VARS(name) \
  extern char* __start_##name; \
  extern char* __stop_##name

#define INIT_ATTRIBUTE_SECTION_VARS(name) \
  DECLARE_ATTRIBUTE_SECTION_VARS(name); \
  static const AssignAttributeStartEnd __assign_##name( \
    #name, &__start_##name, &__stop_##name)

#define DEFINE_ATTRIBUTE_SECTION_VARS(name) \
  char* __start_##name, *__stop_##name; \
  INIT_ATTRIBUTE_SECTION_VARS(name)

# define ATTRIBUTE_SECTION_START(name) (reinterpret_cast<void*>(__start_##name))
# define ATTRIBUTE_SECTION_STOP(name) (reinterpret_cast<void*>(__stop_##name))
# define HAVE_ATTRIBUTE_SECTION_START 1

#else  // not HAVE___ATTRIBUTE__ && __ELF__, nor HAVE___ATTRIBUTE__ && __MACH__
# define ATTRIBUTE_SECTION(name)
# define DECLARE_ATTRIBUTE_SECTION_VARS(name)
# define INIT_ATTRIBUTE_SECTION_VARS(name)
# define DEFINE_ATTRIBUTE_SECTION_VARS(name)
# define ATTRIBUTE_SECTION_START(name) (reinterpret_cast<void*>(0))
# define ATTRIBUTE_SECTION_STOP(name) (reinterpret_cast<void*>(0))

#endif  // HAVE___ATTRIBUTE__ and __ELF__ or __MACH__

#if defined(HAVE___ATTRIBUTE__)
# if (defined(__i386__) || defined(__x86_64__))
#   define CACHELINE_ALIGNED __attribute__((aligned(64)))
# elif (defined(__PPC__) || defined(__PPC64__) || defined(__ppc__) || defined(__ppc64__))
#   define CACHELINE_ALIGNED __attribute__((aligned(16)))
# elif (defined(__arm__))
#   define CACHELINE_ALIGNED __attribute__((aligned(64)))
    // Some ARMs have shorter cache lines (the ARM1176JZF-S has 32-byte
    // lines, for example), but 64-byte alignment implies 32-byte alignment.
# elif (defined(__mips__))
#   define CACHELINE_ALIGNED __attribute__((aligned(128)))
# elif (defined(__aarch64__))
#   define CACHELINE_ALIGNED __attribute__((aligned(64)))
    // Implementation specific; Cortex-A53 and A57 should have 64-byte lines.
# elif (defined(__s390__))
#   define CACHELINE_ALIGNED __attribute__((aligned(256)))
# elif (defined(__riscv) && __riscv_xlen == 64)
#   define CACHELINE_ALIGNED __attribute__((aligned(64)))
# elif defined(__loongarch64)
#   define CACHELINE_ALIGNED __attribute__((aligned(64)))
# else
#   error Could not determine cache line length - unknown architecture
# endif
#else
# define CACHELINE_ALIGNED
#endif  // defined(HAVE___ATTRIBUTE__)

#if defined(HAVE___ATTRIBUTE__ALIGNED_FN)
# define CACHELINE_ALIGNED_FN CACHELINE_ALIGNED
#else
# define CACHELINE_ALIGNED_FN
#endif

// Structure for discovering alignment
union MemoryAligner {
  void*  p;
  double d;
  size_t s;
} CACHELINE_ALIGNED;
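
// Editor's note: CACHELINE_ALIGNED is typically used to keep hot data items
// from sharing a cache line (false sharing); a minimal sketch:
//
//   struct Shard { long counter; } CACHELINE_ALIGNED;
//   static Shard shards[8];   // each shard starts on its own cache line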

#if defined(HAVE___ATTRIBUTE__) && defined(__ELF__)
#define ATTRIBUTE_HIDDEN __attribute__((visibility("hidden")))
#else
#define ATTRIBUTE_HIDDEN
#endif

#if defined(__GNUC__)
#define ATTRIBUTE_ALWAYS_INLINE __attribute__((always_inline))
#elif defined(_MSC_VER)
#define ATTRIBUTE_ALWAYS_INLINE __forceinline
#else
#define ATTRIBUTE_ALWAYS_INLINE
#endif

// The following enum should be used only as a constructor argument to indicate
// that the variable has static storage class, and that the constructor should
// do nothing to its state.  It indicates to the reader that it is legal to
// declare a static instance of the class, provided the constructor is given
// the base::LINKER_INITIALIZED argument.  Normally, it is unsafe to declare a
// static variable that has a constructor or a destructor because invocation
// order is undefined.  However, IF the type can be initialized by filling with
// zeroes (which the loader does for static variables), AND the destructor also
// does nothing to the storage, then a constructor declared as
//    explicit MyClass(base::LinkerInitialized x) {}
// and invoked as
//    static MyClass my_variable_name(base::LINKER_INITIALIZED);
// is safe.
namespace base {
enum LinkerInitialized { LINKER_INITIALIZED };
}

#endif  // _BASICTYPES_H_
175
3party/gperftools/src/base/commandlineflags.h
Normal file
@ -0,0 +1,175 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// This file is a compatibility layer that defines Google's version of
// command line flags that are used for configuration.
//
// We put flags into their own namespace.  It is purposefully
// named in an opaque way that people should have trouble typing
// directly.  The idea is that DEFINE puts the flag in the weird
// namespace, and DECLARE imports the flag from there into the
// current namespace.  The net result is to force people to use
// DECLARE to get access to a flag, rather than saying
//   extern bool FLAGS_logtostderr;
// or some such instead.  We want this so we can put extra
// functionality (like sanity-checking) in DECLARE if we want,
// and make sure it is picked up everywhere.
//
// We also put the type of the variable in the namespace, so that
// people can't DECLARE_int32 something that they DEFINE_bool'd
// elsewhere.
#ifndef BASE_COMMANDLINEFLAGS_H_
#define BASE_COMMANDLINEFLAGS_H_

#include <config.h>
#include <string>
#include <string.h>               // for memchr
#include <stdlib.h>               // for getenv

#include "base/basictypes.h"

#define DECLARE_VARIABLE(type, name)                                           \
  namespace FLAG__namespace_do_not_use_directly_use_DECLARE_##type##_instead { \
  extern PERFTOOLS_DLL_DECL type FLAGS_##name;                                 \
  }                                                                            \
  using FLAG__namespace_do_not_use_directly_use_DECLARE_##type##_instead::FLAGS_##name

#define DEFINE_VARIABLE(type, name, value, meaning)                            \
  namespace FLAG__namespace_do_not_use_directly_use_DECLARE_##type##_instead { \
  PERFTOOLS_DLL_DECL type FLAGS_##name(value);                                 \
  char FLAGS_no##name;                                                         \
  }                                                                            \
  using FLAG__namespace_do_not_use_directly_use_DECLARE_##type##_instead::FLAGS_##name
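
// Illustration (editor's addition; flag name is hypothetical): a module
// defines a flag once and any client imports it with the matching DECLARE:
//
//   DEFINE_int32(my_sample_period, 0, "sampling period");   // in one .cc
//   DECLARE_int32(my_sample_period);                        // in users
//
// Mismatched types (say, DECLARE_bool of a DEFINE_int32'd flag) fail to
// build because the opaque namespace embeds the type name.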

// bool specialization
#define DECLARE_bool(name) \
  DECLARE_VARIABLE(bool, name)
#define DEFINE_bool(name, value, meaning) \
  DEFINE_VARIABLE(bool, name, value, meaning)

// int32 specialization
#define DECLARE_int32(name) \
  DECLARE_VARIABLE(int32, name)
#define DEFINE_int32(name, value, meaning) \
  DEFINE_VARIABLE(int32, name, value, meaning)

// int64 specialization
#define DECLARE_int64(name) \
  DECLARE_VARIABLE(int64, name)
#define DEFINE_int64(name, value, meaning) \
  DEFINE_VARIABLE(int64, name, value, meaning)

// uint64 specialization
#define DECLARE_uint64(name) \
  DECLARE_VARIABLE(uint64, name)
#define DEFINE_uint64(name, value, meaning) \
  DEFINE_VARIABLE(uint64, name, value, meaning)

// double specialization
#define DECLARE_double(name) \
  DECLARE_VARIABLE(double, name)
#define DEFINE_double(name, value, meaning) \
  DEFINE_VARIABLE(double, name, value, meaning)

// Special case for string, because we have to specify the namespace
// std::string, which doesn't play nicely with our FLAG__namespace hackery.
#define DECLARE_string(name)                                                  \
  namespace FLAG__namespace_do_not_use_directly_use_DECLARE_string_instead {  \
  extern std::string FLAGS_##name;                                            \
  }                                                                           \
  using FLAG__namespace_do_not_use_directly_use_DECLARE_string_instead::FLAGS_##name
#define DEFINE_string(name, value, meaning)                                   \
  namespace FLAG__namespace_do_not_use_directly_use_DECLARE_string_instead {  \
  std::string FLAGS_##name(value);                                            \
  char FLAGS_no##name;                                                        \
  }                                                                           \
  using FLAG__namespace_do_not_use_directly_use_DECLARE_string_instead::FLAGS_##name

// implemented in sysinfo.cc
namespace tcmalloc {
namespace commandlineflags {

inline bool StringToBool(const char *value, bool def) {
  if (!value) {
    return def;
  }
  switch (value[0]) {
    case 't':
    case 'T':
    case 'y':
    case 'Y':
    case '1':
    case '\0':
      return true;
  }
  return false;
}

inline int StringToInt(const char *value, int def) {
  if (!value) {
    return def;
  }
  return strtol(value, NULL, 10);
}

inline long long StringToLongLong(const char *value, long long def) {
  if (!value) {
    return def;
  }
  return strtoll(value, NULL, 10);
}

inline double StringToDouble(const char *value, double def) {
  if (!value) {
    return def;
  }
  return strtod(value, NULL);
}

}  // namespace commandlineflags
}  // namespace tcmalloc

// These macros (they could be functions, but I don't want to bother with a
// .cc file) make it easier to initialize flags from the environment.

#define EnvToString(envname, dflt)   \
  (!getenv(envname) ? (dflt) : getenv(envname))

#define EnvToBool(envname, dflt)   \
  tcmalloc::commandlineflags::StringToBool(getenv(envname), dflt)

#define EnvToInt(envname, dflt)  \
  tcmalloc::commandlineflags::StringToInt(getenv(envname), dflt)

#define EnvToInt64(envname, dflt)  \
  tcmalloc::commandlineflags::StringToLongLong(getenv(envname), dflt)

#define EnvToDouble(envname, dflt)  \
  tcmalloc::commandlineflags::StringToDouble(getenv(envname), dflt)

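// Example (editor's sketch; flag and variable names are illustrative): this
// is how a flag default is typically seeded from the process environment:
//
//   DEFINE_int64(my_sample_period,
//                EnvToInt64("MY_SAMPLE_PERIOD", 0),
//                "Sampling period, overridable via the environment");
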
#endif  // BASE_COMMANDLINEFLAGS_H_
60
3party/gperftools/src/base/dynamic_annotations.cc
Normal file
@ -0,0 +1,60 @@
// -*- Mode: c; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2008-2009, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Kostya Serebryany
 */

#include "config.h"
#include <stdlib.h>
#include <string.h>

#include "base/dynamic_annotations.h"
#include "getenv_safe.h" // for TCMallocGetenvSafe

static int GetRunningOnValgrind(void) {
#ifdef RUNNING_ON_VALGRIND
  if (RUNNING_ON_VALGRIND) return 1;
#endif
  const char *running_on_valgrind_str = TCMallocGetenvSafe("RUNNING_ON_VALGRIND");
  if (running_on_valgrind_str) {
    return strcmp(running_on_valgrind_str, "0") != 0;
  }
  return 0;
}

/* See the comments in dynamic_annotations.h */
int RunningOnValgrind(void) {
  static volatile int running_on_valgrind = -1;
  int local_running_on_valgrind = running_on_valgrind;
  if (local_running_on_valgrind == -1)
    running_on_valgrind = local_running_on_valgrind = GetRunningOnValgrind();
  return local_running_on_valgrind;
}
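
/* Usage sketch (editor's addition): callers typically branch on this to
 * skip behavior that confuses valgrind, e.g.
 *
 *   if (RunningOnValgrind()) {
 *     return;   // take the plain, uninstrumented code path
 *   }
 *
 * The answer is computed once and cached in running_on_valgrind. */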
86
3party/gperftools/src/base/dynamic_annotations.h
Normal file
@ -0,0 +1,86 @@
/* -*- Mode: c; c-basic-offset: 2; indent-tabs-mode: nil -*- */
/* Copyright (c) 2008, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Kostya Serebryany
 */

/* This file defines dynamic annotations for use with dynamic analysis
   tools such as valgrind, PIN, etc.

   A dynamic annotation is a source code annotation that affects
   the generated code (that is, the annotation is not a comment).
   Each such annotation is attached to a particular
   instruction and/or to a particular object (address) in the program.

   The annotations that should be used by users are macros in all upper-case
   (e.g., ANNOTATE_NEW_MEMORY).

   The actual implementation of these macros may differ depending on the
   dynamic analysis tool being used.

   See http://code.google.com/p/data-race-test/ for more information.

   This file supports the following dynamic analysis tools:
   - None (DYNAMIC_ANNOTATIONS_ENABLED is not defined or zero).
     Macros are defined empty.
   - ThreadSanitizer, Helgrind, DRD (DYNAMIC_ANNOTATIONS_ENABLED is 1).
     Macros are defined as calls to non-inlinable empty functions
     that are intercepted by Valgrind. */

#ifndef BASE_DYNAMIC_ANNOTATIONS_H_
#define BASE_DYNAMIC_ANNOTATIONS_H_

#ifdef __cplusplus
extern "C" {
#endif

/* Return non-zero value if running under valgrind.

   If "valgrind.h" is included into dynamic_annotations.c,
   the regular valgrind mechanism will be used.
   See http://valgrind.org/docs/manual/manual-core-adv.html about
   RUNNING_ON_VALGRIND and other valgrind "client requests".
   The file "valgrind.h" may be obtained by doing
      svn co svn://svn.valgrind.org/valgrind/trunk/include

   If for some reason you can't use "valgrind.h" or want to fake valgrind,
   there are two ways to make this function return non-zero:
     - Use environment variable: export RUNNING_ON_VALGRIND=1
     - Make your tool intercept the function RunningOnValgrind() and
       change its return value.
 */
int RunningOnValgrind(void);

#ifdef __cplusplus
}
#endif

#endif  /* BASE_DYNAMIC_ANNOTATIONS_H_ */
434
3party/gperftools/src/base/elf_mem_image.cc
Normal file
@ -0,0 +1,434 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Paul Pluzhnikov
//
// Allow dynamic symbol lookup in an in-memory Elf image.
//

#include "base/elf_mem_image.h"

#ifdef HAVE_ELF_MEM_IMAGE  // defined in elf_mem_image.h

#include <stddef.h>   // for size_t, ptrdiff_t
#include "base/logging.h"

// From binutils/include/elf/common.h (this doesn't appear to be documented
// anywhere else).
//
//   /* This flag appears in a Versym structure.  It means that the symbol
//      is hidden, and is only visible with an explicit version number.
//      This is a GNU extension.  */
//   #define VERSYM_HIDDEN 0x8000
//
//   /* This is the mask for the rest of the Versym information.  */
//   #define VERSYM_VERSION 0x7fff

#define VERSYM_VERSION 0x7fff

namespace base {

namespace {
template <int N> class ElfClass {
 public:
  static const int kElfClass = -1;
  static int ElfBind(const ElfW(Sym) *) {
    CHECK(false);  // << "Unexpected word size";
    return 0;
  }
  static int ElfType(const ElfW(Sym) *) {
    CHECK(false);  // << "Unexpected word size";
    return 0;
  }
};

template <> class ElfClass<32> {
 public:
  static const int kElfClass = ELFCLASS32;
  static int ElfBind(const ElfW(Sym) *symbol) {
    return ELF32_ST_BIND(symbol->st_info);
  }
  static int ElfType(const ElfW(Sym) *symbol) {
    return ELF32_ST_TYPE(symbol->st_info);
  }
};

template <> class ElfClass<64> {
 public:
  static const int kElfClass = ELFCLASS64;
  static int ElfBind(const ElfW(Sym) *symbol) {
    return ELF64_ST_BIND(symbol->st_info);
  }
  static int ElfType(const ElfW(Sym) *symbol) {
    return ELF64_ST_TYPE(symbol->st_info);
  }
};

typedef ElfClass<__WORDSIZE> CurrentElfClass;

// Extract an element from one of the ELF tables, cast it to desired type.
// This is just simple arithmetic and a glorified cast.
// Callers are responsible for bounds checking.
template <class T>
const T* GetTableElement(const ElfW(Ehdr) *ehdr,
                         ElfW(Off) table_offset,
                         ElfW(Word) element_size,
                         size_t index) {
  return reinterpret_cast<const T*>(reinterpret_cast<const char *>(ehdr)
                                    + table_offset
                                    + index * element_size);
}
}  // namespace

const void *const ElfMemImage::kInvalidBase =
    reinterpret_cast<const void *>(~0L);

ElfMemImage::ElfMemImage(const void *base) {
  CHECK(base != kInvalidBase);
  Init(base);
}

int ElfMemImage::GetNumSymbols() const {
  if (!hash_) {
    return 0;
  }
  // See http://www.caldera.com/developers/gabi/latest/ch5.dynamic.html#hash
  return hash_[1];
}

const ElfW(Sym) *ElfMemImage::GetDynsym(int index) const {
  CHECK_LT(index, GetNumSymbols());
  return dynsym_ + index;
}

const ElfW(Versym) *ElfMemImage::GetVersym(int index) const {
  CHECK_LT(index, GetNumSymbols());
  return versym_ + index;
}

const ElfW(Phdr) *ElfMemImage::GetPhdr(int index) const {
  CHECK_LT(index, ehdr_->e_phnum);
  return GetTableElement<ElfW(Phdr)>(ehdr_,
                                     ehdr_->e_phoff,
                                     ehdr_->e_phentsize,
                                     index);
}

const char *ElfMemImage::GetDynstr(ElfW(Word) offset) const {
  CHECK_LT(offset, strsize_);
  return dynstr_ + offset;
}

const void *ElfMemImage::GetSymAddr(const ElfW(Sym) *sym) const {
  if (sym->st_shndx == SHN_UNDEF || sym->st_shndx >= SHN_LORESERVE) {
    // Symbol corresponds to "special" (e.g. SHN_ABS) section.
    return reinterpret_cast<const void *>(sym->st_value);
  }
  CHECK_LT(link_base_, sym->st_value);
  return GetTableElement<char>(ehdr_, 0, 1, sym->st_value) - link_base_;
}

const ElfW(Verdef) *ElfMemImage::GetVerdef(int index) const {
  CHECK_LE(index, verdefnum_);
  const ElfW(Verdef) *version_definition = verdef_;
  while (version_definition->vd_ndx < index && version_definition->vd_next) {
    const char *const version_definition_as_char =
        reinterpret_cast<const char *>(version_definition);
    version_definition =
        reinterpret_cast<const ElfW(Verdef) *>(version_definition_as_char +
                                               version_definition->vd_next);
  }
  return version_definition->vd_ndx == index ? version_definition : NULL;
}

const ElfW(Verdaux) *ElfMemImage::GetVerdefAux(
    const ElfW(Verdef) *verdef) const {
  return reinterpret_cast<const ElfW(Verdaux) *>(verdef+1);
}

const char *ElfMemImage::GetVerstr(ElfW(Word) offset) const {
  CHECK_LT(offset, strsize_);
  return dynstr_ + offset;
}

void ElfMemImage::Init(const void *base) {
  ehdr_      = NULL;
  dynsym_    = NULL;
  dynstr_    = NULL;
  versym_    = NULL;
  verdef_    = NULL;
  hash_      = NULL;
  strsize_   = 0;
  verdefnum_ = 0;
  link_base_ = ~0L;  // Sentinel: PT_LOAD .p_vaddr can't possibly be this.
  if (!base) {
    return;
  }
  const intptr_t base_as_uintptr_t = reinterpret_cast<uintptr_t>(base);
  // Fake VDSO has low bit set.
  const bool fake_vdso = ((base_as_uintptr_t & 1) != 0);
  base = reinterpret_cast<const void *>(base_as_uintptr_t & ~1);
  const char *const base_as_char = reinterpret_cast<const char *>(base);
  if (base_as_char[EI_MAG0] != ELFMAG0 || base_as_char[EI_MAG1] != ELFMAG1 ||
      base_as_char[EI_MAG2] != ELFMAG2 || base_as_char[EI_MAG3] != ELFMAG3) {
    RAW_DCHECK(false, "no ELF magic");  // at %p", base);
    return;
  }
  int elf_class = base_as_char[EI_CLASS];
  if (elf_class != CurrentElfClass::kElfClass) {
    DCHECK_EQ(elf_class, CurrentElfClass::kElfClass);
    return;
  }
  switch (base_as_char[EI_DATA]) {
    case ELFDATA2LSB: {
      if (__LITTLE_ENDIAN != __BYTE_ORDER) {
        DCHECK_EQ(__LITTLE_ENDIAN, __BYTE_ORDER);  // << ": wrong byte order";
        return;
      }
      break;
    }
    case ELFDATA2MSB: {
      if (__BIG_ENDIAN != __BYTE_ORDER) {
        DCHECK_EQ(__BIG_ENDIAN, __BYTE_ORDER);  // << ": wrong byte order";
        return;
      }
      break;
    }
    default: {
      RAW_DCHECK(false, "unexpected data encoding");  // << base_as_char[EI_DATA];
      return;
    }
  }

  ehdr_ = reinterpret_cast<const ElfW(Ehdr) *>(base);
  const ElfW(Phdr) *dynamic_program_header = NULL;
  for (int i = 0; i < ehdr_->e_phnum; ++i) {
    const ElfW(Phdr) *const program_header = GetPhdr(i);
    switch (program_header->p_type) {
      case PT_LOAD:
        if (link_base_ == ~0L) {
          link_base_ = program_header->p_vaddr;
        }
        break;
      case PT_DYNAMIC:
        dynamic_program_header = program_header;
        break;
    }
  }
  if (link_base_ == ~0L || !dynamic_program_header) {
    RAW_DCHECK(~0L != link_base_, "no PT_LOADs in VDSO");
    RAW_DCHECK(dynamic_program_header, "no PT_DYNAMIC in VDSO");
    // Mark this image as not present. This cannot recurse infinitely.
    Init(0);
    return;
  }
  ptrdiff_t relocation =
      base_as_char - reinterpret_cast<const char *>(link_base_);
  ElfW(Dyn) *dynamic_entry =
      reinterpret_cast<ElfW(Dyn) *>(dynamic_program_header->p_vaddr +
                                    relocation);
  for (; dynamic_entry->d_tag != DT_NULL; ++dynamic_entry) {
    ElfW(Xword) value = dynamic_entry->d_un.d_val;
    if (fake_vdso) {
      // A complication: in the real VDSO, dynamic entries are not relocated
      // (it wasn't loaded by a dynamic loader). But when testing with a
      // "fake" dlopen()ed vdso library, the loader relocates some (but
      // not all!) of them before we get here.
      if (dynamic_entry->d_tag == DT_VERDEF) {
        // The only dynamic entry (of the ones we care about) the libc-2.3.6
        // loader doesn't relocate.
        value += relocation;
      }
    } else {
      // Real VDSO. Everything needs to be relocated.
      value += relocation;
    }
    switch (dynamic_entry->d_tag) {
      case DT_HASH:
        hash_ = reinterpret_cast<ElfW(Word) *>(value);
        break;
      case DT_SYMTAB:
        dynsym_ = reinterpret_cast<ElfW(Sym) *>(value);
        break;
      case DT_STRTAB:
        dynstr_ = reinterpret_cast<const char *>(value);
        break;
      case DT_VERSYM:
        versym_ = reinterpret_cast<ElfW(Versym) *>(value);
        break;
      case DT_VERDEF:
        verdef_ = reinterpret_cast<ElfW(Verdef) *>(value);
        break;
      case DT_VERDEFNUM:
        verdefnum_ = dynamic_entry->d_un.d_val;
        break;
      case DT_STRSZ:
        strsize_ = dynamic_entry->d_un.d_val;
        break;
      default:
        // Unrecognized entries explicitly ignored.
        break;
    }
  }
  if (!hash_ || !dynsym_ || !dynstr_ || !versym_ ||
      !verdef_ || !verdefnum_ || !strsize_) {
    RAW_DCHECK(hash_, "invalid VDSO (no DT_HASH)");
    RAW_DCHECK(dynsym_, "invalid VDSO (no DT_SYMTAB)");
    RAW_DCHECK(dynstr_, "invalid VDSO (no DT_STRTAB)");
    RAW_DCHECK(versym_, "invalid VDSO (no DT_VERSYM)");
    RAW_DCHECK(verdef_, "invalid VDSO (no DT_VERDEF)");
    RAW_DCHECK(verdefnum_, "invalid VDSO (no DT_VERDEFNUM)");
    RAW_DCHECK(strsize_, "invalid VDSO (no DT_STRSZ)");
    // Mark this image as not present. This cannot recurse infinitely.
    Init(0);
    return;
  }
}

bool ElfMemImage::LookupSymbol(const char *name,
                               const char *version,
                               int type,
                               SymbolInfo *info) const {
  for (SymbolIterator it = begin(); it != end(); ++it) {
    if (strcmp(it->name, name) == 0 && strcmp(it->version, version) == 0 &&
        CurrentElfClass::ElfType(it->symbol) == type) {
      if (info) {
        *info = *it;
      }
      return true;
    }
  }
  return false;
}

bool ElfMemImage::LookupSymbolByAddress(const void *address,
                                        SymbolInfo *info_out) const {
  for (SymbolIterator it = begin(); it != end(); ++it) {
    const char *const symbol_start =
        reinterpret_cast<const char *>(it->address);
    const char *const symbol_end = symbol_start + it->symbol->st_size;
    if (symbol_start <= address && address < symbol_end) {
      if (info_out) {
        // Client wants to know details for that symbol (the usual case).
        if (CurrentElfClass::ElfBind(it->symbol) == STB_GLOBAL) {
          // Strong symbol; just return it.
          *info_out = *it;
          return true;
        } else {
          // Weak or local. Record it, but keep looking for a strong one.
          *info_out = *it;
        }
      } else {
        // Client only cares if there is an overlapping symbol.
        return true;
      }
    }
  }
  return false;
}

ElfMemImage::SymbolIterator::SymbolIterator(const void *const image, int index)
    : index_(index), image_(image) {
}

const ElfMemImage::SymbolInfo *ElfMemImage::SymbolIterator::operator->() const {
  return &info_;
}

const ElfMemImage::SymbolInfo& ElfMemImage::SymbolIterator::operator*() const {
  return info_;
}

bool ElfMemImage::SymbolIterator::operator==(const SymbolIterator &rhs) const {
  return this->image_ == rhs.image_ && this->index_ == rhs.index_;
}

bool ElfMemImage::SymbolIterator::operator!=(const SymbolIterator &rhs) const {
  return !(*this == rhs);
}

ElfMemImage::SymbolIterator &ElfMemImage::SymbolIterator::operator++() {
  this->Update(1);
  return *this;
}

ElfMemImage::SymbolIterator ElfMemImage::begin() const {
  SymbolIterator it(this, 0);
  it.Update(0);
  return it;
}

ElfMemImage::SymbolIterator ElfMemImage::end() const {
  return SymbolIterator(this, GetNumSymbols());
}

void ElfMemImage::SymbolIterator::Update(int increment) {
  const ElfMemImage *image = reinterpret_cast<const ElfMemImage *>(image_);
  CHECK(image->IsPresent() || increment == 0);
  if (!image->IsPresent()) {
    return;
  }
  index_ += increment;
  if (index_ >= image->GetNumSymbols()) {
    index_ = image->GetNumSymbols();
    return;
  }
  const ElfW(Sym)    *symbol = image->GetDynsym(index_);
  const ElfW(Versym) *version_symbol = image->GetVersym(index_);
  CHECK(symbol && version_symbol);
  const char *const symbol_name = image->GetDynstr(symbol->st_name);
  const ElfW(Versym) version_index = version_symbol[0] & VERSYM_VERSION;
  const ElfW(Verdef) *version_definition = NULL;
  const char *version_name = "";
  if (symbol->st_shndx == SHN_UNDEF) {
    // Undefined symbols reference DT_VERNEED, not DT_VERDEF, and
    // version_index could well be greater than verdefnum_, so calling
    // GetVerdef(version_index) may trigger an assertion.
  } else {
    version_definition = image->GetVerdef(version_index);
  }
  if (version_definition) {
    // I am expecting 1 or 2 auxiliary entries: 1 for the version itself,
    // an optional 2nd if the version has a parent.
    CHECK_LE(1, version_definition->vd_cnt);
    CHECK_LE(version_definition->vd_cnt, 2);
    const ElfW(Verdaux) *version_aux = image->GetVerdefAux(version_definition);
    version_name = image->GetVerstr(version_aux->vda_name);
  }
  info_.name    = symbol_name;
  info_.version = version_name;
  info_.address = image->GetSymAddr(symbol);
  info_.symbol  = symbol;
}

}  // namespace base

#endif  // HAVE_ELF_MEM_IMAGE
135
3party/gperftools/src/base/elf_mem_image.h
Normal file
@ -0,0 +1,135 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Paul Pluzhnikov
//
// Allow dynamic symbol lookup for in-memory Elf images.

#ifndef BASE_ELF_MEM_IMAGE_H_
#define BASE_ELF_MEM_IMAGE_H_

#include <config.h>
#ifdef HAVE_FEATURES_H
#include <features.h>   // for __GLIBC__
#endif

// Maybe one day we can rewrite this file not to require the elf
// symbol extensions in glibc, but for right now we need them.
#if defined(__ELF__) && defined(__GLIBC__) && !defined(__native_client__)

#define HAVE_ELF_MEM_IMAGE 1

#include <stdlib.h>
#include <link.h>  // for ElfW

namespace base {

// An in-memory ELF image (may not exist on disk).
class ElfMemImage {
 public:
  // Sentinel: there could never be an elf image at this address.
  static const void *const kInvalidBase;

  // Information about a single vdso symbol.
  // All pointers are into .dynsym, .dynstr, or .text of the VDSO.
  // Do not free() them or modify through them.
  struct SymbolInfo {
    const char      *name;      // E.g. "__vdso_getcpu"
    const char      *version;   // E.g. "LINUX_2.6", could be ""
                                // for unversioned symbol.
    const void      *address;   // Relocated symbol address.
    const ElfW(Sym) *symbol;    // Symbol in the dynamic symbol table.
  };

  // Supports iteration over all dynamic symbols.
  class SymbolIterator {
   public:
    friend class ElfMemImage;
    const SymbolInfo *operator->() const;
    const SymbolInfo &operator*() const;
    SymbolIterator& operator++();
    bool operator!=(const SymbolIterator &rhs) const;
    bool operator==(const SymbolIterator &rhs) const;
   private:
    SymbolIterator(const void *const image, int index);
    void Update(int incr);
    SymbolInfo info_;
    int index_;
    const void *const image_;
  };


  explicit ElfMemImage(const void *base);
  void                 Init(const void *base);
  bool                 IsPresent() const { return ehdr_ != NULL; }
  const ElfW(Phdr)*    GetPhdr(int index) const;
  const ElfW(Sym)*     GetDynsym(int index) const;
  const ElfW(Versym)*  GetVersym(int index) const;
  const ElfW(Verdef)*  GetVerdef(int index) const;
  const ElfW(Verdaux)* GetVerdefAux(const ElfW(Verdef) *verdef) const;
  const char*          GetDynstr(ElfW(Word) offset) const;
  const void*          GetSymAddr(const ElfW(Sym) *sym) const;
  const char*          GetVerstr(ElfW(Word) offset) const;
  int                  GetNumSymbols() const;

  SymbolIterator begin() const;
  SymbolIterator end() const;

  // Look up versioned dynamic symbol in the image.
  // Returns false if image is not present, or doesn't contain given
  // symbol/version/type combination.
  // If info_out != NULL, additional details are filled in.
  bool LookupSymbol(const char *name, const char *version,
                    int symbol_type, SymbolInfo *info_out) const;

  // Find info about symbol (if any) which overlaps given address.
  // Returns true if symbol was found; false if image isn't present
  // or doesn't have a symbol overlapping given address.
  // If info_out != NULL, additional details are filled in.
  bool LookupSymbolByAddress(const void *address, SymbolInfo *info_out) const;
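
  // Usage sketch (editor's addition): locating a vdso-provided function;
  // the symbol name and version shown are the documented x86-64 vdso ones,
  // and vdso_base would come from getauxval(AT_SYSINFO_EHDR):
  //
  //   base::ElfMemImage image(vdso_base);
  //   base::ElfMemImage::SymbolInfo info;
  //   if (image.LookupSymbol("__vdso_getcpu", "LINUX_2.6", STT_FUNC, &info)) {
  //     // info.address is the relocated entry point.
  //   }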

 private:
  const ElfW(Ehdr) *ehdr_;
  const ElfW(Sym) *dynsym_;
  const ElfW(Versym) *versym_;
  const ElfW(Verdef) *verdef_;
  const ElfW(Word) *hash_;
  const char *dynstr_;
  size_t strsize_;
  size_t verdefnum_;
  ElfW(Addr) link_base_;     // Link-time base (p_vaddr of first PT_LOAD).
};

}  // namespace base

#endif  // __ELF__ and __GLIBC__ and !__native_client__

#endif  // BASE_ELF_MEM_IMAGE_H_
74
3party/gperftools/src/base/googleinit.h
Normal file
@ -0,0 +1,74 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Jacob Hoffman-Andrews

#ifndef _GOOGLEINIT_H
#define _GOOGLEINIT_H

#include "base/logging.h"

class GoogleInitializer {
 public:
  typedef void (*VoidFunction)(void);
  GoogleInitializer(const char* name, VoidFunction ctor, VoidFunction dtor)
      : name_(name), destructor_(dtor) {
    RAW_VLOG(10, "<GoogleModuleObject> constructing: %s\n", name_);
    if (ctor)
      ctor();
  }
  ~GoogleInitializer() {
    RAW_VLOG(10, "<GoogleModuleObject> destroying: %s\n", name_);
    if (destructor_)
      destructor_();
  }

 private:
  const char* const name_;
  const VoidFunction destructor_;
};

#define REGISTER_MODULE_INITIALIZER(name, body)                 \
  namespace {                                                   \
    static void google_init_module_##name () { body; }          \
    GoogleInitializer google_initializer_module_##name(#name,   \
            google_init_module_##name, NULL);                   \
  }

#define REGISTER_MODULE_DESTRUCTOR(name, body)                  \
  namespace {                                                   \
    static void google_destruct_module_##name () { body; }      \
    GoogleInitializer google_destructor_module_##name(#name,    \
            NULL, google_destruct_module_##name);               \
  }
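
// Usage sketch (editor's addition): run code at static-initialization and
// static-destruction time of the translation unit containing the macro:
//
//   REGISTER_MODULE_INITIALIZER(my_module, { SetUpTables(); });
//   REGISTER_MODULE_DESTRUCTOR(my_module, { TearDownTables(); });
//
// SetUpTables/TearDownTables are hypothetical; each name may be registered
// only once per program.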


#endif  /* _GOOGLEINIT_H */
727
3party/gperftools/src/base/linuxthreads.cc
Normal file
@ -0,0 +1,727 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2005-2007, Google Inc.
 * Copyright (c) 2023, gperftools Contributors
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Markus Gutschke
 *
 * Substantial upgrades by Aliaksey Kandratsenka. All bugs are mine.
 */
#ifndef _GNU_SOURCE
#define _GNU_SOURCE
#endif

#include "base/linuxthreads.h"

#include <errno.h>
#include <fcntl.h>
#include <limits.h>
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/prctl.h>
#include <sys/ptrace.h>
#include <sys/socket.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>

#include <atomic>

#include "base/basictypes.h"
#include "base/logging.h"

#ifndef CLONE_UNTRACED
#define CLONE_UNTRACED 0x00800000
#endif

#ifndef PR_SET_PTRACER
#define PR_SET_PTRACER 0x59616d61
#endif

namespace {

class SetPTracerSetup {
 public:
  ~SetPTracerSetup() {
    if (need_cleanup_) {
      prctl(PR_SET_PTRACER, 0, 0, 0, 0);
    }
  }
  void Prepare(int clone_pid) {
    if (prctl(PR_SET_PTRACER, clone_pid, 0, 0, 0) == 0) {
      need_cleanup_ = true;
    }
  }

 private:
  bool need_cleanup_ = false;
};

class UniqueFD {
 public:
  explicit UniqueFD(int fd) : fd_(fd) {}

  int ReleaseFD() {
    int retval = fd_;
    fd_ = -1;
    return retval;
  }

  ~UniqueFD() {
    if (fd_ < 0) {
      return;
    }
    (void)close(fd_);
  }
 private:
  int fd_;
};

template <typename Body>
struct SimpleCleanup {
  const Body body;

  explicit SimpleCleanup(const Body& body) : body(body) {}

  ~SimpleCleanup() {
    body();
  }
};

template <typename Body>
SimpleCleanup<Body> MakeSimpleCleanup(const Body& body) {
  return SimpleCleanup<Body>{body};
}

}  // namespace

/* Synchronous signals that should not be blocked while in the lister thread.
 */
static const int sync_signals[] = {
  SIGABRT, SIGILL,
  SIGFPE, SIGSEGV, SIGBUS,
#ifdef SIGEMT
  SIGEMT,
#endif
  SIGSYS, SIGTRAP,
  SIGXCPU, SIGXFSZ };

ATTRIBUTE_NOINLINE
static int local_clone(int (*fn)(void *), void *arg) {
#ifdef __PPC64__
  /* To avoid the gap crossing page boundaries, increase by the large page
   * size that PowerPC systems mostly use. */

  // FIXME(alk): I don't really understand why ppc needs this and why
  // 64k pages matter. I.e. some other architectures have 64k pages,
  // so should we do the same there?
  uintptr_t clone_stack_size = 64 << 10;
#else
  uintptr_t clone_stack_size = 4 << 10;
#endif

  bool grows_to_low = (&arg < arg);
  if (grows_to_low) {
    // Negate clone_stack_size if stack grows to lower addresses
    // (common for arch-es that matter).
    clone_stack_size = ~clone_stack_size + 1;
  }

#if defined(__i386__) || defined(__x86_64__) || defined(__riscv) || defined(__arm__) || defined(__aarch64__)
  // Sanity check the code above. We know that those arch-es grow stack to
  // lower addresses.
  CHECK(grows_to_low);
#endif

  /* Leave 4kB of gap between the caller's stack and the new clone. This
   * should be more than sufficient for the caller to call waitpid() until
   * the cloned thread terminates.
   *
   * It is important that we set the CLONE_UNTRACED flag, because newer
   * versions of "gdb" otherwise attempt to attach to our thread, and will
   * attempt to reap its status codes. This subsequently results in the
   * caller hanging indefinitely in waitpid(), waiting for a change in
   * status that will never happen. By setting the CLONE_UNTRACED flag, we
   * prevent "gdb" from stealing events, but we still expect the thread
   * lister to fail, because it cannot PTRACE_ATTACH to the process that
   * is being debugged. This is OK and the error code will be reported
   * correctly.
   */
  uintptr_t stack_addr = reinterpret_cast<uintptr_t>(&arg) + clone_stack_size;
  stack_addr &= ~63;  // align stack address on 64 bytes (x86 needs 16, but let's be generous)
  return clone(fn, reinterpret_cast<void*>(stack_addr),
               CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_UNTRACED,
               arg, 0, 0, 0);
}


/* Local substitute for the atoi() function, which is not necessarily safe
 * to call once threads are suspended (depending on whether libc looks up
 * locale information when executing atoi()).
 */
static int local_atoi(const char *s) {
  int n = 0;
  int neg = *s == '-';
  if (neg)
    s++;
  while (*s >= '0' && *s <= '9')
    n = 10*n + (*s++ - '0');
  return neg ? -n : n;
}

static int ptrace_detach(pid_t pid) {
  return ptrace(PTRACE_DETACH, pid, nullptr, nullptr);
}

/* Re-runs fn until it doesn't cause EINTR. */
#define NO_INTR(fn)   do {} while ((fn) < 0 && errno == EINTR)
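
/* Example (editor's addition): NO_INTR retries a syscall that may be
 * interrupted by a signal, e.g.
 *
 *   NO_INTR(r = waitpid(clone_pid, &status, __WALL));
 *
 * The loop re-issues the call only while it fails with errno == EINTR. */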
||||
|
||||
/* abort() is not safely reentrant, and changes it's behavior each time
|
||||
* it is called. This means, if the main application ever called abort()
|
||||
* we cannot safely call it again. This would happen if we were called
|
||||
* from a SIGABRT signal handler in the main application. So, document
|
||||
* that calling SIGABRT from the thread lister makes it not signal safe
|
||||
* (and vice-versa).
|
||||
* Also, since we share address space with the main application, we
|
||||
* cannot call abort() from the callback and expect the main application
|
||||
* to behave correctly afterwards. In fact, the only thing we can do, is
|
||||
* to terminate the main application with extreme prejudice (aka
|
||||
* PTRACE_KILL).
|
||||
* We set up our own SIGABRT handler to do this.
|
||||
* In order to find the main application from the signal handler, we
|
||||
* need to store information about it in global variables. This is
|
||||
* safe, because the main application should be suspended at this
|
||||
* time. If the callback ever called TCMalloc_ResumeAllProcessThreads(), then
|
||||
* we are running a higher risk, though. So, try to avoid calling
|
||||
* abort() after calling TCMalloc_ResumeAllProcessThreads.
|
||||
*/
|
||||
static volatile int *sig_pids, sig_num_threads;


/* Signal handler to help us recover from dying while we are attached to
 * other threads.
 */
static void SignalHandler(int signum, siginfo_t *si, void *data) {
  RAW_LOG(ERROR, "Got fatal signal %d inside ListerThread", signum);

  if (sig_pids != NULL) {
    if (signum == SIGABRT) {
      prctl(PR_SET_PDEATHSIG, 0);
      while (sig_num_threads-- > 0) {
        /* Not sure if sched_yield is really necessary here, but it does not */
        /* hurt, and it might be necessary for the same reasons that we have */
        /* to do so in ptrace_detach().                                      */
        sched_yield();
        ptrace(PTRACE_KILL, sig_pids[sig_num_threads], 0, 0);
      }
    } else if (sig_num_threads > 0) {
      TCMalloc_ResumeAllProcessThreads(sig_num_threads, (int *)sig_pids);
    }
  }
  sig_pids = NULL;

  syscall(SYS_exit, signum == SIGABRT ? 1 : 2);
}


/* Try to dirty the stack, and hope that the compiler is not smart enough
 * to optimize this function away. Or worse, the compiler could inline the
 * function and permanently allocate the data on the stack.
 */
static void DirtyStack(size_t amount) {
  char buf[amount];
  memset(buf, 0, amount);
  read(-1, buf, amount);
}
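// (The read() from the invalid fd -1 always fails, but it forces 'buf'
// to escape, so the compiler cannot elide the memset() above.)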


/* Data structure for passing arguments to the lister thread.
 */
#define ALT_STACKSIZE (MINSIGSTKSZ + 4096)

struct ListerParams {
  int         result, err;
  pid_t       ppid;
  int         start_pipe_rd;
  int         start_pipe_wr;
  char        *altstack_mem;
  ListAllProcessThreadsCallBack callback;
  void        *parameter;
  va_list     ap;
  int         proc_fd;
};

struct kernel_dirent64 {     // see man 2 getdents
  int64_t        d_ino;      /* 64-bit inode number */
  int64_t        d_off;      /* 64-bit offset to next structure */
  unsigned short d_reclen;   /* Size of this dirent */
  unsigned char  d_type;     /* File type */
  char           d_name[];   /* Filename (null-terminated) */
};

static const kernel_dirent64 *BumpDirentPtr(const kernel_dirent64 *ptr, uintptr_t by_bytes) {
  return reinterpret_cast<kernel_dirent64*>(reinterpret_cast<uintptr_t>(ptr) + by_bytes);
}
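// Note: directory entries are read as raw kernel_dirent64 records via
// SYS_getdents64 and walked by d_reclen, rather than via opendir()/readdir(),
// because libc's readdir may allocate memory -- unsafe while threads are
// suspended (see the comments inside ListerThread below).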

static int ListerThread(struct ListerParams *args) {
  int          found_parent = 0;
  pid_t        clone_pid  = syscall(SYS_gettid);
  int          proc = args->proc_fd, num_threads = 0;
  int          max_threads = 0, sig;
  struct stat  proc_sb;
  stack_t      altstack;

  /* Wait for the parent thread to set appropriate permissions to allow
   * ptrace activity. Note we use a pipe pair, which ensures we don't
   * sleep past the parent's death.
   */
  (void)close(args->start_pipe_wr);
  {
    char tmp;
    read(args->start_pipe_rd, &tmp, sizeof(tmp));
  }

  // No point in continuing if parent dies before/during ptracing.
  prctl(PR_SET_PDEATHSIG, SIGKILL);

  /* Catch signals on an alternate pre-allocated stack. This way, we can
   * safely execute the signal handler even if we ran out of memory.
   */
  memset(&altstack, 0, sizeof(altstack));
  altstack.ss_sp    = args->altstack_mem;
  altstack.ss_flags = 0;
  altstack.ss_size  = ALT_STACKSIZE;
  sigaltstack(&altstack, nullptr);

  /* Some kernels forget to wake up traced processes, when the
   * tracer dies. So, intercept synchronous signals and make sure
   * that we wake up our tracees before dying. It is the caller's
   * responsibility to ensure that asynchronous signals do not
   * interfere with this function.
   */
  for (sig = 0; sig < sizeof(sync_signals)/sizeof(*sync_signals); sig++) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = SignalHandler;
    sigfillset(&sa.sa_mask);
    sa.sa_flags = SA_ONSTACK|SA_SIGINFO|SA_RESETHAND;
    sigaction(sync_signals[sig], &sa, nullptr);
  }

  /* Read process directories in /proc/... */
  for (;;) {
    if (lseek(proc, 0, SEEK_SET) < 0) {
      goto failure;
    }
    if (fstat(proc, &proc_sb) < 0) {
      goto failure;
    }

    /* Since we are suspending threads, we cannot call any libc
     * functions that might acquire locks. Most notably, we cannot
     * call malloc(). So, we have to allocate memory on the stack,
     * instead. Since we do not know how much memory we need, we
     * make a best guess. And if we guessed incorrectly we retry on
     * a second iteration (by jumping to "detach_threads").
     *
     * Unless the number of threads is increasing very rapidly, we
     * should never need to do so, though, as our guesstimate is very
     * conservative.
     */
    if (max_threads < proc_sb.st_nlink + 100) {
      max_threads = proc_sb.st_nlink + 100;
    }

    /* scope */ {
      pid_t pids[max_threads];
      int   added_entries = 0;
      sig_num_threads     = num_threads;
      sig_pids            = pids;
      for (;;) {
        // lets make sure to align buf to store kernel_dirent64-s properly.
        int64_t buf[4096 / sizeof(int64_t)];

        ssize_t nbytes = syscall(SYS_getdents64, proc, buf, sizeof(buf));
        // fprintf(stderr, "nbytes = %zd\n", nbytes);

        if (nbytes < 0) {
          goto failure;
        }

        if (nbytes == 0) {
          if (added_entries) {
            /* Need to keep iterating over "/proc" in multiple
             * passes until we no longer find any more threads. This
             * algorithm eventually completes, when all threads have
             * been suspended.
             */
            added_entries = 0;
            lseek(proc, 0, SEEK_SET);
            continue;
          }
          break;
        }

        const kernel_dirent64 *entry = reinterpret_cast<kernel_dirent64*>(buf);
        const kernel_dirent64 *end = BumpDirentPtr(entry, nbytes);

        for (; entry < end; entry = BumpDirentPtr(entry, entry->d_reclen)) {
          if (entry->d_ino == 0) {
            continue;
          }

          const char *ptr = entry->d_name;
          // fprintf(stderr, "name: %s\n", ptr);
          pid_t pid;

          /* Some kernels hide threads by preceding the pid with a '.' */
          if (*ptr == '.')
            ptr++;

          /* If the directory is not numeric, it cannot be a
           * process/thread
           */
          if (*ptr < '0' || *ptr > '9')
            continue;
          pid = local_atoi(ptr);
          // fprintf(stderr, "pid = %d (%d)\n", pid, getpid());

          if (!pid || pid == clone_pid) {
            continue;
          }

          /* Attach (and suspend) all threads */
          long i, j;

          /* Found one of our threads, make sure it is no duplicate */
          for (i = 0; i < num_threads; i++) {
            /* Linear search is slow, but should not matter much for
             * the typically small number of threads.
             */
            if (pids[i] == pid) {
              /* Found a duplicate; most likely on second pass */
              goto next_entry;
            }
          }

          /* Check whether data structure needs growing */
          if (num_threads >= max_threads) {
            /* Back to square one, this time with more memory */
            goto detach_threads;
          }

          /* Attaching to thread suspends it */
          pids[num_threads++] = pid;
          sig_num_threads     = num_threads;

          if (ptrace(PTRACE_ATTACH, pid, (void *)0,
                     (void *)0) < 0) {
            /* If operation failed, ignore thread. Maybe it
             * just died? There might also be a race
             * condition with a concurrent core dumper or
             * with a debugger. In that case, we will just
             * make a best effort, rather than failing
             * entirely.
             */
            num_threads--;
            sig_num_threads = num_threads;
            goto next_entry;
          }
          while (waitpid(pid, (int *)0, __WALL) < 0) {
            if (errno != EINTR) {
              ptrace_detach(pid);
              num_threads--;
              sig_num_threads = num_threads;
              goto next_entry;
            }
          }

          if (syscall(SYS_ptrace, PTRACE_PEEKDATA, pid, &i, &j) || i++ != j ||
              syscall(SYS_ptrace, PTRACE_PEEKDATA, pid, &i, &j) || i   != j) {
            /* Address spaces are distinct. This is probably
             * a forked child process rather than a thread.
             */
            ptrace_detach(pid);
            num_threads--;
            sig_num_threads = num_threads;
            goto next_entry;
          }

          found_parent |= pid == args->ppid;
          added_entries++;

         next_entry:;
        } // entries iterations loop
      } // getdents loop

      /* If we never found the parent process, something is very wrong.
       * Most likely, we are running in a debugger. Any attempt to operate
       * on the threads would be very incomplete. Let's just report an
       * error to the caller.
       */
      if (!found_parent) {
        TCMalloc_ResumeAllProcessThreads(num_threads, pids);
        return 3;
      }

      /* Now we are ready to call the callback,
       * which takes care of resuming the threads for us.
       */
      args->result = args->callback(args->parameter, num_threads,
                                    pids, args->ap);
      args->err = errno;

      /* Callback should have resumed threads, but better safe than sorry */
      if (TCMalloc_ResumeAllProcessThreads(num_threads, pids)) {
        /* Callback forgot to resume at least one thread, report error */
        args->err    = EINVAL;
        args->result = -1;
      }

      return 0;

     detach_threads:
      /* Resume all threads prior to retrying the operation */
      TCMalloc_ResumeAllProcessThreads(num_threads, pids);
      sig_pids        = NULL;
      num_threads     = 0;
      sig_num_threads = num_threads;
      max_threads    += 100;
    } // pids[max_threads] scope
  } // for (;;)

 failure:
  args->result = -1;
  args->err    = errno;
  return 1;
}

/* This function gets the list of all linux threads of the current process
 * and passes them to the 'callback' along with the 'parameter' pointer; at
 * the callback call time all the threads are paused via
 * PTRACE_ATTACH.
 * The callback is executed from a separate thread which shares only the
 * address space, the filesystem, and the filehandles with the caller. Most
 * notably, it does not share the same pid and ppid; and if it terminates,
 * the rest of the application is still there. 'callback' is supposed to do
 * or arrange for TCMalloc_ResumeAllProcessThreads. This happens automatically, if
 * the thread raises a synchronous signal (e.g. SIGSEGV); asynchronous
 * signals are blocked. If the 'callback' decides to unblock them, it must
 * ensure that they cannot terminate the application, or that
 * TCMalloc_ResumeAllProcessThreads will get called.
 * It is an error for the 'callback' to make any library calls that could
 * acquire locks. Most notably, this means that most system calls have to
 * avoid going through libc. Also, this means that it is not legal to call
 * exit() or abort().
 * We return -1 on error and the return value of 'callback' on success.
 */
int TCMalloc_ListAllProcessThreads(void *parameter,
                                   ListAllProcessThreadsCallBack callback, ...) {
  char                   altstack_mem[ALT_STACKSIZE];
  struct ListerParams   args;
  pid_t                  clone_pid;
  int                    dumpable = 1;
  int                    need_sigprocmask = 0;
  sigset_t               sig_blocked, sig_old;
  int                    status, rc;

  SetPTracerSetup        ptracer_setup;

  auto cleanup = MakeSimpleCleanup([&] () {
    int old_errno = errno;

    if (need_sigprocmask) {
      sigprocmask(SIG_SETMASK, &sig_old, nullptr);
    }

    if (!dumpable) {
      prctl(PR_SET_DUMPABLE, dumpable);
    }

    errno = old_errno;
  });

  va_start(args.ap, callback);

  /* If we are short on virtual memory, initializing the alternate stack
   * might trigger a SIGSEGV. Let's do this early, before it could get us
   * into more trouble (i.e. before signal handlers try to use the alternate
   * stack, and before we attach to other threads).
   */
  memset(altstack_mem, 0, sizeof(altstack_mem));

  /* Some of our cleanup functions could conceivably use more stack space.
   * Try to touch the stack right now. This could be defeated by the compiler
   * being too smart for its own good, so try really hard.
   */
  DirtyStack(32768);

  /* Make this process "dumpable". This is necessary in order to ptrace()
   * after having called setuid().
   */
  dumpable = prctl(PR_GET_DUMPABLE, 0);
  if (!dumpable) {
    prctl(PR_SET_DUMPABLE, 1);
  }

  /* Fill in argument block for dumper thread */
  args.result       = -1;
  args.err          = 0;
  args.ppid         = getpid();
  args.altstack_mem = altstack_mem;
  args.parameter    = parameter;
  args.callback     = callback;

  NO_INTR(args.proc_fd = open("/proc/self/task/", O_RDONLY|O_DIRECTORY|O_CLOEXEC));
  UniqueFD proc_closer{args.proc_fd};

  if (args.proc_fd < 0) {
    return -1;
  }

  int pipefds[2];
  if (pipe2(pipefds, O_CLOEXEC)) {
    return -1;
  }

  UniqueFD pipe_rd_closer{pipefds[0]};
  UniqueFD pipe_wr_closer{pipefds[1]};

  args.start_pipe_rd = pipefds[0];
  args.start_pipe_wr = pipefds[1];

  /* Before cloning the thread lister, block all asynchronous signals, as we */
  /* are not prepared to handle them.                                        */
  sigfillset(&sig_blocked);
  for (int sig = 0; sig < sizeof(sync_signals)/sizeof(*sync_signals); sig++) {
    sigdelset(&sig_blocked, sync_signals[sig]);
  }
  if (sigprocmask(SIG_BLOCK, &sig_blocked, &sig_old)) {
    return -1;
  }
  need_sigprocmask = 1;

  // make sure all functions used by parent from local_clone to after
  // waitpid have plt entries fully initialized. We cannot afford
  // dynamic linker running relocations and messing with errno (see
  // comment just below)
  (void)prctl(PR_GET_PDEATHSIG, 0);
  (void)close(-1);
  (void)waitpid(INT_MIN, nullptr, 0);

  /* After cloning, both the parent and the child share the same
   * instance of errno. We deal with this by being very
   * careful. Specifically, the child immediately blocks reading the
   * start pipe, which never fails (cannot even EINTR), so it doesn't
   * touch errno.
   *
   * Parent sets up PR_SET_PTRACER prctl (if it fails, which usually
   * doesn't happen, we ignore that failure). Then parent does close
   * on write side of start pipe. After that child runs complex code,
   * including arbitrary callback. So parent avoids screwing with
   * errno by immediately calling waitpid with async signals disabled.
   *
   * I.e. errno is parent's up until close below. Then errno belongs
   * to child up until it exits.
   */
  clone_pid = local_clone((int (*)(void *))ListerThread, &args);
  if (clone_pid < 0) {
    return -1;
  }

  /* Most Linux kernels in the wild have the Yama LSM enabled, which
   * requires us to explicitly give permission for the child to ptrace
   * us. See man 2 ptrace for details. This then requires us to
   * synchronize with the child (see close on start pipe
   * below). I.e. so that child doesn't start ptracing before we've
   * completed this prctl call.
   */
  ptracer_setup.Prepare(clone_pid);

  /* Closing the write side of the pipe works like releasing the lock. It
   * allows the ListerThread to run past the read() call on the read side
   * of the pipe and ptrace us.
   */
  close(pipe_wr_closer.ReleaseFD());

  /* So here child runs (see ListerThread), it finds and ptraces all
   * threads, runs whatever callback is setup and then
   * detaches/resumes everything. In any case we wait for child's
   * completion to gather status and synchronize everything. */

  rc = waitpid(clone_pid, &status, __WALL);

  if (rc < 0) {
    if (errno == EINTR) {
      RAW_LOG(FATAL, "BUG: EINTR from waitpid shouldn't be possible!");
    }
    // Any error waiting for child is sign of some bug, so abort
    // asap. Continuing is unsafe anyways with child potentially writing to our
    // stack.
    RAW_LOG(FATAL, "BUG: waitpid inside TCMalloc_ListAllProcessThreads cannot fail, but it did. Raw errno: %d\n", errno);
  } else if (WIFEXITED(status)) {
    errno = args.err;
    switch (WEXITSTATUS(status)) {
      case 0: break;             /* Normal process termination           */
      case 2: args.err = EFAULT; /* Some fault (e.g. SIGSEGV) detected   */
              args.result = -1;
              break;
      case 3: args.err = EPERM;  /* Process is already being traced      */
              args.result = -1;
              break;
      default:args.err = ECHILD; /* Child died unexpectedly              */
              args.result = -1;
              break;
    }
  } else if (!WIFEXITED(status)) {
    args.err    = EFAULT;        /* Terminated due to an unhandled signal */
    args.result = -1;
  }

  errno = args.err;
  return args.result;
}
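/* A minimal usage sketch (hypothetical caller, not part of this file):
 *
 *   static int CountThreads(void *param, int num_threads,
 *                           pid_t *thread_pids, va_list ap) {
 *     *static_cast<int *>(param) = num_threads;
 *     // The callback is responsible for resuming the suspended threads:
 *     TCMalloc_ResumeAllProcessThreads(num_threads, thread_pids);
 *     return 0;
 *   }
 *
 *   int n = 0;
 *   if (TCMalloc_ListAllProcessThreads(&n, CountThreads) < 0) {
 *     // handle error; errno describes the failure
 *   }
 */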

/* This function resumes the list of all linux threads that
 * TCMalloc_ListAllProcessThreads pauses before giving to its callback.
 * The function returns non-zero if at least one thread was
 * suspended and has now been resumed.
 */
int TCMalloc_ResumeAllProcessThreads(int num_threads, pid_t *thread_pids) {
  int detached_at_least_one = 0;
  while (num_threads-- > 0) {
    detached_at_least_one |= (ptrace_detach(thread_pids[num_threads]) >= 0);
  }
  return detached_at_least_one;
}
75
3party/gperftools/src/base/linuxthreads.h
Normal file
@ -0,0 +1,75 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2005-2007, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Markus Gutschke
 */

#ifndef _LINUXTHREADS_H
#define _LINUXTHREADS_H

#include <stdarg.h>
#include <sys/types.h>

typedef int (*ListAllProcessThreadsCallBack)(void *parameter,
                                             int num_threads,
                                             pid_t *thread_pids,
                                             va_list ap);

/* This function gets the list of all linux threads of the current process
 * and passes them to the 'callback' along with the 'parameter' pointer; at
 * the callback call time all the threads are paused via
 * PTRACE_ATTACH.
 * The callback is executed from a separate thread which shares only the
 * address space, the filesystem, and the filehandles with the caller. Most
 * notably, it does not share the same pid and ppid; and if it terminates,
 * the rest of the application is still there. 'callback' is supposed to do
 * or arrange for TCMalloc_ResumeAllProcessThreads. This happens automatically, if
 * the thread raises a synchronous signal (e.g. SIGSEGV); asynchronous
 * signals are blocked. If the 'callback' decides to unblock them, it must
 * ensure that they cannot terminate the application, or that
 * TCMalloc_ResumeAllProcessThreads will get called.
 * It is an error for the 'callback' to make any library calls that could
 * acquire locks. Most notably, this means that most system calls have to
 * avoid going through libc. Also, this means that it is not legal to call
 * exit() or abort().
 * We return -1 on error and the return value of 'callback' on success.
 */
int TCMalloc_ListAllProcessThreads(void *parameter,
                                   ListAllProcessThreadsCallBack callback, ...);

/* This function resumes the list of all linux threads that
 * TCMalloc_ListAllProcessThreads pauses before giving to its
 * callback. The function returns non-zero if at least one thread was
 * suspended and has now been resumed.
 */
int TCMalloc_ResumeAllProcessThreads(int num_threads, pid_t *thread_pids);

#endif /* _LINUXTHREADS_H */
108
3party/gperftools/src/base/logging.cc
Normal file
@ -0,0 +1,108 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// This file just provides storage for FLAGS_verbose.

#include <config.h>
#include "base/logging.h"
#include "base/commandlineflags.h"

DEFINE_int32(verbose, EnvToInt("PERFTOOLS_VERBOSE", 0),
             "Set to numbers >0 for more verbose output, or <0 for less.  "
             "--verbose == -4 means we log fatal errors only.");


#if defined(_WIN32) || defined(__CYGWIN__) || defined(__CYGWIN32__)

// While windows does have a POSIX-compatible API
// (_open/_write/_close), it may allocate memory. Using this lower-level
// windows API is the closest we can get to being "raw".
RawFD RawOpenForWriting(const char* filename) {
  // CreateFile allocates memory if file_name isn't absolute, so if
  // that ever becomes a problem then we ought to compute the absolute
  // path on its behalf (perhaps the ntdll/kernel function isn't aware
  // of the working directory?)
  RawFD fd = CreateFileA(filename, GENERIC_WRITE, 0, NULL,
                         CREATE_ALWAYS, 0, NULL);
  if (fd != kIllegalRawFD && GetLastError() == ERROR_ALREADY_EXISTS)
    SetEndOfFile(fd);    // truncate the existing file
  return fd;
}

void RawWrite(RawFD handle, const char* buf, size_t len) {
  while (len > 0) {
    DWORD wrote;
    BOOL ok = WriteFile(handle, buf, len, &wrote, NULL);
    // We do not use an asynchronous file handle, so ok==false means an error
    if (!ok) break;
    buf += wrote;
    len -= wrote;
  }
}

void RawClose(RawFD handle) {
  CloseHandle(handle);
}

#else  // _WIN32 || __CYGWIN__ || __CYGWIN32__

#ifdef HAVE_SYS_TYPES_H
#include <sys/types.h>
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#ifdef HAVE_FCNTL_H
#include <fcntl.h>
#endif

// Re-run fn until it doesn't cause EINTR.
#define NO_INTR(fn)  do {} while ((fn) < 0 && errno == EINTR)

RawFD RawOpenForWriting(const char* filename) {
  return open(filename, O_WRONLY|O_CREAT|O_TRUNC, 0664);
}

void RawWrite(RawFD fd, const char* buf, size_t len) {
  while (len > 0) {
    ssize_t r;
    NO_INTR(r = write(fd, buf, len));
    if (r <= 0) break;
    buf += r;
    len -= r;
  }
}

void RawClose(RawFD fd) {
  close(fd);
}

#endif  // _WIN32 || __CYGWIN__ || __CYGWIN32__
259
3party/gperftools/src/base/logging.h
Normal file
@ -0,0 +1,259 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// This file contains #include information about logging-related stuff.
// Pretty much everybody needs to #include this file so that they can
// log various happenings.
//
#ifndef _LOGGING_H_
#define _LOGGING_H_

#include <config.h>
#include <stdarg.h>
#include <stdlib.h>
#include <stdio.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>    // for write()
#endif
#include <string.h>    // for strlen(), strcmp()
#include <assert.h>
#include <errno.h>     // for errno
#include "base/commandlineflags.h"

// On some systems (like freebsd), we can't call write() at all in a
// global constructor, perhaps because errno hasn't been set up.
// (In windows, we can't call it because it might call malloc.)
// Calling the write syscall is safer (it doesn't set errno), so we
// prefer that.  Note we don't care about errno for logging: we just
// do logging on a best-effort basis.
#if defined(_MSC_VER)
#define WRITE_TO_STDERR(buf, len) WriteToStderr(buf, len);  // in port.cc
#elif HAVE_SYS_SYSCALL_H && !defined(__APPLE__)
#include <sys/syscall.h>
#define WRITE_TO_STDERR(buf, len) syscall(SYS_write, STDERR_FILENO, buf, len)
#else
#define WRITE_TO_STDERR(buf, len) write(STDERR_FILENO, buf, len)
#endif

// MSVC and mingw define their own, safe version of vsnprintf (the
// windows one is broken) in port.cc.  Everyone else can use the
// version here.  We had to give it a unique name for windows.
#ifndef _WIN32
# define perftools_vsnprintf vsnprintf
#endif


// We log all messages at this log-level and below.
// INFO == -1, WARNING == -2, ERROR == -3, FATAL == -4
DECLARE_int32(verbose);

// CHECK dies with a fatal error if condition is not true.  It is *not*
// controlled by NDEBUG, so the check will be executed regardless of
// compilation mode.  Therefore, it is safe to do things like:
//    CHECK(fp->Write(x) == 4)
// Note we use write instead of printf/puts to avoid the risk we'll
// call malloc().
#define CHECK(condition)                                                \
  do {                                                                  \
    if (!(condition)) {                                                 \
      WRITE_TO_STDERR("Check failed: " #condition "\n",                 \
                      sizeof("Check failed: " #condition "\n")-1);      \
      abort();                                                          \
    }                                                                   \
  } while (0)

// This takes a message to print.  The name is historical.
#define RAW_CHECK(condition, message)                                          \
  do {                                                                         \
    if (!(condition)) {                                                        \
      WRITE_TO_STDERR("Check failed: " #condition ": " message "\n",           \
                      sizeof("Check failed: " #condition ": " message "\n")-1);\
      abort();                                                                 \
    }                                                                          \
  } while (0)

// This is like RAW_CHECK, but only in debug-mode
#ifdef NDEBUG
enum { DEBUG_MODE = 0 };
#define RAW_DCHECK(condition, message)
#else
enum { DEBUG_MODE = 1 };
#define RAW_DCHECK(condition, message) RAW_CHECK(condition, message)
#endif

// This prints errno as well.  Note we use write instead of printf/puts to
// avoid the risk we'll call malloc().
#define PCHECK(condition)                                               \
  do {                                                                  \
    if (!(condition)) {                                                 \
      const int err_no = errno;                                         \
      WRITE_TO_STDERR("Check failed: " #condition ": ",                 \
                      sizeof("Check failed: " #condition ": ")-1);      \
      WRITE_TO_STDERR(strerror(err_no), strlen(strerror(err_no)));      \
      WRITE_TO_STDERR("\n", sizeof("\n")-1);                            \
      abort();                                                          \
    }                                                                   \
  } while (0)

// Helper macro for binary operators; prints the two values on error.
// Don't use this macro directly in your code, use CHECK_EQ et al below.

// WARNING: These don't compile correctly if one of the arguments is a pointer
// and the other is NULL.  To work around this, simply static_cast NULL to the
// type of the desired pointer.

// TODO(jandrews): Also print the values in case of failure.  Requires some
// sort of type-sensitive ToString() function.
#define CHECK_OP(op, val1, val2)                                        \
  do {                                                                  \
    if (!((val1) op (val2))) {                                          \
      fprintf(stderr, "%s:%d Check failed: %s %s %s\n", __FILE__, __LINE__, #val1, #op, #val2); \
      abort();                                                          \
    }                                                                   \
  } while (0)

#define CHECK_EQ(val1, val2) CHECK_OP(==, val1, val2)
#define CHECK_NE(val1, val2) CHECK_OP(!=, val1, val2)
#define CHECK_LE(val1, val2) CHECK_OP(<=, val1, val2)
#define CHECK_LT(val1, val2) CHECK_OP(< , val1, val2)
#define CHECK_GE(val1, val2) CHECK_OP(>=, val1, val2)
#define CHECK_GT(val1, val2) CHECK_OP(> , val1, val2)
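// For example, given the NULL warning above, compare a pointer 'p' (of
// hypothetical type Foo*) against a typed null:
//   CHECK_EQ(p, static_cast<Foo*>(NULL));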

// Synonyms for CHECK_* that are used in some unittests.
#define EXPECT_EQ(val1, val2) CHECK_EQ(val1, val2)
#define EXPECT_NE(val1, val2) CHECK_NE(val1, val2)
#define EXPECT_LE(val1, val2) CHECK_LE(val1, val2)
#define EXPECT_LT(val1, val2) CHECK_LT(val1, val2)
#define EXPECT_GE(val1, val2) CHECK_GE(val1, val2)
#define EXPECT_GT(val1, val2) CHECK_GT(val1, val2)
#define ASSERT_EQ(val1, val2) EXPECT_EQ(val1, val2)
#define ASSERT_NE(val1, val2) EXPECT_NE(val1, val2)
#define ASSERT_LE(val1, val2) EXPECT_LE(val1, val2)
#define ASSERT_LT(val1, val2) EXPECT_LT(val1, val2)
#define ASSERT_GE(val1, val2) EXPECT_GE(val1, val2)
#define ASSERT_GT(val1, val2) EXPECT_GT(val1, val2)
// As are these variants.
#define EXPECT_TRUE(cond)     CHECK(cond)
#define EXPECT_FALSE(cond)    CHECK(!(cond))
#define EXPECT_STREQ(a, b)    CHECK(strcmp(a, b) == 0)
#define ASSERT_TRUE(cond)     EXPECT_TRUE(cond)
#define ASSERT_FALSE(cond)    EXPECT_FALSE(cond)
#define ASSERT_STREQ(a, b)    EXPECT_STREQ(a, b)

// Used for (libc) functions that return -1 and set errno
#define CHECK_ERR(invocation)  PCHECK((invocation) != -1)

// A few more checks that only happen in debug mode
#ifdef NDEBUG
#define DCHECK_EQ(val1, val2)
#define DCHECK_NE(val1, val2)
#define DCHECK_LE(val1, val2)
#define DCHECK_LT(val1, val2)
#define DCHECK_GE(val1, val2)
#define DCHECK_GT(val1, val2)
#else
#define DCHECK_EQ(val1, val2)  CHECK_EQ(val1, val2)
#define DCHECK_NE(val1, val2)  CHECK_NE(val1, val2)
#define DCHECK_LE(val1, val2)  CHECK_LE(val1, val2)
#define DCHECK_LT(val1, val2)  CHECK_LT(val1, val2)
#define DCHECK_GE(val1, val2)  CHECK_GE(val1, val2)
#define DCHECK_GT(val1, val2)  CHECK_GT(val1, val2)
#endif


#ifdef ERROR
#undef ERROR      // may conflict with ERROR macro on windows
#endif
enum LogSeverity {INFO = -1, WARNING = -2, ERROR = -3, FATAL = -4};

// NOTE: we add a newline to the end of the output if it's not there already
inline void LogPrintf(int severity, const char* pat, va_list ap) {
  // We write directly to the stderr file descriptor and avoid FILE
  // buffering because that may invoke malloc()
  char buf[600];
  perftools_vsnprintf(buf, sizeof(buf)-1, pat, ap);
  if (buf[0] != '\0' && buf[strlen(buf)-1] != '\n') {
    assert(strlen(buf)+1 < sizeof(buf));
    strcat(buf, "\n");
  }
  WRITE_TO_STDERR(buf, strlen(buf));
  if ((severity) == FATAL)
    abort(); // LOG(FATAL) indicates a big problem, so don't run atexit() calls
}

// Note that since the order of global constructors is unspecified,
// global code that calls RAW_LOG may execute before FLAGS_verbose is set.
// Such code will run with verbosity == 0 no matter what.
#define VLOG_IS_ON(severity) (FLAGS_verbose >= severity)

// In a better world, we'd use __VA_ARGS__, but VC++ 7 doesn't support it.
#define LOG_PRINTF(severity, pat) do {          \
  if (VLOG_IS_ON(severity)) {                   \
    va_list ap;                                 \
    va_start(ap, pat);                          \
    LogPrintf(severity, pat, ap);               \
    va_end(ap);                                 \
  }                                             \
} while (0)

// RAW_LOG is the main function; some synonyms are used in unittests.
inline void RAW_LOG(int lvl, const char* pat, ...)  { LOG_PRINTF(lvl, pat); }
inline void RAW_VLOG(int lvl, const char* pat, ...) { LOG_PRINTF(lvl, pat); }
inline void LOG(int lvl, const char* pat, ...)      { LOG_PRINTF(lvl, pat); }
inline void VLOG(int lvl, const char* pat, ...)     { LOG_PRINTF(lvl, pat); }
inline void LOG_IF(int lvl, bool cond, const char* pat, ...) {
  if (cond)  LOG_PRINTF(lvl, pat);
}
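// Illustrative calls (severities are negative; see LogSeverity above):
//   RAW_LOG(ERROR, "open of %s failed, errno %d", path, errno);
//   RAW_VLOG(1, "scanned %d threads", n);   // only logs if FLAGS_verbose >= 1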

// This isn't technically logging, but it's also IO and also is an
// attempt to be "raw" -- that is, to not use any higher-level libc
// routines that might allocate memory or (ideally) try to allocate
// locks.  We use an opaque file handle (not necessarily an int)
// to allow even more low-level stuff in the future.
// Like other "raw" routines, these functions are best effort, and
// thus don't return error codes (except RawOpenForWriting()).
#if defined(_WIN32) || defined(__CYGWIN__) || defined(__CYGWIN32__)
#ifndef NOMINMAX
#define NOMINMAX     // @#!$& windows
#endif
#include <windows.h>
typedef HANDLE RawFD;
const RawFD kIllegalRawFD = INVALID_HANDLE_VALUE;
#else
typedef int RawFD;
const RawFD kIllegalRawFD = -1;    // what open returns if it fails
#endif  // defined(_WIN32) || defined(__CYGWIN__) || defined(__CYGWIN32__)

RawFD RawOpenForWriting(const char* filename);   // uses default permissions
void RawWrite(RawFD fd, const char* buf, size_t len);
void RawClose(RawFD fd);

#endif // _LOGGING_H_
561
3party/gperftools/src/base/low_level_alloc.cc
Normal file
@ -0,0 +1,561 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2006, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

// A low-level allocator that can be used by other low-level
// modules without introducing dependency cycles.
// This allocator is slow and wasteful of memory;
// it should not be used when performance is key.

#include "base/low_level_alloc.h"
#include "base/dynamic_annotations.h"
#include "base/spinlock.h"
#include "base/logging.h"

#include "malloc_hook-inl.h"
#include <gperftools/malloc_hook.h>

#include "mmap_hook.h"

#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <new>                   // for placement-new

// A first-fit allocator with amortized logarithmic free() time.

LowLevelAlloc::PagesAllocator::~PagesAllocator() {
}

// ---------------------------------------------------------------------------
static const int kMaxLevel = 30;

// We put this class-only struct in a namespace to avoid polluting the
// global namespace with this struct name (thus risking an ODR violation).
namespace low_level_alloc_internal {
  // This struct describes one allocated block, or one free block.
  struct AllocList {
    struct Header {
      intptr_t size;  // size of entire region, including this field. Must be
                      // first.  Valid in both allocated and unallocated blocks
      intptr_t magic; // kMagicAllocated or kMagicUnallocated xor this
      LowLevelAlloc::Arena *arena; // pointer to parent arena
      void *dummy_for_alignment;   // aligns regions to 0 mod 2*sizeof(void*)
    } header;

    // Next two fields: in unallocated blocks: freelist skiplist data
    //                  in allocated blocks: overlaps with client data
    int levels;                 // levels in skiplist used
    AllocList *next[kMaxLevel]; // actually has levels elements.
                                // The AllocList node may not have room for
                                // all kMaxLevel entries.  See max_fit in
                                // LLA_SkiplistLevels()
  };
}
using low_level_alloc_internal::AllocList;


// ---------------------------------------------------------------------------
// A trivial skiplist implementation.  This is used to keep the freelist
// in address order while taking only logarithmic time per insert and delete.

// An integer approximation of log2(size/base)
// Requires size >= base.
static int IntLog2(size_t size, size_t base) {
  int result = 0;
  for (size_t i = size; i > base; i >>= 1) { // i == floor(size/2**result)
    result++;
  }
  //    floor(size / 2**result) <= base < floor(size / 2**(result-1))
  // =>     log2(size/(base+1)) <= result < 1+log2(size/base)
  // => result ~= log2(size/base)
  return result;
}
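// Worked example: IntLog2(1024, 16) == 6, since floor(1024 / 2**6) == 16.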

// Return a random integer n:  p(n)=1/(2**n) if 1 <= n; p(n)=0 if n < 1.
static int Random() {
  static uint32 r = 1;         // no locking---it's not critical
  int result = 1;
  while ((((r = r*1103515245 + 12345) >> 30) & 1) == 0) {
    result++;
  }
  return result;
}

// Return a number of skiplist levels for a node of size bytes, where
// base is the minimum node size.  Compute level=log2(size / base)+n
// where n is 1 if random is false and otherwise a random number generated with
// the standard distribution for a skiplist:  See Random() above.
// Bigger nodes tend to have more skiplist levels due to the log2(size / base)
// term, so first-fit searches touch fewer nodes.  "level" is clipped so
// level<kMaxLevel and next[level-1] will fit in the node.
// 0 < LLA_SkiplistLevels(x,y,false) <= LLA_SkiplistLevels(x,y,true) < kMaxLevel
static int LLA_SkiplistLevels(size_t size, size_t base, bool random) {
  // max_fit is the maximum number of levels that will fit in a node for the
  // given size.  We can't return more than max_fit, no matter what the
  // random number generator says.
  int max_fit = (size-OFFSETOF_MEMBER(AllocList, next)) / sizeof (AllocList *);
  int level = IntLog2(size, base) + (random? Random() : 1);
  if (level > max_fit) level = max_fit;
  if (level > kMaxLevel-1) level = kMaxLevel - 1;
  RAW_CHECK(level >= 1, "block not big enough for even one level");
  return level;
}

// Return "atleast", the first element of AllocList *head s.t. *atleast >= *e.
// For 0 <= i < head->levels, set prev[i] to "no_greater", where no_greater
// points to the last element at level i in the AllocList less than *e, or is
// head if no such element exists.
static AllocList *LLA_SkiplistSearch(AllocList *head,
                                     AllocList *e, AllocList **prev) {
  AllocList *p = head;
  for (int level = head->levels - 1; level >= 0; level--) {
    for (AllocList *n; (n = p->next[level]) != 0 && n < e; p = n) {
    }
    prev[level] = p;
  }
  return (head->levels == 0) ? 0 : prev[0]->next[0];
}

// Insert element *e into AllocList *head.  Set prev[] as LLA_SkiplistSearch.
// Requires that e->levels be previously set by the caller (using
// LLA_SkiplistLevels())
static void LLA_SkiplistInsert(AllocList *head, AllocList *e,
                               AllocList **prev) {
  LLA_SkiplistSearch(head, e, prev);
  for (; head->levels < e->levels; head->levels++) { // extend prev pointers
    prev[head->levels] = head;                       // to all *e's levels
  }
  for (int i = 0; i != e->levels; i++) { // add element to list
    e->next[i] = prev[i]->next[i];
    prev[i]->next[i] = e;
  }
}

// Remove element *e from AllocList *head.  Set prev[] as LLA_SkiplistSearch().
// Requires that e->levels be previously set by the caller (using
// LLA_SkiplistLevels())
static void LLA_SkiplistDelete(AllocList *head, AllocList *e,
                               AllocList **prev) {
  AllocList *found = LLA_SkiplistSearch(head, e, prev);
  RAW_CHECK(e == found, "element not in freelist");
  for (int i = 0; i != e->levels && prev[i]->next[i] == e; i++) {
    prev[i]->next[i] = e->next[i];
  }
  while (head->levels > 0 && head->next[head->levels - 1] == 0) {
    head->levels--;   // reduce head->levels if level unused
  }
}

// ---------------------------------------------------------------------------
// Arena implementation

struct LowLevelAlloc::Arena {
  Arena() : mu(SpinLock::LINKER_INITIALIZED) {} // does nothing; for static init
  explicit Arena(int) : pagesize(0) {}  // set pagesize to zero explicitly
                                        // for non-static init

  SpinLock mu;            // protects freelist, allocation_count,
                          // pagesize, roundup, min_size
  AllocList freelist;     // head of free list; sorted by addr (under mu)
  int32 allocation_count; // count of allocated blocks (under mu)
  int32 flags;            // flags passed to NewArena (ro after init)
  size_t pagesize;        // ==getpagesize()  (init under mu, then ro)
  size_t roundup;         // lowest power of 2 >= max(16,sizeof (AllocList))
                          // (init under mu, then ro)
  size_t min_size;        // smallest allocation block size
                          // (init under mu, then ro)
  PagesAllocator *allocator;
};

// The default arena, which is used when 0 is passed instead of an Arena
// pointer.
static struct LowLevelAlloc::Arena default_arena;

// Non-malloc-hooked arenas: used only to allocate metadata for arenas that
// do not want malloc hook reporting, so that for them there's no malloc hook
// reporting even during arena creation.
static struct LowLevelAlloc::Arena unhooked_arena;
static struct LowLevelAlloc::Arena unhooked_async_sig_safe_arena;

namespace {

  class DefaultPagesAllocator : public LowLevelAlloc::PagesAllocator {
  public:
    virtual ~DefaultPagesAllocator() {}
    virtual void *MapPages(int32 flags, size_t size);
    virtual void UnMapPages(int32 flags, void *addr, size_t size);
  };

}

// magic numbers to identify allocated and unallocated blocks
static const intptr_t kMagicAllocated = 0x4c833e95;
static const intptr_t kMagicUnallocated = ~kMagicAllocated;

namespace {
  class SCOPED_LOCKABLE ArenaLock {
  public:
    explicit ArenaLock(LowLevelAlloc::Arena *arena)
        EXCLUSIVE_LOCK_FUNCTION(arena->mu)
        : left_(false), mask_valid_(false), arena_(arena) {
      if ((arena->flags & LowLevelAlloc::kAsyncSignalSafe) != 0) {
        // We've decided not to support async-signal-safe arena use until
        // there is a demonstrated need.  Here's how one could do it though
        // (would need to be made more portable).
#if 0
        sigset_t all;
        sigfillset(&all);
        this->mask_valid_ =
            (pthread_sigmask(SIG_BLOCK, &all, &this->mask_) == 0);
#else
        RAW_CHECK(false, "We do not yet support async-signal-safe arena.");
#endif
      }
      this->arena_->mu.Lock();
    }
    ~ArenaLock() { RAW_CHECK(this->left_, "haven't left Arena region"); }
    void Leave() UNLOCK_FUNCTION() {
      this->arena_->mu.Unlock();
#if 0
      if (this->mask_valid_) {
        pthread_sigmask(SIG_SETMASK, &this->mask_, 0);
      }
#endif
      this->left_ = true;
    }
  private:
    bool left_;       // whether left region
    bool mask_valid_;
#if 0
    sigset_t mask_;   // old mask of blocked signals
#endif
    LowLevelAlloc::Arena *arena_;
    DISALLOW_COPY_AND_ASSIGN(ArenaLock);
  };
} // anonymous namespace

// create an appropriate magic number for an object at "ptr"
// "magic" should be kMagicAllocated or kMagicUnallocated
inline static intptr_t Magic(intptr_t magic, AllocList::Header *ptr) {
  return magic ^ reinterpret_cast<intptr_t>(ptr);
}
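// XOR-ing in the header's own address ties the magic value to its location,
// so a header that is corrupted or copied to a different address will fail
// the RAW_CHECK magic comparisons elsewhere in this file.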

// Initialize the fields of an Arena
static void ArenaInit(LowLevelAlloc::Arena *arena) {
  if (arena->pagesize == 0) {
    arena->pagesize = getpagesize();
    // Round up block sizes to a power of two close to the header size.
    arena->roundup = 16;
    while (arena->roundup < sizeof (arena->freelist.header)) {
      arena->roundup += arena->roundup;
    }
    // Don't allocate blocks less than twice the roundup size to avoid tiny
    // free blocks.
    arena->min_size = 2 * arena->roundup;
    arena->freelist.header.size = 0;
    arena->freelist.header.magic =
        Magic(kMagicUnallocated, &arena->freelist.header);
    arena->freelist.header.arena = arena;
    arena->freelist.levels = 0;
    memset(arena->freelist.next, 0, sizeof (arena->freelist.next));
    arena->allocation_count = 0;
    if (arena == &default_arena) {
      // Default arena should be hooked, e.g. for heap-checker to trace
      // pointer chains through objects in the default arena.
      arena->flags = LowLevelAlloc::kCallMallocHook;
    } else if (arena == &unhooked_async_sig_safe_arena) {
      arena->flags = LowLevelAlloc::kAsyncSignalSafe;
    } else {
      arena->flags = 0;   // other arenas' flags may be overridden by client,
                          // but unhooked_arena will have 0 in 'flags'.
    }
    arena->allocator = LowLevelAlloc::GetDefaultPagesAllocator();
  }
}

// L < meta_data_arena->mu
LowLevelAlloc::Arena *LowLevelAlloc::NewArena(int32 flags,
                                              Arena *meta_data_arena) {
  return NewArenaWithCustomAlloc(flags, meta_data_arena, NULL);
}

// L < meta_data_arena->mu
LowLevelAlloc::Arena *LowLevelAlloc::NewArenaWithCustomAlloc(int32 flags,
                                                             Arena *meta_data_arena,
                                                             PagesAllocator *allocator) {
  RAW_CHECK(meta_data_arena != 0, "must pass a valid arena");
  if (meta_data_arena == &default_arena) {
    if ((flags & LowLevelAlloc::kAsyncSignalSafe) != 0) {
      meta_data_arena = &unhooked_async_sig_safe_arena;
    } else if ((flags & LowLevelAlloc::kCallMallocHook) == 0) {
      meta_data_arena = &unhooked_arena;
    }
  }
  // Arena(0) uses the constructor for non-static contexts
  Arena *result =
    new (AllocWithArena(sizeof (*result), meta_data_arena)) Arena(0);
  ArenaInit(result);
  result->flags = flags;
  if (allocator) {
    result->allocator = allocator;
  }
  return result;
}

// L < arena->mu, L < arena->arena->mu
bool LowLevelAlloc::DeleteArena(Arena *arena) {
  RAW_CHECK(arena != 0 && arena != &default_arena && arena != &unhooked_arena,
            "may not delete default arena");
  ArenaLock section(arena);
  bool empty = (arena->allocation_count == 0);
  section.Leave();
  if (empty) {
    while (arena->freelist.next[0] != 0) {
      AllocList *region = arena->freelist.next[0];
      size_t size = region->header.size;
      arena->freelist.next[0] = region->next[0];
      RAW_CHECK(region->header.magic ==
                Magic(kMagicUnallocated, &region->header),
                "bad magic number in DeleteArena()");
      RAW_CHECK(region->header.arena == arena,
                "bad arena pointer in DeleteArena()");
      RAW_CHECK(size % arena->pagesize == 0,
                "empty arena has non-page-aligned block size");
      RAW_CHECK(reinterpret_cast<intptr_t>(region) % arena->pagesize == 0,
                "empty arena has non-page-aligned block");
      int munmap_result = tcmalloc::DirectMUnMap((arena->flags & LowLevelAlloc::kAsyncSignalSafe) == 0,
                                                 region, size);
      RAW_CHECK(munmap_result == 0,
                "LowLevelAlloc::DeleteArena: munmap failed address");
    }
    Free(arena);
  }
  return empty;
}

// ---------------------------------------------------------------------------

// Return value rounded up to next multiple of align.
// align must be a power of two.
static intptr_t RoundUp(intptr_t addr, intptr_t align) {
  return (addr + align - 1) & ~(align - 1);
}
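// e.g. RoundUp(23, 16) == 32 and RoundUp(32, 16) == 32.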

// Equivalent to "return prev->next[i]" but with sanity checking
// that the freelist is in the correct order, that it
// consists of regions marked "unallocated", and that no two regions
// are adjacent in memory (they should have been coalesced).
// L < arena->mu
static AllocList *Next(int i, AllocList *prev, LowLevelAlloc::Arena *arena) {
  RAW_CHECK(i < prev->levels, "too few levels in Next()");
  AllocList *next = prev->next[i];
  if (next != 0) {
    RAW_CHECK(next->header.magic == Magic(kMagicUnallocated, &next->header),
              "bad magic number in Next()");
    RAW_CHECK(next->header.arena == arena,
              "bad arena pointer in Next()");
    if (prev != &arena->freelist) {
      RAW_CHECK(prev < next, "unordered freelist");
      RAW_CHECK(reinterpret_cast<char *>(prev) + prev->header.size <
                reinterpret_cast<char *>(next), "malformed freelist");
    }
  }
  return next;
}

// Coalesce list item "a" with its successor if they are adjacent.
static void Coalesce(AllocList *a) {
  AllocList *n = a->next[0];
  if (n != 0 && reinterpret_cast<char *>(a) + a->header.size ==
                    reinterpret_cast<char *>(n)) {
    LowLevelAlloc::Arena *arena = a->header.arena;
    a->header.size += n->header.size;
    n->header.magic = 0;
    n->header.arena = 0;
    AllocList *prev[kMaxLevel];
    LLA_SkiplistDelete(&arena->freelist, n, prev);
    LLA_SkiplistDelete(&arena->freelist, a, prev);
    a->levels = LLA_SkiplistLevels(a->header.size, arena->min_size, true);
    LLA_SkiplistInsert(&arena->freelist, a, prev);
  }
}

// Adds block at location "v" to the free list
// L >= arena->mu
static void AddToFreelist(void *v, LowLevelAlloc::Arena *arena) {
  AllocList *f = reinterpret_cast<AllocList *>(
                        reinterpret_cast<char *>(v) - sizeof (f->header));
  RAW_CHECK(f->header.magic == Magic(kMagicAllocated, &f->header),
            "bad magic number in AddToFreelist()");
  RAW_CHECK(f->header.arena == arena,
            "bad arena pointer in AddToFreelist()");
  f->levels = LLA_SkiplistLevels(f->header.size, arena->min_size, true);
  AllocList *prev[kMaxLevel];
  LLA_SkiplistInsert(&arena->freelist, f, prev);
  f->header.magic = Magic(kMagicUnallocated, &f->header);
  Coalesce(f);                  // maybe coalesce with successor
  Coalesce(prev[0]);            // maybe coalesce with predecessor
}
|
||||
|
||||
// Frees storage allocated by LowLevelAlloc::Alloc().
|
||||
// L < arena->mu
|
||||
void LowLevelAlloc::Free(void *v) {
|
||||
if (v != 0) {
|
||||
AllocList *f = reinterpret_cast<AllocList *>(
|
||||
reinterpret_cast<char *>(v) - sizeof (f->header));
|
||||
RAW_CHECK(f->header.magic == Magic(kMagicAllocated, &f->header),
|
||||
"bad magic number in Free()");
|
||||
LowLevelAlloc::Arena *arena = f->header.arena;
|
||||
if ((arena->flags & kCallMallocHook) != 0) {
|
||||
MallocHook::InvokeDeleteHook(v);
|
||||
}
|
||||
ArenaLock section(arena);
|
||||
AddToFreelist(v, arena);
|
||||
RAW_CHECK(arena->allocation_count > 0, "nothing in arena to free");
|
||||
arena->allocation_count--;
|
||||
section.Leave();
|
||||
}
|
||||
}
|
||||
|
||||
// allocates and returns a block of size bytes, to be freed with Free()
|
||||
// L < arena->mu
|
||||
static void *DoAllocWithArena(size_t request, LowLevelAlloc::Arena *arena) {
|
||||
void *result = 0;
|
||||
if (request != 0) {
|
||||
AllocList *s; // will point to region that satisfies request
|
||||
ArenaLock section(arena);
|
||||
ArenaInit(arena);
|
||||
// round up with header
|
||||
size_t req_rnd = RoundUp(request + sizeof (s->header), arena->roundup);
|
||||
for (;;) { // loop until we find a suitable region
|
||||
// find the minimum levels that a block of this size must have
|
||||
int i = LLA_SkiplistLevels(req_rnd, arena->min_size, false) - 1;
|
||||
if (i < arena->freelist.levels) { // potential blocks exist
|
||||
AllocList *before = &arena->freelist; // predecessor of s
|
||||
while ((s = Next(i, before, arena)) != 0 && s->header.size < req_rnd) {
|
||||
before = s;
|
||||
}
|
||||
if (s != 0) { // we found a region
|
||||
break;
|
||||
}
|
||||
}
|
||||
// we unlock before mmap() both because mmap() may call a callback hook,
|
||||
// and because it may be slow.
|
||||
arena->mu.Unlock();
|
||||
// mmap generous 64K chunks to decrease
|
||||
// the chances/impact of fragmentation:
|
||||
size_t new_pages_size = RoundUp(req_rnd, arena->pagesize * 16);
|
||||
void *new_pages = arena->allocator->MapPages(arena->flags, new_pages_size);
|
||||
arena->mu.Lock();
|
||||
s = reinterpret_cast<AllocList *>(new_pages);
|
||||
s->header.size = new_pages_size;
|
||||
// Pretend the block is allocated; call AddToFreelist() to free it.
|
||||
s->header.magic = Magic(kMagicAllocated, &s->header);
|
||||
s->header.arena = arena;
|
||||
AddToFreelist(&s->levels, arena); // insert new region into free list
|
||||
}
|
||||
AllocList *prev[kMaxLevel];
|
||||
LLA_SkiplistDelete(&arena->freelist, s, prev); // remove from free list
|
||||
// s points to the first free region that's big enough
|
||||
if (req_rnd + arena->min_size <= s->header.size) { // big enough to split
|
||||
AllocList *n = reinterpret_cast<AllocList *>
|
||||
(req_rnd + reinterpret_cast<char *>(s));
|
||||
n->header.size = s->header.size - req_rnd;
|
||||
n->header.magic = Magic(kMagicAllocated, &n->header);
|
||||
n->header.arena = arena;
|
||||
s->header.size = req_rnd;
|
||||
AddToFreelist(&n->levels, arena);
|
||||
}
|
||||
s->header.magic = Magic(kMagicAllocated, &s->header);
|
||||
RAW_CHECK(s->header.arena == arena, "");
|
||||
arena->allocation_count++;
|
||||
section.Leave();
|
||||
result = &s->levels;
|
||||
}
|
||||
return result;
|
||||
}
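
// Worked example of the sizing logic above (illustrative figures only: a
// hypothetical 16-byte header, 16-byte roundup and 4096-byte pages):
//   request = 100  =>  req_rnd = RoundUp(100 + 16, 16) = 128
//   freelist miss  =>  new_pages_size = RoundUp(128, 4096 * 16) = 65536
// so a miss always maps at least a 64K chunk, and the 65536 - 128 = 65408
// byte remainder is split off and returned to the freelist.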

void *LowLevelAlloc::Alloc(size_t request) {
  void *result = DoAllocWithArena(request, &default_arena);
  if ((default_arena.flags & kCallMallocHook) != 0) {
    // this call must be directly in the user-called allocator function
    // for MallocHook::GetCallerStackTrace to work properly
    MallocHook::InvokeNewHook(result, request);
  }
  return result;
}

void *LowLevelAlloc::AllocWithArena(size_t request, Arena *arena) {
  RAW_CHECK(arena != 0, "must pass a valid arena");
  void *result = DoAllocWithArena(request, arena);
  if ((arena->flags & kCallMallocHook) != 0) {
    // this call must be directly in the user-called allocator function
    // for MallocHook::GetCallerStackTrace to work properly
    MallocHook::InvokeNewHook(result, request);
  }
  return result;
}

LowLevelAlloc::Arena *LowLevelAlloc::DefaultArena() {
  return &default_arena;
}

static DefaultPagesAllocator *default_pages_allocator;
static union {
  char chars[sizeof(DefaultPagesAllocator)];
  void *ptr;
} debug_pages_allocator_space;

LowLevelAlloc::PagesAllocator *LowLevelAlloc::GetDefaultPagesAllocator(void) {
  if (default_pages_allocator) {
    return default_pages_allocator;
  }
  default_pages_allocator = new (debug_pages_allocator_space.chars) DefaultPagesAllocator();
  return default_pages_allocator;
}

void *DefaultPagesAllocator::MapPages(int32 flags, size_t size) {
  const bool invoke_hooks = ((flags & LowLevelAlloc::kAsyncSignalSafe) == 0);

  auto result = tcmalloc::DirectAnonMMap(invoke_hooks, size);

  RAW_CHECK(result.success, "mmap error");

  return result.addr;
}

void DefaultPagesAllocator::UnMapPages(int32 flags, void *region, size_t size) {
  const bool invoke_hooks = ((flags & LowLevelAlloc::kAsyncSignalSafe) == 0);

  int munmap_result = tcmalloc::DirectMUnMap(invoke_hooks, region, size);
  RAW_CHECK(munmap_result == 0,
            "LowLevelAlloc::DeleteArena: munmap failed address");
}
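Taken together, the arena API above is used roughly as in the following minimal sketch (ours, for illustration; the function and variable names are hypothetical):

#include "base/low_level_alloc.h"

void LowLevelAllocExample() {
  // Metadata for the new arena lives in the default arena.
  LowLevelAlloc::Arena *arena =
      LowLevelAlloc::NewArena(0, LowLevelAlloc::DefaultArena());

  // Crashes rather than returning 0 when memory is unavailable.
  void *p = LowLevelAlloc::AllocWithArena(128, arena);

  // The block's header remembers its arena, so a plain Free() suffices.
  LowLevelAlloc::Free(p);

  // Succeeds (and munmaps the arena's pages) only once every block is freed.
  bool deleted = LowLevelAlloc::DeleteArena(arena);
  (void) deleted;
}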
130
3party/gperftools/src/base/low_level_alloc.h
Normal file
@@ -0,0 +1,130 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2006, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#if !defined(_BASE_LOW_LEVEL_ALLOC_H_)
#define _BASE_LOW_LEVEL_ALLOC_H_

// A simple thread-safe memory allocator that does not depend on
// mutexes or thread-specific data.  It is intended to be used
// sparingly, and only when malloc() would introduce an unwanted
// dependency, such as inside the heap-checker.

#include <config.h>
#include <stddef.h>             // for size_t
#include "base/basictypes.h"

#ifndef __APPLE__
// As of now, whatever clang version Apple ships (clang-1205.0.22.11)
// somehow miscompiles LowLevelAlloc when we try this section
// machinery.  Thankfully, we only need sections for the heap leak
// checker, which is Linux-only anyway.
#define ATTR_MALLOC_SECTION ATTRIBUTE_SECTION(malloc_hook)
#else
#define ATTR_MALLOC_SECTION
#endif

class LowLevelAlloc {
 public:
  class PagesAllocator {
  public:
    virtual ~PagesAllocator();
    virtual void *MapPages(int32 flags, size_t size) = 0;
    virtual void UnMapPages(int32 flags, void *addr, size_t size) = 0;
  };

  static PagesAllocator *GetDefaultPagesAllocator(void);

  struct Arena;       // an arena from which memory may be allocated

  // Returns a pointer to a block of at least "request" bytes
  // that have been newly allocated from the specific arena.
  // Alloc() calls use the DefaultArena().
  // Returns 0 if passed request==0.
  // Does not return 0 under other circumstances; it crashes if memory
  // is not available.
  static void *Alloc(size_t request)
    ATTR_MALLOC_SECTION;
  static void *AllocWithArena(size_t request, Arena *arena)
    ATTR_MALLOC_SECTION;

  // Deallocates a region of memory that was previously allocated with
  // Alloc().  Does nothing if passed 0.  "s" must be either 0,
  // or must have been returned from a call to Alloc() and not yet passed to
  // Free() since that call to Alloc().  The space is returned to the arena
  // from which it was allocated.
  static void Free(void *s) ATTR_MALLOC_SECTION;

  // ATTR_MALLOC_SECTION on Alloc* and Free puts all callers of
  // MallocHook::Invoke* in this module into a special section,
  // so that MallocHook::GetCallerStackTrace can function accurately.

  // Create a new arena.
  // The root metadata for the new arena is allocated in the
  // meta_data_arena; the DefaultArena() can be passed for meta_data_arena.
  // These values may be ORed into flags:
  enum {
    // Report calls to Alloc() and Free() via the MallocHook interface.
    // Set in the DefaultArena.
    kCallMallocHook = 0x0001,

    // Make calls to Alloc(), Free() be async-signal-safe.  Not set in
    // DefaultArena().
    kAsyncSignalSafe = 0x0002,

    // When used with DefaultArena(), the NewArena() and DeleteArena() calls
    // obey the flags given explicitly in the NewArena() call, even if those
    // flags differ from the settings in DefaultArena().  So the call
    // NewArena(kAsyncSignalSafe, DefaultArena()) is itself async-signal-safe,
    // as well as generating an arena that provides async-signal-safe
    // Alloc/Free.
  };
  static Arena *NewArena(int32 flags, Arena *meta_data_arena);

  // note: the pages allocator will never be destroyed and allocated pages
  // will never be freed.  When allocator is NULL, this is the same as NewArena.
  static Arena *NewArenaWithCustomAlloc(int32 flags, Arena *meta_data_arena, PagesAllocator *allocator);

  // Destroys an arena allocated by NewArena and returns true,
  // provided no allocated blocks remain in the arena.
  // If allocated blocks remain in the arena, does nothing and
  // returns false.
  // It is illegal to attempt to destroy the DefaultArena().
  static bool DeleteArena(Arena *arena);

  // The default arena that always exists.
  static Arena *DefaultArena();

 private:
  LowLevelAlloc();      // no instances
};

#endif
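The PagesAllocator hook above allows substituting the page source. A sketch of a custom allocator follows (ours, for illustration; it assumes POSIX mmap and mirrors DefaultPagesAllocator's crash-on-failure contract):

#include <stdlib.h>     // for abort()
#include <sys/mman.h>   // for mmap()/munmap()
#include "base/low_level_alloc.h"

class MmapPagesAllocator : public LowLevelAlloc::PagesAllocator {
 public:
  virtual ~MmapPagesAllocator() {}
  virtual void *MapPages(int32 flags, size_t size) {
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) abort();   // callers never expect a NULL page block
    return p;
  }
  virtual void UnMapPages(int32 flags, void *addr, size_t size) {
    munmap(addr, size);
  }
};

// Usage (note the header's caveat that the allocator is never destroyed
// and its pages are never freed):
//   static MmapPagesAllocator pages;
//   LowLevelAlloc::Arena *a = LowLevelAlloc::NewArenaWithCustomAlloc(
//       0, LowLevelAlloc::DefaultArena(), &pages);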
332
3party/gperftools/src/base/simple_mutex.h
Normal file
@@ -0,0 +1,332 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// ---
// Author: Craig Silverstein.
//
// A simple mutex wrapper, supporting locks and read-write locks.
// You should assume the locks are *not* re-entrant.
//
// To use: you should define the following macros in your configure.ac:
//   ACX_PTHREAD
//   AC_RWLOCK
// The latter is defined in ../autoconf.
//
// This class is meant to be internal-only and should be wrapped by an
// internal namespace.  Before you use this module, please give the
// name of your internal namespace for this module.  Or, if you want
// to expose it, you'll want to move it to the Google namespace.  We
// cannot put this class in the global namespace because there can be some
// problems when we have multiple versions of Mutex in each shared object.
//
// NOTE: TryLock() is broken for NO_THREADS mode, at least in NDEBUG
//       mode.
//
// CYGWIN NOTE: Cygwin support for rwlock seems to be buggy:
//    http://www.cygwin.com/ml/cygwin/2008-12/msg00017.html
// Because of that, we might as well use windows locks for
// cygwin.  They seem to be more reliable than the cygwin pthreads layer.
//
// TRICKY IMPLEMENTATION NOTE:
// This class is designed to be safe to use during
// dynamic-initialization -- that is, by global constructors that are
// run before main() starts.  The issue in this case is that
// dynamic-initialization happens in an unpredictable order, and it
// could be that someone else's dynamic initializer could call a
// function that tries to acquire this mutex -- but that all happens
// before this mutex's constructor has run.  (This can happen even if
// the mutex and the function that uses the mutex are in the same .cc
// file.)  Basically, because Mutex does non-trivial work in its
// constructor, it's not, in the naive implementation, safe to use
// before dynamic initialization has run on it.
//
// The solution used here is to pair the actual mutex primitive with a
// bool that is set to true when the mutex is dynamically initialized.
// (Before that it's false.)  Then we modify all mutex routines to
// look at the bool, and not try to lock/unlock until the bool makes
// it to true (which happens after the Mutex constructor has run.)
//
// This works because before main() starts -- particularly, during
// dynamic initialization -- there are no threads, so a) it's ok that
// the mutex operations are a no-op, since we don't need locking then
// anyway; and b) we can be quite confident our bool won't change
// state between a call to Lock() and a call to Unlock() (that would
// require a global constructor in one translation unit to call Lock()
// and another global constructor in another translation unit to call
// Unlock() later, which is pretty perverse).
//
// That said, it's tricky, and can conceivably fail; it's safest to
// avoid trying to acquire a mutex in a global constructor, if you
// can.  One way it can fail is that a really smart compiler might
// initialize the bool to true at static-initialization time (too
// early) rather than at dynamic-initialization time.  To discourage
// that, we set is_safe_ to true in code (not the constructor
// colon-initializer) and set it to true via a function that always
// evaluates to true, but that the compiler can't know always
// evaluates to true.  This should be good enough.
//
// A related issue is code that could try to access the mutex
// after it's been destroyed in the global destructors (because
// the Mutex global destructor runs before some other global
// destructor, that tries to acquire the mutex).  The way we
// deal with this is by taking a constructor arg that global
// mutexes should pass in, that causes the destructor to do no
// work.  We still depend on the compiler not doing anything
// weird to a Mutex's memory after it is destroyed, but for a
// static global variable, that's pretty safe.

#ifndef GOOGLE_MUTEX_H_
#define GOOGLE_MUTEX_H_

#include <config.h>

#if defined(NO_THREADS)
  typedef int MutexType;      // to keep a lock-count
#elif defined(_WIN32) || defined(__CYGWIN__) || defined(__CYGWIN32__)
# ifndef WIN32_LEAN_AND_MEAN
#   define WIN32_LEAN_AND_MEAN  // We only need minimal includes
# endif
  // We need Windows NT or later for TryEnterCriticalSection().  If you
  // don't need that functionality, you can remove these _WIN32_WINNT
  // lines, and change TryLock() to assert(0) or something.
# ifndef _WIN32_WINNT
#   define _WIN32_WINNT 0x0400
# endif
# include <windows.h>
  typedef CRITICAL_SECTION MutexType;
#elif defined(HAVE_PTHREAD) && defined(HAVE_RWLOCK)
  // Needed for pthread_rwlock_*.  If it causes problems, you could take it
  // out, but then you'd have to unset HAVE_RWLOCK (at least on linux -- it
  // *does* cause problems for FreeBSD, or MacOSX, but isn't needed
  // for locking there.)
# ifdef __linux__
#   define _XOPEN_SOURCE 500  // may be needed to get the rwlock calls
# endif
# include <pthread.h>
  typedef pthread_rwlock_t MutexType;
#elif defined(HAVE_PTHREAD)
# include <pthread.h>
  typedef pthread_mutex_t MutexType;
#else
# error Need to implement mutex.h for your architecture, or #define NO_THREADS
#endif

#include <assert.h>
#include <stdlib.h>      // for abort()

#define MUTEX_NAMESPACE perftools_mutex_namespace

namespace MUTEX_NAMESPACE {

class Mutex {
 public:
  // This is used for the single-arg constructor
  enum LinkerInitialized { LINKER_INITIALIZED };

  // Create a Mutex that is not held by anybody.  This constructor is
  // typically used for Mutexes allocated on the heap or the stack.
  inline Mutex();
  // This constructor should be used for global, static Mutex objects.
  // It inhibits work being done by the destructor, which makes it
  // safer for code that tries to acquire this mutex in its global
  // destructor.
  inline Mutex(LinkerInitialized);

  // Destructor
  inline ~Mutex();

  inline void Lock();    // Block if needed until free then acquire exclusively
  inline void Unlock();  // Release a lock acquired via Lock()
  inline bool TryLock(); // If free, Lock() and return true, else return false
  // Note that on systems that don't support read-write locks, these may
  // be implemented as synonyms to Lock() and Unlock().  So you can use
  // these for efficiency, but don't use them anyplace where being able
  // to do shared reads is necessary to avoid deadlock.
  inline void ReaderLock();   // Block until free or shared then acquire a share
  inline void ReaderUnlock(); // Release a read share of this Mutex
  inline void WriterLock() { Lock(); }     // Acquire an exclusive lock
  inline void WriterUnlock() { Unlock(); } // Release a lock from WriterLock()

 private:
  MutexType mutex_;
  // We want to make sure that the compiler sets is_safe_ to true only
  // when we tell it to, and never makes assumptions is_safe_ is
  // always true.  volatile is the most reliable way to do that.
  volatile bool is_safe_;
  // This indicates which constructor was called.
  bool destroy_;

  inline void SetIsSafe() { is_safe_ = true; }

  // Catch the error of writing Mutex when intending MutexLock.
  Mutex(Mutex* /*ignored*/) {}
  // Disallow "evil" constructors
  Mutex(const Mutex&);
  void operator=(const Mutex&);
};

// Now the implementation of Mutex for various systems
#if defined(NO_THREADS)

// When we don't have threads, we can be either reading or writing,
// but not both.  We can have lots of readers at once (in no-threads
// mode, that's most likely to happen in recursive function calls),
// but only one writer.  We represent this by having mutex_ be -1 when
// writing and a number > 0 when reading (and 0 when no lock is held).
//
// In debug mode, we assert these invariants, while in non-debug mode
// we do nothing, for efficiency.  That's why everything is in an
// assert.

Mutex::Mutex() : mutex_(0) { }
Mutex::Mutex(Mutex::LinkerInitialized) : mutex_(0) { }
Mutex::~Mutex()            { assert(mutex_ == 0); }
void Mutex::Lock()         { assert(--mutex_ == -1); }
void Mutex::Unlock()       { assert(mutex_++ == -1); }
bool Mutex::TryLock()      { if (mutex_) return false; Lock(); return true; }
void Mutex::ReaderLock()   { assert(++mutex_ > 0); }
void Mutex::ReaderUnlock() { assert(mutex_-- > 0); }

#elif defined(_WIN32) || defined(__CYGWIN__) || defined(__CYGWIN32__)

Mutex::Mutex() : destroy_(true) {
  InitializeCriticalSection(&mutex_);
  SetIsSafe();
}
Mutex::Mutex(LinkerInitialized) : destroy_(false) {
  InitializeCriticalSection(&mutex_);
  SetIsSafe();
}
Mutex::~Mutex()       { if (destroy_) DeleteCriticalSection(&mutex_); }
void Mutex::Lock()    { if (is_safe_) EnterCriticalSection(&mutex_); }
void Mutex::Unlock()  { if (is_safe_) LeaveCriticalSection(&mutex_); }
bool Mutex::TryLock() { return is_safe_ ?
                               TryEnterCriticalSection(&mutex_) != 0 : true; }
void Mutex::ReaderLock()   { Lock(); }      // we don't have read-write locks
void Mutex::ReaderUnlock() { Unlock(); }

#elif defined(HAVE_PTHREAD) && defined(HAVE_RWLOCK)

#define SAFE_PTHREAD(fncall)  do {   /* run fncall if is_safe_ is true */  \
  if (is_safe_ && fncall(&mutex_) != 0) abort();                           \
} while (0)

Mutex::Mutex() : destroy_(true) {
  SetIsSafe();
  if (is_safe_ && pthread_rwlock_init(&mutex_, NULL) != 0) abort();
}
Mutex::Mutex(Mutex::LinkerInitialized) : destroy_(false) {
  SetIsSafe();
  if (is_safe_ && pthread_rwlock_init(&mutex_, NULL) != 0) abort();
}
Mutex::~Mutex()       { if (destroy_) SAFE_PTHREAD(pthread_rwlock_destroy); }
void Mutex::Lock()    { SAFE_PTHREAD(pthread_rwlock_wrlock); }
void Mutex::Unlock()  { SAFE_PTHREAD(pthread_rwlock_unlock); }
bool Mutex::TryLock() { return is_safe_ ?
                               pthread_rwlock_trywrlock(&mutex_) == 0 : true; }
void Mutex::ReaderLock()   { SAFE_PTHREAD(pthread_rwlock_rdlock); }
void Mutex::ReaderUnlock() { SAFE_PTHREAD(pthread_rwlock_unlock); }
#undef SAFE_PTHREAD

#elif defined(HAVE_PTHREAD)

#define SAFE_PTHREAD(fncall)  do {   /* run fncall if is_safe_ is true */  \
  if (is_safe_ && fncall(&mutex_) != 0) abort();                           \
} while (0)

Mutex::Mutex() : destroy_(true) {
  SetIsSafe();
  if (is_safe_ && pthread_mutex_init(&mutex_, NULL) != 0) abort();
}
Mutex::Mutex(Mutex::LinkerInitialized) : destroy_(false) {
  SetIsSafe();
  if (is_safe_ && pthread_mutex_init(&mutex_, NULL) != 0) abort();
}
Mutex::~Mutex()       { if (destroy_) SAFE_PTHREAD(pthread_mutex_destroy); }
void Mutex::Lock()    { SAFE_PTHREAD(pthread_mutex_lock); }
void Mutex::Unlock()  { SAFE_PTHREAD(pthread_mutex_unlock); }
bool Mutex::TryLock() { return is_safe_ ?
                               pthread_mutex_trylock(&mutex_) == 0 : true; }
void Mutex::ReaderLock()   { Lock(); }
void Mutex::ReaderUnlock() { Unlock(); }
#undef SAFE_PTHREAD

#endif

// --------------------------------------------------------------------------
// Some helper classes

// MutexLock(mu) acquires mu when constructed and releases it when destroyed.
class MutexLock {
 public:
  explicit MutexLock(Mutex *mu) : mu_(mu) { mu_->Lock(); }
  ~MutexLock() { mu_->Unlock(); }
 private:
  Mutex * const mu_;
  // Disallow "evil" constructors
  MutexLock(const MutexLock&);
  void operator=(const MutexLock&);
};

// ReaderMutexLock and WriterMutexLock do the same, for rwlocks
class ReaderMutexLock {
 public:
  explicit ReaderMutexLock(Mutex *mu) : mu_(mu) { mu_->ReaderLock(); }
  ~ReaderMutexLock() { mu_->ReaderUnlock(); }
 private:
  Mutex * const mu_;
  // Disallow "evil" constructors
  ReaderMutexLock(const ReaderMutexLock&);
  void operator=(const ReaderMutexLock&);
};

class WriterMutexLock {
 public:
  explicit WriterMutexLock(Mutex *mu) : mu_(mu) { mu_->WriterLock(); }
  ~WriterMutexLock() { mu_->WriterUnlock(); }
 private:
  Mutex * const mu_;
  // Disallow "evil" constructors
  WriterMutexLock(const WriterMutexLock&);
  void operator=(const WriterMutexLock&);
};

// Catch bug where variable name is omitted, e.g. MutexLock (&mu);
#define MutexLock(x) COMPILE_ASSERT(0, mutex_lock_decl_missing_var_name)
#define ReaderMutexLock(x) COMPILE_ASSERT(0, rmutex_lock_decl_missing_var_name)
#define WriterMutexLock(x) COMPILE_ASSERT(0, wmutex_lock_decl_missing_var_name)

}  // namespace MUTEX_NAMESPACE

using namespace MUTEX_NAMESPACE;

#undef MUTEX_NAMESPACE

#endif  /* GOOGLE_MUTEX_H_ */
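A short sketch of the intended usage pattern for Mutex and its scoped lockers (ours, for illustration; the counter is a hypothetical example):

#include "base/simple_mutex.h"

// The LINKER_INITIALIZED form suppresses destructor work, so the mutex
// stays usable from other global constructors and destructors.
static Mutex counter_mu(Mutex::LINKER_INITIALIZED);
static int counter = 0;

int IncrementCounter() {
  MutexLock l(&counter_mu);        // acquired here, released when l leaves scope
  return ++counter;
}

int ReadCounter() {
  ReaderMutexLock l(&counter_mu);  // shared where rwlocks exist, else exclusive
  return counter;
}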
144
3party/gperftools/src/base/spinlock.cc
Normal file
@@ -0,0 +1,144 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2006, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat
 */

#include <config.h>
#include "base/spinlock.h"
#include "base/spinlock_internal.h"
#include "base/sysinfo.h"   /* for GetSystemCPUsCount() */

// NOTE on the Lock-state values:
//
//   kSpinLockFree represents the unlocked state
//   kSpinLockHeld represents the locked state with no waiters
//   kSpinLockSleeper represents the locked state with waiters

static int adaptive_spin_count = 0;

const base::LinkerInitialized SpinLock::LINKER_INITIALIZED =
    base::LINKER_INITIALIZED;

namespace {
struct SpinLock_InitHelper {
  SpinLock_InitHelper() {
    // On multi-cpu machines, spin for longer before yielding
    // the processor or sleeping.  Reduces idle time significantly.
    if (GetSystemCPUsCount() > 1) {
      adaptive_spin_count = 1000;
    }
  }
};

// Hook into global constructor execution:
// We do not do adaptive spinning before that,
// but nothing lock-intensive should be going on at that time.
static SpinLock_InitHelper init_helper;

inline void SpinlockPause(void) {
#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))
  __asm__ __volatile__("rep; nop" : : );
#elif defined(__GNUC__) && defined(__aarch64__)
  __asm__ __volatile__("isb" : : );
#endif
}

}  // unnamed namespace

// Monitor the lock to see if its value changes within some time
// period (adaptive_spin_count loop iterations).  The last value read
// from the lock is returned from the method.
int SpinLock::SpinLoop() {
  int c = adaptive_spin_count;
  while (lockword_.load(std::memory_order_relaxed) != kSpinLockFree && --c > 0) {
    SpinlockPause();
  }
  int old = kSpinLockFree;
  lockword_.compare_exchange_strong(old, kSpinLockSleeper, std::memory_order_acquire);
  // Note that trying to set the lock word to the 'have sleeper' state
  // might look unnecessary, but:
  //
  // *) pay attention to the second call to SpinLoop at the bottom of
  //    the SlowLock loop below
  //
  // *) note that we get there after sleeping in SpinLockDelay and
  //    getting woken by Unlock
  //
  // *) also note that we don't "count" sleepers, so when unlock
  //    awakes us, it also sets the lock word to "free".  So we risk
  //    forgetting other sleepers.  To prevent this, we become the
  //    "designated waker" by setting the lock word to "have sleeper";
  //    then, when we unlock, we also wake up someone.
  return old;
}

void SpinLock::SlowLock() {
  int lock_value = SpinLoop();

  int lock_wait_call_count = 0;
  while (lock_value != kSpinLockFree) {
    // If the lock is currently held, but not marked as having a sleeper, mark
    // it as having a sleeper.
    if (lock_value == kSpinLockHeld) {
      // Here, just "mark" that the thread is going to sleep.  Don't
      // store the lock wait time in the lock as that will cause the
      // current lock owner to think it experienced contention.  Note,
      // compare_exchange updates lock_value with the previous value of
      // the lock word.
      lockword_.compare_exchange_strong(lock_value, kSpinLockSleeper,
                                        std::memory_order_acquire);
      if (lock_value == kSpinLockHeld) {
        // Successfully transitioned to kSpinLockSleeper.  Pass
        // kSpinLockSleeper to the SpinLockDelay routine to properly indicate
        // the last lock_value observed.
        lock_value = kSpinLockSleeper;
      } else if (lock_value == kSpinLockFree) {
        // Lock is free again, so try and acquire it before sleeping.  The
        // new lock state will be the number of cycles this thread waited if
        // this thread obtains the lock.
        lockword_.compare_exchange_strong(lock_value, kSpinLockSleeper, std::memory_order_acquire);
        continue;   // skip the delay at the end of the loop
      }
    }

    // Wait for an OS specific delay.
    base::internal::SpinLockDelay(&lockword_, lock_value,
                                  ++lock_wait_call_count);
    // Spin again after returning from the wait routine to give this thread
    // some chance of obtaining the lock.
    lock_value = SpinLoop();
  }
}

void SpinLock::SlowUnlock() {
  // wake waiter if necessary
  base::internal::SpinLockWake(&lockword_, false);
}
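In summary, the lock word moves through a small state machine (our condensed reading of the code above):

//   Free (0)    --Lock()/TryLock() CAS-->       Held (1)
//   Held (1)    --contended SlowLock() CAS-->   Sleeper (2), then OS wait
//   Sleeper (2) --Unlock() exchange-->          Free (0), plus SpinLockWake()
//   Held (1)    --Unlock() exchange-->          Free (0), no wakeup needed
// A woken waiter re-runs SpinLoop(), re-marking itself as Sleeper so that
// its own eventual Unlock() passes the wakeup along.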
166
3party/gperftools/src/base/spinlock.h
Normal file
@@ -0,0 +1,166 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2006, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat
 */

// SpinLock is async signal safe.
// If used within a signal handler, all lock holders
// should block the signal even outside the signal handler.

#ifndef BASE_SPINLOCK_H_
#define BASE_SPINLOCK_H_

#include <config.h>

#include <atomic>
#include <type_traits>

#include "base/basictypes.h"
#include "base/dynamic_annotations.h"
#include "base/thread_annotations.h"

class LOCKABLE SpinLock {
 public:
  SpinLock() : lockword_(kSpinLockFree) { }

  // Special constructor for use with static SpinLock objects.  E.g.,
  //
  //    static SpinLock lock(base::LINKER_INITIALIZED);
  //
  // When initialized using this constructor, we depend on the fact
  // that the linker has already initialized the memory appropriately.
  // A SpinLock constructed like this can be freely used from global
  // initializers without worrying about the order in which global
  // initializers run.
  explicit SpinLock(base::LinkerInitialized /*x*/) {
    // Does nothing; lockword_ is already initialized
  }

  // Acquire this SpinLock.
  void Lock() EXCLUSIVE_LOCK_FUNCTION() {
    int old = kSpinLockFree;
    if (!lockword_.compare_exchange_weak(old, kSpinLockHeld, std::memory_order_acquire)) {
      SlowLock();
    }
  }

  // Try to acquire this SpinLock without blocking and return true if the
  // acquisition was successful.  If the lock was not acquired, false is
  // returned.  If this SpinLock is free at the time of the call, TryLock
  // will return true with high probability.
  bool TryLock() EXCLUSIVE_TRYLOCK_FUNCTION(true) {
    int old = kSpinLockFree;
    return lockword_.compare_exchange_weak(old, kSpinLockHeld);
  }

  // Release this SpinLock, which must be held by the calling thread.
  void Unlock() UNLOCK_FUNCTION() {
    int prev_value = lockword_.exchange(kSpinLockFree, std::memory_order_release);
    if (prev_value != kSpinLockHeld) {
      // Speed the wakeup of any waiter.
      SlowUnlock();
    }
  }

  // Determine if the lock is held.  When the lock is held by the invoking
  // thread, true will always be returned.  Intended to be used as
  // CHECK(lock.IsHeld()).
  bool IsHeld() const {
    return lockword_.load(std::memory_order_relaxed) != kSpinLockFree;
  }

  static const base::LinkerInitialized LINKER_INITIALIZED;  // backwards compat
 private:
  enum { kSpinLockFree = 0 };
  enum { kSpinLockHeld = 1 };
  enum { kSpinLockSleeper = 2 };

  std::atomic<int> lockword_;

  void SlowLock();
  void SlowUnlock();
  int SpinLoop();

  DISALLOW_COPY_AND_ASSIGN(SpinLock);
};

// Corresponding locker object that arranges to acquire a spinlock for
// the duration of a C++ scope.
class SCOPED_LOCKABLE SpinLockHolder {
 private:
  SpinLock* lock_;
 public:
  explicit SpinLockHolder(SpinLock* l) EXCLUSIVE_LOCK_FUNCTION(l)
      : lock_(l) {
    l->Lock();
  }
  SpinLockHolder(const SpinLockHolder&) = delete;
  ~SpinLockHolder() UNLOCK_FUNCTION() {
    lock_->Unlock();
  }
};
// Catch bug where variable name is omitted, e.g. SpinLockHolder (&lock);
#define SpinLockHolder(x) COMPILE_ASSERT(0, spin_lock_decl_missing_var_name)
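
// A minimal usage sketch (illustrative only; 'stats_lock' and
// 'bytes_allocated' are hypothetical names, not part of this header):
//
//   static SpinLock stats_lock(base::LINKER_INITIALIZED);
//   static int64 bytes_allocated;
//
//   void RecordAllocation(int64 n) {
//     SpinLockHolder h(&stats_lock);   // Lock() in ctor, Unlock() in dtor
//     bytes_allocated += n;
//   }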

namespace tcmalloc {

class TrivialOnce {
 public:
  template <typename Body>
  bool RunOnce(Body body) {
    auto done_atomic = reinterpret_cast<std::atomic<int>*>(&done_flag_);
    if (done_atomic->load(std::memory_order_acquire) == 1) {
      return false;
    }

    SpinLockHolder h(reinterpret_cast<SpinLock*>(&lock_storage_));

    if (done_atomic->load(std::memory_order_relaxed) == 1) {
      // barrier provided by lock
      return false;
    }
    body();
    done_atomic->store(1, std::memory_order_release);
    return true;
  }

 private:
  int done_flag_;
  alignas(alignof(SpinLock)) char lock_storage_[sizeof(SpinLock)];
};

static_assert(std::is_trivial<TrivialOnce>::value == true, "");

}  // namespace tcmalloc


#endif  // BASE_SPINLOCK_H_
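TrivialOnce is intentionally trivial so it can be a zero-initialized static, usable before any dynamic initializers run; a usage sketch (ours, with a hypothetical setup function):

#include "base/spinlock.h"

static tcmalloc::TrivialOnce g_init_once;   // zero-initialized in BSS, no ctor

void EnsureInitialized() {
  g_init_once.RunOnce([] () {
    // one-time setup runs here exactly once; later callers observe the
    // released done flag and return immediately
  });
}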
83
3party/gperftools/src/base/spinlock_internal.cc
Normal file
@@ -0,0 +1,83 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2010, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

// The OS-specific header included below must provide two calls:
// base::internal::SpinLockDelay() and base::internal::SpinLockWake().
// See spinlock_internal.h for the spec of SpinLockWake().

// void SpinLockDelay(std::atomic<int> *w, int32 value, int loop)
// SpinLockDelay() generates an appropriate spin delay on iteration "loop" of a
// spin loop on location *w, whose previously observed value was "value".
// SpinLockDelay() may do nothing, may yield the CPU, may sleep a clock tick,
// or may wait for a delay that can be truncated by a call to SpinLockWake(w).
// In all cases, it must return in bounded time even if SpinLockWake() is not
// called.

#include "base/spinlock_internal.h"

// forward declaration for use by spinlock_*-inl.h
namespace base { namespace internal { static int SuggestedDelayNS(int loop); }}

#if defined(_WIN32)
#include "base/spinlock_win32-inl.h"
#elif defined(__linux__)
#include "base/spinlock_linux-inl.h"
#else
#include "base/spinlock_posix-inl.h"
#endif

namespace base {
namespace internal {

// Return a suggested delay in nanoseconds for iteration number "loop"
static int SuggestedDelayNS(int loop) {
  // Weak pseudo-random number generator to get some spread between threads
  // when many are spinning.
  static volatile uint64_t rand;
  uint64 r = rand;
  r = 0x5deece66dLL * r + 0xb;   // numbers from nrand48()
  rand = r;

  r <<= 16;   // 48-bit random number now in top 48-bits.
  if (loop < 0 || loop > 32) {   // limit loop to 0..32
    loop = 32;
  }
  // loop>>3 cannot exceed 4 because loop cannot exceed 32.
  // Select top 20..24 bits of lower 48 bits,
  // giving approximately 0ms to 16ms.
  // Mean is exponential in loop for first 32 iterations, then 8ms.
  // The futex path multiplies this by 16, since we expect explicit wakeups
  // almost always on that path.
  return r >> (44 - (loop >> 3));
}

}  // namespace internal
}  // namespace base
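The shift in SuggestedDelayNS() bounds the delay as follows (our worked numbers: after r <<= 16 the 48 random bits occupy bits 16..63 of the 64-bit word):

//   loop  0..7   ->  shift 44  ->  top 20 bits  ->  < 2^20 ns  (~1 ms)
//   loop  8..15  ->  shift 43  ->  top 21 bits  ->  < 2^21 ns  (~2 ms)
//   loop 16..23  ->  shift 42  ->  top 22 bits  ->  < 2^22 ns  (~4 ms)
//   loop 24..31  ->  shift 41  ->  top 23 bits  ->  < 2^23 ns  (~8 ms)
//   loop >= 32 (clamped)  ->  shift 40  ->  top 24 bits  ->  < 2^24 ns (~16 ms)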
53
3party/gperftools/src/base/spinlock_internal.h
Normal file
@@ -0,0 +1,53 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2010, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * This file is an internal part of spinlock.cc and once.cc.
 * It may not be used directly by code outside of //base.
 */

#ifndef BASE_SPINLOCK_INTERNAL_H_
#define BASE_SPINLOCK_INTERNAL_H_

#include <config.h>

#include <atomic>

#include "base/basictypes.h"

namespace base {
namespace internal {

void SpinLockWake(std::atomic<int> *w, bool all);
void SpinLockDelay(std::atomic<int> *w, int32 value, int loop);

}  // namespace internal
}  // namespace base
#endif
102
3party/gperftools/src/base/spinlock_linux-inl.h
Normal file
@@ -0,0 +1,102 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2009, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * This file is a Linux-specific part of spinlock_internal.cc
 */

#include <errno.h>
#include <limits.h>
#include <sched.h>
#include <sys/syscall.h>
#include <time.h>
#include <unistd.h>

#define FUTEX_WAIT 0
#define FUTEX_WAKE 1
#define FUTEX_PRIVATE_FLAG 128

// Note: Instead of making direct system calls that are inlined, we rely
// on the syscall() function in glibc to do the right thing.

static bool have_futex;
static int futex_private_flag = FUTEX_PRIVATE_FLAG;

namespace {
static struct InitModule {
  InitModule() {
    int x = 0;
    // futexes are ints, so we can use them only when
    // that's the same size as the lockword_ in SpinLock.
    have_futex = (syscall(__NR_futex, &x, FUTEX_WAKE, 1, NULL, NULL, 0) >= 0);
    if (have_futex && syscall(__NR_futex, &x, FUTEX_WAKE | futex_private_flag,
                              1, NULL, NULL, 0) < 0) {
      futex_private_flag = 0;
    }
  }
} init_module;

}  // anonymous namespace


namespace base {
namespace internal {

void SpinLockDelay(std::atomic<int> *w, int32 value, int loop) {
  if (loop != 0) {
    int save_errno = errno;
    struct timespec tm;
    tm.tv_sec = 0;
    if (have_futex) {
      tm.tv_nsec = base::internal::SuggestedDelayNS(loop);
    } else {
      tm.tv_nsec = 2000001;   // above 2ms so linux 2.4 doesn't spin
    }
    if (have_futex) {
      tm.tv_nsec *= 16;  // increase the delay; we expect explicit wakeups
      syscall(__NR_futex, reinterpret_cast<int*>(w),
              FUTEX_WAIT | futex_private_flag, value,
              reinterpret_cast<struct kernel_timespec*>(&tm), NULL, 0);
    } else {
      nanosleep(&tm, NULL);
    }
    errno = save_errno;
  }
}

void SpinLockWake(std::atomic<int> *w, bool all) {
  if (have_futex) {
    syscall(__NR_futex, reinterpret_cast<int*>(w),
            FUTEX_WAKE | futex_private_flag, all ? INT_MAX : 1, NULL, NULL, 0);
  }
}

}  // namespace internal
}  // namespace base
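For readers unfamiliar with the raw futex protocol used above, a self-contained wait/wake sketch (ours; Linux-only, not part of this commit):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <atomic>
#include <thread>

static std::atomic<int> word(0);

static long futex(std::atomic<int> *addr, int op, int val) {
  return syscall(__NR_futex, reinterpret_cast<int*>(addr), op, val,
                 nullptr, nullptr, 0);
}

int main() {
  std::thread waker([] {
    word.store(1, std::memory_order_release);
    futex(&word, FUTEX_WAKE, 1);   // wake one waiter, as SpinLockWake() does
  });
  // FUTEX_WAIT atomically re-checks that *word still equals 0 before
  // sleeping, so a wake between the load and the call is never lost.
  while (word.load(std::memory_order_acquire) == 0) {
    futex(&word, FUTEX_WAIT, 0);
  }
  waker.join();
  return 0;
}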
63
3party/gperftools/src/base/spinlock_posix-inl.h
Normal file
@@ -0,0 +1,63 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2009, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * This file is a Posix-specific part of spinlock_internal.cc
 */

#include <config.h>
#include <errno.h>
#ifdef HAVE_SCHED_H
#include <sched.h>      /* For sched_yield() */
#endif
#include <time.h>       /* For nanosleep() */

namespace base {
namespace internal {

void SpinLockDelay(std::atomic<int> *w, int32 value, int loop) {
  int save_errno = errno;
  if (loop == 0) {
  } else if (loop == 1) {
    sched_yield();
  } else {
    struct timespec tm;
    tm.tv_sec = 0;
    tm.tv_nsec = base::internal::SuggestedDelayNS(loop);
    nanosleep(&tm, NULL);
  }
  errno = save_errno;
}

void SpinLockWake(std::atomic<int> *w, bool all) {
}

}  // namespace internal
}  // namespace base
63
3party/gperftools/src/base/spinlock_win32-inl.h
Normal file
@@ -0,0 +1,63 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2009, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * This file is a Win32-specific part of spinlock_internal.cc
 */


#include <windows.h>

#ifdef _MSC_VER
# pragma comment(lib, "Synchronization.lib")
#endif

namespace base {
namespace internal {

void SpinLockDelay(std::atomic<int> *w, int32 value, int loop) {
  if (loop != 0) {
    auto wait_ns = static_cast<uint64_t>(base::internal::SuggestedDelayNS(loop)) * 16;
    auto wait_ms = wait_ns / 1000000;

    WaitOnAddress(w, &value, 4, static_cast<DWORD>(wait_ms));
  }
}

void SpinLockWake(std::atomic<int> *w, bool all) {
  if (all) {
    WakeByAddressAll((void*)w);
  } else {
    WakeByAddressSingle((void*)w);
  }
}

}  // namespace internal
}  // namespace base
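For readers unfamiliar with the futex-style Win32 API used above: WaitOnAddress() blocks while the watched word still holds the compared value (or until the timeout expires), and WakeByAddressSingle()/WakeByAddressAll() wake waiters after a store. A minimal sketch of the same pairing, independent of gperftools (requires Windows 8+ and linking Synchronization.lib):

#include <windows.h>
#include <atomic>

std::atomic<int> g_state{0};

void consumer_wait() {
  int observed = 0;
  // Sleep until g_state no longer equals 'observed', re-checking on each
  // wake-up or 10 ms timeout (spurious wake-ups are allowed by the API).
  while (g_state.load() == observed) {
    WaitOnAddress(&g_state, &observed, sizeof(int), 10 /* ms */);
  }
}

void producer_signal() {
  g_state.store(1);
  WakeByAddressSingle(&g_state);  // Wake one waiter; WakeByAddressAll wakes all.
}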
98
3party/gperftools/src/base/stl_allocator.h
Normal file
@ -0,0 +1,98 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2006, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Maxim Lifantsev
 */


#ifndef BASE_STL_ALLOCATOR_H_
#define BASE_STL_ALLOCATOR_H_

#include <config.h>

#include <stddef.h>   // for ptrdiff_t
#include <limits>

#include "base/logging.h"

// Generic allocator class for STL objects
// that uses a given type-less allocator Alloc, which must provide:
//   static void* Alloc::Allocate(size_t size);
//   static void Alloc::Free(void* ptr, size_t size);
//
// STL_Allocator<T, MyAlloc> provides the same thread-safety
// guarantees as MyAlloc.
//
// Usage example:
//   set<T, less<T>, STL_Allocator<T, MyAlloc> > my_set;
// CAVEAT: Parts of the code below are probably specific
//         to the STL version(s) we are using.
//         The code is simply lifted from what std::allocator<> provides.
template <typename T, class Alloc>
class STL_Allocator {
 public:
  typedef size_t size_type;
  typedef ptrdiff_t difference_type;
  typedef T* pointer;
  typedef const T* const_pointer;
  typedef T& reference;
  typedef const T& const_reference;
  typedef T value_type;

  template <class T1> struct rebind {
    typedef STL_Allocator<T1, Alloc> other;
  };

  STL_Allocator() { }
  STL_Allocator(const STL_Allocator&) { }
  template <class T1> STL_Allocator(const STL_Allocator<T1, Alloc>&) { }
  ~STL_Allocator() { }

  pointer address(reference x) const { return &x; }
  const_pointer address(const_reference x) const { return &x; }

  pointer allocate(size_type n, const void* = 0) {
    RAW_DCHECK((n * sizeof(T)) / sizeof(T) == n, "n is too big to allocate");
    return static_cast<T*>(Alloc::Allocate(n * sizeof(T)));
  }
  void deallocate(pointer p, size_type n) { Alloc::Free(p, n * sizeof(T)); }

  size_type max_size() const { return size_t(-1) / sizeof(T); }

  void construct(pointer p, const T& val) { ::new(p) T(val); }
  void construct(pointer p) { ::new(p) T(); }
  void destroy(pointer p) { p->~T(); }

  // There's no state, so these allocators are always equal
  bool operator==(const STL_Allocator&) const { return true; }
};

#endif  // BASE_STL_ALLOCATOR_H_
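As a usage note for the contract documented above: any type exposing static Allocate(size_t) and Free(void*, size_t) can serve as Alloc. A minimal sketch with a hypothetical malloc-backed backend (the name MallocBackend is ours for illustration, not part of gperftools):

#include <cstddef>
#include <cstdlib>
#include <set>
#include "base/stl_allocator.h"

struct MallocBackend {
  static void* Allocate(size_t size) { return std::malloc(size); }
  static void Free(void* ptr, size_t /*size*/) { std::free(ptr); }
};

// Every node of this set is obtained via MallocBackend::Allocate and
// returned via MallocBackend::Free, exactly as the header's comment says.
typedef std::set<int, std::less<int>, STL_Allocator<int, MallocBackend> > IntSet;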
1013
3party/gperftools/src/base/sysinfo.cc
Normal file
File diff suppressed because it is too large
Load Diff
230
3party/gperftools/src/base/sysinfo.h
Normal file
@ -0,0 +1,230 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2006, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// All functions here are thread-hostile due to file caching unless
// commented otherwise.

#ifndef _SYSINFO_H_
#define _SYSINFO_H_

#include <config.h>

#include <time.h>
#if (defined(_WIN32) || defined(__MINGW32__)) && (!defined(__CYGWIN__) && !defined(__CYGWIN32__))
#include <windows.h>    // for DWORD
#include <tlhelp32.h>   // for CreateToolhelp32Snapshot
#endif
#ifdef HAVE_UNISTD_H
#include <unistd.h>     // for pid_t
#endif
#include <stddef.h>     // for size_t
#include <limits.h>     // for PATH_MAX
#include "base/basictypes.h"
#include "base/logging.h"   // for RawFD

// This getenv function is safe to call before the C runtime is initialized.
// On Windows, it utilizes GetEnvironmentVariable() and on unix it uses
// /proc/self/environ instead of calling getenv(). It's intended to be used in
// routines that run before main(), when the state required for getenv() may
// not be set up yet. In particular, errno isn't set up until relatively late
// (after the pthreads library has a chance to make it threadsafe), and
// getenv() doesn't work until then.
// On some platforms, this call will utilize the same, static buffer for
// repeated GetenvBeforeMain() calls. Callers should not expect pointers from
// this routine to be long lived.
// Note that on unix, /proc only has the environment at the time the
// application was started, so this routine ignores setenv() calls/etc. Also
// note it only reads the first 16K of the environment.
extern const char* GetenvBeforeMain(const char* name);

// This takes as an argument an environment-variable name (like
// CPUPROFILE) whose value is supposed to be a file-path, and sets
// path to that path, and returns true. Non-trivial for surprising
// reasons, as documented in sysinfo.cc. path must have space for
// PATH_MAX characters.
extern bool GetUniquePathFromEnv(const char* env_name, char* path);

extern int GetSystemCPUsCount();

// Return true if we're running POSIX (e.g., NPTL on Linux) threads,
// as opposed to a non-POSIX thread library. The thing that we care
// about is whether a thread's pid is the same as the thread that
// spawned it. If so, this function returns true.
// Thread-safe.
// Note: We consider false negatives to be OK.
bool HasPosixThreads();

#ifndef SWIG  // SWIG doesn't like struct Buffer and variable arguments.

// A ProcMapsIterator abstracts access to /proc/maps for a given
// process. Needs to be stack-allocatable and avoid using stdio/malloc
// so it can be used in the google stack dumper, heap-profiler, etc.
//
// On Windows and Mac OS X, this iterator iterates *only* over DLLs
// mapped into this process space. For Linux, FreeBSD, and Solaris,
// it iterates over *all* mapped memory regions, including anonymous
// mmaps. For other O/Ss, it is unlikely to work at all, and Valid()
// will always return false. Also note: this routine only works on
// FreeBSD if procfs is mounted: make sure this is in your /etc/fstab:
//    proc   /proc   procfs   rw   0   0
class ProcMapsIterator {
 public:
  struct Buffer {
#ifdef __FreeBSD__
    // FreeBSD requires us to read all of the maps file at once, so
    // we have to make a buffer that's "always" big enough
    static const size_t kBufSize = 102400;
#else   // a one-line buffer is good enough
    static const size_t kBufSize = PATH_MAX + 1024;
#endif
    char buf_[kBufSize];
  };

  // Create a new iterator for the specified pid. pid can be 0 for "self".
  explicit ProcMapsIterator(pid_t pid);

  // Create an iterator with specified storage (for use in signal
  // handler). "buffer" should point to a ProcMapsIterator::Buffer.
  // buffer can be NULL, in which case a buffer will be allocated.
  ProcMapsIterator(pid_t pid, Buffer *buffer);

  // Iterate through maps_backing instead of maps if use_maps_backing
  // is true. Otherwise the same as above. buffer can be NULL and
  // it will allocate a buffer itself.
  ProcMapsIterator(pid_t pid, Buffer *buffer,
                   bool use_maps_backing);

  // Returns true if the iterator successfully initialized.
  bool Valid() const;

  // Returns a pointer to the most recently parsed line. Only valid
  // after Next() returns true, and until the iterator is destroyed or
  // Next() is called again. This may give strange results on non-Linux
  // systems. Prefer FormatLine() if that may be a concern.
  const char *CurrentLine() const { return stext_; }

  // Writes the "canonical" form of the /proc/xxx/maps info for a single
  // line to the passed-in buffer. Returns the number of bytes written,
  // or 0 if it was not able to write the complete line. (To guarantee
  // success, buffer should have size at least Buffer::kBufSize.)
  // Takes as arguments values set via a call to Next(). The
  // "canonical" form of the line (taken from linux's /proc/xxx/maps) is:
  //    <start_addr(hex)>-<end_addr(hex)> <perms(rwxp)> <offset(hex)>
  //    <major_dev(hex)>:<minor_dev(hex)> <inode> <filename>
  // e.g.
  //    08048000-0804c000 r-xp 00000000 03:01 3793678    /bin/cat
  // If you don't have the dev_t (dev), feel free to pass in 0.
  // (Next() doesn't return a dev_t, though NextExt does.)
  //
  // Note: if filename and flags were obtained via a call to Next(),
  // then the output of this function is only valid if Next() returned
  // true, and only until the iterator is destroyed or Next() is
  // called again. (Since filename, at least, points into CurrentLine.)
  static int FormatLine(char* buffer, int bufsize,
                        uint64 start, uint64 end, const char *flags,
                        uint64 offset, int64 inode, const char *filename,
                        dev_t dev);

  // Find the next entry in /proc/maps; return true if found or false
  // if at the end of the file.
  //
  // Any of the result pointers can be NULL if you're not interested
  // in those values.
  //
  // If "flags" and "filename" are passed, they end up pointing to
  // storage within the ProcMapsIterator that is valid only until the
  // iterator is destroyed or Next() is called again. The caller may
  // modify the contents of these strings (up as far as the first NUL,
  // and only until the subsequent call to Next()) if desired.

  // The offsets are all uint64 in order to handle the case of a
  // 32-bit process running on a 64-bit kernel.
  //
  // IMPORTANT NOTE: see top-of-class notes for details about what
  // mapped regions Next() iterates over, depending on O/S.
  // TODO(csilvers): make flags and filename const.
  bool Next(uint64 *start, uint64 *end, char **flags,
            uint64 *offset, int64 *inode, char **filename);

  bool NextExt(uint64 *start, uint64 *end, char **flags,
               uint64 *offset, int64 *inode, char **filename,
               uint64 *file_mapping, uint64 *file_pages,
               uint64 *anon_mapping, uint64 *anon_pages,
               dev_t *dev);

  ~ProcMapsIterator();

 private:
  void Init(pid_t pid, Buffer *buffer, bool use_maps_backing);

  char *ibuf_;        // input buffer
  char *stext_;       // start of text
  char *etext_;       // end of text
  char *nextline_;    // start of next line
  char *ebuf_;        // end of buffer (1 char for a nul)
#if (defined(_WIN32) || defined(__MINGW32__)) && (!defined(__CYGWIN__) && !defined(__CYGWIN32__))
  HANDLE snapshot_;   // filehandle on dll info
  // In a change from the usual W-A pattern, there is no A variant of
  // MODULEENTRY32. Tlhelp32.h #defines the W variant, but not the A.
  // We want the original A variants, and this #undef is the only
  // way I see to get them. Redefining it when we're done prevents us
  // from affecting other .cc files.
# ifdef MODULEENTRY32   // Alias of W
#   undef MODULEENTRY32
  MODULEENTRY32 module_;    // info about current dll (and dll iterator)
#   define MODULEENTRY32 MODULEENTRY32W
# else   // It's the ascii, the one we want.
  MODULEENTRY32 module_;    // info about current dll (and dll iterator)
# endif
#elif defined(__MACH__)
  int current_image_;       // dll's are called "images" in macos parlance
  int current_load_cmd_;    // the segment of this dll we're examining
#elif defined(__sun__)      // Solaris
  int fd_;
  char current_filename_[PATH_MAX];
#else
  int fd_;                  // filehandle on /proc/*/maps
#endif
  pid_t pid_;
  char flags_[10];
  Buffer* dynamic_buffer_;   // dynamically-allocated Buffer
  bool using_maps_backing_;  // true if we are looking at maps_backing instead of maps.
};

#endif  /* #ifndef SWIG */

// Helper routines

namespace tcmalloc {
int FillProcSelfMaps(char buf[], int size, bool* wrote_all);
void DumpProcSelfMaps(RawFD fd);
}

#endif   /* #ifndef _SYSINFO_H_ */
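A minimal sketch of walking the current process's mappings with the iterator declared above, assuming a Linux-style /proc and compilation inside the gperftools tree. Per the Next() contract, NULL result pointers skip fields the caller does not need:

#include <stdio.h>
#include "base/sysinfo.h"

void DumpMappings() {
  ProcMapsIterator::Buffer buf;       // caller-supplied storage, signal-safe
  ProcMapsIterator it(0, &buf);       // pid 0 means "this process"
  if (!it.Valid()) return;
  uint64 start, end;
  char *flags, *filename;
  while (it.Next(&start, &end, &flags,
                 NULL /* offset */, NULL /* inode */, &filename)) {
    // flags/filename point into the iterator and are only valid until
    // the next call to Next(), so print them immediately.
    printf("%llx-%llx %s %s\n",
           (unsigned long long)start, (unsigned long long)end,
           flags, filename);
  }
}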
133
3party/gperftools/src/base/thread_annotations.h
Normal file
@ -0,0 +1,133 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Le-Chun Wu
//
// This header file contains the macro definitions for thread safety
// annotations that allow the developers to document the locking policies
// of their multi-threaded code. The annotations can also help program
// analysis tools to identify potential thread safety issues.
//
// The annotations are implemented using clang's "attributes" extension.
// Using the macros defined here instead of the raw clang attributes allows
// for portability and future compatibility.
//
// This functionality is not yet fully implemented in perftools,
// but may be one day.

#ifndef BASE_THREAD_ANNOTATIONS_H_
#define BASE_THREAD_ANNOTATIONS_H_


#if defined(__clang__)
#define THREAD_ANNOTATION_ATTRIBUTE__(x)   __attribute__((x))
#else
#define THREAD_ANNOTATION_ATTRIBUTE__(x)   // no-op
#endif


// Document if a shared variable/field needs to be protected by a lock.
// GUARDED_BY allows the user to specify a particular lock that should be
// held when accessing the annotated variable, while GUARDED_VAR only
// indicates a shared variable should be guarded (by any lock). GUARDED_VAR
// is primarily used when the client cannot express the name of the lock.
#define GUARDED_BY(x)          THREAD_ANNOTATION_ATTRIBUTE__(guarded_by(x))
#define GUARDED_VAR            THREAD_ANNOTATION_ATTRIBUTE__(guarded)

// Document if the memory location pointed to by a pointer should be guarded
// by a lock when dereferencing the pointer. Similar to GUARDED_VAR,
// PT_GUARDED_VAR is primarily used when the client cannot express the name
// of the lock. Note that a pointer variable to a shared memory location
// could itself be a shared variable. For example, if a shared global pointer
// q, which is guarded by mu1, points to a shared memory location that is
// guarded by mu2, q should be annotated as follows:
//     int *q GUARDED_BY(mu1) PT_GUARDED_BY(mu2);
#define PT_GUARDED_BY(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(point_to_guarded_by(x))
#define PT_GUARDED_VAR \
  THREAD_ANNOTATION_ATTRIBUTE__(point_to_guarded)

// Document the acquisition order between locks that can be held
// simultaneously by a thread. For any two locks that need to be annotated
// to establish an acquisition order, only one of them needs the annotation.
// (i.e. You don't have to annotate both locks with both ACQUIRED_AFTER
// and ACQUIRED_BEFORE.)
#define ACQUIRED_AFTER(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(acquired_after(x))
#define ACQUIRED_BEFORE(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(acquired_before(x))

// The following three annotations document the lock requirements for
// functions/methods.

// Document if a function expects certain locks to be held before it is called
#define EXCLUSIVE_LOCKS_REQUIRED(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(exclusive_locks_required(x))

#define SHARED_LOCKS_REQUIRED(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(shared_locks_required(x))

// Document the locks acquired in the body of the function. These locks
// cannot be held when calling this function (as google3's Mutex locks are
// non-reentrant).
#define LOCKS_EXCLUDED(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(locks_excluded(x))

// Document the lock the annotated function returns without acquiring it.
#define LOCK_RETURNED(x)       THREAD_ANNOTATION_ATTRIBUTE__(lock_returned(x))

// Document if a class/type is a lockable type (such as the Mutex class).
#define LOCKABLE               THREAD_ANNOTATION_ATTRIBUTE__(lockable)

// Document if a class is a scoped lockable type (such as the MutexLock class).
#define SCOPED_LOCKABLE        THREAD_ANNOTATION_ATTRIBUTE__(scoped_lockable)

// The following annotations specify lock and unlock primitives.
#define EXCLUSIVE_LOCK_FUNCTION(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(exclusive_lock_function(x))

#define SHARED_LOCK_FUNCTION(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(shared_lock_function(x))

#define EXCLUSIVE_TRYLOCK_FUNCTION(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(exclusive_trylock_function(x))

#define SHARED_TRYLOCK_FUNCTION(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(shared_trylock_function(x))

#define UNLOCK_FUNCTION(x) \
  THREAD_ANNOTATION_ATTRIBUTE__(unlock_function(x))

// An escape hatch for thread safety analysis to ignore the annotated function.
#define NO_THREAD_SAFETY_ANALYSIS \
  THREAD_ANNOTATION_ATTRIBUTE__(no_thread_safety_analysis)

#endif  // BASE_THREAD_ANNOTATIONS_H_
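A short sketch of the annotations in use. Under clang with -Wthread-safety the compiler flags the unguarded access; under other compilers the macros expand to nothing. ToyMutex is a hypothetical stand-in for a real LOCKABLE mutex class such as gperftools' SpinLock:

#include "base/thread_annotations.h"

class LOCKABLE ToyMutex {
 public:
  void Lock() EXCLUSIVE_LOCK_FUNCTION() { }
  void Unlock() UNLOCK_FUNCTION() { }
};

class Counter {
 public:
  void Increment() {
    mu_.Lock();
    value_++;         // OK: mu_ is held here.
    mu_.Unlock();
  }
  int UnsafeRead() {
    return value_;    // clang warns: reading value_ requires holding mu_.
  }
 private:
  ToyMutex mu_;
  int value_ GUARDED_BY(mu_);   // value_ may only be touched with mu_ held.
};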
140
3party/gperftools/src/base/vdso_support.cc
Normal file
@ -0,0 +1,140 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Paul Pluzhnikov
//
// Allow dynamic symbol lookup in the kernel VDSO page.
//
// VDSOSupport -- a class representing kernel VDSO (if present).
//

#include "base/vdso_support.h"

#ifdef HAVE_VDSO_SUPPORT   // defined in vdso_support.h

#include <fcntl.h>
#include <stddef.h>   // for ptrdiff_t

#include "base/logging.h"
#include "base/dynamic_annotations.h"
#include "base/basictypes.h"   // for COMPILE_ASSERT

#ifndef AT_SYSINFO_EHDR
#define AT_SYSINFO_EHDR 33
#endif

namespace base {

const void *VDSOSupport::vdso_base_ = ElfMemImage::kInvalidBase;
VDSOSupport::VDSOSupport()
    // If vdso_base_ is still set to kInvalidBase, we got here
    // before VDSOSupport::Init has been called. Call it now.
    : image_(vdso_base_ == ElfMemImage::kInvalidBase ? Init() : vdso_base_) {
}

// NOTE: we can't use GoogleOnceInit() below, because we can be
// called by tcmalloc, and none of the *once* stuff may be functional yet.
//
// In addition, we hope that the VDSOSupportHelper constructor
// causes this code to run before there are any threads, and before
// InitGoogle() has executed any chroot or setuid calls.
//
// Finally, even if there is a race here, it is harmless, because
// the operation should be idempotent.
const void *VDSOSupport::Init() {
  if (vdso_base_ == ElfMemImage::kInvalidBase) {
    // Valgrind zaps AT_SYSINFO_EHDR and friends from the auxv[]
    // on stack, and so glibc works as if VDSO was not present.
    // But going directly to kernel via /proc/self/auxv below bypasses
    // Valgrind zapping. So we check for Valgrind separately.
    if (RunningOnValgrind()) {
      vdso_base_ = NULL;
      return NULL;
    }
    int fd = open("/proc/self/auxv", O_RDONLY);
    if (fd == -1) {
      // Kernel too old to have a VDSO.
      vdso_base_ = NULL;
      return NULL;
    }
    ElfW(auxv_t) aux;
    while (read(fd, &aux, sizeof(aux)) == sizeof(aux)) {
      if (aux.a_type == AT_SYSINFO_EHDR) {
        COMPILE_ASSERT(sizeof(vdso_base_) == sizeof(aux.a_un.a_val),
                       unexpected_sizeof_pointer_NE_sizeof_a_val);
        vdso_base_ = reinterpret_cast<void *>(aux.a_un.a_val);
        break;
      }
    }
    close(fd);
    if (vdso_base_ == ElfMemImage::kInvalidBase) {
      // Didn't find AT_SYSINFO_EHDR in auxv[].
      vdso_base_ = NULL;
    }
  }
  return vdso_base_;
}

const void *VDSOSupport::SetBase(const void *base) {
  CHECK(base != ElfMemImage::kInvalidBase);
  const void *old_base = vdso_base_;
  vdso_base_ = base;
  image_.Init(base);
  return old_base;
}

bool VDSOSupport::LookupSymbol(const char *name,
                               const char *version,
                               int type,
                               SymbolInfo *info) const {
  return image_.LookupSymbol(name, version, type, info);
}

bool VDSOSupport::LookupSymbolByAddress(const void *address,
                                        SymbolInfo *info_out) const {
  return image_.LookupSymbolByAddress(address, info_out);
}

// We need to make sure VDSOSupport::Init() is called before
// the main() runs, since it might do something like setuid or
// chroot. If VDSOSupport is used in any global constructor, this
// will happen, since VDSOSupport's constructor calls Init. But if
// not, we need to ensure it here, with a global constructor of our
// own. This is an allowed exception to the normal rule against
// non-trivial global constructors.
static class VDSOInitHelper {
 public:
  VDSOInitHelper() { VDSOSupport::Init(); }
} vdso_init_helper;
}

#endif  // HAVE_VDSO_SUPPORT
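Outside of gperftools the same probe is easy to reproduce. A minimal standalone sketch (Linux-only) that scans /proc/self/auxv for AT_SYSINFO_EHDR, the auxv entry Init() reads above to find the VDSO base:

#include <elf.h>      // for auxv_t, AT_SYSINFO_EHDR
#include <fcntl.h>
#include <link.h>     // for the ElfW() macro
#include <stdio.h>
#include <unistd.h>

int main() {
  int fd = open("/proc/self/auxv", O_RDONLY);
  if (fd == -1) return 1;   // No auxv exposed; kernel too old for a VDSO.
  ElfW(auxv_t) aux;
  void* vdso = nullptr;
  while (read(fd, &aux, sizeof(aux)) == sizeof(aux)) {
    if (aux.a_type == AT_SYSINFO_EHDR) {
      vdso = reinterpret_cast<void*>(aux.a_un.a_val);  // VDSO ELF header vma
      break;
    }
  }
  close(fd);
  printf("vdso at %p\n", vdso);
  return 0;
}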
137
3party/gperftools/src/base/vdso_support.h
Normal file
@ -0,0 +1,137 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Paul Pluzhnikov
//
// Allow dynamic symbol lookup in the kernel VDSO page.
//
// VDSO stands for "Virtual Dynamic Shared Object" -- a page of
// executable code, which looks like a shared library, but doesn't
// necessarily exist anywhere on disk, and which gets mmap()ed into
// every process by kernels which support VDSO, such as 2.6.x for 32-bit
// executables, and 2.6.24 and above for 64-bit executables.
//
// More details can be found here:
//   http://www.trilithium.com/johan/2005/08/linux-gate/
//
// VDSOSupport -- a class representing kernel VDSO (if present).
//
// Example usage:
//   VDSOSupport vdso;
//   VDSOSupport::SymbolInfo info;
//   typedef long (*FN)(unsigned *, void *, void *);
//   FN fn = NULL;
//   if (vdso.LookupSymbol("__vdso_getcpu", "LINUX_2.6", STT_FUNC, &info)) {
//     fn = reinterpret_cast<FN>(info.address);
//   }

#ifndef BASE_VDSO_SUPPORT_H_
#define BASE_VDSO_SUPPORT_H_

#include <config.h>
#include "base/basictypes.h"
#include "base/elf_mem_image.h"

#ifdef HAVE_ELF_MEM_IMAGE

// Enable VDSO support only for the architectures/operating systems that
// support it.
#if defined(__linux__) && (defined(__i386__) || defined(__PPC__))
#define HAVE_VDSO_SUPPORT 1
#endif

#include <stdlib.h>   // for NULL

namespace base {

// NOTE: this class may be used from within tcmalloc, and can not
// use any memory allocation routines.
class VDSOSupport {
 public:
  VDSOSupport();

  typedef ElfMemImage::SymbolInfo SymbolInfo;
  typedef ElfMemImage::SymbolIterator SymbolIterator;

  // Answers whether we have a vdso at all.
  bool IsPresent() const { return image_.IsPresent(); }

  // Allows iteration over all VDSO symbols.
  SymbolIterator begin() const { return image_.begin(); }
  SymbolIterator end() const { return image_.end(); }

  // Look up versioned dynamic symbol in the kernel VDSO.
  // Returns false if VDSO is not present, or doesn't contain given
  // symbol/version/type combination.
  // If info_out != NULL, additional details are filled in.
  bool LookupSymbol(const char *name, const char *version,
                    int symbol_type, SymbolInfo *info_out) const;

  // Find info about symbol (if any) which overlaps given address.
  // Returns true if symbol was found; false if VDSO isn't present
  // or doesn't have a symbol overlapping given address.
  // If info_out != NULL, additional details are filled in.
  bool LookupSymbolByAddress(const void *address, SymbolInfo *info_out) const;

  // Used only for testing. Replace real VDSO base with a mock.
  // Returns previous value of vdso_base_. After you are done testing,
  // you are expected to call SetBase() with previous value, in order to
  // reset state to the way it was.
  const void *SetBase(const void *s);

  // Computes vdso_base_ and returns it. Should be called as early as
  // possible; before any thread creation, chroot or setuid.
  static const void *Init();

 private:
  // image_ represents VDSO ELF image in memory.
  // image_.ehdr_ == NULL implies there is no VDSO.
  ElfMemImage image_;

  // Cached value of auxv AT_SYSINFO_EHDR, computed once.
  // This is a tri-state:
  //   kInvalidBase   => value hasn't been determined yet.
  //   0              => there is no VDSO.
  //   else           => vma of VDSO Elf{32,64}_Ehdr.
  //
  // When testing with mock VDSO, low bit is set.
  // The low bit is always available because vdso_base_ is
  // page-aligned.
  static const void *vdso_base_;

  DISALLOW_COPY_AND_ASSIGN(VDSOSupport);
};

}  // namespace base

#endif  // HAVE_ELF_MEM_IMAGE

#endif  // BASE_VDSO_SUPPORT_H_
396
3party/gperftools/src/central_freelist.cc
Normal file
@ -0,0 +1,396 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#include "config.h"
#include <algorithm>
#include "central_freelist.h"
#include "internal_logging.h"   // for ASSERT, MESSAGE
#include "linked_list.h"        // for SLL_Next, SLL_Push, etc
#include "page_heap.h"          // for PageHeap
#include "static_vars.h"        // for Static

#if defined(__has_builtin)
#if __has_builtin(__builtin_add_overflow)
#define USE_ADD_OVERFLOW
#endif
#endif

using std::min;
using std::max;

namespace tcmalloc {

void CentralFreeList::Init(size_t cl) {
  size_class_ = cl;
  tcmalloc::DLL_Init(&empty_);
  tcmalloc::DLL_Init(&nonempty_);
  num_spans_ = 0;
  counter_ = 0;

  max_cache_size_ = kMaxNumTransferEntries;
#ifdef TCMALLOC_SMALL_BUT_SLOW
  // Disable the transfer cache for the small footprint case.
  cache_size_ = 0;
#else
  cache_size_ = 16;
#endif
  if (cl > 0) {
    // Limit the maximum size of the cache based on the size class. If this
    // is not done, large size class objects will consume a lot of memory if
    // they just sit in the transfer cache.
    int32_t bytes = Static::sizemap()->ByteSizeForClass(cl);
    int32_t objs_to_move = Static::sizemap()->num_objects_to_move(cl);

    ASSERT(objs_to_move > 0 && bytes > 0);
    // Limit each size class cache to at most 1MB of objects or one entry,
    // whichever is greater. Total transfer cache memory used across all
    // size classes then can't be greater than approximately
    // 1MB * kMaxNumTransferEntries.
    // min and max are in parens to avoid macro-expansion on windows.
    max_cache_size_ = (min)(max_cache_size_,
                            (max)(1, (1024 * 1024) / (bytes * objs_to_move)));
    cache_size_ = (min)(cache_size_, max_cache_size_);
  }
  used_slots_ = 0;
  ASSERT(cache_size_ <= max_cache_size_);
}

void CentralFreeList::ReleaseListToSpans(void* start) {
  while (start) {
    void *next = SLL_Next(start);
    ReleaseToSpans(start);
    start = next;
  }
}

void CentralFreeList::ReleaseToSpans(void* object) {
  const PageID p = reinterpret_cast<uintptr_t>(object) >> kPageShift;
  Span* span = Static::pageheap()->GetDescriptor(p);
  ASSERT(span != NULL);
  ASSERT(span->refcount > 0);

  // If span is empty, move it to non-empty list
  if (span->objects == NULL) {
    tcmalloc::DLL_Remove(span);
    tcmalloc::DLL_Prepend(&nonempty_, span);
  }

  // The following check is expensive, so it is disabled by default
  if (false) {
    // Check that object does not occur in list
    int got = 0;
    for (void* p = span->objects; p != NULL; p = *((void**) p)) {
      ASSERT(p != object);
      got++;
    }
    (void)got;
    ASSERT(got + span->refcount ==
           (span->length<<kPageShift) /
           Static::sizemap()->ByteSizeForClass(span->sizeclass));
  }

  counter_++;
  span->refcount--;
  if (span->refcount == 0) {
    counter_ -= ((span->length<<kPageShift) /
                 Static::sizemap()->ByteSizeForClass(span->sizeclass));
    tcmalloc::DLL_Remove(span);
    --num_spans_;

    // Release central list lock while operating on pageheap
    lock_.Unlock();
    Static::pageheap()->Delete(span);
    lock_.Lock();
  } else {
    *(reinterpret_cast<void**>(object)) = span->objects;
    span->objects = object;
  }
}

bool CentralFreeList::EvictRandomSizeClass(
    int locked_size_class, bool force) {
  static int race_counter = 0;
  int t = race_counter++;  // Updated without a lock, but who cares.
  if (t >= Static::num_size_classes()) {
    while (t >= Static::num_size_classes()) {
      t -= Static::num_size_classes();
    }
    race_counter = t;
  }
  ASSERT(t >= 0);
  ASSERT(t < Static::num_size_classes());
  if (t == locked_size_class) return false;
  return Static::central_cache()[t].ShrinkCache(locked_size_class, force);
}

bool CentralFreeList::MakeCacheSpace() {
  // Is there room in the cache?
  if (used_slots_ < cache_size_) return true;
  // Check whether we can expand this cache at all.
  if (cache_size_ == max_cache_size_) return false;
  // Ok, we'll try to grab an entry from some other size class.
  if (EvictRandomSizeClass(size_class_, false) ||
      EvictRandomSizeClass(size_class_, true)) {
    // Succeeded in evicting, we're going to make our cache larger.
    // However, we may have dropped and re-acquired the lock in
    // EvictRandomSizeClass (via ShrinkCache and the LockInverter), so the
    // cache_size may have changed. Therefore, check and verify that it is
    // still OK to increase the cache_size.
    if (cache_size_ < max_cache_size_) {
      cache_size_++;
      return true;
    }
  }
  return false;
}


namespace {
class LockInverter {
 private:
  SpinLock *held_, *temp_;
 public:
  inline explicit LockInverter(SpinLock* held, SpinLock *temp)
      : held_(held), temp_(temp) { held_->Unlock(); temp_->Lock(); }
  inline ~LockInverter() { temp_->Unlock(); held_->Lock(); }
};
}

// This function is marked as NO_THREAD_SAFETY_ANALYSIS because it uses
// LockInverter to release one lock and acquire another in scoped-lock
// style, which our current annotation/analysis does not support.
bool CentralFreeList::ShrinkCache(int locked_size_class, bool force)
    NO_THREAD_SAFETY_ANALYSIS {
  // Start with a quick check without taking a lock.
  if (cache_size_ == 0) return false;
  // We don't evict from a full cache unless we are 'forcing'.
  if (force == false && used_slots_ == cache_size_) return false;

  // Grab lock, but first release the other lock held by this thread. We use
  // the lock inverter to ensure that we never hold two size class locks
  // concurrently. That can create a deadlock because there is no well
  // defined nesting order.
  LockInverter li(&Static::central_cache()[locked_size_class].lock_, &lock_);
  ASSERT(used_slots_ <= cache_size_);
  ASSERT(0 <= cache_size_);
  if (cache_size_ == 0) return false;
  if (used_slots_ == cache_size_) {
    if (force == false) return false;
    // ReleaseListToSpans releases the lock, so we have to make all the
    // updates to the central list before calling it.
    cache_size_--;
    used_slots_--;
    ReleaseListToSpans(tc_slots_[used_slots_].head);
    return true;
  }
  cache_size_--;
  return true;
}

void CentralFreeList::InsertRange(void *start, void *end, int N) {
  SpinLockHolder h(&lock_);
  if (N == Static::sizemap()->num_objects_to_move(size_class_) &&
      MakeCacheSpace()) {
    int slot = used_slots_++;
    ASSERT(slot >= 0);
    ASSERT(slot < max_cache_size_);
    TCEntry *entry = &tc_slots_[slot];
    entry->head = start;
    entry->tail = end;
    return;
  }
  ReleaseListToSpans(start);
}

int CentralFreeList::RemoveRange(void **start, void **end, int N) {
  ASSERT(N > 0);
  lock_.Lock();
  if (N == Static::sizemap()->num_objects_to_move(size_class_) &&
      used_slots_ > 0) {
    int slot = --used_slots_;
    ASSERT(slot >= 0);
    TCEntry *entry = &tc_slots_[slot];
    *start = entry->head;
    *end = entry->tail;
    lock_.Unlock();
    return N;
  }

  int result = 0;
  *start = NULL;
  *end = NULL;
  // TODO: Prefetch multiple TCEntries?
  result = FetchFromOneSpansSafe(N, start, end);
  if (result != 0) {
    while (result < N) {
      int n;
      void* head = NULL;
      void* tail = NULL;
      n = FetchFromOneSpans(N - result, &head, &tail);
      if (!n) break;
      result += n;
      SLL_PushRange(start, head, tail);
    }
  }
  lock_.Unlock();
  return result;
}


int CentralFreeList::FetchFromOneSpansSafe(int N, void **start, void **end) {
  int result = FetchFromOneSpans(N, start, end);
  if (!result) {
    Populate();
    result = FetchFromOneSpans(N, start, end);
  }
  return result;
}

int CentralFreeList::FetchFromOneSpans(int N, void **start, void **end) {
  if (tcmalloc::DLL_IsEmpty(&nonempty_)) return 0;
  Span* span = nonempty_.next;

  ASSERT(span->objects != NULL);

  int result = 0;
  void *prev, *curr;
  curr = span->objects;
  do {
    prev = curr;
    curr = *(reinterpret_cast<void**>(curr));
  } while (++result < N && curr != NULL);

  if (curr == NULL) {
    // Move to empty list
    tcmalloc::DLL_Remove(span);
    tcmalloc::DLL_Prepend(&empty_, span);
  }

  *start = span->objects;
  *end = prev;
  span->objects = curr;
  SLL_SetNext(*end, NULL);
  span->refcount += result;
  counter_ -= result;
  return result;
}

// Fetch memory from the system and add to the central cache freelist.
void CentralFreeList::Populate() {
  // Release central list lock while operating on pageheap
  lock_.Unlock();
  const size_t npages = Static::sizemap()->class_to_pages(size_class_);

  Span* span = Static::pageheap()->NewWithSizeClass(npages, size_class_);
  if (span == nullptr) {
    Log(kLog, __FILE__, __LINE__,
        "tcmalloc: allocation failed", npages << kPageShift);
    lock_.Lock();
    return;
  }
  ASSERT(span->length == npages);
  // Cache sizeclass info eagerly. Locking is not necessary.
  // (Instead of being eager, we could just replace any stale info
  // about this span, but that seems to be no better in practice.)
  for (int i = 0; i < npages; i++) {
    Static::pageheap()->SetCachedSizeClass(span->start + i, size_class_);
  }

  // Split the block into pieces and add to the free-list
  // TODO: coloring of objects to avoid cache conflicts?
  void** tail = &span->objects;
  char* ptr = reinterpret_cast<char*>(span->start << kPageShift);
  char* limit = ptr + (npages << kPageShift);
  const size_t size = Static::sizemap()->ByteSizeForClass(size_class_);
  int num = 0;

  // Note, when ptr is close to the top of address space, ptr + size
  // might overflow the top of address space before we're able to
  // detect that it exceeded limit. So we need to be careful. See
  // https://github.com/gperftools/gperftools/issues/1323.
  ASSERT(limit - size >= ptr);
  for (;;) {

#ifndef USE_ADD_OVERFLOW
    auto nextptr = reinterpret_cast<char *>(reinterpret_cast<uintptr_t>(ptr) + size);
    if (nextptr < ptr || nextptr > limit) {
      break;
    }
#else
    // Same as above, just helping compiler a bit to produce better code
    uintptr_t nextaddr;
    if (__builtin_add_overflow(reinterpret_cast<uintptr_t>(ptr), size, &nextaddr)) {
      break;
    }
    char* nextptr = reinterpret_cast<char*>(nextaddr);
    if (nextptr > limit) {
      break;
    }
#endif

    // [ptr, ptr+size) bytes are all valid bytes, so append them
    *tail = ptr;
    tail = reinterpret_cast<void**>(ptr);
    num++;
    ptr = nextptr;
  }
  ASSERT(ptr <= limit);
  ASSERT(ptr > limit - size);  // same as ptr + size > limit but avoiding overflow
  *tail = NULL;
  span->refcount = 0;  // No sub-object in use yet

  // Add span to list of non-empty spans
  lock_.Lock();
  tcmalloc::DLL_Prepend(&nonempty_, span);
  ++num_spans_;
  counter_ += num;
}

int CentralFreeList::tc_length() {
  SpinLockHolder h(&lock_);
  return used_slots_ * Static::sizemap()->num_objects_to_move(size_class_);
}

size_t CentralFreeList::OverheadBytes() {
  SpinLockHolder h(&lock_);
  if (size_class_ == 0) {  // 0 holds the 0-sized allocations
    return 0;
  }
  const size_t pages_per_span = Static::sizemap()->class_to_pages(size_class_);
  const size_t object_size = Static::sizemap()->class_to_size(size_class_);
  ASSERT(object_size > 0);
  const size_t overhead_per_span = (pages_per_span * kPageSize) % object_size;
  return num_spans_ * overhead_per_span;
}

}  // namespace tcmalloc
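The carving loop in Populate() guards against ptr + size wrapping the top of the address space (gperftools issue #1323), which a naive "ptr + size > limit" comparison can miss. A minimal sketch isolating just that check, using the gcc/clang __builtin_add_overflow builtin from the USE_ADD_OVERFLOW branch above:

#include <stddef.h>
#include <stdint.h>

// Count how many 'size'-byte objects fit in [ptr, limit) without letting
// the pointer arithmetic wrap around the top of the address space.
static size_t CountObjects(char* ptr, char* limit, size_t size) {
  size_t num = 0;
  for (;;) {
    uintptr_t next;
    if (__builtin_add_overflow(reinterpret_cast<uintptr_t>(ptr), size, &next))
      break;                                  // ptr + size wrapped around.
    if (reinterpret_cast<char*>(next) > limit)
      break;                                  // Object would spill past limit.
    ++num;
    ptr = reinterpret_cast<char*>(next);
  }
  return num;
}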
209
3party/gperftools/src/central_freelist.h
Normal file
@ -0,0 +1,209 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
// Copyright (c) 2008, Google Inc.
|
||||
// All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#ifndef TCMALLOC_CENTRAL_FREELIST_H_
#define TCMALLOC_CENTRAL_FREELIST_H_

#include "config.h"
#include <stddef.h>                     // for size_t
#include <stdint.h>                     // for int32_t
#include "base/spinlock.h"
#include "base/thread_annotations.h"
#include "common.h"
#include "span.h"

namespace tcmalloc {

// Data kept per size-class in central cache.
class CentralFreeList {
 public:
  // A CentralFreeList may be used before its constructor runs.
  // So we prevent lock_'s constructor from doing anything to the
  // lock_ state.
  CentralFreeList() : lock_(base::LINKER_INITIALIZED) { }

  void Init(size_t cl);

  // These methods all do internal locking.

  // Insert the specified range into the central freelist.  N is the number of
  // elements in the range.  RemoveRange() is the opposite operation.
  void InsertRange(void *start, void *end, int N);

  // Returns the actual number of fetched elements and sets *start and *end.
  int RemoveRange(void **start, void **end, int N);

  // Returns the number of free objects in cache.
  int length() {
    SpinLockHolder h(&lock_);
    return counter_;
  }

  // Returns the number of free objects in the transfer cache.
  int tc_length();

  // Returns the memory overhead (internal fragmentation) attributable
  // to the freelist.  This is memory lost when the size of elements
  // in a freelist doesn't exactly divide the page-size (an 8192-byte
  // page full of 5-byte objects would have 2 bytes memory overhead).
  size_t OverheadBytes();

  // Lock/Unlock the internal SpinLock.  Used on the pthread_atfork call
  // to set the lock in a consistent state before the fork.
  void Lock() EXCLUSIVE_LOCK_FUNCTION(lock_) {
    lock_.Lock();
  }

  void Unlock() UNLOCK_FUNCTION(lock_) {
    lock_.Unlock();
  }

 private:
  // TransferCache is used to cache transfers of
  // sizemap.num_objects_to_move(size_class) back and forth between
  // thread caches and the central cache for a given size class.
  struct TCEntry {
    void *head;  // Head of chain of objects.
    void *tail;  // Tail of chain of objects.
  };

  // A central cache freelist can have anywhere from 0 to kMaxNumTransferEntries
  // slots to put linked list chains into.
#ifdef TCMALLOC_SMALL_BUT_SLOW
  // For the small memory model, the transfer cache is not used.
  static const int kMaxNumTransferEntries = 0;
#else
  // Starting point for the maximum number of entries in the transfer cache.
  // The actual maximum for a given size class may be lower than this
  // maximum value.
  static const int kMaxNumTransferEntries = 64;
#endif

  // REQUIRES: lock_ is held
  // Remove up to N objects from the cache, setting *start and *end.
  // Returns 0 if there are no free entries in the cache.
  int FetchFromOneSpans(int N, void **start, void **end) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ is held
  // Remove up to N objects from the cache, setting *start and *end.
  // Fetches from the pageheap if the cache is empty.  Only returns
  // 0 on allocation failure.
  int FetchFromOneSpansSafe(int N, void **start, void **end) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ is held
  // Release a linked list of objects to spans.
  // May temporarily release lock_.
  void ReleaseListToSpans(void *start) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ is held
  // Release an object to spans.
  // May temporarily release lock_.
  void ReleaseToSpans(void* object) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ is held
  // Populate cache by fetching from the page heap.
  // May temporarily release lock_.
  void Populate() EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ is held.
  // Tries to make room for a TCEntry.  If the cache is full it will try to
  // expand it at the cost of some other cache size.  Returns false if there
  // is no space.
  bool MakeCacheSpace() EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: lock_ for locked_size_class is held.
  // Picks a "random" size class to steal a TCEntry slot from.  In reality it
  // just iterates over the size classes but does so without taking a lock.
  // Returns true on success.
  // May temporarily lock a "random" size class.
  static bool EvictRandomSizeClass(int locked_size_class, bool force);

  // REQUIRES: lock_ is *not* held.
  // Tries to shrink the cache.  If force is true it will release objects to
  // spans if that allows it to shrink the cache.  Returns false if it failed
  // to shrink the cache.  Decrements cache_size_ on success.
  // May temporarily take lock_.  If it takes lock_, the locked_size_class
  // lock is released to keep the thread from holding two size class locks
  // concurrently, which could lead to a deadlock.
  bool ShrinkCache(int locked_size_class, bool force) LOCKS_EXCLUDED(lock_);

  // This lock protects all the data members.  cached_entries and cache_size_
  // may be looked at without holding the lock.
  SpinLock lock_;

  // We keep linked lists of empty and non-empty spans.
  size_t   size_class_;     // My size class
  Span     empty_;          // Dummy header for list of empty spans
  Span     nonempty_;       // Dummy header for list of non-empty spans
  size_t   num_spans_;      // Number of spans in empty_ plus nonempty_
  size_t   counter_;        // Number of free objects in cache entry

  // Here we reserve space for TCEntry cache slots.  Space is preallocated
  // for the largest possible number of entries that any one size class may
  // accumulate.  Not all size classes are allowed to accumulate
  // kMaxNumTransferEntries, so there is some wasted space for those size
  // classes.
  TCEntry tc_slots_[kMaxNumTransferEntries];

  // Number of currently used cached entries in tc_slots_.  This variable is
  // updated under a lock but can be read without one.
  int32_t used_slots_;
  // The current number of slots for this size class.  This is an
  // adaptive value that is increased if there is lots of traffic
  // on a given size class.
  int32_t cache_size_;
  // Maximum size of the cache for a given size class.
  int32_t max_cache_size_;
};

// Pads each CentralFreeList object to a multiple of 64 bytes.  Since some
// compilers (such as MSVC) don't like it when the padding is 0, I use
// template specialization to remove the padding entirely when
// sizeof(CentralFreeList) is a multiple of 64.
template<int kFreeListSizeMod64>
class CentralFreeListPaddedTo : public CentralFreeList {
 private:
  char pad_[64 - kFreeListSizeMod64];
};

template<>
class CentralFreeListPaddedTo<0> : public CentralFreeList {
};

class CentralFreeListPadded : public CentralFreeListPaddedTo<
  sizeof(CentralFreeList) % 64> {
};
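
// Editor's note: a minimal compile-time sanity check (not part of the
// upstream header) that the padding scheme above really rounds the
// object up to a 64-byte multiple; it assumes a C++11 compiler, which
// this commit's config.h already probes via HAVE_CXX11.
static_assert(sizeof(CentralFreeListPadded) % 64 == 0,
              "CentralFreeListPadded should occupy a multiple of 64 bytes");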

}  // namespace tcmalloc

#endif  // TCMALLOC_CENTRAL_FREELIST_H_
195
3party/gperftools/src/check_address-inl.h
Normal file
@@ -0,0 +1,195 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2023, gperftools Contributors
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// These are internal implementation details of the
// stacktrace_generic_fp-inl.h module.  We only split this into a
// separate header to enable unit test coverage.

// This is only used on OS-es with mmap support.
#include <fcntl.h>
#include <signal.h>
#include <sys/mman.h>
#include <unistd.h>

#if HAVE_SYS_SYSCALL_H && !__APPLE__
#include <sys/syscall.h>
#endif

namespace {

#if defined(__linux__) && !defined(FORCE_PIPES)
#define CHECK_ADDRESS_USES_SIGPROCMASK

// The Linux kernel ABI for sigprocmask requires us to pass the exact
// sizeof of the kernel's sigset_t, which is 64-bit for most arches,
// with the only notable exception being mips.
#if defined(__mips__)
static constexpr int kKernelSigSetSize = 16;
#else
static constexpr int kKernelSigSetSize = 8;
#endif

// For Linux we have two strategies.  One is calling sigprocmask with a
// bogus HOW argument and our address as the 'new' sigset argument.
// The kernel ends up reading the new sigset before interpreting HOW,
// so we either get EFAULT when addr is unreadable, or EINVAL for a
// readable addr combined with the bogus HOW argument.
//
// We 'steal' this idea from abseil.  But nothing guarantees this exact
// behavior of Linux.  So to be future-compatible (some of our binaries
// will run tens of years from the time they're compiled), we also
// have a second, more robust method.
bool CheckAccessSingleSyscall(uintptr_t addr, int pagesize) {
  addr &= ~uintptr_t{15};

  if (addr == 0) {
    return false;
  }

  int rv = syscall(SYS_rt_sigprocmask, ~0, addr, uintptr_t{0}, kKernelSigSetSize);
  RAW_CHECK(rv < 0, "sigprocmask(~0, addr, ...)");

  return (errno != EFAULT);
}

// This is the second strategy.  The idea is more or less the same as
// before, but we use SIG_BLOCK for the HOW argument.  If this succeeds
// (with the side effect of blocking a random set of signals), we
// simply restore the previous signal mask.
bool CheckAccessTwoSyscalls(uintptr_t addr, int pagesize) {
  addr &= ~uintptr_t{15};

  if (addr == 0) {
    return false;
  }

  uintptr_t old[(kKernelSigSetSize + sizeof(uintptr_t) - 1) / sizeof(uintptr_t)];
  int rv = syscall(SYS_rt_sigprocmask, SIG_BLOCK, addr, old, kKernelSigSetSize);
  if (rv == 0) {
    syscall(SYS_rt_sigprocmask, SIG_SETMASK, old, nullptr, kKernelSigSetSize);
    return true;
  }
  return false;
}

bool CheckAddressFirstCall(uintptr_t addr, int pagesize);

bool (* volatile CheckAddress)(uintptr_t addr, int pagesize) = CheckAddressFirstCall;

// And we choose between the strategies by checking at runtime whether
// the single-syscall approach actually works, then switching to the
// proper version.
bool CheckAddressFirstCall(uintptr_t addr, int pagesize) {
  void* unreadable = mmap(0, pagesize, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0);
  RAW_CHECK(unreadable != MAP_FAILED, "mmap of unreadable");

  if (!CheckAccessSingleSyscall(reinterpret_cast<uintptr_t>(unreadable), pagesize)) {
    CheckAddress = CheckAccessSingleSyscall;
  } else {
    CheckAddress = CheckAccessTwoSyscalls;
  }

  // Sanity check that our unreadable address is unreadable and that
  // our readable address (our own fn pointer variable) is readable.
  RAW_CHECK(CheckAddress(reinterpret_cast<uintptr_t>(CheckAddress),
                         pagesize),
            "sanity check for readable addr");
  RAW_CHECK(!CheckAddress(reinterpret_cast<uintptr_t>(unreadable),
                          pagesize),
            "sanity check for unreadable addr");

  (void)munmap(unreadable, pagesize);

  return CheckAddress(addr, pagesize);
}

#else

#if HAVE_SYS_SYSCALL_H && !__APPLE__
static int raw_read(int fd, void* buf, size_t count) {
  return syscall(SYS_read, fd, buf, count);
}
static int raw_write(int fd, void* buf, size_t count) {
  return syscall(SYS_write, fd, buf, count);
}
#else
#define raw_read read
#define raw_write write
#endif

bool CheckAddress(uintptr_t addr, int pagesize) {
  static tcmalloc::TrivialOnce once;
  static int fds[2];

  once.RunOnce([] () {
    RAW_CHECK(pipe(fds) == 0, "pipe(fds)");

    auto add_flag = [] (int fd, int get, int set, int the_flag) {
      int flags = fcntl(fd, get, 0);
      RAW_CHECK(flags >= 0, "fcntl get");
      flags |= the_flag;
      RAW_CHECK(fcntl(fd, set, flags) == 0, "fcntl set");
    };

    for (int i = 0; i < 2; i++) {
      add_flag(fds[i], F_GETFD, F_SETFD, FD_CLOEXEC);
      add_flag(fds[i], F_GETFL, F_SETFL, O_NONBLOCK);
    }
  });

  do {
    int rv = raw_write(fds[1], reinterpret_cast<void*>(addr), 1);
    RAW_CHECK(rv != 0, "raw_write(...) == 0");
    if (rv > 0) {
      return true;
    }
    if (errno == EFAULT) {
      return false;
    }

    RAW_CHECK(errno == EAGAIN, "write errno must be EAGAIN");

    char drainbuf[256];
    do {
      rv = raw_read(fds[0], drainbuf, sizeof(drainbuf));
      if (rv < 0 && errno != EINTR) {
        RAW_CHECK(errno == EAGAIN, "read errno must be EAGAIN");
        break;
      }
      // read succeeded or we got EINTR
    } while (true);
  } while (true);

  return false;
}

#endif

}  // namespace
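
// Editor's note (illustrative, not part of the upstream file): the
// CheckAddress routine defined above is consumed by the frame-pointer
// unwinder in stacktrace_generic_fp-inl.h, which probes each candidate
// frame pointer before dereferencing it.  A hypothetical call site:
//
//   int pagesize = getpagesize();
//   uintptr_t fp = ...;  // candidate frame pointer read off the stack
//   if (!CheckAddress(fp, pagesize)) {
//     break;  // stop unwinding instead of faulting on a bad frame
//   }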
324
3party/gperftools/src/common.cc
Normal file
@@ -0,0 +1,324 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#include "config.h"

#include <stdlib.h>                     // for strtol
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif

#include <algorithm>

#include "common.h"
#include "system-alloc.h"
#include "base/spinlock.h"
#include "base/commandlineflags.h"
#include "getenv_safe.h"                // TCMallocGetenvSafe

namespace tcmalloc {

// Defines the maximum number of objects per size class to transfer between
// thread and central caches.
static int32 FLAGS_tcmalloc_transfer_num_objects;

static const int32 kDefaultTransferNumObjecs = 32;

// The init function is provided to explicitly initialize the variable value
// from the env. var, to avoid C++ global construction that might defer its
// initialization until after a malloc/new call.
static inline void InitTCMallocTransferNumObjects()
{
  if (FLAGS_tcmalloc_transfer_num_objects == 0) {
    const char *envval = TCMallocGetenvSafe("TCMALLOC_TRANSFER_NUM_OBJ");
    FLAGS_tcmalloc_transfer_num_objects = !envval ? kDefaultTransferNumObjecs :
      strtol(envval, NULL, 10);
  }
}

// Note: the following only works for "n"s that fit in 32-bits, but
// that is fine since we only use it for small sizes.
static inline int LgFloor(size_t n) {
  int log = 0;
  for (int i = 4; i >= 0; --i) {
    int shift = (1 << i);
    size_t x = n >> shift;
    if (x != 0) {
      n = x;
      log += shift;
    }
  }
  ASSERT(n == 1);
  return log;
}

static int AlignmentForSize(size_t size) {
  int alignment = kAlignment;
  if (size > kMaxSize) {
    // Cap alignment at kPageSize for large sizes.
    alignment = kPageSize;
  } else if (size >= 128) {
    // Space wasted due to alignment is at most 1/8, i.e., 12.5%.
    alignment = (1 << LgFloor(size)) / 8;
  } else if (size >= kMinAlign) {
    // We need an alignment of at least 16 bytes to satisfy
    // requirements for some SSE types.
    alignment = kMinAlign;
  }
  // Maximum alignment allowed is page size alignment.
  if (alignment > kPageSize) {
    alignment = kPageSize;
  }
  CHECK_CONDITION(size < kMinAlign || alignment >= kMinAlign);
  CHECK_CONDITION((alignment & (alignment - 1)) == 0);
  return alignment;
}
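
// Editor's note: a worked instance of the 12.5% bound above (added for
// clarity, not in the upstream source).  For size = 1408 we have
// LgFloor(1408) = 10, so alignment = 1024/8 = 128; rounding any request
// in this power-of-two band up to a 128-byte boundary wastes at most
// 127 bytes on an allocation of at least 1024 bytes, i.e. under 1/8.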

int SizeMap::NumMoveSize(size_t size) {
  if (size == 0) return 0;
  // Use approx 64k transfers between thread and central caches.
  int num = static_cast<int>(64.0 * 1024.0 / size);
  if (num < 2) num = 2;

  // Avoid bringing too many objects into small object free lists.
  // If this value is too large:
  // - We waste memory with extra objects sitting in the thread caches.
  // - The central freelist holds its lock for too long while
  //   building a linked list of objects, slowing down the allocations
  //   of other threads.
  // If this value is too small:
  // - We go to the central freelist too often and we have to acquire
  //   its lock each time.
  // This value strikes a balance between the constraints above.
  if (num > FLAGS_tcmalloc_transfer_num_objects)
    num = FLAGS_tcmalloc_transfer_num_objects;

  return num;
}
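
// Editor's note (illustrative, not in the upstream source): for a
// 1024-byte size class this gives 64*1024/1024 = 64 objects, which the
// clamp above reduces to the default TCMALLOC_TRANSFER_NUM_OBJ of 32;
// so thread caches exchange 32 such objects with the central cache per
// batch.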

// Initialize the mapping arrays
void SizeMap::Init() {
  InitTCMallocTransferNumObjects();

#if (!defined(_WIN32) || defined(TCMALLOC_BRAVE_EFFECTIVE_PAGE_SIZE)) && !defined(TCMALLOC_COWARD_EFFECTIVE_PAGE_SIZE)
  size_t native_page_size = tcmalloc::commandlineflags::StringToLongLong(
    TCMallocGetenvSafe("TCMALLOC_OVERRIDE_PAGESIZE"), getpagesize());
#else
  // On windows getpagesize() returns 64k, because that is the
  // "granularity size" w.r.t. its virtual memory facility.  So it is
  // maybe not a bad idea to also have effective logical pages at 64k.
  // But that breaks frag_unittest (for a mostly harmless reason), and
  // I am not brave enough to change our behavior that much on windows
  // (and it isn't that much of a change; people routinely run 256k
  // logical pages anyways).
  constexpr size_t native_page_size = kPageSize;
#endif

  size_t min_span_size = std::max<size_t>(native_page_size, kPageSize);
  if (min_span_size > kPageSize && (min_span_size % kPageSize) != 0) {
    Log(kLog, __FILE__, __LINE__, "This should never happen, but somehow "
        "we got a system page size that is not a power of 2 and not a multiple of "
        "malloc's logical page size. Releasing memory back will mostly not happen. "
        "system: ", native_page_size, ", malloc: ", kPageSize);
    min_span_size = kPageSize;
  }

  min_span_size_in_pages_ = min_span_size / kPageSize;

  // Do some sanity checking on class_array_ and its index computation
  if (ClassIndex(0) != 0) {
    Log(kCrash, __FILE__, __LINE__,
        "Invalid class index for size 0", ClassIndex(0));
  }
  if (ClassIndex(kMaxSize) >= sizeof(class_array_)) {
    Log(kCrash, __FILE__, __LINE__,
        "Invalid class index for kMaxSize", ClassIndex(kMaxSize));
  }

  // Compute the size classes we want to use
  int sc = 1;   // Next size class to assign
  int alignment = kAlignment;
  CHECK_CONDITION(kAlignment <= kMinAlign);
  for (size_t size = kAlignment; size <= kMaxSize; size += alignment) {
    alignment = AlignmentForSize(size);
    CHECK_CONDITION((size % alignment) == 0);

    int blocks_to_move = NumMoveSize(size) / 4;
    size_t psize = 0;
    do {
      psize += min_span_size;
      // Allocate enough pages so leftover is less than 1/8 of total.
      // This bounds wasted space to at most 12.5%.
      while ((psize % size) > (psize >> 3)) {
        psize += min_span_size;
      }
      // Continue to add pages until there are at least as many objects in
      // the span as are needed when moving objects from the central
      // freelists and spans to the thread caches.
    } while ((psize / size) < (blocks_to_move));
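
    // Editor's note (illustrative, not in the upstream source): with an
    // 8K min_span_size, a 32-byte class gets blocks_to_move =
    // NumMoveSize(32)/4 = 32/4 = 8; a single 8K span already holds 256
    // objects with zero leftover, so the loop settles on psize = 8192.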
    const size_t my_pages = psize >> kPageShift;

    if (sc > 1 && my_pages == class_to_pages_[sc-1]) {
      // See if we can merge this into the previous class without
      // increasing the fragmentation of the previous class.
      const size_t my_objects = (my_pages << kPageShift) / size;
      const size_t prev_objects = (class_to_pages_[sc-1] << kPageShift)
                                  / class_to_size_[sc-1];
      if (my_objects == prev_objects) {
        // Adjust last class to include this size
        class_to_size_[sc-1] = size;
        continue;
      }
    }

    // Add new class
    class_to_pages_[sc] = my_pages;
    class_to_size_[sc] = size;
    sc++;
  }
  num_size_classes = sc;
  if (sc > kClassSizesMax) {
    Log(kCrash, __FILE__, __LINE__,
        "too many size classes: (found vs. max)", sc, kClassSizesMax);
  }

  // Initialize the mapping arrays
  int next_size = 0;
  for (int c = 1; c < num_size_classes; c++) {
    const int max_size_in_class = class_to_size_[c];
    for (int s = next_size; s <= max_size_in_class; s += kAlignment) {
      class_array_[ClassIndex(s)] = c;
    }
    next_size = max_size_in_class + kAlignment;
  }

  // Double-check sizes just to be safe
  for (size_t size = 0; size <= kMaxSize;) {
    const int sc = SizeClass(size);
    if (sc <= 0 || sc >= num_size_classes) {
      Log(kCrash, __FILE__, __LINE__,
          "Bad size class (class, size)", sc, size);
    }
    if (sc > 1 && size <= class_to_size_[sc-1]) {
      Log(kCrash, __FILE__, __LINE__,
          "Allocating unnecessarily large class (class, size)", sc, size);
    }
    const size_t s = class_to_size_[sc];
    if (size > s || s == 0) {
      Log(kCrash, __FILE__, __LINE__,
          "Bad (class, size, requested)", sc, s, size);
    }
    if (size <= kMaxSmallSize) {
      size += 8;
    } else {
      size += 128;
    }
  }

  // Our fast-path aligned allocation functions rely on 'naturally
  // aligned' sizes to produce aligned addresses.  Let's check that this
  // holds for the size classes we produced.
  //
  // I.e. we're checking that
  //
  // align = (1 << shift), malloc(i * align) % align == 0,
  //
  // for all align values up to kPageSize.
  for (size_t align = kMinAlign; align <= kPageSize; align <<= 1) {
    for (size_t size = align; size < kPageSize; size += align) {
      CHECK_CONDITION(class_to_size_[SizeClass(size)] % align == 0);
    }
  }

  // Initialize the num_objects_to_move array.
  for (size_t cl = 1; cl < num_size_classes; ++cl) {
    num_objects_to_move_[cl] = NumMoveSize(ByteSizeForClass(cl));
  }
}

// Metadata allocator -- keeps stats about how many bytes allocated.
static uint64_t metadata_system_bytes_ = 0;
static const size_t kMetadataAllocChunkSize = 8*1024*1024;
// As ThreadCache objects are allocated with MetaDataAlloc, and also
// CACHELINE_ALIGNED, we must use the same alignment as TCMalloc_SystemAlloc.
static const size_t kMetadataAllignment = sizeof(MemoryAligner);

static char *metadata_chunk_alloc_;
static size_t metadata_chunk_avail_;

static SpinLock metadata_alloc_lock(SpinLock::LINKER_INITIALIZED);

void* MetaDataAlloc(size_t bytes) {
  if (bytes >= kMetadataAllocChunkSize) {
    void *rv = TCMalloc_SystemAlloc(bytes,
                                    NULL, kMetadataAllignment);
    if (rv != NULL) {
      metadata_system_bytes_ += bytes;
    }
    return rv;
  }

  SpinLockHolder h(&metadata_alloc_lock);

  // The following works by essentially treating the address as an
  // integer of log_2 kMetadataAllignment bits and negating it.  I.e.
  // negated value + original value gives 0, which is what we want
  // modulo kMetadataAllignment.  Note, we negate before masking the
  // higher bits off; otherwise we'd have to mask them off after
  // negation anyways.
  intptr_t alignment = -reinterpret_cast<intptr_t>(metadata_chunk_alloc_) & (kMetadataAllignment-1);

  if (metadata_chunk_avail_ < bytes + alignment) {
    size_t real_size;
    void *ptr = TCMalloc_SystemAlloc(kMetadataAllocChunkSize,
                                     &real_size, kMetadataAllignment);
    if (ptr == NULL) {
      return NULL;
    }

    metadata_chunk_alloc_ = static_cast<char *>(ptr);
    metadata_chunk_avail_ = real_size;

    alignment = 0;
  }

  void *rv = static_cast<void *>(metadata_chunk_alloc_ + alignment);
  bytes += alignment;
  metadata_chunk_alloc_ += bytes;
  metadata_chunk_avail_ -= bytes;
  metadata_system_bytes_ += bytes;
  return rv;
}
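
// Editor's note: a worked instance of the negation trick above (added
// for clarity, not in the upstream source).  Assuming
// kMetadataAllignment is 16 and metadata_chunk_alloc_ ends in ...0x09:
//   -0x09 & 0xF == 0x7, and 0x09 + 0x7 == 0x10,
// i.e. adding the computed 'alignment' bumps the bump-pointer to the
// next 16-byte boundary (and adds 0 when it is already aligned).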

uint64_t metadata_system_bytes() { return metadata_system_bytes_; }

}  // namespace tcmalloc
311
3party/gperftools/src/common.h
Normal file
@@ -0,0 +1,311 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>
//
// Common definitions for tcmalloc code.

#ifndef TCMALLOC_COMMON_H_
#define TCMALLOC_COMMON_H_

#include "config.h"
#include <stddef.h>                     // for size_t
#include <stdint.h>                     // for uintptr_t, uint64_t
#include "internal_logging.h"           // for ASSERT, etc
#include "base/basictypes.h"            // for LIKELY, etc

// Type that can hold a page number
typedef uintptr_t PageID;

// Type that can hold the length of a run of pages
typedef uintptr_t Length;

//-------------------------------------------------------------------
// Configuration
//-------------------------------------------------------------------

#if defined(TCMALLOC_ALIGN_8BYTES)
// Unless we are forced to use 8-byte alignment, we use an alignment of
// at least 16 bytes to satisfy requirements for some SSE types.
// Keep in mind that with 16-byte alignment you can waste up to 25% of
// the space to alignment (e.g. a malloc of 24 bytes will get 32 bytes).
static const size_t kMinAlign = 8;
#else
static const size_t kMinAlign = 16;
#endif

// Using large pages speeds up the execution at a cost of larger memory use.
// Deallocation may speed up by a factor as the page map gets 8x smaller, so
// lookups in the page map result in fewer L2 cache misses, which translates to
// speedup for application/platform combinations with high L2 cache pressure.
// As the number of size classes increases with large pages, we increase
// the thread cache allowance to avoid passing more free ranges to and from
// central lists.  Also, larger pages are less likely to get freed.
// These two factors cause a bounded increase in memory use.
#if defined(TCMALLOC_PAGE_SIZE_SHIFT)
static const size_t kPageShift = TCMALLOC_PAGE_SIZE_SHIFT;
#else
static const size_t kPageShift = 13;
#endif

static const size_t kClassSizesMax = 128;

static const size_t kMaxThreadCacheSize = 4 << 20;

static const size_t kPageSize   = 1 << kPageShift;
static const size_t kMaxSize    = 256 * 1024;
static const size_t kAlignment  = 8;
// For all span-lengths <= kMaxPages we keep an exact-size list in PageHeap.
static const size_t kMaxPages = 1 << (20 - kPageShift);

// Default bound on the total amount of thread caches.
#ifdef TCMALLOC_SMALL_BUT_SLOW
// Make the overall thread cache no bigger than that of a single thread
// for the small memory footprint case.
static const size_t kDefaultOverallThreadCacheSize = kMaxThreadCacheSize;
#else
static const size_t kDefaultOverallThreadCacheSize = 8u * kMaxThreadCacheSize;
#endif

// Lower bound on the per-thread cache sizes
static const size_t kMinThreadCacheSize = kMaxSize * 2;

// The number of bytes one ThreadCache will steal from another when
// the first ThreadCache is forced to Scavenge(), delaying the
// next call to Scavenge for this thread.
static const size_t kStealAmount = 1 << 16;

// The number of times that a deallocation can cause a freelist to
// go over its max_length() before shrinking max_length().
static const int kMaxOverages = 3;

// Maximum length we allow a per-thread free-list to have before we
// move objects from it into the corresponding central free-list.  We
// want this big to avoid locking the central free-list too often.  It
// should not hurt to make this list somewhat big because the
// scavenging code will shrink it down when its contents are not in use.
static const int kMaxDynamicFreeListLength = 8192;

static const Length kMaxValidPages = (~static_cast<Length>(0)) >> kPageShift;

#if __aarch64__ || __x86_64__ || _M_AMD64 || _M_ARM64
// All current x86_64 processors only look at the lower 48 bits in
// virtual to physical address translation.  The top 16 are all the
// same as bit 47, and bit 47 value 1 is reserved for kernel-space
// addresses in practice.  So it is actually 47 usable bits from
// malloc's perspective.  This lets us use faster two level page maps
// on this architecture.
//
// There is a very similar story on 64-bit arms except they have a
// full 48 bits for user-space.  Because of that, and because in
// principle OSes can start giving some of the highest-bit-set
// addresses to user-space, we don't bother to limit x86 to 47 bits.
//
// As of now there are published plans to add more bits to the x86-64
// virtual address space, but since 48 bits has been the norm for a
// long time and lots of software relies on it, it will be opt-in from
// the OS's perspective.  So we can keep doing "48 bits" at least for
// now.
static const int kAddressBits = (sizeof(void*) < 8 ? (8 * sizeof(void*)) : 48);
#else
// mipsen and ppcs have more general hardware so we have to support
// full 64-bits of addresses.
static const int kAddressBits = 8 * sizeof(void*);
#endif
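
// Editor's note (illustrative arithmetic, not in the upstream header):
// with kAddressBits = 48 and the default kPageShift = 13, a PageID
// needs only 48 - 13 = 35 bits, which is what makes the shallow
// (two-level) radix page map mentioned above practical; with the full
// 64 address bits of the #else branch, page IDs need 64 - 13 = 51 bits
// and a deeper map is required.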

namespace tcmalloc {

// Convert byte size into pages.  This won't overflow, but may return
// an unreasonably large value if bytes is huge enough.
inline Length pages(size_t bytes) {
  return (bytes >> kPageShift) +
      ((bytes & (kPageSize - 1)) > 0 ? 1 : 0);
}
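
// Editor's note (illustrative, not in the upstream header): with the
// default kPageShift = 13 (8K pages), pages(1) == pages(8192) == 1 and
// pages(8193) == 2, i.e. this is a round-up division by kPageSize.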

// Size-class information + mapping
class SizeMap {
 private:
  //-------------------------------------------------------------------
  // Mapping from size to size_class and vice versa
  //-------------------------------------------------------------------

  // Sizes <= 1024 have an alignment >= 8.  So for such sizes we have an
  // array indexed by ceil(size/8).  Sizes > 1024 have an alignment >= 128.
  // So for these larger sizes we have an array indexed by ceil(size/128).
  //
  // We flatten both logical arrays into one physical array and use
  // arithmetic to compute an appropriate index.  The constants used by
  // ClassIndex() were selected to make the flattening work.
  //
  // Examples:
  //   Size       Expression                      Index
  //   -------------------------------------------------------
  //   0          (0 + 7) / 8                     0
  //   1          (1 + 7) / 8                     1
  //   ...
  //   1024       (1024 + 7) / 8                  128
  //   1025       (1025 + 127 + (120<<7)) / 128   129
  //   ...
  //   32768      (32768 + 127 + (120<<7)) / 128  376
  static const int kMaxSmallSize = 1024;
  static const size_t kClassArraySize =
      ((kMaxSize + 127 + (120 << 7)) >> 7) + 1;
  unsigned char class_array_[kClassArraySize];

  static inline size_t SmallSizeClass(size_t s) {
    return (static_cast<uint32_t>(s) + 7) >> 3;
  }

  static inline size_t LargeSizeClass(size_t s) {
    return (static_cast<uint32_t>(s) + 127 + (120 << 7)) >> 7;
  }
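
  // Editor's note: quick arithmetic check of the flattening table
  // above (added for clarity, not in the upstream header):
  // LargeSizeClass(1025) = (1025 + 127 + 15360) >> 7 = 16512 >> 7 =
  // 129, exactly one past SmallSizeClass(1024) = 128, so the two index
  // ranges meet with no gap and no overlap.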

  // If size is no more than kMaxSize, compute index of the
  // class_array[] entry for it, putting the class index in output
  // parameter idx and returning true.  Otherwise return false.
  static inline bool ATTRIBUTE_ALWAYS_INLINE ClassIndexMaybe(size_t s,
                                                             uint32* idx) {
    if (PREDICT_TRUE(s <= kMaxSmallSize)) {
      *idx = (static_cast<uint32>(s) + 7) >> 3;
      return true;
    } else if (s <= kMaxSize) {
      *idx = (static_cast<uint32>(s) + 127 + (120 << 7)) >> 7;
      return true;
    }
    return false;
  }

  // Compute index of the class_array[] entry for a given size
  static inline size_t ClassIndex(size_t s) {
    // Use unsigned arithmetic to avoid unnecessary sign extensions.
    ASSERT(0 <= s);
    ASSERT(s <= kMaxSize);
    if (PREDICT_TRUE(s <= kMaxSmallSize)) {
      return SmallSizeClass(s);
    } else {
      return LargeSizeClass(s);
    }
  }

  // Number of objects to move between a per-thread list and a central
  // list in one shot.  We want this to be not too small so we can
  // amortize the lock overhead for accessing the central list.  Making
  // it too big may temporarily cause unnecessary memory wastage in the
  // per-thread free list until the scavenger cleans up the list.
  int num_objects_to_move_[kClassSizesMax];

  int NumMoveSize(size_t size);

  // Mapping from size class to max size storable in that class
  int32 class_to_size_[kClassSizesMax];

  // Mapping from size class to number of pages to allocate at a time
  size_t class_to_pages_[kClassSizesMax];

  size_t min_span_size_in_pages_;

 public:
  size_t num_size_classes;

  // Constructor should do nothing since we rely on an explicit Init()
  // call, which may or may not be made before the constructor runs.
  SizeMap() { }

  // Initialize the mapping arrays
  void Init();

  inline int SizeClass(size_t size) {
    return class_array_[ClassIndex(size)];
  }

  // Check if size is small enough to be representable by a size
  // class, and if it is, put the matching size class into *cl.
  // Returns true iff a matching size class was found.
  bool ATTRIBUTE_ALWAYS_INLINE GetSizeClass(size_t size, uint32* cl) {
    uint32 idx;
    if (!ClassIndexMaybe(size, &idx)) {
      return false;
    }
    *cl = class_array_[idx];
    return true;
  }

  // Get the byte-size for a specified class
  int32 ATTRIBUTE_ALWAYS_INLINE ByteSizeForClass(uint32 cl) {
    return class_to_size_[cl];
  }

  // Mapping from size class to max size storable in that class
  int32 class_to_size(uint32 cl) {
    return class_to_size_[cl];
  }

  // Mapping from size class to number of pages to allocate at a time
  size_t class_to_pages(uint32 cl) {
    return class_to_pages_[cl];
  }

  // Number of objects to move between a per-thread list and a central
  // list in one shot.  We want this to be not too small so we can
  // amortize the lock overhead for accessing the central list.  Making
  // it too big may temporarily cause unnecessary memory wastage in the
  // per-thread free list until the scavenger cleans up the list.
  int num_objects_to_move(uint32 cl) {
    return num_objects_to_move_[cl];
  }

  // Smallest span size, expressed in kPageSize pages (i.e. the max of
  // the system's page size and kPageSize, divided by kPageSize).
  Length min_span_size_in_pages() {
    return min_span_size_in_pages_;
  }
};

// Allocates "bytes" worth of memory and returns it.  Increments
// metadata_system_bytes appropriately.  May return NULL if allocation
// fails.  Requires pageheap_lock is held.
void* MetaDataAlloc(size_t bytes);

// Returns the total number of bytes allocated from the system.
// Requires pageheap_lock is held.
uint64_t metadata_system_bytes();

// size/depth are made the same size as a pointer so that some generic
// code below can conveniently cast them back and forth to void*.
static const int kMaxStackDepth = 31;
struct StackTrace {
  uintptr_t size;          // Size of object
  uintptr_t depth;         // Number of PC values stored in array below
  void*     stack[kMaxStackDepth];
};

}  // namespace tcmalloc

#endif  // TCMALLOC_COMMON_H_
278
3party/gperftools/src/config.h.in
Normal file
@@ -0,0 +1,278 @@
/* src/config.h.in.  Generated from configure.ac by autoheader.  */


#ifndef GPERFTOOLS_CONFIG_H_
#define GPERFTOOLS_CONFIG_H_


/* enable aggressive decommit by default */
#undef ENABLE_AGGRESSIVE_DECOMMIT_BY_DEFAULT

/* Build new/delete operators for overaligned types */
#undef ENABLE_ALIGNED_NEW_DELETE

/* Build runtime detection for sized delete */
#undef ENABLE_DYNAMIC_SIZED_DELETE

/* report large allocation */
#undef ENABLE_LARGE_ALLOC_REPORT

/* Build sized deletion operators */
#undef ENABLE_SIZED_DELETE

/* Define to 1 if you have the <asm/ptrace.h> header file. */
#undef HAVE_ASM_PTRACE_H

/* define if the compiler supports basic C++11 syntax */
#undef HAVE_CXX11

/* Define to 1 if you have the <cygwin/signal.h> header file. */
#undef HAVE_CYGWIN_SIGNAL_H

/* Define to 1 if you have the declaration of `backtrace', and to 0 if you
   don't. */
#undef HAVE_DECL_BACKTRACE

/* Define to 1 if you have the declaration of `backtrace_symbols', and to 0 if
   you don't. */
#undef HAVE_DECL_BACKTRACE_SYMBOLS

/* Define to 1 if you have the declaration of `cfree', and to 0 if you don't.
   */
#undef HAVE_DECL_CFREE

/* Define to 1 if you have the declaration of `memalign', and to 0 if you
   don't. */
#undef HAVE_DECL_MEMALIGN

/* Define to 1 if you have the declaration of `nanosleep', and to 0 if you
   don't. */
#undef HAVE_DECL_NANOSLEEP

/* Define to 1 if you have the declaration of `posix_memalign', and to 0 if
   you don't. */
#undef HAVE_DECL_POSIX_MEMALIGN

/* Define to 1 if you have the declaration of `pvalloc', and to 0 if you
   don't. */
#undef HAVE_DECL_PVALLOC

/* Define to 1 if you have the declaration of `sleep', and to 0 if you don't.
   */
#undef HAVE_DECL_SLEEP

/* Define to 1 if you have the declaration of `valloc', and to 0 if you don't.
   */
#undef HAVE_DECL_VALLOC

/* Define to 1 if you have the <dlfcn.h> header file. */
#undef HAVE_DLFCN_H

/* Define to 1 if the system has the type `Elf32_Versym'. */
#undef HAVE_ELF32_VERSYM

/* Define to 1 if you have the <execinfo.h> header file. */
#undef HAVE_EXECINFO_H

/* Define to 1 if you have the <fcntl.h> header file. */
#undef HAVE_FCNTL_H

/* Define to 1 if you have the <features.h> header file. */
#undef HAVE_FEATURES_H

/* Define to 1 if you have the `fork' function. */
#undef HAVE_FORK

/* Define to 1 if you have the `geteuid' function. */
#undef HAVE_GETEUID

/* Define to 1 if you have the <glob.h> header file. */
#undef HAVE_GLOB_H

/* Define to 1 if you have the <grp.h> header file. */
#undef HAVE_GRP_H

/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H

/* Define to 1 if you have the <libunwind.h> header file. */
#undef HAVE_LIBUNWIND_H

/* Define if this is Linux that has SIGEV_THREAD_ID */
#undef HAVE_LINUX_SIGEV_THREAD_ID

/* Define to 1 if you have the <malloc.h> header file. */
#undef HAVE_MALLOC_H

/* Define to 1 if you have a working `mmap' system call. */
#undef HAVE_MMAP

/* Define to 1 if you have the <poll.h> header file. */
#undef HAVE_POLL_H

/* define if libc has program_invocation_name */
#undef HAVE_PROGRAM_INVOCATION_NAME

/* Define if you have POSIX threads libraries and header files. */
#undef HAVE_PTHREAD

/* Have PTHREAD_PRIO_INHERIT. */
#undef HAVE_PTHREAD_PRIO_INHERIT

/* Define to 1 if you have the <pwd.h> header file. */
#undef HAVE_PWD_H

/* Define to 1 if you have the `sbrk' function. */
#undef HAVE_SBRK

/* Define to 1 if you have the <sched.h> header file. */
#undef HAVE_SCHED_H

/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H

/* Define to 1 if you have the <stdio.h> header file. */
#undef HAVE_STDIO_H

/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H

/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H

/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H

/* Define to 1 if the system has the type `struct mallinfo'. */
#undef HAVE_STRUCT_MALLINFO

/* Define to 1 if the system has the type `struct mallinfo2'. */
#undef HAVE_STRUCT_MALLINFO2

/* Define to 1 if you have the <sys/cdefs.h> header file. */
#undef HAVE_SYS_CDEFS_H

/* Define to 1 if you have the <sys/resource.h> header file. */
#undef HAVE_SYS_RESOURCE_H

/* Define to 1 if you have the <sys/socket.h> header file. */
#undef HAVE_SYS_SOCKET_H

/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H

/* Define to 1 if you have the <sys/syscall.h> header file. */
#undef HAVE_SYS_SYSCALL_H

/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H

/* Define to 1 if you have the <sys/ucontext.h> header file. */
#undef HAVE_SYS_UCONTEXT_H

/* Define to 1 if you have the <sys/wait.h> header file. */
#undef HAVE_SYS_WAIT_H

/* Define to 1 if compiler supports __thread */
#undef HAVE_TLS

/* Define to 1 if you have the <ucontext.h> header file. */
#undef HAVE_UCONTEXT_H

/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H

/* Whether <unwind.h> contains _Unwind_Backtrace */
#undef HAVE_UNWIND_BACKTRACE

/* Define to 1 if you have the <unwind.h> header file. */
#undef HAVE_UNWIND_H

/* define if your compiler has __attribute__ */
#undef HAVE___ATTRIBUTE__

/* define if your compiler supports alignment of functions */
#undef HAVE___ATTRIBUTE__ALIGNED_FN

/* Define to 1 if compiler supports __environ */
#undef HAVE___ENVIRON

/* Define to 1 if you have the `__sbrk' function. */
#undef HAVE___SBRK

/* prefix where we look for installed files */
#undef INSTALL_PREFIX

/* Define to the sub-directory where libtool stores uninstalled libraries. */
#undef LT_OBJDIR

/* Name of package */
#undef PACKAGE

/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT

/* Define to the full name of this package. */
#undef PACKAGE_NAME

/* Define to the full name and version of this package. */
#undef PACKAGE_STRING

/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME

/* Define to the home page for this package. */
#undef PACKAGE_URL

/* Define to the version of this package. */
#undef PACKAGE_VERSION

/* Always the empty-string on non-windows systems. On windows, should be
   "__declspec(dllexport)". This way, when we compile the dll, we export our
   functions/classes. It's safe to define this here because config.h is only
   used internally, to compile the DLL, and every DLL source file #includes
   "config.h" before anything else. */
#undef PERFTOOLS_DLL_DECL

/* if libgcc stacktrace method should be default */
#undef PREFER_LIBGCC_UNWINDER

/* Mark the systems where we know it's bad if pthreads runs too
   early before main (before threads are initialized, presumably).  */
#if defined(__FreeBSD__) || defined(_AIX)
#define PTHREADS_CRASHES_IF_RUN_TOO_EARLY 1
#endif

/* Define to necessary symbol if this constant uses a non-standard name on
   your system. */
#undef PTHREAD_CREATE_JOINABLE

/* Define to 1 if all of the C90 standard headers exist (not just the ones
   required in a freestanding environment). This macro is provided for
   backward compatibility; new code need not use it. */
#undef STDC_HEADERS

/* Define 8 bytes of allocation alignment for tcmalloc */
#undef TCMALLOC_ALIGN_8BYTES

/* Define internal page size for tcmalloc as number of left bitshift */
#undef TCMALLOC_PAGE_SIZE_SHIFT

/* libunwind.h was found and is working */
#undef USE_LIBUNWIND

/* Version number of package */
#undef VERSION

/* C99 says: define this to get the PRI... macros from stdint.h */
#ifndef __STDC_FORMAT_MACROS
# define __STDC_FORMAT_MACROS 1
#endif


#ifdef __MINGW32__
#include "windows/mingw.h"
#endif

#endif  /* #ifndef GPERFTOOLS_CONFIG_H_ */
87
3party/gperftools/src/config_for_unittests.h
Normal file
@@ -0,0 +1,87 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2007, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// All Rights Reserved.
//
// Author: Craig Silverstein
//
// This file is needed for windows -- unittests are not part of the
// perftools dll, but still want to include config.h just like the
// dll does, so they can use internal tools and APIs for testing.
//
// The problem is that config.h declares PERFTOOLS_DLL_DECL to be
// for exporting symbols, but the unittest needs to *import* symbols
// (since it's not the dll).
//
// The solution is to have this file, which is just like config.h but
// sets PERFTOOLS_DLL_DECL to do a dllimport instead of a dllexport.
//
// The reason we need this extra PERFTOOLS_DLL_DECL_FOR_UNITTESTS
// variable is in case people want to set PERFTOOLS_DLL_DECL explicitly
// to something other than __declspec(dllexport).  In that case, they
// may want to use something other than __declspec(dllimport) for the
// unittest case.  For that, we allow folks to define both
// PERFTOOLS_DLL_DECL and PERFTOOLS_DLL_DECL_FOR_UNITTESTS explicitly.
//
// NOTE: This file is equivalent to config.h on non-windows systems,
// which never defined PERFTOOLS_DLL_DECL_FOR_UNITTESTS and always
// define PERFTOOLS_DLL_DECL to the empty string.

#include "config.h"

#undef PERFTOOLS_DLL_DECL
#ifdef PERFTOOLS_DLL_DECL_FOR_UNITTESTS
# define PERFTOOLS_DLL_DECL  PERFTOOLS_DLL_DECL_FOR_UNITTESTS
#else
# define PERFTOOLS_DLL_DECL  // if DLL_DECL_FOR_UNITTESTS isn't defined, use ""
#endif
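
// Editor's note (hypothetical invocation, not part of the upstream
// header): a Windows unittest target would typically be compiled with
// something like
//   cl /D"PERFTOOLS_DLL_DECL_FOR_UNITTESTS=__declspec(dllimport)" ...
// so the declarations the test sees resolve to imports, while the DLL
// itself is built with the default dllexport from config.h.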

#if defined(__clang__)
#if __has_warning("-Wuse-after-free")
#pragma clang diagnostic ignored "-Wuse-after-free"
#endif
#if __has_warning("-Wunused-result")
#pragma clang diagnostic ignored "-Wunused-result"
#endif
#if __has_warning("-Wunused-private-field")
#pragma clang diagnostic ignored "-Wunused-private-field"
#endif
#if __has_warning("-Wimplicit-exception-spec-mismatch")
#pragma clang diagnostic ignored "-Wimplicit-exception-spec-mismatch"
#endif
#if __has_warning("-Wmissing-exception-spec")
#pragma clang diagnostic ignored "-Wmissing-exception-spec"
#endif
#elif defined(__GNUC__)
#pragma GCC diagnostic ignored "-Wpragmas" // warning: unknown option after '#pragma GCC diagnostic' kind
#pragma GCC diagnostic ignored "-Wuse-after-free"
#pragma GCC diagnostic ignored "-Wunused-result"
#endif
1594
3party/gperftools/src/debugallocation.cc
Normal file
File diff suppressed because it is too large
169
3party/gperftools/src/emergency_malloc.cc
Normal file
@@ -0,0 +1,169 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2014, gperftools Contributors
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//

#include "config.h"

#include "emergency_malloc.h"

#include <errno.h>                      // for ENOMEM, errno
#include <string.h>                     // for memset

#include "base/basictypes.h"
#include "base/logging.h"
#include "base/low_level_alloc.h"
#include "base/spinlock.h"
#include "internal_logging.h"


namespace tcmalloc {

__attribute__ ((visibility("internal"))) char *emergency_arena_start;
__attribute__ ((visibility("internal"))) uintptr_t emergency_arena_start_shifted;

static CACHELINE_ALIGNED SpinLock emergency_malloc_lock(base::LINKER_INITIALIZED);
static char *emergency_arena_end;
static LowLevelAlloc::Arena *emergency_arena;

class EmergencyArenaPagesAllocator : public LowLevelAlloc::PagesAllocator {
  ~EmergencyArenaPagesAllocator() {}
  void *MapPages(int32 flags, size_t size) {
    char *new_end = emergency_arena_end + size;
    if (new_end > emergency_arena_start + kEmergencyArenaSize) {
      RAW_LOG(FATAL, "Unable to allocate %zu bytes in emergency zone.", size);
    }
    char *rv = emergency_arena_end;
    emergency_arena_end = new_end;
    return static_cast<void *>(rv);
  }
  void UnMapPages(int32 flags, void *addr, size_t size) {
    RAW_LOG(FATAL, "UnMapPages is not implemented for emergency arena");
  }
};

static union {
  char bytes[sizeof(EmergencyArenaPagesAllocator)];
  void *ptr;
} pages_allocator_place;

static void InitEmergencyMalloc(void) {
  const int32 flags = LowLevelAlloc::kAsyncSignalSafe;

  void *arena = LowLevelAlloc::GetDefaultPagesAllocator()->MapPages(flags, kEmergencyArenaSize * 2);

  uintptr_t arena_ptr = reinterpret_cast<uintptr_t>(arena);
  uintptr_t ptr = (arena_ptr + kEmergencyArenaSize - 1) & ~(kEmergencyArenaSize-1);

  emergency_arena_end = emergency_arena_start = reinterpret_cast<char *>(ptr);
  EmergencyArenaPagesAllocator *allocator = new (pages_allocator_place.bytes) EmergencyArenaPagesAllocator();
  emergency_arena = LowLevelAlloc::NewArenaWithCustomAlloc(0, LowLevelAlloc::DefaultArena(), allocator);

  emergency_arena_start_shifted = reinterpret_cast<uintptr_t>(emergency_arena_start) >> kEmergencyArenaShift;

  uintptr_t head_unmap_size = ptr - arena_ptr;
  CHECK_CONDITION(head_unmap_size < kEmergencyArenaSize);
  if (head_unmap_size != 0) {
    LowLevelAlloc::GetDefaultPagesAllocator()->UnMapPages(flags, arena, ptr - arena_ptr);
  }

  uintptr_t tail_unmap_size = kEmergencyArenaSize - head_unmap_size;
  void *tail_start = reinterpret_cast<void *>(arena_ptr + head_unmap_size + kEmergencyArenaSize);
  LowLevelAlloc::GetDefaultPagesAllocator()->UnMapPages(flags, tail_start, tail_unmap_size);
}
|
||||
|
||||
PERFTOOLS_DLL_DECL void *EmergencyMalloc(size_t size) {
|
||||
SpinLockHolder l(&emergency_malloc_lock);
|
||||
|
||||
if (emergency_arena_start == NULL) {
|
||||
InitEmergencyMalloc();
|
||||
CHECK_CONDITION(emergency_arena_start != NULL);
|
||||
}
|
||||
|
||||
void *rv = LowLevelAlloc::AllocWithArena(size, emergency_arena);
|
||||
if (rv == NULL) {
|
||||
errno = ENOMEM;
|
||||
}
|
||||
return rv;
|
||||
}
|
||||
|
||||
PERFTOOLS_DLL_DECL void EmergencyFree(void *p) {
|
||||
SpinLockHolder l(&emergency_malloc_lock);
|
||||
if (emergency_arena_start == NULL) {
|
||||
InitEmergencyMalloc();
|
||||
CHECK_CONDITION(emergency_arena_start != NULL);
|
||||
free(p);
|
||||
return;
|
||||
}
|
||||
CHECK_CONDITION(emergency_arena_start);
|
||||
LowLevelAlloc::Free(p);
|
||||
}
|
||||
|
||||
PERFTOOLS_DLL_DECL void *EmergencyRealloc(void *_old_ptr, size_t new_size) {
|
||||
if (_old_ptr == NULL) {
|
||||
return EmergencyMalloc(new_size);
|
||||
}
|
||||
if (new_size == 0) {
|
||||
EmergencyFree(_old_ptr);
|
||||
return NULL;
|
||||
}
|
||||
SpinLockHolder l(&emergency_malloc_lock);
|
||||
CHECK_CONDITION(emergency_arena_start);
|
||||
|
||||
char *old_ptr = static_cast<char *>(_old_ptr);
|
||||
CHECK_CONDITION(old_ptr <= emergency_arena_end);
|
||||
CHECK_CONDITION(emergency_arena_start <= old_ptr);
|
||||
|
||||
// NOTE: we don't know previous size of old_ptr chunk. So instead
|
||||
// of trying to figure out right size of copied memory, we just
|
||||
// copy largest possible size. We don't care about being slow.
|
||||
size_t old_ptr_size = emergency_arena_end - old_ptr;
|
||||
size_t copy_size = (new_size < old_ptr_size) ? new_size : old_ptr_size;
|
||||
|
||||
void *new_ptr = LowLevelAlloc::AllocWithArena(new_size, emergency_arena);
|
||||
if (new_ptr == NULL) {
|
||||
errno = ENOMEM;
|
||||
return NULL;
|
||||
}
|
||||
memcpy(new_ptr, old_ptr, copy_size);
|
||||
|
||||
LowLevelAlloc::Free(old_ptr);
|
||||
return new_ptr;
|
||||
}
|
||||
|
||||
PERFTOOLS_DLL_DECL void *EmergencyCalloc(size_t n, size_t elem_size) {
|
||||
// Overflow check
|
||||
const size_t size = n * elem_size;
|
||||
if (elem_size != 0 && size / elem_size != n) return NULL;
|
||||
void *rv = EmergencyMalloc(size);
|
||||
if (rv != NULL) {
|
||||
memset(rv, 0, size);
|
||||
}
|
||||
return rv;
|
||||
}
|
||||
};
|
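The emergency allocator above exists so that code running while tcmalloc is capturing a stack trace (the heap profiler's bookkeeping, backtrace libraries, and so on) can still allocate without re-entering tcmalloc. A minimal sketch of how an allocator front end might route around itself with this API; my_malloc, my_free, and the in_stacktrace_scope flag are made up for illustration, since the real dispatch is wired into tcmalloc's own malloc/free paths:

#include <cstddef>
#include <cstdlib>
#include "emergency_malloc.h"

// Hypothetical front end, for illustration only.
static void* my_malloc(std::size_t size, bool in_stacktrace_scope) {
  if (in_stacktrace_scope) {
    // Re-entrant allocation while a stack trace is being captured:
    // serve it from the bump-pointer emergency arena.
    return tcmalloc::EmergencyMalloc(size);
  }
  return std::malloc(size);  // stand-in for the normal allocator path
}

static void my_free(void* p) {
  // Emergency pointers are recognized by address range alone, so the
  // free path needs no extra bookkeeping.
  if (tcmalloc::IsEmergencyPtr(p)) {
    tcmalloc::EmergencyFree(p);
    return;
  }
  std::free(p);
}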
60
3party/gperftools/src/emergency_malloc.h
Normal file
@@ -0,0 +1,60 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2014, gperftools Contributors
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header of emergency_malloc.cc above.)

#ifndef EMERGENCY_MALLOC_H
#define EMERGENCY_MALLOC_H
#include "config.h"

#include <stddef.h>

#include "base/basictypes.h"
#include "common.h"

namespace tcmalloc {

static const uintptr_t kEmergencyArenaShift = 20 + 4;  // 16 megs
static const uintptr_t kEmergencyArenaSize = 1 << kEmergencyArenaShift;

extern __attribute__ ((visibility("internal"))) char *emergency_arena_start;
extern __attribute__ ((visibility("internal"))) uintptr_t emergency_arena_start_shifted;

PERFTOOLS_DLL_DECL void *EmergencyMalloc(size_t size);
PERFTOOLS_DLL_DECL void EmergencyFree(void *p);
PERFTOOLS_DLL_DECL void *EmergencyCalloc(size_t n, size_t elem_size);
PERFTOOLS_DLL_DECL void *EmergencyRealloc(void *old_ptr, size_t new_size);

static inline bool IsEmergencyPtr(const void *_ptr) {
  uintptr_t ptr = reinterpret_cast<uintptr_t>(_ptr);
  return PREDICT_FALSE((ptr >> kEmergencyArenaShift) == emergency_arena_start_shifted)
    && emergency_arena_start_shifted;
}

} // namespace tcmalloc

#endif
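These constants are what make IsEmergencyPtr() cheap. InitEmergencyMalloc() maps twice kEmergencyArenaSize and unmaps the head and tail, leaving an arena that is both 16 MiB large and 16 MiB aligned; every address inside it then shares one value of ptr >> kEmergencyArenaShift, so membership is a shift plus a compare. A self-contained sketch of that arithmetic (the base address is made up):

#include <cassert>
#include <cstdint>

int main() {
  const uintptr_t kShift = 20 + 4;  // 16 MiB, as in emergency_malloc.h
  const uintptr_t kSize = uintptr_t{1} << kShift;

  // Pretend the arena landed here after alignment (made-up 64-bit address).
  uintptr_t arena_start = uintptr_t{0x7f0000000000} & ~(kSize - 1);
  uintptr_t start_shifted = arena_start >> kShift;

  // Any pointer inside [arena_start, arena_start + kSize) shares the
  // same top bits, so one shift + compare decides membership.
  uintptr_t inside = arena_start + 12345;
  uintptr_t outside = arena_start + kSize + 1;
  assert((inside >> kShift) == start_shifted);
  assert((outside >> kShift) != start_shifted);
}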
48
3party/gperftools/src/emergency_malloc_for_stacktrace.cc
Normal file
@@ -0,0 +1,48 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2014, gperftools Contributors
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header of emergency_malloc.cc above.)

#include "emergency_malloc.h"
#include "thread_cache.h"

namespace tcmalloc {
bool EnterStacktraceScope(void);
void LeaveStacktraceScope(void);
}

bool tcmalloc::EnterStacktraceScope(void) {
  if (ThreadCache::IsUseEmergencyMalloc()) {
    return false;
  }
  ThreadCache::SetUseEmergencyMalloc();
  return true;
}

void tcmalloc::LeaveStacktraceScope(void) {
  ThreadCache::ResetUseEmergencyMalloc();
}
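A thread is inside a "stacktrace scope" between a successful EnterStacktraceScope() and the matching LeaveStacktraceScope(); a false return means a capture is already in flight on this thread and the caller must not recurse. A hedged sketch of the kind of RAII guard a caller could build on top; StacktraceScope and CapturedDepth are illustrative names, not the gperftools API:

namespace tcmalloc {
bool EnterStacktraceScope(void);
void LeaveStacktraceScope(void);
}

// Sketch: pair enter/leave automatically and expose whether capture
// is allowed on this thread right now.
class StacktraceScope {
  bool entered_;
 public:
  StacktraceScope() : entered_(tcmalloc::EnterStacktraceScope()) {}
  bool allowed() const { return entered_; }
  ~StacktraceScope() { if (entered_) tcmalloc::LeaveStacktraceScope(); }
};

int CapturedDepth(void** result, int max_depth) {
  StacktraceScope scope;
  if (!scope.allowed()) return 0;  // refuse nested capture
  // ... call the platform stack-walking routine here, while any
  // allocations it makes are served by the emergency arena ...
  return 0;
}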
39
3party/gperftools/src/fake_stacktrace_scope.cc
Normal file
@@ -0,0 +1,39 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2014, gperftools Contributors
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header of emergency_malloc.cc above.)

#include "base/basictypes.h"

namespace tcmalloc {
ATTRIBUTE_WEAK bool EnterStacktraceScope(void) {
  return true;
}
ATTRIBUTE_WEAK void LeaveStacktraceScope(void) {
}
}  // namespace tcmalloc
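This file supplies weak no-op definitions: builds that link the emergency-malloc flavor get the strong definitions from emergency_malloc_for_stacktrace.cc and the linker drops these fallbacks, while builds without it still link with scope entry always permitted. A tiny two-file sketch of the same weak/strong pattern; feature_enabled is a made-up symbol:

// file: lib_default.cc -- fallback used when nothing else defines the symbol.
extern "C" __attribute__((weak)) int feature_enabled() {
  return 0;  // no-op default, like fake_stacktrace_scope.cc
}

// file: lib_real.cc -- a strong definition; when this object file is
// linked in, the linker picks it over the weak fallback, no #ifdefs needed.
extern "C" int feature_enabled() {
  return 1;
}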
63
3party/gperftools/src/getenv_safe.h
Normal file
@@ -0,0 +1,63 @@
/* -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
 * Copyright (c) 2014, gperftools Contributors
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header of emergency_malloc.cc above.)
 */

#ifndef GETENV_SAFE_H
#define GETENV_SAFE_H

#ifdef __cplusplus
extern "C" {
#endif

/*
 * This getenv function is safe to call before the C runtime is initialized.
 * On Windows it uses GetEnvironmentVariable(); on unix it reads
 * /proc/self/environ instead of calling getenv(). It's intended to be used
 * in routines that run before main(), when the state required for getenv()
 * may not be set up yet. In particular, errno isn't set up until relatively
 * late (after the pthreads library has had a chance to make it threadsafe),
 * and getenv() doesn't work until then.
 * On some platforms, this call will use the same static buffer for
 * repeated GetenvBeforeMain() calls. Callers should not expect pointers from
 * this routine to be long lived.
 * Note that on unix, /proc only has the environment at the time the
 * application was started, so this routine ignores setenv() calls/etc. Also
 * note it only reads the first 16K of the environment.
 *
 * NOTE: this is the version of GetenvBeforeMain that's usable from C.
 * The implementation is in sysinfo.cc.
 */
const char* TCMallocGetenvSafe(const char* name);

#ifdef __cplusplus
}
#endif

#endif
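A sketch of what an implementation along the lines documented above could look like on Linux: raw open/read/close on /proc/self/environ, a static 16K buffer, no getenv() and no heap. Illustrative only; getenv_before_main is a made-up name, and the real implementation in sysinfo.cc also covers Windows:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static const char* getenv_before_main(const char* name) {
  static char envbuf[16 << 10];  // first 16K only, matching the docs above
  int fd = open("/proc/self/environ", O_RDONLY);
  if (fd < 0) return nullptr;
  ssize_t n = read(fd, envbuf, sizeof(envbuf) - 1);
  close(fd);
  if (n <= 0) return nullptr;
  envbuf[n] = '\0';  // guarantee termination even if we truncated
  const size_t namelen = strlen(name);
  // Entries are NUL-separated "NAME=value" strings.
  for (const char* p = envbuf; p < envbuf + n; p += strlen(p) + 1) {
    if (strncmp(p, name, namelen) == 0 && p[namelen] == '=') {
      return p + namelen + 1;
    }
  }
  return nullptr;
}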
396
3party/gperftools/src/getpc-inl.h
Normal file
@@ -0,0 +1,396 @@
// -*- eval: (read-only-mode) -*-
// WARNING: this file is autogenerated.
// Change and run gen_getpc.rb >getpc-inl.h if you want to
// update. (And submit both files)

// What does this file do? We have several possible ways of fetching
// the PC (program counter) from a signal's ucontext. We explicitly
// choose to avoid ifdef-ing specific OSes (or even specific
// versions), to increase our chances that stuff simply works.
// Comments below refer to OS/architecture combos for documentation
// purposes, but what works is what is used.

// How does it do it? It uses lightweight C++ template magic where
// "wrong" ucontext_t{nullptr}-><field access> combos are
// automagically filtered out (via SFINAE).

// Each known case is represented as a template class. For SFINAE
// reasons we masquerade the ucontext_t type behind the U template
// parameter. And we also parameterize by parent class. This allows us
// to arrange all template instantiations in a single ordered chain of
// inheritance. See RawUCToPC below.

// Note, we do anticipate that most times exactly one of those access
// methods works. But we're prepared that there could be several. In
// particular, according to previous comments Solaris/x86 also has
// REG_RIP defined, but it is somehow wrong. So we're careful about
// preserving a specific order. We couldn't handle this "multiplicity"
// aspect in pure C++, so we use code generation.

namespace internal {

struct Empty {
#ifdef DEFINE_TRIVIAL_GET
#define HAVE_TRIVIAL_GET
  // special thing for stacktrace_generic_fp-inl which wants no-op case
  static void* Get(...) {
    return nullptr;
  }
#endif
};

// NetBSD has really nice portable macros
template <class U, class P, class = void>
struct get_c47a30af : public P {
};
#ifdef _UC_MACHINE_PC
template <class U, class P>
struct get_c47a30af<U, P, void_t<decltype(_UC_MACHINE_PC(((U*){})))>> : public P {
  static void* Get(const U* uc) {
    // NetBSD has really nice portable macros
    return (void*)(_UC_MACHINE_PC(uc));
  }
};
#endif  // _UC_MACHINE_PC

// Solaris/x86
template <class U, class P, class = void>
struct get_c4719e8d : public P {
};
#ifdef REG_PC
template <class U, class P>
struct get_c4719e8d<U, P, void_t<decltype(((U*){})->uc_mcontext.gregs[REG_PC])>> : public P {
  static void* Get(const U* uc) {
    // Solaris/x86
    return (void*)(uc->uc_mcontext.gregs[REG_PC]);
  }
};
#endif  // REG_PC

// Linux/i386
template <class U, class P, class = void>
struct get_278cba85 : public P {
};
#ifdef REG_EIP
template <class U, class P>
struct get_278cba85<U, P, void_t<decltype(((U*){})->uc_mcontext.gregs[REG_EIP])>> : public P {
  static void* Get(const U* uc) {
    // Linux/i386
    return (void*)(uc->uc_mcontext.gregs[REG_EIP]);
  }
};
#endif  // REG_EIP

// Linux/amd64
template <class U, class P, class = void>
struct get_b49f2593 : public P {
};
#ifdef REG_RIP
template <class U, class P>
struct get_b49f2593<U, P, void_t<decltype(((U*){})->uc_mcontext.gregs[REG_RIP])>> : public P {
  static void* Get(const U* uc) {
    // Linux/amd64
    return (void*)(uc->uc_mcontext.gregs[REG_RIP]);
  }
};
#endif  // REG_RIP

// Linux/ia64
template <class U, class P, class = void>
struct get_8fda99d3 : public P {
};
template <class U, class P>
struct get_8fda99d3<U, P, void_t<decltype(((U*){})->uc_mcontext.sc_ip)>> : public P {
  static void* Get(const U* uc) {
    // Linux/ia64
    return (void*)(uc->uc_mcontext.sc_ip);
  }
};

// Linux/loongarch64
template <class U, class P, class = void>
struct get_4e9b682d : public P {
};
template <class U, class P>
struct get_4e9b682d<U, P, void_t<decltype(((U*){})->uc_mcontext.__pc)>> : public P {
  static void* Get(const U* uc) {
    // Linux/loongarch64
    return (void*)(uc->uc_mcontext.__pc);
  }
};

// Linux/{mips,aarch64}
template <class U, class P, class = void>
struct get_b94b7246 : public P {
};
template <class U, class P>
struct get_b94b7246<U, P, void_t<decltype(((U*){})->uc_mcontext.pc)>> : public P {
  static void* Get(const U* uc) {
    // Linux/{mips,aarch64}
    return (void*)(uc->uc_mcontext.pc);
  }
};

// Linux/ppc
template <class U, class P, class = void>
struct get_d0eeceae : public P {
};
#ifdef PT_NIP
template <class U, class P>
struct get_d0eeceae<U, P, void_t<decltype(((U*){})->uc_mcontext.uc_regs->gregs[PT_NIP])>> : public P {
  static void* Get(const U* uc) {
    // Linux/ppc
    return (void*)(uc->uc_mcontext.uc_regs->gregs[PT_NIP]);
  }
};
#endif  // PT_NIP

// Linux/ppc
template <class U, class P, class = void>
struct get_a81f6801 : public P {
};
#ifdef PT_NIP
template <class U, class P>
struct get_a81f6801<U, P, void_t<decltype(((U*){})->uc_mcontext.gp_regs[PT_NIP])>> : public P {
  static void* Get(const U* uc) {
    // Linux/ppc
    return (void*)(uc->uc_mcontext.gp_regs[PT_NIP]);
  }
};
#endif  // PT_NIP

// Linux/riscv
template <class U, class P, class = void>
struct get_24e794ef : public P {
};
#ifdef REG_PC
template <class U, class P>
struct get_24e794ef<U, P, void_t<decltype(((U*){})->uc_mcontext.__gregs[REG_PC])>> : public P {
  static void* Get(const U* uc) {
    // Linux/riscv
    return (void*)(uc->uc_mcontext.__gregs[REG_PC]);
  }
};
#endif  // REG_PC

// Linux/s390
template <class U, class P, class = void>
struct get_d9a75ed3 : public P {
};
template <class U, class P>
struct get_d9a75ed3<U, P, void_t<decltype(((U*){})->uc_mcontext.psw.addr)>> : public P {
  static void* Get(const U* uc) {
    // Linux/s390
    return (void*)(uc->uc_mcontext.psw.addr);
  }
};

// Linux/arm (32-bit; legacy)
template <class U, class P, class = void>
struct get_07114491 : public P {
};
template <class U, class P>
struct get_07114491<U, P, void_t<decltype(((U*){})->uc_mcontext.arm_pc)>> : public P {
  static void* Get(const U* uc) {
    // Linux/arm (32-bit; legacy)
    return (void*)(uc->uc_mcontext.arm_pc);
  }
};

// FreeBSD/i386
template <class U, class P, class = void>
struct get_9be162e6 : public P {
};
template <class U, class P>
struct get_9be162e6<U, P, void_t<decltype(((U*){})->uc_mcontext.mc_eip)>> : public P {
  static void* Get(const U* uc) {
    // FreeBSD/i386
    return (void*)(uc->uc_mcontext.mc_eip);
  }
};

// FreeBSD/ppc
template <class U, class P, class = void>
struct get_2812b129 : public P {
};
template <class U, class P>
struct get_2812b129<U, P, void_t<decltype(((U*){})->uc_mcontext.mc_srr0)>> : public P {
  static void* Get(const U* uc) {
    // FreeBSD/ppc
    return (void*)(uc->uc_mcontext.mc_srr0);
  }
};

// FreeBSD/x86_64
template <class U, class P, class = void>
struct get_5bb1da03 : public P {
};
template <class U, class P>
struct get_5bb1da03<U, P, void_t<decltype(((U*){})->uc_mcontext.mc_rip)>> : public P {
  static void* Get(const U* uc) {
    // FreeBSD/x86_64
    return (void*)(uc->uc_mcontext.mc_rip);
  }
};

// OS X (i386, <=10.4)
template <class U, class P, class = void>
struct get_880f83fe : public P {
};
template <class U, class P>
struct get_880f83fe<U, P, void_t<decltype(((U*){})->uc_mcontext->ss.eip)>> : public P {
  static void* Get(const U* uc) {
    // OS X (i386, <=10.4)
    return (void*)(uc->uc_mcontext->ss.eip);
  }
};

// OS X (i386, >=10.5)
template <class U, class P, class = void>
struct get_92fcd89a : public P {
};
template <class U, class P>
struct get_92fcd89a<U, P, void_t<decltype(((U*){})->uc_mcontext->__ss.__eip)>> : public P {
  static void* Get(const U* uc) {
    // OS X (i386, >=10.5)
    return (void*)(uc->uc_mcontext->__ss.__eip);
  }
};

// OS X (x86_64)
template <class U, class P, class = void>
struct get_773e27c8 : public P {
};
template <class U, class P>
struct get_773e27c8<U, P, void_t<decltype(((U*){})->uc_mcontext->ss.rip)>> : public P {
  static void* Get(const U* uc) {
    // OS X (x86_64)
    return (void*)(uc->uc_mcontext->ss.rip);
  }
};

// OS X (>=10.5 [untested])
template <class U, class P, class = void>
struct get_6627078a : public P {
};
template <class U, class P>
struct get_6627078a<U, P, void_t<decltype(((U*){})->uc_mcontext->__ss.__rip)>> : public P {
  static void* Get(const U* uc) {
    // OS X (>=10.5 [untested])
    return (void*)(uc->uc_mcontext->__ss.__rip);
  }
};

// OS X (ppc, ppc64 [untested])
template <class U, class P, class = void>
struct get_da992aca : public P {
};
template <class U, class P>
struct get_da992aca<U, P, void_t<decltype(((U*){})->uc_mcontext->ss.srr0)>> : public P {
  static void* Get(const U* uc) {
    // OS X (ppc, ppc64 [untested])
    return (void*)(uc->uc_mcontext->ss.srr0);
  }
};

// OS X (>=10.5 [untested])
template <class U, class P, class = void>
struct get_cce47a40 : public P {
};
template <class U, class P>
struct get_cce47a40<U, P, void_t<decltype(((U*){})->uc_mcontext->__ss.__srr0)>> : public P {
  static void* Get(const U* uc) {
    // OS X (>=10.5 [untested])
    return (void*)(uc->uc_mcontext->__ss.__srr0);
  }
};

// OS X (arm64)
template <class U, class P, class = void>
struct get_0a082e42 : public P {
};
template <class U, class P>
struct get_0a082e42<U, P, void_t<decltype(((U*){})->uc_mcontext->__ss.__pc)>> : public P {
  static void* Get(const U* uc) {
    // OS X (arm64)
    return (void*)(uc->uc_mcontext->__ss.__pc);
  }
};

// OpenBSD/i386
template <class U, class P, class = void>
struct get_3baa113a : public P {
};
template <class U, class P>
struct get_3baa113a<U, P, void_t<decltype(((U*){})->sc_eip)>> : public P {
  static void* Get(const U* uc) {
    // OpenBSD/i386
    return (void*)(uc->sc_eip);
  }
};

// OpenBSD/x86_64
template <class U, class P, class = void>
struct get_79f33851 : public P {
};
template <class U, class P>
struct get_79f33851<U, P, void_t<decltype(((U*){})->sc_rip)>> : public P {
  static void* Get(const U* uc) {
    // OpenBSD/x86_64
    return (void*)(uc->sc_rip);
  }
};

inline void* RawUCToPC(const ucontext_t* uc) {
  // OpenBSD/x86_64
  using g_79f33851 = get_79f33851<ucontext_t, Empty>;
  // OpenBSD/i386
  using g_3baa113a = get_3baa113a<ucontext_t, g_79f33851>;
  // OS X (arm64)
  using g_0a082e42 = get_0a082e42<ucontext_t, g_3baa113a>;
  // OS X (>=10.5 [untested])
  using g_cce47a40 = get_cce47a40<ucontext_t, g_0a082e42>;
  // OS X (ppc, ppc64 [untested])
  using g_da992aca = get_da992aca<ucontext_t, g_cce47a40>;
  // OS X (>=10.5 [untested])
  using g_6627078a = get_6627078a<ucontext_t, g_da992aca>;
  // OS X (x86_64)
  using g_773e27c8 = get_773e27c8<ucontext_t, g_6627078a>;
  // OS X (i386, >=10.5)
  using g_92fcd89a = get_92fcd89a<ucontext_t, g_773e27c8>;
  // OS X (i386, <=10.4)
  using g_880f83fe = get_880f83fe<ucontext_t, g_92fcd89a>;
  // FreeBSD/x86_64
  using g_5bb1da03 = get_5bb1da03<ucontext_t, g_880f83fe>;
  // FreeBSD/ppc
  using g_2812b129 = get_2812b129<ucontext_t, g_5bb1da03>;
  // FreeBSD/i386
  using g_9be162e6 = get_9be162e6<ucontext_t, g_2812b129>;
  // Linux/arm (32-bit; legacy)
  using g_07114491 = get_07114491<ucontext_t, g_9be162e6>;
  // Linux/s390
  using g_d9a75ed3 = get_d9a75ed3<ucontext_t, g_07114491>;
  // Linux/riscv (with #ifdef REG_PC)
  using g_24e794ef = get_24e794ef<ucontext_t, g_d9a75ed3>;
  // Linux/ppc (with #ifdef PT_NIP)
  using g_a81f6801 = get_a81f6801<ucontext_t, g_24e794ef>;
  // Linux/ppc (with #ifdef PT_NIP)
  using g_d0eeceae = get_d0eeceae<ucontext_t, g_a81f6801>;
  // Linux/{mips,aarch64}
  using g_b94b7246 = get_b94b7246<ucontext_t, g_d0eeceae>;
  // Linux/loongarch64
  using g_4e9b682d = get_4e9b682d<ucontext_t, g_b94b7246>;
  // Linux/ia64
  using g_8fda99d3 = get_8fda99d3<ucontext_t, g_4e9b682d>;
  // Linux/amd64 (with #ifdef REG_RIP)
  using g_b49f2593 = get_b49f2593<ucontext_t, g_8fda99d3>;
  // Linux/i386 (with #ifdef REG_EIP)
  using g_278cba85 = get_278cba85<ucontext_t, g_b49f2593>;
  // Solaris/x86 (with #ifdef REG_PC)
  using g_c4719e8d = get_c4719e8d<ucontext_t, g_278cba85>;
  // NetBSD has really nice portable macros (with #ifdef _UC_MACHINE_PC)
  using g_c47a30af = get_c47a30af<ucontext_t, g_c4719e8d>;
  return g_c47a30af::Get(uc);
}

} // namespace internal
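The generated chain is easier to digest in miniature. The toy below (standard C++14, using std::declval in place of the GNU compound-literal trick the generated file uses) shows one probe class: the partial specialization only exists when the member it tests for exists, and otherwise name lookup falls through to the parent's Get:

#include <cstdio>
#include <utility>

template <typename... Ts> struct make_void { typedef void type; };
template <typename... Ts> using void_t = typename make_void<Ts...>::type;

struct Fallback {
  static const char* Get(const void*) { return "no known field"; }
};

// Enabled only when U has a member `pc` (standing in for uc_mcontext.pc).
template <class U, class P, class = void>
struct probe_pc : public P {};
template <class U, class P>
struct probe_pc<U, P, void_t<decltype(std::declval<U&>().pc)>> : public P {
  static const char* Get(const U*) { return "via ->pc"; }
};

struct ContextA { long pc; };
struct ContextB { long other; };

int main() {
  ContextA a{}; ContextB b{};
  // ContextA selects the specialization; ContextB's lookup falls
  // through the inheritance chain to Fallback::Get.
  std::printf("%s\n", probe_pc<ContextA, Fallback>::Get(&a));
  std::printf("%s\n", probe_pc<ContextB, Fallback>::Get(&b));
}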
99
3party/gperftools/src/getpc.h
Normal file
@@ -0,0 +1,99 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header above.)

// ---
// Author: Craig Silverstein
//
// This is an internal header file used by profiler.cc. It defines
// the single (inline) function GetPC. GetPC is used in a signal
// handler to figure out the instruction that was being executed when
// the signal-handler was triggered.
//
// To get this, we use the ucontext_t argument to the signal-handler
// callback, which holds the full context of what was going on when
// the signal triggered. How to get from a ucontext_t to a Program
// Counter is OS-dependent.

#ifndef BASE_GETPC_H_
#define BASE_GETPC_H_

// Note: we include this from one of the configure-script C++ tests as
// part of verifying that we're able to build the CPU profiler. I.e. we
// cannot include config.h as we normally do, since it isn't produced
// yet, but those HAVE_XYZ defines are available, so including
// ucontext etc. stuff works. Its usage from profiler.cc (and
// stacktrace_generic_fp-inl.h) comes after config.h is included.

// On many linux systems, we may need _GNU_SOURCE to get access to
// the defined constants that define the register we want to see (eg
// REG_EIP). Note this #define must come first!
#define _GNU_SOURCE 1

#ifdef HAVE_ASM_PTRACE_H
#include <asm/ptrace.h>
#endif
#if HAVE_SYS_UCONTEXT_H
#include <sys/ucontext.h>
#elif HAVE_UCONTEXT_H
#include <ucontext.h>          // for ucontext_t (and also mcontext_t)
#elif defined(HAVE_CYGWIN_SIGNAL_H)
#include <cygwin/signal.h>
typedef ucontext ucontext_t;
#endif

namespace tcmalloc {
namespace getpc {

// std::void_t is C++17. So we steal this from
// https://en.cppreference.com/w/cpp/types/void_t
template<typename... Ts>
struct make_void { typedef void type; };
template <typename... Ts>
using void_t = typename make_void<Ts...>::type;

#include "getpc-inl.h"

}  // namespace getpc
}  // namespace tcmalloc

// If this doesn't compile, you need to figure out the right value for
// your system, and add it to the list above.
inline void* GetPC(const ucontext_t& signal_ucontext) {
  void* retval = tcmalloc::getpc::internal::RawUCToPC(&signal_ucontext);

#if defined(__s390__) && !defined(__s390x__)
  // Mask out the AMODE31 bit from the PC recorded in the context.
  retval = (void*)((unsigned long)retval & 0x7fffffffUL);
#endif

  return retval;
}

#endif  // BASE_GETPC_H_
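Putting the pieces together: a profiler's SIGPROF handler receives the third void* argument of a SA_SIGINFO handler, casts it to ucontext_t, and asks GetPC for the interrupted instruction. A sketch under the assumption that this internal getpc.h is reachable from the calling code; fprintf stands in for real sample recording and is not async-signal-safe:

#include <signal.h>
#include <stdio.h>
#include <string.h>

#include "getpc.h"  // internal header; assumed includable for this sketch

static void prof_handler(int sig, siginfo_t* info, void* signal_ucontext) {
  ucontext_t* uc = static_cast<ucontext_t*>(signal_ucontext);
  void* pc = GetPC(*uc);  // the OS/arch dispatch happens inside
  // A real profiler would push `pc` into a sample buffer here.
  fprintf(stderr, "sample at %p\n", pc);
}

static void install_handler() {
  struct sigaction sa;
  memset(&sa, 0, sizeof(sa));
  sa.sa_sigaction = prof_handler;
  sa.sa_flags = SA_RESTART | SA_SIGINFO;  // SA_SIGINFO gives us the ucontext
  sigemptyset(&sa.sa_mask);
  sigaction(SIGPROF, &sa, nullptr);
}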
36
3party/gperftools/src/google/heap-checker.h
Normal file
@@ -0,0 +1,36 @@
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header above.)

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/heap-checker.h is deprecated. Use gperftools/heap-checker.h instead"
#endif
#include <gperftools/heap-checker.h>
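This and the following google/*.h headers are kept only for source compatibility: each emits a #warning on GCC-compatible compilers and then forwards to the new gperftools/ location. Code that cannot migrate its include paths yet can silence the warning with the macro the guard already checks, for example:

// Preferred: migrate to the new include directory.
#include <gperftools/heap-checker.h>

// Or, if legacy paths must stay for now, define the suppression macro
// (e.g. -DGPERFTOOLS_SUPPRESS_LEGACY_WARNING on the compile line)
// before including the old header:
#define GPERFTOOLS_SUPPRESS_LEGACY_WARNING
#include <google/heap-checker.h>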
37
3party/gperftools/src/google/heap-profiler.h
Normal file
@@ -0,0 +1,37 @@
/* Copyright (c) 2005, Google Inc.
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header above.)
 */

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/heap-profiler.h is deprecated. Use gperftools/heap-profiler.h instead"
#endif
#include <gperftools/heap-profiler.h>
36
3party/gperftools/src/google/malloc_extension.h
Normal file
@@ -0,0 +1,36 @@
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header above.)

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/malloc_extension.h is deprecated. Use gperftools/malloc_extension.h instead"
#endif
#include <gperftools/malloc_extension.h>
37
3party/gperftools/src/google/malloc_extension_c.h
Normal file
@@ -0,0 +1,37 @@
/* Copyright (c) 2008, Google Inc.
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header above.)
 */

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/malloc_extension_c.h is deprecated. Use gperftools/malloc_extension_c.h instead"
#endif
#include <gperftools/malloc_extension_c.h>
36
3party/gperftools/src/google/malloc_hook.h
Normal file
@@ -0,0 +1,36 @@
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header above.)

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/malloc_hook.h is deprecated. Use gperftools/malloc_hook.h instead"
#endif
#include <gperftools/malloc_hook.h>
37
3party/gperftools/src/google/malloc_hook_c.h
Normal file
@@ -0,0 +1,37 @@
/* Copyright (c) 2008, Google Inc.
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header above.)
 */

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/malloc_hook_c.h is deprecated. Use gperftools/malloc_hook_c.h instead"
#endif
#include <gperftools/malloc_hook_c.h>
37
3party/gperftools/src/google/profiler.h
Normal file
@@ -0,0 +1,37 @@
/* Copyright (c) 2005, Google Inc.
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header above.)
 */

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/profiler.h is deprecated. Use gperftools/profiler.h instead"
#endif
#include <gperftools/profiler.h>
36
3party/gperftools/src/google/stacktrace.h
Normal file
@@ -0,0 +1,36 @@
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// (BSD 3-clause license text, identical to the header above.)

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/stacktrace.h is deprecated. Use gperftools/stacktrace.h instead"
#endif
#include <gperftools/stacktrace.h>
37
3party/gperftools/src/google/tcmalloc.h
Normal file
@@ -0,0 +1,37 @@
/* Copyright (c) 2003, Google Inc.
 * All rights reserved.
 *
 * (BSD 3-clause license text, identical to the header above.)
 */

/* The code has moved to gperftools/. Use that include-directory for
 * new code.
 */
#if defined(__GNUC__) && !defined(GPERFTOOLS_SUPPRESS_LEGACY_WARNING)
#warning "google/tcmalloc.h is deprecated. Use gperftools/tcmalloc.h instead"
#endif
#include <gperftools/tcmalloc.h>
422
3party/gperftools/src/gperftools/heap-checker.h
Normal file
@ -0,0 +1,422 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
// Copyright (c) 2005, Google Inc.
|
||||
// All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// ---
|
||||
// Author: Maxim Lifantsev (with design ideas by Sanjay Ghemawat)
|
||||
//
|
||||
//
|
||||
// Module for detecing heap (memory) leaks.
|
||||
//
|
||||
// For full(er) information, see docs/heap_checker.html
|
||||
//
|
||||
// This module can be linked into programs with
|
||||
// no slowdown caused by this unless you activate the leak-checker:
|
||||
//
|
||||
// 1. Set the environment variable HEAPCHEK to _type_ before
|
||||
// running the program.
|
||||
//
|
||||
// _type_ is usually "normal" but can also be "minimal", "strict", or
|
||||
// "draconian". (See the html file for other options, like 'local'.)
|
||||
//
|
||||
// After that, just run your binary. If the heap-checker detects
|
||||
// a memory leak at program-exit, it will print instructions on how
|
||||
// to track down the leak.

#ifndef BASE_HEAP_CHECKER_H_
#define BASE_HEAP_CHECKER_H_

#include <sys/types.h>  // for size_t
// I can't #include config.h in this public API file, but I should
// really use configure (and make malloc_extension.h a .in file) to
// figure out if the system has stdint.h or not.  But I'm lazy, so
// for now I'm assuming it's a problem only with MSVC.
#ifndef _MSC_VER
#include <stdint.h>     // for uintptr_t
#endif
#include <stdarg.h>     // for va_list
#include <vector>

// Annoying stuff for windows -- makes sure clients can import these functions
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif


// The class is thread-safe with respect to all the provided static methods,
// as well as HeapLeakChecker objects: they can be accessed by multiple threads.
class PERFTOOLS_DLL_DECL HeapLeakChecker {
 public:

  // ----------------------------------------------------------------------- //
  // Static functions for working with (whole-program) leak checking.

  // Returns true if heap leak checking is currently active in some mode,
  // e.g. if leak checking was started (and is still active now)
  // due to HEAPCHECK=... defined in the environment.
  // The return value reflects whether HeapLeakChecker objects manually
  // constructed right now will be doing leak checking or nothing.
  // Note that we can go from active to inactive state during InitGoogle()
  // if FLAGS_heap_check gets set to "" by some code before/during InitGoogle().
  static bool IsActive();

  // Return pointer to the whole-program checker if it has been created
  // and NULL otherwise.
  // Once GlobalChecker() returns non-NULL that object will not disappear and
  // will be returned by all later GlobalChecker calls.
  // This is mainly to access BytesLeaked() and ObjectsLeaked() (see below)
  // for the whole-program checker after one calls NoGlobalLeaks()
  // or similar and gets false.
  static HeapLeakChecker* GlobalChecker();

  // Do whole-program leak check now (if it was activated for this binary);
  // return false only if it was activated and has failed.
  // The mode of the check is controlled by the command-line flags.
  // This method can be called repeatedly.
  // Things like GlobalChecker()->SameHeap() can also be called explicitly
  // to do the desired flavor of the check.
  static bool NoGlobalLeaks();

  // If the whole-program checker is active,
  // cancel its automatic execution after main() exits.
  // This requires that some leak check (e.g. NoGlobalLeaks())
  // has been called at least once on the whole-program checker.
  static void CancelGlobalCheck();

  // ----------------------------------------------------------------------- //
  // Non-static functions for starting and doing leak checking.

  // Start checking and name the leak check performed.
  // The name is used in naming dumped profiles
  // and needs to be unique only within your binary.
  // It must also be a string that can be a part of a file name,
  // in particular not contain path expressions.
  explicit HeapLeakChecker(const char *name);

  // Destructor (verifies that some *NoLeaks or *SameHeap method
  // has been called at least once).
  ~HeapLeakChecker();

  // These used to be different but are all the same now: they return
  // true iff all memory allocated since this HeapLeakChecker object
  // was constructed is still reachable from global state.
  //
  // Because we fork to convert addresses to symbol-names, and forking
  // is not thread-safe, and we may be called in a threaded context,
  // we do not try to symbolize addresses when called manually.
  bool NoLeaks() { return DoNoLeaks(DO_NOT_SYMBOLIZE); }

  // These forms are obsolete; use NoLeaks() instead.
  // TODO(csilvers): mark as DEPRECATED.
  bool QuickNoLeaks()  { return NoLeaks(); }
  bool BriefNoLeaks()  { return NoLeaks(); }
  bool SameHeap()      { return NoLeaks(); }
  bool QuickSameHeap() { return NoLeaks(); }
  bool BriefSameHeap() { return NoLeaks(); }

  // Detailed information about the number of leaked bytes and objects
  // (both of these can be negative as well).
  // These are available only after a *SameHeap or *NoLeaks
  // method has been called.
  // Note that it's possible for both of these to be zero
  // while SameHeap() or NoLeaks() returned false in case
  // of a heap state change that is significant
  // but preserves the byte and object counts.
  ssize_t BytesLeaked() const;
  ssize_t ObjectsLeaked() const;
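
  // Illustrative scoped check (a sketch following the comments above;
  // "test_foo" is an arbitrary name):
  //
  //   HeapLeakChecker heap_checker("test_foo");
  //   {
  //     ... code that is expected not to leak memory ...
  //   }
  //   if (!heap_checker.NoLeaks()) assert(NULL == "heap memory leak");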

  // ----------------------------------------------------------------------- //
  // Static helpers to make us ignore certain leaks.

  // Scoped helper class.  Should be allocated on the stack inside a
  // block of code.  Any heap allocations done in the code block
  // covered by the scoped object (including in nested function calls
  // done by the code block) will not be reported as leaks.  This is
  // the recommended replacement for the GetDisableChecksStart() and
  // DisableChecksToHereFrom() routines below.
  //
  // Example:
  //   void Foo() {
  //     HeapLeakChecker::Disabler disabler;
  //     ... code that allocates objects whose leaks should be ignored ...
  //   }
  //
  // REQUIRES: Destructor runs in same thread as constructor
  class Disabler {
   public:
    Disabler();
    ~Disabler();
   private:
    Disabler(const Disabler&);        // disallow copy
    void operator=(const Disabler&);  // and assign
  };

  // Ignore an object located at 'ptr' (can go at the start or into the object)
  // as well as all heap objects (transitively) referenced from it for the
  // purposes of heap leak checking.  Returns 'ptr' so that one can write
  //   static T* obj = IgnoreObject(new T(...));
  //
  // If 'ptr' does not point to an active allocated object at the time of this
  // call, it is ignored; but if it does, the object must not get deleted from
  // the heap later on.
  //
  // See also HiddenPointer, below, if you need to prevent a pointer from
  // being traversed by the heap checker but do not wish to transitively
  // whitelist objects referenced through it.
  template <typename T>
  static T* IgnoreObject(T* ptr) {
    DoIgnoreObject(static_cast<const void*>(const_cast<const T*>(ptr)));
    return ptr;
  }

  // Undo what an earlier IgnoreObject() call promised and asked to do.
  // At the time of this call 'ptr' must point at or inside of an active
  // allocated object which was previously registered with IgnoreObject().
  static void UnIgnoreObject(const void* ptr);

  // ----------------------------------------------------------------------- //
  // Internal types defined in .cc

  class Allocator;
  struct RangeValue;

 private:

  // ----------------------------------------------------------------------- //
  // Various helpers

  // Create the name of the heap profile file.
  // Should be deleted via Allocator::Free().
  char* MakeProfileNameLocked();

  // Helper for constructors
  void Create(const char *name, bool make_start_snapshot);

  enum ShouldSymbolize { SYMBOLIZE, DO_NOT_SYMBOLIZE };

  // Helper for *NoLeaks and *SameHeap
  bool DoNoLeaks(ShouldSymbolize should_symbolize);

  // Helper for NoGlobalLeaks, also called by the global destructor.
  static bool NoGlobalLeaksMaybeSymbolize(ShouldSymbolize should_symbolize);

  // These used to be public, but they are now deprecated.
  // Will remove entirely when all internal uses are fixed.
  // In the meantime, use friendship so the unittest can still test them.
  static void* GetDisableChecksStart();
  static void DisableChecksToHereFrom(const void* start_address);
  static void DisableChecksIn(const char* pattern);
  friend void RangeDisabledLeaks();
  friend void NamedTwoDisabledLeaks();
  friend void* RunNamedDisabledLeaks(void*);
  friend void TestHeapLeakCheckerNamedDisabling();

  // Actually implements IgnoreObject().
  static void DoIgnoreObject(const void* ptr);

  // Disable checks based on stack trace entry at a depth <=
  // max_depth.  Used to hide allocations done inside some special
  // libraries.
  static void DisableChecksFromToLocked(const void* start_address,
                                        const void* end_address,
                                        int max_depth);

  // Helper for DoNoLeaks to ignore all objects reachable from all live data
  static void IgnoreAllLiveObjectsLocked(const void* self_stack_top);

  // Callback we pass to TCMalloc_ListAllProcessThreads (see linuxthreads.h)
  // that is invoked when all threads of our process are found and stopped.
  // The callback does the things needed to ignore live data reachable from
  // thread stacks and registers for all our threads
  // as well as do other global-live-data ignoring
  // (via IgnoreNonThreadLiveObjectsLocked)
  // during the quiet state of all threads being stopped.
  // For the argument meaning see the comment by TCMalloc_ListAllProcessThreads.
  // Here we only use num_threads and thread_pids, which
  // TCMalloc_ListAllProcessThreads fills for us with the number and pids of
  // all the threads of our process it found and attached to.
  static int IgnoreLiveThreadsLocked(void* parameter,
                                     int num_threads,
                                     pid_t* thread_pids,
                                     va_list ap);

  // Helper for IgnoreAllLiveObjectsLocked and IgnoreLiveThreadsLocked
  // that we prefer to execute from IgnoreLiveThreadsLocked
  // while all threads are stopped.
  // This helper does live object discovery and ignoring
  // for all objects that are reachable from everything
  // not related to thread stacks and registers.
  static void IgnoreNonThreadLiveObjectsLocked();

  // Helper for IgnoreNonThreadLiveObjectsLocked and IgnoreLiveThreadsLocked
  // to discover and ignore all heap objects
  // reachable from currently considered live objects
  // (live_objects static global variable in our .cc file).
  // "name", "name2" are two strings that we print one after another
  // in a debug message to describe what kind of live object sources
  // are being used.
  static void IgnoreLiveObjectsLocked(const char* name, const char* name2);

  // Do the overall whole-program heap leak check if needed;
  // returns true if it did the leak check.
  static bool DoMainHeapCheck();

  // Type of task for UseProcMapsLocked
  enum ProcMapsTask {
    RECORD_GLOBAL_DATA,
    DISABLE_LIBRARY_ALLOCS
  };

  // Success/Error Return codes for UseProcMapsLocked.
  enum ProcMapsResult {
    PROC_MAPS_USED,
    CANT_OPEN_PROC_MAPS,
    NO_SHARED_LIBS_IN_PROC_MAPS
  };

  // Read /proc/self/maps, parse it, and do the 'proc_maps_task' for each line.
  static ProcMapsResult UseProcMapsLocked(ProcMapsTask proc_maps_task);

  // A ProcMapsTask to disable allocations from 'library'
  // that is mapped to [start_address..end_address)
  // (only if library is a certain system library).
  static void DisableLibraryAllocsLocked(const char* library,
                                         uintptr_t start_address,
                                         uintptr_t end_address);

  // Return true iff "*ptr" points to a heap object
  // ("*ptr" can point at the start or inside of a heap object
  //  so that this works e.g. for pointers to C++ arrays, C++ strings,
  //  multiple-inherited objects, or pointers to members).
  // We also fill *object_size for this object then
  // and we move "*ptr" to point to the very start of the heap object.
  static inline bool HaveOnHeapLocked(const void** ptr, size_t* object_size);

  // Helper to shut down the heap leak checker when it's not needed
  // or can't function properly.
  static void TurnItselfOffLocked();

  // Internally-used c-tor to start whole-executable checking.
  HeapLeakChecker();

  // ----------------------------------------------------------------------- //
  // Friends and externally accessed helpers.

  // Helper for VerifyHeapProfileTableStackGet in the unittest
  // to get the recorded allocation caller for ptr,
  // which must be a heap object.
  static const void* GetAllocCaller(void* ptr);
  friend void VerifyHeapProfileTableStackGet();

  // This gets to execute before constructors for all global objects
  static void BeforeConstructorsLocked();
  friend void HeapLeakChecker_BeforeConstructors();

  // This gets to execute after destructors for all global objects
  friend void HeapLeakChecker_AfterDestructors();

  // Full starting of recommended whole-program checking.
  friend void HeapLeakChecker_InternalInitStart();

  // Runs REGISTER_HEAPCHECK_CLEANUP cleanups and potentially
  // calls DoMainHeapCheck
  friend void HeapLeakChecker_RunHeapCleanups();

  // ----------------------------------------------------------------------- //
  // Member data.

  class SpinLock* lock_;  // to make HeapLeakChecker objects thread-safe
  const char* name_;  // our remembered name (we own it)
                      // NULL means this leak checker is a noop

  // Snapshot taken when the checker was created.  May be NULL
  // for the global heap checker object.  We use void* instead of
  // HeapProfileTable::Snapshot* to avoid including heap-profile-table.h.
  void* start_snapshot_;

  bool has_checked_;  // if we have done the leak check, so these are ready:
  ssize_t inuse_bytes_increase_;   // bytes-in-use increase for this checker
  ssize_t inuse_allocs_increase_;  // allocations-in-use increase
                                   // for this checker
  bool keep_profiles_;  // iff we should keep the heap profiles we've made

  // ----------------------------------------------------------------------- //

  // Disallow "evil" constructors.
  HeapLeakChecker(const HeapLeakChecker&);
  void operator=(const HeapLeakChecker&);
};


// Holds a pointer that will not be traversed by the heap checker.
// Contrast with HeapLeakChecker::IgnoreObject(o), in which o and
// all objects reachable from o are ignored by the heap checker.
template <class T>
class HiddenPointer {
 public:
  explicit HiddenPointer(T* t)
      : masked_t_(reinterpret_cast<uintptr_t>(t) ^ kHideMask) {
  }
  // Returns unhidden pointer.  Be careful where you save the result.
  T* get() const { return reinterpret_cast<T*>(masked_t_ ^ kHideMask); }

 private:
  // Arbitrary value, but not such that xor'ing with it is likely
  // to map one valid pointer to another valid pointer:
  static const uintptr_t kHideMask =
      static_cast<uintptr_t>(0xF03A5F7BF03A5F7Bll);
  uintptr_t masked_t_;
};

// A class that exists solely to run its destructor.  This class should not be
// used directly, but instead by the REGISTER_HEAPCHECK_CLEANUP macro below.
class PERFTOOLS_DLL_DECL HeapCleaner {
 public:
  typedef void (*void_function)(void);
  HeapCleaner(void_function f);
  static void RunHeapCleanups();
 private:
  static std::vector<void_function>* heap_cleanups_;
};

// A macro to declare module heap check cleanup tasks
// (they run only if we are doing heap leak checking.)
// 'body' should be the cleanup code to run.  'name' doesn't matter,
// but must be unique amongst all REGISTER_HEAPCHECK_CLEANUP calls.
#define REGISTER_HEAPCHECK_CLEANUP(name, body)                            \
  namespace {                                                             \
  void heapcheck_cleanup_##name() { body; }                               \
  static HeapCleaner heapcheck_cleaner_##name(&heapcheck_cleanup_##name); \
  }

#endif  // BASE_HEAP_CHECKER_H_
105
3party/gperftools/src/gperftools/heap-profiler.h
Normal file
@@ -0,0 +1,105 @@
/* -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*- */
/* Copyright (c) 2005, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat
 *
 * Module for heap-profiling.
 *
 * For full(er) information, see docs/heapprofile.html
 *
 * This module can be linked into your program with no slowdown
 * unless you activate the profiler using one of the following methods:
 *
 *    1. Before starting the program, set the environment variable
 *       "HEAPPROFILE" to be the name of the file to which the profile
 *       data should be written.
 *
 *    2. Programmatically, start and stop the profiler using the
 *       routines "HeapProfilerStart(filename)" and "HeapProfilerStop()".
 *
 */

#ifndef BASE_HEAP_PROFILER_H_
#define BASE_HEAP_PROFILER_H_

#include <stddef.h>

/* Annoying stuff for windows; makes sure clients can import these functions */
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

/* All this code should be usable from within C apps. */
#ifdef __cplusplus
extern "C" {
#endif

/* Start profiling and arrange to write profile data to file names
 * of the form: "prefix.0000", "prefix.0001", ...
 */
PERFTOOLS_DLL_DECL void HeapProfilerStart(const char* prefix);

/* Returns non-zero if we are currently profiling the heap.  (Returns
 * an int rather than a bool so it's usable from C.)  This is true
 * between calls to HeapProfilerStart() and HeapProfilerStop(), and
 * also if the program has been run with HEAPPROFILE, or some other
 * way to turn on whole-program profiling.
 */
int IsHeapProfilerRunning();

/* Stop heap profiling.  Can be restarted again with HeapProfilerStart(),
 * but the currently accumulated profiling information will be cleared.
 */
PERFTOOLS_DLL_DECL void HeapProfilerStop();

/* Dump a profile now - can be used for dumping at a hopefully
 * quiescent state in your program, in order to more easily track down
 * memory leaks.  Will include the reason in the logged message.
 */
PERFTOOLS_DLL_DECL void HeapProfilerDump(const char *reason);

/* Generate current heap profiling information.
 * Returns an empty string when heap profiling is not active.
 * The returned pointer is a '\0'-terminated string allocated using malloc()
 * and should be free()-ed as soon as the caller does not need it anymore.
 */
PERFTOOLS_DLL_DECL char* GetHeapProfile();
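
/* Illustrative programmatic use (a sketch; the prefix is arbitrary):
 *
 *   HeapProfilerStart("/tmp/myprog");    // writes /tmp/myprog.0000, ...
 *   ... run the interesting workload ...
 *   HeapProfilerDump("after workload");  // force an extra dump point
 *   HeapProfilerStop();
 */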

#ifdef __cplusplus
}  // extern "C"
#endif

#endif  /* BASE_HEAP_PROFILER_H_ */
444
3party/gperftools/src/gperftools/malloc_extension.h
Normal file
@@ -0,0 +1,444 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>
//
// Extra extensions exported by some malloc implementations.  These
// extensions are accessed through a virtual base class so an
// application can link against a malloc that does not implement these
// extensions, and it will get default versions that do nothing.
//
// NOTE FOR C USERS: If you wish to use this functionality from within
// a C program, see malloc_extension_c.h.

#ifndef BASE_MALLOC_EXTENSION_H_
#define BASE_MALLOC_EXTENSION_H_

#include <stddef.h>
// I can't #include config.h in this public API file, but I should
// really use configure (and make malloc_extension.h a .in file) to
// figure out if the system has stdint.h or not.  But I'm lazy, so
// for now I'm assuming it's a problem only with MSVC.
#ifndef _MSC_VER
#include <stdint.h>
#endif
#include <string>
#include <vector>

// Annoying stuff for windows -- makes sure clients can import these functions
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

static const int kMallocHistogramSize = 64;

// One day, we could support other types of writers (perhaps for C?)
typedef std::string MallocExtensionWriter;

namespace base {
struct MallocRange;
}

// Interface to a pluggable system allocator.
class PERFTOOLS_DLL_DECL SysAllocator {
 public:
  SysAllocator() {
  }
  virtual ~SysAllocator();

  // Allocates "size" bytes of memory from the system, aligned to "alignment".
  // Returns NULL if it failed.  Otherwise, the bytes from the returned
  // pointer p up to and including (p + *actual_size - 1) have been allocated.
  virtual void* Alloc(size_t size, size_t *actual_size, size_t alignment) = 0;
};

// The default implementations of the following routines do nothing.
// All implementations should be thread-safe; the current one
// (TCMallocImplementation) is.
class PERFTOOLS_DLL_DECL MallocExtension {
 public:
  virtual ~MallocExtension();

  // Call this very early in the program execution -- say, in a global
  // constructor -- to set up parameters and state needed by all
  // instrumented malloc implementations.  One example: this routine
  // sets environment variables to tell STL to use libc's malloc()
  // instead of doing its own memory management.  This is safe to call
  // multiple times, as long as each time is before threads start up.
  static void Initialize();

  // See "verify_memory.h" to see what these routines do
  virtual bool VerifyAllMemory();
  virtual bool VerifyNewMemory(const void* p);
  virtual bool VerifyArrayNewMemory(const void* p);
  virtual bool VerifyMallocMemory(const void* p);
  virtual bool MallocMemoryStats(int* blocks, size_t* total,
                                 int histogram[kMallocHistogramSize]);

  // Get a human readable description of the following malloc data structures.
  // - Total inuse memory by application.
  // - Free memory (thread, central and page heap).
  // - Freelist of central cache, each class.
  // - Page heap freelist.
  // The state is stored as a null-terminated string
  // in a prefix of "buffer[0,buffer_length-1]".
  // REQUIRES: buffer_length > 0.
  virtual void GetStats(char* buffer, int buffer_length);

  // Outputs to "writer" a sample of live objects and the stack traces
  // that allocated these objects.  The format of the returned output
  // is equivalent to the output of the heap profiler and can
  // therefore be passed to "pprof".  This function is equivalent to
  // ReadStackTraces.  The main difference is that this function returns
  // serialized data appropriately formatted for use by the pprof tool.
  //
  // Since gperftools 2.8 heap samples are not de-duplicated by the
  // library anymore.
  //
  // NOTE: by default, tcmalloc does not do any heap sampling, and this
  //       function will always return an empty sample.  To get useful
  //       data from GetHeapSample, you must also set the environment
  //       variable TCMALLOC_SAMPLE_PARAMETER to a value such as 524288.
  virtual void GetHeapSample(MallocExtensionWriter* writer);

  // Outputs to "writer" the stack traces that caused growth in the
  // address space size.  The format of the returned output is
  // equivalent to the output of the heap profiler and can therefore
  // be passed to "pprof".  This function is equivalent to
  // ReadHeapGrowthStackTraces.  The main difference is that this function
  // returns serialized data appropriately formatted for use by the
  // pprof tool.  (This does not depend on, or require,
  // TCMALLOC_SAMPLE_PARAMETER.)
  virtual void GetHeapGrowthStacks(MallocExtensionWriter* writer);

  // Invokes func(arg, range) for every controlled memory
  // range.  *range is filled in with information about the range.
  //
  // This is a best-effort interface useful only for performance
  // analysis.  The implementation may not call func at all.
  typedef void (RangeFunction)(void*, const base::MallocRange*);
  virtual void Ranges(void* arg, RangeFunction func);

  // -------------------------------------------------------------------
  // Control operations for getting and setting malloc implementation
  // specific parameters.  Some currently useful properties:
  //
  // generic
  // -------
  // "generic.current_allocated_bytes"
  //      Number of bytes currently allocated by application
  //      This property is not writable.
  //
  // "generic.heap_size"
  //      Number of bytes in the heap ==
  //      current_allocated_bytes +
  //      fragmentation +
  //      freed memory regions
  //      This property is not writable.
  //
  // "generic.total_physical_bytes"
  //      Estimate of total bytes of the physical memory usage by the
  //      allocator ==
  //      current_allocated_bytes +
  //      fragmentation +
  //      metadata
  //      This property is not writable.
  //
  // tcmalloc
  // --------
  // "tcmalloc.max_total_thread_cache_bytes"
  //      Upper limit on total number of bytes stored across all
  //      per-thread caches.  Default: 16MB.
  //
  // "tcmalloc.current_total_thread_cache_bytes"
  //      Number of bytes used across all thread caches.
  //      This property is not writable.
  //
  // "tcmalloc.central_cache_free_bytes"
  //      Number of free bytes in the central cache that have been
  //      assigned to size classes.  They always count towards virtual
  //      memory usage, and unless the underlying memory is swapped out
  //      by the OS, they also count towards physical memory usage.
  //      This property is not writable.
  //
  // "tcmalloc.transfer_cache_free_bytes"
  //      Number of free bytes that are waiting to be transferred between
  //      the central cache and a thread cache.  They always count
  //      towards virtual memory usage, and unless the underlying memory
  //      is swapped out by the OS, they also count towards physical
  //      memory usage.  This property is not writable.
  //
  // "tcmalloc.thread_cache_free_bytes"
  //      Number of free bytes in thread caches.  They always count
  //      towards virtual memory usage, and unless the underlying memory
  //      is swapped out by the OS, they also count towards physical
  //      memory usage.  This property is not writable.
  //
  // "tcmalloc.pageheap_free_bytes"
  //      Number of bytes in free, mapped pages in page heap.  These
  //      bytes can be used to fulfill allocation requests.  They
  //      always count towards virtual memory usage, and unless the
  //      underlying memory is swapped out by the OS, they also count
  //      towards physical memory usage.  This property is not writable.
  //
  // "tcmalloc.pageheap_unmapped_bytes"
  //      Number of bytes in free, unmapped pages in page heap.
  //      These are bytes that have been released back to the OS,
  //      possibly by one of the MallocExtension "Release" calls.
  //      They can be used to fulfill allocation requests, but
  //      typically incur a page fault.  They always count towards
  //      virtual memory usage, and depending on the OS, typically
  //      do not count towards physical memory usage.  This property
  //      is not writable.
  // -------------------------------------------------------------------

  // Get the named "property"'s value.  Returns true if the property
  // is known.  Returns false if the property is not a valid property
  // name for the current malloc implementation.
  // REQUIRES: property != NULL; value != NULL
  virtual bool GetNumericProperty(const char* property, size_t* value);

  // Set the named "property"'s value.  Returns true if the property
  // is known and writable.  Returns false if the property is not a
  // valid property name for the current malloc implementation, or
  // is not writable.
  // REQUIRES: property != NULL
  virtual bool SetNumericProperty(const char* property, size_t value);
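
  // Illustrative query (a sketch using a property documented above):
  //
  //   size_t allocated = 0;
  //   if (MallocExtension::instance()->GetNumericProperty(
  //           "generic.current_allocated_bytes", &allocated)) {
  //     // "allocated" now holds the bytes currently allocated by the app.
  //   }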

  // Mark the current thread as "idle".  This routine may optionally
  // be called by threads as a hint to the malloc implementation that
  // any thread-specific resources should be released.  Note: this may
  // be an expensive routine, so it should not be called too often.
  //
  // Also, if the code that calls this routine will go to sleep for
  // a while, it should take care to not allocate anything between
  // the call to this routine and the beginning of the sleep.
  //
  // Most malloc implementations ignore this routine.
  virtual void MarkThreadIdle();

  // Mark the current thread as "busy".  This routine should be
  // called after MarkThreadIdle() if the thread will now do more
  // work.  If this method is not called, performance may suffer.
  //
  // Most malloc implementations ignore this routine.
  virtual void MarkThreadBusy();

  // Gets the system allocator used by the malloc extension instance.  Returns
  // NULL for malloc implementations that do not support pluggable system
  // allocators.
  virtual SysAllocator* GetSystemAllocator();

  // Sets the system allocator to the specified one.
  //
  // Users could register their own system allocators for a malloc
  // implementation that supports pluggable system allocators, such as
  // TCMalloc, by doing:
  //   alloc = new MyOwnSysAllocator();
  //   MallocExtension::instance()->SetSystemAllocator(alloc);
  // It's up to users whether to fall back (recommended) to the default
  // system allocator (use GetSystemAllocator() above) or not.  The caller is
  // responsible for any necessary locking.
  // See tcmalloc/system-alloc.h for the interface and
  // tcmalloc/memfs_malloc.cc for the examples.
  //
  // It's a no-op for malloc implementations that do not support pluggable
  // system allocators.
  virtual void SetSystemAllocator(SysAllocator *a);

  // Try to release num_bytes of free memory back to the operating
  // system for reuse.  Use this extension with caution -- to get this
  // memory back may require faulting pages back in by the OS, and
  // that may be slow.  (Currently only implemented in tcmalloc.)
  virtual void ReleaseToSystem(size_t num_bytes);

  // Same as ReleaseToSystem() but release as much memory as possible.
  virtual void ReleaseFreeMemory();

  // Sets the rate at which we release unused memory to the system.
  // Zero means we never release memory back to the system.  Increase
  // this flag to return memory faster; decrease it to return memory
  // slower.  Reasonable rates are in the range [0,10].  (Currently
  // only implemented in tcmalloc).
  virtual void SetMemoryReleaseRate(double rate);

  // Gets the release rate.  Returns a value < 0 if unknown.
  virtual double GetMemoryReleaseRate();

  // Returns the estimated number of bytes that will be allocated for
  // a request of "size" bytes.  This is an estimate: an allocation of
  // SIZE bytes may reserve more bytes, but will never reserve less.
  // (Currently only implemented in tcmalloc, other implementations
  // always return SIZE.)
  // This is equivalent to malloc_good_size() in OS X.
  virtual size_t GetEstimatedAllocatedSize(size_t size);

  // Returns the actual number N of bytes reserved by tcmalloc for the
  // pointer p.  The client is allowed to use the range of bytes
  // [p, p+N) in any way it wishes (i.e. N is the "usable size" of this
  // allocation).  This number may be equal to or greater than the number
  // of bytes requested when p was allocated.
  // p must have been allocated by this malloc implementation,
  // must not be an interior pointer -- that is, must be exactly
  // the pointer returned by malloc() et al., not some offset
  // from that -- and should not have been freed yet.  p may be NULL.
  // (Currently only implemented in tcmalloc; other implementations
  // will return 0.)
  // This is equivalent to malloc_size() in OS X, malloc_usable_size()
  // in glibc, and _msize() for windows.
  virtual size_t GetAllocatedSize(const void* p);

  // Returns kOwned if this malloc implementation allocated the memory
  // pointed to by p, or kNotOwned if some other malloc implementation
  // allocated it or p is NULL.  May also return kUnknownOwnership if
  // the malloc implementation does not keep track of ownership.
  // REQUIRES: p must be a value returned from a previous call to
  // malloc(), calloc(), realloc(), memalign(), posix_memalign(),
  // valloc(), pvalloc(), new, or new[], and must refer to memory that
  // is currently allocated (so, for instance, you should not pass in
  // a pointer after having called free() on it).
  enum Ownership {
    // NOTE: Enum values MUST be kept in sync with the version in
    // malloc_extension_c.h
    kUnknownOwnership = 0,
    kOwned,
    kNotOwned
  };
  virtual Ownership GetOwnership(const void* p);

  // The current malloc implementation.  Always non-NULL.
  static MallocExtension* instance();

  // Change the malloc implementation.  Typically called by the
  // malloc implementation during initialization.
  static void Register(MallocExtension* implementation);

  // Returns detailed information about malloc's freelists.  For each list,
  // return a FreeListInfo:
  struct FreeListInfo {
    size_t min_object_size;
    size_t max_object_size;
    size_t total_bytes_free;
    const char* type;
  };
  // Each item in the vector refers to a different freelist.  The lists
  // are identified by the range of allocations that objects in the
  // list can satisfy ([min_object_size, max_object_size]) and the
  // type of freelist (see below).  The current size of the list is
  // returned in total_bytes_free (which counts against a process's
  // resident and virtual size).
  //
  // Currently supported types are:
  //
  // "tcmalloc.page{_unmapped}" - tcmalloc's page heap.  An entry for each size
  //          class in the page heap is returned.  Bytes in "page_unmapped"
  //          are no longer backed by physical memory and do not count against
  //          the resident size of a process.
  //
  // "tcmalloc.large{_unmapped}" - tcmalloc's list of objects larger
  //          than the largest page heap size class.  Only one "large"
  //          entry is returned.  There is no upper-bound on the size
  //          of objects in the large free list; this call returns
  //          kint64max for max_object_size.  Bytes in
  //          "large_unmapped" are no longer backed by physical memory
  //          and do not count against the resident size of a process.
  //
  // "tcmalloc.central" - tcmalloc's central free-list.  One entry per
  //          size-class is returned.  Never unmapped.
  //
  // "debug.free_queue" - free objects queued by the debug allocator
  //          and not returned to tcmalloc.
  //
  // "tcmalloc.thread" - tcmalloc's per-thread caches.  Never unmapped.
  virtual void GetFreeListSizes(std::vector<FreeListInfo>* v);

  // Get a list of stack traces of sampled allocation points.  Returns
  // a pointer to a "new[]-ed" result array, and stores the sample
  // period in "sample_period".
  //
  // The state is stored as a sequence of adjacent entries
  // in the returned array.  Each entry has the following form:
  //    uintptr_t count;        // Number of objects with following trace
  //    uintptr_t size;         // Total size of objects with following trace
  //    uintptr_t depth;        // Number of PC values in stack trace
  //    void*     stack[depth]; // PC values that form the stack trace
  //
  // The list of entries is terminated by a "count" of 0.
  //
  // It is the responsibility of the caller to "delete[]" the returned array.
  //
  // May return NULL to indicate no results.
  //
  // This is an internal extension.  Callers should use the more
  // convenient "GetHeapSample(string*)" method defined above.
  virtual void** ReadStackTraces(int* sample_period);
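
  // A sketch of walking the returned array, following the entry layout
  // documented above (assumes sizeof(uintptr_t) == sizeof(void*)):
  //
  //   int period = 0;
  //   void** entries = MallocExtension::instance()->ReadStackTraces(&period);
  //   if (entries != NULL) {
  //     uintptr_t* p = reinterpret_cast<uintptr_t*>(entries);
  //     while (p[0] != 0) {                  // a "count" of 0 ends the list
  //       uintptr_t count = p[0], size = p[1], depth = p[2];
  //       void** stack = reinterpret_cast<void**>(p + 3);
  //       ... use count, size, stack[0 .. depth-1] ...
  //       p += 3 + depth;
  //     }
  //     delete[] entries;
  //   }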

  // Like ReadStackTraces(), but returns stack traces that caused growth
  // in the address space size.
  virtual void** ReadHeapGrowthStackTraces();

  // Returns the size in bytes of the calling thread's cache.
  virtual size_t GetThreadCacheSize();

  // Note: as of gperftools 2.8 this is identical to
  // MarkThreadIdle.  See github issue #880.
  virtual void MarkThreadTemporarilyIdle();
};

namespace base {

// Information passed per range.  More fields may be added later.
struct MallocRange {
  enum Type {
    INUSE,                // Application is using this range
    FREE,                 // Range is currently free
    UNMAPPED,             // Backing physical memory has been returned to the OS
    UNKNOWN
    // More enum values may be added in the future
  };

  uintptr_t address;    // Address of range
  size_t length;        // Byte length of range
  Type type;            // Type of this range
  double fraction;      // Fraction of range that is being used (0 if !INUSE)

  // Perhaps add the following:
  //   - stack trace if this range was sampled
  //   - heap growth stack trace if applicable to this range
  //   - age when allocated (for inuse) or freed (if not in use)
};

}  // namespace base

#endif  // BASE_MALLOC_EXTENSION_H_
103
3party/gperftools/src/gperftools/malloc_extension_c.h
Normal file
@@ -0,0 +1,103 @@
/* Copyright (c) 2008, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * --
 * Author: Craig Silverstein
 *
 * C shims for the C++ malloc_extension.h.  See malloc_extension.h for
 * details.  Note these C shims always work on
 * MallocExtension::instance(); it is not possible to have more than
 * one MallocExtension object in C applications.
 */

#ifndef _MALLOC_EXTENSION_C_H_
#define _MALLOC_EXTENSION_C_H_

#include <stddef.h>
#include <sys/types.h>

/* Annoying stuff for windows -- makes sure clients can import these fns */
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

#ifdef __cplusplus
extern "C" {
#endif

#define kMallocExtensionHistogramSize 64

PERFTOOLS_DLL_DECL int MallocExtension_VerifyAllMemory(void);
PERFTOOLS_DLL_DECL int MallocExtension_VerifyNewMemory(const void* p);
PERFTOOLS_DLL_DECL int MallocExtension_VerifyArrayNewMemory(const void* p);
PERFTOOLS_DLL_DECL int MallocExtension_VerifyMallocMemory(const void* p);
PERFTOOLS_DLL_DECL int MallocExtension_MallocMemoryStats(int* blocks, size_t* total,
                                                         int histogram[kMallocExtensionHistogramSize]);
PERFTOOLS_DLL_DECL void MallocExtension_GetStats(char* buffer, int buffer_length);

/* TODO(csilvers): write a C version of these routines, that perhaps
 * takes a function ptr and a void *.
 */
/* void MallocExtension_GetHeapSample(string* result); */
/* void MallocExtension_GetHeapGrowthStacks(string* result); */

PERFTOOLS_DLL_DECL int MallocExtension_GetNumericProperty(const char* property, size_t* value);
PERFTOOLS_DLL_DECL int MallocExtension_SetNumericProperty(const char* property, size_t value);
PERFTOOLS_DLL_DECL void MallocExtension_MarkThreadIdle(void);
PERFTOOLS_DLL_DECL void MallocExtension_MarkThreadBusy(void);
PERFTOOLS_DLL_DECL void MallocExtension_ReleaseToSystem(size_t num_bytes);
PERFTOOLS_DLL_DECL void MallocExtension_ReleaseFreeMemory(void);
PERFTOOLS_DLL_DECL void MallocExtension_SetMemoryReleaseRate(double rate);
PERFTOOLS_DLL_DECL double MallocExtension_GetMemoryReleaseRate(void);
PERFTOOLS_DLL_DECL size_t MallocExtension_GetEstimatedAllocatedSize(size_t size);
PERFTOOLS_DLL_DECL size_t MallocExtension_GetAllocatedSize(const void* p);
PERFTOOLS_DLL_DECL size_t MallocExtension_GetThreadCacheSize(void);
PERFTOOLS_DLL_DECL void MallocExtension_MarkThreadTemporarilyIdle(void);

/*
 * NOTE: These enum values MUST be kept in sync with the version in
 *       malloc_extension.h
 */
typedef enum {
  MallocExtension_kUnknownOwnership = 0,
  MallocExtension_kOwned,
  MallocExtension_kNotOwned
} MallocExtension_Ownership;

PERFTOOLS_DLL_DECL MallocExtension_Ownership MallocExtension_GetOwnership(const void* p);
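
/* Illustrative C use (a sketch using a property from malloc_extension.h):
 *
 *   size_t heap_size = 0;
 *   if (MallocExtension_GetNumericProperty("generic.heap_size", &heap_size)) {
 *     ... heap_size now holds the malloc heap size in bytes ...
 *   }
 */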

#ifdef __cplusplus
}  /* extern "C" */
#endif

#endif /* _MALLOC_EXTENSION_C_H_ */
359
3party/gperftools/src/gperftools/malloc_hook.h
Normal file
@@ -0,0 +1,359 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//
// Some of our malloc implementations can invoke the following hooks whenever
// memory is allocated or deallocated.  MallocHook is thread-safe, and things
// you do before calling AddFooHook(MyHook) are visible to any resulting calls
// to MyHook.  Hooks must be thread-safe.  If you write:
//
//   CHECK(MallocHook::AddNewHook(&MyNewHook));
//
// MyNewHook will be invoked in subsequent calls in the current thread, but
// there are no guarantees on when it might be invoked in other threads.
//
// There are a limited number of slots available for each hook type.  Add*Hook
// will return false if there are no slots available.  Remove*Hook will return
// false if the given hook was not already installed.
//
// The order in which individual hooks are called in Invoke*Hook is undefined.
//
// It is safe for a hook to remove itself within Invoke*Hook and add other
// hooks.  Any hooks added inside a hook invocation (for the same hook type)
// will not be invoked for the current invocation.
//
// One important user of these hooks is the heap profiler.
//
// CAVEAT: If you add new MallocHook::Invoke* calls then those calls must be
// directly in the code of the (de)allocation function that is provided to the
// user and that function must have an ATTRIBUTE_SECTION(malloc_hook) attribute.
//
// Note: the Invoke*Hook() functions are defined in malloc_hook-inl.h.  If you
// need to invoke a hook (which you shouldn't unless you're part of tcmalloc),
// be sure to #include malloc_hook-inl.h in addition to malloc_hook.h.
//
// NOTE FOR C USERS: If you want to use malloc_hook functionality from
// a C program, #include malloc_hook_c.h instead of this file.

#ifndef _MALLOC_HOOK_H_
#define _MALLOC_HOOK_H_

#include <stddef.h>
#include <sys/types.h>
extern "C" {
#include "malloc_hook_c.h"  // a C version of the malloc_hook interface
}

// Annoying stuff for windows -- makes sure clients can import these functions
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

// The C++ methods below call the C version (MallocHook_*), and thus
// convert between an int and a bool.  Windows complains about this
// (a "performance warning") which we don't care about, so we suppress.
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable:4800)
#endif

// Note: malloc_hook_c.h defines MallocHook_*Hook and
// MallocHook_{Add,Remove}*Hook.  The version of these inside the MallocHook
// class are defined in terms of the malloc_hook_c version.  See malloc_hook_c.h
// for details of these types/functions.

class PERFTOOLS_DLL_DECL MallocHook {
 public:
  // The NewHook is invoked whenever an object is allocated.
  // It may be passed NULL if the allocator returned NULL.
  typedef MallocHook_NewHook NewHook;
  inline static bool AddNewHook(NewHook hook) {
    return MallocHook_AddNewHook(hook);
  }
  inline static bool RemoveNewHook(NewHook hook) {
    return MallocHook_RemoveNewHook(hook);
  }
  inline static void InvokeNewHook(const void* p, size_t s);
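
  // Illustrative hook (a sketch; MyNewHook is the hypothetical hook named
  // in the file comment above -- it must itself be thread-safe):
  //
  //   static void MyNewHook(const void* ptr, size_t size) {
  //     // record (ptr, size); ptr may be NULL if the allocation failed
  //   }
  //   ...
  //   CHECK(MallocHook::AddNewHook(&MyNewHook));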

  // The DeleteHook is invoked whenever an object is deallocated.
  // It may be passed NULL if the caller is trying to delete NULL.
  typedef MallocHook_DeleteHook DeleteHook;
  inline static bool AddDeleteHook(DeleteHook hook) {
    return MallocHook_AddDeleteHook(hook);
  }
  inline static bool RemoveDeleteHook(DeleteHook hook) {
    return MallocHook_RemoveDeleteHook(hook);
  }
  inline static void InvokeDeleteHook(const void* p);

  // The PreMmapHook is invoked with mmap or mmap64 arguments just
  // before the call is actually made.  Such a hook may be useful
  // in memory limited contexts, to catch allocations that will exceed
  // a memory limit, and take outside actions to increase that limit.
  typedef MallocHook_PreMmapHook PreMmapHook;
  inline static bool AddPreMmapHook(PreMmapHook hook) {
    return MallocHook_AddPreMmapHook(hook);
  }
  inline static bool RemovePreMmapHook(PreMmapHook hook) {
    return MallocHook_RemovePreMmapHook(hook);
  }
  inline static void InvokePreMmapHook(const void* start,
                                       size_t size,
                                       int protection,
                                       int flags,
                                       int fd,
                                       off_t offset);

  // The MmapReplacement is invoked after the PreMmapHook but before
  // the call is actually made.  The MmapReplacement should return true
  // if it handled the call, or false if it is still necessary to
  // call mmap/mmap64.
  // This should be used only by experts, and users must be
  // extremely careful to avoid recursive calls to mmap.  The replacement
  // should be async signal safe.
  // Only one MmapReplacement is supported.  After setting an MmapReplacement
  // you must call RemoveMmapReplacement before calling SetMmapReplacement
  // again.
  typedef MallocHook_MmapReplacement MmapReplacement;
  inline static bool SetMmapReplacement(MmapReplacement hook) {
    return MallocHook_SetMmapReplacement(hook);
  }
  inline static bool RemoveMmapReplacement(MmapReplacement hook) {
    return MallocHook_RemoveMmapReplacement(hook);
  }
  inline static bool InvokeMmapReplacement(const void* start,
                                           size_t size,
                                           int protection,
                                           int flags,
                                           int fd,
                                           off_t offset,
                                           void** result);


  // The MmapHook is invoked whenever a region of memory is mapped.
  // It may be passed MAP_FAILED if the mmap failed.
  typedef MallocHook_MmapHook MmapHook;
  inline static bool AddMmapHook(MmapHook hook) {
    return MallocHook_AddMmapHook(hook);
  }
  inline static bool RemoveMmapHook(MmapHook hook) {
    return MallocHook_RemoveMmapHook(hook);
  }
  inline static void InvokeMmapHook(const void* result,
                                    const void* start,
                                    size_t size,
                                    int protection,
                                    int flags,
                                    int fd,
                                    off_t offset);

  // The MunmapReplacement is invoked with munmap arguments just before
  // the call is actually made.  The MunmapReplacement should return true
  // if it handled the call, or false if it is still necessary to
  // call munmap.
  // This should be used only by experts.  The replacement should be
  // async signal safe.
  // Only one MunmapReplacement is supported.  After setting an
  // MunmapReplacement you must call RemoveMunmapReplacement before
  // calling SetMunmapReplacement again.
  typedef MallocHook_MunmapReplacement MunmapReplacement;
  inline static bool SetMunmapReplacement(MunmapReplacement hook) {
    return MallocHook_SetMunmapReplacement(hook);
  }
  inline static bool RemoveMunmapReplacement(MunmapReplacement hook) {
    return MallocHook_RemoveMunmapReplacement(hook);
  }
  inline static bool InvokeMunmapReplacement(const void* p,
                                             size_t size,
                                             int* result);

  // The MunmapHook is invoked whenever a region of memory is unmapped.
  typedef MallocHook_MunmapHook MunmapHook;
  inline static bool AddMunmapHook(MunmapHook hook) {
    return MallocHook_AddMunmapHook(hook);
  }
  inline static bool RemoveMunmapHook(MunmapHook hook) {
    return MallocHook_RemoveMunmapHook(hook);
  }
  inline static void InvokeMunmapHook(const void* p, size_t size);

  // The MremapHook is invoked whenever a region of memory is remapped.
  typedef MallocHook_MremapHook MremapHook;
  inline static bool AddMremapHook(MremapHook hook) {
    return MallocHook_AddMremapHook(hook);
  }
  inline static bool RemoveMremapHook(MremapHook hook) {
    return MallocHook_RemoveMremapHook(hook);
  }
  inline static void InvokeMremapHook(const void* result,
                                      const void* old_addr,
                                      size_t old_size,
                                      size_t new_size,
                                      int flags,
                                      const void* new_addr);

  // The PreSbrkHook is invoked just before sbrk is called -- except when
  // the increment is 0.  This is because sbrk(0) is often called
  // to get the top of the memory stack, and is not actually a
  // memory-allocation call.  It may be useful in memory-limited contexts,
  // to catch allocations that will exceed the limit and take outside
  // actions to increase such a limit.
  typedef MallocHook_PreSbrkHook PreSbrkHook;
  inline static bool AddPreSbrkHook(PreSbrkHook hook) {
    return MallocHook_AddPreSbrkHook(hook);
  }
  inline static bool RemovePreSbrkHook(PreSbrkHook hook) {
    return MallocHook_RemovePreSbrkHook(hook);
  }
  inline static void InvokePreSbrkHook(ptrdiff_t increment);

  // The SbrkHook is invoked whenever sbrk is called -- except when
  // the increment is 0.  This is because sbrk(0) is often called
  // to get the top of the memory stack, and is not actually a
  // memory-allocation call.
  typedef MallocHook_SbrkHook SbrkHook;
  inline static bool AddSbrkHook(SbrkHook hook) {
    return MallocHook_AddSbrkHook(hook);
  }
  inline static bool RemoveSbrkHook(SbrkHook hook) {
    return MallocHook_RemoveSbrkHook(hook);
  }
  inline static void InvokeSbrkHook(const void* result, ptrdiff_t increment);

  // Get the current stack trace.  Try to skip all routines up to and
  // including the caller of MallocHook::Invoke*.
  // Use "skip_count" (similarly to GetStackTrace from stacktrace.h)
  // as a hint about how many routines to skip if better information
  // is not available.
  inline static int GetCallerStackTrace(void** result, int max_depth,
|
||||
int skip_count) {
|
||||
return MallocHook_GetCallerStackTrace(result, max_depth, skip_count);
|
||||
}
|
||||
|
||||
// Unhooked versions of mmap() and munmap(). These should be used
|
||||
// only by experts, since they bypass heapchecking, etc.
|
||||
// Note: These do not run hooks, but they still use the MmapReplacement
|
||||
// and MunmapReplacement.
|
||||
static void* UnhookedMMap(void *start, size_t length, int prot, int flags,
|
||||
int fd, off_t offset);
|
||||
static int UnhookedMUnmap(void *start, size_t length);
|
||||
|
||||
// The following are DEPRECATED.
|
||||
inline static NewHook GetNewHook();
|
||||
inline static NewHook SetNewHook(NewHook hook) {
|
||||
return MallocHook_SetNewHook(hook);
|
||||
}
|
||||
|
||||
inline static DeleteHook GetDeleteHook();
|
||||
inline static DeleteHook SetDeleteHook(DeleteHook hook) {
|
||||
return MallocHook_SetDeleteHook(hook);
|
||||
}
|
||||
|
||||
inline static PreMmapHook GetPreMmapHook();
|
||||
inline static PreMmapHook SetPreMmapHook(PreMmapHook hook) {
|
||||
return MallocHook_SetPreMmapHook(hook);
|
||||
}
|
||||
|
||||
inline static MmapHook GetMmapHook();
|
||||
inline static MmapHook SetMmapHook(MmapHook hook) {
|
||||
return MallocHook_SetMmapHook(hook);
|
||||
}
|
||||
|
||||
inline static MunmapHook GetMunmapHook();
|
||||
inline static MunmapHook SetMunmapHook(MunmapHook hook) {
|
||||
return MallocHook_SetMunmapHook(hook);
|
||||
}
|
||||
|
||||
inline static MremapHook GetMremapHook();
|
||||
inline static MremapHook SetMremapHook(MremapHook hook) {
|
||||
return MallocHook_SetMremapHook(hook);
|
||||
}
|
||||
|
||||
inline static PreSbrkHook GetPreSbrkHook();
|
||||
inline static PreSbrkHook SetPreSbrkHook(PreSbrkHook hook) {
|
||||
return MallocHook_SetPreSbrkHook(hook);
|
||||
}
|
||||
|
||||
inline static SbrkHook GetSbrkHook();
|
||||
inline static SbrkHook SetSbrkHook(SbrkHook hook) {
|
||||
return MallocHook_SetSbrkHook(hook);
|
||||
}
|
||||
// End of DEPRECATED methods.
|
||||
|
||||
private:
|
||||
// Slow path versions of Invoke*Hook.
|
||||
static void InvokeNewHookSlow(const void* p, size_t s);
|
||||
static void InvokeDeleteHookSlow(const void* p);
|
||||
static void InvokePreMmapHookSlow(const void* start,
|
||||
size_t size,
|
||||
int protection,
|
||||
int flags,
|
||||
int fd,
|
||||
off_t offset);
|
||||
static void InvokeMmapHookSlow(const void* result,
|
||||
const void* start,
|
||||
size_t size,
|
||||
int protection,
|
||||
int flags,
|
||||
int fd,
|
||||
off_t offset);
|
||||
static bool InvokeMmapReplacementSlow(const void* start,
|
||||
size_t size,
|
||||
int protection,
|
||||
int flags,
|
||||
int fd,
|
||||
off_t offset,
|
||||
void** result);
|
||||
static void InvokeMunmapHookSlow(const void* p, size_t size);
|
||||
static bool InvokeMunmapReplacementSlow(const void* p,
|
||||
size_t size,
|
||||
int* result);
|
||||
static void InvokeMremapHookSlow(const void* result,
|
||||
const void* old_addr,
|
||||
size_t old_size,
|
||||
size_t new_size,
|
||||
int flags,
|
||||
const void* new_addr);
|
||||
static void InvokePreSbrkHookSlow(ptrdiff_t increment);
|
||||
static void InvokeSbrkHookSlow(const void* result, ptrdiff_t increment);
|
||||
};
|
||||
|
||||
#ifdef _MSC_VER
|
||||
#pragma warning(pop)
|
||||
#endif
|
||||
|
||||
|
||||
#endif /* _MALLOC_HOOK_H_ */
|
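For reference, a minimal usage sketch of the hook API above (not part of the commit). The callback and counter names are invented for illustration; hooks run inside the allocator, so they should stay small, avoid allocating, and be safe to call from any thread.

#include <atomic>
#include <cstdio>
#include <cstdlib>
#include <gperftools/malloc_hook.h>

static std::atomic<size_t> g_allocs{0};

// Runs on every allocation; ptr may be NULL if the allocator failed.
static void CountNew(const void* ptr, size_t size) {
  (void)size;
  if (ptr != nullptr) g_allocs.fetch_add(1, std::memory_order_relaxed);
}

int main() {
  MallocHook::AddNewHook(&CountNew);    // returns true on success
  void* p = std::malloc(128);           // CountNew fires here
  std::free(p);
  MallocHook::RemoveNewHook(&CountNew);
  std::printf("allocations seen: %zu\n", g_allocs.load());
}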
173
3party/gperftools/src/gperftools/malloc_hook_c.h
Normal file
@ -0,0 +1,173 @@
/* Copyright (c) 2008, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * --
 * Author: Craig Silverstein
 *
 * C shims for the C++ malloc_hook.h. See malloc_hook.h for details
 * on how to use these.
 */

#ifndef _MALLOC_HOOK_C_H_
#define _MALLOC_HOOK_C_H_

#include <stddef.h>
#include <sys/types.h>

/* Annoying stuff for windows; makes sure clients can import these functions */
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

#ifdef __cplusplus
extern "C" {
#endif

/* Get the current stack trace. Try to skip all routines up to
 * and including the caller of MallocHook::Invoke*.
 * Use "skip_count" (similarly to GetStackTrace from stacktrace.h)
 * as a hint about how many routines to skip if better information
 * is not available.
 */
PERFTOOLS_DLL_DECL
int MallocHook_GetCallerStackTrace(void** result, int max_depth,
                                   int skip_count);

/* The MallocHook_{Add,Remove}*Hook functions return 1 on success and 0 on
 * failure.
 */

typedef void (*MallocHook_NewHook)(const void* ptr, size_t size);
PERFTOOLS_DLL_DECL
int MallocHook_AddNewHook(MallocHook_NewHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveNewHook(MallocHook_NewHook hook);

typedef void (*MallocHook_DeleteHook)(const void* ptr);
PERFTOOLS_DLL_DECL
int MallocHook_AddDeleteHook(MallocHook_DeleteHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveDeleteHook(MallocHook_DeleteHook hook);

typedef void (*MallocHook_PreMmapHook)(const void *start,
                                       size_t size,
                                       int protection,
                                       int flags,
                                       int fd,
                                       off_t offset);
PERFTOOLS_DLL_DECL
int MallocHook_AddPreMmapHook(MallocHook_PreMmapHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemovePreMmapHook(MallocHook_PreMmapHook hook);

typedef void (*MallocHook_MmapHook)(const void* result,
                                    const void* start,
                                    size_t size,
                                    int protection,
                                    int flags,
                                    int fd,
                                    off_t offset);
PERFTOOLS_DLL_DECL
int MallocHook_AddMmapHook(MallocHook_MmapHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveMmapHook(MallocHook_MmapHook hook);

typedef int (*MallocHook_MmapReplacement)(const void* start,
                                          size_t size,
                                          int protection,
                                          int flags,
                                          int fd,
                                          off_t offset,
                                          void** result);
int MallocHook_SetMmapReplacement(MallocHook_MmapReplacement hook);
int MallocHook_RemoveMmapReplacement(MallocHook_MmapReplacement hook);

typedef void (*MallocHook_MunmapHook)(const void* ptr, size_t size);
PERFTOOLS_DLL_DECL
int MallocHook_AddMunmapHook(MallocHook_MunmapHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveMunmapHook(MallocHook_MunmapHook hook);

typedef int (*MallocHook_MunmapReplacement)(const void* ptr,
                                            size_t size,
                                            int* result);
int MallocHook_SetMunmapReplacement(MallocHook_MunmapReplacement hook);
int MallocHook_RemoveMunmapReplacement(MallocHook_MunmapReplacement hook);

typedef void (*MallocHook_MremapHook)(const void* result,
                                      const void* old_addr,
                                      size_t old_size,
                                      size_t new_size,
                                      int flags,
                                      const void* new_addr);
PERFTOOLS_DLL_DECL
int MallocHook_AddMremapHook(MallocHook_MremapHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveMremapHook(MallocHook_MremapHook hook);

typedef void (*MallocHook_PreSbrkHook)(ptrdiff_t increment);
PERFTOOLS_DLL_DECL
int MallocHook_AddPreSbrkHook(MallocHook_PreSbrkHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemovePreSbrkHook(MallocHook_PreSbrkHook hook);

typedef void (*MallocHook_SbrkHook)(const void* result, ptrdiff_t increment);
PERFTOOLS_DLL_DECL
int MallocHook_AddSbrkHook(MallocHook_SbrkHook hook);
PERFTOOLS_DLL_DECL
int MallocHook_RemoveSbrkHook(MallocHook_SbrkHook hook);

/* The following are DEPRECATED. */
PERFTOOLS_DLL_DECL
MallocHook_NewHook MallocHook_SetNewHook(MallocHook_NewHook hook);
PERFTOOLS_DLL_DECL
MallocHook_DeleteHook MallocHook_SetDeleteHook(MallocHook_DeleteHook hook);
PERFTOOLS_DLL_DECL
MallocHook_PreMmapHook MallocHook_SetPreMmapHook(MallocHook_PreMmapHook hook);
PERFTOOLS_DLL_DECL
MallocHook_MmapHook MallocHook_SetMmapHook(MallocHook_MmapHook hook);
PERFTOOLS_DLL_DECL
MallocHook_MunmapHook MallocHook_SetMunmapHook(MallocHook_MunmapHook hook);
PERFTOOLS_DLL_DECL
MallocHook_MremapHook MallocHook_SetMremapHook(MallocHook_MremapHook hook);
PERFTOOLS_DLL_DECL
MallocHook_PreSbrkHook MallocHook_SetPreSbrkHook(MallocHook_PreSbrkHook hook);
PERFTOOLS_DLL_DECL
MallocHook_SbrkHook MallocHook_SetSbrkHook(MallocHook_SbrkHook hook);
/* End of DEPRECATED functions. */

#ifdef __cplusplus
}   // extern "C"
#endif

#endif /* _MALLOC_HOOK_C_H_ */
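The same registration from plain C via the shims above (again a sketch, not from the commit); note that the C entry points report success as 1 rather than true.

#include <stdio.h>
#include <gperftools/malloc_hook_c.h>

static void OnDelete(const void* ptr) { (void)ptr; /* e.g. bump a counter */ }

void InstallDeleteHook(void) {
  if (MallocHook_AddDeleteHook(&OnDelete) != 1)
    fprintf(stderr, "failed to add delete hook\n");
}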
37
3party/gperftools/src/gperftools/nallocx.h
Normal file
@ -0,0 +1,37 @@
#ifndef _NALLOCX_H_
#define _NALLOCX_H_
#include <stddef.h>

#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

#ifdef __cplusplus
extern "C" {
#endif

#define MALLOCX_LG_ALIGN(la) ((int)(la))

/*
 * The nallocx function allocates no memory, but it performs the same size
 * computation as the malloc function, and returns the real size of the
 * allocation that would result from the equivalent malloc function call.
 * nallocx is a malloc extension originally implemented by jemalloc:
 * http://www.unix.com/man-page/freebsd/3/nallocx/
 *
 * Note: we only support the MALLOCX_LG_ALIGN flag and nothing else.
 */
PERFTOOLS_DLL_DECL size_t nallocx(size_t size, int flags);

/* same as above but never weak */
PERFTOOLS_DLL_DECL size_t tc_nallocx(size_t size, int flags);

#ifdef __cplusplus
} /* extern "C" */
#endif

#endif /* _NALLOCX_H_ */
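A sketch of how nallocx answers "how big would this allocation really be" (not part of the commit; the printed numbers depend on tcmalloc's size classes and are only illustrative):

#include <stdio.h>
#include <gperftools/nallocx.h>

int main() {
  size_t plain   = nallocx(100, 0);                    /* rounded-up size class   */
  size_t aligned = nallocx(100, MALLOCX_LG_ALIGN(6));  /* with 2^6-byte alignment */
  printf("100 bytes -> %zu plain, %zu aligned\n", plain, aligned);
  return 0;
}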
173
3party/gperftools/src/gperftools/profiler.h
Normal file
@ -0,0 +1,173 @@
/* -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*- */
/* Copyright (c) 2005, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat
 *
 * Module for CPU profiling based on periodic pc-sampling.
 *
 * For full(er) information, see docs/cpuprofile.html
 *
 * This module is linked into your program; it causes no slowdown
 * unless you activate the profiler using one of the following methods:
 *
 *    1. Before starting the program, set the environment variable
 *       "CPUPROFILE" to be the name of the file to which the profile
 *       data should be written.
 *
 *    2. Programmatically, start and stop the profiler using the
 *       routines "ProfilerStart(filename)" and "ProfilerStop()".
 *
 *
 * (Note: if using linux 2.4 or earlier, only the main thread may be
 * profiled.)
 *
 * Use pprof to view the resulting profile output.
 *    % pprof <path_to_executable> <profile_file_name>
 *    % pprof --gv <path_to_executable> <profile_file_name>
 *
 * These functions are thread-safe.
 */

#ifndef BASE_PROFILER_H_
#define BASE_PROFILER_H_

#include <time.h>       /* For time_t */

/* Annoying stuff for windows; makes sure clients can import these functions */
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

/* All this code should be usable from within C apps. */
#ifdef __cplusplus
extern "C" {
#endif

/* Profiler options, for use with ProfilerStartWithOptions. To use:
 *
 *   struct ProfilerOptions options;
 *   memset(&options, 0, sizeof options);
 *
 * then fill in fields as needed.
 *
 * This structure is intended to be usable from C code, so no constructor
 * is provided to initialize it. (Use memset as described above.)
 */
struct ProfilerOptions {
  /* Filter function and argument.
   *
   * If filter_in_thread is not NULL, when a profiling tick is delivered
   * the profiler will call:
   *
   *   (*filter_in_thread)(filter_in_thread_arg)
   *
   * If it returns nonzero, the sample will be included in the profile.
   * Note that filter_in_thread runs in a signal handler, so it must be
   * async-signal-safe.
   *
   * A typical use would be to set up filter results for each thread
   * in the system before starting the profiler, then to make
   * filter_in_thread be a very simple function which retrieves those
   * results in an async-signal-safe way. Retrieval could be done
   * using thread-specific data, or using a shared data structure that
   * supports async-signal-safe lookups.
   */
  int (*filter_in_thread)(void *arg);
  void *filter_in_thread_arg;
};
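/* A sketch of wiring up the filter (not part of this header). The
 * thread-local flag is hypothetical; reading a plain thread-local int is
 * async-signal-safe on mainstream platforms, which is what the contract
 * above requires:
 *
 *   static __thread int profile_this_thread;
 *
 *   static int FilterInThread(void* arg) {
 *     (void)arg;
 *     return profile_this_thread;    // nonzero => keep the sample
 *   }
 *
 *   struct ProfilerOptions options;
 *   memset(&options, 0, sizeof options);
 *   options.filter_in_thread = &FilterInThread;
 *   ProfilerStartWithOptions("/tmp/cpu.prof", &options);
 */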

/* Start profiling and write profile info into fname, discarding any
 * existing profiling data in that file.
 *
 * This is equivalent to calling ProfilerStartWithOptions(fname, NULL).
 */
PERFTOOLS_DLL_DECL int ProfilerStart(const char* fname);

/* Start profiling and write profile into fname, discarding any
 * existing profiling data in that file.
 *
 * The profiler is configured using the options given by 'options'.
 * Options which are not specified are given default values.
 *
 * 'options' may be NULL, in which case all are given default values.
 *
 * Returns nonzero if profiling was started successfully, or zero otherwise.
 */
PERFTOOLS_DLL_DECL int ProfilerStartWithOptions(
    const char *fname, const struct ProfilerOptions *options);

/* Stop profiling. Can be started again with ProfilerStart(), but
 * the currently accumulated profiling data will be cleared.
 */
PERFTOOLS_DLL_DECL void ProfilerStop(void);

/* Flush any currently buffered profiling state to the profile file.
 * Has no effect if the profiler has not been started.
 */
PERFTOOLS_DLL_DECL void ProfilerFlush(void);


/* DEPRECATED: these functions were used to enable/disable profiling
 * in the current thread, but no longer do anything.
 */
PERFTOOLS_DLL_DECL void ProfilerEnable(void);
PERFTOOLS_DLL_DECL void ProfilerDisable(void);

/* Returns nonzero if profile is currently enabled, zero if it's not. */
PERFTOOLS_DLL_DECL int ProfilingIsEnabledForAllThreads(void);

/* Routine for registering new threads with the profiler. */
PERFTOOLS_DLL_DECL void ProfilerRegisterThread(void);

/* Stores state about profiler's current status into "*state". */
struct ProfilerState {
  int    enabled;             /* Is profiling currently enabled? */
  time_t start_time;          /* If enabled, when was profiling started? */
  char   profile_name[1024];  /* Name of profile file being written, or '\0' */
  int    samples_gathered;    /* Number of samples gathered so far (or 0) */
};
PERFTOOLS_DLL_DECL void ProfilerGetCurrentState(struct ProfilerState* state);

/* Returns the current stack trace, to be called from a SIGPROF handler. */
PERFTOOLS_DLL_DECL int ProfilerGetStackTrace(
    void** result, int max_depth, int skip_count, const void *uc);

#ifdef __cplusplus
}  // extern "C"
#endif

#endif /* BASE_PROFILER_H_ */
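A minimal end-to-end sketch of the programmatic interface above (not part of the commit): profile a busy loop, then inspect the output with pprof.

#include <gperftools/profiler.h>

static int Work(int n) {
  int s = 0;
  for (int i = 0; i < n; ++i) s += i % 7;
  return s;
}

int main() {
  ProfilerStart("/tmp/cpu.prof");          /* nonzero return => started */
  volatile int sink = 0;
  for (int i = 0; i < 200000; ++i) sink += Work(1000);
  ProfilerStop();                          /* writes /tmp/cpu.prof */
  return 0;
}
/* Afterwards: pprof --text <path/to/binary> /tmp/cpu.prof */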
117
3party/gperftools/src/gperftools/stacktrace.h
Normal file
@ -0,0 +1,117 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//
// Routines to extract the current stack trace. These functions are
// thread-safe.

#ifndef GOOGLE_STACKTRACE_H_
#define GOOGLE_STACKTRACE_H_

// Annoying stuff for windows -- makes sure clients can import these functions
#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif


// Skips the most recent "skip_count" stack frames (also skips the
// frame generated for the "GetStackFrames" routine itself), and then
// records the pc values for up to the next "max_depth" frames in
// "result", and the corresponding stack frame sizes in "sizes".
// Returns the number of values recorded in "result"/"sizes".
//
// Example:
//      main() { foo(); }
//      foo() { bar(); }
//      bar() {
//        void* result[10];
//        int sizes[10];
//        int depth = GetStackFrames(result, sizes, 10, 1);
//      }
//
// The GetStackFrames call will skip the frame for "bar". It will
// return 2 and will produce pc values that map to the following
// procedures:
//      result[0]       foo
//      result[1]       main
// (Actually, there may be a few more entries after "main" to account for
// startup procedures.)
// And corresponding stack frame sizes will also be recorded:
//      sizes[0]        16
//      sizes[1]        16
// (Stack frame sizes of 16 above are just for illustration purposes.)
// Stack frame sizes of 0 or less indicate that those frame sizes couldn't
// be identified.
//
// This routine may return fewer stack frame entries than are
// available. Also note that "result" and "sizes" must both be non-NULL.
extern PERFTOOLS_DLL_DECL int GetStackFrames(void** result, int* sizes, int max_depth,
                                             int skip_count);

// Same as above, but to be used from a signal handler. The "uc" parameter
// should be the pointer to ucontext_t which was passed as the 3rd parameter
// to sa_sigaction signal handler. It may help the unwinder to get a
// better stack trace under certain conditions. The "uc" may safely be NULL.
extern PERFTOOLS_DLL_DECL int GetStackFramesWithContext(void** result, int* sizes, int max_depth,
                                                        int skip_count, const void *uc);

// This is similar to the GetStackFrames routine, except that it returns
// the stack trace only, and not the stack frame sizes as well.
// Example:
//      main() { foo(); }
//      foo() { bar(); }
//      bar() {
//        void* result[10];
//        int depth = GetStackTrace(result, 10, 1);
//      }
//
// This produces:
//      result[0]       foo
//      result[1]       main
//           ....       ...
//
// "result" must not be NULL.
extern PERFTOOLS_DLL_DECL int GetStackTrace(void** result, int max_depth,
                                            int skip_count);

// Same as above, but to be used from a signal handler. The "uc" parameter
// should be the pointer to ucontext_t which was passed as the 3rd parameter
// to sa_sigaction signal handler. It may help the unwinder to get a
// better stack trace under certain conditions. The "uc" may safely be NULL.
extern PERFTOOLS_DLL_DECL int GetStackTraceWithContext(void** result, int max_depth,
                                                       int skip_count, const void *uc);

#endif /* GOOGLE_STACKTRACE_H_ */
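A small sketch of the trace-only variant above (not part of the commit). It prints raw program-counter values; turning those into symbols is pprof's job.

#include <stdio.h>
#include <gperftools/stacktrace.h>

void PrintCurrentStack(void) {
  void* pcs[32];
  int depth = GetStackTrace(pcs, 32, 0);   // 0: keep our own frame too
  for (int i = 0; i < depth; ++i)
    printf("  #%d %p\n", i, pcs[i]);
}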
166
3party/gperftools/src/gperftools/tcmalloc.h.in
Normal file
@ -0,0 +1,166 @@
/* -*- Mode: C; c-basic-offset: 2; indent-tabs-mode: nil -*- */
/* Copyright (c) 2003, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Sanjay Ghemawat <opensource@google.com>
 *         .h file by Craig Silverstein <opensource@google.com>
 */

#ifndef TCMALLOC_TCMALLOC_H_
#define TCMALLOC_TCMALLOC_H_

#include <stddef.h>                     /* for size_t */
#ifdef __cplusplus
#include <new>                          /* for std::nothrow_t, std::align_val_t */
#endif

/* Define the version number so folks can check against it */
#define TC_VERSION_MAJOR  @TC_VERSION_MAJOR@
#define TC_VERSION_MINOR  @TC_VERSION_MINOR@
#define TC_VERSION_PATCH  "@TC_VERSION_PATCH@"
#define TC_VERSION_STRING "gperftools @TC_VERSION_MAJOR@.@TC_VERSION_MINOR@@TC_VERSION_PATCH@"

/* For struct mallinfo, if it's defined. */
#if @ac_cv_have_struct_mallinfo@ || @ac_cv_have_struct_mallinfo2@
# include <malloc.h>
#endif

#ifndef PERFTOOLS_NOTHROW

#if __cplusplus >= 201103L
#define PERFTOOLS_NOTHROW noexcept
#elif defined(__cplusplus)
#define PERFTOOLS_NOTHROW throw()
#else
# ifdef __GNUC__
#  define PERFTOOLS_NOTHROW __attribute__((__nothrow__))
# else
#  define PERFTOOLS_NOTHROW
# endif
#endif

#endif

#ifndef PERFTOOLS_DLL_DECL
# ifdef _WIN32
#   define PERFTOOLS_DLL_DECL  __declspec(dllimport)
# else
#   define PERFTOOLS_DLL_DECL
# endif
#endif

#ifdef __cplusplus
extern "C" {
#endif
  /*
   * Returns a human-readable version string. If major, minor,
   * and/or patch are not NULL, they are set to the major version,
   * minor version, and patch-code (a string, usually "").
   */
  PERFTOOLS_DLL_DECL const char* tc_version(int* major, int* minor,
                                            const char** patch) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void* tc_malloc(size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_malloc_skip_new_handler(size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_free(void* ptr) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_free_sized(void *ptr, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_realloc(void* ptr, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_calloc(size_t nmemb, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_cfree(void* ptr) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void* tc_memalign(size_t __alignment,
                                       size_t __size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL int tc_posix_memalign(void** ptr,
                                           size_t align, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_valloc(size_t __size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_pvalloc(size_t __size) PERFTOOLS_NOTHROW;

  PERFTOOLS_DLL_DECL void tc_malloc_stats(void) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL int tc_mallopt(int cmd, int value) PERFTOOLS_NOTHROW;
#if @ac_cv_have_struct_mallinfo@
  PERFTOOLS_DLL_DECL struct mallinfo tc_mallinfo(void) PERFTOOLS_NOTHROW;
#endif
#if @ac_cv_have_struct_mallinfo2@
  PERFTOOLS_DLL_DECL struct mallinfo2 tc_mallinfo2(void) PERFTOOLS_NOTHROW;
#endif

  /*
   * This is an alias for MallocExtension::instance()->GetAllocatedSize().
   * It is equivalent to
   *    OS X: malloc_size()
   *    glibc: malloc_usable_size()
   *    Windows: _msize()
   */
  PERFTOOLS_DLL_DECL size_t tc_malloc_size(void* ptr) PERFTOOLS_NOTHROW;

#ifdef __cplusplus
  PERFTOOLS_DLL_DECL int tc_set_new_mode(int flag) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_new(size_t size);
  PERFTOOLS_DLL_DECL void* tc_new_nothrow(size_t size,
                                          const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete(void* p) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_sized(void* p, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_nothrow(void* p,
                                            const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_newarray(size_t size);
  PERFTOOLS_DLL_DECL void* tc_newarray_nothrow(size_t size,
                                               const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray(void* p) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_sized(void* p, size_t size) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_nothrow(void* p,
                                                 const std::nothrow_t&) PERFTOOLS_NOTHROW;

#if @ac_cv_have_std_align_val_t@ && __cplusplus >= 201703L
  PERFTOOLS_DLL_DECL void* tc_new_aligned(size_t size, std::align_val_t al);
  PERFTOOLS_DLL_DECL void* tc_new_aligned_nothrow(size_t size, std::align_val_t al,
                                                  const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_aligned(void* p, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_sized_aligned(void* p, size_t size, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_delete_aligned_nothrow(void* p, std::align_val_t al,
                                                    const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void* tc_newarray_aligned(size_t size, std::align_val_t al);
  PERFTOOLS_DLL_DECL void* tc_newarray_aligned_nothrow(size_t size, std::align_val_t al,
                                                       const std::nothrow_t&) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_aligned(void* p, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_sized_aligned(void* p, size_t size, std::align_val_t al) PERFTOOLS_NOTHROW;
  PERFTOOLS_DLL_DECL void tc_deletearray_aligned_nothrow(void* p, std::align_val_t al,
                                                         const std::nothrow_t&) PERFTOOLS_NOTHROW;
#endif
}
#endif

/* We're only un-defining for public */
#if !defined(GPERFTOOLS_CONFIG_H_)

#undef PERFTOOLS_NOTHROW

#endif /* GPERFTOOLS_CONFIG_H_ */

#endif  /* #ifndef TCMALLOC_TCMALLOC_H_ */
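A sketch of calling the tc_* entry points directly after linking with -ltcmalloc (not part of the commit; the printed size depends on the size class tcmalloc picked, so the value is only an example):

#include <stdio.h>
#include <gperftools/tcmalloc.h>

int main() {
  int major, minor;
  const char* patch;
  printf("running %s\n", tc_version(&major, &minor, &patch));

  void* p = tc_malloc(100);
  printf("a 100-byte request actually owns %zu bytes\n", tc_malloc_size(p));
  tc_free(p);
  return 0;
}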
98
3party/gperftools/src/heap-checker-bcad.cc
Normal file
@ -0,0 +1,98 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// All Rights Reserved.
//
// Author: Maxim Lifantsev
//
// A file to ensure that components of heap leak checker run before
// all global object constructors and after all global object
// destructors.
//
// This file must be the last library any binary links against.
// Otherwise, the heap checker may not be able to run early enough to
// catalog all the global objects in your program. If this happens,
// and later in the program you allocate memory and have one of these
// "uncataloged" global objects point to it, the heap checker will
// consider that allocation to be a leak, even though it's not (since
// the allocated object is reachable from global data and hence "live").

#include <stdlib.h>      // for abort()
#include <gperftools/malloc_extension.h>

// A dummy variable to refer from heap-checker.cc. This is to make
// sure this file is not optimized out by the linker.
bool heap_leak_checker_bcad_variable;

extern void HeapLeakChecker_AfterDestructors();  // in heap-checker.cc

// A helper class to ensure that some components of heap leak checking
// can happen before construction and after destruction
// of all global/static objects.
class HeapLeakCheckerGlobalPrePost {
 public:
  HeapLeakCheckerGlobalPrePost() {
    if (count_ == 0) {
      // The 'new int' will ensure that we have run an initial malloc
      // hook, which will set up the heap checker via
      // MallocHook_InitAtFirstAllocation_HeapLeakChecker. See malloc_hook.cc.
      // This is done in this roundabout fashion in order to avoid self-deadlock
      // if we directly called HeapLeakChecker_BeforeConstructors here.
      //
      // We use explicit global operator new/delete functions since modern
      // compilers optimize a plain 'naked' `delete new int` out to nothing.
      // And apparently calling those global new/delete functions is assumed
      // by compilers to be 'for effect' as well.
      (operator delete)((operator new)(4));
      // This needs to be called before the first allocation of an STL
      // object, but after libc is done setting up threads (because it
      // calls setenv, which requires a thread-aware errno). By
      // putting it here, we hope it's the first bit of code executed
      // after the libc global-constructor code.
      MallocExtension::Initialize();
    }
    ++count_;
  }
  ~HeapLeakCheckerGlobalPrePost() {
    if (count_ <= 0) abort();
    --count_;
    if (count_ == 0) HeapLeakChecker_AfterDestructors();
  }
 private:
  // Counter of constructions/destructions of objects of this class
  // (just in case there are more than one of them).
  static int count_;
};

int HeapLeakCheckerGlobalPrePost::count_ = 0;

// The early-construction/late-destruction global object.
static const HeapLeakCheckerGlobalPrePost heap_leak_checker_global_pre_post;
2442
3party/gperftools/src/heap-checker.cc
Normal file
File diff suppressed because it is too large
78
3party/gperftools/src/heap-profile-stats.h
Normal file
@ -0,0 +1,78 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2013, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// This file defines structs to accumulate memory allocation and deallocation
// counts. These structs are commonly used for malloc (in HeapProfileTable)
// and mmap (in MemoryRegionMap).

// A bucket is a data structure for heap profiling to store a pair of a stack
// trace and counts of (de)allocation. Buckets are stored in a hash table
// which is declared as "HeapProfileBucket**".
//
// A hash value is computed from a stack trace. Collisions in the hash table
// are resolved by separate chaining with linked lists. The links in the list
// are implemented with the member "HeapProfileBucket* next".
//
// A structure of a hash table HeapProfileBucket** bucket_table would be like:
//   bucket_table[0] => NULL
//   bucket_table[1] => HeapProfileBucket() => HeapProfileBucket() => NULL
//   ...
//   bucket_table[i] => HeapProfileBucket() => NULL
//   ...
//   bucket_table[n] => HeapProfileBucket() => NULL

#ifndef HEAP_PROFILE_STATS_H_
#define HEAP_PROFILE_STATS_H_

#include <stdint.h>  // for int64_t, uintptr_t

struct HeapProfileStats {
  // Returns true if the two HeapProfileStats are semantically equal.
  bool Equivalent(const HeapProfileStats& other) const {
    return allocs - frees == other.allocs - other.frees &&
           alloc_size - free_size == other.alloc_size - other.free_size;
  }

  int64_t allocs;      // Number of allocation calls.
  int64_t frees;       // Number of free calls.
  int64_t alloc_size;  // Total size of all allocated objects so far.
  int64_t free_size;   // Total size of all freed objects so far.
};

// Allocation and deallocation statistics per each stack trace.
struct HeapProfileBucket : public HeapProfileStats {
  // Longest stack trace we record.
  static const int kMaxStackDepth = 32;

  uintptr_t hash;           // Hash value of the stack trace.
  int depth;                // Depth of stack trace.
  const void** stack;       // Stack trace.
  HeapProfileBucket* next;  // Next entry in hash-table.
};

#endif  // HEAP_PROFILE_STATS_H_
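For orientation, a sketch of walking such a table (not from the commit). The table size is a stand-in chosen by whoever owns the table; heap-profile-table.cc below uses 179999.

#include <cstddef>
#include "heap-profile-stats.h"

static const int kTableSize = 179999;   // hypothetical; the owner decides

// Visit every bucket by following each chain to its NULL terminator.
template <typename Callback>
void ForEachBucket(HeapProfileBucket** bucket_table, Callback visit) {
  for (int i = 0; i < kTableSize; ++i)
    for (HeapProfileBucket* b = bucket_table[i]; b != NULL; b = b->next)
      visit(*b);
}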
628
3party/gperftools/src/heap-profile-table.cc
Normal file
@ -0,0 +1,628 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2006, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//         Maxim Lifantsev (refactoring)
//

#include <config.h>

#ifdef HAVE_UNISTD_H
#include <unistd.h>   // for write()
#endif
#include <fcntl.h>    // for open()
#ifdef HAVE_GLOB_H
#include <glob.h>
#ifndef GLOB_NOMATCH  // true on some old cygwins
# define GLOB_NOMATCH 0
#endif
#endif
#include <inttypes.h> // for PRIxPTR
#ifdef HAVE_POLL_H
#include <poll.h>
#endif
#include <errno.h>
#include <stdarg.h>
#include <string>
#include <map>
#include <algorithm>  // for sort(), equal(), and copy()

#include "heap-profile-table.h"

#include "base/logging.h"
#include "raw_printer.h"
#include "symbolize.h"
#include <gperftools/stacktrace.h>
#include <gperftools/malloc_hook.h>
#include "memory_region_map.h"
#include "base/commandlineflags.h"
#include "base/logging.h"    // for the RawFD I/O commands
#include "base/sysinfo.h"

using std::sort;
using std::equal;
using std::copy;
using std::string;
using std::map;

using tcmalloc::FillProcSelfMaps;   // from sysinfo.h
using tcmalloc::DumpProcSelfMaps;   // from sysinfo.h

//----------------------------------------------------------------------

DEFINE_bool(cleanup_old_heap_profiles,
            EnvToBool("HEAP_PROFILE_CLEANUP", true),
            "At initialization time, delete old heap profiles.");

DEFINE_int32(heap_check_max_leaks,
             EnvToInt("HEAP_CHECK_MAX_LEAKS", 20),
             "The maximum number of leak reports to print.");

//----------------------------------------------------------------------

// header of the dumped heap profile
static const char kProfileHeader[] = "heap profile: ";
static const char kProcSelfMapsHeader[] = "\nMAPPED_LIBRARIES:\n";

//----------------------------------------------------------------------

const char HeapProfileTable::kFileExt[] = ".heap";

//----------------------------------------------------------------------

static const int kHashTableSize = 179999;   // Size for bucket_table_.
/*static*/ const int HeapProfileTable::kMaxStackDepth;

//----------------------------------------------------------------------

// We strip out a different number of stack frames in debug mode
// because less inlining happens in that case
#ifdef NDEBUG
static const int kStripFrames = 2;
#else
static const int kStripFrames = 3;
#endif

// For sorting Stats or Buckets by in-use space
static bool ByAllocatedSpace(HeapProfileTable::Stats* a,
                             HeapProfileTable::Stats* b) {
  // Return true iff "a" has more allocated space than "b"
  return (a->alloc_size - a->free_size) > (b->alloc_size - b->free_size);
}

//----------------------------------------------------------------------

HeapProfileTable::HeapProfileTable(Allocator alloc,
                                   DeAllocator dealloc,
                                   bool profile_mmap)
    : alloc_(alloc),
      dealloc_(dealloc),
      profile_mmap_(profile_mmap),
      bucket_table_(NULL),
      num_buckets_(0),
      address_map_(NULL) {
  // Make a hash table for buckets.
  const int table_bytes = kHashTableSize * sizeof(*bucket_table_);
  bucket_table_ = static_cast<Bucket**>(alloc_(table_bytes));
  memset(bucket_table_, 0, table_bytes);

  // Make an allocation map.
  address_map_ =
      new(alloc_(sizeof(AllocationMap))) AllocationMap(alloc_, dealloc_);

  // Initialize.
  memset(&total_, 0, sizeof(total_));
  num_buckets_ = 0;
}

HeapProfileTable::~HeapProfileTable() {
  // Free the allocation map.
  address_map_->~AllocationMap();
  dealloc_(address_map_);
  address_map_ = NULL;

  // Free the hash table.
  for (int i = 0; i < kHashTableSize; i++) {
    for (Bucket* curr = bucket_table_[i]; curr != 0; /**/) {
      Bucket* bucket = curr;
      curr = curr->next;
      dealloc_(bucket->stack);
      dealloc_(bucket);
    }
  }
  dealloc_(bucket_table_);
  bucket_table_ = NULL;
}

HeapProfileTable::Bucket* HeapProfileTable::GetBucket(
    int depth, const void* const key[]) {
  // Make hash-value
  uintptr_t h = 0;
  for (int i = 0; i < depth; i++) {
    h += reinterpret_cast<uintptr_t>(key[i]);
    h += h << 10;
    h ^= h >> 6;
  }
  h += h << 3;
  h ^= h >> 11;

  // Lookup stack trace in table
  unsigned int buck = ((unsigned int) h) % kHashTableSize;
  for (Bucket* b = bucket_table_[buck]; b != 0; b = b->next) {
    if ((b->hash == h) &&
        (b->depth == depth) &&
        equal(key, key + depth, b->stack)) {
      return b;
    }
  }

  // Create new bucket
  const size_t key_size = sizeof(key[0]) * depth;
  const void** kcopy = reinterpret_cast<const void**>(alloc_(key_size));
  copy(key, key + depth, kcopy);
  Bucket* b = reinterpret_cast<Bucket*>(alloc_(sizeof(Bucket)));
  memset(b, 0, sizeof(*b));
  b->hash  = h;
  b->depth = depth;
  b->stack = kcopy;
  b->next  = bucket_table_[buck];
  bucket_table_[buck] = b;
  num_buckets_++;
  return b;
}
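// Aside (not in the original source): the mixing in GetBucket above is a
// variant of the Jenkins one-at-a-time hash, applied per frame address:
//   h += key[i];  h += h << 10;  h ^= h >> 6;    // per element
//   h += h << 3;  h ^= h >> 11;                  // final avalanche
// so stacks differing in a single frame land in different chains with
// high probability.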
|
||||
int HeapProfileTable::GetCallerStackTrace(
|
||||
int skip_count, void* stack[kMaxStackDepth]) {
|
||||
return MallocHook::GetCallerStackTrace(
|
||||
stack, kMaxStackDepth, kStripFrames + skip_count + 1);
|
||||
}
|
||||
|
||||
void HeapProfileTable::RecordAlloc(
|
||||
const void* ptr, size_t bytes, int stack_depth,
|
||||
const void* const call_stack[]) {
|
||||
Bucket* b = GetBucket(stack_depth, call_stack);
|
||||
b->allocs++;
|
||||
b->alloc_size += bytes;
|
||||
total_.allocs++;
|
||||
total_.alloc_size += bytes;
|
||||
|
||||
AllocValue v;
|
||||
v.set_bucket(b); // also did set_live(false); set_ignore(false)
|
||||
v.bytes = bytes;
|
||||
address_map_->Insert(ptr, v);
|
||||
}
|
||||
|
||||
void HeapProfileTable::RecordFree(const void* ptr) {
|
||||
AllocValue v;
|
||||
if (address_map_->FindAndRemove(ptr, &v)) {
|
||||
Bucket* b = v.bucket();
|
||||
b->frees++;
|
||||
b->free_size += v.bytes;
|
||||
total_.frees++;
|
||||
total_.free_size += v.bytes;
|
||||
}
|
||||
}
|
||||
|
||||
bool HeapProfileTable::FindAlloc(const void* ptr, size_t* object_size) const {
|
||||
const AllocValue* alloc_value = address_map_->Find(ptr);
|
||||
if (alloc_value != NULL) *object_size = alloc_value->bytes;
|
||||
return alloc_value != NULL;
|
||||
}
|
||||
|
||||
bool HeapProfileTable::FindAllocDetails(const void* ptr,
|
||||
AllocInfo* info) const {
|
||||
const AllocValue* alloc_value = address_map_->Find(ptr);
|
||||
if (alloc_value != NULL) {
|
||||
info->object_size = alloc_value->bytes;
|
||||
info->call_stack = alloc_value->bucket()->stack;
|
||||
info->stack_depth = alloc_value->bucket()->depth;
|
||||
}
|
||||
return alloc_value != NULL;
|
||||
}
|
||||
|
||||
bool HeapProfileTable::FindInsideAlloc(const void* ptr,
|
||||
size_t max_size,
|
||||
const void** object_ptr,
|
||||
size_t* object_size) const {
|
||||
const AllocValue* alloc_value =
|
||||
address_map_->FindInside(&AllocValueSize, max_size, ptr, object_ptr);
|
||||
if (alloc_value != NULL) *object_size = alloc_value->bytes;
|
||||
return alloc_value != NULL;
|
||||
}
|
||||
|
||||
bool HeapProfileTable::MarkAsLive(const void* ptr) {
|
||||
AllocValue* alloc = address_map_->FindMutable(ptr);
|
||||
if (alloc && !alloc->live()) {
|
||||
alloc->set_live(true);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
void HeapProfileTable::MarkAsIgnored(const void* ptr) {
|
||||
AllocValue* alloc = address_map_->FindMutable(ptr);
|
||||
if (alloc) {
|
||||
alloc->set_ignore(true);
|
||||
}
|
||||
}
|
||||
|
||||
// We'd be happier using snprintfer, but we don't to reduce dependencies.
|
||||
int HeapProfileTable::UnparseBucket(const Bucket& b,
|
||||
char* buf, int buflen, int bufsize,
|
||||
const char* extra,
|
||||
Stats* profile_stats) {
|
||||
if (profile_stats != NULL) {
|
||||
profile_stats->allocs += b.allocs;
|
||||
profile_stats->alloc_size += b.alloc_size;
|
||||
profile_stats->frees += b.frees;
|
||||
profile_stats->free_size += b.free_size;
|
||||
}
|
||||
int printed =
|
||||
snprintf(buf + buflen, bufsize - buflen, "%6" PRId64 ": %8" PRId64 " [%6" PRId64 ": %8" PRId64 "] @%s",
|
||||
b.allocs - b.frees,
|
||||
b.alloc_size - b.free_size,
|
||||
b.allocs,
|
||||
b.alloc_size,
|
||||
extra);
|
||||
// If it looks like the snprintf failed, ignore the fact we printed anything
|
||||
if (printed < 0 || printed >= bufsize - buflen) return buflen;
|
||||
buflen += printed;
|
||||
for (int d = 0; d < b.depth; d++) {
|
||||
printed = snprintf(buf + buflen, bufsize - buflen, " 0x%08" PRIxPTR,
|
||||
reinterpret_cast<uintptr_t>(b.stack[d]));
|
||||
if (printed < 0 || printed >= bufsize - buflen) return buflen;
|
||||
buflen += printed;
|
||||
}
|
||||
printed = snprintf(buf + buflen, bufsize - buflen, "\n");
|
||||
if (printed < 0 || printed >= bufsize - buflen) return buflen;
|
||||
buflen += printed;
|
||||
return buflen;
|
||||
}
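
// A hedged illustration (added; not part of the upstream source) of the
// line format UnparseBucket() produces with the snprintf pattern above.
// For a bucket with 10 allocs / 7 frees, 16384 bytes allocated / 12288
// freed, an empty "extra", and a two-frame stack, the emitted line looks
// roughly like:
//
//      3:     4096 [    10:    16384] @ 0x00401234 0x00405678
//
// i.e. "live objects: live bytes [total allocs: total bytes] @ stack".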

HeapProfileTable::Bucket**
HeapProfileTable::MakeSortedBucketList() const {
  Bucket** list =
      static_cast<Bucket**>(alloc_(sizeof(Bucket*) * num_buckets_));

  int bucket_count = 0;
  for (int i = 0; i < kHashTableSize; i++) {
    for (Bucket* curr = bucket_table_[i]; curr != 0; curr = curr->next) {
      list[bucket_count++] = curr;
    }
  }
  RAW_DCHECK(bucket_count == num_buckets_, "");

  sort(list, list + num_buckets_, ByAllocatedSpace);

  return list;
}

void HeapProfileTable::IterateOrderedAllocContexts(
    AllocContextIterator callback) const {
  Bucket** list = MakeSortedBucketList();
  AllocContextInfo info;
  for (int i = 0; i < num_buckets_; ++i) {
    *static_cast<Stats*>(&info) = *static_cast<Stats*>(list[i]);
    info.stack_depth = list[i]->depth;
    info.call_stack = list[i]->stack;
    callback(info);
  }
  dealloc_(list);
}

int HeapProfileTable::FillOrderedProfile(char buf[], int size) const {
  Bucket** list = MakeSortedBucketList();

  // Our file format is "bucket, bucket, ..., bucket, proc_self_maps_info".
  // In case buf is too small, we'd rather leave out the last
  // buckets than leave out the /proc/self/maps info.  To ensure that,
  // we actually print the /proc/self/maps info first, then move it to
  // the end of the buffer, then write the bucket info into whatever
  // is remaining, and then move the maps info one last time to close
  // any gaps.  Whew!
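  //
  // A rough picture of that dance (added for illustration; not in the
  // original comment):
  //
  //   after step 1: [maps text...........][.........unused...........]
  //   after step 2: [.........unused...........][maps text (moved)....]
  //   after step 3: [header+buckets....][gap...][maps text............]
  //   after step 4: [header+buckets....][maps text (moved down).......]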
  int map_length = snprintf(buf, size, "%s", kProcSelfMapsHeader);
  if (map_length < 0 || map_length >= size) {
    dealloc_(list);
    return 0;
  }
  bool dummy;  // "wrote_all" -- did /proc/self/maps fit in its entirety?
  map_length += FillProcSelfMaps(buf + map_length, size - map_length, &dummy);
  RAW_DCHECK(map_length <= size, "");
  char* const map_start = buf + size - map_length;  // move to end
  memmove(map_start, buf, map_length);
  size -= map_length;

  Stats stats;
  memset(&stats, 0, sizeof(stats));
  int bucket_length = snprintf(buf, size, "%s", kProfileHeader);
  if (bucket_length < 0 || bucket_length >= size) {
    dealloc_(list);
    return 0;
  }
  bucket_length = UnparseBucket(total_, buf, bucket_length, size,
                                " heapprofile", &stats);

  // Dump the mmap list first.
  if (profile_mmap_) {
    BufferArgs buffer(buf, bucket_length, size);
    MemoryRegionMap::LockHolder holder{};
    MemoryRegionMap::IterateBuckets<BufferArgs*>(DumpBucketIterator, &buffer);
    bucket_length = buffer.buflen;
  }

  for (int i = 0; i < num_buckets_; i++) {
    bucket_length = UnparseBucket(*list[i], buf, bucket_length, size, "",
                                  &stats);
  }
  RAW_DCHECK(bucket_length < size, "");

  dealloc_(list);

  RAW_DCHECK(buf + bucket_length <= map_start, "");
  memmove(buf + bucket_length, map_start, map_length);  // close the gap

  return bucket_length + map_length;
}

// static
void HeapProfileTable::DumpBucketIterator(const Bucket* bucket,
                                          BufferArgs* args) {
  args->buflen = UnparseBucket(*bucket, args->buf, args->buflen, args->bufsize,
                               "", NULL);
}

inline
void HeapProfileTable::DumpNonLiveIterator(const void* ptr, AllocValue* v,
                                           const DumpArgs& args) {
  if (v->live()) {
    v->set_live(false);
    return;
  }
  if (v->ignore()) {
    return;
  }
  Bucket b;
  memset(&b, 0, sizeof(b));
  b.allocs = 1;
  b.alloc_size = v->bytes;
  b.depth = v->bucket()->depth;
  b.stack = v->bucket()->stack;
  char buf[1024];
  int len = UnparseBucket(b, buf, 0, sizeof(buf), "", args.profile_stats);
  RawWrite(args.fd, buf, len);
}

// Callback from NonLiveSnapshot; adds the entry to arg->dest
// if the entry is not live and is not present in arg->base.
void HeapProfileTable::AddIfNonLive(const void* ptr, AllocValue* v,
                                    AddNonLiveArgs* arg) {
  if (v->live()) {
    v->set_live(false);
  } else {
    if (arg->base != NULL && arg->base->map_.Find(ptr) != NULL) {
      // Present in arg->base, so do not save.
    } else {
      arg->dest->Add(ptr, *v);
    }
  }
}

bool HeapProfileTable::WriteProfile(const char* file_name,
                                    const Bucket& total,
                                    AllocationMap* allocations) {
  RAW_VLOG(1, "Dumping non-live heap profile to %s", file_name);
  RawFD fd = RawOpenForWriting(file_name);
  if (fd == kIllegalRawFD) {
    RAW_LOG(ERROR, "Failed dumping filtered heap profile to %s", file_name);
    return false;
  }
  RawWrite(fd, kProfileHeader, strlen(kProfileHeader));
  char buf[512];
  int len = UnparseBucket(total, buf, 0, sizeof(buf), " heapprofile", NULL);
  RawWrite(fd, buf, len);
  const DumpArgs args(fd, NULL);
  allocations->Iterate<const DumpArgs&>(DumpNonLiveIterator, args);
  RawWrite(fd, kProcSelfMapsHeader, strlen(kProcSelfMapsHeader));
  DumpProcSelfMaps(fd);
  RawClose(fd);
  return true;
}

void HeapProfileTable::CleanupOldProfiles(const char* prefix) {
  if (!FLAGS_cleanup_old_heap_profiles)
    return;
  string pattern = string(prefix) + ".*" + kFileExt;
#if defined(HAVE_GLOB_H)
  glob_t g;
  const int r = glob(pattern.c_str(), GLOB_ERR, NULL, &g);
  if (r == 0 || r == GLOB_NOMATCH) {
    const size_t prefix_length = strlen(prefix);
    for (size_t i = 0; i < g.gl_pathc; i++) {
      const char* fname = g.gl_pathv[i];
      if ((strlen(fname) >= prefix_length) &&
          (memcmp(fname, prefix, prefix_length) == 0)) {
        RAW_VLOG(1, "Removing old heap profile %s", fname);
        unlink(fname);
      }
    }
  }
  globfree(&g);
#else  /* HAVE_GLOB_H */
  RAW_LOG(WARNING, "Unable to remove old heap profiles (can't run glob())");
#endif
}

HeapProfileTable::Snapshot* HeapProfileTable::TakeSnapshot() {
  Snapshot* s = new (alloc_(sizeof(Snapshot))) Snapshot(alloc_, dealloc_);
  address_map_->Iterate(AddToSnapshot, s);
  return s;
}

void HeapProfileTable::ReleaseSnapshot(Snapshot* s) {
  s->~Snapshot();
  dealloc_(s);
}

// Callback from TakeSnapshot; adds a single entry to snapshot
void HeapProfileTable::AddToSnapshot(const void* ptr, AllocValue* v,
                                     Snapshot* snapshot) {
  snapshot->Add(ptr, *v);
}

HeapProfileTable::Snapshot* HeapProfileTable::NonLiveSnapshot(
    Snapshot* base) {
  RAW_VLOG(2, "NonLiveSnapshot input: %" PRId64 " %" PRId64 "\n",
           total_.allocs - total_.frees,
           total_.alloc_size - total_.free_size);

  Snapshot* s = new (alloc_(sizeof(Snapshot))) Snapshot(alloc_, dealloc_);
  AddNonLiveArgs args;
  args.dest = s;
  args.base = base;
  address_map_->Iterate<AddNonLiveArgs*>(AddIfNonLive, &args);
  RAW_VLOG(2, "NonLiveSnapshot output: %" PRId64 " %" PRId64 "\n",
           s->total_.allocs - s->total_.frees,
           s->total_.alloc_size - s->total_.free_size);
  return s;
}

// Information kept per unique bucket seen
struct HeapProfileTable::Snapshot::Entry {
  int count;
  size_t bytes;
  Bucket* bucket;
  Entry() : count(0), bytes(0) { }

  // Order by decreasing bytes
  bool operator<(const Entry& x) const {
    return this->bytes > x.bytes;
  }
};

// State used to generate leak report.  We keep a mapping from Bucket pointer
// to the collected stats for that bucket.
struct HeapProfileTable::Snapshot::ReportState {
  map<Bucket*, Entry> buckets_;
};

// Callback from ReportLeaks; updates ReportState.
void HeapProfileTable::Snapshot::ReportCallback(const void* ptr,
                                                AllocValue* v,
                                                ReportState* state) {
  Entry* e = &state->buckets_[v->bucket()];  // Creates empty Entry first time
  e->bucket = v->bucket();
  e->count++;
  e->bytes += v->bytes;
}

void HeapProfileTable::Snapshot::ReportLeaks(const char* checker_name,
                                             const char* filename,
                                             bool should_symbolize) {
  // This is only used by the heap leak checker, but is intimately
  // tied to the allocation map that belongs in this module and is
  // therefore placed here.
  RAW_LOG(ERROR, "Leak check %s detected leaks of %zu bytes "
          "in %zu objects",
          checker_name,
          size_t(total_.alloc_size),
          size_t(total_.allocs));

  // Group objects by Bucket
  ReportState state;
  map_.Iterate(&ReportCallback, &state);

  // Sort buckets by decreasing leaked size
  const int n = state.buckets_.size();
  Entry* entries = new Entry[n];
  int dst = 0;
  for (map<Bucket*, Entry>::const_iterator iter = state.buckets_.begin();
       iter != state.buckets_.end();
       ++iter) {
    entries[dst++] = iter->second;
  }
  sort(entries, entries + n);

  // Report a bounded number of leaks to keep the leak report from
  // growing too long.
  const int to_report =
      (FLAGS_heap_check_max_leaks > 0 &&
       n > FLAGS_heap_check_max_leaks) ? FLAGS_heap_check_max_leaks : n;
  RAW_LOG(ERROR, "The %d largest leaks:", to_report);

  // Print
  SymbolTable symbolization_table;
  for (int i = 0; i < to_report; i++) {
    const Entry& e = entries[i];
    for (int j = 0; j < e.bucket->depth; j++) {
      symbolization_table.Add(e.bucket->stack[j]);
    }
  }
  static const int kBufSize = 2 << 10;
  char buffer[kBufSize];
  if (should_symbolize)
    symbolization_table.Symbolize();
  for (int i = 0; i < to_report; i++) {
    const Entry& e = entries[i];
    base::RawPrinter printer(buffer, kBufSize);
    printer.Printf("Leak of %zu bytes in %d objects allocated from:\n",
                   e.bytes, e.count);
    for (int j = 0; j < e.bucket->depth; j++) {
      const void* pc = e.bucket->stack[j];
      printer.Printf("\t@ %" PRIxPTR " %s\n",
                     reinterpret_cast<uintptr_t>(pc),
                     symbolization_table.GetSymbol(pc));
    }
    RAW_LOG(ERROR, "%s", buffer);
  }

  if (to_report < n) {
    RAW_LOG(ERROR, "Skipping leaks numbered %d..%d",
            to_report, n-1);
  }
  delete[] entries;

  // TODO: Dump the sorted Entry list instead of dumping raw data?
  // (should be much shorter)
  if (!HeapProfileTable::WriteProfile(filename, total_, &map_)) {
    RAW_LOG(ERROR, "Could not write pprof profile to %s", filename);
  }
}

void HeapProfileTable::Snapshot::ReportObject(const void* ptr,
                                              AllocValue* v,
                                              char* unused) {
  // Perhaps also log the allocation stack trace (unsymbolized)
  // on this line in case somebody finds it useful.
  RAW_LOG(ERROR, "leaked %zu byte object %p", v->bytes, ptr);
}

void HeapProfileTable::Snapshot::ReportIndividualObjects() {
  char unused;
  map_.Iterate(ReportObject, &unused);
}
399
3party/gperftools/src/heap-profile-table.h
Normal file
399
3party/gperftools/src/heap-profile-table.h
Normal file
@ -0,0 +1,399 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2006, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//         Maxim Lifantsev (refactoring)
//

#ifndef BASE_HEAP_PROFILE_TABLE_H_
#define BASE_HEAP_PROFILE_TABLE_H_

#include "addressmap-inl.h"
#include "base/basictypes.h"
#include "base/logging.h"   // for RawFD
#include "heap-profile-stats.h"

// Table to maintain heap profile data inside,
// i.e. the set of currently active heap memory allocations.
// Thread-unsafe and non-reentrant code:
// each instance object must be used by one thread
// at a time w/o self-recursion.
//
// TODO(maxim): add a unittest for this class.
class HeapProfileTable {
 public:

  // Extension to be used for heap profile files.
  static const char kFileExt[];

  // Longest stack trace we record.
  static const int kMaxStackDepth = 32;

  // data types ----------------------------

  // Profile stats.
  typedef HeapProfileStats Stats;

  // Info we can return about an allocation.
  struct AllocInfo {
    size_t object_size;             // size of the allocation
    const void* const* call_stack;  // call stack that made the allocation call
    int stack_depth;                // depth of call_stack
    bool live;
    bool ignored;
  };

  // Info we return about an allocation context.
  // An allocation context is a unique caller stack trace
  // of an allocation operation.
  struct AllocContextInfo : public Stats {
    int stack_depth;                // Depth of stack trace
    const void* const* call_stack;  // Stack trace
  };

  // Memory (de)allocator interface we'll use.
  typedef void* (*Allocator)(size_t size);
  typedef void  (*DeAllocator)(void* ptr);

  // interface ---------------------------

  HeapProfileTable(Allocator alloc, DeAllocator dealloc, bool profile_mmap);
  ~HeapProfileTable();

  // Collect the stack trace for the function that asked to do the
  // allocation, for passing to RecordAlloc() below.
  //
  // The stack trace is stored in 'stack'.  The stack depth is returned.
  //
  // 'skip_count' gives the number of stack frames between this call
  // and the memory allocation function.
  static int GetCallerStackTrace(int skip_count, void* stack[kMaxStackDepth]);

  // Record an allocation at 'ptr' of 'bytes' bytes.  'stack_depth'
  // and 'call_stack' identify the function that requested the
  // allocation.  They can be generated using GetCallerStackTrace() above.
  void RecordAlloc(const void* ptr, size_t bytes,
                   int stack_depth, const void* const call_stack[]);

  // Record the deallocation of memory at 'ptr'.
  void RecordFree(const void* ptr);
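
  // An illustrative driver (added; a sketch only -- it mirrors how the
  // profiler's malloc hooks in heap-profiler.cc call this API, with
  // locking left to the caller and 't' standing for some table instance):
  //
  //   void* stack[HeapProfileTable::kMaxStackDepth];
  //   int depth = HeapProfileTable::GetCallerStackTrace(0, stack);
  //   t->RecordAlloc(ptr, size, depth, stack);  // on every allocation
  //   ...
  //   t->RecordFree(ptr);                       // on every deallocation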

  // Return true iff we have recorded an allocation at 'ptr'.
  // If yes, fill *object_size with the allocation byte size.
  bool FindAlloc(const void* ptr, size_t* object_size) const;
  // Same as FindAlloc, but fills all of *info.
  bool FindAllocDetails(const void* ptr, AllocInfo* info) const;

  // Return true iff "ptr" points into a recorded allocation.
  // If yes, fill *object_ptr with the actual allocation address
  // and *object_size with the allocation byte size.
  // max_size specifies the largest currently possible allocation size.
  bool FindInsideAlloc(const void* ptr, size_t max_size,
                       const void** object_ptr, size_t* object_size) const;

  // If "ptr" points to a recorded allocation and it's not marked as live,
  // mark it as live and return true.  Else return false.
  // All allocations start as non-live.
  bool MarkAsLive(const void* ptr);

  // If "ptr" points to a recorded allocation, mark it as "ignored".
  // Ignored objects are treated like other objects, except that they
  // are skipped in heap checking reports.
  void MarkAsIgnored(const void* ptr);

  // Return current total (de)allocation statistics.  It doesn't contain
  // mmap'ed regions.
  const Stats& total() const { return total_; }

  // Allocation data iteration callback: gets passed object pointer and
  // fully-filled AllocInfo.
  typedef void (*AllocIterator)(const void* ptr, const AllocInfo& info);

  // Iterate over the allocation profile data calling "callback"
  // for every allocation.
  void IterateAllocs(AllocIterator callback) const {
    address_map_->Iterate(MapArgsAllocIterator, callback);
  }

  // Allocation context profile data iteration callback
  typedef void (*AllocContextIterator)(const AllocContextInfo& info);

  // Iterate over the allocation context profile data calling "callback"
  // for every allocation context.  Allocation contexts are ordered by the
  // size of allocated space.
  void IterateOrderedAllocContexts(AllocContextIterator callback) const;

  // Fill profile data into buffer 'buf' of size 'size'
  // and return the actual size occupied by the dump in 'buf'.
  // The profile buckets are dumped in the decreasing order
  // of currently allocated bytes.
  // We do not provision for 0-terminating 'buf'.
  int FillOrderedProfile(char buf[], int size) const;

  // Cleanup any old profile files matching prefix + ".*" + kFileExt.
  static void CleanupOldProfiles(const char* prefix);

  // Return a snapshot of the current contents of *this.
  // Caller must call ReleaseSnapshot() on result when no longer needed.
  // The result is only valid while this exists and until
  // the snapshot is discarded by calling ReleaseSnapshot().
  class Snapshot;
  Snapshot* TakeSnapshot();

  // Release a previously taken snapshot.  snapshot must not
  // be used after this call.
  void ReleaseSnapshot(Snapshot* snapshot);

  // Return a snapshot of every non-live, non-ignored object in *this.
  // If "base" is non-NULL, skip any objects present in "base".
  // As a side-effect, clears the "live" bit on every live object in *this.
  // Caller must call ReleaseSnapshot() on result when no longer needed.
  Snapshot* NonLiveSnapshot(Snapshot* base);
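
  // A possible leak-check flow (added illustration; the actual caller
  // lives in the heap leak checker, so treat the details as assumptions):
  //
  //   Snapshot* base = t->TakeSnapshot();          // state at start
  //   ... run; reachable objects get t->MarkAsLive(ptr) ...
  //   Snapshot* leaks = t->NonLiveSnapshot(base);  // unreached leftovers
  //   if (!leaks->Empty())
  //     leaks->ReportLeaks("checker", "/tmp/leaks.heap", false);
  //   t->ReleaseSnapshot(leaks);
  //   t->ReleaseSnapshot(base);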

 private:

  // data types ----------------------------

  // Hash table bucket to hold (de)allocation stats
  // for a given allocation call stack trace.
  typedef HeapProfileBucket Bucket;

  // Info stored in the address map
  struct AllocValue {
    // Access to the stack-trace bucket
    Bucket* bucket() const {
      return reinterpret_cast<Bucket*>(bucket_rep & ~uintptr_t(kMask));
    }
    // This also does set_live(false).
    void set_bucket(Bucket* b) { bucket_rep = reinterpret_cast<uintptr_t>(b); }
    size_t bytes;  // Number of bytes in this allocation

    // Access to the allocation liveness flag (for leak checking)
    bool live() const { return bucket_rep & kLive; }
    void set_live(bool l) {
      bucket_rep = (bucket_rep & ~uintptr_t(kLive)) | (l ? kLive : 0);
    }

    // Should this allocation be ignored if it looks like a leak?
    bool ignore() const { return bucket_rep & kIgnore; }
    void set_ignore(bool r) {
      bucket_rep = (bucket_rep & ~uintptr_t(kIgnore)) | (r ? kIgnore : 0);
    }

   private:
    // We store a few bits in the bottom bits of bucket_rep.
    // (Alignment is at least four, so we have at least two bits.)
    static const int kLive = 1;
    static const int kIgnore = 2;
    static const int kMask = kLive | kIgnore;

    uintptr_t bucket_rep;
  };
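
  // Worked example (added; the addresses are hypothetical): for a Bucket
  // at 0x7f10a8 with both flags set, bucket_rep == 0x7f10a8 | kLive |
  // kIgnore == 0x7f10ab, and bucket() recovers 0x7f10ab & ~0x3 ==
  // 0x7f10a8.  This is sound because Buckets are at least 4-byte aligned.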

  // helper for FindInsideAlloc
  static size_t AllocValueSize(const AllocValue& v) { return v.bytes; }

  typedef AddressMap<AllocValue> AllocationMap;

  // Arguments that need to be passed to the DumpBucketIterator callback below.
  struct BufferArgs {
    BufferArgs(char* buf_arg, int buflen_arg, int bufsize_arg)
        : buf(buf_arg),
          buflen(buflen_arg),
          bufsize(bufsize_arg) {
    }

    char* buf;
    int buflen;
    int bufsize;

    DISALLOW_COPY_AND_ASSIGN(BufferArgs);
  };

  // Arguments that need to be passed to the DumpNonLiveIterator callback below.
  struct DumpArgs {
    DumpArgs(RawFD fd_arg, Stats* profile_stats_arg)
        : fd(fd_arg),
          profile_stats(profile_stats_arg) {
    }

    RawFD fd;              // file to write to
    Stats* profile_stats;  // stats to update (may be NULL)
  };

  // helpers ----------------------------

  // Unparse bucket b and print its portion of profile dump into buf.
  // We return the amount of space in buf that we use.  We start printing
  // at buf + buflen, and promise not to go beyond buf + bufsize.
  // We do not provision for 0-terminating 'buf'.
  //
  // If profile_stats is non-NULL, we update *profile_stats by
  // counting bucket b.
  //
  // "extra" is appended to the unparsed bucket.  Typically it is empty,
  // but may be set to something like " heapprofile" for the total
  // bucket to indicate the type of the profile.
  static int UnparseBucket(const Bucket& b,
                           char* buf, int buflen, int bufsize,
                           const char* extra,
                           Stats* profile_stats);

  // Get the bucket for the caller stack trace 'key' of depth 'depth',
  // creating the bucket if needed.
  Bucket* GetBucket(int depth, const void* const key[]);

  // Helper for IterateAllocs to do callback signature conversion
  // from AllocationMap::Iterate to AllocIterator.
  static void MapArgsAllocIterator(const void* ptr, AllocValue* v,
                                   AllocIterator callback) {
    AllocInfo info;
    info.object_size = v->bytes;
    info.call_stack = v->bucket()->stack;
    info.stack_depth = v->bucket()->depth;
    info.live = v->live();
    info.ignored = v->ignore();
    callback(ptr, info);
  }

  // Helper to dump a bucket.
  inline static void DumpBucketIterator(const Bucket* bucket,
                                        BufferArgs* args);

  // Helper for DumpNonLiveProfile to do object-granularity
  // heap profile dumping.  It gets passed to AllocationMap::Iterate.
  inline static void DumpNonLiveIterator(const void* ptr, AllocValue* v,
                                         const DumpArgs& args);

  // Helper for IterateOrderedAllocContexts and FillOrderedProfile.
  // Creates a sorted list of Buckets whose length is num_buckets_.
  // The caller is responsible for deallocating the returned list.
  Bucket** MakeSortedBucketList() const;

  // Helper for TakeSnapshot.  Saves object to snapshot.
  static void AddToSnapshot(const void* ptr, AllocValue* v, Snapshot* s);

  // Arguments passed to AddIfNonLive
  struct AddNonLiveArgs {
    Snapshot* dest;
    Snapshot* base;
  };

  // Helper for NonLiveSnapshot.  Adds the object to the destination
  // snapshot if it is non-live.
  static void AddIfNonLive(const void* ptr, AllocValue* v,
                           AddNonLiveArgs* arg);

  // Write contents of "*allocations" as a heap profile to
  // "file_name".  "total" must contain the total of all entries in
  // "*allocations".
  static bool WriteProfile(const char* file_name,
                           const Bucket& total,
                           AllocationMap* allocations);

  // data ----------------------------

  // Memory (de)allocator that we use.
  Allocator alloc_;
  DeAllocator dealloc_;

  // Overall profile stats; we use only the Stats part,
  // but make it a Bucket to pass to UnparseBucket.
  Bucket total_;

  bool profile_mmap_;

  // Bucket hash table for malloc.
  // We hand-craft one instead of using one of the pre-written
  // ones because we do not want to use malloc when operating on the table.
  // It is only a few lines of code, so no big deal.
  Bucket** bucket_table_;
  int num_buckets_;

  // Map of all currently allocated objects and mapped regions we know about.
  AllocationMap* address_map_;

  DISALLOW_COPY_AND_ASSIGN(HeapProfileTable);
};

class HeapProfileTable::Snapshot {
 public:
  const Stats& total() const { return total_; }

  // Report anything in this snapshot as a leak.
  // May use new/delete for temporary storage.
  // If should_symbolize is true, will fork (which is not threadsafe)
  // to turn addresses into symbol names.  Set to false for maximum safety.
  // Also writes a heap profile to "filename" that contains
  // all of the objects in this snapshot.
  void ReportLeaks(const char* checker_name, const char* filename,
                   bool should_symbolize);

  // Report the addresses of all leaked objects.
  // May use new/delete for temporary storage.
  void ReportIndividualObjects();

  bool Empty() const {
    return (total_.allocs == 0) && (total_.alloc_size == 0);
  }

 private:
  friend class HeapProfileTable;

  // Total count/size are stored in a Bucket so we can reuse UnparseBucket
  Bucket total_;

  // We share the Buckets managed by the parent table, but have our
  // own object->bucket map.
  AllocationMap map_;

  Snapshot(Allocator alloc, DeAllocator dealloc) : map_(alloc, dealloc) {
    memset(&total_, 0, sizeof(total_));
  }

  // Callback used to populate a Snapshot object with entries found
  // in another allocation map.
  inline void Add(const void* ptr, const AllocValue& v) {
    map_.Insert(ptr, v);
    total_.allocs++;
    total_.alloc_size += v.bytes;
  }

  // Helpers for sorting and generating leak reports
  struct Entry;
  struct ReportState;
  static void ReportCallback(const void* ptr, AllocValue* v, ReportState*);
  static void ReportObject(const void* ptr, AllocValue* v, char*);

  DISALLOW_COPY_AND_ASSIGN(Snapshot);
};

#endif  // BASE_HEAP_PROFILE_TABLE_H_
595
3party/gperftools/src/heap-profiler.cc
Normal file
595
3party/gperftools/src/heap-profiler.cc
Normal file
@ -0,0 +1,595 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//
// TODO: Log large allocations

#include <config.h>
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#ifdef HAVE_UNISTD_H
#include <unistd.h>
#endif
#include <inttypes.h>
#ifdef HAVE_FCNTL_H
#include <fcntl.h>    // for open()
#endif
#ifdef HAVE_MMAP
#include <sys/mman.h>
#endif
#include <errno.h>
#include <assert.h>
#include <sys/types.h>
#include <signal.h>

#include <algorithm>
#include <string>

#include <gperftools/heap-profiler.h>

#include "base/logging.h"
#include "base/basictypes.h"   // for PRId64, among other things
#include "base/googleinit.h"
#include "base/commandlineflags.h"
#include "malloc_hook-inl.h"
#include "tcmalloc_guard.h"
#include <gperftools/malloc_hook.h>
#include <gperftools/malloc_extension.h>
#include "base/spinlock.h"
#include "base/low_level_alloc.h"
#include "base/sysinfo.h"      // for GetUniquePathFromEnv()
#include "heap-profile-table.h"
#include "memory_region_map.h"
#include "mmap_hook.h"

#ifndef PATH_MAX
#ifdef MAXPATHLEN
#define PATH_MAX MAXPATHLEN
#else
#define PATH_MAX 4096  // seems conservative for max filename len!
#endif
#endif

using std::string;

//----------------------------------------------------------------------
// Flags that control heap-profiling
//
// The thread-safety of the profiler depends on these being immutable
// after main starts, so don't change them.
//----------------------------------------------------------------------

DEFINE_int64(heap_profile_allocation_interval,
             EnvToInt64("HEAP_PROFILE_ALLOCATION_INTERVAL", 1 << 30 /*1GB*/),
             "If non-zero, dump heap profiling information once every "
             "specified number of bytes allocated by the program since "
             "the last dump.");
DEFINE_int64(heap_profile_deallocation_interval,
             EnvToInt64("HEAP_PROFILE_DEALLOCATION_INTERVAL", 0),
             "If non-zero, dump heap profiling information once every "
             "specified number of bytes deallocated by the program "
             "since the last dump.");
// We could also add flags that report whenever inuse_bytes changes by
// X or -X, but there hasn't been a need for that yet, so we haven't.
DEFINE_int64(heap_profile_inuse_interval,
             EnvToInt64("HEAP_PROFILE_INUSE_INTERVAL", 100 << 20 /*100MB*/),
             "If non-zero, dump heap profiling information whenever "
             "the high-water memory usage mark increases by the specified "
             "number of bytes.");
DEFINE_int64(heap_profile_time_interval,
             EnvToInt64("HEAP_PROFILE_TIME_INTERVAL", 0),
             "If non-zero, dump heap profiling information once every "
             "specified number of seconds since the last dump.");
DEFINE_bool(mmap_log,
            EnvToBool("HEAP_PROFILE_MMAP_LOG", false),
            "Should mmap/munmap calls be logged?");
DEFINE_bool(mmap_profile,
            EnvToBool("HEAP_PROFILE_MMAP", false),
            "If heap-profiling is on, also profile mmap, mremap, and sbrk");
DEFINE_bool(only_mmap_profile,
            EnvToBool("HEAP_PROFILE_ONLY_MMAP", false),
            "If heap-profiling is on, only profile mmap, mremap, and sbrk; "
            "do not profile malloc/new/etc");
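
// Illustrative invocation (an assumed shell session, derived from the
// flag definitions above and from HeapProfilerInit() below, which reads
// HEAPPROFILE):
//
//   $ HEAPPROFILE=/tmp/heapprof \
//     HEAP_PROFILE_ALLOCATION_INTERVAL=$((256 * 1024 * 1024)) ./a.out
//
// dumps a numbered profile (/tmp/heapprof.0001.heap, ...) after roughly
// every 256MB of cumulative allocation; see DumpProfileLocked() below
// for the exact file naming.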

//----------------------------------------------------------------------
// Locking
//----------------------------------------------------------------------

// A pthread_mutex has way too much lock contention to be used here.
//
// I would like to use Mutex, but it can call malloc(),
// which can cause us to fall into an infinite recursion.
//
// So we use a simple spinlock.
static SpinLock heap_lock(SpinLock::LINKER_INITIALIZED);

//----------------------------------------------------------------------
// Simple allocator for heap profiler's internal memory
//----------------------------------------------------------------------

static LowLevelAlloc::Arena* heap_profiler_memory;

static void* ProfilerMalloc(size_t bytes) {
  return LowLevelAlloc::AllocWithArena(bytes, heap_profiler_memory);
}
static void ProfilerFree(void* p) {
  LowLevelAlloc::Free(p);
}

// We use buffers of this size in DoGetHeapProfile.
static const int kProfileBufferSize = 1 << 20;

// This is a last-ditch buffer we use in DumpProfileLocked in case we
// can't allocate more memory from ProfilerMalloc.  We expect this
// will be used by HeapProfileEndWriter when the application has to
// exit due to out-of-memory.  This buffer is allocated in
// HeapProfilerStart.  Access to this must be protected by heap_lock.
static char* global_profiler_buffer = NULL;

//----------------------------------------------------------------------
// Profiling control/state data
//----------------------------------------------------------------------

// Access to all of these is protected by heap_lock.
static bool is_on = false;            // whether we are on as a subsystem
static bool dumping = false;          // dumping status, to prevent recursion
static char* filename_prefix = NULL;  // prefix used for profile file names
                                      // (NULL if no need for dumping yet)
static int dump_count = 0;            // how many dumps so far
static int64 last_dump_alloc = 0;     // alloc_size when we last dumped
static int64 last_dump_free = 0;      // free_size when we last dumped
static int64 high_water_mark = 0;     // in-use bytes at last high-water dump
static int64 last_dump_time = 0;      // the time of the last dump

static HeapProfileTable* heap_profile = NULL;  // the heap profile table

//----------------------------------------------------------------------
// Profile generation
//----------------------------------------------------------------------

// Input must be a buffer of size at least 1MB.
static char* DoGetHeapProfileLocked(char* buf, int buflen) {
  // We used to be smarter about estimating the required memory and
  // then capping it to 1MB and generating the profile into that.
  if (buf == NULL || buflen < 1)
    return NULL;

  RAW_DCHECK(heap_lock.IsHeld(), "");
  int bytes_written = 0;
  if (is_on) {
    HeapProfileTable::Stats const stats = heap_profile->total();
    (void)stats;  // avoid an unused-variable warning in non-debug mode
    bytes_written = heap_profile->FillOrderedProfile(buf, buflen - 1);
    // FillOrderedProfile should not reduce the set of active mmap-ed regions,
    // hence MemoryRegionMap will let us remove everything we've added above:
    RAW_DCHECK(stats.Equivalent(heap_profile->total()), "");
    // If this fails, FillOrderedProfile somehow removed
    // more than we had added.
  }
  buf[bytes_written] = '\0';
  RAW_DCHECK(bytes_written == strlen(buf), "");

  return buf;
}

extern "C" char* GetHeapProfile() {
  // Use normal malloc: we return the profile to the user to free it:
  char* buffer = reinterpret_cast<char*>(malloc(kProfileBufferSize));
  SpinLockHolder l(&heap_lock);
  return DoGetHeapProfileLocked(buffer, kProfileBufferSize);
}
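
// Caller-side sketch (added for illustration): GetHeapProfile() hands
// ownership of a malloc'ed, NUL-terminated profile string to the caller,
// so the matching cleanup is a plain free():
//
//   char* profile = GetHeapProfile();
//   if (profile != NULL) {
//     fputs(profile, stderr);  // or write it to a file
//     free(profile);
//   }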

// defined below
static void NewHook(const void* ptr, size_t size);
static void DeleteHook(const void* ptr);

// Helper for HeapProfilerDump.
static void DumpProfileLocked(const char* reason) {
  RAW_DCHECK(heap_lock.IsHeld(), "");
  RAW_DCHECK(is_on, "");
  RAW_DCHECK(!dumping, "");

  if (filename_prefix == NULL) return;  // we do not yet need dumping

  dumping = true;

  // Make file name
  char file_name[1000];
  dump_count++;
  snprintf(file_name, sizeof(file_name), "%s.%04d%s",
           filename_prefix, dump_count, HeapProfileTable::kFileExt);

  // Dump the profile
  RAW_VLOG(0, "Dumping heap profile to %s (%s)", file_name, reason);
  // We must use file routines that don't access memory, since we hold
  // a memory lock now.
  RawFD fd = RawOpenForWriting(file_name);
  if (fd == kIllegalRawFD) {
    RAW_LOG(ERROR, "Failed dumping heap profile to %s. Numeric errno is %d",
            file_name, errno);
    dumping = false;
    return;
  }

  // This case may be impossible, but it's best to be safe.
  // It's safe to use the global buffer: we're protected by heap_lock.
  if (global_profiler_buffer == NULL) {
    global_profiler_buffer =
        reinterpret_cast<char*>(ProfilerMalloc(kProfileBufferSize));
  }

  char* profile = DoGetHeapProfileLocked(global_profiler_buffer,
                                         kProfileBufferSize);
  RawWrite(fd, profile, strlen(profile));
  RawClose(fd);

  dumping = false;
}

//----------------------------------------------------------------------
// Profile collection
//----------------------------------------------------------------------

// Dump a profile after either an allocation or deallocation, if
// the memory use has changed enough since the last dump.
static void MaybeDumpProfileLocked() {
  if (!dumping) {
    const HeapProfileTable::Stats& total = heap_profile->total();
    const int64_t inuse_bytes = total.alloc_size - total.free_size;
    bool need_to_dump = false;
    char buf[128];

    if (FLAGS_heap_profile_allocation_interval > 0 &&
        total.alloc_size >=
        last_dump_alloc + FLAGS_heap_profile_allocation_interval) {
      snprintf(buf, sizeof(buf), ("%" PRId64 " MB allocated cumulatively, "
                                  "%" PRId64 " MB currently in use"),
               total.alloc_size >> 20, inuse_bytes >> 20);
      need_to_dump = true;
    } else if (FLAGS_heap_profile_deallocation_interval > 0 &&
               total.free_size >=
               last_dump_free + FLAGS_heap_profile_deallocation_interval) {
      snprintf(buf, sizeof(buf), ("%" PRId64 " MB freed cumulatively, "
                                  "%" PRId64 " MB currently in use"),
               total.free_size >> 20, inuse_bytes >> 20);
      need_to_dump = true;
    } else if (FLAGS_heap_profile_inuse_interval > 0 &&
               inuse_bytes >
               high_water_mark + FLAGS_heap_profile_inuse_interval) {
      snprintf(buf, sizeof(buf), "%" PRId64 " MB currently in use",
               inuse_bytes >> 20);
      need_to_dump = true;
    } else if (FLAGS_heap_profile_time_interval > 0) {
      int64 current_time = time(NULL);
      if (current_time - last_dump_time >=
          FLAGS_heap_profile_time_interval) {
        snprintf(buf, sizeof(buf), "%" PRId64 " sec since the last dump",
                 current_time - last_dump_time);
        need_to_dump = true;
        last_dump_time = current_time;
      }
    }
    if (need_to_dump) {
      DumpProfileLocked(buf);

      last_dump_alloc = total.alloc_size;
      last_dump_free = total.free_size;
      if (inuse_bytes > high_water_mark)
        high_water_mark = inuse_bytes;
    }
  }
}

// Record an allocation in the profile.
static void RecordAlloc(const void* ptr, size_t bytes, int skip_count) {
  // Take the stack trace outside the critical section.
  void* stack[HeapProfileTable::kMaxStackDepth];
  int depth = HeapProfileTable::GetCallerStackTrace(skip_count + 1, stack);
  SpinLockHolder l(&heap_lock);
  if (is_on) {
    heap_profile->RecordAlloc(ptr, bytes, depth, stack);
    MaybeDumpProfileLocked();
  }
}

// Record a deallocation in the profile.
static void RecordFree(const void* ptr) {
  SpinLockHolder l(&heap_lock);
  if (is_on) {
    heap_profile->RecordFree(ptr);
    MaybeDumpProfileLocked();
  }
}

//----------------------------------------------------------------------
// Allocation/deallocation hooks for MallocHook
//----------------------------------------------------------------------

// static
void NewHook(const void* ptr, size_t size) {
  if (ptr != NULL) RecordAlloc(ptr, size, 0);
}

// static
void DeleteHook(const void* ptr) {
  if (ptr != NULL) RecordFree(ptr);
}

static tcmalloc::MappingHookSpace mmap_logging_hook_space;

static void LogMappingEvent(const tcmalloc::MappingEvent& evt) {
  if (!FLAGS_mmap_log) {
    return;
  }

  if (evt.file_valid) {
    // We use PRIxPTR not just '%p' to avoid deadlocks
    // in pretty-printing of NULL as "nil".
    // TODO(maxim): instead should use a safe snprintf reimplementation
    RAW_LOG(INFO,
            "mmap(start=0x%" PRIxPTR ", len=%zu, prot=0x%x, flags=0x%x, "
            "fd=%d, offset=0x%llx) = 0x%" PRIxPTR "",
            (uintptr_t) evt.before_address, evt.after_length, evt.prot,
            evt.flags, evt.file_fd, (unsigned long long) evt.file_off,
            (uintptr_t) evt.after_address);
  } else if (evt.after_valid && evt.before_valid) {
    // We use PRIxPTR not just '%p' to avoid deadlocks
    // in pretty-printing of NULL as "nil".
    // TODO(maxim): instead should use a safe snprintf reimplementation
    RAW_LOG(INFO,
            "mremap(old_addr=0x%" PRIxPTR ", old_size=%zu, "
            "new_size=%zu, flags=0x%x, new_addr=0x%" PRIxPTR ") = "
            "0x%" PRIxPTR "",
            (uintptr_t) evt.before_address, evt.before_length,
            evt.after_length, evt.flags,
            (uintptr_t) evt.after_address, (uintptr_t) evt.after_address);
  } else if (evt.is_sbrk) {
    intptr_t increment;
    uintptr_t result;
    if (evt.after_valid) {
      increment = evt.after_length;
      result = reinterpret_cast<uintptr_t>(evt.after_address) +
               evt.after_length;
    } else {
      increment = -static_cast<intptr_t>(evt.before_length);
      result = reinterpret_cast<uintptr_t>(evt.before_address);
    }

    RAW_LOG(INFO, "sbrk(inc=%zd) = 0x%" PRIxPTR "",
            increment, (uintptr_t) result);
  } else if (evt.before_valid) {
    // We use PRIxPTR not just '%p' to avoid deadlocks
    // in pretty-printing of NULL as "nil".
    // TODO(maxim): instead should use a safe snprintf reimplementation
    RAW_LOG(INFO, "munmap(start=0x%" PRIxPTR ", len=%zu)",
            (uintptr_t) evt.before_address, evt.before_length);
  }
}

//----------------------------------------------------------------------
// Starting/stopping/dumping
//----------------------------------------------------------------------

extern "C" void HeapProfilerStart(const char* prefix) {
  SpinLockHolder l(&heap_lock);

  if (is_on) return;

  is_on = true;

  RAW_VLOG(0, "Starting to track the heap");

  // This should be done before the hooks are set up, since it should
  // call new, and we want that to be accounted for correctly.
  MallocExtension::Initialize();

  if (FLAGS_only_mmap_profile) {
    FLAGS_mmap_profile = true;
  }

  if (FLAGS_mmap_profile) {
    // Ask MemoryRegionMap to record all mmap, mremap, and sbrk
    // call stack traces of at least size kMaxStackDepth:
    MemoryRegionMap::Init(HeapProfileTable::kMaxStackDepth,
                          /* use_buckets */ true);
  }

  if (FLAGS_mmap_log) {
    // Install our hooks to do the logging:
    tcmalloc::HookMMapEvents(&mmap_logging_hook_space, LogMappingEvent);
  }

  heap_profiler_memory =
      LowLevelAlloc::NewArena(0, LowLevelAlloc::DefaultArena());

  // Reserve space now for the heap profiler, so we can still write a
  // heap profile even if the application runs out of memory.
  global_profiler_buffer =
      reinterpret_cast<char*>(ProfilerMalloc(kProfileBufferSize));

  heap_profile = new (ProfilerMalloc(sizeof(HeapProfileTable)))
      HeapProfileTable(ProfilerMalloc, ProfilerFree, FLAGS_mmap_profile);

  last_dump_alloc = 0;
  last_dump_free = 0;
  high_water_mark = 0;
  last_dump_time = 0;

  // We do not reset dump_count, so if the user does a sequence of
  // HeapProfilerStart/HeapProfilerStop calls, we will get a continuous
  // sequence of profiles.

  if (FLAGS_only_mmap_profile == false) {
    // Now set the hooks that capture new/delete and malloc/free.
    RAW_CHECK(MallocHook::AddNewHook(&NewHook), "");
    RAW_CHECK(MallocHook::AddDeleteHook(&DeleteHook), "");
  }

  // Copy filename prefix
  RAW_DCHECK(filename_prefix == NULL, "");
  const size_t prefix_length = strlen(prefix);
  filename_prefix = reinterpret_cast<char*>(ProfilerMalloc(prefix_length + 1));
  memcpy(filename_prefix, prefix, prefix_length);
  filename_prefix[prefix_length] = '\0';
}

extern "C" int IsHeapProfilerRunning() {
  SpinLockHolder l(&heap_lock);
  return is_on ? 1 : 0;  // return an int, because C code doesn't have bool
}

extern "C" void HeapProfilerStop() {
  SpinLockHolder l(&heap_lock);

  if (!is_on) return;

  if (FLAGS_only_mmap_profile == false) {
    // Unset our new/delete hooks, checking they were set:
    RAW_CHECK(MallocHook::RemoveNewHook(&NewHook), "");
    RAW_CHECK(MallocHook::RemoveDeleteHook(&DeleteHook), "");
  }
  if (FLAGS_mmap_log) {
    // Restore mmap/sbrk hooks, checking that our hooks were set:
    tcmalloc::UnHookMMapEvents(&mmap_logging_hook_space);
  }

  // free profile
  heap_profile->~HeapProfileTable();
  ProfilerFree(heap_profile);
  heap_profile = NULL;

  // free output-buffer memory
  ProfilerFree(global_profiler_buffer);

  // free prefix
  ProfilerFree(filename_prefix);
  filename_prefix = NULL;

  if (!LowLevelAlloc::DeleteArena(heap_profiler_memory)) {
    RAW_LOG(FATAL, "Memory leak in HeapProfiler:");
  }

  if (FLAGS_mmap_profile) {
    MemoryRegionMap::Shutdown();
  }

  is_on = false;
}

extern "C" void HeapProfilerDump(const char* reason) {
  SpinLockHolder l(&heap_lock);
  if (is_on && !dumping) {
    DumpProfileLocked(reason);
  }
}

// Signal handler that is registered when a user-selectable signal
// number is defined in the environment variable HEAPPROFILESIGNAL.
static void HeapProfilerDumpSignal(int signal_number) {
  (void)signal_number;
  if (!heap_lock.TryLock()) {
    return;
  }
  if (is_on && !dumping) {
    DumpProfileLocked("signal");
  }
  heap_lock.Unlock();
}
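
// Illustrative use of the signal switch (an assumed shell session):
//
//   $ HEAPPROFILE=/tmp/heapprof HEAPPROFILESIGNAL=12 ./a.out &
//   $ kill -12 $!    # ask the running process to dump a profile now
//
// Note the handler only TryLock()s heap_lock, so a dump request that
// races with an in-progress (de)allocation is skipped rather than
// risking a deadlock inside a signal handler.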

//----------------------------------------------------------------------
// Initialization/finalization code
//----------------------------------------------------------------------

// Initialization code
static void HeapProfilerInit() {
  // Everything after this point is for setting up the profiler based on envvar
  char fname[PATH_MAX];
  if (!GetUniquePathFromEnv("HEAPPROFILE", fname)) {
    return;
  }
  // We do a uid check so we don't write out files in a setuid executable.
#ifdef HAVE_GETEUID
  if (getuid() != geteuid()) {
    RAW_LOG(WARNING, ("HeapProfiler: ignoring HEAPPROFILE because "
                      "program seems to be setuid\n"));
    return;
  }
#endif

  char* signal_number_str = getenv("HEAPPROFILESIGNAL");
  if (signal_number_str != NULL) {
    long int signal_number = strtol(signal_number_str, NULL, 10);
    intptr_t old_signal_handler = reinterpret_cast<intptr_t>(
        signal(signal_number, HeapProfilerDumpSignal));
    if (old_signal_handler == reinterpret_cast<intptr_t>(SIG_ERR)) {
      RAW_LOG(FATAL,
              "Failed to set signal. Perhaps signal number %s is invalid\n",
              signal_number_str);
    } else if (old_signal_handler == 0) {
      RAW_LOG(INFO, "Using signal %d as heap profiling switch",
              static_cast<int>(signal_number));
    } else {
      RAW_LOG(FATAL, "Signal %d already in use\n",
              static_cast<int>(signal_number));
    }
  }

  HeapProfileTable::CleanupOldProfiles(fname);

  HeapProfilerStart(fname);
}

// class used for finalization -- dumps the heap-profile at program exit
struct HeapProfileEndWriter {
  ~HeapProfileEndWriter() {
    char buf[128];
    if (heap_profile) {
      const HeapProfileTable::Stats& total = heap_profile->total();
      const int64_t inuse_bytes = total.alloc_size - total.free_size;

      if ((inuse_bytes >> 20) > 0) {
        snprintf(buf, sizeof(buf), ("Exiting, %" PRId64 " MB in use"),
                 inuse_bytes >> 20);
      } else if ((inuse_bytes >> 10) > 0) {
        snprintf(buf, sizeof(buf), ("Exiting, %" PRId64 " kB in use"),
                 inuse_bytes >> 10);
      } else {
        snprintf(buf, sizeof(buf), ("Exiting, %" PRId64 " bytes in use"),
                 inuse_bytes);
      }
    } else {
      snprintf(buf, sizeof(buf), ("Exiting"));
    }
    HeapProfilerDump(buf);
  }
};

// We want to make sure tcmalloc is up and running before starting the profiler
static const TCMallocGuard tcmalloc_initializer;
REGISTER_MODULE_INITIALIZER(heapprofiler, HeapProfilerInit());
static HeapProfileEndWriter heap_profile_end_writer;
192
3party/gperftools/src/internal_logging.cc
Normal file
192
3party/gperftools/src/internal_logging.cc
Normal file
@ -0,0 +1,192 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Sanjay Ghemawat <opensource@google.com>

#include <config.h>
#include "internal_logging.h"
#include <stdarg.h>   // for va_end, va_start
#include <stdio.h>    // for vsnprintf, va_list, etc
#include <stdlib.h>   // for abort
#include <string.h>   // for strlen, memcpy
#ifdef HAVE_UNISTD_H
#include <unistd.h>   // for write()
#endif

#include <gperftools/malloc_extension.h>
#include "base/logging.h"   // for perftools_vsnprintf
#include "base/spinlock.h"  // for SpinLockHolder, SpinLock

// Variables for storing crash output.  Allocated statically since we
// may not be able to heap-allocate while crashing.
static SpinLock crash_lock(base::LINKER_INITIALIZED);
static bool crashed = false;
static const int kStatsBufferSize = 16 << 10;
static char stats_buffer[kStatsBufferSize] = { 0 };

namespace tcmalloc {

static void WriteMessage(const char* msg, int length) {
  write(STDERR_FILENO, msg, length);
}

void (*log_message_writer)(const char* msg, int length) = WriteMessage;

class Logger {
 public:
  bool Add(const LogItem& item);
  bool AddStr(const char* str, int n);
  bool AddNum(uint64_t num, int base);  // base must be 10 or 16.

  static const int kBufSize = 200;
  char* p_;
  char* end_;
  char buf_[kBufSize];
};

void Log(LogMode mode, const char* filename, int line,
         LogItem a, LogItem b, LogItem c, LogItem d) {
  Logger state;
  state.p_ = state.buf_;
  state.end_ = state.buf_ + sizeof(state.buf_);
  state.AddStr(filename, strlen(filename))
      && state.AddStr(":", 1)
      && state.AddNum(line, 10)
      && state.AddStr("]", 1)
      && state.Add(a)
      && state.Add(b)
      && state.Add(c)
      && state.Add(d);

  // Terminate with newline
  if (state.p_ >= state.end_) {
    state.p_ = state.end_ - 1;
  }
  *state.p_ = '\n';
  state.p_++;

  int msglen = state.p_ - state.buf_;
  if (mode == kLog) {
    (*log_message_writer)(state.buf_, msglen);
    return;
  }

  bool first_crash = false;
  {
    SpinLockHolder l(&crash_lock);
    if (!crashed) {
      crashed = true;
      first_crash = true;
    }
  }

  (*log_message_writer)(state.buf_, msglen);
  if (first_crash && mode == kCrashWithStats) {
    MallocExtension::instance()->GetStats(stats_buffer, kStatsBufferSize);
    (*log_message_writer)(stats_buffer, strlen(stats_buffer));
  }

  abort();
}
|
||||
|
||||
bool Logger::Add(const LogItem& item) {
  // Separate items with spaces
  if (p_ < end_) {
    *p_ = ' ';
    p_++;
  }

  switch (item.tag_) {
    case LogItem::kStr:
      return AddStr(item.u_.str, strlen(item.u_.str));
    case LogItem::kUnsigned:
      return AddNum(item.u_.unum, 10);
    case LogItem::kSigned:
      if (item.u_.snum < 0) {
        // The cast to uint64_t is intentionally before the negation
        // so that we do not attempt to negate -2^63.
        return AddStr("-", 1)
            && AddNum(- static_cast<uint64_t>(item.u_.snum), 10);
      } else {
        return AddNum(static_cast<uint64_t>(item.u_.snum), 10);
      }
    case LogItem::kPtr:
      return AddStr("0x", 2)
          && AddNum(reinterpret_cast<uintptr_t>(item.u_.ptr), 16);
    default:
      return false;
  }
}

bool Logger::AddStr(const char* str, int n) {
  if (end_ - p_ < n) {
    return false;
  } else {
    memcpy(p_, str, n);
    p_ += n;
    return true;
  }
}

bool Logger::AddNum(uint64_t num, int base) {
  static const char kDigits[] = "0123456789abcdef";
  char space[22];  // more than enough for 2^64 in smallest supported base (10)
  char* end = space + sizeof(space);
  char* pos = end;
  do {
    pos--;
    *pos = kDigits[num % base];
    num /= base;
  } while (num > 0 && pos > space);
  return AddStr(pos, end - pos);
}

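// --- Illustrative walk-through (editor's addition, not part of the original) ---
// AddNum fills 'space' from the back, so the digits come out in the right
// order without a separate reversal pass.  For AddNum(255, 16):
//   iteration 1: *--pos = kDigits[255 % 16] = 'f', num becomes 15
//   iteration 2: *--pos = kDigits[15 % 16]  = 'f', num becomes 0
// and AddStr(pos, 2) then appends "ff" to the output buffer.
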
}  // end tcmalloc namespace

void TCMalloc_Printer::printf(const char* format, ...) {
  if (left_ > 0) {
    va_list ap;
    va_start(ap, format);
    const int r = perftools_vsnprintf(buf_, left_, format, ap);
    va_end(ap);
    if (r < 0) {
      // Perhaps an old glibc that returns -1 on truncation?
      left_ = 0;
    } else if (r > left_) {
      // Truncation
      left_ = 0;
    } else {
      left_ -= r;
      buf_ += r;
    }
  }
}
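
// --- Illustrative usage (editor's sketch, not part of the original file) ---
// A minimal example of how the pieces above fit together, assuming only the
// interfaces declared in internal_logging.h; the buffer name and the values
// are hypothetical.
//
//   char buf[128];
//   TCMalloc_Printer printer(buf, sizeof(buf));
//   printer.printf("span size = %d", 42);   // writes into buf, tracks space
//
//   // Log() accepts up to four LogItems; unused ones default to empty:
//   tcmalloc::Log(tcmalloc::kLog, __FILE__, __LINE__,
//                 "allocated", static_cast<size_t>(1024));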
148
3party/gperftools/src/internal_logging.h
Normal file
@ -0,0 +1,148 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Sanjay Ghemawat <opensource@google.com>
//
// Internal logging and related utility routines.

#ifndef TCMALLOC_INTERNAL_LOGGING_H_
#define TCMALLOC_INTERNAL_LOGGING_H_

#include <config.h>
#include <stddef.h>                     // for size_t
#include <stdint.h>

//-------------------------------------------------------------------
// Utility routines
//-------------------------------------------------------------------

// Safe logging helper: we write directly to the stderr file
// descriptor and avoid FILE buffering because that may invoke
// malloc().
//
// Example:
//   Log(kLog, __FILE__, __LINE__, "error", bytes);

namespace tcmalloc {
enum LogMode {
  kLog,            // Just print the message
  kCrash,          // Print the message and crash
  kCrashWithStats  // Print the message, some stats, and crash
};

class Logger;

// A LogItem holds any of the argument types that can be passed to Log()
class LogItem {
 public:
  LogItem()                     : tag_(kEnd)      { }
  LogItem(const char* v)        : tag_(kStr)      { u_.str = v; }
  LogItem(int v)                : tag_(kSigned)   { u_.snum = v; }
  LogItem(long v)               : tag_(kSigned)   { u_.snum = v; }
  LogItem(long long v)          : tag_(kSigned)   { u_.snum = v; }
  LogItem(unsigned int v)       : tag_(kUnsigned) { u_.unum = v; }
  LogItem(unsigned long v)      : tag_(kUnsigned) { u_.unum = v; }
  LogItem(unsigned long long v) : tag_(kUnsigned) { u_.unum = v; }
  LogItem(const void* v)        : tag_(kPtr)      { u_.ptr = v; }
 private:
  friend class Logger;
  enum Tag {
    kStr,
    kSigned,
    kUnsigned,
    kPtr,
    kEnd
  };
  Tag tag_;
  union {
    const char* str;
    const void* ptr;
    int64_t snum;
    uint64_t unum;
  } u_;
};

extern PERFTOOLS_DLL_DECL void Log(LogMode mode, const char* filename, int line,
                                   LogItem a, LogItem b = LogItem(),
                                   LogItem c = LogItem(), LogItem d = LogItem());

// Tests can override this function to collect logging messages.
extern PERFTOOLS_DLL_DECL void (*log_message_writer)(const char* msg, int length);

}  // end tcmalloc namespace

// Like assert(), but executed even in NDEBUG mode
#undef CHECK_CONDITION
#define CHECK_CONDITION(cond)                                            \
do {                                                                     \
  if (!(cond)) {                                                         \
    ::tcmalloc::Log(::tcmalloc::kCrash, __FILE__, __LINE__, #cond);      \
    for (;;) {} /* unreachable */                                        \
  }                                                                      \
} while (0)

#define CHECK_CONDITION_PRINT(cond, str)                                 \
do {                                                                     \
  if (!(cond)) {                                                         \
    ::tcmalloc::Log(::tcmalloc::kCrash, __FILE__, __LINE__, str);        \
  }                                                                      \
} while (0)

// Our own version of assert() so we can avoid hanging by trying to do
// all kinds of goofy printing while holding the malloc lock.
#ifndef NDEBUG
#define ASSERT(cond)            CHECK_CONDITION(cond)
#define ASSERT_PRINT(cond, str) CHECK_CONDITION_PRINT(cond, str)
#else
#define ASSERT(cond)            ((void) 0)
#define ASSERT_PRINT(cond, str) ((void) 0)
#endif

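// --- Illustrative usage (editor's addition, not part of the original) ---
// CHECK_CONDITION is meant for paths where the malloc lock may be held, so
// it must not allocate.  A hypothetical call site (FetchSpan is made up):
//
//   span = FetchSpan(...);          // hypothetical helper
//   CHECK_CONDITION(span != NULL);  // crashes via Log(kCrash, ...) if false
//   ASSERT(span->length > 0);       // checked only in debug (!NDEBUG) builds
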
// Print into buffer
class TCMalloc_Printer {
 private:
  char* buf_;           // Where we should write next
  int   left_;          // Space left in buffer (including space for \0)

 public:
  // REQUIRES: "length > 0"
  TCMalloc_Printer(char* buf, int length) : buf_(buf), left_(length) {
    buf[0] = '\0';
  }

  void printf(const char* format, ...)
#ifdef HAVE___ATTRIBUTE__
    __attribute__ ((__format__ (__printf__, 2, 3)))
#endif
;
};

#endif  // TCMALLOC_INTERNAL_LOGGING_H_
99
3party/gperftools/src/libc_override.h
Normal file
@ -0,0 +1,99 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2011, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Craig Silverstein <opensource@google.com>
//
// This .h file imports the code that causes tcmalloc to override libc
// versions of malloc/free/new/delete/etc.  That is, it provides the
// logic that makes it so calls to malloc(10) go through tcmalloc,
// rather than the default (libc) malloc.
//
// This file also provides a method, ReplaceSystemAlloc(), that every
// libc_override_*.h file it #includes is required to provide.  This
// is called when first setting up tcmalloc -- that is, when a global
// constructor in tcmalloc.cc is executed -- to do any initialization
// work that may be required for this OS.  (Note we cannot entirely
// control when tcmalloc is initialized, and the system may do some
// mallocs and frees before this routine is called.)  It may be a
// noop.
//
// Every libc has its own way of doing this, and sometimes the compiler
// matters too, so we have a different file for each libc, and often
// for different compilers and OS's.

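// --- Illustrative sketch (editor's addition, not part of the original) ---
// The contract each libc_override_*.h below must satisfy is small: define
// ReplaceSystemAlloc().  A hypothetical backend for a platform where
// link-time symbol replacement already does all the work could be just:
//
//   // libc_override_hypothetical.h
//   static void ReplaceSystemAlloc() { }   // nothing to do at startup
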
#ifndef TCMALLOC_LIBC_OVERRIDE_INL_H_
#define TCMALLOC_LIBC_OVERRIDE_INL_H_

#include <config.h>
#ifdef HAVE_FEATURES_H
#include <features.h>   // for __GLIBC__
#endif
#include <gperftools/tcmalloc.h>

#if __cplusplus >= 201103L || (defined(_MSC_VER) && _MSC_VER >= 1900)
#define CPP_NOTHROW noexcept
#define CPP_BADALLOC
#else
#define CPP_NOTHROW throw()
#define CPP_BADALLOC throw(std::bad_alloc)
#endif

static void ReplaceSystemAlloc();  // defined in the .h files below

// For windows, there are two ways to get tcmalloc.  If we're
// patching, then src/windows/patch_function.cc will do the necessary
// overriding here.  Otherwise, we're doing the 'redefine' trick, where
// we remove malloc/new/etc from msvcrt.dll, and just need to define
// them now.
#if defined(_WIN32) && defined(WIN32_DO_PATCHING)
void PatchWindowsFunctions();   // in src/windows/patch_function.cc
static void ReplaceSystemAlloc() { PatchWindowsFunctions(); }

#elif defined(_WIN32) && !defined(WIN32_DO_PATCHING)
#include "libc_override_redefine.h"

#elif defined(__APPLE__)
#include "libc_override_osx.h"

#elif defined(__GLIBC__)
#include "libc_override_glibc.h"

// Not all gcc systems necessarily support weak symbols, but all the
// ones I know of do, so for now just assume they all do.
#elif defined(__GNUC__)
#include "libc_override_gcc_and_weak.h"

#else
#error Need to add support for your libc/OS here

#endif

#endif  // TCMALLOC_LIBC_OVERRIDE_INL_H_
62
3party/gperftools/src/libc_override_aix.h
Normal file
@ -0,0 +1,62 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2021, IBM Ltd.
// All rights reserved.
//
// [Standard BSD 3-clause license header, identical to the preceding
// files in this commit; elided here.]

// ---
// Author: Chris Cambly <ccambly@ca.ibm.com>
//
// Used to override malloc routines on AIX.

#ifndef TCMALLOC_LIBC_OVERRIDE_AIX_INL_H_
#define TCMALLOC_LIBC_OVERRIDE_AIX_INL_H_

#ifndef _AIX
# error libc_override_aix.h is for AIX systems only.
#endif

extern "C" {
|
||||
// AIX user-defined malloc replacement routines
|
||||
void* __malloc__(size_t size) __THROW ALIAS(tc_malloc);
|
||||
void __free__(void* ptr) __THROW ALIAS(tc_free);
|
||||
void* __realloc__(void* ptr, size_t size) __THROW ALIAS(tc_realloc);
|
||||
void* __calloc__(size_t n, size_t size) __THROW ALIAS(tc_calloc);
|
||||
int __posix_memalign__(void** r, size_t a, size_t s) __THROW ALIAS(tc_posix_memalign);
|
||||
int __mallopt__(int cmd, int value) __THROW ALIAS(tc_mallopt);
|
||||
#ifdef HAVE_STRUCT_MALLINFO
|
||||
struct mallinfo __mallinfo__(void) __THROW ALIAS(tc_mallinfo);
|
||||
#endif
|
||||
#ifdef HAVE_STRUCT_MALLINFO2
|
||||
struct mallinfo2 __mallinfo2__(void) __THROW ALIAS(tc_mallinfo2);
|
||||
#endif
|
||||
void __malloc_init__(void) { tc_free(tc_malloc(1));}
|
||||
void* __malloc_prefork_lock__(void) { /* nothing to lock */ }
|
||||
void* __malloc_postfork_unlock__(void) { /* nothing to unlock */}
|
||||
} // extern "C"
|
||||
|
||||
#endif // TCMALLOC_LIBC_OVERRIDE_AIX_INL_H_
|
261
3party/gperftools/src/libc_override_gcc_and_weak.h
Normal file
@ -0,0 +1,261 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2011, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Craig Silverstein <opensource@google.com>
//
// Used to override malloc routines on systems that define the
// memory allocation routines to be weak symbols in their libc
// (almost all unix-based systems are like this), on gcc, which
// supports the 'alias' attribute.

#ifndef TCMALLOC_LIBC_OVERRIDE_GCC_AND_WEAK_INL_H_
#define TCMALLOC_LIBC_OVERRIDE_GCC_AND_WEAK_INL_H_

#ifdef HAVE_SYS_CDEFS_H
#include <sys/cdefs.h>  // for __THROW
#endif
#include <gperftools/tcmalloc.h>

#include "getenv_safe.h" // TCMallocGetenvSafe
#include "base/commandlineflags.h"

#ifndef __THROW    // I guess we're not on a glibc-like system
# define __THROW   // __THROW is just an optimization, so ok to make it ""
#endif

#ifndef __GNUC__
# error libc_override_gcc_and_weak.h is for gcc distributions only.
#endif

#define ALIAS(tc_fn)   __attribute__ ((alias (#tc_fn), used))

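// --- Illustrative note (editor's addition, not part of the original) ---
// ALIAS turns a declaration into an alias for the named tcmalloc entry
// point.  For example:
//
//   void* malloc(size_t size) __THROW ALIAS(tc_malloc);
//
// expands (on glibc) to roughly:
//
//   void* malloc(size_t size) throw() __attribute__ ((alias ("tc_malloc"), used));
//
// so the strong symbol 'malloc' is emitted as another name for tc_malloc's
// definition, overriding libc's weak malloc at link time.
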
void* operator new(size_t size) CPP_BADALLOC      ALIAS(tc_new);
void operator delete(void* p) CPP_NOTHROW         ALIAS(tc_delete);
void* operator new[](size_t size) CPP_BADALLOC    ALIAS(tc_newarray);
void operator delete[](void* p) CPP_NOTHROW       ALIAS(tc_deletearray);
void* operator new(size_t size, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_new_nothrow);
void* operator new[](size_t size, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_newarray_nothrow);
void operator delete(void* p, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_delete_nothrow);
void operator delete[](void* p, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_deletearray_nothrow);

#if defined(ENABLE_SIZED_DELETE)

void operator delete(void *p, size_t size) CPP_NOTHROW
    ALIAS(tc_delete_sized);
void operator delete[](void *p, size_t size) CPP_NOTHROW
    ALIAS(tc_deletearray_sized);

#elif defined(ENABLE_DYNAMIC_SIZED_DELETE) && \
  (__GNUC__ * 100 + __GNUC_MINOR__) >= 405

static void delegate_sized_delete(void *p, size_t s) {
  (operator delete)(p);
}

static void delegate_sized_deletearray(void *p, size_t s) {
  (operator delete[])(p);
}

extern "C" __attribute__((weak))
int tcmalloc_sized_delete_enabled(void);

static bool sized_delete_enabled(void) {
  if (tcmalloc_sized_delete_enabled != 0) {
    return !!tcmalloc_sized_delete_enabled();
  }

  const char *flag = TCMallocGetenvSafe("TCMALLOC_ENABLE_SIZED_DELETE");
  return tcmalloc::commandlineflags::StringToBool(flag, false);
}

extern "C" {

static void *resolve_delete_sized(void) {
  if (sized_delete_enabled()) {
    return reinterpret_cast<void *>(tc_delete_sized);
  }
  return reinterpret_cast<void *>(delegate_sized_delete);
}

static void *resolve_deletearray_sized(void) {
  if (sized_delete_enabled()) {
    return reinterpret_cast<void *>(tc_deletearray_sized);
  }
  return reinterpret_cast<void *>(delegate_sized_deletearray);
}

}

void operator delete(void *p, size_t size) CPP_NOTHROW
    __attribute__((ifunc("resolve_delete_sized")));
void operator delete[](void *p, size_t size) CPP_NOTHROW
    __attribute__((ifunc("resolve_deletearray_sized")));

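// --- Illustrative note (editor's addition, not part of the original) ---
// With __attribute__((ifunc("resolver"))), the dynamic loader calls the
// named resolver once, at symbol-binding time, and binds the symbol to
// whatever function pointer the resolver returns.  A minimal standalone
// sketch of the same technique (all names hypothetical):
//
//   static int fast_impl(int x) { return x * 2; }
//   static int slow_impl(int x) { return x + x; }
//   extern "C" void* resolve_twice(void) {
//     return (void*)(use_fast_path() ? fast_impl : slow_impl);
//   }
//   int twice(int x) __attribute__((ifunc("resolve_twice")));
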
#else /* !ENABLE_SIZED_DELETE && !ENABLE_DYN_SIZED_DELETE */

void operator delete(void *p, size_t size) CPP_NOTHROW
    ALIAS(tc_delete_sized);
void operator delete[](void *p, size_t size) CPP_NOTHROW
    ALIAS(tc_deletearray_sized);

#endif /* !ENABLE_SIZED_DELETE && !ENABLE_DYN_SIZED_DELETE */

#if defined(ENABLE_ALIGNED_NEW_DELETE)

void* operator new(size_t size, std::align_val_t al)
    ALIAS(tc_new_aligned);
void operator delete(void* p, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_delete_aligned);
void* operator new[](size_t size, std::align_val_t al)
    ALIAS(tc_newarray_aligned);
void operator delete[](void* p, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_deletearray_aligned);
void* operator new(size_t size, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_new_aligned_nothrow);
void* operator new[](size_t size, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_newarray_aligned_nothrow);
void operator delete(void* p, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_delete_aligned_nothrow);
void operator delete[](void* p, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW
    ALIAS(tc_deletearray_aligned_nothrow);

#if defined(ENABLE_SIZED_DELETE)

void operator delete(void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_delete_sized_aligned);
void operator delete[](void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_deletearray_sized_aligned);

#else /* defined(ENABLE_SIZED_DELETE) */

#if defined(ENABLE_DYNAMIC_SIZED_DELETE) && \
  (__GNUC__ * 100 + __GNUC_MINOR__) >= 405

static void delegate_sized_aligned_delete(void *p, size_t s, std::align_val_t al) {
  (operator delete)(p, al);
}

static void delegate_sized_aligned_deletearray(void *p, size_t s, std::align_val_t al) {
  (operator delete[])(p, al);
}

extern "C" {

static void *resolve_delete_sized_aligned(void) {
  if (sized_delete_enabled()) {
    return reinterpret_cast<void *>(tc_delete_sized_aligned);
  }
  return reinterpret_cast<void *>(delegate_sized_aligned_delete);
}

static void *resolve_deletearray_sized_aligned(void) {
  if (sized_delete_enabled()) {
    return reinterpret_cast<void *>(tc_deletearray_sized_aligned);
  }
  return reinterpret_cast<void *>(delegate_sized_aligned_deletearray);
}

}

void operator delete(void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    __attribute__((ifunc("resolve_delete_sized_aligned")));
void operator delete[](void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    __attribute__((ifunc("resolve_deletearray_sized_aligned")));

#else /* defined(ENABLE_DYN_SIZED_DELETE) */

void operator delete(void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_delete_sized_aligned);
void operator delete[](void *p, size_t size, std::align_val_t al) CPP_NOTHROW
    ALIAS(tc_deletearray_sized_aligned);

#endif /* defined(ENABLE_DYN_SIZED_DELETE) */

#endif /* defined(ENABLE_SIZED_DELETE) */

#endif /* defined(ENABLE_ALIGNED_NEW_DELETE) */

extern "C" {
|
||||
void* malloc(size_t size) __THROW ALIAS(tc_malloc);
|
||||
void free(void* ptr) __THROW ALIAS(tc_free);
|
||||
void* realloc(void* ptr, size_t size) __THROW ALIAS(tc_realloc);
|
||||
void* calloc(size_t n, size_t size) __THROW ALIAS(tc_calloc);
|
||||
#if __QNXNTO__
|
||||
// QNX has crazy cfree declaration
|
||||
int cfree(void* ptr) { tc_cfree(ptr); return 0; }
|
||||
#else
|
||||
void cfree(void* ptr) __THROW ALIAS(tc_cfree);
|
||||
#endif
|
||||
void* memalign(size_t align, size_t s) __THROW ALIAS(tc_memalign);
|
||||
void* aligned_alloc(size_t align, size_t s) __THROW ALIAS(tc_memalign);
|
||||
void* valloc(size_t size) __THROW ALIAS(tc_valloc);
|
||||
void* pvalloc(size_t size) __THROW ALIAS(tc_pvalloc);
|
||||
int posix_memalign(void** r, size_t a, size_t s) __THROW
|
||||
ALIAS(tc_posix_memalign);
|
||||
#ifndef __UCLIBC__
|
||||
void malloc_stats(void) __THROW ALIAS(tc_malloc_stats);
|
||||
#endif
|
||||
#if __QNXNTO__
|
||||
int mallopt(int, intptr_t) ALIAS(tc_mallopt);
|
||||
#else
|
||||
int mallopt(int cmd, int value) __THROW ALIAS(tc_mallopt);
|
||||
#endif
|
||||
#ifdef HAVE_STRUCT_MALLINFO
|
||||
struct mallinfo mallinfo(void) __THROW ALIAS(tc_mallinfo);
|
||||
#endif
|
||||
#ifdef HAVE_STRUCT_MALLINFO2
|
||||
struct mallinfo2 mallinfo2(void) __THROW ALIAS(tc_mallinfo2);
|
||||
#endif
|
||||
size_t malloc_size(void* p) __THROW ALIAS(tc_malloc_size);
|
||||
#if defined(__ANDROID__)
|
||||
size_t malloc_usable_size(const void* p) __THROW
|
||||
ALIAS(tc_malloc_size);
|
||||
#else
|
||||
size_t malloc_usable_size(void* p) __THROW ALIAS(tc_malloc_size);
|
||||
#endif
|
||||
} // extern "C"
|
||||
|
||||
/* AIX User-defined malloc replacement interface overrides */
|
||||
#if defined(_AIX)
|
||||
#include "libc_override_aix.h"
|
||||
#endif
|
||||
|
||||
#undef ALIAS
|
||||
|
||||
// No need to do anything at tcmalloc-registration time: we do it all
|
||||
// via overriding weak symbols (at link time).
|
||||
static void ReplaceSystemAlloc() { }
|
||||
|
||||
#endif // TCMALLOC_LIBC_OVERRIDE_GCC_AND_WEAK_INL_H_
|
92
3party/gperftools/src/libc_override_glibc.h
Normal file
@ -0,0 +1,92 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2011, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Craig Silverstein <opensource@google.com>
//
// Used to override malloc routines on systems that are using glibc.

#ifndef TCMALLOC_LIBC_OVERRIDE_GLIBC_INL_H_
#define TCMALLOC_LIBC_OVERRIDE_GLIBC_INL_H_

#include <config.h>
#include <features.h>   // for __GLIBC__
#include <gperftools/tcmalloc.h>

#ifndef __GLIBC__
# error libc_override_glibc.h is for glibc distributions only.
#endif

// In glibc, the memory-allocation methods are weak symbols, so we can
// just override them with our own.  If we're using gcc, we can use
// __attribute__((alias)) to do the overriding easily (exception:
// Mach-O, which doesn't support aliases).  Otherwise we have to use a
// function call.
#if !defined(__GNUC__) || defined(__MACH__)

// This also defines ReplaceSystemAlloc().
# include "libc_override_redefine.h"  // defines functions malloc()/etc

#else  // #if !defined(__GNUC__) || defined(__MACH__)

// If we get here, we're a gcc system, so do all the overriding we do
// with gcc.  This does the overriding of all the 'normal' memory
// allocation.  This also defines ReplaceSystemAlloc().
# include "libc_override_gcc_and_weak.h"

// We also have to do some glibc-specific overriding.  Some library
// routines on RedHat 9 allocate memory using malloc() and free it
// using __libc_free() (or vice-versa).  Since we provide our own
// implementations of malloc/free, we need to make sure that the
// __libc_XXX variants (defined as part of glibc) also point to the
// same implementations.  Since it only matters for redhat, we
// do it inside the gcc #ifdef, since redhat uses gcc.
// TODO(csilvers): only do this if we detect we're an old enough glibc?

#define ALIAS(tc_fn)   __attribute__ ((alias (#tc_fn)))
extern "C" {
  void* __libc_malloc(size_t size)                ALIAS(tc_malloc);
  void __libc_free(void* ptr)                     ALIAS(tc_free);
  void* __libc_realloc(void* ptr, size_t size)    ALIAS(tc_realloc);
  void* __libc_calloc(size_t n, size_t size)      ALIAS(tc_calloc);
  void __libc_cfree(void* ptr)                    ALIAS(tc_cfree);
  void* __libc_memalign(size_t align, size_t s)   ALIAS(tc_memalign);
  void* __libc_valloc(size_t size)                ALIAS(tc_valloc);
  void* __libc_pvalloc(size_t size)               ALIAS(tc_pvalloc);
  int __posix_memalign(void** r, size_t a, size_t s) ALIAS(tc_posix_memalign);
}   // extern "C"
#undef ALIAS

#endif  // #if defined(__GNUC__) && !defined(__MACH__)

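// --- Illustrative note (editor's addition, not part of the original) ---
// The mixed-API case the __libc_* aliases guard against looks like:
//
//   void* p = malloc(16);   // resolves to tc_malloc via the weak override
//   __libc_free(p);         // without the alias, this would be glibc's free
//
// With the aliases above, both calls land in tcmalloc, so the pointer is
// freed by the same allocator that created it.
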
// No need to write ReplaceSystemAlloc(); one of the #includes above
// did it for us.

#endif  // TCMALLOC_LIBC_OVERRIDE_GLIBC_INL_H_
314
3party/gperftools/src/libc_override_osx.h
Normal file
@ -0,0 +1,314 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2011, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Craig Silverstein <opensource@google.com>
//
// Used to override malloc routines on OS X systems.  We use the
// malloc-zone functionality built into OS X to register our malloc
// routine.
//
// 1) We used to use the normal 'override weak libc malloc/etc'
// technique for OS X.  This is not optimal because mach does not
// support the 'alias' attribute, so we had to have forwarding
// functions.  It also does not work very well with OS X shared
// libraries (dylibs) -- in general, the shared libs don't use
// tcmalloc unless run with the DYLD_FORCE_FLAT_NAMESPACE envvar.
//
// 2) Another approach would be to use an interposition array:
//      static const interpose_t interposers[] __attribute__((section("__DATA, __interpose"))) = {
//        { (void *)tc_malloc, (void *)malloc },
//        { (void *)tc_free, (void *)free },
//      };
//    This requires the user to set the DYLD_INSERT_LIBRARIES envvar, so
//    is not much better.
//
// 3) Registering a new malloc zone avoids all these issues:
//      http://www.opensource.apple.com/source/Libc/Libc-583/include/malloc/malloc.h
//      http://www.opensource.apple.com/source/Libc/Libc-583/gen/malloc.c
//    If we make tcmalloc the default malloc zone (undocumented but
//    possible) then all new allocs use it, even those in shared
//    libraries.  Allocs done before tcmalloc was installed, or in libs
//    that aren't using tcmalloc for some reason, will correctly go
//    through the malloc-zone interface when free-ing, and will pick up
//    the libc free rather than tcmalloc free.  So it should "never"
//    cause a crash (famous last words).
//
// 4) The routines one must define for one's own malloc have changed
// between OS X versions.  This requires some hoops on our part, but
// is only really annoying when it comes to posix_memalign.  The right
// behavior there depends on what OS version tcmalloc was compiled on,
// but also what OS version the program is running on.  For now, we
// punt and don't implement our own posix_memalign.  Apps that really
// care can use tc_posix_memalign directly.

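// --- Illustrative check (editor's addition, not part of the original) ---
// Once ReplaceSystemAlloc() below has run, the zone swap can be observed
// at runtime by asking the default zone for its name, which is expected
// to be "tcmalloc" (set on the zone further down in this file):
//
//   #include <malloc/malloc.h>
//   #include <stdio.h>
//   ...
//   printf("default zone: %s\n",
//          malloc_get_zone_name(malloc_default_zone()));
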
#ifndef TCMALLOC_LIBC_OVERRIDE_OSX_INL_H_
#define TCMALLOC_LIBC_OVERRIDE_OSX_INL_H_

#include <config.h>
#ifdef HAVE_FEATURES_H
#include <features.h>
#endif
#include <gperftools/tcmalloc.h>

#if !defined(__APPLE__)
# error libc_override_osx.h is for OS X distributions only.
#endif

#include <AvailabilityMacros.h>
#include <malloc/malloc.h>

namespace tcmalloc {
  void CentralCacheLockAll();
  void CentralCacheUnlockAll();
}

// from AvailabilityMacros.h
#if defined(MAC_OS_X_VERSION_10_6) && \
    MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
extern "C" {
  // This function is only available on 10.6 (and later) but the
  // LibSystem headers do not use AvailabilityMacros.h to handle weak
  // importing automatically.  This prototype is a copy of the one in
  // <malloc/malloc.h> with the WEAK_IMPORT_ATTRIBUTE added.
  extern malloc_zone_t *malloc_default_purgeable_zone(void)
      WEAK_IMPORT_ATTRIBUTE;
}
#endif

// We need to provide wrappers around all the libc functions.
namespace {
size_t mz_size(malloc_zone_t* zone, const void* ptr) {
  if (MallocExtension::instance()->GetOwnership(ptr) != MallocExtension::kOwned)
    return 0;  // malloc_zone semantics: return 0 if we don't own the memory

  // TODO(csilvers): change this method to take a const void*, one day.
  return MallocExtension::instance()->GetAllocatedSize(const_cast<void*>(ptr));
}

ATTRIBUTE_SECTION(google_malloc) void* mz_malloc(malloc_zone_t* zone, size_t size) {
  return tc_malloc(size);
}

ATTRIBUTE_SECTION(google_malloc) void* mz_calloc(malloc_zone_t* zone, size_t num_items, size_t size) {
  return tc_calloc(num_items, size);
}

ATTRIBUTE_SECTION(google_malloc) void* mz_valloc(malloc_zone_t* zone, size_t size) {
  return tc_valloc(size);
}

ATTRIBUTE_SECTION(google_malloc) void mz_free(malloc_zone_t* zone, void* ptr) {
  return tc_free(ptr);
}

ATTRIBUTE_SECTION(google_malloc) void mz_free_definite_size(malloc_zone_t* zone, void *ptr, size_t size) {
  return tc_free(ptr);
}

ATTRIBUTE_SECTION(google_malloc) void* mz_realloc(malloc_zone_t* zone, void* ptr, size_t size) {
  return tc_realloc(ptr, size);
}

ATTRIBUTE_SECTION(google_malloc) void* mz_memalign(malloc_zone_t* zone, size_t align, size_t size) {
  return tc_memalign(align, size);
}

void mz_destroy(malloc_zone_t* zone) {
  // A no-op -- we will not be destroyed!
}

// malloc_introspection callbacks.  I'm not clear on what all of these do.
kern_return_t mi_enumerator(task_t task, void *,
                            unsigned type_mask, vm_address_t zone_address,
                            memory_reader_t reader,
                            vm_range_recorder_t recorder) {
  // Should enumerate all the pointers we have.  Seems like a lot of work.
  return KERN_FAILURE;
}

size_t mi_good_size(malloc_zone_t *zone, size_t size) {
  // I think it's always safe to return size, but we maybe could do better.
  return size;
}

boolean_t mi_check(malloc_zone_t *zone) {
  return MallocExtension::instance()->VerifyAllMemory();
}

void mi_print(malloc_zone_t *zone, boolean_t verbose) {
  int bufsize = 8192;
  if (verbose)
    bufsize = 102400;   // I picked this size arbitrarily
  char* buffer = new char[bufsize];
  MallocExtension::instance()->GetStats(buffer, bufsize);
  fprintf(stdout, "%s", buffer);
  delete[] buffer;
}

void mi_log(malloc_zone_t *zone, void *address) {
  // I don't think we support anything like this
}

void mi_force_lock(malloc_zone_t *zone) {
  tcmalloc::CentralCacheLockAll();
}

void mi_force_unlock(malloc_zone_t *zone) {
  tcmalloc::CentralCacheUnlockAll();
}

void mi_statistics(malloc_zone_t *zone, malloc_statistics_t *stats) {
  // TODO(csilvers): figure out how to fill these out
  stats->blocks_in_use = 0;
  stats->size_in_use = 0;
  stats->max_size_in_use = 0;
  stats->size_allocated = 0;
}

boolean_t mi_zone_locked(malloc_zone_t *zone) {
  return false;  // Hopefully unneeded by us!
}

}  // unnamed namespace

// OS X doesn't have pvalloc, cfree, malloc_stats, etc, so we can just
// define our own. :-)  OS X supplies posix_memalign in some versions
// but not others, either strongly or weakly linked, in a way that's
// difficult enough to code to correctly, that I just don't try to
// support either memalign() or posix_memalign().  If you need them
// and are willing to code to tcmalloc, you can use tc_posix_memalign().
extern "C" {
  void cfree(void* p) { tc_cfree(p); }
  void* pvalloc(size_t s) { return tc_pvalloc(s); }
  void malloc_stats(void) { tc_malloc_stats(); }
  int mallopt(int cmd, int v) { return tc_mallopt(cmd, v); }
  // No struct mallinfo on OS X, so don't define mallinfo().
  // An alias for malloc_size(), which OS X defines.
  size_t malloc_usable_size(void* p) { return tc_malloc_size(p); }
}  // extern "C"

static malloc_zone_t *get_default_zone() {
  malloc_zone_t **zones = NULL;
  unsigned int num_zones = 0;

  /*
   * On OSX 10.12, malloc_default_zone returns a special zone that is not
   * present in the list of registered zones.  That zone uses a "lite zone"
   * if one is present (apparently enabled when malloc stack logging is
   * enabled), or the first registered zone otherwise.  In practice this
   * means unless malloc stack logging is enabled, the first registered
   * zone is the default.
   * So get the list of zones to get the first one, instead of relying on
   * malloc_default_zone.
   */
  if (KERN_SUCCESS != malloc_get_all_zones(0, NULL, (vm_address_t**) &zones,
                                           &num_zones)) {
    /* Reset the value in case the failure happened after it was set. */
    num_zones = 0;
  }

  if (num_zones)
    return zones[0];

  return malloc_default_zone();
}

static void ReplaceSystemAlloc() {
  static malloc_introspection_t tcmalloc_introspection;
  memset(&tcmalloc_introspection, 0, sizeof(tcmalloc_introspection));

  tcmalloc_introspection.enumerator = &mi_enumerator;
  tcmalloc_introspection.good_size = &mi_good_size;
  tcmalloc_introspection.check = &mi_check;
  tcmalloc_introspection.print = &mi_print;
  tcmalloc_introspection.log = &mi_log;
  tcmalloc_introspection.force_lock = &mi_force_lock;
  tcmalloc_introspection.force_unlock = &mi_force_unlock;

  static malloc_zone_t tcmalloc_zone;
  memset(&tcmalloc_zone, 0, sizeof(malloc_zone_t));

  // Start with a version 4 zone which is used for OS X 10.4 and 10.5.
  tcmalloc_zone.version = 4;
  tcmalloc_zone.zone_name = "tcmalloc";
  tcmalloc_zone.size = &mz_size;
  tcmalloc_zone.malloc = &mz_malloc;
  tcmalloc_zone.calloc = &mz_calloc;
  tcmalloc_zone.valloc = &mz_valloc;
  tcmalloc_zone.free = &mz_free;
  tcmalloc_zone.realloc = &mz_realloc;
  tcmalloc_zone.destroy = &mz_destroy;
  tcmalloc_zone.batch_malloc = NULL;
  tcmalloc_zone.batch_free = NULL;
  tcmalloc_zone.introspect = &tcmalloc_introspection;

  // from AvailabilityMacros.h
#if defined(MAC_OS_X_VERSION_10_6) && \
    MAC_OS_X_VERSION_MAX_ALLOWED >= MAC_OS_X_VERSION_10_6
  // Switch to version 6 on OSX 10.6 to support memalign.
  tcmalloc_zone.version = 6;
  tcmalloc_zone.memalign = &mz_memalign;
#ifndef __POWERPC__
  tcmalloc_zone.free_definite_size = &mz_free_definite_size;
  tcmalloc_introspection.zone_locked = &mi_zone_locked;
#endif

  // Request the default purgeable zone to force its creation.  The
  // current default zone is registered with the purgeable zone for
  // doing tiny and small allocs.  Sadly, it assumes that the default
  // zone is the szone implementation from OS X and will crash if it
  // isn't.  By creating the zone now, this will be true and changing
  // the default zone won't cause a problem.  This only needs to
  // happen when actually running on OS X 10.6 and higher (note the
  // ifdef above only checks if we were *compiled* with 10.6 or
  // higher; at runtime we have to check if this symbol is defined.)
  if (malloc_default_purgeable_zone) {
    malloc_default_purgeable_zone();
  }
#endif

  // Register the tcmalloc zone.  At this point, it will not be the
  // default zone.
  malloc_zone_register(&tcmalloc_zone);

  // Unregister and reregister the default zone.  Unregistering swaps
  // the specified zone with the last one registered, which for the
  // default zone makes the more recently registered zone the default
  // zone.  The default zone is then re-registered to ensure that
  // allocations made from it earlier will be handled correctly.
  // Things are not guaranteed to work that way, but it's how they work now.
  malloc_zone_t *default_zone = get_default_zone();
  malloc_zone_unregister(default_zone);
  malloc_zone_register(default_zone);
}

#endif  // TCMALLOC_LIBC_OVERRIDE_OSX_INL_H_
134
3party/gperftools/src/libc_override_redefine.h
Normal file
@ -0,0 +1,134 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2011, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Craig Silverstein <opensource@google.com>
//
// Used on systems that don't have their own definition of
// malloc/new/etc.  (Typically this will be a windows msvcrt.dll that
// has been edited to remove the definitions.)  We can just define our
// own as normal functions.
//
// This should also work on systems where all the malloc routines are
// defined as weak symbols, and there's no support for aliasing.

#ifndef TCMALLOC_LIBC_OVERRIDE_REDEFINE_H_
#define TCMALLOC_LIBC_OVERRIDE_REDEFINE_H_

void* operator new(size_t size)             { return tc_new(size);      }
void operator delete(void* p) CPP_NOTHROW   { tc_delete(p);             }
void* operator new[](size_t size)           { return tc_newarray(size); }
void operator delete[](void* p) CPP_NOTHROW { tc_deletearray(p);        }
void* operator new(size_t size, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_new_nothrow(size, nt);
}
void* operator new[](size_t size, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_newarray_nothrow(size, nt);
}
void operator delete(void* ptr, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_delete_nothrow(ptr, nt);
}
void operator delete[](void* ptr, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_deletearray_nothrow(ptr, nt);
}

#ifdef ENABLE_SIZED_DELETE
void operator delete(void* p, size_t s) CPP_NOTHROW   { tc_delete_sized(p, s);      }
void operator delete[](void* p, size_t s) CPP_NOTHROW { tc_deletearray_sized(p, s); }
#endif

#if defined(ENABLE_ALIGNED_NEW_DELETE)

void* operator new(size_t size, std::align_val_t al) {
  return tc_new_aligned(size, al);
}
void operator delete(void* p, std::align_val_t al) CPP_NOTHROW {
  tc_delete_aligned(p, al);
}
void* operator new[](size_t size, std::align_val_t al) {
  return tc_newarray_aligned(size, al);
}
void operator delete[](void* p, std::align_val_t al) CPP_NOTHROW {
  tc_deletearray_aligned(p, al);
}
void* operator new(size_t size, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_new_aligned_nothrow(size, al, nt);
}
void* operator new[](size_t size, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_newarray_aligned_nothrow(size, al, nt);
}
void operator delete(void* ptr, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_delete_aligned_nothrow(ptr, al, nt);
}
void operator delete[](void* ptr, std::align_val_t al, const std::nothrow_t& nt) CPP_NOTHROW {
  return tc_deletearray_aligned_nothrow(ptr, al, nt);
}

#ifdef ENABLE_SIZED_DELETE
void operator delete(void* p, size_t s, std::align_val_t al) CPP_NOTHROW {
  tc_delete_sized_aligned(p, s, al);
}
void operator delete[](void* p, size_t s, std::align_val_t al) CPP_NOTHROW {
  tc_deletearray_sized_aligned(p, s, al);
}
#endif

#endif  // defined(ENABLE_ALIGNED_NEW_DELETE)

extern "C" {
|
||||
void* malloc(size_t s) { return tc_malloc(s); }
|
||||
void free(void* p) { tc_free(p); }
|
||||
void* realloc(void* p, size_t s) { return tc_realloc(p, s); }
|
||||
void* calloc(size_t n, size_t s) { return tc_calloc(n, s); }
|
||||
void cfree(void* p) { tc_cfree(p); }
|
||||
void* memalign(size_t a, size_t s) { return tc_memalign(a, s); }
|
||||
void* aligned_alloc(size_t a, size_t s) { return tc_memalign(a, s); }
|
||||
void* valloc(size_t s) { return tc_valloc(s); }
|
||||
void* pvalloc(size_t s) { return tc_pvalloc(s); }
|
||||
int posix_memalign(void** r, size_t a, size_t s) {
|
||||
return tc_posix_memalign(r, a, s);
|
||||
}
|
||||
void malloc_stats(void) { tc_malloc_stats(); }
|
||||
int mallopt(int cmd, int v) { return tc_mallopt(cmd, v); }
|
||||
#ifdef HAVE_STRUCT_MALLINFO
|
||||
struct mallinfo mallinfo(void) { return tc_mallinfo(); }
|
||||
#endif
|
||||
#ifdef HAVE_STRUCT_MALLINFO2
|
||||
struct mallinfo2 mallinfo2(void) { return tc_mallinfo2(); }
|
||||
#endif
|
||||
size_t malloc_size(void* p) { return tc_malloc_size(p); }
|
||||
size_t malloc_usable_size(void* p) { return tc_malloc_size(p); }
|
||||
} // extern "C"
|
||||
|
||||
// No need to do anything at tcmalloc-registration time: we do it all
|
||||
// via overriding weak symbols (at link time).
|
||||
static void ReplaceSystemAlloc() { }
|
||||
|
||||
#endif // TCMALLOC_LIBC_OVERRIDE_REDEFINE_H_
|
115
3party/gperftools/src/linked_list.h
Normal file
@ -0,0 +1,115 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// [Standard gperftools BSD 3-clause license header, identical to the
// preceding files in this commit; elided here.]

// ---
// Author: Sanjay Ghemawat <opensource@google.com>
//
// Some very basic linked list functions for dealing with using void * as
// storage.

#ifndef TCMALLOC_LINKED_LIST_H_
#define TCMALLOC_LINKED_LIST_H_

#include <stddef.h>

namespace tcmalloc {

inline void *SLL_Next(void *t) {
  return *(reinterpret_cast<void**>(t));
}

inline void SLL_SetNext(void *t, void *n) {
  *(reinterpret_cast<void**>(t)) = n;
}

inline void SLL_Push(void **list, void *element) {
  void *next = *list;
  *list = element;
  SLL_SetNext(element, next);
}

inline void *SLL_Pop(void **list) {
  void *result = *list;
  *list = SLL_Next(*list);
  return result;
}

inline bool SLL_TryPop(void **list, void **rv) {
  void *result = *list;
  if (!result) {
    return false;
  }
  void *next = SLL_Next(*list);
  *list = next;
  *rv = result;
  return true;
}

// Remove N elements from a linked list to which head points.  head will be
// modified to point to the new head.  start and end will point to the first
// and last nodes of the range.  Note that end will point to NULL after this
// function is called.
inline void SLL_PopRange(void **head, int N, void **start, void **end) {
  if (N == 0) {
    *start = NULL;
    *end = NULL;
    return;
  }

  void *tmp = *head;
  for (int i = 1; i < N; ++i) {
    tmp = SLL_Next(tmp);
  }

  *start = *head;
  *end = tmp;
  *head = SLL_Next(tmp);
  // Unlink range from list.
  SLL_SetNext(tmp, NULL);
}

inline void SLL_PushRange(void **head, void *start, void *end) {
  if (!start) return;
  SLL_SetNext(end, *head);
  *head = start;
}

inline size_t SLL_Size(void *head) {
  size_t count = 0;  // use size_t to match the return type
  while (head) {
    count++;
    head = SLL_Next(head);
  }
  return count;
}
} // namespace tcmalloc
|
||||
|
||||
#endif // TCMALLOC_LINKED_LIST_H_
|
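For orientation, a minimal usage sketch of the SLL_* helpers above (not part of the imported sources; the buffer and sizes are illustrative assumptions): the list threads its "next" pointer through the first word of each element, so any chunk of at least sizeof(void*) bytes can be chained with no per-node header.

#include <cassert>
#include "linked_list.h"   // assumes this header's include path

int main() {
  alignas(void*) char blocks[4][32];    // four fake free chunks
  void* free_list = nullptr;            // an empty list is a NULL head
  for (auto& b : blocks) {
    tcmalloc::SLL_Push(&free_list, b);  // next pointer stored in b's first word
  }
  assert(tcmalloc::SLL_Size(free_list) == 4);
  void* p = tcmalloc::SLL_Pop(&free_list);  // LIFO: last pushed comes out first
  assert(p == blocks[3]);
  assert(tcmalloc::SLL_Size(free_list) == 3);
  return 0;
}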
388
3party/gperftools/src/malloc_extension.cc
Normal file
388
3party/gperftools/src/malloc_extension.cc
Normal file
@@ -0,0 +1,388 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#include <config.h>
#include <assert.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>
#include <string>
#include "base/dynamic_annotations.h"
#include "base/sysinfo.h"    // for FillProcSelfMaps
#ifndef NO_HEAP_CHECK
#include "gperftools/heap-checker.h"
#endif
#include "gperftools/malloc_extension.h"
#include "gperftools/malloc_extension_c.h"
#include "base/googleinit.h"

using std::string;
using std::vector;

static void DumpAddressMap(string* result) {
  *result += "\nMAPPED_LIBRARIES:\n";
  // We keep doubling until we get a fit
  const size_t old_resultlen = result->size();
  for (int amap_size = 10240; amap_size < 10000000; amap_size *= 2) {
    result->resize(old_resultlen + amap_size);
    bool wrote_all = false;
    const int bytes_written =
        tcmalloc::FillProcSelfMaps(&((*result)[old_resultlen]), amap_size,
                                   &wrote_all);
    if (wrote_all) {   // we fit!
      (*result)[old_resultlen + bytes_written] = '\0';
      result->resize(old_resultlen + bytes_written);
      return;
    }
  }
  result->reserve(old_resultlen);  // just don't print anything
}

// Note: this routine is meant to be called before threads are spawned.
void MallocExtension::Initialize() {
  static bool initialize_called = false;

  if (initialize_called) return;
  initialize_called = true;

#ifdef __GLIBC__
  // GNU libc++ versions 3.3 and 3.4 obey the environment variables
  // GLIBCPP_FORCE_NEW and GLIBCXX_FORCE_NEW respectively.  Setting
  // one of these variables forces the STL default allocator to call
  // new() or delete() for each allocation or deletion.  Otherwise
  // the STL allocator tries to avoid the high cost of doing
  // allocations by pooling memory internally.  However, tcmalloc
  // does allocations really fast, especially for the types of small
  // items one sees in STL, so it's better off just using us.
  // TODO: control whether we do this via an environment variable?
  setenv("GLIBCPP_FORCE_NEW", "1", false /* no overwrite*/);
  setenv("GLIBCXX_FORCE_NEW", "1", false /* no overwrite*/);

  // Now we need to make the setenv 'stick', which it may not do since
  // the env is flakey before main() is called.  But luckily stl only
  // looks at this env var the first time it tries to do an alloc, and
  // caches what it finds.  So we just cause an stl alloc here.
  string dummy("I need to be allocated");
  dummy += "!";         // so the definition of dummy isn't optimized out
#endif  /* __GLIBC__ */
}

// SysAllocator implementation
SysAllocator::~SysAllocator() {}

// Default implementation -- does nothing
MallocExtension::~MallocExtension() { }
bool MallocExtension::VerifyAllMemory() { return true; }
bool MallocExtension::VerifyNewMemory(const void* p) { return true; }
bool MallocExtension::VerifyArrayNewMemory(const void* p) { return true; }
bool MallocExtension::VerifyMallocMemory(const void* p) { return true; }

bool MallocExtension::GetNumericProperty(const char* property, size_t* value) {
  return false;
}

bool MallocExtension::SetNumericProperty(const char* property, size_t value) {
  return false;
}

void MallocExtension::GetStats(char* buffer, int length) {
  assert(length > 0);
  buffer[0] = '\0';
}

bool MallocExtension::MallocMemoryStats(int* blocks, size_t* total,
                                        int histogram[kMallocHistogramSize]) {
  *blocks = 0;
  *total = 0;
  memset(histogram, 0, sizeof(*histogram) * kMallocHistogramSize);
  return true;
}

void** MallocExtension::ReadStackTraces(int* sample_period) {
  return NULL;
}

void** MallocExtension::ReadHeapGrowthStackTraces() {
  return NULL;
}

void MallocExtension::MarkThreadIdle() {
  // Default implementation does nothing
}

void MallocExtension::MarkThreadBusy() {
  // Default implementation does nothing
}

SysAllocator* MallocExtension::GetSystemAllocator() {
  return NULL;
}

void MallocExtension::SetSystemAllocator(SysAllocator *a) {
  // Default implementation does nothing
}

void MallocExtension::ReleaseToSystem(size_t num_bytes) {
  // Default implementation does nothing
}

void MallocExtension::ReleaseFreeMemory() {
  ReleaseToSystem(static_cast<size_t>(-1));   // SIZE_T_MAX
}

void MallocExtension::SetMemoryReleaseRate(double rate) {
  // Default implementation does nothing
}

double MallocExtension::GetMemoryReleaseRate() {
  return -1.0;
}

size_t MallocExtension::GetEstimatedAllocatedSize(size_t size) {
  return size;
}

size_t MallocExtension::GetAllocatedSize(const void* p) {
  assert(GetOwnership(p) != kNotOwned);
  return 0;
}

MallocExtension::Ownership MallocExtension::GetOwnership(const void* p) {
  return kUnknownOwnership;
}

void MallocExtension::GetFreeListSizes(
    vector<MallocExtension::FreeListInfo>* v) {
  v->clear();
}

size_t MallocExtension::GetThreadCacheSize() {
  return 0;
}

void MallocExtension::MarkThreadTemporarilyIdle() {
  // Default implementation does nothing
}

// The current malloc extension object.

static MallocExtension* current_instance;

static union {
  char chars[sizeof(MallocExtension)];
  void *ptr;
} mallocextension_implementation_space;

static void InitModule() {
  if (current_instance != NULL) {
    return;
  }
  current_instance = new (mallocextension_implementation_space.chars) MallocExtension();
#ifndef NO_HEAP_CHECK
  HeapLeakChecker::IgnoreObject(current_instance);
#endif
}

REGISTER_MODULE_INITIALIZER(malloc_extension_init, InitModule())

MallocExtension* MallocExtension::instance() {
  InitModule();
  return current_instance;
}

void MallocExtension::Register(MallocExtension* implementation) {
  InitModule();
  // When running under valgrind, our custom malloc is replaced with
  // valgrind's one and malloc extensions will not work.  (Note:
  // callers should be responsible for checking that they are the
  // malloc that is really being run, before calling Register.  This
  // is just here as an extra sanity check.)
  if (!RunningOnValgrind()) {
    current_instance = implementation;
  }
}

// -----------------------------------------------------------------------
// Heap sampling support
// -----------------------------------------------------------------------

namespace {

// Accessors
uintptr_t Count(void** entry) {
  return reinterpret_cast<uintptr_t>(entry[0]);
}
uintptr_t Size(void** entry) {
  return reinterpret_cast<uintptr_t>(entry[1]);
}
uintptr_t Depth(void** entry) {
  return reinterpret_cast<uintptr_t>(entry[2]);
}
void* PC(void** entry, int i) {
  return entry[3+i];
}

void PrintCountAndSize(MallocExtensionWriter* writer,
                       uintptr_t count, uintptr_t size) {
  char buf[100];
  snprintf(buf, sizeof(buf),
           "%6" PRIu64 ": %8" PRIu64 " [%6" PRIu64 ": %8" PRIu64 "] @",
           static_cast<uint64>(count),
           static_cast<uint64>(size),
           static_cast<uint64>(count),
           static_cast<uint64>(size));
  writer->append(buf, strlen(buf));
}

void PrintHeader(MallocExtensionWriter* writer,
                 const char* label, void** entries) {
  // Compute the total count and total size
  uintptr_t total_count = 0;
  uintptr_t total_size = 0;
  for (void** entry = entries; Count(entry) != 0; entry += 3 + Depth(entry)) {
    total_count += Count(entry);
    total_size += Size(entry);
  }

  const char* const kTitle = "heap profile: ";
  writer->append(kTitle, strlen(kTitle));
  PrintCountAndSize(writer, total_count, total_size);
  writer->append(" ", 1);
  writer->append(label, strlen(label));
  writer->append("\n", 1);
}

void PrintStackEntry(MallocExtensionWriter* writer, void** entry) {
  PrintCountAndSize(writer, Count(entry), Size(entry));

  for (int i = 0; i < Depth(entry); i++) {
    char buf[32];
    snprintf(buf, sizeof(buf), " %p", PC(entry, i));
    writer->append(buf, strlen(buf));
  }
  writer->append("\n", 1);
}

}  // namespace

void MallocExtension::GetHeapSample(MallocExtensionWriter* writer) {
  int sample_period = 0;
  void** entries = ReadStackTraces(&sample_period);
  if (entries == NULL) {
    const char* const kErrorMsg =
        "This malloc implementation does not support sampling.\n"
        "As of 2005/01/26, only tcmalloc supports sampling, and\n"
        "you are probably running a binary that does not use\n"
        "tcmalloc.\n";
    writer->append(kErrorMsg, strlen(kErrorMsg));
    return;
  }

  char label[32];
  sprintf(label, "heap_v2/%d", sample_period);
  PrintHeader(writer, label, entries);
  for (void** entry = entries; Count(entry) != 0; entry += 3 + Depth(entry)) {
    PrintStackEntry(writer, entry);
  }
  delete[] entries;

  DumpAddressMap(writer);
}

void MallocExtension::GetHeapGrowthStacks(MallocExtensionWriter* writer) {
  void** entries = ReadHeapGrowthStackTraces();
  if (entries == NULL) {
    const char* const kErrorMsg =
        "This malloc implementation does not support "
        "ReadHeapGrowthStackTraces().\n"
        "As of 2005/09/27, only tcmalloc supports this, and you\n"
        "are probably running a binary that does not use tcmalloc.\n";
    writer->append(kErrorMsg, strlen(kErrorMsg));
    return;
  }

  // Do not canonicalize the stack entries, so that we get a
  // time-ordered list of stack traces, which may be useful if the
  // client wants to focus on the latest stack traces.
  PrintHeader(writer, "growth", entries);
  for (void** entry = entries; Count(entry) != 0; entry += 3 + Depth(entry)) {
    PrintStackEntry(writer, entry);
  }
  delete[] entries;

  DumpAddressMap(writer);
}

void MallocExtension::Ranges(void* arg, RangeFunction func) {
  // No callbacks by default
}

// These are C shims that work on the current instance.

#define C_SHIM(fn, retval, paramlist, arglist)                          \
  extern "C" PERFTOOLS_DLL_DECL retval MallocExtension_##fn paramlist { \
    return MallocExtension::instance()->fn arglist;                    \
  }

C_SHIM(VerifyAllMemory, int, (void), ());
C_SHIM(VerifyNewMemory, int, (const void* p), (p));
C_SHIM(VerifyArrayNewMemory, int, (const void* p), (p));
C_SHIM(VerifyMallocMemory, int, (const void* p), (p));
C_SHIM(MallocMemoryStats, int,
       (int* blocks, size_t* total, int histogram[kMallocHistogramSize]),
       (blocks, total, histogram));

C_SHIM(GetStats, void,
       (char* buffer, int buffer_length), (buffer, buffer_length));
C_SHIM(GetNumericProperty, int,
       (const char* property, size_t* value), (property, value));
C_SHIM(SetNumericProperty, int,
       (const char* property, size_t value), (property, value));

C_SHIM(MarkThreadIdle, void, (void), ());
C_SHIM(MarkThreadBusy, void, (void), ());
C_SHIM(ReleaseFreeMemory, void, (void), ());
C_SHIM(ReleaseToSystem, void, (size_t num_bytes), (num_bytes));
C_SHIM(SetMemoryReleaseRate, void, (double rate), (rate));
C_SHIM(GetMemoryReleaseRate, double, (void), ());
C_SHIM(GetEstimatedAllocatedSize, size_t, (size_t size), (size));
C_SHIM(GetAllocatedSize, size_t, (const void* p), (p));
C_SHIM(GetThreadCacheSize, size_t, (void), ());
C_SHIM(MarkThreadTemporarilyIdle, void, (void), ());

// Can't use the shim here because of the need to translate the enums.
extern "C"
MallocExtension_Ownership MallocExtension_GetOwnership(const void* p) {
  return static_cast<MallocExtension_Ownership>(
      MallocExtension::instance()->GetOwnership(p));
}
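A short client-side sketch of the extension interface defined above (illustrative, not part of the diff). The base-class GetNumericProperty always returns false; tcmalloc's subclass overrides it, and "generic.current_allocated_bytes" is one of the properties it serves.

#include <gperftools/malloc_extension.h>
#include <cstdio>

int main() {
  size_t allocated = 0;
  if (MallocExtension::instance()->GetNumericProperty(
          "generic.current_allocated_bytes", &allocated)) {
    std::printf("allocated: %zu bytes\n", allocated);
  }
  // Per the default implementation above, this is equivalent to
  // ReleaseToSystem(static_cast<size_t>(-1)).
  MallocExtension::instance()->ReleaseFreeMemory();
  return 0;
}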
148
3party/gperftools/src/malloc_hook-inl.h
Normal file
148
3party/gperftools/src/malloc_hook-inl.h
Normal file
@@ -0,0 +1,148 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//
// This has the implementation details of malloc_hook that are needed
// to use malloc-hook inside the tcmalloc system.  It does not hold
// any of the client-facing calls that are used to add new hooks.

#ifndef _MALLOC_HOOK_INL_H_
#define _MALLOC_HOOK_INL_H_

#include <stddef.h>
#include <sys/types.h>

#include <atomic>

#include "base/basictypes.h"
#include <gperftools/malloc_hook.h>

#include "common.h"  // for UNLIKELY

namespace base { namespace internal {

// Capacity of 8 means that HookList is 9 words.
static const int kHookListCapacity = 8;
// last entry is reserved for deprecated "singular" hooks. So we have
// 7 "normal" hooks per list
static const int kHookListMaxValues = 7;
static const int kHookListSingularIdx = 7;

// HookList: a class that provides synchronized insertions and removals and
// lockless traversal.  Most of the implementation is in malloc_hook.cc.
template <typename T>
struct PERFTOOLS_DLL_DECL HookList {
  static_assert(sizeof(T) <= sizeof(uintptr_t), "must fit in uintptr_t");

  constexpr HookList() = default;
  explicit constexpr HookList(T priv_data_initial)
      : priv_end{1}, priv_data{priv_data_initial} {}

  // Adds value to the list.  Note that duplicates are allowed.  Thread-safe and
  // blocking (acquires hooklist_spinlock).  Returns true on success; false
  // otherwise (failures include invalid value and no space left).
  bool Add(T value);

  void FixupPrivEndLocked();

  // Removes the first entry matching value from the list.  Thread-safe and
  // blocking (acquires hooklist_spinlock).  Returns true on success; false
  // otherwise (failures include invalid value and no value found).
  bool Remove(T value);

  // Store up to n values of the list in output_array, and return the number of
  // elements stored.  Thread-safe and non-blocking.  This is fast (one memory
  // access) if the list is empty.
  int Traverse(T* output_array, int n) const;

  // Fast inline implementation for fast path of Invoke*Hook.
  bool empty() const {
    return priv_end.load(std::memory_order_relaxed) == 0;
  }

  // Used purely to handle deprecated singular hooks
  T GetSingular() const {
    return bit_cast<T>(cast_priv_data(kHookListSingularIdx)->load(std::memory_order_relaxed));
  }

  T ExchangeSingular(T new_val);

  // This internal data is not private so that the class is an aggregate and can
  // be initialized by the linker.  Don't access this directly.  Use the
  // INIT_HOOK_LIST macro in malloc_hook.cc.

  // One more than the index of the last valid element in priv_data.  During
  // 'Remove' this may be past the last valid element in priv_data, but
  // subsequent values will be 0.
  //
  // Index kHookListCapacity-1 is reserved as 'deprecated' single hook pointer
  std::atomic<uintptr_t> priv_end;
  T priv_data[kHookListCapacity];

  // C++ 11 doesn't let us initialize array of atomics, so we made
  // priv_data regular array of T and cast when reading and writing
  // (which is portable in practice)
  std::atomic<T>* cast_priv_data(int index) {
    return reinterpret_cast<std::atomic<T>*>(priv_data + index);
  }
  std::atomic<T> const * cast_priv_data(int index) const {
    return reinterpret_cast<std::atomic<T> const *>(priv_data + index);
  }
};

ATTRIBUTE_VISIBILITY_HIDDEN extern HookList<MallocHook::NewHook> new_hooks_;
ATTRIBUTE_VISIBILITY_HIDDEN extern HookList<MallocHook::DeleteHook> delete_hooks_;

} }  // namespace base::internal

// The following method is DEPRECATED
inline MallocHook::NewHook MallocHook::GetNewHook() {
  return base::internal::new_hooks_.GetSingular();
}

inline void MallocHook::InvokeNewHook(const void* p, size_t s) {
  if (PREDICT_FALSE(!base::internal::new_hooks_.empty())) {
    InvokeNewHookSlow(p, s);
  }
}

// The following method is DEPRECATED
inline MallocHook::DeleteHook MallocHook::GetDeleteHook() {
  return base::internal::delete_hooks_.GetSingular();
}

inline void MallocHook::InvokeDeleteHook(const void* p) {
  if (PREDICT_FALSE(!base::internal::delete_hooks_.empty())) {
    InvokeDeleteHookSlow(p);
  }
}

#endif /* _MALLOC_HOOK_INL_H_ */
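A reduced sketch of the publication pattern HookList depends on (illustrative only, not the shipped code; TinyHookList and its sizes are invented for the example): a writer fills a slot before advancing the end index, so a lockless reader that loads the index and slots with acquire ordering never observes a half-initialized entry.

#include <atomic>
#include <cstdint>

struct TinyHookList {
  std::atomic<uintptr_t> end{0};
  std::atomic<void*> slots[8];

  TinyHookList() {
    for (auto& s : slots) s.store(nullptr, std::memory_order_relaxed);
  }

  void Add(void* v) {                       // writers serialized by a lock
    uintptr_t e = end.load(std::memory_order_relaxed);
    slots[e].store(v, std::memory_order_release);  // publish the data first,
    end.store(e + 1, std::memory_order_release);   // then the index
  }

  int Traverse(void** out, int n) const {   // lockless, like Invoke*HookSlow
    uintptr_t e = end.load(std::memory_order_acquire);
    int m = 0;
    for (uintptr_t i = 0; i < e && m < n; ++i) {
      if (void* v = slots[i].load(std::memory_order_acquire)) out[m++] = v;
    }
    return m;
  }
};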
614
3party/gperftools/src/malloc_hook.cc
Normal file
614
3party/gperftools/src/malloc_hook.cc
Normal file
@@ -0,0 +1,614 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#include <config.h>

#include <gperftools/malloc_hook.h>
#include "malloc_hook-inl.h"

#include <stddef.h>
#include <stdint.h>
#if HAVE_SYS_SYSCALL_H
#include <sys/syscall.h>
#endif

#ifdef HAVE_MMAP
#include <sys/mman.h>
#endif

#include <algorithm>
#include "base/logging.h"
#include "base/spinlock.h"
#include "maybe_emergency_malloc.h"

// This #ifdef should almost never be set.  Set NO_TCMALLOC_SAMPLES if
// you're porting to a system where you really can't get a stacktrace.
#ifdef NO_TCMALLOC_SAMPLES
// We use #define so code compiles even if you #include stacktrace.h somehow.
# define GetStackTrace(stack, depth, skip)  (0)
#else
# include <gperftools/stacktrace.h>
#endif

// __THROW is defined in glibc systems.  It means, counter-intuitively,
// "This function will never throw an exception."  It's an optional
// optimization tool, but we may need to use it to match glibc prototypes.
#ifndef __THROW    // I guess we're not on a glibc system
# define __THROW   // __THROW is just an optimization, so ok to make it ""
#endif

using std::copy;


// Declaration of default weak initialization function, that can be overridden
// by linking-in a strong definition (as heap-checker.cc does).  This is
// extern "C" so that it doesn't trigger gold's --detect-odr-violations warning,
// which only looks at C++ symbols.
//
// This function is declared here as weak, and defined later, rather than a more
// straightforward simple weak definition, as a workaround for an icc compiler
// issue (Intel reference 290819).  This issue causes icc to resolve weak
// symbols too early, at compile rather than link time.  By declaring it (weak)
// here, then defining it below after its use, we can avoid the problem.
extern "C" {
ATTRIBUTE_WEAK int MallocHook_InitAtFirstAllocation_HeapLeakChecker() {
  return 0;
}
}

namespace {

bool RemoveInitialHooksAndCallInitializers(); // below.

// These hooks are installed in MallocHook as the only initial hooks.  The first
// hook that is called will run RemoveInitialHooksAndCallInitializers (see the
// definition below) and then redispatch to any malloc hooks installed by
// RemoveInitialHooksAndCallInitializers.
//
// Note(llib): there is a possibility of a race in the event that there are
// multiple threads running before the first allocation.  This is pretty
// difficult to achieve, but if it is then multiple threads may concurrently do
// allocations.  The first caller will call
// RemoveInitialHooksAndCallInitializers via one of the initial hooks.  A
// concurrent allocation may, depending on timing either:
// * still have its initial malloc hook installed, run that and block on waiting
//   for the first caller to finish its call to
//   RemoveInitialHooksAndCallInitializers, and proceed normally.
// * occur some time during the RemoveInitialHooksAndCallInitializers call, at
//   which point there could be no initial hooks and the subsequent hooks that
//   are about to be set up by RemoveInitialHooksAndCallInitializers haven't
//   been installed yet.  I think the worst we can get is that some allocations
//   will not get reported to some hooks set by the initializers called from
//   RemoveInitialHooksAndCallInitializers.
//
// Note, RemoveInitialHooksAndCallInitializers returns false if
// MallocHook_InitAtFirstAllocation_HeapLeakChecker was already called
// (i.e. through mmap hooks).  And true otherwise (i.e. we're first to
// call it).  In that former case (return of false), we assume that
// heap checker already installed its hook, so we don't re-execute
// new hook.
void InitialNewHook(const void* ptr, size_t size) {
  if (RemoveInitialHooksAndCallInitializers()) {
    MallocHook::InvokeNewHook(ptr, size);
  }
}

// This function is called at most once by one of the above initial malloc
// hooks.  It removes all initial hooks and initializes all other clients that
// want to get control at the very first memory allocation.  The initializers
// may assume that the initial malloc hooks have been removed.  The initializers
// may set up malloc hooks and allocate memory.
bool RemoveInitialHooksAndCallInitializers() {
  static tcmalloc::TrivialOnce once;
  once.RunOnce([] () {
    RAW_CHECK(MallocHook::RemoveNewHook(&InitialNewHook), "");
  });

  // HeapLeakChecker is currently the only module that needs to get control on
  // the first memory allocation, but one can add other modules by following the
  // same weak/strong function pattern.
  return (MallocHook_InitAtFirstAllocation_HeapLeakChecker() != 0);
}

}  // namespace

namespace base { namespace internal {

// This lock is shared between all implementations of HookList::Add & Remove.
// The potential for contention is very small.  This needs to be a SpinLock and
// not a Mutex since it's possible for Mutex locking to allocate memory (e.g.,
// per-thread allocation in debug builds), which could cause infinite recursion.
static SpinLock hooklist_spinlock(base::LINKER_INITIALIZED);

template <typename T>
bool HookList<T>::Add(T value) {
  if (value == T{}) {
    return false;
  }
  SpinLockHolder l(&hooklist_spinlock);
  // Find the first slot in data that is 0.
  int index = 0;
  while ((index < kHookListMaxValues) &&
         cast_priv_data(index)->load(std::memory_order_relaxed) != T{}) {
    ++index;
  }
  if (index == kHookListMaxValues) {
    return false;
  }
  uintptr_t prev_num_hooks = priv_end.load(std::memory_order_acquire);
  cast_priv_data(index)->store(value, std::memory_order_relaxed);
  if (prev_num_hooks <= index) {
    priv_end.store(index + 1, std::memory_order_relaxed);
  }
  return true;
}

template <typename T>
void HookList<T>::FixupPrivEndLocked() {
  uintptr_t hooks_end = priv_end.load(std::memory_order_relaxed);
  while ((hooks_end > 0) &&
         cast_priv_data(hooks_end-1)->load(std::memory_order_relaxed) == 0) {
    --hooks_end;
  }
  priv_end.store(hooks_end, std::memory_order_relaxed);
}

template <typename T>
bool HookList<T>::Remove(T value) {
  if (value == T{}) {
    return false;
  }
  SpinLockHolder l(&hooklist_spinlock);
  uintptr_t hooks_end = priv_end.load(std::memory_order_relaxed);
  int index = 0;
  while (index < hooks_end
         && value != cast_priv_data(index)->load(std::memory_order_relaxed)) {
    ++index;
  }
  if (index == hooks_end) {
    return false;
  }
  cast_priv_data(index)->store(T{}, std::memory_order_relaxed);
  FixupPrivEndLocked();
  return true;
}

template <typename T>
int HookList<T>::Traverse(T* output_array, int n) const {
  uintptr_t hooks_end = priv_end.load(std::memory_order_acquire);
  int actual_hooks_end = 0;
  for (int i = 0; i < hooks_end && n > 0; ++i) {
    T data = cast_priv_data(i)->load(std::memory_order_acquire);
    if (data != T{}) {
      *output_array++ = data;
      ++actual_hooks_end;
      --n;
    }
  }
  return actual_hooks_end;
}

template <typename T>
T HookList<T>::ExchangeSingular(T value) {
  T old_value;
  SpinLockHolder l(&hooklist_spinlock);
  old_value = cast_priv_data(kHookListSingularIdx)->load(std::memory_order_relaxed);
  cast_priv_data(kHookListSingularIdx)->store(value, std::memory_order_relaxed);
  if (value != T{}) {
    priv_end.store(kHookListSingularIdx + 1, std::memory_order_relaxed);
  } else {
    FixupPrivEndLocked();
  }
  return old_value;
}

// Explicit instantiation for malloc_hook_test.cc.  This ensures all the methods
// are instantiated.
template struct HookList<MallocHook::NewHook>;

HookList<MallocHook::NewHook> new_hooks_{InitialNewHook};
HookList<MallocHook::DeleteHook> delete_hooks_;

} }  // namespace base::internal

using base::internal::kHookListMaxValues;
using base::internal::new_hooks_;
using base::internal::delete_hooks_;

// These are available as C bindings as well as C++, hence their
// definition outside the MallocHook class.
extern "C"
int MallocHook_AddNewHook(MallocHook_NewHook hook) {
  RAW_VLOG(10, "AddNewHook(%p)", hook);
  return new_hooks_.Add(hook);
}

extern "C"
int MallocHook_RemoveNewHook(MallocHook_NewHook hook) {
  RAW_VLOG(10, "RemoveNewHook(%p)", hook);
  return new_hooks_.Remove(hook);
}

extern "C"
int MallocHook_AddDeleteHook(MallocHook_DeleteHook hook) {
  RAW_VLOG(10, "AddDeleteHook(%p)", hook);
  return delete_hooks_.Add(hook);
}

extern "C"
int MallocHook_RemoveDeleteHook(MallocHook_DeleteHook hook) {
  RAW_VLOG(10, "RemoveDeleteHook(%p)", hook);
  return delete_hooks_.Remove(hook);
}

// Next are "legacy" singular new/delete hooks

// The code below is DEPRECATED.
extern "C"
MallocHook_NewHook MallocHook_SetNewHook(MallocHook_NewHook hook) {
  RAW_VLOG(10, "SetNewHook(%p)", hook);
  return new_hooks_.ExchangeSingular(hook);
}

extern "C"
MallocHook_DeleteHook MallocHook_SetDeleteHook(MallocHook_DeleteHook hook) {
  RAW_VLOG(10, "SetDeleteHook(%p)", hook);
  return delete_hooks_.ExchangeSingular(hook);
}

// Note: embedding the function calls inside the traversal of HookList would be
// very confusing, as it is legal for a hook to remove itself and add other
// hooks.  Doing traversal first, and then calling the hooks ensures we only
// call the hooks registered at the start.
#define INVOKE_HOOKS(HookType, hook_list, args) do {                   \
    HookType hooks[kHookListMaxValues];                                \
    int num_hooks = hook_list.Traverse(hooks, kHookListMaxValues);     \
    for (int i = 0; i < num_hooks; ++i) {                              \
      (*hooks[i])args;                                                 \
    }                                                                  \
  } while (0)

// There should only be one replacement.  Return the result of the first
// one, or false if there is none.
#define INVOKE_REPLACEMENT(HookType, hook_list, args) do {             \
    HookType hooks[kHookListMaxValues];                                \
    int num_hooks = hook_list.Traverse(hooks, kHookListMaxValues);     \
    return (num_hooks > 0 && (*hooks[0])args);                         \
  } while (0)


void MallocHook::InvokeNewHookSlow(const void* p, size_t s) {
  if (tcmalloc::IsEmergencyPtr(p)) {
    return;
  }
  INVOKE_HOOKS(NewHook, new_hooks_, (p, s));
}

void MallocHook::InvokeDeleteHookSlow(const void* p) {
  if (tcmalloc::IsEmergencyPtr(p)) {
    return;
  }
  INVOKE_HOOKS(DeleteHook, delete_hooks_, (p));
}

#undef INVOKE_HOOKS

#ifndef NO_TCMALLOC_SAMPLES

DEFINE_ATTRIBUTE_SECTION_VARS(google_malloc);
DECLARE_ATTRIBUTE_SECTION_VARS(google_malloc);
// actual functions are in debugallocation.cc or tcmalloc.cc
DEFINE_ATTRIBUTE_SECTION_VARS(malloc_hook);
DECLARE_ATTRIBUTE_SECTION_VARS(malloc_hook);
// actual functions are in this file, malloc_hook.cc, and low_level_alloc.cc

#define ADDR_IN_ATTRIBUTE_SECTION(addr, name)                        \
  (reinterpret_cast<uintptr_t>(ATTRIBUTE_SECTION_START(name)) <=     \
     reinterpret_cast<uintptr_t>(addr) &&                            \
   reinterpret_cast<uintptr_t>(addr) <                               \
     reinterpret_cast<uintptr_t>(ATTRIBUTE_SECTION_STOP(name)))

// Return true iff 'caller' is a return address within a function
// that calls one of our hooks via MallocHook::Invoke*.
// A helper for GetCallerStackTrace.
static inline bool InHookCaller(const void* caller) {
  return ADDR_IN_ATTRIBUTE_SECTION(caller, google_malloc) ||
         ADDR_IN_ATTRIBUTE_SECTION(caller, malloc_hook);
  // We can use one section for everything except tcmalloc_or_debug
  // due to its special linkage mode, which prevents merging of the sections.
}

#undef ADDR_IN_ATTRIBUTE_SECTION

static bool checked_sections = false;

static inline void CheckInHookCaller() {
  if (!checked_sections) {
    INIT_ATTRIBUTE_SECTION_VARS(google_malloc);
    if (ATTRIBUTE_SECTION_START(google_malloc) ==
        ATTRIBUTE_SECTION_STOP(google_malloc)) {
      RAW_LOG(ERROR, "google_malloc section is missing, "
                     "thus InHookCaller is broken!");
    }
    INIT_ATTRIBUTE_SECTION_VARS(malloc_hook);
    if (ATTRIBUTE_SECTION_START(malloc_hook) ==
        ATTRIBUTE_SECTION_STOP(malloc_hook)) {
      RAW_LOG(ERROR, "malloc_hook section is missing, "
                     "thus InHookCaller is broken!");
    }
    checked_sections = true;
  }
}

#endif // !NO_TCMALLOC_SAMPLES

// We can improve behavior/compactness of this function
// if we pass a generic test function (with a generic arg)
// into the implementations for GetStackTrace instead of the skip_count.
extern "C" int MallocHook_GetCallerStackTrace(void** result, int max_depth,
                                              int skip_count) {
#if defined(NO_TCMALLOC_SAMPLES)
  return 0;
#elif !defined(HAVE_ATTRIBUTE_SECTION_START)
  // Fall back to GetStackTrace and good old but fragile frame skip counts.
  // Note: this path is inaccurate when a hook is not called directly by an
  // allocation function but is daisy-chained through another hook,
  // search for MallocHook::(Get|Set|Invoke)* to find such cases.
  return GetStackTrace(result, max_depth, skip_count + int(DEBUG_MODE));
  // due to -foptimize-sibling-calls in opt mode
  // there's no need for extra frame skip here then
#else
  CheckInHookCaller();
  // MallocHook caller determination via InHookCaller works, use it:
  static const int kMaxSkip = 32 + 6 + 3;
  // Constant tuned to do just one GetStackTrace call below in practice
  // and not get many frames that we don't actually need:
  // currently max passed max_depth is 32,
  // max passed/needed skip_count is 6
  // and 3 is to account for some hook daisy chaining.
  static const int kStackSize = kMaxSkip + 1;
  void* stack[kStackSize];
  int depth = GetStackTrace(stack, kStackSize, 1);  // skip this function frame
  if (depth == 0)  // silently propagate cases when GetStackTrace does not work
    return 0;
  for (int i = 0; i < depth; ++i) {  // stack[0] is our immediate caller
    if (InHookCaller(stack[i])) {
      // fast-path to slow-path calls may be implemented by compiler
      // as non-tail calls.  Causing two functions on stack trace to be
      // inside google_malloc.  In such case we're skipping to
      // outermost such frame since this is where malloc stack frames
      // really start.
      while (i + 1 < depth && InHookCaller(stack[i+1])) {
        i++;
      }
      RAW_VLOG(10, "Found hooked allocator at %d: %p <- %p",
               i, stack[i], stack[i+1]);
      i += 1;  // skip hook caller frame
      depth -= i;  // correct depth
      if (depth > max_depth) depth = max_depth;
      copy(stack + i, stack + i + depth, result);
      if (depth < max_depth && depth + i == kStackSize) {
        // get frames for the missing depth
        depth +=
            GetStackTrace(result + depth, max_depth - depth, 1 + kStackSize);
      }
      return depth;
    }
  }
  RAW_LOG(WARNING, "Hooked allocator frame not found, returning empty trace");
  // If this happens try increasing kMaxSkip
  // or else something must be wrong with InHookCaller,
  // e.g. for every section used in InHookCaller
  // all functions in that section must be inside the same library.
  return 0;
#endif
}

// All mmap hook functions are empty and bogus.  All of those below
// are no-ops and we keep them only because we have them exposed in
// headers we ship.  So keep them for somewhat formal ABI compat.
//
// For the non-public API for hooking mapping updates see
// mmap_hook.h

extern "C"
int MallocHook_AddPreMmapHook(MallocHook_PreMmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemovePreMmapHook(MallocHook_PreMmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_SetMmapReplacement(MallocHook_MmapReplacement hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveMmapReplacement(MallocHook_MmapReplacement hook) {
  return 0;
}

extern "C"
int MallocHook_AddMmapHook(MallocHook_MmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveMmapHook(MallocHook_MmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_AddMunmapHook(MallocHook_MunmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveMunmapHook(MallocHook_MunmapHook hook) {
  return 0;
}

extern "C"
int MallocHook_SetMunmapReplacement(MallocHook_MunmapReplacement hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveMunmapReplacement(MallocHook_MunmapReplacement hook) {
  return 0;
}

extern "C"
int MallocHook_AddMremapHook(MallocHook_MremapHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveMremapHook(MallocHook_MremapHook hook) {
  return 0;
}

extern "C"
int MallocHook_AddPreSbrkHook(MallocHook_PreSbrkHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemovePreSbrkHook(MallocHook_PreSbrkHook hook) {
  return 0;
}

extern "C"
int MallocHook_AddSbrkHook(MallocHook_SbrkHook hook) {
  return 0;
}

extern "C"
int MallocHook_RemoveSbrkHook(MallocHook_SbrkHook hook) {
  return 0;
}

/*static*/void* MallocHook::UnhookedMMap(void *start, size_t length, int prot,
                                         int flags, int fd, off_t offset) {
  errno = ENOSYS;
  return MAP_FAILED;
}

/*static*/int MallocHook::UnhookedMUnmap(void *start, size_t length) {
  errno = ENOSYS;
  return -1;
}

extern "C"
MallocHook_PreMmapHook MallocHook_SetPreMmapHook(MallocHook_PreMmapHook hook) {
  return 0;
}

extern "C"
MallocHook_MmapHook MallocHook_SetMmapHook(MallocHook_MmapHook hook) {
  return 0;
}

extern "C"
MallocHook_MunmapHook MallocHook_SetMunmapHook(MallocHook_MunmapHook hook) {
  return 0;
}

extern "C"
MallocHook_MremapHook MallocHook_SetMremapHook(MallocHook_MremapHook hook) {
  return 0;
}

extern "C"
MallocHook_PreSbrkHook MallocHook_SetPreSbrkHook(MallocHook_PreSbrkHook hook) {
  return 0;
}

extern "C"
MallocHook_SbrkHook MallocHook_SetSbrkHook(MallocHook_SbrkHook hook) {
  return 0;
}

void MallocHook::InvokePreMmapHookSlow(const void* start,
                                       size_t size,
                                       int protection,
                                       int flags,
                                       int fd,
                                       off_t offset) {
}

void MallocHook::InvokeMmapHookSlow(const void* result,
                                    const void* start,
                                    size_t size,
                                    int protection,
                                    int flags,
                                    int fd,
                                    off_t offset) {
}

bool MallocHook::InvokeMmapReplacementSlow(const void* start,
                                           size_t size,
                                           int protection,
                                           int flags,
                                           int fd,
                                           off_t offset,
                                           void** result) {
  return false;
}

void MallocHook::InvokeMunmapHookSlow(const void* p, size_t s) {
}

bool MallocHook::InvokeMunmapReplacementSlow(const void* p,
                                             size_t s,
                                             int* result) {
  return false;
}

void MallocHook::InvokeMremapHookSlow(const void* result,
                                      const void* old_addr,
                                      size_t old_size,
                                      size_t new_size,
                                      int flags,
                                      const void* new_addr) {
}

void MallocHook::InvokePreSbrkHookSlow(ptrdiff_t increment) {
}

void MallocHook::InvokeSbrkHookSlow(const void* result, ptrdiff_t increment) {
}
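A usage sketch for the C bindings defined above (illustrative; error handling elided). A real hook must be careful about re-entrancy: anything it calls that allocates will itself pass back through the hooked allocator.

#include <gperftools/malloc_hook.h>
#include <cstddef>
#include <cstdio>
#include <cstdlib>

static void LogNew(const void* ptr, size_t size) {
  // fprintf may itself allocate; acceptable for a demo, risky in real hooks.
  std::fprintf(stderr, "alloc %zu bytes at %p\n", size, ptr);
}

int main() {
  MallocHook_AddNewHook(&LogNew);    // returns non-zero on success
  void* p = std::malloc(64);         // dispatched to LogNew via InvokeNewHook
  std::free(p);
  MallocHook_RemoveNewHook(&LogNew);
  return 0;
}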
55
3party/gperftools/src/maybe_emergency_malloc.h
Normal file
55
3party/gperftools/src/maybe_emergency_malloc.h
Normal file
@@ -0,0 +1,55 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2014, gperftools Contributors
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

#ifndef MAYBE_EMERGENCY_MALLOC_H
#define MAYBE_EMERGENCY_MALLOC_H

#include "config.h"

#ifdef ENABLE_EMERGENCY_MALLOC

#include "emergency_malloc.h"

#else

namespace tcmalloc {
  static inline void *EmergencyMalloc(size_t size) {return NULL;}
  static inline void EmergencyFree(void *p) {}
  static inline void *EmergencyCalloc(size_t n, size_t elem_size) {return NULL;}
  static inline void *EmergencyRealloc(void *old_ptr, size_t new_size) {return NULL;}

  static inline bool IsEmergencyPtr(const void *_ptr) {
    return false;
  }
}

#endif  // ENABLE_EMERGENCY_MALLOC

#endif
281
3party/gperftools/src/memfs_malloc.cc
Normal file
281
3party/gperftools/src/memfs_malloc.cc
Normal file
@ -0,0 +1,281 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
// Copyright (c) 2007, Google Inc.
|
||||
// All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// ---
|
||||
// Author: Arun Sharma
|
||||
//
|
||||
// A tcmalloc system allocator that uses a memory based filesystem such as
|
||||
// tmpfs or hugetlbfs
|
||||
//
|
||||
// Since these only exist on linux, we only register this allocator there.
|
||||
|
||||
#ifdef __linux
|
||||
|
||||
#include <config.h>
|
||||
#include <errno.h> // for errno, EINVAL
|
||||
#include <inttypes.h> // for PRId64
|
||||
#include <limits.h> // for PATH_MAX
|
||||
#include <stddef.h> // for size_t, NULL
|
||||
#include <stdint.h> // for int64_t, uintptr_t
|
||||
#include <stdio.h> // for snprintf
|
||||
#include <stdlib.h> // for mkstemp
|
||||
#include <string.h> // for strerror
|
||||
#include <sys/mman.h> // for mmap, MAP_FAILED, etc
|
||||
#include <sys/statfs.h> // for fstatfs, statfs
|
||||
#include <unistd.h> // for ftruncate, off_t, unlink
|
||||
#include <new> // for operator new
|
||||
#include <string>
|
||||
|
||||
#include <gperftools/malloc_extension.h>
|
||||
#include "base/basictypes.h"
|
||||
#include "base/googleinit.h"
|
||||
#include "base/sysinfo.h"
|
||||
#include "internal_logging.h"
|
||||
#include "safe_strerror.h"
|
||||
|
||||
// TODO(sanjay): Move the code below into the tcmalloc namespace
|
||||
using tcmalloc::kLog;
|
||||
using tcmalloc::kCrash;
|
||||
using tcmalloc::Log;
|
||||
using std::string;
|
||||
|
||||
DEFINE_string(memfs_malloc_path, EnvToString("TCMALLOC_MEMFS_MALLOC_PATH", ""),
|
||||
"Path where hugetlbfs or tmpfs is mounted. The caller is "
|
||||
"responsible for ensuring that the path is unique and does "
|
||||
"not conflict with another process");
|
||||
DEFINE_int64(memfs_malloc_limit_mb,
|
||||
EnvToInt("TCMALLOC_MEMFS_LIMIT_MB", 0),
|
||||
"Limit total allocation size to the "
|
||||
"specified number of MiB. 0 == no limit.");
|
||||
DEFINE_bool(memfs_malloc_abort_on_fail,
|
||||
EnvToBool("TCMALLOC_MEMFS_ABORT_ON_FAIL", false),
|
||||
"abort() whenever memfs_malloc fails to satisfy an allocation "
|
||||
"for any reason.");
|
||||
DEFINE_bool(memfs_malloc_ignore_mmap_fail,
|
||||
EnvToBool("TCMALLOC_MEMFS_IGNORE_MMAP_FAIL", false),
|
||||
"Ignore failures from mmap");
|
||||
DEFINE_bool(memfs_malloc_map_private,
|
||||
EnvToBool("TCMALLOC_MEMFS_MAP_PRIVATE", false),
|
||||
"Use MAP_PRIVATE with mmap");
|
||||
DEFINE_bool(memfs_malloc_disable_fallback,
|
||||
EnvToBool("TCMALLOC_MEMFS_DISABLE_FALLBACK", false),
|
||||
"If we run out of hugepage memory don't fallback to default "
|
||||
"allocator.");
|
||||
|
||||
// Hugetlbfs based allocator for tcmalloc
|
||||
class HugetlbSysAllocator: public SysAllocator {
|
||||
public:
|
||||
explicit HugetlbSysAllocator(SysAllocator* fallback)
|
||||
: failed_(true), // To disable allocator until Initialize() is called.
|
||||
big_page_size_(0),
|
||||
hugetlb_fd_(-1),
|
||||
hugetlb_base_(0),
|
||||
fallback_(fallback) {
|
||||
}
|
||||
|
||||
void* Alloc(size_t size, size_t *actual_size, size_t alignment);
|
||||
bool Initialize();
|
||||
|
||||
bool failed_; // Whether failed to allocate memory.
|
||||
|
||||
private:
|
||||
void* AllocInternal(size_t size, size_t *actual_size, size_t alignment);
|
||||
|
||||
int64 big_page_size_;
|
||||
int hugetlb_fd_; // file descriptor for hugetlb
|
||||
off_t hugetlb_base_;
|
||||
|
||||
SysAllocator* fallback_; // Default system allocator to fall back to.
|
||||
};
|
||||
static union {
|
||||
char buf[sizeof(HugetlbSysAllocator)];
|
||||
void *ptr;
|
||||
} hugetlb_space;
|
||||
|
||||
// No locking needed here since we assume that tcmalloc calls
|
||||
// us with an internal lock held (see tcmalloc/system-alloc.cc).
|
||||
void* HugetlbSysAllocator::Alloc(size_t size, size_t *actual_size,
|
||||
size_t alignment) {
|
||||
if (!FLAGS_memfs_malloc_disable_fallback && failed_) {
|
||||
return fallback_->Alloc(size, actual_size, alignment);
|
||||
}
|
||||
|
||||
// We don't respond to allocation requests smaller than big_page_size_ unless
|
||||
// the caller is ok to take more than they asked for. Used by MetaDataAlloc.
|
||||
if (!FLAGS_memfs_malloc_disable_fallback &&
|
||||
actual_size == NULL && size < big_page_size_) {
|
||||
return fallback_->Alloc(size, actual_size, alignment);
|
||||
}
|
||||
|
||||
// Enforce huge page alignment. Be careful to deal with overflow.
|
||||
size_t new_alignment = alignment;
|
||||
if (new_alignment < big_page_size_) new_alignment = big_page_size_;
|
||||
size_t aligned_size = ((size + new_alignment - 1) /
|
||||
new_alignment) * new_alignment;
|
||||
if (!FLAGS_memfs_malloc_disable_fallback && aligned_size < size) {
|
||||
return fallback_->Alloc(size, actual_size, alignment);
|
||||
}
|
||||
|
||||
void* result = AllocInternal(aligned_size, actual_size, new_alignment);
|
||||
if (result != NULL) {
|
||||
return result;
|
||||
} else if (FLAGS_memfs_malloc_disable_fallback) {
|
||||
return NULL;
|
||||
}
|
||||
Log(kLog, __FILE__, __LINE__,
|
||||
"HugetlbSysAllocator: (failed, allocated)", failed_, hugetlb_base_);
|
||||
if (FLAGS_memfs_malloc_abort_on_fail) {
|
||||
Log(kCrash, __FILE__, __LINE__,
|
||||
"memfs_malloc_abort_on_fail is set");
|
||||
}
|
||||
return fallback_->Alloc(size, actual_size, alignment);
|
||||
}
|
||||
|
||||
void* HugetlbSysAllocator::AllocInternal(size_t size, size_t* actual_size,
|
||||
size_t alignment) {
|
||||
// Ask for extra memory if alignment > pagesize
|
||||
size_t extra = 0;
|
||||
if (alignment > big_page_size_) {
|
||||
extra = alignment - big_page_size_;
|
||||
}
|
||||
|
||||
// Test if this allocation would put us over the limit.
|
||||
off_t limit = FLAGS_memfs_malloc_limit_mb*1024*1024;
|
||||
if (limit > 0 && hugetlb_base_ + size + extra > limit) {
|
||||
// Disable the allocator when there's less than one page left.
|
||||
if (limit - hugetlb_base_ < big_page_size_) {
|
||||
Log(kLog, __FILE__, __LINE__, "reached memfs_malloc_limit_mb");
|
||||
failed_ = true;
|
||||
}
|
||||
else {
|
||||
Log(kLog, __FILE__, __LINE__,
|
||||
"alloc too large (size, bytes left)", size, limit-hugetlb_base_);
|
||||
}
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// This is not needed for hugetlbfs, but needed for tmpfs. Annoyingly
|
||||
// hugetlbfs returns EINVAL for ftruncate.
|
||||
int ret = ftruncate(hugetlb_fd_, hugetlb_base_ + size + extra);
|
||||
if (ret != 0 && errno != EINVAL) {
|
||||
Log(kLog, __FILE__, __LINE__,
|
||||
"ftruncate failed", tcmalloc::SafeStrError(errno).c_str());
|
||||
failed_ = true;
|
||||
return NULL;
|
||||
}
|
||||
|
||||
// Note: size + extra does not overflow, since
|
||||
//   size + alignment < (1<<NBITS)
|
||||
// and extra <= alignment, therefore
|
||||
//   size + extra < (1<<NBITS).
|
||||
void *result;
|
||||
result = mmap(0, size + extra, PROT_WRITE|PROT_READ,
|
||||
FLAGS_memfs_malloc_map_private ? MAP_PRIVATE : MAP_SHARED,
|
||||
hugetlb_fd_, hugetlb_base_);
|
||||
if (result == reinterpret_cast<void*>(MAP_FAILED)) {
|
||||
if (!FLAGS_memfs_malloc_ignore_mmap_fail) {
|
||||
Log(kLog, __FILE__, __LINE__,
|
||||
"mmap failed (size, error)", size + extra,
|
||||
tcmalloc::SafeStrError(errno).c_str());
|
||||
failed_ = true;
|
||||
}
|
||||
return NULL;
|
||||
}
|
||||
uintptr_t ptr = reinterpret_cast<uintptr_t>(result);
|
||||
|
||||
// Adjust the return memory so it is aligned
|
||||
size_t adjust = 0;
|
||||
if ((ptr & (alignment - 1)) != 0) {
|
||||
adjust = alignment - (ptr & (alignment - 1));
|
||||
}
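|
||||
// Example (illustrative, not in the original source): if mmap returned
|
||||
// ptr = 0x40200000 and alignment is 4 MiB, then adjust = 0x200000 and
|
||||
// the returned pointer 0x40400000 is properly aligned; the skipped bytes
|
||||
// stay mapped but unused, which is why *actual_size below subtracts
|
||||
// adjust.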
|
||||
ptr += adjust;
|
||||
hugetlb_base_ += (size + extra);
|
||||
|
||||
if (actual_size) {
|
||||
*actual_size = size + extra - adjust;
|
||||
}
|
||||
|
||||
return reinterpret_cast<void*>(ptr);
|
||||
}
|
||||
|
||||
bool HugetlbSysAllocator::Initialize() {
|
||||
char path[PATH_MAX];
|
||||
const int pathlen = FLAGS_memfs_malloc_path.size();
|
||||
if (pathlen + 8 > sizeof(path)) {
|
||||
Log(kCrash, __FILE__, __LINE__, "XX fatal: memfs_malloc_path too long");
|
||||
return false;
|
||||
}
|
||||
memcpy(path, FLAGS_memfs_malloc_path.data(), pathlen);
|
||||
memcpy(path + pathlen, ".XXXXXX", 8); // Also copies terminating \0
|
||||
|
||||
int hugetlb_fd = mkstemp(path);
|
||||
if (hugetlb_fd == -1) {
|
||||
Log(kLog, __FILE__, __LINE__,
|
||||
"warning: unable to create memfs_malloc_path",
|
||||
path, tcmalloc::SafeStrError(errno).c_str());
|
||||
return false;
|
||||
}
|
||||
|
||||
// Cleanup memory on process exit
|
||||
if (unlink(path) == -1) {
|
||||
Log(kCrash, __FILE__, __LINE__,
|
||||
"fatal: error unlinking memfs_malloc_path", path,
|
||||
tcmalloc::SafeStrError(errno).c_str());
|
||||
return false;
|
||||
}
|
||||
|
||||
// Use fstatfs to figure out the default page size for memfs
|
||||
struct statfs sfs;
|
||||
if (fstatfs(hugetlb_fd, &sfs) == -1) {
|
||||
Log(kCrash, __FILE__, __LINE__,
|
||||
"fatal: error fstatfs of memfs_malloc_path",
|
||||
tcmalloc::SafeStrError(errno).c_str());
|
||||
return false;
|
||||
}
|
||||
int64 page_size = sfs.f_bsize;
|
||||
|
||||
hugetlb_fd_ = hugetlb_fd;
|
||||
big_page_size_ = page_size;
|
||||
failed_ = false;
|
||||
return true;
|
||||
}
|
||||
|
||||
REGISTER_MODULE_INITIALIZER(memfs_malloc, {
|
||||
if (FLAGS_memfs_malloc_path.length()) {
|
||||
SysAllocator* alloc = MallocExtension::instance()->GetSystemAllocator();
|
||||
HugetlbSysAllocator* hp =
|
||||
new (hugetlb_space.buf) HugetlbSysAllocator(alloc);
|
||||
if (hp->Initialize()) {
|
||||
MallocExtension::instance()->SetSystemAllocator(hp);
|
||||
}
|
||||
}
|
||||
});
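|
||||
// Usage sketch (illustrative, not part of the original source): the
|
||||
// allocator is enabled purely via flags/environment, with no code
|
||||
// changes; e.g., assuming a mounted hugetlbfs and the usual TCMALLOC_*
|
||||
// environment mapping of these flags:
|
||||
//
|
||||
//   $ mount -t hugetlbfs none /mnt/hugepages
|
||||
//   $ TCMALLOC_MEMFS_MALLOC_PATH=/mnt/hugepages/tc ./my_binary
|
||||
//
|
||||
// The module initializer above then builds a HugetlbSysAllocator in
|
||||
// hugetlb_space and installs it with SetSystemAllocator().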
|
||||
|
||||
#endif /* ifdef __linux */
|
788
3party/gperftools/src/memory_region_map.cc
Normal file
@ -0,0 +1,788 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
/* Copyright (c) 2006, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
* ---
|
||||
* Author: Maxim Lifantsev
|
||||
*/
|
||||
|
||||
//
|
||||
// Background and key design points of MemoryRegionMap.
|
||||
//
|
||||
// MemoryRegionMap is a low-level module with quite atypical requirements that
|
||||
// result in some degree of non-triviality of the implementation and design.
|
||||
//
|
||||
// MemoryRegionMap collects info about *all* memory regions created with
|
||||
// mmap, munmap, mremap, sbrk.
|
||||
// The key word above is 'all': all such regions that appear in a process
|
||||
// during its lifetime, frequently starting even before global object
|
||||
// constructors execute.
|
||||
//
|
||||
// This is needed by the primary client of MemoryRegionMap:
|
||||
// HeapLeakChecker uses the regions and the associated stack traces
|
||||
// to figure out what part of the memory is the heap:
|
||||
// if MemoryRegionMap were to miss some (early) regions, leak checking would
|
||||
// stop working correctly.
|
||||
//
|
||||
// To accomplish the goal of functioning before/during global object
|
||||
// constructor execution, MemoryRegionMap is implemented as a singleton service
|
||||
// that relies on its own on-demand initialized, constructor-less static data,
|
||||
// and only relies on other low-level modules that can also function properly
|
||||
// even before global object constructors run.
|
||||
//
|
||||
// Accomplishing the goal of collecting data about all mmap, munmap, mremap,
|
||||
// sbrk occurrences is more involved: conceptually, to do this one needs to
|
||||
// record some bits of data in particular about any mmap or sbrk call,
|
||||
// but to do that one needs to allocate memory for that data at some point,
|
||||
// but all memory allocations in the end themselves come from an mmap
|
||||
// or sbrk call (that's how the address space of the process grows).
|
||||
//
|
||||
// Also note that we need to do all the above recording from
|
||||
// within an mmap/sbrk hook, a call which is frequently made by a memory
|
||||
// allocator, including the very allocator MemoryRegionMap itself must rely on.
|
||||
// In the case of heap-checker usage this includes even the very first
|
||||
// mmap/sbrk call happening in the program: heap-checker gets activated due to
|
||||
// a link-time installed mmap/sbrk hook and it initializes MemoryRegionMap
|
||||
// and asks it to record info about this very first call right from that
|
||||
// very first hook invocation.
|
||||
//
|
||||
// MemoryRegionMap is doing its memory allocations via LowLevelAlloc:
|
||||
// unlike a more complex standard memory allocator, LowLevelAlloc cooperates with
|
||||
// MemoryRegionMap by not holding any of its own locks while it calls mmap
|
||||
// to get memory, thus we are able to call LowLevelAlloc from
|
||||
// our mmap/sbrk hooks without causing a deadlock in it.
|
||||
// For the same reason of deadlock prevention the locking in MemoryRegionMap
|
||||
// itself is write-recursive, which is an exception to Google's mutex usage.
|
||||
//
|
||||
// We still need to break the infinite cycle of mmap calling our hook,
|
||||
// which asks LowLevelAlloc for memory to record this mmap,
|
||||
// which (sometimes) causes mmap, which calls our hook, and so on.
|
||||
// We do this as follows: on a recursive call of MemoryRegionMap's
|
||||
// mmap/sbrk/mremap hook we record the data about the allocation in a
|
||||
// static fixed-sized stack (saved_regions and saved_buckets), when the
|
||||
// recursion unwinds but before returning from the outer hook call we unwind
|
||||
// this stack and move the data from saved_regions and saved_buckets to its
|
||||
// permanent place in the RegionSet and "bucket_table" respectively,
|
||||
// which can cause more allocations and mmap-s and recursion and unwinding,
|
||||
// but the whole process ends eventually due to the fact that for the small
|
||||
// allocations we are doing LowLevelAlloc reuses one mmap call and parcels out
|
||||
// the memory it created to satisfy several of our allocation requests.
|
||||
//
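|
||||
// A minimal sketch of that recursion-breaking scheme (illustrative only;
|
||||
// the real logic lives in InsertRegionLocked and HandleSavedRegionsLocked
|
||||
// below):
|
||||
//
|
||||
//   void OnMapEvent(const Region& r) {
|
||||
//     if (recursive_insert) {              // re-entered from an allocation
|
||||
//       saved_regions[saved_count++] = r;  // just buffer it
|
||||
//       return;
|
||||
//     }
|
||||
//     recursive_insert = true;
|
||||
//     DoInsert(r);                         // may mmap and re-enter above
|
||||
//     while (saved_count > 0)              // drain whatever got buffered
|
||||
//       DoInsert(saved_regions[--saved_count]);
|
||||
//     recursive_insert = false;
|
||||
//   }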
|
||||
|
||||
// ========================================================================= //
|
||||
|
||||
#include <config.h>
|
||||
|
||||
#ifdef HAVE_UNISTD_H
|
||||
#include <unistd.h>
|
||||
#endif
|
||||
#include <inttypes.h>
|
||||
#ifdef HAVE_MMAP
|
||||
#include <sys/mman.h>
|
||||
#elif !defined(MAP_FAILED)
|
||||
#define MAP_FAILED -1 // the only thing we need from mman.h
|
||||
#endif
|
||||
#ifdef HAVE_PTHREAD
|
||||
#include <pthread.h> // for pthread_t, pthread_self()
|
||||
#endif
|
||||
#include <stddef.h>
|
||||
|
||||
#include <algorithm>
|
||||
#include <set>
|
||||
|
||||
#include "memory_region_map.h"
|
||||
|
||||
#include "base/googleinit.h"
|
||||
#include "base/logging.h"
|
||||
#include "base/low_level_alloc.h"
|
||||
#include "mmap_hook.h"
|
||||
|
||||
#include <gperftools/stacktrace.h>
|
||||
#include <gperftools/malloc_hook.h> // For MallocHook::GetCallerStackTrace
|
||||
|
||||
using std::max;
|
||||
|
||||
// ========================================================================= //
|
||||
|
||||
int MemoryRegionMap::client_count_ = 0;
|
||||
int MemoryRegionMap::max_stack_depth_ = 0;
|
||||
MemoryRegionMap::RegionSet* MemoryRegionMap::regions_ = NULL;
|
||||
LowLevelAlloc::Arena* MemoryRegionMap::arena_ = NULL;
|
||||
SpinLock MemoryRegionMap::lock_(SpinLock::LINKER_INITIALIZED);
|
||||
SpinLock MemoryRegionMap::owner_lock_( // ACQUIRED_AFTER(lock_)
|
||||
SpinLock::LINKER_INITIALIZED);
|
||||
int MemoryRegionMap::recursion_count_ = 0; // GUARDED_BY(owner_lock_)
|
||||
pthread_t MemoryRegionMap::lock_owner_tid_; // GUARDED_BY(owner_lock_)
|
||||
int64 MemoryRegionMap::map_size_ = 0;
|
||||
int64 MemoryRegionMap::unmap_size_ = 0;
|
||||
HeapProfileBucket** MemoryRegionMap::bucket_table_ = NULL; // GUARDED_BY(lock_)
|
||||
int MemoryRegionMap::num_buckets_ = 0; // GUARDED_BY(lock_)
|
||||
int MemoryRegionMap::saved_buckets_count_ = 0; // GUARDED_BY(lock_)
|
||||
HeapProfileBucket MemoryRegionMap::saved_buckets_[20]; // GUARDED_BY(lock_)
|
||||
// GUARDED_BY(lock_)
|
||||
const void* MemoryRegionMap::saved_buckets_keys_[20][kMaxStackDepth];
|
||||
tcmalloc::MappingHookSpace MemoryRegionMap::mapping_hook_space_;
|
||||
|
||||
// ========================================================================= //
|
||||
|
||||
// Simple hook into execution of global object constructors,
|
||||
// so that we do not call pthread_self() when it does not yet work.
|
||||
static bool libpthread_initialized = false;
|
||||
REGISTER_MODULE_INITIALIZER(libpthread_initialized_setter,
|
||||
libpthread_initialized = true);
|
||||
|
||||
static inline bool current_thread_is(pthread_t should_be) {
|
||||
// Before main() runs, there's only one thread, so we're always that thread
|
||||
if (!libpthread_initialized) return true;
|
||||
// this starts working only at some point well into global constructor execution:
|
||||
return pthread_equal(pthread_self(), should_be);
|
||||
}
|
||||
|
||||
// ========================================================================= //
|
||||
|
||||
// Constructor-less place-holder to store a RegionSet in.
|
||||
union MemoryRegionMap::RegionSetRep {
|
||||
char rep[sizeof(RegionSet)];
|
||||
void* align_it; // do not need a better alignment for 'rep' than this
|
||||
RegionSet* region_set() { return reinterpret_cast<RegionSet*>(rep); }
|
||||
};
|
||||
|
||||
// The bytes where MemoryRegionMap::regions_ will point to.
|
||||
// We use RegionSetRep with noop c-tor so that global construction
|
||||
// does not interfere.
|
||||
static MemoryRegionMap::RegionSetRep regions_rep;
|
||||
|
||||
// ========================================================================= //
|
||||
|
||||
// Has InsertRegionLocked been called recursively
|
||||
// (or rather should we *not* use regions_ to record a hooked mmap).
|
||||
static bool recursive_insert = false;
|
||||
|
||||
void MemoryRegionMap::Init(int max_stack_depth, bool use_buckets) NO_THREAD_SAFETY_ANALYSIS {
|
||||
RAW_VLOG(10, "MemoryRegionMap Init");
|
||||
RAW_CHECK(max_stack_depth >= 0, "");
|
||||
// Make sure we don't overflow the memory in region stacks:
|
||||
RAW_CHECK(max_stack_depth <= kMaxStackDepth,
|
||||
"need to increase kMaxStackDepth?");
|
||||
Lock();
|
||||
client_count_ += 1;
|
||||
max_stack_depth_ = max(max_stack_depth_, max_stack_depth);
|
||||
if (client_count_ > 1) {
|
||||
// not first client: already did initialization-proper
|
||||
Unlock();
|
||||
RAW_VLOG(10, "MemoryRegionMap Init increment done");
|
||||
return;
|
||||
}
|
||||
|
||||
// Set our hooks and make sure they were installed:
|
||||
tcmalloc::HookMMapEvents(&mapping_hook_space_, HandleMappingEvent);
|
||||
|
||||
// We need to set recursive_insert since the NewArena call itself
|
||||
// will already do some allocations with mmap which our hooks will catch
|
||||
// recursive_insert allows us to buffer info about these mmap calls.
|
||||
// Note that Init() can be (and is) sometimes called
|
||||
// already from within an mmap/sbrk hook.
|
||||
recursive_insert = true;
|
||||
arena_ = LowLevelAlloc::NewArena(0, LowLevelAlloc::DefaultArena());
|
||||
recursive_insert = false;
|
||||
HandleSavedRegionsLocked(&InsertRegionLocked); // flush the buffered ones
|
||||
// Can't instead use HandleSavedRegionsLocked(&DoInsertRegionLocked) before
|
||||
// recursive_insert = false; as InsertRegionLocked will also construct
|
||||
// regions_ on demand for us.
|
||||
if (use_buckets) {
|
||||
const int table_bytes = kHashTableSize * sizeof(*bucket_table_);
|
||||
recursive_insert = true;
|
||||
bucket_table_ = static_cast<HeapProfileBucket**>(
|
||||
MyAllocator::Allocate(table_bytes));
|
||||
recursive_insert = false;
|
||||
memset(bucket_table_, 0, table_bytes);
|
||||
num_buckets_ = 0;
|
||||
}
|
||||
if (regions_ == NULL) { // init regions_
|
||||
InitRegionSetLocked();
|
||||
}
|
||||
Unlock();
|
||||
RAW_VLOG(10, "MemoryRegionMap Init done");
|
||||
}
|
||||
|
||||
bool MemoryRegionMap::Shutdown() NO_THREAD_SAFETY_ANALYSIS {
|
||||
RAW_VLOG(10, "MemoryRegionMap Shutdown");
|
||||
Lock();
|
||||
RAW_CHECK(client_count_ > 0, "");
|
||||
client_count_ -= 1;
|
||||
if (client_count_ != 0) { // not last client; need not really shutdown
|
||||
Unlock();
|
||||
RAW_VLOG(10, "MemoryRegionMap Shutdown decrement done");
|
||||
return true;
|
||||
}
|
||||
if (bucket_table_ != NULL) {
|
||||
for (int i = 0; i < kHashTableSize; i++) {
|
||||
for (HeapProfileBucket* curr = bucket_table_[i]; curr != 0; /**/) {
|
||||
HeapProfileBucket* bucket = curr;
|
||||
curr = curr->next;
|
||||
MyAllocator::Free(bucket->stack, 0);
|
||||
MyAllocator::Free(bucket, 0);
|
||||
}
|
||||
}
|
||||
MyAllocator::Free(bucket_table_, 0);
|
||||
num_buckets_ = 0;
|
||||
bucket_table_ = NULL;
|
||||
}
|
||||
|
||||
tcmalloc::UnHookMMapEvents(&mapping_hook_space_);
|
||||
|
||||
if (regions_) regions_->~RegionSet();
|
||||
regions_ = NULL;
|
||||
bool deleted_arena = LowLevelAlloc::DeleteArena(arena_);
|
||||
if (deleted_arena) {
|
||||
arena_ = 0;
|
||||
} else {
|
||||
RAW_LOG(WARNING, "Can't delete LowLevelAlloc arena: it's being used");
|
||||
}
|
||||
Unlock();
|
||||
RAW_VLOG(10, "MemoryRegionMap Shutdown done");
|
||||
return deleted_arena;
|
||||
}
|
||||
|
||||
bool MemoryRegionMap::IsRecordingLocked() {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
return client_count_ > 0;
|
||||
}
|
||||
|
||||
// Invariants (once libpthread_initialized is true):
|
||||
// * While lock_ is not held, recursion_count_ is 0 (and
|
||||
// lock_owner_tid_ is the previous owner, but we don't rely on
|
||||
// that).
|
||||
// * recursion_count_ and lock_owner_tid_ are only written while
|
||||
// both lock_ and owner_lock_ are held. They may be read under
|
||||
// just owner_lock_.
|
||||
// * At entry and exit of Lock() and Unlock(), the current thread
|
||||
// owns lock_ iff pthread_equal(lock_owner_tid_, pthread_self())
|
||||
// && recursion_count_ > 0.
|
||||
void MemoryRegionMap::Lock() NO_THREAD_SAFETY_ANALYSIS {
|
||||
{
|
||||
SpinLockHolder l(&owner_lock_);
|
||||
if (recursion_count_ > 0 && current_thread_is(lock_owner_tid_)) {
|
||||
RAW_CHECK(lock_.IsHeld(), "Invariants violated");
|
||||
recursion_count_++;
|
||||
RAW_CHECK(recursion_count_ <= 5,
|
||||
"recursive lock nesting unexpectedly deep");
|
||||
return;
|
||||
}
|
||||
}
|
||||
lock_.Lock();
|
||||
{
|
||||
SpinLockHolder l(&owner_lock_);
|
||||
RAW_CHECK(recursion_count_ == 0,
|
||||
"Last Unlock didn't reset recursion_count_");
|
||||
if (libpthread_initialized)
|
||||
lock_owner_tid_ = pthread_self();
|
||||
recursion_count_ = 1;
|
||||
}
|
||||
}
|
||||
|
||||
void MemoryRegionMap::Unlock() NO_THREAD_SAFETY_ANALYSIS {
|
||||
SpinLockHolder l(&owner_lock_);
|
||||
RAW_CHECK(recursion_count_ > 0, "unlock when not held");
|
||||
RAW_CHECK(lock_.IsHeld(),
|
||||
"unlock when not held, and recursion_count_ is wrong");
|
||||
RAW_CHECK(current_thread_is(lock_owner_tid_), "unlock by non-holder");
|
||||
recursion_count_--;
|
||||
if (recursion_count_ == 0) {
|
||||
lock_.Unlock();
|
||||
}
|
||||
}
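|
||||
// Illustrative call sequence (hypothetical, not in the original source)
|
||||
// showing what the recursive locking above permits on a single thread:
|
||||
//
|
||||
//   MemoryRegionMap::Lock();    // takes lock_, recursion_count_ = 1
|
||||
//   MemoryRegionMap::Lock();    // same thread: recursion_count_ = 2
|
||||
//   MemoryRegionMap::Unlock();  // recursion_count_ = 1, lock_ still held
|
||||
//   MemoryRegionMap::Unlock();  // recursion_count_ = 0, lock_ released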
|
||||
|
||||
bool MemoryRegionMap::LockIsHeld() {
|
||||
SpinLockHolder l(&owner_lock_);
|
||||
return lock_.IsHeld() && current_thread_is(lock_owner_tid_);
|
||||
}
|
||||
|
||||
const MemoryRegionMap::Region*
|
||||
MemoryRegionMap::DoFindRegionLocked(uintptr_t addr) {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
if (regions_ != NULL) {
|
||||
Region sample;
|
||||
sample.SetRegionSetKey(addr);
|
||||
RegionSet::iterator region = regions_->lower_bound(sample);
|
||||
if (region != regions_->end()) {
|
||||
RAW_CHECK(addr <= region->end_addr, "");
|
||||
if (region->start_addr <= addr && addr < region->end_addr) {
|
||||
return &(*region);
|
||||
}
|
||||
}
|
||||
}
|
||||
return NULL;
|
||||
}
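|
||||
// Worked example (illustrative, not in the original source): regions_ is
|
||||
// ordered by end_addr and the sample key sets end_addr = addr, so
|
||||
// lower_bound() lands on the first region with end_addr >= addr:
|
||||
//
|
||||
//   regions_ = { [0x1000,0x2000), [0x3000,0x4000) }
|
||||
//   DoFindRegionLocked(0x1800) -> [0x1000,0x2000), since
|
||||
//       0x1000 <= 0x1800 < 0x2000
|
||||
//   DoFindRegionLocked(0x2800) -> NULL, since the candidate
|
||||
//       [0x3000,0x4000) starts above 0x2800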
|
||||
|
||||
bool MemoryRegionMap::FindRegion(uintptr_t addr, Region* result) {
|
||||
Lock();
|
||||
const Region* region = DoFindRegionLocked(addr);
|
||||
if (region != NULL) *result = *region; // create it as an independent copy
|
||||
Unlock();
|
||||
return region != NULL;
|
||||
}
|
||||
|
||||
bool MemoryRegionMap::FindAndMarkStackRegion(uintptr_t stack_top,
|
||||
Region* result) {
|
||||
Lock();
|
||||
const Region* region = DoFindRegionLocked(stack_top);
|
||||
if (region != NULL) {
|
||||
RAW_VLOG(10, "Stack at %p is inside region %p..%p",
|
||||
reinterpret_cast<void*>(stack_top),
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
const_cast<Region*>(region)->set_is_stack(); // now we know
|
||||
// cast is safe (set_is_stack does not change the set ordering key)
|
||||
*result = *region; // create *result as an independent copy
|
||||
}
|
||||
Unlock();
|
||||
return region != NULL;
|
||||
}
|
||||
|
||||
HeapProfileBucket* MemoryRegionMap::GetBucket(int depth,
|
||||
const void* const key[]) {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
// Make hash-value
|
||||
uintptr_t hash = 0;
|
||||
for (int i = 0; i < depth; i++) {
|
||||
hash += reinterpret_cast<uintptr_t>(key[i]);
|
||||
hash += hash << 10;
|
||||
hash ^= hash >> 6;
|
||||
}
|
||||
hash += hash << 3;
|
||||
hash ^= hash >> 11;
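|
||||
// (Descriptive note, not in the original source: the add/shift/xor
|
||||
// mixing above is a variant of Bob Jenkins' one-at-a-time hash, applied
|
||||
// to whole frame addresses instead of bytes.)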
|
||||
|
||||
// Lookup stack trace in table
|
||||
unsigned int hash_index = (static_cast<unsigned int>(hash)) % kHashTableSize;
|
||||
for (HeapProfileBucket* bucket = bucket_table_[hash_index];
|
||||
bucket != 0;
|
||||
bucket = bucket->next) {
|
||||
if ((bucket->hash == hash) && (bucket->depth == depth) &&
|
||||
std::equal(key, key + depth, bucket->stack)) {
|
||||
return bucket;
|
||||
}
|
||||
}
|
||||
|
||||
// Create new bucket
|
||||
const size_t key_size = sizeof(key[0]) * depth;
|
||||
HeapProfileBucket* bucket;
|
||||
if (recursive_insert) { // recursion: save in saved_buckets_
|
||||
const void** key_copy = saved_buckets_keys_[saved_buckets_count_];
|
||||
std::copy(key, key + depth, key_copy);
|
||||
bucket = &saved_buckets_[saved_buckets_count_];
|
||||
memset(bucket, 0, sizeof(*bucket));
|
||||
++saved_buckets_count_;
|
||||
bucket->stack = key_copy;
|
||||
bucket->next = NULL;
|
||||
} else {
|
||||
recursive_insert = true;
|
||||
const void** key_copy = static_cast<const void**>(
|
||||
MyAllocator::Allocate(key_size));
|
||||
recursive_insert = false;
|
||||
std::copy(key, key + depth, key_copy);
|
||||
recursive_insert = true;
|
||||
bucket = static_cast<HeapProfileBucket*>(
|
||||
MyAllocator::Allocate(sizeof(HeapProfileBucket)));
|
||||
recursive_insert = false;
|
||||
memset(bucket, 0, sizeof(*bucket));
|
||||
bucket->stack = key_copy;
|
||||
bucket->next = bucket_table_[hash_index];
|
||||
}
|
||||
bucket->hash = hash;
|
||||
bucket->depth = depth;
|
||||
bucket_table_[hash_index] = bucket;
|
||||
++num_buckets_;
|
||||
return bucket;
|
||||
}
|
||||
|
||||
MemoryRegionMap::RegionIterator MemoryRegionMap::BeginRegionLocked() {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
RAW_CHECK(regions_ != NULL, "");
|
||||
return regions_->begin();
|
||||
}
|
||||
|
||||
MemoryRegionMap::RegionIterator MemoryRegionMap::EndRegionLocked() {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
RAW_CHECK(regions_ != NULL, "");
|
||||
return regions_->end();
|
||||
}
|
||||
|
||||
inline void MemoryRegionMap::DoInsertRegionLocked(const Region& region) {
|
||||
RAW_VLOG(12, "Inserting region %p..%p from %p",
|
||||
reinterpret_cast<void*>(region.start_addr),
|
||||
reinterpret_cast<void*>(region.end_addr),
|
||||
reinterpret_cast<void*>(region.caller()));
|
||||
RegionSet::const_iterator i = regions_->lower_bound(region);
|
||||
if (i != regions_->end() && i->start_addr <= region.start_addr) {
|
||||
RAW_DCHECK(region.end_addr <= i->end_addr, ""); // lower_bound ensures this
|
||||
return; // 'region' is a subset of an already recorded region; do nothing
|
||||
// We can be stricter and allow this only when *i has been created via
|
||||
// an mmap with MAP_NORESERVE flag set.
|
||||
}
|
||||
if (DEBUG_MODE) {
|
||||
RAW_CHECK(i == regions_->end() || !region.Overlaps(*i),
|
||||
"Wow, overlapping memory regions");
|
||||
Region sample;
|
||||
sample.SetRegionSetKey(region.start_addr);
|
||||
i = regions_->lower_bound(sample);
|
||||
RAW_CHECK(i == regions_->end() || !region.Overlaps(*i),
|
||||
"Wow, overlapping memory regions");
|
||||
}
|
||||
region.AssertIsConsistent(); // just making sure
|
||||
// This inserts and allocates permanent storage for region
|
||||
// and its call stack data: it's safe to do it now:
|
||||
regions_->insert(region);
|
||||
RAW_VLOG(12, "Inserted region %p..%p :",
|
||||
reinterpret_cast<void*>(region.start_addr),
|
||||
reinterpret_cast<void*>(region.end_addr));
|
||||
if (VLOG_IS_ON(12)) LogAllLocked();
|
||||
}
|
||||
|
||||
// These variables are local to MemoryRegionMap::InsertRegionLocked()
|
||||
// and MemoryRegionMap::HandleSavedRegionsLocked()
|
||||
// and are file-level to ensure that they are initialized at load time.
|
||||
|
||||
// Number of unprocessed region inserts.
|
||||
static int saved_regions_count = 0;
|
||||
|
||||
// Unprocessed inserts (must be big enough to hold all allocations that can
|
||||
// be caused by a InsertRegionLocked call).
|
||||
// Region has no constructor, so that c-tor execution does not interfere
|
||||
// with the any-time use of the static memory behind saved_regions.
|
||||
static MemoryRegionMap::Region saved_regions[20];
|
||||
|
||||
inline void MemoryRegionMap::HandleSavedRegionsLocked(
|
||||
void (*insert_func)(const Region& region)) {
|
||||
while (saved_regions_count > 0) {
|
||||
// Making a local-var copy of the region argument to insert_func
|
||||
// including its stack (w/o doing any memory allocations) is important:
|
||||
// in many cases the memory in saved_regions
|
||||
// will get written-to during the (*insert_func)(r) call below.
|
||||
Region r = saved_regions[--saved_regions_count];
|
||||
(*insert_func)(r);
|
||||
}
|
||||
}
|
||||
|
||||
void MemoryRegionMap::RestoreSavedBucketsLocked() {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
while (saved_buckets_count_ > 0) {
|
||||
HeapProfileBucket bucket = saved_buckets_[--saved_buckets_count_];
|
||||
unsigned int hash_index =
|
||||
static_cast<unsigned int>(bucket.hash) % kHashTableSize;
|
||||
bool is_found = false;
|
||||
for (HeapProfileBucket* curr = bucket_table_[hash_index];
|
||||
curr != 0;
|
||||
curr = curr->next) {
|
||||
if ((curr->hash == bucket.hash) && (curr->depth == bucket.depth) &&
|
||||
std::equal(bucket.stack, bucket.stack + bucket.depth, curr->stack)) {
|
||||
curr->allocs += bucket.allocs;
|
||||
curr->alloc_size += bucket.alloc_size;
|
||||
curr->frees += bucket.frees;
|
||||
curr->free_size += bucket.free_size;
|
||||
is_found = true;
|
||||
break;
|
||||
}
|
||||
}
|
||||
if (is_found) continue;
|
||||
|
||||
const size_t key_size = sizeof(bucket.stack[0]) * bucket.depth;
|
||||
const void** key_copy = static_cast<const void**>(
|
||||
MyAllocator::Allocate(key_size));
|
||||
std::copy(bucket.stack, bucket.stack + bucket.depth, key_copy);
|
||||
HeapProfileBucket* new_bucket = static_cast<HeapProfileBucket*>(
|
||||
MyAllocator::Allocate(sizeof(HeapProfileBucket)));
|
||||
memset(new_bucket, 0, sizeof(*new_bucket));
|
||||
new_bucket->hash = bucket.hash;
|
||||
new_bucket->depth = bucket.depth;
|
||||
new_bucket->stack = key_copy;
|
||||
new_bucket->next = bucket_table_[hash_index];
|
||||
bucket_table_[hash_index] = new_bucket;
|
||||
++num_buckets_;
|
||||
}
|
||||
}
|
||||
|
||||
inline void MemoryRegionMap::InitRegionSetLocked() {
|
||||
RAW_VLOG(12, "Initializing region set");
|
||||
regions_ = regions_rep.region_set();
|
||||
recursive_insert = true;
|
||||
new (regions_) RegionSet();
|
||||
HandleSavedRegionsLocked(&DoInsertRegionLocked);
|
||||
recursive_insert = false;
|
||||
}
|
||||
|
||||
inline void MemoryRegionMap::InsertRegionLocked(const Region& region) {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
// We can be called recursively, because RegionSet constructor
|
||||
// and DoInsertRegionLocked() (called below) can call the allocator.
|
||||
// recursive_insert tells us if that's the case. When this happens,
|
||||
// region insertion information is recorded in saved_regions[],
|
||||
// and taken into account when the recursion unwinds.
|
||||
// Do the insert:
|
||||
if (recursive_insert) { // recursion: save in saved_regions
|
||||
RAW_VLOG(12, "Saving recursive insert of region %p..%p from %p",
|
||||
reinterpret_cast<void*>(region.start_addr),
|
||||
reinterpret_cast<void*>(region.end_addr),
|
||||
reinterpret_cast<void*>(region.caller()));
|
||||
RAW_CHECK(saved_regions_count < arraysize(saved_regions), "");
|
||||
// Copy 'region' to saved_regions[saved_regions_count]
|
||||
// together with the contents of its call_stack,
|
||||
// then increment saved_regions_count.
|
||||
saved_regions[saved_regions_count++] = region;
|
||||
} else { // not a recursive call
|
||||
if (regions_ == NULL) { // init regions_
|
||||
InitRegionSetLocked();
|
||||
}
|
||||
recursive_insert = true;
|
||||
// Do the actual insertion work to put new regions into regions_:
|
||||
DoInsertRegionLocked(region);
|
||||
HandleSavedRegionsLocked(&DoInsertRegionLocked);
|
||||
recursive_insert = false;
|
||||
}
|
||||
}
|
||||
|
||||
// We strip out a different number of stack frames in debug mode
|
||||
// because less inlining happens in that case
|
||||
#ifdef NDEBUG
|
||||
static const int kStripFrames = 1;
|
||||
#else
|
||||
static const int kStripFrames = 3;
|
||||
#endif
|
||||
|
||||
void MemoryRegionMap::RecordRegionAddition(const void* start, size_t size) {
|
||||
// Record start/end info about this memory acquisition call in a new region:
|
||||
Region region;
|
||||
region.Create(start, size);
|
||||
// First get the call stack info into the local variable 'region':
|
||||
int depth = 0;
|
||||
// NOTE: libunwind also does mmap, very likely while holding
|
||||
// its own lock(s). So some threads may first take the libunwind lock,
|
||||
// and then take region map lock (necessary to record mmap done from
|
||||
// inside libunwind). On the other hand, other threads may do a
|
||||
// normal mmap, which calls this method to record it, and then
|
||||
// proceeds to install that record into the region map
|
||||
// while holding the region map lock. That may cause mmap from our own
|
||||
// internal allocators, so an attempt to unwind in this case may take
|
||||
// the libunwind and region map locks in the reverse order, which is an
|
||||
// obvious deadlock.
|
||||
//
|
||||
// Thankfully, we can easily detect if we're holding region map lock
|
||||
// and avoid recording backtrace in this (rare and largely
|
||||
// irrelevant) case. By doing this we "declare" that a thread needing
|
||||
// both locks must take the region map lock last. In other words, we do
|
||||
// not allow taking the libunwind lock when we already have the region map
|
||||
// lock. Note, this is generally impossible when somebody tries to
|
||||
// mix cpu profiling and heap checking/profiling, because cpu
|
||||
// profiler grabs backtraces at arbitrary places. But at least such
|
||||
// combination is rarer and less relevant.
|
||||
if (max_stack_depth_ > 0 && !LockIsHeld()) {
|
||||
depth = MallocHook::GetCallerStackTrace(const_cast<void**>(region.call_stack),
|
||||
max_stack_depth_, kStripFrames + 1);
|
||||
}
|
||||
region.set_call_stack_depth(depth); // record stack info fully
|
||||
RAW_VLOG(10, "New global region %p..%p from %p",
|
||||
reinterpret_cast<void*>(region.start_addr),
|
||||
reinterpret_cast<void*>(region.end_addr),
|
||||
reinterpret_cast<void*>(region.caller()));
|
||||
// Note: none of the above allocates memory.
|
||||
Lock(); // recursively lock
|
||||
map_size_ += size;
|
||||
InsertRegionLocked(region);
|
||||
// This will (eventually) allocate storage for and copy over the stack data
|
||||
// from region.call_stack_data_ that is pointed by region.call_stack().
|
||||
if (bucket_table_ != NULL) {
|
||||
HeapProfileBucket* b = GetBucket(depth, region.call_stack);
|
||||
++b->allocs;
|
||||
b->alloc_size += size;
|
||||
if (!recursive_insert) {
|
||||
recursive_insert = true;
|
||||
RestoreSavedBucketsLocked();
|
||||
recursive_insert = false;
|
||||
}
|
||||
}
|
||||
Unlock();
|
||||
}
|
||||
|
||||
void MemoryRegionMap::RecordRegionRemoval(const void* start, size_t size) {
|
||||
Lock();
|
||||
if (recursive_insert) {
|
||||
// First remove the removed region from saved_regions, if it's
|
||||
// there, to prevent overrunning saved_regions in recursive
|
||||
// map/unmap call sequences, and also from later inserting regions
|
||||
// which have already been unmapped.
|
||||
uintptr_t start_addr = reinterpret_cast<uintptr_t>(start);
|
||||
uintptr_t end_addr = start_addr + size;
|
||||
int put_pos = 0;
|
||||
int old_count = saved_regions_count;
|
||||
for (int i = 0; i < old_count; ++i, ++put_pos) {
|
||||
Region& r = saved_regions[i];
|
||||
if (r.start_addr == start_addr && r.end_addr == end_addr) {
|
||||
// An exact match, so it's safe to remove.
|
||||
RecordRegionRemovalInBucket(r.call_stack_depth, r.call_stack, size);
|
||||
--saved_regions_count;
|
||||
--put_pos;
|
||||
RAW_VLOG(10, ("Insta-Removing saved region %p..%p; "
|
||||
"now have %d saved regions"),
|
||||
reinterpret_cast<void*>(start_addr),
|
||||
reinterpret_cast<void*>(end_addr),
|
||||
saved_regions_count);
|
||||
} else {
|
||||
if (put_pos < i) {
|
||||
saved_regions[put_pos] = saved_regions[i];
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
if (regions_ == NULL) { // We must have just unset the hooks,
|
||||
// but this thread was already inside the hook.
|
||||
Unlock();
|
||||
return;
|
||||
}
|
||||
if (!recursive_insert) {
|
||||
HandleSavedRegionsLocked(&InsertRegionLocked);
|
||||
}
|
||||
// first handle adding saved regions if any
|
||||
uintptr_t start_addr = reinterpret_cast<uintptr_t>(start);
|
||||
uintptr_t end_addr = start_addr + size;
|
||||
// subtract start_addr, end_addr from all the regions
|
||||
RAW_VLOG(10, "Removing global region %p..%p; have %zu regions",
|
||||
reinterpret_cast<void*>(start_addr),
|
||||
reinterpret_cast<void*>(end_addr),
|
||||
regions_->size());
|
||||
Region sample;
|
||||
sample.SetRegionSetKey(start_addr);
|
||||
// Only iterate over the regions that might overlap start_addr..end_addr:
|
||||
for (RegionSet::iterator region = regions_->lower_bound(sample);
|
||||
region != regions_->end() && region->start_addr < end_addr;
|
||||
/*noop*/) {
|
||||
RAW_VLOG(13, "Looking at region %p..%p",
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
if (start_addr <= region->start_addr &&
|
||||
region->end_addr <= end_addr) { // full deletion
|
||||
RAW_VLOG(12, "Deleting region %p..%p",
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
|
||||
region->end_addr - region->start_addr);
|
||||
RegionSet::iterator d = region;
|
||||
++region;
|
||||
regions_->erase(d);
|
||||
continue;
|
||||
} else if (region->start_addr < start_addr &&
|
||||
end_addr < region->end_addr) { // cutting-out split
|
||||
RAW_VLOG(12, "Splitting region %p..%p in two",
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
|
||||
end_addr - start_addr);
|
||||
// Make another region for the start portion:
|
||||
// The new region has to be the start portion because we can't
|
||||
// just modify region->end_addr as it's the sorting key.
|
||||
Region r = *region;
|
||||
r.set_end_addr(start_addr);
|
||||
InsertRegionLocked(r);
|
||||
// cut *region from start:
|
||||
const_cast<Region&>(*region).set_start_addr(end_addr);
|
||||
} else if (end_addr > region->start_addr &&
|
||||
start_addr <= region->start_addr) { // cut from start
|
||||
RAW_VLOG(12, "Start-chopping region %p..%p",
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
|
||||
end_addr - region->start_addr);
|
||||
const_cast<Region&>(*region).set_start_addr(end_addr);
|
||||
} else if (start_addr > region->start_addr &&
|
||||
start_addr < region->end_addr) { // cut from end
|
||||
RAW_VLOG(12, "End-chopping region %p..%p",
|
||||
reinterpret_cast<void*>(region->start_addr),
|
||||
reinterpret_cast<void*>(region->end_addr));
|
||||
RecordRegionRemovalInBucket(region->call_stack_depth, region->call_stack,
|
||||
region->end_addr - start_addr);
|
||||
// Can't just modify region->end_addr (it's the sorting key):
|
||||
Region r = *region;
|
||||
r.set_end_addr(start_addr);
|
||||
RegionSet::iterator d = region;
|
||||
++region;
|
||||
// It's safe to erase before inserting since r is independent of *d:
|
||||
// r contains an own copy of the call stack:
|
||||
regions_->erase(d);
|
||||
InsertRegionLocked(r);
|
||||
continue;
|
||||
}
|
||||
++region;
|
||||
}
|
||||
RAW_VLOG(12, "Removed region %p..%p; have %zu regions",
|
||||
reinterpret_cast<void*>(start_addr),
|
||||
reinterpret_cast<void*>(end_addr),
|
||||
regions_->size());
|
||||
if (VLOG_IS_ON(12)) LogAllLocked();
|
||||
unmap_size_ += size;
|
||||
Unlock();
|
||||
}
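|
||||
// Summary of the four cases handled above (illustrative, not in the
|
||||
// original source), removing [s,e) against an existing region [a,b):
|
||||
//
|
||||
//   s <= a && b <= e        full deletion:  nothing remains
|
||||
//   a < s && e < b          middle cut-out: [a,s) and [e,b) remain
|
||||
//   s <= a && a < e < b     start chop:     [e,b) remains
|
||||
//   a < s < b && b <= e     end chop:       [a,s) remains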
|
||||
|
||||
void MemoryRegionMap::RecordRegionRemovalInBucket(int depth,
|
||||
const void* const stack[],
|
||||
size_t size) {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
if (bucket_table_ == NULL) return;
|
||||
HeapProfileBucket* b = GetBucket(depth, stack);
|
||||
++b->frees;
|
||||
b->free_size += size;
|
||||
}
|
||||
|
||||
void MemoryRegionMap::HandleMappingEvent(const tcmalloc::MappingEvent& evt) {
|
||||
RAW_VLOG(10, "MMap: before: %p, +%zu; after: %p, +%zu; fd: %d, off: %lld, sbrk: %s",
|
||||
evt.before_address, evt.before_valid ? evt.before_length : 0,
|
||||
evt.after_address, evt.after_valid ? evt.after_length : 0,
|
||||
evt.file_valid ? evt.file_fd : -1, evt.file_valid ? (long long)evt.file_off : 0LL,
|
||||
evt.is_sbrk ? "true" : "false");
|
||||
if (evt.before_valid && evt.before_length != 0) {
|
||||
RecordRegionRemoval(evt.before_address, evt.before_length);
|
||||
}
|
||||
if (evt.after_valid && evt.after_length != 0) {
|
||||
RecordRegionAddition(evt.after_address, evt.after_length);
|
||||
}
|
||||
}
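|
||||
// For example (illustrative, not in the original source): a plain mmap()
|
||||
// carries only a valid "after" range and is recorded as an addition;
|
||||
// munmap() carries only a valid "before" range (a removal); mremap()
|
||||
// carries both, so it is recorded as a removal followed by an addition.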
|
||||
|
||||
void MemoryRegionMap::LogAllLocked() {
|
||||
RAW_CHECK(LockIsHeld(), "should be held (by this thread)");
|
||||
RAW_LOG(INFO, "List of regions:");
|
||||
uintptr_t previous = 0;
|
||||
for (RegionSet::const_iterator r = regions_->begin();
|
||||
r != regions_->end(); ++r) {
|
||||
RAW_LOG(INFO, "Memory region 0x%" PRIxPTR "..0x%" PRIxPTR " "
|
||||
"from 0x%" PRIxPTR " stack=%d",
|
||||
r->start_addr, r->end_addr, r->caller(), r->is_stack);
|
||||
RAW_CHECK(previous < r->end_addr, "wow, we messed up the set order");
|
||||
// this must be caused by uncontrolled recursive operations on regions_
|
||||
previous = r->end_addr;
|
||||
}
|
||||
RAW_LOG(INFO, "End of regions list");
|
||||
}
|
407
3party/gperftools/src/memory_region_map.h
Normal file
@ -0,0 +1,407 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
/* Copyright (c) 2006, Google Inc.
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*
|
||||
* ---
|
||||
* Author: Maxim Lifantsev
|
||||
*/
|
||||
|
||||
#ifndef BASE_MEMORY_REGION_MAP_H_
|
||||
#define BASE_MEMORY_REGION_MAP_H_
|
||||
|
||||
#include <config.h>
|
||||
|
||||
#ifdef HAVE_PTHREAD
|
||||
#include <pthread.h>
|
||||
#endif
|
||||
#include <stddef.h>
|
||||
#include <set>
|
||||
#include "base/stl_allocator.h"
|
||||
#include "base/spinlock.h"
|
||||
#include "base/thread_annotations.h"
|
||||
#include "base/low_level_alloc.h"
|
||||
#include "heap-profile-stats.h"
|
||||
#include "mmap_hook.h"
|
||||
|
||||
// TODO(maxim): add a unittest:
|
||||
// execute a bunch of mmaps and compare the memory map with what strace logs
|
||||
// execute a bunch of mmap/munmap calls and compare the memory map with
|
||||
// own accounting of what those mmaps generated
|
||||
|
||||
// Thread-safe class to collect and query the map of all memory regions
|
||||
// in a process that have been created with mmap, munmap, mremap, sbrk.
|
||||
// For each memory region, we keep track of (and provide to users)
|
||||
// the stack trace that allocated that memory region.
|
||||
// The recorded stack trace depth is bounded by
|
||||
// a user-supplied max_stack_depth parameter of Init().
|
||||
// After initialization with Init()
|
||||
// (which can happen even before global object constructor execution)
|
||||
// we collect the map by installing and monitoring MallocHook-s
|
||||
// to mmap, munmap, mremap, sbrk.
|
||||
// At any time one can query this map via provided interface.
|
||||
// For more details on the design of MemoryRegionMap
|
||||
// see the comment at the top of our .cc file.
|
||||
class MemoryRegionMap {
|
||||
private:
|
||||
// Max call stack recording depth supported by Init(). Set it to be
|
||||
// high enough for all our clients. Note: we do not define storage
|
||||
// for this (doing that requires special handling in windows), so
|
||||
// don't take the address of it!
|
||||
static const int kMaxStackDepth = 32;
|
||||
|
||||
// Size of the hash table of buckets. A structure of the bucket table is
|
||||
// described in heap-profile-stats.h.
|
||||
static const int kHashTableSize = 179999;
|
||||
|
||||
public:
|
||||
// interface ================================================================
|
||||
|
||||
// Every client of MemoryRegionMap must call Init() before first use,
|
||||
// and Shutdown() after last use. This allows us to reference count
|
||||
// this (singleton) class properly.
|
||||
|
||||
// Initialize this module to record memory allocation stack traces.
|
||||
// Stack traces that have more than "max_stack_depth" frames
|
||||
// are automatically shrunk to "max_stack_depth" when they are recorded.
|
||||
// Init() can be called more than once w/o harm; the largest max_stack_depth
|
||||
// will be the effective one.
|
||||
// When "use_buckets" is true, then counts of mmap and munmap sizes will be
|
||||
// recorded with each stack trace. If Init() is called more than once, then
|
||||
// counting will be in effect after any call that passed "use_buckets" as true.
|
||||
// It will install mmap, munmap, mremap, sbrk hooks
|
||||
// and initialize arena_ and our hook and locks, hence one can use
|
||||
// MemoryRegionMap::Lock()/Unlock() to manage the locks.
|
||||
// Uses Lock/Unlock inside.
|
||||
static void Init(int max_stack_depth, bool use_buckets);
|
||||
|
||||
// Try to shutdown this module undoing what Init() did.
|
||||
// Returns true iff could do full shutdown (or it was not attempted).
|
||||
// Full shutdown is attempted when the number of Shutdown() calls equals
|
||||
// the number of Init() calls.
|
||||
static bool Shutdown();
|
||||
|
||||
// Return true if MemoryRegionMap is initialized and recording, i.e. when
|
||||
// the number of Init() calls exceeds the number of Shutdown() calls.
|
||||
static bool IsRecordingLocked();
|
||||
|
||||
// Locks to protect our internal data structures.
|
||||
// These also protect use of arena_ if our Init() has been done.
|
||||
// The lock is recursive.
|
||||
static void Lock() EXCLUSIVE_LOCK_FUNCTION(lock_);
|
||||
static void Unlock() UNLOCK_FUNCTION(lock_);
|
||||
|
||||
// Returns true when the lock is held by this thread (for use in RAW_CHECK-s).
|
||||
static bool LockIsHeld();
|
||||
|
||||
// Locker object that acquires the MemoryRegionMap::Lock
|
||||
// for the duration of its lifetime (a C++ scope).
|
||||
class SCOPED_LOCKABLE LockHolder {
|
||||
public:
|
||||
LockHolder() EXCLUSIVE_LOCK_FUNCTION(lock_) { Lock(); }
|
||||
~LockHolder() UNLOCK_FUNCTION(lock_) { Unlock(); }
|
||||
private:
|
||||
DISALLOW_COPY_AND_ASSIGN(LockHolder);
|
||||
};
|
||||
|
||||
// A memory region that we know about through mmap hooks.
|
||||
// This is essentially an interface through which MemoryRegionMap
|
||||
// exports the collected data to its clients. Thread-compatible.
|
||||
struct Region {
|
||||
uintptr_t start_addr; // region start address
|
||||
uintptr_t end_addr; // region end address
|
||||
int call_stack_depth; // number of caller stack frames that we saved
|
||||
const void* call_stack[kMaxStackDepth]; // caller address stack array
|
||||
// filled to call_stack_depth size
|
||||
bool is_stack; // does this region contain a thread's stack:
|
||||
// a user of MemoryRegionMap supplies this info
|
||||
|
||||
// Convenience accessor for call_stack[0],
|
||||
// i.e. (the program counter of) the immediate caller
|
||||
// of this region's allocation function,
|
||||
// but it also returns NULL when call_stack_depth is 0,
|
||||
// i.e. when we weren't able to get the call stack.
|
||||
// This usually happens in recursive calls, when the stack-unwinder
|
||||
// calls mmap() which in turn calls the stack-unwinder.
|
||||
uintptr_t caller() const {
|
||||
return reinterpret_cast<uintptr_t>(call_stack_depth >= 1
|
||||
? call_stack[0] : NULL);
|
||||
}
|
||||
|
||||
// Return true iff this region overlaps region x.
|
||||
bool Overlaps(const Region& x) const {
|
||||
return start_addr < x.end_addr && end_addr > x.start_addr;
|
||||
}
|
||||
|
||||
private: // helpers for MemoryRegionMap
|
||||
friend class MemoryRegionMap;
|
||||
|
||||
// The ways we create Region-s:
|
||||
void Create(const void* start, size_t size) {
|
||||
start_addr = reinterpret_cast<uintptr_t>(start);
|
||||
end_addr = start_addr + size;
|
||||
is_stack = false; // not a stack till marked such
|
||||
call_stack_depth = 0;
|
||||
AssertIsConsistent();
|
||||
}
|
||||
void set_call_stack_depth(int depth) {
|
||||
RAW_DCHECK(call_stack_depth == 0, ""); // only one such set is allowed
|
||||
call_stack_depth = depth;
|
||||
AssertIsConsistent();
|
||||
}
|
||||
|
||||
// The ways we modify Region-s:
|
||||
void set_is_stack() { is_stack = true; }
|
||||
void set_start_addr(uintptr_t addr) {
|
||||
start_addr = addr;
|
||||
AssertIsConsistent();
|
||||
}
|
||||
void set_end_addr(uintptr_t addr) {
|
||||
end_addr = addr;
|
||||
AssertIsConsistent();
|
||||
}
|
||||
|
||||
// Verifies that *this contains consistent data, crashes if not the case.
|
||||
void AssertIsConsistent() const {
|
||||
RAW_DCHECK(start_addr < end_addr, "");
|
||||
RAW_DCHECK(call_stack_depth >= 0 &&
|
||||
call_stack_depth <= kMaxStackDepth, "");
|
||||
}
|
||||
|
||||
// Post-default construction helper to make a Region suitable
|
||||
// for searching in RegionSet regions_.
|
||||
void SetRegionSetKey(uintptr_t addr) {
|
||||
// make sure *this has no usable data:
|
||||
if (DEBUG_MODE) memset(this, 0xFF, sizeof(*this));
|
||||
end_addr = addr;
|
||||
}
|
||||
|
||||
// Note: call_stack[kMaxStackDepth] as a member lets us make Region
|
||||
// a simple self-contained struct with correctly behaving bit-wise copying.
|
||||
// This simplifies the code of this module but wastes some memory:
|
||||
// in most-often use case of this module (leak checking)
|
||||
// only one call_stack element out of kMaxStackDepth is actually needed.
|
||||
// Making the storage for call_stack variable-sized,
|
||||
// substantially complicates memory management for the Region-s:
|
||||
// as they need to be created and manipulated for some time
|
||||
// w/o any memory allocations, yet are also given out to the users.
|
||||
};
|
||||
|
||||
// Find the region that covers addr and write its data into *result if found,
|
||||
// in which case *result gets filled so that it stays fully functional
|
||||
// even when the underlying region gets removed from MemoryRegionMap.
|
||||
// Returns success. Uses Lock/Unlock inside.
|
||||
static bool FindRegion(uintptr_t addr, Region* result);
|
||||
|
||||
// Find the region that contains stack_top, mark that region as
|
||||
// a stack region, and write its data into *result if found,
|
||||
// in which case *result gets filled so that it stays fully functional
|
||||
// even when the underlying region gets removed from MemoryRegionMap.
|
||||
// Returns success. Uses Lock/Unlock inside.
|
||||
static bool FindAndMarkStackRegion(uintptr_t stack_top, Region* result);
|
||||
|
||||
// Iterate over the buckets which store mmap and munmap counts per stack
|
||||
// trace. It calls "callback" for each bucket, and passes "arg" to it.
|
||||
template<class Type>
|
||||
static void IterateBuckets(void (*callback)(const HeapProfileBucket*, Type),
|
||||
Type arg) EXCLUSIVE_LOCKS_REQUIRED(lock_);
|
||||
|
||||
// Get the bucket whose caller stack trace is "key". The stack trace is
|
||||
// used to a depth of "depth" at most. The requested bucket is created if
|
||||
// needed.
|
||||
// The bucket table is described in heap-profile-stats.h.
|
||||
static HeapProfileBucket* GetBucket(int depth, const void* const key[]) EXCLUSIVE_LOCKS_REQUIRED(lock_);
|
||||
|
||||
private: // our internal types ==============================================
|
||||
|
||||
// Region comparator for sorting with STL
|
||||
struct RegionCmp {
|
||||
bool operator()(const Region& x, const Region& y) const {
|
||||
return x.end_addr < y.end_addr;
|
||||
}
|
||||
};
|
||||
|
||||
// We allocate STL objects in our own arena.
|
||||
struct MyAllocator {
|
||||
static void *Allocate(size_t n) {
|
||||
return LowLevelAlloc::AllocWithArena(n, arena_);
|
||||
}
|
||||
static void Free(const void *p, size_t /* n */) {
|
||||
LowLevelAlloc::Free(const_cast<void*>(p));
|
||||
}
|
||||
};
|
||||
|
||||
// Set of the memory regions
|
||||
typedef std::set<Region, RegionCmp,
|
||||
STL_Allocator<Region, MyAllocator> > RegionSet;
|
||||
|
||||
public: // more in-depth interface ==========================================
|
||||
|
||||
// STL iterator with values of Region
|
||||
typedef RegionSet::const_iterator RegionIterator;
|
||||
|
||||
// Return the begin/end iterators to all the regions.
|
||||
// These need Lock/Unlock protection around their whole usage (loop).
|
||||
// Even when the same thread causes modifications during such a loop
|
||||
// (which are permitted due to recursive locking)
|
||||
// the loop iterator will still be valid as long as its region
|
||||
// has not been deleted, but EndRegionLocked should be
|
||||
// re-evaluated whenever the set of regions has changed.
|
||||
static RegionIterator BeginRegionLocked();
|
||||
static RegionIterator EndRegionLocked();
|
||||
|
||||
// Return the accumulated sizes of mapped and unmapped regions.
|
||||
static int64 MapSize() { return map_size_; }
|
||||
static int64 UnmapSize() { return unmap_size_; }
|
||||
|
||||
// Effectively private type from our .cc =================================
|
||||
// public to let us declare global objects:
|
||||
union RegionSetRep;
|
||||
|
||||
private:
|
||||
// representation ===========================================================
|
||||
|
||||
// Counter of clients of this module that have called Init().
|
||||
static int client_count_;
|
||||
|
||||
// Maximal number of caller stack frames to save (>= 0).
|
||||
static int max_stack_depth_;
|
||||
|
||||
// Arena used for our allocations in regions_.
|
||||
static LowLevelAlloc::Arena* arena_;
|
||||
|
||||
// Set of the mmap/sbrk/mremap-ed memory regions
|
||||
// To be accessed *only* when Lock() is held.
|
||||
// Hence we protect the non-recursive lock used inside of arena_
|
||||
// with our recursive Lock(). This lets a user prevent deadlocks
|
||||
// when threads are stopped by TCMalloc_ListAllProcessThreads at random spots
|
||||
// simply by acquiring our recursive Lock() before that.
|
||||
static RegionSet* regions_;
|
||||
|
||||
// Lock to protect regions_ and buckets_ variables and the data behind.
|
||||
static SpinLock lock_;
|
||||
// Lock to protect the recursive lock itself.
|
||||
static SpinLock owner_lock_;
|
||||
|
||||
// Recursion count for the recursive lock.
|
||||
static int recursion_count_;
|
||||
// The thread id of the thread that's inside the recursive lock.
|
||||
static pthread_t lock_owner_tid_;
|
||||
|
||||
// Total size of all mapped pages so far
|
||||
static int64 map_size_;
|
||||
// Total size of all unmapped pages so far
|
||||
static int64 unmap_size_;
|
||||
|
||||
  // Bucket hash table which is described in heap-profile-stats.h.
  static HeapProfileBucket** bucket_table_ GUARDED_BY(lock_);
  static int num_buckets_ GUARDED_BY(lock_);

  // The following members are local to MemoryRegionMap::GetBucket()
  // and MemoryRegionMap::HandleSavedBucketsLocked()
  // and are file-level to ensure that they are initialized at load time.
  //
  // These are used as temporary storage to break the infinite cycle of mmap
  // calling our hook which (sometimes) causes mmap. It must be a static
  // fixed-size array. The size 20 is just an expected value for safety.
  // The details are described in memory_region_map.cc.

  // Number of unprocessed bucket inserts.
  static int saved_buckets_count_ GUARDED_BY(lock_);

  // Unprocessed inserts (must be big enough to hold all mmaps that can be
  // caused by a GetBucket call).
  // Bucket has no constructor, so that c-tor execution does not interfere
  // with the any-time use of the static memory behind saved_buckets.
  static HeapProfileBucket saved_buckets_[20] GUARDED_BY(lock_);

  static const void* saved_buckets_keys_[20][kMaxStackDepth] GUARDED_BY(lock_);

  static tcmalloc::MappingHookSpace mapping_hook_space_;

  // helpers ==================================================================

  // Helper for FindRegion and FindAndMarkStackRegion:
  // returns the region covering 'addr' or NULL; assumes our lock_ is held.
  static const Region* DoFindRegionLocked(uintptr_t addr);

  // Verifying wrapper around regions_->insert(region).
  // To be called to do InsertRegionLocked's work only!
  inline static void DoInsertRegionLocked(const Region& region);
  // Handle regions saved by InsertRegionLocked into a tmp static array
  // by calling insert_func on them.
  inline static void HandleSavedRegionsLocked(
      void (*insert_func)(const Region& region));

  // Restore buckets saved in a tmp static array by GetBucket to the bucket
  // table where all buckets eventually should be.
  static void RestoreSavedBucketsLocked() EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // Initialize RegionSet regions_.
  inline static void InitRegionSetLocked();

  // Wrapper around DoInsertRegionLocked
  // that handles the case of recursive allocator calls.
  inline static void InsertRegionLocked(const Region& region);

  // Record addition of a memory region at address "start" of size "size"
  // (called from our mmap/mremap/sbrk hook).
  static void RecordRegionAddition(const void* start, size_t size);
  // Record deletion of a memory region at address "start" of size "size"
  // (called from our munmap/mremap/sbrk hook).
  static void RecordRegionRemoval(const void* start, size_t size);

  // Record deletion of a memory region of size "size" in a bucket whose
  // caller stack trace is "key". The stack trace is used to a depth of
  // "depth" at most.
  static void RecordRegionRemovalInBucket(int depth,
                                          const void* const key[],
                                          size_t size) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  static void HandleMappingEvent(const tcmalloc::MappingEvent& evt);

  // Log all memory regions; useful for debugging only.
  // Assumes Lock() is held.
  static void LogAllLocked();

  DISALLOW_COPY_AND_ASSIGN(MemoryRegionMap);
};

template <class Type>
void MemoryRegionMap::IterateBuckets(
    void (*callback)(const HeapProfileBucket*, Type), Type callback_arg) {
  for (int index = 0; index < kHashTableSize; index++) {
    for (HeapProfileBucket* bucket = bucket_table_[index];
         bucket != NULL;
         bucket = bucket->next) {
      callback(bucket, callback_arg);
    }
  }
}

#endif  // BASE_MEMORY_REGION_MAP_H_
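To show the template above in use, here is a minimal sketch of walking the bucket table (illustrative only; TotalStats, SumBucket and SummarizeMappings are hypothetical names, and it assumes the LockHolder helper declared earlier in this header, since bucket_table_ is guarded by lock_):

struct TotalStats {
  int64_t allocs = 0;
  int64_t alloc_size = 0;
};

static void SumBucket(const HeapProfileBucket* bucket, TotalStats* stats) {
  stats->allocs += bucket->allocs;
  stats->alloc_size += bucket->alloc_size;
}

void SummarizeMappings() {
  TotalStats totals;
  MemoryRegionMap::LockHolder holder;  // bucket_table_ is GUARDED_BY(lock_)
  MemoryRegionMap::IterateBuckets<TotalStats*>(SumBucket, &totals);
  // totals now aggregates every recorded mmap call stack.
}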
476
3party/gperftools/src/mmap_hook.cc
Normal file
@ -0,0 +1,476 @@
|
||||
/* -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
* Copyright (c) 2023, gperftools Contributors
|
||||
* All rights reserved.
|
||||
*
|
||||
* Redistribution and use in source and binary forms, with or without
|
||||
* modification, are permitted provided that the following conditions are
|
||||
* met:
|
||||
*
|
||||
* * Redistributions of source code must retain the above copyright
|
||||
* notice, this list of conditions and the following disclaimer.
|
||||
* * Redistributions in binary form must reproduce the above
|
||||
* copyright notice, this list of conditions and the following disclaimer
|
||||
* in the documentation and/or other materials provided with the
|
||||
* distribution.
|
||||
* * Neither the name of Google Inc. nor the names of its
|
||||
* contributors may be used to endorse or promote products derived from
|
||||
* this software without specific prior written permission.
|
||||
*
|
||||
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
*/
|
||||
|
||||
#include <config.h>
|
||||
|
||||
#include "mmap_hook.h"
|
||||
|
||||
#include "base/spinlock.h"
|
||||
#include "base/logging.h"
|
||||
|
||||
#include <atomic>
|
||||
|
||||
#if HAVE_SYS_SYSCALL_H
|
||||
#include <sys/syscall.h>
|
||||
#endif
|
||||
|
||||
// Disable the glibc prototype of mremap(), as older versions of the
|
||||
// system headers define this function with only four arguments,
|
||||
// whereas newer versions allow an optional fifth argument:
|
||||
#ifdef HAVE_MMAP
|
||||
# define mremap glibc_mremap
|
||||
# include <sys/mman.h>
|
||||
# ifndef MAP_ANONYMOUS
|
||||
# define MAP_ANONYMOUS MAP_ANON
|
||||
# endif
|
||||
#include <sys/types.h>
|
||||
# undef mremap
|
||||
#endif
|
||||
|
||||
// __THROW is defined in glibc systems. It means, counter-intuitively,
|
||||
// "This function will never throw an exception." It's an optional
|
||||
// optimization tool, but we may need to use it to match glibc prototypes.
|
||||
#ifndef __THROW // I guess we're not on a glibc system
|
||||
# define __THROW // __THROW is just an optimization, so ok to make it ""
|
||||
#endif
|
||||
|
||||
// Used in initial hooks to call into heap checker
// initialization. Defined empty and weak inside malloc_hooks; the
// proper definition is in heap_checker.cc.
|
||||
extern "C" int MallocHook_InitAtFirstAllocation_HeapLeakChecker();
|
||||
|
||||
namespace tcmalloc {
|
||||
|
||||
namespace {
|
||||
|
||||
struct MappingHookDescriptor {
|
||||
MappingHookDescriptor(MMapEventFn fn) : fn(fn) {}
|
||||
|
||||
const MMapEventFn fn;
|
||||
|
||||
std::atomic<bool> inactive{false};
|
||||
std::atomic<MappingHookDescriptor*> next;
|
||||
};
|
||||
|
||||
static_assert(sizeof(MappingHookDescriptor) ==
|
||||
(sizeof(MappingHookSpace) - offsetof(MappingHookSpace, storage)), "");
|
||||
static_assert(alignof(MappingHookDescriptor) == alignof(MappingHookSpace), "");
|
||||
|
||||
class MappingHooks {
|
||||
public:
|
||||
MappingHooks(base::LinkerInitialized) {}
|
||||
|
||||
static MappingHookDescriptor* SpaceToDesc(MappingHookSpace* space) {
|
||||
return reinterpret_cast<MappingHookDescriptor*>(space->storage);
|
||||
}
|
||||
|
||||
void Add(MappingHookSpace *space, MMapEventFn fn) {
|
||||
MappingHookDescriptor* desc = SpaceToDesc(space);
|
||||
if (space->initialized) {
|
||||
desc->inactive.store(false);
|
||||
return;
|
||||
}
|
||||
|
||||
space->initialized = true;
|
||||
new (desc) MappingHookDescriptor(fn);
|
||||
|
||||
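// Classic lock-free prepend: re-point desc->next at the freshly observed
// head and retry the compare-exchange until no concurrent Add() races us.
// Nodes are never unlinked (Remove() only marks them inactive), so the
// list stays valid for lock-free readers in InvokeAll().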
MappingHookDescriptor* next_candidate = list_head_.load(std::memory_order_relaxed);
|
||||
do {
|
||||
desc->next.store(next_candidate, std::memory_order_relaxed);
|
||||
} while (!list_head_.compare_exchange_strong(next_candidate, desc));
|
||||
}
|
||||
|
||||
void Remove(MappingHookSpace* space) {
|
||||
RAW_CHECK(space->initialized, "");
|
||||
SpaceToDesc(space)->inactive.store(true);
|
||||
}
|
||||
|
||||
void InvokeAll(const MappingEvent& evt) {
|
||||
if (!ran_initial_hooks_.load(std::memory_order_relaxed)) {
|
||||
bool already_ran = ran_initial_hooks_.exchange(true, std::memory_order_seq_cst);
|
||||
if (!already_ran) {
|
||||
MallocHook_InitAtFirstAllocation_HeapLeakChecker();
|
||||
}
|
||||
}
|
||||
|
||||
std::atomic<MappingHookDescriptor*> *place = &list_head_;
|
||||
while (MappingHookDescriptor* desc = place->load(std::memory_order_acquire)) {
|
||||
place = &desc->next;
|
||||
if (!desc->inactive) {
|
||||
desc->fn(evt);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void InvokeSbrk(void* result, intptr_t increment) {
|
||||
MappingEvent evt;
|
||||
evt.is_sbrk = 1;
|
||||
if (increment > 0) {
|
||||
evt.after_address = result;
|
||||
evt.after_length = increment;
|
||||
evt.after_valid = 1;
|
||||
} else {
|
||||
intptr_t res_addr = reinterpret_cast<uintptr_t>(result);
|
||||
intptr_t new_brk = res_addr + increment;
|
||||
evt.before_address = reinterpret_cast<void*>(new_brk);
|
||||
evt.before_length = -increment;
|
||||
evt.before_valid = 1;
|
||||
}
|
||||
|
||||
InvokeAll(evt);
|
||||
}
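// For instance (illustrative): sbrk(4096) returning old break B yields
// {after_address = B, after_length = 4096}; sbrk(-4096) returning B yields
// {before_address = B - 4096, before_length = 4096}, i.e. the chunk that
// just left the address space.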
|
||||
|
||||
private:
|
||||
std::atomic<MappingHookDescriptor*> list_head_;
|
||||
std::atomic<bool> ran_initial_hooks_;
|
||||
} mapping_hooks{base::LINKER_INITIALIZED};
|
||||
|
||||
} // namespace
|
||||
|
||||
void HookMMapEvents(MappingHookSpace* place, MMapEventFn callback) {
|
||||
mapping_hooks.Add(place, callback);
|
||||
}
|
||||
|
||||
void UnHookMMapEvents(MappingHookSpace* place) {
|
||||
mapping_hooks.Remove(place);
|
||||
}
|
||||
|
||||
} // namespace tcmalloc
|
||||
|
||||
#if defined(__linux__) && HAVE_SYS_SYSCALL_H
|
||||
static void* do_sys_mmap(long sysnr, void* start, size_t length, int prot, int flags, int fd, long offset) {
|
||||
#if defined(__s390__)
|
||||
long args[6] = {
|
||||
(long)start, (long)length,
|
||||
(long)prot, (long)flags, (long)fd, (long)offset };
|
||||
return reinterpret_cast<void*>(syscall(sysnr, args));
|
||||
#else
|
||||
return reinterpret_cast<void*>(
|
||||
syscall(sysnr, reinterpret_cast<uintptr_t>(start), length, prot, flags, fd, offset));
|
||||
#endif
|
||||
}
|
||||
|
||||
static void* do_mmap(void* start, size_t length, int prot, int flags, int fd, int64_t offset) {
|
||||
#ifdef SYS_mmap2
|
||||
static int pagesize = 0;
|
||||
if (!pagesize) {
|
||||
pagesize = getpagesize();
|
||||
}
|
||||
if ((offset & (pagesize - 1))) {
|
||||
errno = EINVAL;
|
||||
return MAP_FAILED;
|
||||
}
|
||||
offset /= pagesize;
|
||||
|
||||
#if !defined(_LP64) && !defined(__x86_64__)
|
||||
// 32-bit and not x32 (which has "honest" 64-bit syscall args)
|
||||
uintptr_t truncated_offset = offset;
|
||||
// This checks for an offset whose page number still does not fit
// into the 32-bit pgoff argument.
|
||||
if (static_cast<int64_t>(truncated_offset) != offset) {
|
||||
errno = EINVAL;
|
||||
return MAP_FAILED;
|
||||
}
|
||||
#else
|
||||
int64_t truncated_offset = offset;
|
||||
#endif
|
||||
return do_sys_mmap(SYS_mmap2, start, length, prot, flags, fd, truncated_offset);
|
||||
#else
|
||||
|
||||
return do_sys_mmap(SYS_mmap, start, length, prot, flags, fd, offset);
|
||||
#endif
|
||||
}
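// For example (illustrative): with 4 KiB pages, do_mmap(..., offset = 8192)
// issues SYS_mmap2 with pgoff = 2, while a misaligned offset such as 4097
// fails early with EINVAL, matching the kernel's page-alignment rule.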
|
||||
|
||||
#define DEFINED_DO_MMAP
|
||||
|
||||
#endif // __linux__
|
||||
|
||||
// Note: we don't risk syscall-ing mmap with a 64-bit off_t on
// 32-bit BSDs.
|
||||
#if defined(__FreeBSD__) && defined(_LP64) && HAVE_SYS_SYSCALL_H
|
||||
static void* do_mmap(void* start, size_t length, int prot, int flags, int fd, int64_t offset) {
|
||||
// BSDs need __syscall to deal with 64-bit args
|
||||
return reinterpret_cast<void*>(__syscall(SYS_mmap, start, length, prot, flags, fd, offset));
|
||||
}
|
||||
|
||||
#define DEFINED_DO_MMAP
|
||||
#endif // 64-bit FreeBSD
|
||||
|
||||
#ifdef DEFINED_DO_MMAP
|
||||
|
||||
static inline ATTRIBUTE_ALWAYS_INLINE
|
||||
void* do_mmap_with_hooks(void* start, size_t length, int prot, int flags, int fd, int64_t offset) {
|
||||
void* result = do_mmap(start, length, prot, flags, fd, offset);
|
||||
if (result == MAP_FAILED) {
|
||||
return result;
|
||||
}
|
||||
|
||||
tcmalloc::MappingEvent evt;
|
||||
evt.before_address = start;
|
||||
evt.after_address = result;
|
||||
evt.after_length = length;
|
||||
evt.after_valid = 1;
|
||||
evt.file_fd = fd;
|
||||
evt.file_off = offset;
|
||||
evt.file_valid = 1;
|
||||
evt.flags = flags;
|
||||
evt.prot = prot;
|
||||
|
||||
tcmalloc::mapping_hooks.InvokeAll(evt);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
static int do_munmap(void* start, size_t length) {
|
||||
return syscall(SYS_munmap, start, length);
|
||||
}
|
||||
#endif // DEFINED_DO_MMAP
|
||||
|
||||
|
||||
// On systems where we know how, we override mmap/munmap/mremap/sbrk
|
||||
// to provide support for calling the related hooks (in addition,
|
||||
// of course, to doing what these functions normally do).
|
||||
|
||||
// Some Linux libcs already have the "64-bit off_t future" on by default
// and ship with native 64-bit off_t-s, musl being one example. We cannot
// rule out glibc changing defaults in the future, somehow, or people
// introducing more 32-bit systems with 64-bit off_t (x32 already being
// one). So we check for the case of a 32-bit system that has a wide off_t.
|
||||
//
|
||||
// Note, it would be nice to test some define that is available
|
||||
// everywhere when off_t is 64-bit, but sadly stuff isn't always
|
||||
// consistent. So we detect 32-bit system that doesn't have
|
||||
// _POSIX_V7_ILP32_OFF32 set to 1, which looks less robust than we'd
|
||||
// like. But from some tests and code inspection this check seems to
|
||||
// cover glibc, musl, uclibc and bionic.
|
||||
#if defined(__linux__) && (defined(_LP64) || (!defined(_POSIX_V7_ILP32_OFF32) || _POSIX_V7_ILP32_OFF32 < 0))
|
||||
#define GOOD_LINUX_SYSTEM 1
|
||||
#else
|
||||
#define GOOD_LINUX_SYSTEM 0
|
||||
#endif
|
||||
|
||||
#if defined(DEFINED_DO_MMAP) && (!defined(__linux__) || GOOD_LINUX_SYSTEM)
|
||||
// Simple case for 64-bit kernels or 32-bit systems that have native
// 64-bit off_t. On all those systems there are no off_t complications.
|
||||
static_assert(sizeof(int64_t) == sizeof(off_t), "");
|
||||
|
||||
// We still export mmap64 just in case. Linux libcs tend to have it,
// but since off_t is 64-bit here, the two are identical.
// Also, we can safely assume a gcc-like compiler and ELF.
|
||||
|
||||
#undef mmap64
|
||||
#undef mmap
|
||||
|
||||
extern "C" void* mmap64(void* start, size_t length, int prot, int flags, int fd, off_t off)
|
||||
__THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
extern "C" void* mmap(void* start, size_t length, int prot, int flags, int fd, off_t off)
|
||||
__THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
|
||||
void* mmap64(void* start, size_t length, int prot, int flags, int fd, off_t off) __THROW {
|
||||
return do_mmap_with_hooks(start, length, prot, flags, fd, off);
|
||||
}
|
||||
void* mmap(void* start, size_t length, int prot, int flags, int fd, off_t off) __THROW {
|
||||
return do_mmap_with_hooks(start, length, prot, flags, fd, off);
|
||||
}
|
||||
|
||||
#define HOOKED_MMAP
|
||||
|
||||
#elif defined(DEFINED_DO_MMAP) && defined(__linux__) && !GOOD_LINUX_SYSTEM
|
||||
// Linuxes with 32-bit off_t. We're being careful with mmap64 being
|
||||
// 64-bit and mmap being 32-bit.
|
||||
|
||||
static_assert(sizeof(int32_t) == sizeof(off_t), "");
|
||||
|
||||
extern "C" void* mmap64(void* start, size_t length, int prot, int flags, int fd, int64_t off)
|
||||
__THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
extern "C" void* mmap(void* start, size_t length, int prot, int flags, int fd, off_t off)
|
||||
__THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
|
||||
void* mmap(void *start, size_t length, int prot, int flags, int fd, off_t off) __THROW {
|
||||
return do_mmap_with_hooks(start, length, prot, flags, fd, off);
|
||||
}
|
||||
|
||||
void* mmap64(void *start, size_t length, int prot, int flags, int fd, int64_t off) __THROW {
|
||||
return do_mmap_with_hooks(start, length, prot, flags, fd, off);
|
||||
}
|
||||
|
||||
#define HOOKED_MMAP
|
||||
|
||||
#endif // Linux/32-bit off_t case
|
||||
|
||||
|
||||
#ifdef HOOKED_MMAP
|
||||
|
||||
extern "C" int munmap(void* start, size_t length) __THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
int munmap(void* start, size_t length) __THROW {
|
||||
int result = tcmalloc::DirectMUnMap(/* invoke_hooks=*/ false, start, length);
|
||||
if (result < 0) {
|
||||
return result;
|
||||
}
|
||||
|
||||
tcmalloc::MappingEvent evt;
|
||||
evt.before_address = start;
|
||||
evt.before_length = length;
|
||||
evt.before_valid = 1;
|
||||
|
||||
tcmalloc::mapping_hooks.InvokeAll(evt);
|
||||
|
||||
return result;
|
||||
}
|
||||
#else // !HOOKED_MMAP
|
||||
// No mmap/munmap interceptions. But we still provide (internal) DirectXYZ APIs.
|
||||
#define do_mmap mmap
|
||||
#define do_munmap munmap
|
||||
#endif
|
||||
|
||||
tcmalloc::DirectAnonMMapResult tcmalloc::DirectAnonMMap(bool invoke_hooks, size_t length) {
|
||||
tcmalloc::DirectAnonMMapResult result;
|
||||
if (invoke_hooks) {
|
||||
result.addr = mmap(nullptr, length, PROT_READ|PROT_WRITE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
|
||||
} else {
|
||||
result.addr = do_mmap(nullptr, length, PROT_READ|PROT_WRITE, MAP_ANONYMOUS|MAP_PRIVATE, -1, 0);
|
||||
}
|
||||
result.success = (result.addr != MAP_FAILED);
|
||||
return result;
|
||||
}
|
||||
|
||||
int tcmalloc::DirectMUnMap(bool invoke_hooks, void *start, size_t length) {
|
||||
if (invoke_hooks) {
|
||||
return munmap(start, length);
|
||||
}
|
||||
|
||||
return do_munmap(start, length);
|
||||
}
|
||||
|
||||
#if __linux__
|
||||
extern "C" void* mremap(void* old_addr, size_t old_size, size_t new_size,
|
||||
int flags, ...) __THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
// We only handle mremap on Linux so far.
|
||||
void* mremap(void* old_addr, size_t old_size, size_t new_size,
|
||||
int flags, ...) __THROW {
|
||||
va_list ap;
|
||||
va_start(ap, flags);
|
||||
void *new_address = va_arg(ap, void *);
|
||||
va_end(ap);
|
||||
void* result = (void*)syscall(SYS_mremap, old_addr, old_size, new_size, flags,
|
||||
new_address);
|
||||
|
||||
if (result != MAP_FAILED) {
|
||||
tcmalloc::MappingEvent evt;
|
||||
evt.before_address = old_addr;
|
||||
evt.before_length = old_size;
|
||||
evt.before_valid = 1;
|
||||
evt.after_address = result;
|
||||
evt.after_length = new_size;
|
||||
evt.after_valid = 1;
|
||||
evt.flags = flags;
|
||||
|
||||
tcmalloc::mapping_hooks.InvokeAll(evt);
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
#if defined(__linux__) && HAVE___SBRK
|
||||
// glibc's version:
|
||||
extern "C" void* __sbrk(intptr_t increment);
|
||||
|
||||
extern "C" void* sbrk(intptr_t increment) __THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
|
||||
void* sbrk(intptr_t increment) __THROW {
|
||||
void *result = __sbrk(increment);
|
||||
if (increment == 0 || result == reinterpret_cast<void*>(static_cast<intptr_t>(-1))) {
|
||||
return result;
|
||||
}
|
||||
|
||||
tcmalloc::mapping_hooks.InvokeSbrk(result, increment);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
#define HOOKED_SBRK
|
||||
|
||||
#endif
|
||||
|
||||
#if defined(__FreeBSD__) && defined(_LP64)
|
||||
extern "C" void* sbrk(intptr_t increment) __THROW ATTRIBUTE_SECTION(malloc_hook);
|
||||
|
||||
void* sbrk(intptr_t increment) __THROW {
|
||||
uintptr_t curbrk = __syscall(SYS_break, nullptr);
|
||||
uintptr_t badbrk = static_cast<uintptr_t>(static_cast<intptr_t>(-1));
|
||||
if (curbrk == badbrk) {
|
||||
nomem:
|
||||
errno = ENOMEM;
|
||||
return reinterpret_cast<void*>(badbrk);
|
||||
}
|
||||
|
||||
if (increment == 0) {
|
||||
return reinterpret_cast<void*>(curbrk);
|
||||
}
|
||||
|
||||
if (increment > 0) {
|
||||
if (curbrk + static_cast<uintptr_t>(increment) < curbrk) {
|
||||
goto nomem;
|
||||
}
|
||||
} else {
|
||||
if (curbrk + static_cast<uintptr_t>(increment) > curbrk) {
|
||||
goto nomem;
|
||||
}
|
||||
}
|
||||
|
||||
if (brk(reinterpret_cast<void*>(curbrk + increment)) < 0) {
|
||||
goto nomem;
|
||||
}
|
||||
|
||||
auto result = reinterpret_cast<void*>(curbrk);
|
||||
tcmalloc::mapping_hooks.InvokeSbrk(result, increment);
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
#define HOOKED_SBRK
|
||||
|
||||
#endif
|
||||
|
||||
namespace tcmalloc {
|
||||
#ifdef HOOKED_MMAP
|
||||
const bool mmap_hook_works = true;
|
||||
#else
|
||||
const bool mmap_hook_works = false;
|
||||
#endif
|
||||
|
||||
#ifdef HOOKED_SBRK
|
||||
const bool sbrk_hook_works = true;
|
||||
#else
|
||||
const bool sbrk_hook_works = false;
|
||||
#endif
|
||||
} // namespace tcmalloc
|
128
3party/gperftools/src/mmap_hook.h
Normal file
@ -0,0 +1,128 @@
/* -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
 * Copyright (c) 2023, gperftools Contributors
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

// mmap_hook.h holds a strictly non-public API for hooking mmap/sbrk
// events, as well as for invoking mmap/munmap with the ability to bypass
// hooks (i.e. for low_level_alloc).
#ifndef MMAP_HOOK_H
#define MMAP_HOOK_H

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#include "base/basictypes.h"

namespace tcmalloc {

struct DirectAnonMMapResult {
  void* addr;
  bool success;
};

// DirectAnonMMap does mmap of r+w anonymous memory, optionally
// bypassing mmap hooks.
ATTRIBUTE_VISIBILITY_HIDDEN DirectAnonMMapResult DirectAnonMMap(bool invoke_hooks, size_t length);
// DirectMUnMap does munmap of the given region, optionally bypassing mmap hooks.
ATTRIBUTE_VISIBILITY_HIDDEN int DirectMUnMap(bool invoke_hooks, void* start, size_t length);

// These are used by tests to see which parts we think should work.
extern ATTRIBUTE_VISIBILITY_HIDDEN const bool mmap_hook_works;
extern ATTRIBUTE_VISIBILITY_HIDDEN const bool sbrk_hook_works;

// MMapEventFn gets this struct with all the details of an
// mmap/munmap/mremap/sbrk event.
struct MappingEvent {
  MappingEvent() {
    memset(this, 0, sizeof(*this));
  }

  // before_XXX fields describe the address space chunk that was removed
  // from the address space (say, via munmap or mremap).
  void* before_address;
  size_t before_length;

  // after_XXX fields describe the address space chunk that was added to
  // the address space.
  void* after_address;
  size_t after_length;

  // This group of fields gets populated from mmap's file, flags and prot
  // arguments.
  int prot;
  int flags;
  int file_fd;
  int64_t file_off;

  unsigned after_valid:1;
  unsigned before_valid:1;
  unsigned file_valid:1;
  unsigned is_sbrk:1;
};

// Pass this to the Hook/Unhook functions below. Note, the nature of the
// implementation requires that this chunk of memory remain valid even
// after unhook. So the typical use-case is to place it in global
// variable storage.
//
// All fields are private.
class MappingHookSpace {
 public:
  constexpr MappingHookSpace() = default;

  bool initialized = false;

  static constexpr size_t kSize = sizeof(void*) * 3;
  alignas(alignof(void*)) char storage[kSize] = {};
};

using MMapEventFn = void (*)(const MappingEvent& evt);

// HookMMapEvents adds a hook for mmap events, using the given place to
// store relevant metadata (linked list membership etc).
//
// It does no memory allocation and is safe to be called from hooks of all kinds.
ATTRIBUTE_VISIBILITY_HIDDEN void HookMMapEvents(MappingHookSpace* place, MMapEventFn callback);

// UnHookMMapEvents undoes the effect of HookMMapEvents. This one is also
// entirely safe to call from anywhere, including from inside
// MMapEventFn invocations.
//
// As noted on MappingHookSpace, the place **must not** be deallocated or
// reused for anything even after unhook. This requirement keeps the
// implementation simple enough and fits our internal usage use-case
// fine.
ATTRIBUTE_VISIBILITY_HIDDEN void UnHookMMapEvents(MappingHookSpace* place);

}  // namespace tcmalloc

#endif  // MMAP_HOOK_H
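To make the registration contract concrete, here is a minimal usage sketch (illustrative only; LogMapping, EnableMappingLog, DisableMappingLog and the static hook space are hypothetical names, not part of the header). The hook space lives in static storage, as the comments above require, and fprintf merely stands in for whatever allocation-free logging a real hook would use:

#include "mmap_hook.h"

#include <stdio.h>

static tcmalloc::MappingHookSpace log_hook_space;

static void LogMapping(const tcmalloc::MappingEvent& evt) {
  if (evt.after_valid) {
    fprintf(stderr, "mapped %zu bytes at %p\n", evt.after_length, evt.after_address);
  }
  if (evt.before_valid) {
    fprintf(stderr, "unmapped %zu bytes at %p\n", evt.before_length, evt.before_address);
  }
}

void EnableMappingLog() {
  tcmalloc::HookMMapEvents(&log_hook_space, &LogMapping);
}

void DisableMappingLog() {
  // Safe even from inside a hook; log_hook_space must stay alive forever.
  tcmalloc::UnHookMMapEvents(&log_hook_space);
}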
214
3party/gperftools/src/packed-cache-inl.h
Normal file
@ -0,0 +1,214 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
// Copyright (c) 2007, Google Inc.
|
||||
// All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// ---
|
||||
// Author: Geoff Pike
|
||||
//
|
||||
// This file provides a minimal cache that can hold a <key, value> pair
|
||||
// with little if any wasted space. The types of the key and value
|
||||
// must be unsigned integral types or at least have unsigned semantics
|
||||
// for >>, casting, and similar operations.
|
||||
//
|
||||
// Synchronization is not provided. However, the cache is implemented
|
||||
// as an array of cache entries whose type is chosen at compile time.
|
||||
// If a[i] is atomic on your hardware for the chosen array type then
|
||||
// raciness will not necessarily lead to bugginess. The cache entries
|
||||
// must be large enough to hold a partial key and a value packed
|
||||
// together. The partial keys are bit strings of length
|
||||
// kKeybits - kHashbits, and the values are bit strings of length kValuebits.
|
||||
//
|
||||
// In an effort to use minimal space, every cache entry represents
|
||||
// some <key, value> pair; the class provides no way to mark a cache
|
||||
// entry as empty or uninitialized. In practice, you may want to have
|
||||
// reserved keys or values to get around this limitation. For example, in
|
||||
// tcmalloc's PageID-to-sizeclass cache, a value of 0 is used as
|
||||
// "unknown sizeclass."
|
||||
//
|
||||
// Usage Considerations
|
||||
// --------------------
|
||||
//
|
||||
// kHashbits controls the size of the cache. The best value for
|
||||
// kHashbits will of course depend on the application. Perhaps try
|
||||
// tuning the value of kHashbits by measuring different values on your
|
||||
// favorite benchmark. Also remember not to be a pig; other
|
||||
// programs that need resources may suffer if you are.
|
||||
//
|
||||
// The main uses for this class will be when performance is
|
||||
// critical and there's a convenient type to hold the cache's
|
||||
// entries. As described above, the number of bits required
|
||||
// for a cache entry is (kKeybits - kHashbits) + kValuebits. Suppose
|
||||
// kKeybits + kValuebits is 43. Then it probably makes sense to
|
||||
// choose kHashbits >= 11 so that cache entries fit in a uint32.
|
||||
//
|
||||
// On the other hand, suppose kKeybits = kValuebits = 64. Then
|
||||
// using this class may be less worthwhile. You'll probably
|
||||
// be using 128 bits for each entry anyway, so maybe just pick
|
||||
// a hash function, H, and use an array indexed by H(key):
|
||||
// void Put(K key, V value) { a_[H(key)] = pair<K, V>(key, value); }
|
||||
// V GetOrDefault(K key, V default) { const pair<K, V> &p = a_[H(key)]; ... }
|
||||
// etc.
|
||||
//
|
||||
// Further Details
|
||||
// ---------------
|
||||
//
|
||||
// For caches used only by one thread, the following is true:
|
||||
// 1. For a cache c,
|
||||
// (c.Put(key, value), c.GetOrDefault(key, 0)) == value
|
||||
// and
|
||||
// (c.Put(key, value), <...>, c.GetOrDefault(key, 0)) == value
|
||||
// if the elided code contains no c.Put calls.
|
||||
//
|
||||
// 2. Has(key) will return false if no <key, value> pair with that key
|
||||
// has ever been Put. However, a newly initialized cache will have
|
||||
// some <key, value> pairs already present. When you create a new
|
||||
// cache, you must specify an "initial value." The initialization
|
||||
// procedure is equivalent to Clear(initial_value), which is
|
||||
// equivalent to Put(k, initial_value) for all keys k from 0 to
|
||||
// 2^kHashbits - 1.
|
||||
//
|
||||
// 3. If key and key' differ then the only way Put(key, value) may
|
||||
// cause Has(key') to change is that Has(key') may change from true to
|
||||
// false. Furthermore, a Put() call that doesn't change Has(key')
|
||||
// doesn't change GetOrDefault(key', ...) either.
|
||||
//
|
||||
// Implementation details:
|
||||
//
|
||||
// This is a direct-mapped cache with 2^kHashbits entries; the hash
|
||||
// function simply takes the low bits of the key. We store whole keys
|
||||
// if a whole key plus a whole value fits in an entry. Otherwise, an
|
||||
// entry is the high bits of a key and a value, packed together.
|
||||
// E.g., a 20 bit key and a 7 bit value only require a uint16 for each
|
||||
// entry if kHashbits >= 11.
|
||||
//
|
||||
// Alternatives to this scheme will be added as needed.
|
||||
|
||||
#ifndef TCMALLOC_PACKED_CACHE_INL_H_
|
||||
#define TCMALLOC_PACKED_CACHE_INL_H_
|
||||
|
||||
#include "config.h"
|
||||
#include <stddef.h> // for size_t
|
||||
#include <stdint.h> // for uintptr_t
|
||||
#include "base/basictypes.h"
|
||||
#include "common.h"
|
||||
#include "internal_logging.h"
|
||||
|
||||
// A safe way of doing "(1 << n) - 1" -- without worrying about overflow
|
||||
// Note this will all be resolved to a constant expression at compile-time
|
||||
#define N_ONES_(IntType, N) \
|
||||
( (N) == 0 ? 0 : ((static_cast<IntType>(1) << ((N)-1))-1 + \
|
||||
(static_cast<IntType>(1) << ((N)-1))) )
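// E.g. N_ONES_(uint32, 3) == 7 (binary 111). Splitting the shift as
// (1 << (N-1)) - 1 + (1 << (N-1)) avoids the undefined behavior that
// (1 << N) - 1 would hit when N equals the full width of IntType.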
|
||||
|
||||
// The types K and V provide upper bounds on the number of valid keys
|
||||
// and values, but we explicitly require the keys to be less than
|
||||
// 2^kKeybits and the values to be less than 2^kValuebits. The size
|
||||
// of the table is controlled by kHashbits, and the type of each entry
|
||||
// in the cache is uintptr_t (native machine word). See also the big
|
||||
// comment at the top of the file.
|
||||
template <int kKeybits>
|
||||
class PackedCache {
|
||||
public:
|
||||
typedef uintptr_t T;
|
||||
typedef uintptr_t K;
|
||||
typedef uint32 V;
|
||||
#ifdef TCMALLOC_SMALL_BUT_SLOW
|
||||
// Use a smaller size-map cache when running in the small-memory mode.
|
||||
static const int kHashbits = 12;
|
||||
#else
|
||||
static const int kHashbits = 16;
|
||||
#endif
|
||||
static const int kValuebits = 7;
|
||||
// one bit after value bits
|
||||
static const int kInvalidMask = 0x80;
|
||||
|
||||
explicit PackedCache() {
|
||||
COMPILE_ASSERT(kKeybits + kValuebits + 1 <= 8 * sizeof(T), use_whole_keys);
|
||||
COMPILE_ASSERT(kHashbits <= kKeybits, hash_function);
|
||||
COMPILE_ASSERT(kHashbits >= kValuebits + 1, small_values_space);
|
||||
Clear();
|
||||
}
|
||||
|
||||
bool TryGet(K key, V* out) const {
|
||||
// As with other code in this class, we touch array_ as few times
|
||||
// as we can. Assuming entries are read atomically then certain
|
||||
// races are harmless.
|
||||
ASSERT(key == (key & kKeyMask));
|
||||
T hash = Hash(key);
|
||||
T expected_entry = key;
|
||||
expected_entry &= ~N_ONES_(T, kHashbits);
|
||||
T entry = array_[hash];
|
||||
entry ^= expected_entry;
|
||||
if (PREDICT_FALSE(entry >= (1 << kValuebits))) {
|
||||
return false;
|
||||
}
|
||||
*out = static_cast<V>(entry);
|
||||
return true;
|
||||
}
|
||||
|
||||
void Clear() {
|
||||
// sets 'invalid' bit in every byte, include value byte
|
||||
memset(const_cast<T* >(array_), kInvalidMask, sizeof(array_));
|
||||
}
|
||||
|
||||
void Put(K key, V value) {
|
||||
ASSERT(key == (key & kKeyMask));
|
||||
ASSERT(value == (value & kValueMask));
|
||||
array_[Hash(key)] = KeyToUpper(key) | value;
|
||||
}
|
||||
|
||||
void Invalidate(K key) {
|
||||
ASSERT(key == (key & kKeyMask));
|
||||
array_[Hash(key)] = KeyToUpper(key) | kInvalidMask;
|
||||
}
|
||||
|
||||
private:
|
||||
// we just wipe all hash bits out of key. I.e. clear lower
|
||||
// kHashbits. We rely on compiler knowing value of Hash(k).
|
||||
static T KeyToUpper(K k) {
|
||||
return static_cast<T>(k) ^ Hash(k);
|
||||
}
|
||||
|
||||
static T Hash(K key) {
|
||||
return static_cast<T>(key) & N_ONES_(size_t, kHashbits);
|
||||
}
|
||||
|
||||
// For masking a K.
|
||||
static const K kKeyMask = N_ONES_(K, kKeybits);
|
||||
|
||||
// For masking a V or a T.
|
||||
static const V kValueMask = N_ONES_(V, kValuebits);
|
||||
|
||||
// array_ is the cache. Its elements are volatile because any
|
||||
// thread can write any array element at any time.
|
||||
volatile T array_[1 << kHashbits];
|
||||
};
|
||||
|
||||
#undef N_ONES_
|
||||
|
||||
#endif // TCMALLOC_PACKED_CACHE_INL_H_
|
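A brief usage sketch of the class above (illustrative only; Example is a hypothetical function). tcmalloc instantiates roughly this shape for its PageID-to-sizeclass cache, e.g. PackedCache<35> on a typical 64-bit system with 48-bit addresses and 8 KiB pages:

PackedCache<35> cache;  // 35-bit keys; kValuebits = 7, kHashbits = 16

void Example(uintptr_t page_id) {   // page_id must be < 2^35 here
  cache.Put(page_id, 42);           // remember sizeclass 42 for this page
  uint32 sizeclass;
  if (cache.TryGet(page_id, &sizeclass)) {
    // Hit: sizeclass == 42, unless a racing Put/Invalidate replaced it.
  }
  cache.Invalidate(page_id);        // TryGet now fails for this key
}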
830
3party/gperftools/src/page_heap.cc
Normal file
@ -0,0 +1,830 @@
|
||||
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
|
||||
// Copyright (c) 2008, Google Inc.
|
||||
// All rights reserved.
|
||||
//
|
||||
// Redistribution and use in source and binary forms, with or without
|
||||
// modification, are permitted provided that the following conditions are
|
||||
// met:
|
||||
//
|
||||
// * Redistributions of source code must retain the above copyright
|
||||
// notice, this list of conditions and the following disclaimer.
|
||||
// * Redistributions in binary form must reproduce the above
|
||||
// copyright notice, this list of conditions and the following disclaimer
|
||||
// in the documentation and/or other materials provided with the
|
||||
// distribution.
|
||||
// * Neither the name of Google Inc. nor the names of its
|
||||
// contributors may be used to endorse or promote products derived from
|
||||
// this software without specific prior written permission.
|
||||
//
|
||||
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
|
||||
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
|
||||
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
|
||||
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
|
||||
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
|
||||
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
|
||||
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
|
||||
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
|
||||
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
|
||||
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
|
||||
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
|
||||
|
||||
// ---
|
||||
// Author: Sanjay Ghemawat <opensource@google.com>
|
||||
|
||||
#include "config.h"
|
||||
|
||||
#include <inttypes.h> // for PRIuPTR
|
||||
#include <errno.h> // for ENOMEM, errno
|
||||
|
||||
#include <algorithm>
|
||||
#include <limits>
|
||||
|
||||
#include "gperftools/malloc_extension.h" // for MallocRange, etc
|
||||
#include "base/basictypes.h"
|
||||
#include "base/commandlineflags.h"
|
||||
#include "internal_logging.h" // for ASSERT, TCMalloc_Printer, etc
|
||||
#include "page_heap_allocator.h" // for PageHeapAllocator
|
||||
#include "static_vars.h" // for Static
|
||||
#include "system-alloc.h" // for TCMalloc_SystemAlloc, etc
|
||||
|
||||
DEFINE_double(tcmalloc_release_rate,
|
||||
EnvToDouble("TCMALLOC_RELEASE_RATE", 1.0),
|
||||
"Rate at which we release unused memory to the system. "
|
||||
"Zero means we never release memory back to the system. "
|
||||
"Increase this flag to return memory faster; decrease it "
|
||||
"to return memory slower. Reasonable rates are in the "
|
||||
"range [0,10]");
|
||||
|
||||
DEFINE_int64(tcmalloc_heap_limit_mb,
|
||||
EnvToInt("TCMALLOC_HEAP_LIMIT_MB", 0),
|
||||
"Limit total size of the process heap to the "
|
||||
"specified number of MiB. "
|
||||
"When we approach the limit the memory is released "
|
||||
"to the system more aggressively (more minor page faults). "
|
||||
"Zero means to allocate as long as system allows.");
|
||||
|
||||
namespace tcmalloc {
|
||||
|
||||
struct SCOPED_LOCKABLE PageHeap::LockingContext {
|
||||
PageHeap * const heap;
|
||||
size_t grown_by = 0;
|
||||
|
||||
explicit LockingContext(PageHeap* heap, SpinLock* lock) EXCLUSIVE_LOCK_FUNCTION(lock)
|
||||
: heap(heap) {
|
||||
lock->Lock();
|
||||
}
|
||||
~LockingContext() UNLOCK_FUNCTION() {
|
||||
heap->HandleUnlock(this);
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
PageHeap::PageHeap(Length smallest_span_size)
|
||||
: smallest_span_size_(smallest_span_size),
|
||||
pagemap_(MetaDataAlloc),
|
||||
scavenge_counter_(0),
|
||||
// Start scavenging at kMaxPages list
|
||||
release_index_(kMaxPages),
|
||||
aggressive_decommit_(false) {
|
||||
COMPILE_ASSERT(kClassSizesMax <= (1 << PageMapCache::kValuebits), valuebits);
|
||||
// smallest_span_size needs to be power of 2.
|
||||
CHECK_CONDITION((smallest_span_size_ & (smallest_span_size_-1)) == 0);
|
||||
for (int i = 0; i < kMaxPages; i++) {
|
||||
DLL_Init(&free_[i].normal);
|
||||
DLL_Init(&free_[i].returned);
|
||||
}
|
||||
}
|
||||
|
||||
Span* PageHeap::SearchFreeAndLargeLists(Length n) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(Check());
|
||||
ASSERT(n > 0);
|
||||
|
||||
// Find first size >= n that has a non-empty list
|
||||
for (Length s = n; s <= kMaxPages; s++) {
|
||||
Span* ll = &free_[s - 1].normal;
|
||||
// If we're lucky, ll is non-empty, meaning it has a suitable span.
|
||||
if (!DLL_IsEmpty(ll)) {
|
||||
ASSERT(ll->next->location == Span::ON_NORMAL_FREELIST);
|
||||
return Carve(ll->next, n);
|
||||
}
|
||||
// Alternatively, maybe there's a usable returned span.
|
||||
ll = &free_[s - 1].returned;
|
||||
if (!DLL_IsEmpty(ll)) {
|
||||
// We did not call EnsureLimit before, to avoid releasing a span
// that would be taken back immediately.
// Calling EnsureLimit here is not very expensive, as it fails only if
// there are no more normal spans (and it fails efficiently),
// or SystemRelease does not work (then there are probably no returned spans).
|
||||
if (EnsureLimit(n)) {
|
||||
// ll may have become empty due to coalescing
|
||||
if (!DLL_IsEmpty(ll)) {
|
||||
ASSERT(ll->next->location == Span::ON_RETURNED_FREELIST);
|
||||
return Carve(ll->next, n);
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
// No luck in free lists, our last chance is in a larger class.
|
||||
return AllocLarge(n); // May be NULL
|
||||
}
|
||||
|
||||
static const size_t kForcedCoalesceInterval = 128*1024*1024;
|
||||
|
||||
Length PageHeap::RoundUpSize(Length n) {
|
||||
Length rounded_n = (n + smallest_span_size_ - 1) & ~(smallest_span_size_ - 1);
|
||||
if (rounded_n < n) {
|
||||
// Overflow happened. So make sure we oom by asking for biggest
|
||||
// amount possible.
|
||||
return std::numeric_limits<Length>::max() & ~(smallest_span_size_ - 1);
|
||||
}
|
||||
|
||||
return rounded_n;
|
||||
}
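// For example (illustrative): with smallest_span_size_ == 8,
// RoundUpSize(13) == (13 + 7) & ~7 == 16 pages, and values that would
// overflow are clamped so the request OOMs instead of wrapping.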
|
||||
|
||||
void PageHeap::HandleUnlock(LockingContext* context) {
|
||||
StackTrace* t = nullptr;
|
||||
if (context->grown_by) {
|
||||
t = Static::stacktrace_allocator()->New();
|
||||
t->size = context->grown_by;
|
||||
}
|
||||
|
||||
lock_.Unlock();
|
||||
|
||||
if (t) {
|
||||
t->depth = GetStackTrace(t->stack, kMaxStackDepth-1, 0);
|
||||
Static::push_growth_stack(t);
|
||||
}
|
||||
}
|
||||
|
||||
Span* PageHeap::NewWithSizeClass(Length n, uint32 sizeclass) {
|
||||
LockingContext context{this, &lock_};
|
||||
|
||||
Span* span = NewLocked(n, &context);
|
||||
if (!span) {
|
||||
return span;
|
||||
}
|
||||
InvalidateCachedSizeClass(span->start);
|
||||
if (sizeclass) {
|
||||
RegisterSizeClass(span, sizeclass);
|
||||
}
|
||||
return span;
|
||||
}
|
||||
|
||||
Span* PageHeap::NewLocked(Length n, LockingContext* context) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(Check());
|
||||
n = RoundUpSize(n);
|
||||
|
||||
Span* result = SearchFreeAndLargeLists(n);
|
||||
if (result != NULL)
|
||||
return result;
|
||||
|
||||
if (stats_.free_bytes != 0 && stats_.unmapped_bytes != 0
|
||||
&& stats_.free_bytes + stats_.unmapped_bytes >= stats_.system_bytes / 4
|
||||
&& (stats_.system_bytes / kForcedCoalesceInterval
|
||||
!= (stats_.system_bytes + (n << kPageShift)) / kForcedCoalesceInterval)) {
|
||||
// We're about to grow the heap, but there are lots of free pages.
// tcmalloc's design decision to keep unmapped and free spans
// separate and never coalesce them means that sometimes there
// can be a free-page span of sufficient size, but it consists of
// "segments" of different types, so the page heap search cannot find
// it. In order to prevent growing the heap and wasting memory in such
// a case, we're going to unmap all free pages, so that all free
// spans are maximally coalesced.
//
// We're also limiting the 'rate' of going into this path to at
// most once per 128 megs of heap growth. Otherwise programs that
// grow the heap frequently (and that means by small amounts) could be
// penalized with a higher count of minor page faults.
|
||||
//
|
||||
// See also large_heap_fragmentation_unittest.cc and
|
||||
// https://github.com/gperftools/gperftools/issues/371
|
||||
ReleaseAtLeastNPages(static_cast<Length>(0x7fffffff));
|
||||
|
||||
// Then try again. If we are forced to grow the heap because of large-
// span fragmentation and not because of the problem described above,
// then at the very least we've just unmapped free but insufficiently
// big large spans back to the OS. So in case of really unlucky memory
// fragmentation we'll be consuming virtual address space, but not real
// memory.
|
||||
result = SearchFreeAndLargeLists(n);
|
||||
if (result != NULL) return result;
|
||||
}
|
||||
|
||||
// Grow the heap and try again.
|
||||
if (!GrowHeap(n, context)) {
|
||||
ASSERT(stats_.unmapped_bytes+ stats_.committed_bytes==stats_.system_bytes);
|
||||
ASSERT(Check());
|
||||
// underlying SysAllocator likely set ENOMEM but we can get here
|
||||
// due to EnsureLimit so we set it here too.
|
||||
//
|
||||
// Setting errno to ENOMEM here allows us to avoid dealing with it
|
||||
// in fast-path.
|
||||
errno = ENOMEM;
|
||||
return NULL;
|
||||
}
|
||||
return SearchFreeAndLargeLists(n);
|
||||
}
|
||||
|
||||
Span* PageHeap::NewAligned(Length n, Length align_pages) {
|
||||
n = RoundUpSize(n);
|
||||
|
||||
// Allocate extra pages and carve off an aligned portion
|
||||
const Length alloc = n + align_pages;
|
||||
if (alloc < n || alloc < align_pages) {
|
||||
// Overflow means we asked for a huge amount, so let's trigger the
// normal OOM handling by asking for enough to trigger OOM.
|
||||
Span* span = New(std::numeric_limits<Length>::max());
|
||||
CHECK_CONDITION(span == nullptr);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
LockingContext context{this, &lock_};
|
||||
|
||||
Span* span = NewLocked(alloc, &context);
|
||||
if (PREDICT_FALSE(span == nullptr)) return nullptr;
|
||||
|
||||
// Skip starting portion so that we end up aligned
|
||||
Length skip = 0;
|
||||
size_t align_bytes = align_pages << kPageShift;
|
||||
while ((((span->start+skip) << kPageShift) & (align_bytes - 1)) != 0) {
|
||||
skip++;
|
||||
}
|
||||
ASSERT(skip < alloc);
|
||||
if (skip > 0) {
|
||||
Span* rest = Split(span, skip);
|
||||
DeleteLocked(span);
|
||||
span = rest;
|
||||
}
|
||||
|
||||
ASSERT(span->length >= n);
|
||||
if (span->length > n) {
|
||||
Span* trailer = Split(span, n);
|
||||
DeleteLocked(trailer);
|
||||
}
|
||||
InvalidateCachedSizeClass(span->start);
|
||||
return span;
|
||||
}
|
||||
|
||||
Span* PageHeap::AllocLarge(Length n) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
Span *best = NULL;
|
||||
Span *best_normal = NULL;
|
||||
|
||||
// Create a Span to use as an upper bound.
|
||||
Span bound;
|
||||
bound.start = 0;
|
||||
bound.length = n;
|
||||
|
||||
// First search the NORMAL spans.
|
||||
SpanSet::iterator place = large_normal_.upper_bound(SpanPtrWithLength(&bound));
|
||||
if (place != large_normal_.end()) {
|
||||
best = place->span;
|
||||
best_normal = best;
|
||||
ASSERT(best->location == Span::ON_NORMAL_FREELIST);
|
||||
}
|
||||
|
||||
// Try to find better fit from RETURNED spans.
|
||||
place = large_returned_.upper_bound(SpanPtrWithLength(&bound));
|
||||
if (place != large_returned_.end()) {
|
||||
Span *c = place->span;
|
||||
ASSERT(c->location == Span::ON_RETURNED_FREELIST);
|
||||
if (best_normal == NULL
|
||||
|| c->length < best->length
|
||||
|| (c->length == best->length && c->start < best->start))
|
||||
best = place->span;
|
||||
}
|
||||
|
||||
if (best == best_normal) {
|
||||
return best == NULL ? NULL : Carve(best, n);
|
||||
}
|
||||
|
||||
// best comes from RETURNED set.
|
||||
|
||||
if (EnsureLimit(n, false)) {
|
||||
return Carve(best, n);
|
||||
}
|
||||
|
||||
if (EnsureLimit(n, true)) {
|
||||
// best could have been destroyed by coalescing.
|
||||
// best_normal is not a best-fit, and it could be destroyed as well.
|
||||
// We retry, the limit is already ensured:
|
||||
return AllocLarge(n);
|
||||
}
|
||||
|
||||
// If best_normal existed, EnsureLimit would have succeeded:
|
||||
ASSERT(best_normal == NULL);
|
||||
// We are not allowed to take best from returned list.
|
||||
return NULL;
|
||||
}
|
||||
|
||||
Span* PageHeap::Split(Span* span, Length n) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(0 < n);
|
||||
ASSERT(n < span->length);
|
||||
ASSERT(span->location == Span::IN_USE);
|
||||
ASSERT(span->sizeclass == 0);
|
||||
|
||||
const int extra = span->length - n;
|
||||
Span* leftover = NewSpan(span->start + n, extra);
|
||||
ASSERT(leftover->location == Span::IN_USE);
|
||||
RecordSpan(leftover);
|
||||
pagemap_.set(span->start + n - 1, span); // Update map from pageid to span
|
||||
span->length = n;
|
||||
|
||||
return leftover;
|
||||
}
|
||||
|
||||
void PageHeap::CommitSpan(Span* span) {
|
||||
++stats_.commit_count;
|
||||
|
||||
TCMalloc_SystemCommit(reinterpret_cast<void*>(span->start << kPageShift),
|
||||
static_cast<size_t>(span->length << kPageShift));
|
||||
stats_.committed_bytes += span->length << kPageShift;
|
||||
stats_.total_commit_bytes += (span->length << kPageShift);
|
||||
}
|
||||
|
||||
bool PageHeap::DecommitSpan(Span* span) {
|
||||
++stats_.decommit_count;
|
||||
|
||||
bool rv = TCMalloc_SystemRelease(reinterpret_cast<void*>(span->start << kPageShift),
|
||||
static_cast<size_t>(span->length << kPageShift));
|
||||
if (rv) {
|
||||
stats_.committed_bytes -= span->length << kPageShift;
|
||||
stats_.total_decommit_bytes += (span->length << kPageShift);
|
||||
}
|
||||
|
||||
return rv;
|
||||
}
|
||||
|
||||
Span* PageHeap::Carve(Span* span, Length n) {
|
||||
ASSERT(n > 0);
|
||||
ASSERT(span->location != Span::IN_USE);
|
||||
const int old_location = span->location;
|
||||
RemoveFromFreeList(span);
|
||||
span->location = Span::IN_USE;
|
||||
|
||||
const int extra = span->length - n;
|
||||
ASSERT(extra >= 0);
|
||||
if (extra > 0) {
|
||||
Span* leftover = NewSpan(span->start + n, extra);
|
||||
leftover->location = old_location;
|
||||
RecordSpan(leftover);
|
||||
|
||||
// The previous span of |leftover| was just split off -- no need to
// coalesce them. The next span of |leftover| was not previously coalesced
// with |span|, i.e. it is NULL or has a location other than |old_location|.
|
||||
#ifndef NDEBUG
|
||||
const PageID p = leftover->start;
|
||||
const Length len = leftover->length;
|
||||
Span* next = GetDescriptor(p+len);
|
||||
ASSERT (next == NULL ||
|
||||
next->location == Span::IN_USE ||
|
||||
next->location != leftover->location);
|
||||
#endif
|
||||
|
||||
PrependToFreeList(leftover); // Skip coalescing - no candidates possible
|
||||
span->length = n;
|
||||
pagemap_.set(span->start + n - 1, span);
|
||||
}
|
||||
ASSERT(Check());
|
||||
if (old_location == Span::ON_RETURNED_FREELIST) {
|
||||
// We need to recommit this address space.
|
||||
CommitSpan(span);
|
||||
}
|
||||
ASSERT(span->location == Span::IN_USE);
|
||||
ASSERT(span->length == n);
|
||||
ASSERT(stats_.unmapped_bytes+ stats_.committed_bytes==stats_.system_bytes);
|
||||
return span;
|
||||
}
|
||||
|
||||
void PageHeap::Delete(Span* span) {
|
||||
SpinLockHolder h(&lock_);
|
||||
DeleteLocked(span);
|
||||
}
|
||||
|
||||
void PageHeap::DeleteLocked(Span* span) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(Check());
|
||||
ASSERT(span->location == Span::IN_USE);
|
||||
ASSERT(span->length > 0);
|
||||
ASSERT(GetDescriptor(span->start) == span);
|
||||
ASSERT(GetDescriptor(span->start + span->length - 1) == span);
|
||||
const Length n = span->length;
|
||||
span->sizeclass = 0;
|
||||
span->sample = 0;
|
||||
span->location = Span::ON_NORMAL_FREELIST;
|
||||
MergeIntoFreeList(span); // Coalesces if possible
|
||||
IncrementalScavenge(n);
|
||||
ASSERT(stats_.unmapped_bytes+ stats_.committed_bytes==stats_.system_bytes);
|
||||
ASSERT(Check());
|
||||
}
|
||||
|
||||
// Given a span we're about to free and another span (still on a free list),
// checks if the 'other' span is mergeable with 'span'. If it is, removes
// the other span from its free list, performs aggressive decommit if
// necessary and returns 'other'. Otherwise 'other' cannot be merged
// and is left untouched; in that case NULL is returned.
|
||||
Span* PageHeap::CheckAndHandlePreMerge(Span* span, Span* other) {
|
||||
if (other == NULL) {
|
||||
return other;
|
||||
}
|
||||
// if we're in aggressive decommit mode and span is decommitted,
|
||||
// then we try to decommit adjacent span.
|
||||
if (aggressive_decommit_ && other->location == Span::ON_NORMAL_FREELIST
|
||||
&& span->location == Span::ON_RETURNED_FREELIST) {
|
||||
bool worked = DecommitSpan(other);
|
||||
if (!worked) {
|
||||
return NULL;
|
||||
}
|
||||
} else if (other->location != span->location) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
RemoveFromFreeList(other);
|
||||
return other;
|
||||
}
|
||||
|
||||
void PageHeap::MergeIntoFreeList(Span* span) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(span->location != Span::IN_USE);
|
||||
|
||||
// Coalesce -- we guarantee that "p" != 0, so no bounds checking
|
||||
// necessary. We do not bother resetting the stale pagemap
|
||||
// entries for the pieces we are merging together because we only
|
||||
// care about the pagemap entries for the boundaries.
|
||||
//
|
||||
// Note: depending on aggressive_decommit_ mode we allow only
|
||||
// similar spans to be coalesced.
|
||||
//
|
||||
// The following applies if aggressive_decommit_ is enabled:
|
||||
//
|
||||
// TODO(jar): "Always decommit" causes some extra calls to commit when we are
|
||||
// called in GrowHeap() during an allocation :-/. We need to eval the cost of
|
||||
// that oscillation, and possibly do something to reduce it.
|
||||
|
||||
// TODO(jar): We need a better strategy for deciding to commit, or decommit,
|
||||
// based on memory usage and free heap sizes.
|
||||
|
||||
const PageID p = span->start;
|
||||
const Length n = span->length;
|
||||
|
||||
if (aggressive_decommit_ && span->location == Span::ON_NORMAL_FREELIST) {
|
||||
if (DecommitSpan(span)) {
|
||||
span->location = Span::ON_RETURNED_FREELIST;
|
||||
}
|
||||
}
|
||||
|
||||
Span* prev = CheckAndHandlePreMerge(span, GetDescriptor(p-1));
|
||||
if (prev != NULL) {
|
||||
// Merge preceding span into this span
|
||||
ASSERT(prev->start + prev->length == p);
|
||||
const Length len = prev->length;
|
||||
DeleteSpan(prev);
|
||||
span->start -= len;
|
||||
span->length += len;
|
||||
pagemap_.set(span->start, span);
|
||||
}
|
||||
Span* next = CheckAndHandlePreMerge(span, GetDescriptor(p+n));
|
||||
if (next != NULL) {
|
||||
// Merge next span into this span
|
||||
ASSERT(next->start == p+n);
|
||||
const Length len = next->length;
|
||||
DeleteSpan(next);
|
||||
span->length += len;
|
||||
pagemap_.set(span->start + span->length - 1, span);
|
||||
}
|
||||
|
||||
PrependToFreeList(span);
|
||||
}
|
||||
|
||||
void PageHeap::PrependToFreeList(Span* span) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(span->location != Span::IN_USE);
|
||||
if (span->location == Span::ON_NORMAL_FREELIST)
|
||||
stats_.free_bytes += (span->length << kPageShift);
|
||||
else
|
||||
stats_.unmapped_bytes += (span->length << kPageShift);
|
||||
|
||||
if (span->length > kMaxPages) {
|
||||
SpanSet *set = &large_normal_;
|
||||
if (span->location == Span::ON_RETURNED_FREELIST)
|
||||
set = &large_returned_;
|
||||
std::pair<SpanSet::iterator, bool> p =
|
||||
set->insert(SpanPtrWithLength(span));
|
||||
ASSERT(p.second); // We never have duplicates since span->start is unique.
|
||||
span->SetSpanSetIterator(p.first);
|
||||
return;
|
||||
}
|
||||
|
||||
SpanList* list = &free_[span->length - 1];
|
||||
if (span->location == Span::ON_NORMAL_FREELIST) {
|
||||
DLL_Prepend(&list->normal, span);
|
||||
} else {
|
||||
DLL_Prepend(&list->returned, span);
|
||||
}
|
||||
}
|
||||
|
||||
void PageHeap::RemoveFromFreeList(Span* span) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
ASSERT(span->location != Span::IN_USE);
|
||||
if (span->location == Span::ON_NORMAL_FREELIST) {
|
||||
stats_.free_bytes -= (span->length << kPageShift);
|
||||
} else {
|
||||
stats_.unmapped_bytes -= (span->length << kPageShift);
|
||||
}
|
||||
if (span->length > kMaxPages) {
|
||||
SpanSet *set = &large_normal_;
|
||||
if (span->location == Span::ON_RETURNED_FREELIST)
|
||||
set = &large_returned_;
|
||||
SpanSet::iterator iter = span->ExtractSpanSetIterator();
|
||||
ASSERT(iter->span == span);
|
||||
ASSERT(set->find(SpanPtrWithLength(span)) == iter);
|
||||
set->erase(iter);
|
||||
} else {
|
||||
DLL_Remove(span);
|
||||
}
|
||||
}
|
||||
|
||||
void PageHeap::IncrementalScavenge(Length n) {
|
||||
ASSERT(lock_.IsHeld());
|
||||
// Fast path; not yet time to release memory
|
||||
scavenge_counter_ -= n;
|
||||
if (scavenge_counter_ >= 0) return; // Not yet time to scavenge
|
||||
|
||||
const double rate = FLAGS_tcmalloc_release_rate;
|
||||
if (rate <= 1e-6) {
|
||||
// Tiny release rate means that releasing is disabled.
|
||||
scavenge_counter_ = kDefaultReleaseDelay;
|
||||
return;
|
||||
}
|
||||
|
||||
++stats_.scavenge_count;
|
||||
|
||||
Length released_pages = ReleaseAtLeastNPages(1);
|
||||
|
||||
if (released_pages == 0) {
|
||||
// Nothing to scavenge, delay for a while.
|
||||
scavenge_counter_ = kDefaultReleaseDelay;
|
||||
} else {
|
||||
// Compute how long to wait until we return memory.
|
||||
// FLAGS_tcmalloc_release_rate==1 means wait for 1000 pages
|
||||
// after releasing one page.
|
||||
const double mult = 1000.0 / rate;
|
||||
double wait = mult * static_cast<double>(released_pages);
|
||||
if (wait > kMaxReleaseDelay) {
|
||||
// Avoid overflow and bound to reasonable range.
|
||||
wait = kMaxReleaseDelay;
|
||||
}
|
||||
scavenge_counter_ = static_cast<int64_t>(wait);
|
||||
}
|
||||
}
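// E.g. (illustrative): with tcmalloc_release_rate = 10, releasing one page
// sets scavenge_counter_ to 1000/10 * 1 = 100, so the next release happens
// only after roughly 100 more pages' worth of frees.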

Length PageHeap::ReleaseSpan(Span* s) {
  ASSERT(s->location == Span::ON_NORMAL_FREELIST);

  if (DecommitSpan(s)) {
    RemoveFromFreeList(s);
    const Length n = s->length;
    s->location = Span::ON_RETURNED_FREELIST;
    MergeIntoFreeList(s);  // Coalesces if possible.
    return n;
  }

  return 0;
}

Length PageHeap::ReleaseAtLeastNPages(Length num_pages) {
  ASSERT(lock_.IsHeld());
  Length released_pages = 0;

  // Round robin through the lists of free spans, releasing a
  // span from each list.  Stop after releasing at least num_pages
  // or when there is nothing more to release.
  while (released_pages < num_pages && stats_.free_bytes > 0) {
    for (int i = 0; i < kMaxPages+1 && released_pages < num_pages;
         i++, release_index_++) {
      Span* s;
      if (release_index_ > kMaxPages) release_index_ = 0;

      if (release_index_ == kMaxPages) {
        if (large_normal_.empty()) {
          continue;
        }
        s = (large_normal_.begin())->span;
      } else {
        SpanList* slist = &free_[release_index_];
        if (DLL_IsEmpty(&slist->normal)) {
          continue;
        }
        s = slist->normal.prev;
      }
      // TODO(todd) if the remaining number of pages to release
      // is significantly smaller than s->length, and s is on the
      // large freelist, should we carve s instead of releasing
      // the whole thing?
      Length released_len = ReleaseSpan(s);
      // Some systems do not support release
      if (released_len == 0) return released_pages;
      released_pages += released_len;
    }
  }
  return released_pages;
}

bool PageHeap::EnsureLimit(Length n, bool withRelease) {
  ASSERT(lock_.IsHeld());
  Length limit = (FLAGS_tcmalloc_heap_limit_mb*1024*1024) >> kPageShift;
  if (limit == 0) return true;  // there is no limit

  // We do not use stats_.system_bytes because it does not take
  // MetaDataAllocs into account.
  Length takenPages = TCMalloc_SystemTaken >> kPageShift;
  // XXX takenPages may be slightly bigger than limit for two reasons:
  // * MetaDataAllocs ignore the limit (it is not easy to handle
  //   out of memory there)
  // * sys_alloc may round allocation up to huge page size,
  //   although smaller limit was ensured

  ASSERT(takenPages >= stats_.unmapped_bytes >> kPageShift);
  takenPages -= stats_.unmapped_bytes >> kPageShift;

  if (takenPages + n > limit && withRelease) {
    takenPages -= ReleaseAtLeastNPages(takenPages + n - limit);
  }

  return takenPages + n <= limit;
}
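
To make the limit arithmetic concrete (a hedged sketch with illustrative numbers, not library code): with a 512 MB heap limit and 8 KiB pages (kPageShift == 13), the limit works out to 65536 pages, and a request is over budget once committed pages minus unmapped pages plus the request exceeds that.

    #include <cstdint>
    #include <cstdio>

    int main() {
      // Hypothetical configuration mirroring EnsureLimit's MB->pages math.
      const int kPageShift = 13;           // 8 KiB pages (one common config)
      const uint64_t heap_limit_mb = 512;  // e.g. TCMALLOC_HEAP_LIMIT_MB
      const uint64_t limit = (heap_limit_mb * 1024 * 1024) >> kPageShift;
      uint64_t taken = 65000, unmapped = 2000, n = 3000;
      taken -= unmapped;                   // unmapped pages don't count
      std::printf("limit=%llu pages, need release=%s\n",
                  (unsigned long long)limit,
                  (taken + n > limit) ? "yes" : "no");  // 65536, yes
    }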

void PageHeap::RegisterSizeClass(Span* span, uint32 sc) {
  // Associate span object with all interior pages as well
  ASSERT(span->location == Span::IN_USE);
  ASSERT(GetDescriptor(span->start) == span);
  ASSERT(GetDescriptor(span->start+span->length-1) == span);
  span->sizeclass = sc;
  for (Length i = 1; i < span->length-1; i++) {
    pagemap_.set(span->start+i, span);
  }
}

void PageHeap::GetSmallSpanStatsLocked(SmallSpanStats* result) {
  ASSERT(lock_.IsHeld());
  for (int i = 0; i < kMaxPages; i++) {
    result->normal_length[i] = DLL_Length(&free_[i].normal);
    result->returned_length[i] = DLL_Length(&free_[i].returned);
  }
}

void PageHeap::GetLargeSpanStatsLocked(LargeSpanStats* result) {
  ASSERT(lock_.IsHeld());
  result->spans = 0;
  result->normal_pages = 0;
  result->returned_pages = 0;
  for (SpanSet::iterator it = large_normal_.begin(); it != large_normal_.end(); ++it) {
    result->normal_pages += it->length;
    result->spans++;
  }
  for (SpanSet::iterator it = large_returned_.begin(); it != large_returned_.end(); ++it) {
    result->returned_pages += it->length;
    result->spans++;
  }
}

bool PageHeap::GetNextRange(PageID start, base::MallocRange* r) {
  ASSERT(lock_.IsHeld());
  Span* span = reinterpret_cast<Span*>(pagemap_.Next(start));
  if (span == NULL) {
    return false;
  }
  r->address = span->start << kPageShift;
  r->length = span->length << kPageShift;
  r->fraction = 0;
  switch (span->location) {
    case Span::IN_USE:
      r->type = base::MallocRange::INUSE;
      r->fraction = 1;
      if (span->sizeclass > 0) {
        // Only some of the objects in this span may be in use.
        const size_t osize = Static::sizemap()->class_to_size(span->sizeclass);
        r->fraction = (1.0 * osize * span->refcount) / r->length;
      }
      break;
    case Span::ON_NORMAL_FREELIST:
      r->type = base::MallocRange::FREE;
      break;
    case Span::ON_RETURNED_FREELIST:
      r->type = base::MallocRange::UNMAPPED;
      break;
    default:
      r->type = base::MallocRange::UNKNOWN;
      break;
  }
  return true;
}
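
A quick worked example of the fraction computed above (illustrative values only): a one-page 8 KiB span carved into 64-byte objects with 32 of them live gives fraction = 64 * 32 / 8192 = 0.25.

    #include <cstdio>

    int main() {
      // Illustrative: 8 KiB span, 64-byte size class, 32 live objects.
      const double length = 8192, osize = 64, refcount = 32;
      std::printf("fraction=%.2f\n", (1.0 * osize * refcount) / length);  // 0.25
    }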

bool PageHeap::GrowHeap(Length n, LockingContext* context) {
  ASSERT(lock_.IsHeld());
  ASSERT(kMaxPages >= kMinSystemAlloc);
  if (n > kMaxValidPages) return false;
  Length ask = (n > kMinSystemAlloc) ? n : static_cast<Length>(kMinSystemAlloc);
  size_t actual_size;
  void* ptr = NULL;
  if (EnsureLimit(ask)) {
    ptr = TCMalloc_SystemAlloc(ask << kPageShift, &actual_size, kPageSize);
  }
  if (ptr == NULL) {
    if (n < ask) {
      // Try growing just "n" pages
      ask = n;
      if (EnsureLimit(ask)) {
        ptr = TCMalloc_SystemAlloc(ask << kPageShift, &actual_size, kPageSize);
      }
    }
    if (ptr == NULL) return false;
  }
  ask = actual_size >> kPageShift;
  context->grown_by += ask << kPageShift;

  ++stats_.reserve_count;
  ++stats_.commit_count;

  uint64_t old_system_bytes = stats_.system_bytes;
  stats_.system_bytes += (ask << kPageShift);
  stats_.committed_bytes += (ask << kPageShift);

  stats_.total_commit_bytes += (ask << kPageShift);
  stats_.total_reserve_bytes += (ask << kPageShift);

  const PageID p = reinterpret_cast<uintptr_t>(ptr) >> kPageShift;
  ASSERT(p > 0);

  // If we already have a lot of pages allocated, just preallocate a bunch of
  // memory for the page map.  This prevents fragmentation by pagemap metadata
  // when a program keeps allocating and freeing large blocks.

  if (old_system_bytes < kPageMapBigAllocationThreshold
      && stats_.system_bytes >= kPageMapBigAllocationThreshold) {
    pagemap_.PreallocateMoreMemory();
  }

  // Make sure pagemap_ has entries for all of the new pages.
  // Plus ensure one before and one after so coalescing code
  // does not need bounds-checking.
  if (pagemap_.Ensure(p-1, ask+2)) {
    // Pretend the new area is allocated and then Delete() it to cause
    // any necessary coalescing to occur.
    Span* span = NewSpan(p, ask);
    RecordSpan(span);
    DeleteLocked(span);
    ASSERT(stats_.unmapped_bytes + stats_.committed_bytes == stats_.system_bytes);
    ASSERT(Check());
    return true;
  } else {
    // We could not allocate memory within "pagemap_"
    // TODO: Once we can return memory to the system, return the new span
    return false;
  }
}

bool PageHeap::Check() {
  ASSERT(lock_.IsHeld());
  return true;
}

bool PageHeap::CheckExpensive() {
  bool result = Check();
  CheckSet(&large_normal_, kMaxPages + 1, Span::ON_NORMAL_FREELIST);
  CheckSet(&large_returned_, kMaxPages + 1, Span::ON_RETURNED_FREELIST);
  for (int s = 1; s <= kMaxPages; s++) {
    CheckList(&free_[s - 1].normal, s, s, Span::ON_NORMAL_FREELIST);
    CheckList(&free_[s - 1].returned, s, s, Span::ON_RETURNED_FREELIST);
  }
  return result;
}

bool PageHeap::CheckList(Span* list, Length min_pages, Length max_pages,
                         int freelist) {
  for (Span* s = list->next; s != list; s = s->next) {
    CHECK_CONDITION(s->location == freelist);  // NORMAL or RETURNED
    CHECK_CONDITION(s->length >= min_pages);
    CHECK_CONDITION(s->length <= max_pages);
    CHECK_CONDITION(GetDescriptor(s->start) == s);
    CHECK_CONDITION(GetDescriptor(s->start+s->length-1) == s);
  }
  return true;
}

bool PageHeap::CheckSet(SpanSet* spanset, Length min_pages, int freelist) {
  for (SpanSet::iterator it = spanset->begin(); it != spanset->end(); ++it) {
    Span* s = it->span;
    CHECK_CONDITION(s->length == it->length);
    CHECK_CONDITION(s->location == freelist);  // NORMAL or RETURNED
    CHECK_CONDITION(s->length >= min_pages);
    CHECK_CONDITION(GetDescriptor(s->start) == s);
    CHECK_CONDITION(GetDescriptor(s->start+s->length-1) == s);
  }
  return true;
}

}  // namespace tcmalloc

397
3party/gperftools/src/page_heap.h
Normal file
@ -0,0 +1,397 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#ifndef TCMALLOC_PAGE_HEAP_H_
#define TCMALLOC_PAGE_HEAP_H_

#include <config.h>
#include <stddef.h>  // for size_t
#include <stdint.h>  // for uint64_t, int64_t, uint16_t
#include <gperftools/malloc_extension.h>
#include "base/basictypes.h"
#include "base/spinlock.h"
#include "base/thread_annotations.h"
#include "common.h"
#include "packed-cache-inl.h"
#include "pagemap.h"
#include "span.h"

// We need to dllexport PageHeap just for the unittest.  MSVC complains
// that we don't dllexport the PageHeap members, but we don't need to
// test those, so I just suppress this warning.
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable:4251)
#endif

// This #ifdef should almost never be set.  Set NO_TCMALLOC_SAMPLES if
// you're porting to a system where you really can't get a stacktrace.
// Because we control the definition of GetStackTrace, all clients of
// GetStackTrace should #include us rather than stacktrace.h.
#ifdef NO_TCMALLOC_SAMPLES
// We use #define so code compiles even if you #include stacktrace.h somehow.
# define GetStackTrace(stack, depth, skip) (0)
#else
# include <gperftools/stacktrace.h>
#endif

namespace base {
struct MallocRange;
}

namespace tcmalloc {

// -------------------------------------------------------------------------
// Map from page-id to per-page data
// -------------------------------------------------------------------------

// We use PageMap2<> for 32-bit and PageMap3<> for 64-bit machines.
// We also use a simple one-level cache for hot PageID-to-sizeclass mappings,
// because sometimes the sizeclass is all the information we need.

// Selector class -- general selector uses 3-level map
template <int BITS> class MapSelector {
 public:
  typedef TCMalloc_PageMap3<BITS-kPageShift> Type;
};

#ifndef TCMALLOC_SMALL_BUT_SLOW
// x86-64 and arm64 use 48 bits of address space, so we can use just a
// two-level map; but since the initial RAM consumption of this mode is
// a bit on the higher side, we opt out of it in TCMALLOC_SMALL_BUT_SLOW
// mode.
template <> class MapSelector<48> {
 public:
  typedef TCMalloc_PageMap2<48-kPageShift> Type;
};

#endif  // TCMALLOC_SMALL_BUT_SLOW

// A two-level map for 32-bit machines
template <> class MapSelector<32> {
 public:
  typedef TCMalloc_PageMap2<32-kPageShift> Type;
};

// -------------------------------------------------------------------------
// Page-level allocator
//  * Eager coalescing
//
// Heap for page-level allocation.  We allow allocating and freeing
// contiguous runs of pages (each such run is called a "span").
// -------------------------------------------------------------------------

class PERFTOOLS_DLL_DECL PageHeap {
 public:
  PageHeap() : PageHeap(1) {}
  PageHeap(Length smallest_span_size);

  SpinLock* pageheap_lock() {
    return &lock_;
  }

  // Aligns given size up to be multiple of smallest_span_size.
  Length RoundUpSize(Length n);

  // Allocate a run of "n" pages.  Returns zero if out of memory.
  // Caller should not pass "n == 0" -- instead, n should have
  // been rounded up already.
  Span* New(Length n) {
    return NewWithSizeClass(n, 0);
  }

  Span* NewWithSizeClass(Length n, uint32 sizeclass);

  // Same as above but with alignment.  Requires page heap
  // lock, like New above.
  Span* NewAligned(Length n, Length align_pages);

  // Delete the span "[p, p+n-1]".
  // REQUIRES: span was returned by earlier call to New() and
  //           has not yet been deleted.
  void Delete(Span* span);

  template <typename Body>
  void PrepareAndDelete(Span* span, const Body& body) LOCKS_EXCLUDED(lock_) {
    SpinLockHolder h(&lock_);
    body();
    DeleteLocked(span);
  }

  // Mark an allocated span as being used for small objects of the
  // specified size-class.
  // REQUIRES: span was returned by an earlier call to New()
  //           and has not yet been deleted.
  void RegisterSizeClass(Span* span, uint32 sc);

  Span* SplitForTest(Span* span, Length n) {
    SpinLockHolder l(&lock_);
    return Split(span, n);
  }

  // Return the descriptor for the specified page.  Returns NULL if
  // this PageID was not allocated previously.
  inline ATTRIBUTE_ALWAYS_INLINE
  Span* GetDescriptor(PageID p) const {
    return reinterpret_cast<Span*>(pagemap_.get(p));
  }

  // If this page heap is managing a range with starting page # >= start,
  // store info about the range in *r and return true.  Else return false.
  bool GetNextRange(PageID start, base::MallocRange* r);

  // Page heap statistics
  struct Stats {
    Stats() : system_bytes(0), free_bytes(0), unmapped_bytes(0), committed_bytes(0),
              scavenge_count(0), commit_count(0), total_commit_bytes(0),
              decommit_count(0), total_decommit_bytes(0),
              reserve_count(0), total_reserve_bytes(0) {}
    uint64_t system_bytes;    // Total bytes allocated from system
    uint64_t free_bytes;      // Total bytes on normal freelists
    uint64_t unmapped_bytes;  // Total bytes on returned freelists
    uint64_t committed_bytes; // Bytes committed, always <= system_bytes_.

    uint64_t scavenge_count;  // Number of times scavenging released pages

    uint64_t commit_count;          // Number of virtual memory commits
    uint64_t total_commit_bytes;    // Bytes committed in lifetime of process
    uint64_t decommit_count;        // Number of virtual memory decommits
    uint64_t total_decommit_bytes;  // Bytes decommitted in lifetime of process

    uint64_t reserve_count;         // Number of virtual memory reserves
    uint64_t total_reserve_bytes;   // Bytes reserved in lifetime of process
  };
  inline Stats StatsLocked() const { return stats_; }

  struct SmallSpanStats {
    // For each free list of small spans, the length (in spans) of the
    // normal and returned free lists for that size.
    //
    // NOTE: index 'i' accounts the number of spans of length 'i + 1'.
    int64 normal_length[kMaxPages];
    int64 returned_length[kMaxPages];
  };
  void GetSmallSpanStatsLocked(SmallSpanStats* result);

  // Stats for free large spans (i.e., spans with more than kMaxPages pages).
  struct LargeSpanStats {
    int64 spans;           // Number of such spans
    int64 normal_pages;    // Combined page length of normal large spans
    int64 returned_pages;  // Combined page length of unmapped spans
  };
  void GetLargeSpanStatsLocked(LargeSpanStats* result);

  bool Check();
  // Like Check() but does some more comprehensive checking.
  bool CheckExpensive();
  bool CheckList(Span* list, Length min_pages, Length max_pages,
                 int freelist);  // ON_NORMAL_FREELIST or ON_RETURNED_FREELIST
  bool CheckSet(SpanSet* s, Length min_pages, int freelist);

  // Try to release at least num_pages for reuse by the OS.  Returns
  // the actual number of pages released, which may be less than
  // num_pages if there weren't enough pages to release.  The result
  // may also be larger than num_pages since page_heap might decide to
  // release one large range instead of fragmenting it into two
  // smaller released and unreleased ranges.
  Length ReleaseAtLeastNPages(Length num_pages);

  // Reads and writes to pagemap_cache_ do not require locking.
  bool TryGetSizeClass(PageID p, uint32* out) const {
    return pagemap_cache_.TryGet(p, out);
  }
  void SetCachedSizeClass(PageID p, uint32 cl) {
    ASSERT(cl != 0);
    pagemap_cache_.Put(p, cl);
  }
  void InvalidateCachedSizeClass(PageID p) { pagemap_cache_.Invalidate(p); }
  uint32 GetSizeClassOrZero(PageID p) const {
    uint32 cached_value;
    if (!TryGetSizeClass(p, &cached_value)) {
      cached_value = 0;
    }
    return cached_value;
  }

  bool GetAggressiveDecommit(void) { return aggressive_decommit_; }
  void SetAggressiveDecommit(bool aggressive_decommit) {
    aggressive_decommit_ = aggressive_decommit;
  }

 private:
  struct LockingContext;

  void HandleUnlock(LockingContext* context) UNLOCK_FUNCTION(lock_);

  // Allocates a big block of memory for the pagemap once we reach more than
  // 128MB
  static const size_t kPageMapBigAllocationThreshold = 128 << 20;

  // Minimum number of pages to fetch from system at a time.  Must be
  // significantly bigger than kBlockSize to amortize system-call
  // overhead, and also to reduce external fragmentation.  Also, we
  // should keep this value big because various incarnations of Linux
  // have small limits on the number of mmap() regions per
  // address-space.
  // REQUIRED: kMinSystemAlloc <= kMaxPages;
  static const int kMinSystemAlloc = kMaxPages;

  // Never delay scavenging for more than the following number of
  // deallocated pages.  With 4K pages, this comes to 4GB of
  // deallocation.
  static const int kMaxReleaseDelay = 1 << 20;

  // If there is nothing to release, wait for so many pages before
  // scavenging again.  With 4K pages, this comes to 1GB of memory.
  static const int kDefaultReleaseDelay = 1 << 18;

  const Length smallest_span_size_;

  SpinLock lock_;

  // Pick the appropriate map and cache types based on pointer size
  typedef MapSelector<kAddressBits>::Type PageMap;
  typedef PackedCache<kAddressBits - kPageShift> PageMapCache;
  mutable PageMapCache pagemap_cache_;
  PageMap pagemap_;

  // We segregate spans of a given size into two circular linked
  // lists: one for normal spans, and one for spans whose memory
  // has been returned to the system.
  struct SpanList {
    Span normal;
    Span returned;
  };

  // Sets of spans with length > kMaxPages.
  //
  // Rather than using a linked list, we use sets here for efficient
  // best-fit search.
  SpanSet large_normal_;
  SpanSet large_returned_;

  // Array mapping from span length to a doubly linked list of free spans
  //
  // NOTE: index 'i' stores spans of length 'i + 1'.
  SpanList free_[kMaxPages];

  // Statistics on system, free, and unmapped bytes
  Stats stats_;

  Span* NewLocked(Length n, LockingContext* context) EXCLUSIVE_LOCKS_REQUIRED(lock_);
  void DeleteLocked(Span* span) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // Split an allocated span into two spans: one of length "n" pages
  // followed by another span of length "span->length - n" pages.
  // Modifies "*span" to point to the first span of length "n" pages.
  // Returns a pointer to the second span.
  //
  // REQUIRES: "0 < n < span->length"
  // REQUIRES: span->location == IN_USE
  // REQUIRES: span->sizeclass == 0
  Span* Split(Span* span, Length n);

  Span* SearchFreeAndLargeLists(Length n);

  bool GrowHeap(Length n, LockingContext* context) EXCLUSIVE_LOCKS_REQUIRED(lock_);

  // REQUIRES: span->length >= n
  // REQUIRES: span->location != IN_USE
  // Remove span from its free list, and move any leftover part of
  // span into appropriate free lists.  Also update "span" to have
  // length exactly "n" and mark it as non-free so it can be returned
  // to the client.  After all that, decrease free_pages_ by n and
  // return span.
  Span* Carve(Span* span, Length n);

  void RecordSpan(Span* span) {
    pagemap_.set(span->start, span);
    if (span->length > 1) {
      pagemap_.set(span->start + span->length - 1, span);
    }
  }

  // Allocate a large span of length == n.  If successful, returns a
  // span of exactly the specified length.  Else, returns NULL.
  Span* AllocLarge(Length n);

  // Coalesce span with neighboring spans if possible, prepend to
  // appropriate free list, and adjust stats.
  void MergeIntoFreeList(Span* span);

  // Commit the span.
  void CommitSpan(Span* span);

  // Decommit the span.
  bool DecommitSpan(Span* span);

  // Prepends span to appropriate free list, and adjusts stats.
  void PrependToFreeList(Span* span);

  // Removes span from its free list, and adjusts stats.
  void RemoveFromFreeList(Span* span);

  // Incrementally release some memory to the system.
  // IncrementalScavenge(n) is called whenever n pages are freed.
  void IncrementalScavenge(Length n);

  // Attempts to decommit 's' and move it to the returned freelist.
  //
  // Returns the length of the Span or zero if release failed.
  //
  // REQUIRES: 's' must be on the NORMAL freelist.
  Length ReleaseSpan(Span* s);

  // Checks if we are allowed to take more memory from the system.
  // If limit is reached and allowRelease is true, tries to release
  // some unused spans.
  bool EnsureLimit(Length n, bool allowRelease = true);

  Span* CheckAndHandlePreMerge(Span* span, Span* other);

  // Number of pages to deallocate before doing more scavenging
  int64_t scavenge_counter_;

  // Index of last free list where we released memory to the OS.
  int release_index_;

  bool aggressive_decommit_;
};

}  // namespace tcmalloc

#ifdef _MSC_VER
#pragma warning(pop)
#endif

#endif  // TCMALLOC_PAGE_HEAP_H_
179
3party/gperftools/src/page_heap_allocator.h
Normal file
@ -0,0 +1,179 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2008, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>

#ifndef TCMALLOC_PAGE_HEAP_ALLOCATOR_H_
#define TCMALLOC_PAGE_HEAP_ALLOCATOR_H_

#include <stddef.h>  // for NULL, size_t

#include "common.h"            // for MetaDataAlloc
#include "internal_logging.h"  // for ASSERT

namespace tcmalloc {

// Simple allocator for objects of a specified type.  External locking
// is required before accessing one of these objects.
template <class T>
class PageHeapAllocator {
 public:
  // We use an explicit Init function because these variables are statically
  // allocated and their constructors might not have run by the time some
  // other static variable tries to allocate memory.
  void Init() {
    ASSERT(sizeof(T) <= kAllocIncrement);
    inuse_ = 0;
    free_area_ = NULL;
    free_avail_ = 0;
    free_list_ = NULL;
    // Reserve some space at the beginning to avoid fragmentation.
    Delete(New());
  }

  T* New() {
    // Consult free list
    void* result;
    if (free_list_ != NULL) {
      result = free_list_;
      free_list_ = *(reinterpret_cast<void**>(result));
    } else {
      if (free_avail_ < sizeof(T)) {
        // Need more room.  We assume that MetaDataAlloc returns
        // suitably aligned memory.
        free_area_ = reinterpret_cast<char*>(MetaDataAlloc(kAllocIncrement));
        if (free_area_ == NULL) {
          Log(kCrash, __FILE__, __LINE__,
              "FATAL ERROR: Out of memory trying to allocate internal "
              "tcmalloc data (bytes, object-size)",
              kAllocIncrement, sizeof(T));
        }
        free_avail_ = kAllocIncrement;
      }
      result = free_area_;
      free_area_ += sizeof(T);
      free_avail_ -= sizeof(T);
    }
    inuse_++;
    return reinterpret_cast<T*>(result);
  }

  void Delete(T* p) {
    *(reinterpret_cast<void**>(p)) = free_list_;
    free_list_ = p;
    inuse_--;
  }

  int inuse() const { return inuse_; }

 private:
  // How much to allocate from system at a time
  static const int kAllocIncrement = 128 << 10;

  // Free area from which to carve new objects
  char* free_area_;
  size_t free_avail_;

  // Free list of already carved objects
  void* free_list_;

  // Number of allocated but unfreed objects
  int inuse_;
};
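
To see the bump-pointer-plus-intrusive-free-list scheme in isolation, here is a self-contained miniature of the same idea (names and the malloc backing are illustrative, not tcmalloc's; the real class draws from MetaDataAlloc and requires external locking):

    #include <cassert>
    #include <cstddef>
    #include <cstdlib>

    template <class T>
    class MiniArenaAllocator {
     public:
      void Init() {
        static_assert(sizeof(T) >= sizeof(void*),
                      "freed objects must be able to hold a next-pointer");
        free_area_ = nullptr;
        free_avail_ = 0;
        free_list_ = nullptr;
      }

      T* New() {
        void* result;
        if (free_list_ != nullptr) {        // reuse a previously freed object
          result = free_list_;
          free_list_ = *reinterpret_cast<void**>(result);
        } else {
          if (free_avail_ < sizeof(T)) {    // refill the bump arena
            free_area_ = static_cast<char*>(std::malloc(kAllocIncrement));
            assert(free_area_ != nullptr);
            free_avail_ = kAllocIncrement;
          }
          result = free_area_;              // carve from the arena
          free_area_ += sizeof(T);
          free_avail_ -= sizeof(T);
        }
        return reinterpret_cast<T*>(result);
      }

      void Delete(T* p) {                   // push onto the intrusive free list
        *reinterpret_cast<void**>(p) = free_list_;
        free_list_ = p;
      }

     private:
      static const int kAllocIncrement = 128 << 10;
      char* free_area_;
      std::size_t free_avail_;
      void* free_list_;
    };

    int main() {
      MiniArenaAllocator<double> alloc;
      alloc.Init();
      double* a = alloc.New();
      alloc.Delete(a);
      double* b = alloc.New();  // recycles the freed slot
      assert(a == b);
    }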

// STL-compatible allocator which forwards allocations to a PageHeapAllocator.
//
// Like PageHeapAllocator, this requires external synchronization.  To keep
// separate STLPageHeapAllocator<T> instantiations from sharing the same
// underlying PageHeapAllocator<T>, the |LockingTag| template argument should
// be used.  Template instantiations with different locking tags can safely be
// used concurrently.
template <typename T, class LockingTag>
class STLPageHeapAllocator {
 public:
  typedef size_t size_type;
  typedef ptrdiff_t difference_type;
  typedef T* pointer;
  typedef const T* const_pointer;
  typedef T& reference;
  typedef const T& const_reference;
  typedef T value_type;

  template <class T1> struct rebind {
    typedef STLPageHeapAllocator<T1, LockingTag> other;
  };

  STLPageHeapAllocator() { }
  STLPageHeapAllocator(const STLPageHeapAllocator&) { }
  template <class T1> STLPageHeapAllocator(const STLPageHeapAllocator<T1, LockingTag>&) { }
  ~STLPageHeapAllocator() { }

  pointer address(reference x) const { return &x; }
  const_pointer address(const_reference x) const { return &x; }

  size_type max_size() const { return size_t(-1) / sizeof(T); }

  void construct(pointer p, const T& val) { ::new(p) T(val); }
  void construct(pointer p) { ::new(p) T(); }
  void destroy(pointer p) { p->~T(); }

  // There's no state, so these allocators are always equal
  bool operator==(const STLPageHeapAllocator&) const { return true; }
  bool operator!=(const STLPageHeapAllocator&) const { return false; }

  pointer allocate(size_type n, const void* = 0) {
    if (!underlying_.initialized) {
      underlying_.allocator.Init();
      underlying_.initialized = true;
    }

    CHECK_CONDITION(n == 1);
    return underlying_.allocator.New();
  }
  void deallocate(pointer p, size_type n) {
    CHECK_CONDITION(n == 1);
    underlying_.allocator.Delete(p);
  }

 private:
  struct Storage {
    explicit Storage(base::LinkerInitialized x) {}
    PageHeapAllocator<T> allocator;
    bool initialized;
  };
  static Storage underlying_;
};

template<typename T, class LockingTag>
typename STLPageHeapAllocator<T, LockingTag>::Storage STLPageHeapAllocator<T, LockingTag>::underlying_(base::LINKER_INITIALIZED);

}  // namespace tcmalloc

#endif  // TCMALLOC_PAGE_HEAP_ALLOCATOR_H_
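
A usage sketch of the tag mechanism (illustrative only; MyTag and the int element type are hypothetical, and external locking remains the caller's responsibility): declaring a container with its own tag type gives it a private underlying arena, since the static underlying_ is keyed on both T and LockingTag.

    #include <set>
    #include "page_heap_allocator.h"  // assumes this header is on the include path

    struct MyTag {};  // hypothetical tag; any distinct type keeps arenas separate
    typedef std::set<int, std::less<int>,
                     tcmalloc::STLPageHeapAllocator<int, MyTag> > MySet;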

322
3party/gperftools/src/pagemap.h
Normal file
@ -0,0 +1,322 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2005, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat <opensource@google.com>
//
// A data structure used by the caching malloc.  It maps from page# to
// a pointer that contains info about that page.  We use two
// representations: one for 32-bit addresses, and another for 64 bit
// addresses.  Both representations provide the same interface.  The
// first representation is implemented as a flat array, the second as
// a three-level radix tree that strips away approximately 1/3rd of
// the bits every time.
//
// The BITS parameter should be the number of bits required to hold
// a page number.  E.g., with 32 bit pointers and 4K pages (i.e.,
// page offset fits in lower 12 bits), BITS == 20.

#ifndef TCMALLOC_PAGEMAP_H_
#define TCMALLOC_PAGEMAP_H_

#include "config.h"

#include <stddef.h>  // for NULL, size_t
#include <string.h>  // for memset
#include <stdint.h>
#include "internal_logging.h"  // for ASSERT

// Single-level array
template <int BITS>
class TCMalloc_PageMap1 {
 private:
  static const int LENGTH = 1 << BITS;

  void** array_;

 public:
  typedef uintptr_t Number;

  explicit TCMalloc_PageMap1(void* (*allocator)(size_t)) {
    array_ = reinterpret_cast<void**>((*allocator)(sizeof(void*) << BITS));
    memset(array_, 0, sizeof(void*) << BITS);
  }

  // Ensure that the map contains initialized entries "x .. x+n-1".
  // Returns true if successful, false if we could not allocate memory.
  bool Ensure(Number x, size_t n) {
    // Nothing to do since flat array was allocated at start.  All
    // that's left is to check for overflow (that is, we don't want to
    // ensure a number y where array_[y] would be an out-of-bounds
    // access).
    return n <= LENGTH - x;  // an overflow-free way to do "x + n <= LENGTH"
  }

  void PreallocateMoreMemory() {}

  // Return the current value for KEY.  Returns NULL if not yet set,
  // or if k is out of range.
  ATTRIBUTE_ALWAYS_INLINE
  void* get(Number k) const {
    if ((k >> BITS) > 0) {
      return NULL;
    }
    return array_[k];
  }

  // REQUIRES "k" is in range "[0,2^BITS-1]".
  // REQUIRES "k" has been ensured before.
  //
  // Sets the value 'v' for key 'k'.
  void set(Number k, void* v) {
    array_[k] = v;
  }

  // Return the first non-NULL pointer found in this map for
  // a page number >= k.  Returns NULL if no such number is found.
  void* Next(Number k) const {
    while (k < (1 << BITS)) {
      if (array_[k] != NULL) return array_[k];
      k++;
    }
    return NULL;
  }
};

// Two-level radix tree
template <int BITS>
class TCMalloc_PageMap2 {
 private:
  static const int LEAF_BITS = (BITS + 1) / 2;
  static const int LEAF_LENGTH = 1 << LEAF_BITS;

  static const int ROOT_BITS = BITS - LEAF_BITS;
  static const int ROOT_LENGTH = 1 << ROOT_BITS;

  // Leaf node
  struct Leaf {
    void* values[LEAF_LENGTH];
  };

  Leaf* root_[ROOT_LENGTH];     // Pointers to child nodes
  void* (*allocator_)(size_t);  // Memory allocator

 public:
  typedef uintptr_t Number;

  explicit TCMalloc_PageMap2(void* (*allocator)(size_t)) {
    allocator_ = allocator;
    memset(root_, 0, sizeof(root_));
  }

  ATTRIBUTE_ALWAYS_INLINE
  void* get(Number k) const {
    const Number i1 = k >> LEAF_BITS;
    const Number i2 = k & (LEAF_LENGTH-1);
    if ((k >> BITS) > 0 || root_[i1] == NULL) {
      return NULL;
    }
    return root_[i1]->values[i2];
  }

  void set(Number k, void* v) {
    const Number i1 = k >> LEAF_BITS;
    const Number i2 = k & (LEAF_LENGTH-1);
    ASSERT(i1 < ROOT_LENGTH);
    root_[i1]->values[i2] = v;
  }

  bool Ensure(Number start, size_t n) {
    for (Number key = start; key <= start + n - 1; ) {
      const Number i1 = key >> LEAF_BITS;

      // Check for overflow
      if (i1 >= ROOT_LENGTH)
        return false;

      // Make 2nd level node if necessary
      if (root_[i1] == NULL) {
        Leaf* leaf = reinterpret_cast<Leaf*>((*allocator_)(sizeof(Leaf)));
        if (leaf == NULL) return false;
        memset(leaf, 0, sizeof(*leaf));
        root_[i1] = leaf;
      }

      // Advance key past whatever is covered by this leaf node
      key = ((key >> LEAF_BITS) + 1) << LEAF_BITS;
    }
    return true;
  }

  void PreallocateMoreMemory() {
    // Allocate enough to keep track of all possible pages
    if (BITS < 20) {
      Ensure(0, Number(1) << BITS);
    }
  }

  void* Next(Number k) const {
    while (k < (Number(1) << BITS)) {
      const Number i1 = k >> LEAF_BITS;
      Leaf* leaf = root_[i1];
      if (leaf != NULL) {
        // Scan forward in leaf
        for (Number i2 = k & (LEAF_LENGTH - 1); i2 < LEAF_LENGTH; i2++) {
          if (leaf->values[i2] != NULL) {
            return leaf->values[i2];
          }
        }
      }
      // Skip to next top-level entry
      k = (i1 + 1) << LEAF_BITS;
    }
    return NULL;
  }
};
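
To make the index arithmetic concrete, here is a worked example (editorial illustration, not library code): with 32-bit pointers and 4 KiB pages, BITS == 20, so LEAF_BITS == (20+1)/2 == 10 and ROOT_BITS == 10; page number 0x12345 then splits into root index 0x48 and leaf index 0x345.

    #include <cstdint>
    #include <cstdio>

    int main() {
      // 32-bit pointers, 4 KiB pages => BITS == 20, LEAF_BITS == 10.
      const int LEAF_BITS = 10, LEAF_LENGTH = 1 << LEAF_BITS;
      const uintptr_t k = 0x12345;  // some page number
      std::printf("root=%#lx leaf=%#lx\n",
                  (unsigned long)(k >> LEAF_BITS),          // 0x48
                  (unsigned long)(k & (LEAF_LENGTH - 1)));  // 0x345
    }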

// Three-level radix tree
template <int BITS>
class TCMalloc_PageMap3 {
 private:
  // How many bits should we consume at each interior level
  static const int INTERIOR_BITS = (BITS + 2) / 3;  // Round-up
  static const int INTERIOR_LENGTH = 1 << INTERIOR_BITS;

  // How many bits should we consume at leaf level
  static const int LEAF_BITS = BITS - 2*INTERIOR_BITS;
  static const int LEAF_LENGTH = 1 << LEAF_BITS;

  // Interior node
  struct Node {
    Node* ptrs[INTERIOR_LENGTH];
  };

  // Leaf node
  struct Leaf {
    void* values[LEAF_LENGTH];
  };

  Node root_;                   // Root of radix tree
  void* (*allocator_)(size_t);  // Memory allocator

  Node* NewNode() {
    Node* result = reinterpret_cast<Node*>((*allocator_)(sizeof(Node)));
    if (result != NULL) {
      memset(result, 0, sizeof(*result));
    }
    return result;
  }

 public:
  typedef uintptr_t Number;

  explicit TCMalloc_PageMap3(void* (*allocator)(size_t)) {
    allocator_ = allocator;
    memset(&root_, 0, sizeof(root_));
  }

  ATTRIBUTE_ALWAYS_INLINE
  void* get(Number k) const {
    const Number i1 = k >> (LEAF_BITS + INTERIOR_BITS);
    const Number i2 = (k >> LEAF_BITS) & (INTERIOR_LENGTH-1);
    const Number i3 = k & (LEAF_LENGTH-1);
    if ((k >> BITS) > 0 ||
        root_.ptrs[i1] == NULL || root_.ptrs[i1]->ptrs[i2] == NULL) {
      return NULL;
    }
    return reinterpret_cast<Leaf*>(root_.ptrs[i1]->ptrs[i2])->values[i3];
  }

  void set(Number k, void* v) {
    ASSERT(k >> BITS == 0);
    const Number i1 = k >> (LEAF_BITS + INTERIOR_BITS);
    const Number i2 = (k >> LEAF_BITS) & (INTERIOR_LENGTH-1);
    const Number i3 = k & (LEAF_LENGTH-1);
    reinterpret_cast<Leaf*>(root_.ptrs[i1]->ptrs[i2])->values[i3] = v;
  }

  bool Ensure(Number start, size_t n) {
    for (Number key = start; key <= start + n - 1; ) {
      const Number i1 = key >> (LEAF_BITS + INTERIOR_BITS);
      const Number i2 = (key >> LEAF_BITS) & (INTERIOR_LENGTH-1);

      // Check for overflow
      if (i1 >= INTERIOR_LENGTH || i2 >= INTERIOR_LENGTH)
        return false;

      // Make 2nd level node if necessary
      if (root_.ptrs[i1] == NULL) {
        Node* n = NewNode();
        if (n == NULL) return false;
        root_.ptrs[i1] = n;
      }

      // Make leaf node if necessary
      if (root_.ptrs[i1]->ptrs[i2] == NULL) {
        Leaf* leaf = reinterpret_cast<Leaf*>((*allocator_)(sizeof(Leaf)));
        if (leaf == NULL) return false;
        memset(leaf, 0, sizeof(*leaf));
        root_.ptrs[i1]->ptrs[i2] = reinterpret_cast<Node*>(leaf);
      }

      // Advance key past whatever is covered by this leaf node
      key = ((key >> LEAF_BITS) + 1) << LEAF_BITS;
    }
    return true;
  }

  void PreallocateMoreMemory() {
  }

  void* Next(Number k) const {
    while (k < (Number(1) << BITS)) {
      const Number i1 = k >> (LEAF_BITS + INTERIOR_BITS);
      const Number i2 = (k >> LEAF_BITS) & (INTERIOR_LENGTH-1);
      if (root_.ptrs[i1] == NULL) {
        // Advance to next top-level entry
        k = (i1 + 1) << (LEAF_BITS + INTERIOR_BITS);
      } else {
        Leaf* leaf = reinterpret_cast<Leaf*>(root_.ptrs[i1]->ptrs[i2]);
        if (leaf != NULL) {
          for (Number i3 = (k & (LEAF_LENGTH-1)); i3 < LEAF_LENGTH; i3++) {
            if (leaf->values[i3] != NULL) {
              return leaf->values[i3];
            }
          }
        }
        // Advance to next interior entry
        k = ((k >> LEAF_BITS) + 1) << LEAF_BITS;
      }
    }
    return NULL;
  }
};

#endif  // TCMALLOC_PAGEMAP_H_
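
A minimal exercise of the three-level map (a sketch, assuming pagemap.h above is on the include path; the malloc-based allocator callback is illustrative, since inside tcmalloc the pagemap is fed by MetaDataAlloc):

    #include <cassert>
    #include <cstdlib>
    #include "pagemap.h"

    static void* RawAlloc(size_t n) { return std::malloc(n); }

    int main() {
      // 48-bit addresses with 8 KiB pages => 35-bit page numbers.
      TCMalloc_PageMap3<35> map(&RawAlloc);
      int dummy;
      assert(map.Ensure(12345, 1));  // allocate interior + leaf nodes first
      map.set(12345, &dummy);
      assert(map.get(12345) == &dummy);
      assert(map.get(12346) == nullptr);   // same leaf, never set
      assert(map.Next(10000) == &dummy);   // first non-NULL entry >= 10000
    }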

5580
3party/gperftools/src/pprof
Executable file
File diff suppressed because it is too large
Load Diff
604
3party/gperftools/src/profile-handler.cc
Normal file
@ -0,0 +1,604 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
// Copyright (c) 2009, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
//     * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
//     * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
//     * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

// ---
// Author: Sanjay Ghemawat
//         Nabeel Mian
//
// Implements management of profile timers and the corresponding signal handler.

#include "config.h"
#include "profile-handler.h"

#if !(defined(__CYGWIN__) || defined(__CYGWIN32__))

#include <stdio.h>
#include <errno.h>
#include <sys/time.h>

#include <list>
#include <string>

#if HAVE_LINUX_SIGEV_THREAD_ID
#include <pthread.h>
// for timer_{create,settime} and associated typedefs & constants
#include <time.h>
// for sigevent
#include <signal.h>
// for SYS_gettid
#include <sys/syscall.h>
#endif

#include "base/dynamic_annotations.h"
#include "base/googleinit.h"
#include "base/logging.h"
#include "base/spinlock.h"

// Some Linux systems don't have sigev_notify_thread_id defined in
// signal.h (despite having SIGEV_THREAD_ID defined) and also lack a
// working linux/signal.h.  So let's work around that.  Note that, at
// least on Linux, sigev_notify_thread_id is a macro.
//
// See https://sourceware.org/bugzilla/show_bug.cgi?id=27417 and
// https://bugzilla.kernel.org/show_bug.cgi?id=200081
//
#if __linux__ && HAVE_LINUX_SIGEV_THREAD_ID && !defined(sigev_notify_thread_id)
#define sigev_notify_thread_id _sigev_un._tid
#endif

using std::list;
using std::string;

// This structure is used by ProfileHandlerRegisterCallback and
// ProfileHandlerUnregisterCallback as a handle to a registered callback.
struct ProfileHandlerToken {
  // Sets the callback and associated arg.
  ProfileHandlerToken(ProfileHandlerCallback cb, void* cb_arg)
      : callback(cb),
        callback_arg(cb_arg) {
  }

  // Callback function to be invoked on receiving a profile timer interrupt.
  ProfileHandlerCallback callback;
  // Argument for the callback function.
  void* callback_arg;
};

// Blocks a signal from being delivered to the current thread while the object
// is alive.  Unblocks it upon destruction.
class ScopedSignalBlocker {
 public:
  ScopedSignalBlocker(int signo) {
    sigemptyset(&sig_set_);
    sigaddset(&sig_set_, signo);
    RAW_CHECK(sigprocmask(SIG_BLOCK, &sig_set_, NULL) == 0,
              "sigprocmask (block)");
  }
  ~ScopedSignalBlocker() {
    RAW_CHECK(sigprocmask(SIG_UNBLOCK, &sig_set_, NULL) == 0,
              "sigprocmask (unblock)");
  }

 private:
  sigset_t sig_set_;
};

// This class manages profile timers and the associated signal handler.
// It is a singleton.
class ProfileHandler {
 public:
  // Registers the current thread with the profile handler.
  void RegisterThread();

  // Registers a callback routine to receive profile timer ticks.  The returned
  // token is to be used when unregistering this callback and must not be
  // deleted by the caller.
  ProfileHandlerToken* RegisterCallback(ProfileHandlerCallback callback,
                                        void* callback_arg);

  // Unregisters a previously registered callback.  Expects the token returned
  // by the corresponding RegisterCallback routine.
  void UnregisterCallback(ProfileHandlerToken* token)
      NO_THREAD_SAFETY_ANALYSIS;

  // Unregisters all the callbacks and stops the timer(s).
  void Reset();

  // Gets the current state of profile handler.
  void GetState(ProfileHandlerState* state);

  // Initializes and returns the ProfileHandler singleton.
  static ProfileHandler* Instance();

 private:
  ProfileHandler();
  ~ProfileHandler();

  // Largest allowed frequency.
  static const int32 kMaxFrequency = 4000;
  // Default frequency.
  static const int32 kDefaultFrequency = 100;

  // ProfileHandler singleton.
  static ProfileHandler* instance_;

  // Initializes the ProfileHandler singleton via GoogleOnceInit.
  static void Init();

  // Timer state as configured previously.
  bool timer_running_;

  // The number of profiling signal interrupts received.
  int64 interrupts_ GUARDED_BY(signal_lock_);

  // Profiling signal interrupt frequency, read-only after construction.
  int32 frequency_;

  // ITIMER_PROF (which uses SIGPROF), or ITIMER_REAL (which uses SIGALRM).
  // Translated into an equivalent choice of clock if per_thread_timer_enabled_
  // is true.
  int timer_type_;

  // Signal number for timer signal.
  int signal_number_;

  // Counts the number of callbacks registered.
  int32 callback_count_ GUARDED_BY(control_lock_);

  // Is profiling allowed at all?
  bool allowed_;

  // Must be false if HAVE_LINUX_SIGEV_THREAD_ID is not defined.
  bool per_thread_timer_enabled_;

#if HAVE_LINUX_SIGEV_THREAD_ID
  // This is used to destroy per-thread profiling timers on thread
  // termination.
  pthread_key_t thread_timer_key;
#endif

  // This lock serializes the registration of threads and protects the
  // callbacks_ list below.
  // Locking order:
  // In the context of a signal handler, acquire signal_lock_ to walk the
  // callback list.  Otherwise, acquire control_lock_, disable the signal
  // handler and then acquire signal_lock_.
  SpinLock control_lock_ ACQUIRED_BEFORE(signal_lock_);
  SpinLock signal_lock_;

  // Holds the list of registered callbacks.  We expect the list to be pretty
  // small.  Currently, the cpu profiler (base/profiler) and thread module
  // (base/thread.h) are the only two components registering callbacks.
  // Following are the locking requirements for callbacks_:
  // For read-write access outside the SIGPROF handler:
  //  - Acquire control_lock_
  //  - Disable SIGPROF handler.
  //  - Acquire signal_lock_
  //  - Nothing that takes ~any other lock can be nested here,
  //    including malloc.  Otherwise deadlock is possible.
  // For read-only access in the context of SIGPROF handler
  // (Read-write access is *not allowed* in the SIGPROF handler)
  //  - Acquire signal_lock_
  // For read-only access outside SIGPROF handler:
  //  - Acquire control_lock_
  typedef list<ProfileHandlerToken*> CallbackList;
  typedef CallbackList::iterator CallbackIterator;
  CallbackList callbacks_ GUARDED_BY(signal_lock_);

  // Starts or stops the interval timer.
  // Will ignore any requests to enable or disable when
  // per_thread_timer_enabled_ is true.
  void UpdateTimer(bool enable) EXCLUSIVE_LOCKS_REQUIRED(control_lock_);

  // Returns true if the handler is not being used by something else.
  // This checks the kernel's signal handler table.
  bool IsSignalHandlerAvailable();

  // Signal handler.  Iterates over and calls all the registered callbacks.
  static void SignalHandler(int sig, siginfo_t* sinfo, void* ucontext);

  DISALLOW_COPY_AND_ASSIGN(ProfileHandler);
};

ProfileHandler* ProfileHandler::instance_ = NULL;

const int32 ProfileHandler::kMaxFrequency;
const int32 ProfileHandler::kDefaultFrequency;

// If we are LD_PRELOAD-ed against a non-pthreads app, then these functions
// won't be defined.  We declare them here, for that case (with weak linkage)
// which will cause the non-definition to resolve to NULL.  We can then check
// for NULL or not in Instance.
extern "C" {
#if HAVE_LINUX_SIGEV_THREAD_ID
int timer_create(clockid_t clockid, struct sigevent* evp,
                 timer_t* timerid) ATTRIBUTE_WEAK;
int timer_delete(timer_t timerid) ATTRIBUTE_WEAK;
int timer_settime(timer_t timerid, int flags, const struct itimerspec* value,
                  struct itimerspec* ovalue) ATTRIBUTE_WEAK;
#endif
}

#if HAVE_LINUX_SIGEV_THREAD_ID

struct timer_id_holder {
  timer_t timerid;
  timer_id_holder(timer_t _timerid) : timerid(_timerid) {}
};

extern "C" {
static void ThreadTimerDestructor(void* arg) {
  if (!arg) {
    return;
  }
  timer_id_holder* holder = static_cast<timer_id_holder*>(arg);
  timer_delete(holder->timerid);
  delete holder;
}
}

static void CreateThreadTimerKey(pthread_key_t* pkey) {
  int rv = pthread_key_create(pkey, ThreadTimerDestructor);
  if (rv) {
    RAW_LOG(FATAL, "aborting due to pthread_key_create error: %s", strerror(rv));
  }
}

static void StartLinuxThreadTimer(int timer_type, int signal_number,
                                  int32 frequency, pthread_key_t timer_key) {
  int rv;
  struct sigevent sevp;
  timer_t timerid;
  struct itimerspec its;
  memset(&sevp, 0, sizeof(sevp));
  sevp.sigev_notify = SIGEV_THREAD_ID;
  sevp.sigev_notify_thread_id = syscall(SYS_gettid);
  sevp.sigev_signo = signal_number;
  clockid_t clock = CLOCK_THREAD_CPUTIME_ID;
  if (timer_type == ITIMER_REAL) {
    clock = CLOCK_MONOTONIC;
  }
  rv = timer_create(clock, &sevp, &timerid);
  if (rv) {
    RAW_LOG(FATAL, "aborting due to timer_create error: %s", strerror(errno));
  }

  timer_id_holder* holder = new timer_id_holder(timerid);
  rv = pthread_setspecific(timer_key, holder);
  if (rv) {
    RAW_LOG(FATAL, "aborting due to pthread_setspecific error: %s", strerror(rv));
  }

  its.it_interval.tv_sec = 0;
  its.it_interval.tv_nsec = 1000000000 / frequency;
  its.it_value = its.it_interval;
  rv = timer_settime(timerid, 0, &its, 0);
  if (rv) {
    RAW_LOG(FATAL, "aborting due to timer_settime error: %s", strerror(errno));
  }
}
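
The interval arithmetic above is worth pinning down with numbers (illustrative only): at the default frequency of 100 Hz the timer fires every 10 ms (tv_nsec == 10,000,000), and the kMaxFrequency cap of 4000 Hz keeps the period at or above 250 microseconds.

    #include <cstdio>
    #include <initializer_list>

    int main() {
      // Period for a given profiling frequency, mirroring the timer setup above.
      for (long frequency : {100L, 4000L}) {
        std::printf("%ld Hz -> %ld ns between ticks\n",
                    frequency, 1000000000L / frequency);  // 10000000, 250000
      }
    }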
|
||||
#endif
|
||||
|
||||
void ProfileHandler::Init() {
|
||||
instance_ = new ProfileHandler();
|
||||
}
|
||||
|
||||
|
||||
ProfileHandler* ProfileHandler::Instance() {
|
||||
static tcmalloc::TrivialOnce once;
|
||||
|
||||
once.RunOnce(&Init);
|
||||
|
||||
assert(instance_ != nullptr);
|
||||
|
||||
return instance_;
|
||||
}

ProfileHandler::ProfileHandler()
    : timer_running_(false),
      interrupts_(0),
      callback_count_(0),
      allowed_(true),
      per_thread_timer_enabled_(false) {
  SpinLockHolder cl(&control_lock_);

  timer_type_ = (getenv("CPUPROFILE_REALTIME") ? ITIMER_REAL : ITIMER_PROF);
  signal_number_ = (timer_type_ == ITIMER_PROF ? SIGPROF : SIGALRM);

  // Get the frequency of interrupts, if specified.
  char junk;
  const char* fr = getenv("CPUPROFILE_FREQUENCY");
  if (fr != NULL && (sscanf(fr, "%u%c", &frequency_, &junk) == 1) &&
      (frequency_ > 0)) {
    // Limit to kMaxFrequency.
    frequency_ = (frequency_ > kMaxFrequency) ? kMaxFrequency : frequency_;
  } else {
    frequency_ = kDefaultFrequency;
  }

  if (!allowed_) {
    return;
  }

#if HAVE_LINUX_SIGEV_THREAD_ID
  // Do this early because we might be overriding the signal number.
  const char *per_thread = getenv("CPUPROFILE_PER_THREAD_TIMERS");
  const char *signal_number = getenv("CPUPROFILE_TIMER_SIGNAL");

  if (per_thread || signal_number) {
    if (timer_create) {
      CreateThreadTimerKey(&thread_timer_key);
      per_thread_timer_enabled_ = true;
      // Override the signal number if requested.
      if (signal_number) {
        signal_number_ = strtol(signal_number, NULL, 0);
      }
    } else {
      RAW_LOG(INFO,
              "Ignoring CPUPROFILE_PER_THREAD_TIMERS and\n"
              " CPUPROFILE_TIMER_SIGNAL due to lack of timer_create().\n"
              " Preload or link to librt.so for this to work");
    }
  }
#endif

  // If something else is using the signal handler,
  // assume it has priority over us and stop.
  if (!IsSignalHandlerAvailable()) {
    RAW_LOG(INFO, "Disabling profiler because signal %d handler is already in use.",
            signal_number_);
    allowed_ = false;
    return;
  }

  // Install the signal handler.
  struct sigaction sa;
  sa.sa_sigaction = SignalHandler;
  sa.sa_flags = SA_RESTART | SA_SIGINFO;
  sigemptyset(&sa.sa_mask);
  RAW_CHECK(sigaction(signal_number_, &sa, NULL) == 0, "sigprof (enable)");
}
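// Example (hypothetical invocation): with the defaults above,
//   $ CPUPROFILE_FREQUENCY=500 ./a.out
// samples via ITIMER_PROF/SIGPROF at 500 Hz, while
//   $ CPUPROFILE_REALTIME=1 ./a.out
// switches to ITIMER_REAL/SIGALRM wall-clock sampling.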

ProfileHandler::~ProfileHandler() {
  Reset();
#if HAVE_LINUX_SIGEV_THREAD_ID
  if (per_thread_timer_enabled_) {
    pthread_key_delete(thread_timer_key);
  }
#endif
}

void ProfileHandler::RegisterThread() {
  SpinLockHolder cl(&control_lock_);

  if (!allowed_) {
    return;
  }

  // Record the thread identifier and start the timer if profiling is on.
#if HAVE_LINUX_SIGEV_THREAD_ID
  if (per_thread_timer_enabled_) {
    StartLinuxThreadTimer(timer_type_, signal_number_, frequency_,
                          thread_timer_key);
    return;
  }
#endif
  UpdateTimer(callback_count_ > 0);
}

ProfileHandlerToken* ProfileHandler::RegisterCallback(
    ProfileHandlerCallback callback, void* callback_arg) {
  // Build the new list node before taking any locks: push_back may
  // allocate, and allocation must not happen under signal_lock_.
  ProfileHandlerToken* token = new ProfileHandlerToken(callback, callback_arg);
  CallbackList copy;
  copy.push_back(token);

  SpinLockHolder cl(&control_lock_);
  {
    ScopedSignalBlocker block(signal_number_);
    SpinLockHolder sl(&signal_lock_);
    callbacks_.splice(callbacks_.end(), copy);
  }

  ++callback_count_;
  UpdateTimer(true);
  return token;
}
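// Usage sketch (hypothetical caller): a component that wants profile ticks
// registers once and keeps the token for later cleanup:
//   ProfileHandlerToken* t = ProfileHandlerRegisterCallback(MyTick, &state);
//   ...
//   ProfileHandlerUnregisterCallback(t);
// where MyTick is an async-signal-safe function supplied by the caller.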

void ProfileHandler::UnregisterCallback(ProfileHandlerToken* token) {
  SpinLockHolder cl(&control_lock_);
  RAW_CHECK(callback_count_ > 0, "Invalid callback count");

  CallbackList copy;
  bool found = false;
  for (ProfileHandlerToken* callback_token : callbacks_) {
    if (callback_token == token) {
      found = true;
    } else {
      copy.push_back(callback_token);
    }
  }

  if (!found) {
    RAW_LOG(FATAL, "Invalid token");
  }

  {
    ScopedSignalBlocker block(signal_number_);
    SpinLockHolder sl(&signal_lock_);
    // Replace the callback list while holding the signal lock. Under this
    // lock we cannot call anything that might itself take a lock, including
    // malloc's locks, so we only swap here and clean up later.
    using std::swap;
    swap(copy, callbacks_);
  }
  // copy gets deleted after signal_lock_ is dropped.

  --callback_count_;
  if (callback_count_ == 0) {
    UpdateTimer(false);
  }
  delete token;
}

void ProfileHandler::Reset() {
  SpinLockHolder cl(&control_lock_);
  CallbackList copy;
  {
    ScopedSignalBlocker block(signal_number_);
    SpinLockHolder sl(&signal_lock_);
    // Only swap under this critical lock; deletion happens outside it.
    using std::swap;
    swap(copy, callbacks_);
  }
  for (ProfileHandlerToken* token : copy) {
    delete token;
  }
  callback_count_ = 0;
  UpdateTimer(false);
  // copy gets deleted here.
}

void ProfileHandler::GetState(ProfileHandlerState* state) {
  SpinLockHolder cl(&control_lock_);
  {
    ScopedSignalBlocker block(signal_number_);
    SpinLockHolder sl(&signal_lock_);  // Protects interrupts_.
    state->interrupts = interrupts_;
  }
  state->frequency = frequency_;
  state->callback_count = callback_count_;
  state->allowed = allowed_;
}

void ProfileHandler::UpdateTimer(bool enable) {
  if (per_thread_timer_enabled_) {
    // Ignore attempts to disable the per-thread timers: disabling is not
    // supported, and since they are already running, enabling is a no-op.
    return;
  }

  if (enable == timer_running_) {
    return;
  }
  timer_running_ = enable;

  struct itimerval timer;
  static const int kMillion = 1000000;
  int interval_usec = enable ? kMillion / frequency_ : 0;
  timer.it_interval.tv_sec = interval_usec / kMillion;
  timer.it_interval.tv_usec = interval_usec % kMillion;
  timer.it_value = timer.it_interval;
  setitimer(timer_type_, &timer, 0);
}
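// Worked example (illustrative): frequency_ == 4000 gives interval_usec ==
// 1000000 / 4000 == 250, i.e. it_interval == {0 s, 250 us}; with
// enable == false the interval is zero, which stops the itimer.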

bool ProfileHandler::IsSignalHandlerAvailable() {
  struct sigaction sa;
  RAW_CHECK(sigaction(signal_number_, NULL, &sa) == 0, "is-signal-handler avail");

  // We only take over the handler if the current one is unset.
  // It must be SIG_IGN or SIG_DFL, not some other function.
  // SIG_IGN must be allowed because when profiling is allowed but
  // not actively in use, this code keeps the handler set to SIG_IGN.
  // That setting will be inherited across fork+exec. In order for
  // any child to be able to use profiling, SIG_IGN must be treated
  // as available.
  return sa.sa_handler == SIG_IGN || sa.sa_handler == SIG_DFL;
}

void ProfileHandler::SignalHandler(int sig, siginfo_t* sinfo, void* ucontext) {
  int saved_errno = errno;
  // At this moment, instance_ must be initialized because the handler is
  // enabled in RegisterThread or RegisterCallback only after
  // ProfileHandler::Instance runs.
  ProfileHandler* instance = instance_;
  RAW_CHECK(instance != NULL, "ProfileHandler is not initialized");
  {
    SpinLockHolder sl(&instance->signal_lock_);
    ++instance->interrupts_;
    for (CallbackIterator it = instance->callbacks_.begin();
         it != instance->callbacks_.end();
         ++it) {
      (*it)->callback(sig, sinfo, ucontext, (*it)->callback_arg);
    }
  }
  errno = saved_errno;
}

// This module initializer registers the main thread, so it must be
// executed in the context of the main thread.
REGISTER_MODULE_INITIALIZER(profile_main, ProfileHandlerRegisterThread());

void ProfileHandlerRegisterThread() {
  ProfileHandler::Instance()->RegisterThread();
}

ProfileHandlerToken* ProfileHandlerRegisterCallback(
    ProfileHandlerCallback callback, void* callback_arg) {
  return ProfileHandler::Instance()->RegisterCallback(callback, callback_arg);
}

void ProfileHandlerUnregisterCallback(ProfileHandlerToken* token) {
  ProfileHandler::Instance()->UnregisterCallback(token);
}

void ProfileHandlerReset() {
  ProfileHandler::Instance()->Reset();
}

void ProfileHandlerGetState(ProfileHandlerState* state) {
  ProfileHandler::Instance()->GetState(state);
}

#else  // OS_CYGWIN

// ITIMER_PROF doesn't work under cygwin. ITIMER_REAL is available, but doesn't
// work as well for profiling, and also interferes with alarm(). Because of
// these issues, unless a specific need is identified, profiler support is
// disabled under Cygwin.
void ProfileHandlerRegisterThread() {
}

ProfileHandlerToken* ProfileHandlerRegisterCallback(
    ProfileHandlerCallback callback, void* callback_arg) {
  return NULL;
}

void ProfileHandlerUnregisterCallback(ProfileHandlerToken* token) {
}

void ProfileHandlerReset() {
}

void ProfileHandlerGetState(ProfileHandlerState* state) {
}

#endif  // OS_CYGWIN
139
3party/gperftools/src/profile-handler.h
Normal file
@@ -0,0 +1,139 @@
// -*- Mode: C++; c-basic-offset: 2; indent-tabs-mode: nil -*-
/* Copyright (c) 2009, Google Inc.
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are
 * met:
 *
 *     * Redistributions of source code must retain the above copyright
 * notice, this list of conditions and the following disclaimer.
 *     * Redistributions in binary form must reproduce the above
 * copyright notice, this list of conditions and the following disclaimer
 * in the documentation and/or other materials provided with the
 * distribution.
 *     * Neither the name of Google Inc. nor the names of its
 * contributors may be used to endorse or promote products derived from
 * this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
 * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
 * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
 * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
 * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
 * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
 * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
 * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
 * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 *
 * ---
 * Author: Nabeel Mian
 *
 * This module manages the cpu profile timers and the associated interrupt
 * handler. When enabled, all threads in the program are profiled.
 *
 * Any component interested in receiving a profile timer interrupt can do so by
 * registering a callback. All registered callbacks must be async-signal-safe.
 *
 * Note: This module requires the sole ownership of the configured timer and
 * signal. The timer defaults to ITIMER_PROF, can be changed to ITIMER_REAL by
 * the environment variable CPUPROFILE_REALTIME, or is changed to a POSIX timer
 * with CPUPROFILE_PER_THREAD_TIMERS. The signal defaults to SIGPROF/SIGALRM to
 * match the choice of timer and can be set to an arbitrary value using
 * CPUPROFILE_TIMER_SIGNAL with CPUPROFILE_PER_THREAD_TIMERS.
 */
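/*
 * Example (hypothetical invocation): to sample each thread with its own
 * POSIX timer and a custom signal, one might run
 *   $ CPUPROFILE_PER_THREAD_TIMERS=1 CPUPROFILE_TIMER_SIGNAL=40 ./a.out
 * where 40 is assumed to be an unused real-time signal on this system.
 */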

#ifndef BASE_PROFILE_HANDLER_H_
#define BASE_PROFILE_HANDLER_H_

#include "config.h"
#include <signal.h>
#include "base/basictypes.h"

/* Forward declaration. */
struct ProfileHandlerToken;

/*
 * Callback function to be used with ProfileHandlerRegisterCallback. This
 * function will be called in the context of the SIGPROF signal handler and
 * must be async-signal-safe. The first three arguments are the values
 * provided by the SIGPROF signal handler. We use void* to avoid using
 * ucontext_t on non-POSIX systems.
 *
 * Requirements:
 * - Callback must be async-signal-safe.
 * - None of the functions in ProfileHandler are async-signal-safe. Therefore,
 *   the callback function *must* not call any of the ProfileHandler functions.
 * - The callback is not required to be re-entrant. At most one instance of
 *   the callback can run at a time.
 *
 * Notes:
 * - The SIGPROF signal handler saves and restores errno, so the callback
 *   doesn't need to.
 * - Callback code *must* not acquire lock(s) to serialize access to data
 *   shared with code outside the signal handler (the callback must be
 *   async-signal-safe). If such serialization is needed, follow the model
 *   used by profiler.cc:
 *
 *   When code other than the signal handler modifies the shared data it must:
 *   - Acquire lock.
 *   - Unregister the callback with the ProfileHandler.
 *   - Modify shared data.
 *   - Re-register the callback.
 *   - Release lock.
 *   and the callback code gets lockless, read-write access to the data.
 */
typedef void (*ProfileHandlerCallback)(int sig, siginfo_t* sig_info,
                                       void* ucontext, void* callback_arg);
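/*
 * Sketch (hypothetical, not part of this API): a minimal conforming callback
 * that only bumps a lock-free counter, per the requirements above:
 *
 *   static volatile sig_atomic_t g_ticks;
 *   static void CountTicks(int sig, siginfo_t* info, void* uc, void* arg) {
 *     ++g_ticks;  // async-signal-safe: no locks, no allocation
 *   }
 *
 * registered via ProfileHandlerRegisterCallback(CountTicks, NULL).
 */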

/*
 * Registers a new thread with the profile handler and should be called only
 * once per thread. The main thread is registered at program startup. This
 * routine is called by the Thread module in google3/thread whenever a new
 * thread is created. This function is not async-signal-safe.
 */
void ProfileHandlerRegisterThread();

/*
 * Registers a callback routine. This callback function will be called in the
 * context of the SIGPROF handler, so it must be async-signal-safe. The
 * returned token is to be used when unregistering this callback via
 * ProfileHandlerUnregisterCallback. Registering the first callback enables
 * the SIGPROF signal handler. The caller must not free the returned token.
 * This function is not async-signal-safe.
 */
ProfileHandlerToken* ProfileHandlerRegisterCallback(
    ProfileHandlerCallback callback, void* callback_arg);

/*
 * Unregisters a previously registered callback. Expects the token returned
 * by the corresponding ProfileHandlerRegisterCallback and asserts that the
 * passed token is valid. Unregistering the last callback disables the SIGPROF
 * signal handler. It waits for the currently running callback to complete
 * before returning. This function is not async-signal-safe.
 */
void ProfileHandlerUnregisterCallback(ProfileHandlerToken* token);

/*
 * FOR TESTING ONLY
 * Unregisters all the callbacks, stops the timers (if shared) and disables the
 * SIGPROF handler. All the threads, including the main thread, need to be
 * re-registered after this call. This function is not async-signal-safe.
 */
void ProfileHandlerReset();

/*
 * Stores the profile handler's current state. This function is not
 * async-signal-safe.
 */
struct ProfileHandlerState {
  int32 frequency;       /* Profiling frequency */
  int32 callback_count;  /* Number of callbacks registered */
  int64 interrupts;      /* Number of interrupts received */
  bool allowed;          /* Profiling is allowed */
};
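/*
 * Usage sketch (hypothetical): poll the handler state, e.g. from a test:
 *   ProfileHandlerState state;
 *   ProfileHandlerGetState(&state);
 *   // state.interrupts now holds the number of ticks delivered so far.
 */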
void ProfileHandlerGetState(struct ProfileHandlerState* state);

#endif /* BASE_PROFILE_HANDLER_H_ */