this adds a version guard like the one in the protobuf C++
implementation. it ensures that protoc-c and <protobuf-c.h> come from
the exact same version of protobuf-c.
this replaces the changes in Issue #53 with a slightly different way of
representing / retrieving the version number.
protobuf_c_version() returns the version of the *library* as a string.
protobuf_c_version_number() returns the version of the *library* as an
integer.
PROTOBUF_C_VERSION is the version of the *headers* as a string constant.
PROTOBUF_C_VERSION_NUMBER is the version of the *headers* as an integer.
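for illustration, a minimal sketch of how the guard can be used,
assuming the major*1000000 + minor*1000 + patch encoding for the
integer version (the encoding and the 1000000 threshold here are
assumptions, not taken from the actual generated code):

    #include <stdio.h>
    #include <protobuf-c/protobuf-c.h>

    /* compile-time check against the headers we are built with */
    #if PROTOBUF_C_VERSION_NUMBER < 1000000
    # error "protobuf-c headers are older than 1.0.0"
    #endif

    int main(void)
    {
        /* run-time check against the library we are linked with */
        if (protobuf_c_version_number() != PROTOBUF_C_VERSION_NUMBER) {
            fprintf(stderr, "version mismatch: headers %s, library %s\n",
                    PROTOBUF_C_VERSION, protobuf_c_version());
            return 1;
        }
        return 0;
    }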
rename PROTOBUF_C_FIELD_FLAGS_PACKED to PROTOBUF_C_FIELD_FLAG_PACKED.
rename ProtobufCFieldFlagType to ProtobufCFieldFlag.
wrap some particularly long lines.
update documentation.
for clarity, use a "uint32_t" instead of "unsigned" for the 'flags'
field in _ProtobufCFieldDescriptor.
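as a small sketch of how the renamed flag and the uint32_t 'flags'
field fit together (the is_packed() helper is hypothetical):

    #include <protobuf-c/protobuf-c.h>

    /* returns nonzero if a field descriptor carries the packed flag;
     * PROTOBUF_C_FIELD_FLAG_PACKED was formerly spelled
     * PROTOBUF_C_FIELD_FLAGS_PACKED */
    static int is_packed(const ProtobufCFieldDescriptor *field)
    {
        return (field->flags & PROTOBUF_C_FIELD_FLAG_PACKED) != 0;
    }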
Originally, someone complained about protobuf_c_message_unpack()
using alloca() to allocate the temporary bitmap used to verify that
all required fields were present in the unpacked message (Issue #60).
Commit 248eae1d eliminated the use of alloca(), replacing the
variable-length alloca()'d bitmap with a 16 byte stack-allocated
bitmap, hashing field numbers mod 128.
In PR #137, Andrei Nigmatulin noted problems with this approach:
    Apparently 248eae1d has introduced a serious problem into the
    protobuf-c decoder.

    Originally the function of required_fields_bitmap was to prevent
    the decoder from returning incomplete messages. That means each
    required field in the message either must have a default_value
    or be present in the protobuf stream. The purpose of this
    behaviour was to provide the user with a 100% complete
    ProtobufCMessage struct on return from
    protobuf_c_message_unpack(), which does not need to be checked
    for completeness immediately afterwards. This is exactly how the
    original protobuf C++ decoder behaves. The patch 248eae1d broke
    this functionality by hashing bits of required fields instead of
    storing them separately.

    Consider a protobuf message with 129 fields where the first and
    the last fields are set as 'required'. In this case it is
    possible to trick the decoder into returning an incomplete
    ProtobufCMessage struct with missing required fields by
    providing only one of the two fields in the source byte stream.
    This can be considered a security issue as well, because user
    code does not expect incomplete messages with missing required
    fields from protobuf_c_message_unpack(). Such a change could
    introduce undefined behaviour into user programs.
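to make the collision concrete, here is a minimal sketch of the
hash-based marking (names are illustrative, not the actual decoder
internals):

    #include <string.h>

    static unsigned char bitmap[16];   /* 16 bytes = 128 bits */

    /* mark the required field at index 'idx' as seen, mod 128 */
    static void mark_seen(unsigned idx)
    {
        unsigned bit = idx % 128;
        bitmap[bit / 8] |= (unsigned char)(1u << (bit % 8));
    }

    /* mark_seen(0) and mark_seen(128) set the identical bit, so a
     * stream supplying only one of the two required fields passes
     * the completeness check for both */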
This patch is based on Andrei's fix and restores the exact detection of
missing required fields, but avoids doing a separate allocation for the
required fields bitmap except for messages whose descriptors define a
large number of fields. In the "typical" case where the message
descriptor has <= 128 fields we can just use a 16 byte array allocated
on the stack. (Note that the hash-based approach also used a 16 byte
stack allocated array.)
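a minimal sketch of the allocation strategy (the helper and its
names are hypothetical, not the actual protobuf-c internals):

    #include <stdlib.h>
    #include <string.h>

    /* return a zeroed bitmap with one bit per field, using the
     * caller's 16-byte stack buffer when the descriptor has <= 128
     * fields and falling back to the heap otherwise */
    static unsigned char *
    bitmap_alloc(unsigned n_fields, unsigned char stack_buf[16])
    {
        size_t len = (n_fields + 7) / 8;

        if (len <= 16) {
            memset(stack_buf, 0, 16);
            return stack_buf;          /* common case: no allocation */
        }
        return calloc(1, len);         /* rare case: > 128 fields */
    }

the caller frees the result only when it differs from stack_buf,
i.e. only when the heap path was taken.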
protoc may not be on the default PATH, so augment $PATH with the
executable path registered by pkg-config for the protobuf package.
additionally declare PROTOC as a precious variable, thus allowing it to
be explicitly set by the user at ./configure time.
based on a patch from Andrei Nigmatulin.
the protobuf header files may be installed in a non-standard location
and thus we need to use the CFLAGS registered for protobuf in pkg-config
in order to find them.
based on a patch from Andrei Nigmatulin.
if pkg-config is installed, the libprotobuf-c .pc file will be
installed; if pkg-config is not installed, the .pc file won't be
installed.
this behavior only applies when building with ./configure
--disable-protoc, since otherwise pkg-config is required in order to
detect the protobuf dependency.
this is conditional on whether the linker supports version scripts, for
which we use the gl_LD_VERSION_SCRIPT macro from the gnulib project.
on platforms without version scripts, we fall back to libtool's
-export-symbols-regex.
it's possible for the <google/protobuf/compiler/> header files to be
shipped in a separate package (e.g., debian's libprotoc-dev). check for
this at configure time rather than allowing the build process to fail.
there is some confusion with regard to the use of lower case letters in
enum values. take the following message definition:
    message LowerCase {
      enum CaseEnum {
        UPPER = 1;
        lower = 2;
      }
      optional CaseEnum value = 1 [default = lower];
    }
this generates the following C enum:
    typedef enum _LowerCase__CaseEnum {
      LOWER_CASE__CASE_ENUM__UPPER = 1,
      LOWER_CASE__CASE_ENUM__lower = 2
        _PROTOBUF_C_FORCE_ENUM_TO_BE_INT_SIZE(LOWER_CASE__CASE_ENUM)
    } LowerCase__CaseEnum;
note that the case of the enum value 'lower' was preserved in the C
symbol name as 'LOWER_CASE__CASE_ENUM__lower', but that the _INIT macro
references the same enum value with the (non-existent) C symbol name
'LOWER_CASE__CASE_ENUM__LOWER':
    #define LOWER_CASE__INIT \
      { PROTOBUF_C_MESSAGE_INIT (&lower_case__descriptor) \
        , 0,LOWER_CASE__CASE_ENUM__LOWER }
additionally, the generated ProtobufCEnumValue array also refers to
the same enum value by its (non-existent) upper-cased name:
    const ProtobufCEnumValue lower_case__case_enum__enum_values_by_number[2] =
    {
      { "UPPER", "LOWER_CASE__CASE_ENUM__UPPER", 1 },
      { "lower", "LOWER_CASE__CASE_ENUM__LOWER", 2 },
    };
we should preserve the existing behavior of copying the case from the
enum values in the message definition and fix up the places where the
(non-existent) upper case version is used, rather than changing the
enum definition itself to match the case used in the _INIT macro and
enum_values_by_number array. it's possible that existing working code
uses enum values with lower case letters and would be affected by
such a change.
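with the fix, those references follow the case as written in the
.proto file; a sketch of the corrected output for the example above:

    #define LOWER_CASE__INIT \
      { PROTOBUF_C_MESSAGE_INIT (&lower_case__descriptor) \
        , 0,LOWER_CASE__CASE_ENUM__lower }

    const ProtobufCEnumValue lower_case__case_enum__enum_values_by_number[2] =
    {
      { "UPPER", "LOWER_CASE__CASE_ENUM__UPPER", 1 },
      { "lower", "LOWER_CASE__CASE_ENUM__lower", 2 },
    };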
incidentally, google's C++ protobuf implementation preserves case in
enum values. protoc --cpp_out generates the following enum declaration
for the message descriptor above:
    enum LowerCase_CaseEnum {
      LowerCase_CaseEnum_UPPER = 1,
      LowerCase_CaseEnum_lower = 2
    };
Still need to add the comments in the source code. Currently I've
seeded it with the libprotobuf-c files and configured it to generate
man pages and html pages. That might not be ideal, but it makes it
easy for me to check things (html is nicer, but man pages are handier
on remote servers).
It’s important to note that, differently from what we’ve seen for
the serial test harness (see Parallel Test Harness), the
AM_TESTS_ENVIRONMENT and TESTS_ENVIRONMENT variables cannot be used
to define a custom test runner; the LOG_COMPILER and LOG_FLAGS (or
their extension-specific counterparts) should be used instead:
    ## This is WRONG!
    AM_TESTS_ENVIRONMENT = PERL5LIB='$(srcdir)/lib' $(PERL) -Mstrict -w

    ## Do this instead.
    AM_TESTS_ENVIRONMENT = PERL5LIB='$(srcdir)/lib'; export PERL5LIB;
    LOG_COMPILER = $(PERL)
    AM_LOG_FLAGS = -Mstrict -w
(http://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness.html)
"As with the serial harness above, by default one status line is printed
per completed test, and a short summary after the suite has completed.
However, standard output and standard error of the test are redirected
to a per-test log file, so that parallel execution does not produce
intermingled output. The output from failed tests is collected in the
test-suite.log file. If the variable ‘VERBOSE’ is set, this file is
output after the summary."
(http://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness.html)