pugixml has never performed Unicode validation, or validation of Unicode
character classes in names/tags, but this wasn't obvious from the
documentation.
Fixes #162.
We support Latin-1 and automatically detect it by parsing the encoding
from the document declaration; both of these facts were omitted from the
description of the automatic detection.
Additionally, the description has been rewritten to be more concise and
a bit more abstract - there's no need to specify the algorithm precisely
here.
Fixes #158.
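A minimal sketch of the resulting behavior, assuming the default
encoding_auto mode (the document contents are illustrative):

    #include "pugixml.hpp"

    int main()
    {
        // the declaration names Latin-1; automatic detection picks this up
        // and converts the 0xE4 byte ('ä' in Latin-1) to UTF-8
        const char source[] =
            "<?xml version='1.0' encoding='ISO-8859-1'?><node value='\xE4'/>";

        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_buffer(
            source, sizeof(source) - 1, pugi::parse_default, pugi::encoding_auto);

        return result ? 0 : 1;
    }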
Using LTCG restricts the resulting .lib files to a specific compiler
version, causing version conflicts when the compiler gets updated
without changing the toolset version. VS2017 now has two incompatible
compilers, 15.0 and 15.3, both of which use toolset v141...
These tests simulate various error conditions when reading data from
streams - seeks failing in seekable streams, underflow throwing an
exception causing read to set badbit, etc.
This change also adjusts memory thresholds to reliably trigger an
out-of-memory error during construction of the final buffer for
non-seekable streams.
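One of these conditions sketched out - a stream buffer whose underflow
always throws, which the standard library converts into badbit on read
(the type name is made up):

    #include <istream>
    #include <streambuf>
    #include <stdexcept>
    #include <cassert>

    struct throwing_streambuf: std::streambuf
    {
        int_type underflow() { throw std::runtime_error("underflow"); }
    };

    int main()
    {
        throwing_streambuf buf;
        std::istream in(&buf);

        char ch;
        in.read(&ch, 1); // the exception is caught inside read()

        assert(in.bad()); // ...and surfaces as badbit on the stream
    }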
It's not clear whether we still need PUGI__MSVC_CRT_VERSION, but for now
it's more consistent to use it for _snprintf_s, since that relies on a
CRT extension rather than a compiler feature.
These functions were deprecated via comments in 1.5 but never got the
deprecated attribute; now is the time!
Using deprecated functions produces a warning; to silence it, this
change moves the relevant tests to a separate translation unit that has
deprecation disabled.
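For reference, the deprecated attribute is compiler-specific; a sketch
of the typical markup (the macro name here is illustrative):

    #if defined(__GNUC__)
    #   define EXAMPLE_DEPRECATED __attribute__((deprecated))
    #elif defined(_MSC_VER)
    #   define EXAMPLE_DEPRECATED __declspec(deprecated)
    #else
    #   define EXAMPLE_DEPRECATED
    #endif

    // every call site now triggers a deprecation warning
    EXAMPLE_DEPRECATED void old_function();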
Unify build paths in all MSBuild VS projects and extract common build
logic into functions.
Note that this change updates both VS2010 and VS2013 projects to have
more predictable output paths and a fixed output file name (pugixml).
We'd like to build pugixml with both static & dynamic CRT and put it
all in one NuGet package.
CoApp sort of allows us to do this via dynamic/static pivots, but it
does not let us customize the names of the pivots and additionally has
some bugs with the project setup. Their project modifications are also
much more complicated - really, at this point we should do this
ourselves.
Create a simple native NuGet package with a Linkage setting that picks
the right library, and package all libraries appropriately. Note that we
use the unified path syntax to make it simple to just get the right .lib
file from the toolset/platform/configuration/linkage combination.
The macro only works correctly when the input argument is an array with
a statically known size - for pointers, or arrays that have decayed to
pointers, it silently produces the wrong result.
While this is unlikely to surface issues that aren't caught in
tests/code review, use _countof for MSVC to prevent such code from
compiling.
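A sketch of the guard, with a hypothetical macro name standing in for
the real one:

    #include <stdlib.h> // _countof on MSVC

    #ifdef _MSC_VER
    #   define EXAMPLE_COUNTOF(a) _countof(a) // rejects pointers at compile time
    #else
    #   define EXAMPLE_COUNTOF(a) (sizeof(a) / sizeof((a)[0]))
    #endif

    void example(const char* ptr)
    {
        char buf[32];
        size_t count = EXAMPLE_COUNTOF(buf); // 32, as expected
        (void)count;

        // EXAMPLE_COUNTOF(ptr) would silently yield a bogus value with the
        // sizeof-based fallback, but fails to compile under MSVC's _countof
        (void)ptr;
    }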
Correctly check for error codes, and don't run the .bat file since it
doesn't work anyway (the variables it sets aren't accessible from
PowerShell, and the path to the script doesn't seem to be the same in
VS2017).
Add a memory allocation failure test for concat with a very large list,
and make sure we have every single axis covered with and without a
predicate, and with and without a previous step.
Instead of branching code at each invocation site, use variadic macros
to create a wrapping macro that uses snprintf for a buffer of
statically known size.
Variadic macros are supported by all C++11 compilers, as is snprintf;
on MSVC 2005+ we don't necessarily have snprintf, but we can use
_snprintf_s with _TRUNCATE to get the same behavior. In all other cases
we fall back to sprintf, which (theoretically) can lead to a stack
buffer overflow.
In practice all snprintf calls in pugixml use buffers that should be
large enough to never overflow, but snprintf is safe even if this is
not the case.
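A sketch of the resulting macro, along the lines described above (the
name is illustrative):

    #include <stdio.h>

    #if __cplusplus >= 201103
    #   define EXAMPLE_SNPRINTF(buf, ...) snprintf(buf, sizeof(buf), __VA_ARGS__)
    #elif defined(_MSC_VER) && _MSC_VER >= 1400
    #   include <stdlib.h> // _countof
    #   define EXAMPLE_SNPRINTF(buf, ...) _snprintf_s(buf, _countof(buf), _TRUNCATE, __VA_ARGS__)
    #else
    #   define EXAMPLE_SNPRINTF sprintf // no bound: relies on the buffer being big enough
    #endif

    void example(int value)
    {
        char buf[32]; // statically known size, visible to sizeof/_countof

        EXAMPLE_SNPRINTF(buf, "value=%d", value);
    }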
We use references to arrays elsewhere in the codebase and there's just
one caller for this function so it's easier to fix the size.
This will simplify snprintf refactoring.
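The pattern in question, sketched with illustrative names:

    #include <stdio.h>

    // a reference to a fixed-size array (rather than char*) keeps the size
    // visible inside the function, so sizeof works as intended
    void write_int(char (&buffer)[64], int value)
    {
        snprintf(buffer, sizeof(buffer), "%d", value); // sizeof(buffer) == 64
    }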
codecov.io does not seem to support lcov regex customization;
additionally, we can't just replace unreachable with LCOV_LINE_EXCL
in the gcov file - so we have to patch the ##### indicator (which marks
a line that was never hit) to 1.
See also https://github.com/codecov/support/issues/144
New tests try to load a folder as an XML document, and also a device.
Both are intended to exercise otherwise unreachable error paths in the
load_file implementation.
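A sketch of the folder case (the path is illustrative):

    #include "pugixml.hpp"
    #include <assert.h>

    int main()
    {
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_file("tests/data"); // a directory

        assert(!result); // an I/O error result, not a crash
    }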
This adds tests that complete branch coverage in compact pointer
encoding/decoding code (previously first_attribute was always encoded
using compact encoding in the entire test suite).
The integer sanitizer flags unsigned integer overflow in several
functions in pugixml; unsigned integer overflow is well defined, but it
may not necessarily be intended.
Apart from the hash functions, both string_to_integer and
integer_to_string use unsigned overflow - string_to_integer uses it to
perform two's-complement negation so that the bulk of the operation can
run on unsigned integers. This makes it possible to simplify overflow
checking. Similarly, integer_to_string negates the number before
generating a decimal representation, but negation is impossible without
unsigned overflow or special-casing certain integer limits.
For now, just silence the unsigned overflow diagnostics using a special
attribute; also move the unsigned overflow into string_to_integer from
get_value_* so that fewer functions need to be marked with the
attribute.
Fixes #133.
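A sketch of both pieces - the unsigned negation and the silencing
attribute (the macro name is made up; the attribute spelling shown is
clang's):

    #if defined(__clang__)
    #   define EXAMPLE_UNSIGNED_OVERFLOW __attribute__((no_sanitize("unsigned-integer-overflow")))
    #else
    #   define EXAMPLE_UNSIGNED_OVERFLOW
    #endif

    // 0 - v wraps modulo 2^N, which is exactly two's-complement negation;
    // converting back to int is value-preserving on two's-complement targets,
    // and even the INT_MIN magnitude round-trips without signed overflow
    EXAMPLE_UNSIGNED_OVERFLOW int negate(unsigned int v)
    {
        return (int)(0 - v);
    }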
This reverts commit 79109a8546f963d17522d75112cffcfd8cbe35fc.
This warning does not happen on gcc-4.8.4; the workaround introduces an
unsigned integer overflow which results in a runtime error when compiled
with integer sanitizer.
This is accomplished by putting a // fallthrough comment in the right
place. This seems to be more portable than an attribute-based solution
like [[fallthrough]] or __attribute__((fallthrough)).
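The shape of the fix, on a made-up switch (gcc's -Wimplicit-fallthrough
recognizes comments like this at its default level):

    int classify(int kind)
    {
        int score = 0;

        switch (kind)
        {
        case 0:
            score += 1;
            // fallthrough
        case 1:
            score += 2;
            break;

        default:
            break;
        }

        return score;
    }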
Instead of separate implementations for find and insert, use just one
that can do both. This reduces code size and simplifies code coverage;
the resulting code is close to what we had in terms of performance, and
since the hash table is a fallback it should not affect any real
workloads.
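A minimal sketch of the combined operation, assuming an open-addressing
table with a power-of-two size (all names are made up):

    #include <stddef.h>

    struct slot { const void* key; void* value; };

    // one probe loop serves both purposes: a hit returns the existing value
    // slot, a miss claims the first empty bucket and returns that instead
    void** find_or_insert(slot* table, size_t size, const void* key, size_t hash)
    {
        for (size_t probe = 0; probe < size; ++probe)
        {
            slot& s = table[(hash + probe) & (size - 1)];

            if (s.key == key)
                return &s.value; // find

            if (s.key == 0)
            {
                s.key = key;     // insert
                return &s.value;
            }
        }

        return 0; // full; real code would grow and rehash before this happens
    }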
Instead of a complicated partitioning scheme that tries to maintain the
equal area in the middle, use a scheme where we keep the equal area at
the left end of the array and then move it to the middle.
Since sorted arrays generally don't contain many duplicates, this extra
copy is not too expensive, and it significantly simplifies the logic
while still maintaining good complexity for sorting arrays with many
equal elements (unlike Hoare partitioning).
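A sketch of the scheme described above, assuming a strict-weak-ordering
predicate and element equality:

    #include <utility> // std::swap

    // invariant during the scan: [begin, eq) == pivot, [eq, lt) < pivot,
    // [lt, gt) unscanned, [gt, end) > pivot
    template <typename T, typename Pred>
    void partition3(T* begin, T* end, T pivot, const Pred& less,
                    T** out_eq_begin, T** out_eq_end)
    {
        T* eq = begin;
        T* lt = begin;
        T* gt = end;

        while (lt < gt)
        {
            if (less(*lt, pivot))
                lt++;
            else if (*lt == pivot)
                std::swap(*eq++, *lt++);
            else
                std::swap(*lt, *--gt);
        }

        // the extra copy: move the equal run from the front into the middle,
        // producing the final < = > layout that the sort recurses around
        T* eq_begin = gt;

        for (T* it = begin; it != eq; ++it)
            std::swap(*it, *--eq_begin);

        *out_eq_begin = eq_begin;
        *out_eq_end = gt;
    }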
Instead of a median of 9, just use a median of 3 - it performs pretty
much identically on some internal performance tests, despite doing a few
more comparisons in some cases.
Finally, change the insertion sort threshold to 16 elements since that
appears to have slightly better performance.
The previous implementation opted to do two comparisons per element in
the sorted case in order to remove one iterator bounds check per shifted
element when we actually need to copy. In our case, however, the
comparator is pretty expensive (except for remove_duplicates, which is
fast as it is), so an extra object comparison hurts much more than an
iterator comparison saves.
This makes sorting by document order up to 3% faster for random
sequences.
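A sketch of the resulting inner loop - one object comparison per shifted
element, with the iterator bounds check kept:

    template <typename T, typename Pred>
    void insertion_sort(T* begin, T* end, const Pred& less)
    {
        if (begin == end)
            return;

        for (T* it = begin + 1; it != end; ++it)
        {
            T val = *it;
            T* hole = it;

            // 'hole != begin' is the iterator check the previous version
            // traded away for a second object comparison
            while (hole != begin && less(val, *(hole - 1)))
            {
                *hole = *(hole - 1);
                hole--;
            }

            *hole = val;
        }
    }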
Instead of relying on a specific string in the parse result, use the
allocator's error state to report the error, and then convert it to a
string if necessary.
We currently have to manually trigger the OOM error in two places
because we use the global allocator in rare cases; we don't really need
to do this, so it will be cleaned up later.
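A minimal sketch of the idea (names are illustrative):

    #include <stdlib.h>

    struct allocator_with_error_state
    {
        bool oom;

        allocator_with_error_state(): oom(false) {}

        void* allocate(size_t size)
        {
            void* result = malloc(size);
            if (!result) oom = true; // record the failure and keep going
            return result;
        }
    };

    // after parsing, the single flag is converted into a result status (and
    // a string if necessary) instead of threading strings through the code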
Add tests for a PI erroring out exactly at the buffer boundary with
non-zero-terminated buffers (so we have to clear the last character,
which changes the parsing flow slightly), and a test that makes sure
parse_embed_pcdata works properly with XML fragments where PCDATA can
be at the root level but can't be embedded into the document node.
The code works fine regardless of the *j->name check, and omitting it
makes the code more symmetric between the "count" and "write" stages;
additionally, this improves coverage - due to how strcpy_insitu works,
it's not really possible to get an empty non-NULL name in a node.
The only point was to try to test all paths where we can run out of
memory while decoding something. It seems like it may be impossible to
actually do this, given that we can't exercise all paths - wchar_t size
detection is done at runtime...
This makes sure the failure paths of all .reserve calls are covered.
These tests don't explicitly verify that reserve is present on all paths
- that is much harder to test, since not all modifications require
reserve to be called - so we'll have to rely on a combination of
automated testing and sanity checking for this.
Also add more out-of-memory parsing coverage tests.