This test is very sensitive to the particular implementation of union
aggregation; for now let's disable this.
We need a more robust way to test union allocation failures.
The behavior on Linux is very different between kernel versions, and it
triggers an unexpected OOM during sanitizer runs because somehow the
size is reported to be LONG_MAX. It's not clear that it helps us cover
any paths we don't cover otherwise - it would be nice to be able to test
failing to load a multi-gigabyte file on a 32-bit system, but we can't
do this easily atm anyway.
We had a few places in test code and library source where we used an
implicit float->double cast; while it should preserve the value exactly,
gcc/clang implement this warning (-Wdouble-promotion) to make sure uses
of double are intentional.
This change also adds the warning to the Makefile to make sure we don't
regress on it.
Fixes #243.
This change modifies the table entries for ctx_special_attr to treat the
TAB character as special, which makes the output code escape it.
Before this change, trying to use TAB in an attribute value would output
it verbatim; during subsequent parsing, pugixml - and other compliant
parsers - would apply attribute-value normalization, turning the TAB
into a space and losing the original value.
Using &#9; fixes this; if an input document has &#9; in an attribute
value, that gets unescaped into \t during parsing and escaped back into
&#9; during output, which means we can now roundtrip values like this.
Fixes #242.
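For illustration, a minimal sketch of the roundtrip described above
(this is not the actual test code):

    #include <cassert>
    #include <sstream>
    #include <string>
    #include "pugixml.hpp"

    int main()
    {
        // Store a TAB inside an attribute value.
        pugi::xml_document doc;
        doc.append_child("node").append_attribute("attr") = "a\tb";

        // Saving now escapes the TAB as &#9; instead of writing it verbatim.
        std::ostringstream oss;
        doc.save(oss);

        // Reparsing the output restores the original value, so the
        // roundtrip is lossless.
        pugi::xml_document reparsed;
        reparsed.load_string(oss.str().c_str());
        assert(std::string(reparsed.child("node").attribute("attr").value()) == "a\tb");
    }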
The Intel compiler sets flush-to-zero flags by default, which causes our
denorm test to produce 0.0. So make sure that denorms work on the FPU
before testing the string output.
Fixes #218.
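A sketch of the kind of runtime guard this implies (the helper name is
made up):

    #include <limits>

    // Returns true if the FPU preserves denormals, i.e. flush-to-zero is off.
    static bool denorms_supported()
    {
        // volatile prevents the compiler from folding the comparison away.
        volatile double denorm = std::numeric_limits<double>::denorm_min();
        return denorm != 0.0;
    }

    // In the test, the string output of a denormal value is only checked
    // when denorms_supported() returns true.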
On some Debian systems it looks like we *can* open the current folder as
a file and read its contents, but parsing the result produces an empty
document. We now handle this case as well.
Fixes #225.
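A hedged sketch of what the handled case looks like from the API side
(the exact error status is an assumption):

    #include "pugixml.hpp"

    int main()
    {
        // Loading a directory should now fail cleanly instead of producing
        // an empty document on systems where the open/read succeeds.
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_file(".");
        // result evaluates to false; the status reports a load failure.
    }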
Several tests got the buffer size wrong when sizeof(char_t)>1, and one
test didn't meet the carefully tuned allocation criteria under compact
mode due to the hash table usage and had to be changed a bit.
Apparently at some point OSX behavior when reading /dev/tty switched
from "can't open the file" to "the file can be opened and 0 bytes can be
read from it", which generates a wrong error and doesn't exercise the
code path we care about.
This change implements move ctor and assign support for xml_document.
All node handles remain valid after the move and point to the new
document; the only exception is the document node itself, which remains
unmoved.
Move is O(document size) in theory because it needs to relocate immediate
document children (there is just one in conformant documents) and all
memory pages; in practice the memory pages only need the header adjusted,
which is ~0.1% of the actual data size.
Move requires no allocations in general, except when using compact mode,
where some moves need to grow the hash table, which can fail (throw).
Fixes #104.
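A brief sketch of the guarantee on node handles (the assert is just
illustrative):

    #include <cassert>
    #include <utility>
    #include "pugixml.hpp"

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<root><child/></root>");
        pugi::xml_node child = doc.child("root").child("child");

        // Move the document; the tree itself is not copied.
        pugi::xml_document moved = std::move(doc);

        // Existing node handles stay valid and now refer to the new document.
        assert(child == moved.child("root").child("child"));
    }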
We now check that appending a child to a moved document performs no
allocations - this is already the case, but if we neglected to copy the
allocator state this test would fail.
These just verify that the move ctor/assignment operator work as
expected in simple cases - there are a number of ways in which the
internal structure can be incorrect...
These tests simulate various error conditions when reading data from
streams - seeks failing in seekable streams, underflow throwing an
exception causing read to set badbit, etc.
This change also adjusts memory thresholds to cause a reliable
out-of-memory failure during construction of the final buffer for
non-seekable streams.
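For example, a stream whose refill always throws can be simulated with a
custom streambuf; a minimal sketch (the class name is made up):

    #include <istream>
    #include <stdexcept>
    #include <streambuf>
    #include "pugixml.hpp"

    // A streambuf whose refill always throws, simulating a failing read.
    struct throwing_streambuf : std::streambuf
    {
        int_type underflow() override
        {
            throw std::runtime_error("simulated read failure");
        }
    };

    int main()
    {
        throwing_streambuf buf;
        std::istream in(&buf);

        // With the default exception mask the stream catches the exception
        // and sets badbit, so the load reports an error instead of throwing.
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load(in);
        // result.status indicates failure (e.g. an I/O error).
    }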
These functions were deprecated via comments in 1.5 but never got the
deprecated attribute; now is the time!
Using deprecated functions produces a warning; to silence it, this
change moves the relevant tests to a separate translation unit that has
deprecation disabled.
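The attribute is typically applied through a compiler-specific macro; a
rough sketch of the approach (the macro name, conditions and class below
are illustrative, not a copy of the library's header):

    // Pick a deprecation attribute depending on the compiler.
    #if defined(__GNUC__) || defined(__clang__)
    #   define DEPRECATED __attribute__((deprecated))
    #elif defined(_MSC_VER)
    #   define DEPRECATED __declspec(deprecated)
    #else
    #   define DEPRECATED
    #endif

    // Marking an old-style accessor so callers get a compile-time warning.
    class document
    {
    public:
        DEPRECATED void old_api(); // use new_api() instead
        void new_api();
    };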
Add a memory allocation failure test for concat with a very large list
and make sure we have every single axis covered with and without a
predicate, with and without a previous step.
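For reference, the axis variants being covered look roughly like this
(element names are made up):

    #include "pugixml.hpp"

    void cover_axis_variants()
    {
        pugi::xml_document doc;
        doc.load_string("<root><group/><item/><item/></root>");
        pugi::xml_node root = doc.child("root");

        // Axis as the first step, without and with a predicate.
        root.select_nodes("child::item");
        root.select_nodes("child::item[2]");

        // Axis with a previous step, without and with a predicate.
        root.select_nodes("child::group/following-sibling::item");
        root.select_nodes("child::group/following-sibling::item[1]");
    }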
New tests try to load a folder as an XML document, and a device. Both
are intended to exercise some otherwise unreachable error paths in the
load_file implementation.
This adds tests that complete branch coverage in compact pointer
encoding/decoding code (previously first_attribute was always encoded
using compact encoding in the entire test suite).
Add tests for PI erroring exactly at the buffer boundary with
non-zero-terminated buffers (so we have to clear the last character,
which changes the parsing flow slightly) and a test that makes sure
parse_embed_pcdata works properly with XML fragments where PCDATA can be
at the root level but can't be embedded into the document node.
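A rough sketch of the fragment case mentioned above (the buffer contents
are illustrative):

    #include <cstring>
    #include "pugixml.hpp"

    int main()
    {
        // Parse a sized (not necessarily null-terminated) buffer as a
        // fragment with embedded PCDATA enabled.
        const char data[] = "text<node>value</node>";
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_buffer(data, std::strlen(data),
            pugi::parse_default | pugi::parse_fragment | pugi::parse_embed_pcdata);

        // The root-level "text" stays a PCDATA child of the document node,
        // since PCDATA can't be embedded into the document node itself.
    }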
The only point was to try to test all paths where we can run out of
memory while decoding something. It seems like it may be impossible to
actually do this, given that we can't run all paths since wchar_t size
detection is done at runtime...
This makes sure the failure paths of all .reserve calls are covered.
These tests don't explicitly test whether reserve is present on all
paths - this is much harder to test since not all modifications require
reserve to be called, so we'll have to rely on a combination of automated
testing and sanity checking for this.
Also add more parsing out-of-memory coverage tests.
Currently this test has a very large runtime and relies on the fact that
the first memory allocation error causes the test to terminate. This
does not work with the new behavior of running the query through and
reporting the error at the end, so make the runtime reasonable but still
allocate enough memory to blow past the budget.
gcov -b surfaced many lines with partial coverage, where a branch is
only ever taken or never taken, or one of the expressions in a complex
conditional is always either true or false. This change adds a series of
tests (mostly focusing on XPath) to reduce the number of partially
covered lines.
This test is supposed to cover errors in different expressions that are
nested in other expressions, to reduce the number of never-taken branches
in tests (and make sure we aren't missing any).
Previously the error offset pointed to the first mismatching character,
which can be confusing, especially if the start tag name is a prefix of
the end tag name. Instead, move the offset to the first character of the
name - that way it should be more obvious that the problem is that the
entire name mismatches.
Fixes #112.
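A small illustration of the difference:

    #include <cstdio>
    #include "pugixml.hpp"

    int main()
    {
        // The start tag name is a prefix of the end tag name.
        const char* text = "<node>value</nodes>";
        pugi::xml_document doc;
        pugi::xml_parse_result result = doc.load_string(text);

        // Previously result.offset pointed at the first mismatching
        // character (the trailing 's'); now it points at the start of the
        // end tag name, making it clearer that the whole name mismatches.
        std::printf("error at offset %td: %s\n", result.offset, result.description());
    }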
This test verifies two important invariants:
- Every combination of write flags has to result in a valid document
- Parsing that document and saving the result has to result in identical output
We don't test all flags since format_no_escapes can intentionally result in
malformed documents and other flags aren't relevant for node output.
Also note that we test both the no-whitespace and whitespace versions to
make sure we don't have unnecessary whitespace added during formatting.
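A condensed sketch of the invariant check for one flag combination (the
helper name is made up; looping over combinations and the validity check
are omitted):

    #include <cassert>
    #include <sstream>
    #include "pugixml.hpp"

    // Save with a given flag combination, reparse, save again with the
    // same flags, and verify that the output is identical.
    void check_roundtrip(const pugi::xml_document& doc, unsigned int flags)
    {
        std::ostringstream first;
        doc.save(first, "\t", flags);

        pugi::xml_document reparsed;
        reparsed.load_string(first.str().c_str());

        std::ostringstream second;
        reparsed.save(second, "\t", flags);

        assert(first.str() == second.str());
    }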
When using format_raw, the space in the empty tag (<node />) is the only
character that does not have to be there, so format_raw almost results in
minimal XML but not quite.
It's pretty unlikely that this is crucial for any users - the formatting
change should be benign, and it's better to improve format_raw than to
add yet another flag.
Fixes #87.
Since they don't contribute to the resulting value, just skip them before
parsing. This matches the behavior of strtol/strtoll and results in more
intuitive behavior.
Previously the test allocator only guaranteed enough alignment for a
pointer. On some platforms (e.g. SPARC) double has to be aligned to 8
bytes, but pointers can have a size of 4 bytes. This commit increases the
allocation header to fix that.
In practical terms the allocation header is now always 8 bytes.
This fixes tests in PUGIXML_NO_XPATH mode on SPARC64 (#48).
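A minimal sketch of the idea behind the fix (the allocator and its names
are illustrative, not the actual test code):

    #include <cstddef>
    #include <cstdlib>

    // Header stored in front of every allocation; forcing 8-byte size and
    // alignment keeps the payload suitably aligned for double even on
    // platforms where pointers are only 4 bytes wide.
    struct alignas(8) allocation_header
    {
        std::size_t size;
    };

    void* test_allocate(std::size_t size)
    {
        void* memory = std::malloc(sizeof(allocation_header) + size);
        if (!memory) return nullptr;

        static_cast<allocation_header*>(memory)->size = size;
        return static_cast<allocation_header*>(memory) + 1; // payload after the 8-byte header
    }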
SPARC does not allow unaligned accesses - e.g. you can't read an
unaligned int. Normally pugixml does not perform unaligned
integer/pointer accesses, but the page heap can allocate blocks that are
not aligned so that we can detect a single-byte read/write overrun.
Additionally, the hardcoded page size we're currently using is really
system-specific - on SPARC the page size can be 8 KB instead of 4 KB, so
mprotect can fail.
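Instead of hardcoding 4 KB, the page size can be queried at runtime; a
hedged sketch of the usual POSIX way to do it:

    #include <cstddef>
    #include <unistd.h>

    // Query the system page size at runtime so mprotect-based guard pages
    // also work on platforms with larger pages (e.g. 8 KB on SPARC).
    static std::size_t system_page_size()
    {
        long size = sysconf(_SC_PAGESIZE);
        return size > 0 ? static_cast<std::size_t>(size) : 4096; // fall back to 4 KB
    }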