Add tests for PI erroring exactly at the buffer boundary with
non-zero-terminated buffers (so we have to clear the last character,
which changes the parsing flow slightly) and a test that makes sure
parse_embed_pcdata works properly with XML fragments where PCDATA can be
at the root level but can't be embedded into the document node.
The only point was to try to test all paths where we can run out of
memory while decoding something. It seems like it may be impossible to
actually do this, given that we can't exercise all paths: wchar_t size
detection is done at runtime...
This makes sure the failure paths of all .reserve calls are covered. These
tests don't explicitly verify that reserve is present on all paths - that is
much harder to test, since not all modifications require reserve to be
called, so we'll have to rely on a combination of automated testing and
sanity checking for this.
Also add more parsing out of memory coverage tests.
Currently this test has a very large runtime and relies on the fact that
the first memory allocation error causes the test to terminate. This
does not work with the new behavior of running the query to completion and
reporting the error at the end, so make the runtime reasonable while still
allocating enough memory to blow past the budget.
gcov -b surfaced many lines with partial coverage, where a branch is only
ever taken or never taken, or one of the expressions in a complex
conditional is always true or always false. This change adds a series of
tests (mostly focusing on XPath) to reduce the number of partially
covered lines.
This test is supposed to exercise error handling in expressions that are
nested in other expressions, to reduce the number of never-taken branches
in tests (and make sure we aren't missing any).
Previously the error offset pointed to the first mismatching character, which
can be confusing especially if the start tag name is a prefix of the end tag
name. Instead, move the offset to the first character of the name - that way
it should be more obvious that the problem is that the entire name mismatches.
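A small sketch of the resulting diagnostics (the offsets mentioned in the
comment are illustrative):

    pugi::xml_document doc;
    pugi::xml_parse_result result = doc.load_string("<node>text</nodes>");

    // result.status is status_end_element_mismatch; result.offset now points
    // at the 'n' of "nodes" rather than at the first mismatching character 's'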
Fixes #112.
This test verifies two important invariants:
- Every combination of write flags has to result in a valid document
- Parsing that document and saving the result has to result in identical output
We don't test all flags since format_no_escapes can intentionally result in
malformed documents and other flags aren't relevant for node output.
Also note that we test both the no-whitespace and whitespace versions to make
sure we don't add unnecessary whitespace during formatting.
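A rough sketch of the invariants being checked; save_with_flags is a
hypothetical helper that serializes the document with a given flag
combination, and CHECK stands for the usual test assertion:

    std::string saved = save_with_flags(doc, flags);

    // invariant 1: the output parses as a valid document
    pugi::xml_document reparsed;
    CHECK(reparsed.load_string(saved.c_str()));

    // invariant 2: saving the reparsed document reproduces the output
    CHECK(save_with_flags(reparsed, flags) == saved);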
When using format_raw, the space in the empty tag (<node />) is the only
character that does not have to be there, so format_raw almost results in
minimal XML, but not quite.
It's pretty unlikely that this is crucial for any users - the formatting
change should be benign, and it's better to improve format_raw than to add
yet another flag.
Fixes #87.
Since they don't contribute to the resulting value, just skip them before
parsing. This matches the behavior of strtol/strtoll and results in more
intuitive behavior.
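A small illustration of the resulting behavior, assuming the skipped
characters are leading whitespace (as with strtol):

    pugi::xml_document doc;
    doc.load_string("<node value='  42'/>");

    // 42: the leading whitespace is skipped, as strtol would do
    int v = doc.child("node").attribute("value").as_int();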
Previously the test allocator only guaranteed pointer alignment. On some
platforms (e.g. SPARC) double has to be aligned to 8 bytes, but pointers can
be 4 bytes wide. This commit increases the allocation header size to fix
that. In practical terms the allocation header is now always 8 bytes.
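A minimal sketch of the idea (not the actual test allocator): pad the header
with a double member so that the user block that follows it stays 8-byte
aligned even when pointers are only 4 bytes wide:

    #include <stdlib.h>

    union allocation_header
    {
        size_t size;       // bytes requested by the caller
        double alignment;  // forces the header to occupy (at least) 8 bytes
    };

    void* aligned_allocate(size_t size)
    {
        void* memory = malloc(sizeof(allocation_header) + size);
        if (!memory) return 0;

        static_cast<allocation_header*>(memory)->size = size;

        // user data starts right after the 8-byte header, so it stays aligned
        return static_cast<allocation_header*>(memory) + 1;
    }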
This fixes tests in PUGIXML_NO_XPATH mode on SPARC64 (#48).
SPARC does not allow unaligned accesses - e.g. you can't read an unaligned
int. Normally pugixml does not perform unaligned integer/pointer accesses,
but the page heap can allocate blocks that are not aligned so that we can
detect a single-byte read/write overrun.
Additionally, the hardcoded page size we're currently using is really system
specific - on SPARC the page size can be 8 KB instead of 4 KB, so mprotect
can fail.
The extra argument 'hint' is used to start the attribute lookup; if the
attribute is not found, the lookup is restarted from the beginning of the
attribute list.
This makes it possible to optimize attribute lookups if you need to get many
attributes from a node and can make assumptions about the likely ordering.
The code is correct regardless of the order, but it is faster than vanilla
lookups if the attribute order matches the calling order.
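For example (a sketch assuming the overload takes the hint by reference and
updates it; the element and attribute names, and the parsed document doc,
are illustrative):

    pugi::xml_attribute hint; // empty hint: lookup starts at the first attribute

    for (pugi::xml_node item = doc.child("items").child("item"); item;
         item = item.next_sibling("item"))
    {
        // if "id" and "name" usually appear in this order, each lookup
        // resumes where the previous one stopped; otherwise it restarts
        int id = item.attribute("id", hint).as_int();
        const char* name = item.attribute("name", hint).value();
    }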
Fixes #30.
Address sanitizer can detect underflows, so we don't really need the custom
allocator.
Additionally, the custom allocator can return memory that is not
pointer-aligned, which causes undefined behavior sanitizer to complain.
xpath_variable_set is essentially an associative container; it's about time it
became copyable.
The implementation is slightly tricky due to out-of-memory handling. Both the
copy constructor and the assignment operator provide the strong exception
guarantee (even if exceptions are disabled, which translates to "roll back on
allocation errors").
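Usage-wise this enables plain value semantics (a sketch, assuming the default
char mode):

    pugi::xpath_variable_set vars;
    vars.set("width", 10.0);
    vars.set("name", "test");

    pugi::xpath_variable_set copy = vars; // copy constructor
    vars = copy;                          // assignment; both roll back on failure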
Fix code style and revert redundant parameters/whitespace changes.
Also remove format_each_attribute_on_new_line - we're only introducing one
extra formatting flag. The flag implies format_indent but does not include its
bitmask.
Also add a few more tests.
Fixes #14.
End of an era.
Make can be used for regular development (Linux/OSX), documentation building
and release packaging.
CMake can be used for regular development (Windows); it's also used by some
Linux distributions.
Continuous integration is now performed by Travis CI and AppVeyor.
Ensure that all the necessary cleanup is performed in case the allocation fails
with an exception - files are closed, buffers are reclaimed, etc.
Any test that triggers a simulated out-of-memory condition is run once again
with a throwing allocation function. Unobserved std::bad_alloc exceptions
count as test failures and require the CHECK_ALLOC_FAIL macro.
Fixes #17.
Previously attributes that were copied along with their node used string
sharing, but standalone attributes copied via xml_node::*_copy(xml_attribute)
did not.
If an out-of-memory error happens in load_file, there's a danger of leaking
the FILE object. Since there is a limited supply of these objects, we can
easily test that the leak does not happen.
Previously there was no guarantee that the tests that check for out of memory
handling behavior are actually correct - e.g. that they correctly simulate out
of memory conditions.
Now every simulated out of memory condition has to be "guarded" using
CHECK_ALLOC_FAIL. It makes sure that every piece of code that is supposed to
cause out-of-memory does so, and that no other code runs out of memory
unnoticed.
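A sketch of the pattern, assuming the harness names used by the test suite
(test_runner::_memory_fail_threshold, CHECK_ALLOC_FAIL, STR):

    // allow only a tiny allocation budget, then require that the guarded
    // code actually hits the simulated failure
    test_runner::_memory_fail_threshold = 1;
    CHECK_ALLOC_FAIL(doc.append_child(STR("n")));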
When parsing XPath variables, we need to perform a heap allocation; if it
failed, an xpath_exception used to be thrown instead of bad_alloc.
Now we throw an exception of the correct type, so xpath_exception always
means 'parsing error'.
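A small sketch of the resulting distinction (the query text is illustrative):

    pugi::xpath_variable_set variables;
    variables.set("id", 42.0);

    try
    {
        pugi::xpath_query query("book[@id = $id]", &variables);
    }
    catch (const pugi::xpath_exception&)
    {
        // the expression itself is malformed (a genuine parsing error)
    }
    catch (const std::bad_alloc&)
    {
        // out of memory while compiling the query, including variable parsing
    }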
Previously we omitted extra whitespace for single PCDATA/CDATA children, but in
mixed content there was extra indentation before/after text nodes.
One of the problems with that is that the text that you saved is not exactly
the same as the parsing result using default flags (parse_trim_pcdata helps).
Another problem is that parse-format cycles do not have a fixed point for mixed
content - the result expands indefinitely. Some XML libraries, like Python
minidom, have the same issue, but this is definitely a problem.
Pretty-printing mixed content is hard. It seems that the only other sensible
choice is to switch mixed content nodes to raw formatting. In a way the code in
this change is a weaker version of that - it removes indentation around text
nodes but still keeps it around element siblings/children.
Thus we can switch to mixed-raw formatting at some point later, which will be
a superset of the current behavior.
To do this we have to either switch at the first text node (.NET XmlDocument
does that), or scan the children of each element for a possible text node and
switch before we output the first child.
The former behavior seems non-intuitive (and a bit broken); unfortunately, the
latter behavior can cost up to 20% of the output time for trees *without* mixed
content.
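An illustration of the intended behavior (output shape is approximate and
depends on the indentation settings):

    #include <iostream>

    pugi::xml_document doc;
    doc.load_string("<p>hello <b>world</b>!</p>");

    // text nodes stay adjacent to their element siblings, so repeated
    // parse/save cycles no longer grow the output
    doc.save(std::cout);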
Fixes #13.
data/truncation.xml was corrupted at some point and was not actually valid.
Fix the file and make the test fail if we can't parse truncation.xml at all.
Also add new tests for translate. These are technically redundant since other
tests would catch a bug in the fixed comparison, but more tests are better.
Align allocations to the right end of a page to catch buffer overruns;
instead of unmapping on deallocation, mark the pages as no-access to
guarantee a page fault on use-after-free.
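A condensed sketch of the approach (POSIX mmap/mprotect flavor, no error
handling; the real page heap also needs to remember allocation sizes):

    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    void* guarded_allocate(size_t size)
    {
        size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
        size_t total = (size + page_size - 1) / page_size * page_size + page_size;

        char* base = (char*)mmap(0, total, PROT_READ | PROT_WRITE,
                                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        // the last page is a guard page; the block ends exactly at its
        // boundary, so a single-byte overrun faults immediately
        mprotect(base + total - page_size, page_size, PROT_NONE);
        return base + total - page_size - size;
    }

    void guarded_deallocate(void* ptr, size_t size)
    {
        // keep the mapping but revoke access so that use-after-free faults
        size_t page_size = (size_t)sysconf(_SC_PAGESIZE);
        char* start = (char*)((uintptr_t)ptr & ~(uintptr_t)(page_size - 1));
        size_t length = (size_t)((char*)ptr + size - start);
        size_t rounded = (length + page_size - 1) / page_size * page_size;

        mprotect(start, rounded, PROT_NONE);
    }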
We test min/max and several different mantissas for the entire exponent range
for both float and double.
It's not clear whether all supported compilers provide an implementation of
sprintf/strtod that supports roundtripping, so we may need to disable some of
these tests in the future.
Make float/double round-trip
This change also adds xml_text::set and xml_attribute::set_value overloads
for float, so that a float value is printed with just enough digits to
represent float, instead of enough digits to represent double.
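For example (a sketch of the new overloads):

    pugi::xml_document doc;
    pugi::xml_node node = doc.append_child("node");

    node.text().set(3.1415927f);            // printed with float precision
    node.append_attribute("value").set_value(3.1415927f);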
It's sufficient to define PUGIXML_HEADER_ONLY anywhere now; the source is
included automatically.
This is a second attempt; this time it includes a workaround for a QMake bug
that caused it to generate an incorrect Makefile.
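For example, a consumer can now simply do:

    #define PUGIXML_HEADER_ONLY
    #include "pugixml.hpp" // the implementation is pulled in automatically

Defining the macro on the compiler command line works equally well.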
Unfortunately, standard headers on MinGW32 insist on undefining the off64_t
and _wfopen extensions if __STRICT_ANSI__ is in effect (e.g. in C++11 mode).
This leads to compilation errors since b7a1fec started to use _wfopen in
strict mode. That change erroneously checked the GCC version - however, the
version itself is irrelevant; the actual criterion is whether the mingw64
runtime is used.
off64_t is not useful on MinGW32 since we only need it to open large files
on 64-bit platforms; unfortunately, the lack of _wfopen means we won't be
able to support wide-char paths on Windows for MinGW32.
Fixes #24.
Since MinGW 4.5 does not define these functions if __STRICT_ANSI__ is defined
(in the case of _wfopen it is defined inconsistently between stdio.h and
wchar.h), use the baseline functions for MinGW 4.5 and earlier.
Fixes #23.
node_copy_string relied on the fact that the target node had an empty name
and value. Normally this is a safe assumption (and a good one to make, since
it makes copying faster), however it was not checked, and there was one case
where it did not hold.
Since we're reusing the logic for inserting nodes, newly inserted declaration
nodes had the name set automatically to xml, which in our case violates the
assumption and is counter-productive since we'll override the name right after
setting it.
For now the best solution is to do the same insertion manually - that results
in some code duplication that we can refactor later (the same logic is
partially shared by the _move variants anyway, so on some level the
duplication is not that bad).
Some compilers don't handle NaNs properly.
Some compilers don't implement fmod in an IEEE-compatible way.
Some compilers have exception handling codegen bugs (DMC...).
This should completely eliminate the confusion between load and load_file.
Of course, for compatibility reasons we have to preserve the old variant -
it will be deprecated in a future version and subsequently removed.
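For example (the new names side by side):

    pugi::xml_document doc;
    doc.load_string("<node/>");     // parse an in-memory, zero-terminated string
    doc.load_file("document.xml");  // parse a file on disk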
This lets us do fewer null pointer checks (making printing 2% faster with -O3)
and removes a lot of function calls (making printing 20% faster with -O0).
To get more benefits from constant predicate/filter optimization we rewrite
[position()=expr] predicates into [expr] for numeric expressions. Right now
the rewrite is only for entire expressions - it may be beneficial to split
complex expressions like [position()=constant and expr] into [constant][expr]
but that is more complicated.
last() does not depend on the node set contents, so it is "constant" as far
as our optimization is concerned, and we can evaluate it once.
If a filter/predicate expression is a constant, we don't need to evaluate it
for every nodeset element - we can evaluate it once and pick the right element
or keep/discard the entire collection.
If the expression is 1, we can early-out on the first node when evaluating
the node set - queries like following::item[1] are now significantly faster.
Additionally, this change refactors filters/predicates to carry additional
metadata describing the expression type in a _test field that is filled in
during optimization.
Note that predicate_constant selection right now is very simple (but captures
most common use cases except for maybe [last()]).
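For example (illustrative queries, given a parsed document in doc):

    // [1] allows an early-out on the first matching node, and
    // [position() = 2] is rewritten into the constant predicate [2]
    pugi::xpath_node first = doc.select_node("following::item[1]");
    pugi::xpath_node second = doc.select_node("//item[position() = 2]");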
A page can fail to allocate during attribute creation; this case was not
previously handled.