strcat does not allow overlapping ranges; we didn't have a test for
this, but now we do.
As an added bonus, this also means we now compute the length of each
fragment only once.
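For reference, a minimal illustration of the C library rule being relied
on here (this is the general strcat contract, not the new test itself):

```cpp
#include <cstring>

void example()
{
    char buf[16] = "abc";

    strcat(buf, "def");      // fine: source and destination do not overlap
    // strcat(buf, buf + 1); // undefined behavior: source overlaps the destination
}
```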
strconcat in the parsing loop only works if we know the source string
comes from the same buffer that we're parsing. This is somewhat
cumbersome to establish during parsing and it requires extra tracking
data, so we just disable this combination as it's unlikely to be useful
in practice - usually append_buffer would be called on a (possibly
empty) collection of elements, not on something with PCDATA.
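For context, a sketch of the combination being disabled, using the
public API (the exact parse result for this case is whatever the new
behavior defines):

```cpp
#include "pugixml.hpp"

void example()
{
    pugi::xml_document doc;
    doc.load_string("<node>text</node>");

    // append_buffer on a node that already has PCDATA is the combination
    // discussed above; typically it is used on an (initially empty) element
    pugi::xml_node node = doc.child("node");
    node.append_buffer("<child/>", 8);
}
```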
Add tests for double escape and a test for interaction with
parse_ws_pcdata flags; this behavior might change but we should pin the
current result.
Also slightly clean up the previously added test.
Here we also test what happens when text gets assigned to an empty
string after initially being non-empty, to make sure this is not
different from the initial state.
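A sketch of the scenario, using the public API (the actual test may
differ in details):

```cpp
#include "pugixml.hpp"

void example()
{
    pugi::xml_document doc;
    pugi::xml_node node = doc.append_child("node");

    node.text().set("initial");
    node.text().set(""); // back to empty: should behave like a node that never had text
}
```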
Different OSes have different behavior when trying to fopen/fseek/ftell
a folder. On Linux, some systems return 0 size, some systems return an
error, and some systems return LONG_MAX. LONG_MAX is particularly
problematic because that causes spurious OOMs under address sanitizer.
Using fstat manually cleans this up; however, it introduces a new
dependency on platform-specific headers that we didn't have before, and
it also has unclear behavior on 64-bit systems with respect to 32-bit
sizes, which will need further testing as I'm not certain whether the
behavior needs to be special-cased only for MSVC/MinGW, which are
currently not handled by this path (unless MinGW defines __unix__...)
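A minimal sketch of the fstat-based approach under POSIX assumptions
(the actual implementation differs in details and error handling):

```cpp
#include <cstdio>
#include <sys/stat.h>
#include <sys/types.h>

// Returns the file size, or -1 for directories and errors
long long get_file_size(FILE* file)
{
    struct stat st;
    if (fstat(fileno(file), &st) != 0) return -1;

    // Reject directories explicitly instead of relying on fseek/ftell quirks
    if (S_ISDIR(st.st_mode)) return -1;

    return static_cast<long long>(st.st_size);
}
```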
This is the same fix as #497, but we're using auto_deleter instead
because if the allocation function throws, we can't rely on an explicit
call to deallocate.
Comes along with two tests that validate the behavior.
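A generic sketch of why an explicit cleanup call is not enough (names
are illustrative; pugixml's internal auto_deleter differs):

```cpp
#include <cstddef>
#include <cstdlib>

void do_work(void* buffer); // may throw

void unsafe(std::size_t size)
{
    void* buffer = std::malloc(size);
    do_work(buffer); // if this throws, the free below never runs
    std::free(buffer);
}

// RAII guarantees the cleanup also runs during stack unwinding
struct scoped_buffer
{
    void* data;

    explicit scoped_buffer(std::size_t size): data(std::malloc(size)) {}
    ~scoped_buffer() { std::free(data); }
};

void safe(std::size_t size)
{
    scoped_buffer buffer(size);
    do_work(buffer.data);
}
```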
Instead of trying to detect if we can safely use random shuffle, simply
reimplement it ourselves.
The quality of the RNG is not essential for these tests.
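A minimal sketch of the idea: a self-contained shuffle driven by a
trivial LCG (not necessarily the exact code used):

```cpp
#include <cstddef>
#include <vector>

// Fisher-Yates shuffle with a simple LCG; RNG quality doesn't matter here
template <typename T>
void test_shuffle(std::vector<T>& items, unsigned int seed = 42)
{
    for (size_t i = items.size(); i > 1; --i)
    {
        seed = seed * 1664525 + 1013904223;

        size_t j = seed % i;
        T temp = items[i - 1];
        items[i - 1] = items[j];
        items[j] = temp;
    }
}
```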
Previously, when copying the allocator state, we would copy an
incorrect root pointer into the document's current state; while this had
a minimal impact on the allocation state, because any new allocation
would need to create a new page, it used a potentially stale field of
the moved document when setting up new pages, which could create issues
in future uses of the pages.
This change fixes the core problem and also removes the use of
_root->allocator from allocate_page, since it's not clear why we need it
there in the first place.
Since // is shorthand for /descendant-or-self::node()/, foo//bar//baz
adds two nodes for each //, so we need to increment the depth by 2 on
each iteration to limit the AST correctly.
Fixes the stack overflow found by cluster-fuzz (I suspect the issue
there is a bit deeper, but this part is definitely a bug, so I'd rather
wait for the next test case for now).
Function call arguments are stored in a list which is processed
recursively during optimize(). We now limit the depth of this construct
as well to make sure optimize() doesn't run out of stack space.
The default stack on MSVC/x64/debug is only sufficient for 1692 nested
invocations, whereas on clang/linux it's ~8K...
For now, set the limit conservatively.
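Illustrative only (a hypothetical repro, not the actual test): the kind
of input the limit protects against.

```cpp
#include <string>
#include "pugixml.hpp"

void example()
{
    // deeply nested function calls: concat(concat(concat(...), ''), '')
    std::string query = "''";
    for (int i = 0; i < 100000; ++i)
        query = "concat(" + query + ", '')";

    // with the depth limit, compilation fails cleanly (xpath_exception, or an
    // invalid query under PUGIXML_NO_EXCEPTIONS) instead of overflowing the stack
    pugi::xpath_query q(query.c_str());
}
```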
The XPath parser and execution engine aren't stackless; the depth of
the query controls the amount of C stack space required.
This change instruments places in the parser where the control flow can
recurse, requiring too much C stack space to produce an AST, or where a
stackless parse is used to produce an arbitrarily deep AST that would
create issues for downstream processing.
As a result, the XPath parser should now be fuzz-safe for malicious
inputs.
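A minimal sketch of the guard pattern (hypothetical names; the actual
parser instrumentation differs):

```cpp
// Each potentially recursive production checks the current depth before
// descending; exceeding the limit becomes a regular parse error rather
// than a stack overflow.
static const int xpath_max_depth = 1024;

bool check_depth(int depth, const char** error)
{
    if (depth > xpath_max_depth)
    {
        *error = "Query is too deep";
        return false;
    }

    return true;
}
```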
The newly added tests make sure that during node/attribute destruction
we deallocate a few memory pages; this verifies that we don't read node
data after it has been destroyed.
Also clean up formatting/style in the remove_* implementation a bit.
According to the XML spec, > sometimes needs to be escaped in PCDATA (when
it occurs as a ]]> pattern), but it doesn't need to be escaped in
attribute values.
Contributes to #272.
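A short sketch of the rule via the public API (exact output formatting
aside):

```cpp
#include <iostream>
#include "pugixml.hpp"

void example()
{
    pugi::xml_document doc;
    pugi::xml_node node = doc.append_child("node");

    node.text().set("a]]>b");                // '>' must be escaped here (]]&gt;)
    node.append_attribute("attr") = "a]]>b"; // '>' can stay as-is in the attribute value

    doc.save(std::cout);
}
```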
When using double quotes for attributes, we don't need to escape '; when
using single quotes, we don't need to escape ".
This changes behavior to match 1.9 by default (where we don't escape ').
Contributes to #272.
Note: this change also updates the PUGIXML_VERSION macro to allow for
double-digit minor versions; this preserves the continuity of versions,
so PUGIXML_VERSION >= 190 will still work.
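For example, existing version checks of this form keep working:

```cpp
#if PUGIXML_VERSION >= 190
// 1.9+ APIs can be used here
#endif
```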
Create Visual Studio projects that are VS2019 compliant.
* nuget_build.ps1:
Introduce a new argument that will define how we implement the NuGet
build. For now we accept 201{9,7,5,3} as possible argument values.
* pugixml_vs2019{,_static}.vcxproj:
Add two Visual Studio projects that build pugi with the latest SDK and
build tools.
* appveyor.yml:
- Add Visual Studio 2019 to build targets.
- Add Visual Studio 201{9,3,5} to build_scripts, and call
nuget_build.ps1 with the new argument.
- Add Visual Studio 2019 to the test_scripts.
This change adds the format_attribute_single_quote flag that uses
single quotes (`'`) instead of double quotes (`"`) for formatting
attribute values.
Internal quotation marks are escaped using `&quot;` and `&apos;`.
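A usage sketch of the flag (the output shown in the comment is
approximate):

```cpp
#include <iostream>
#include "pugixml.hpp"

void example()
{
    pugi::xml_document doc;
    doc.append_child("node").append_attribute("attr") = "both \" and ' inside";

    // Attribute values are delimited with single quotes; embedded quotes are
    // escaped as described above
    doc.save(std::cout, "\t", pugi::format_default | pugi::format_attribute_single_quote);
}
```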
We now have two tests: one tests the behavior when we run out of space
while appending to the node set (in which case the append fails), and
the other tests the behavior when we run out of space while filtering
the node set (in which case the set still contains redundant data).
Given an unsorted sequence, remove_duplicates would sort it using the
pointer value of attributes/nodes and then remove consecutive
duplicates.
This was problematic because it meant that the result of XPath queries
was dependent on the memory allocation pattern. While it's technically
incorrect to rely on the order, this results in easy-to-miss bugs.
This is particularly common when XPath queries use union operators -
although we will also call remove_duplicates in other cases.
This change reworks the code to use a hash set instead, using the same
hash function we use for compact storage. To make sure it performs well,
we allocate enough buckets for count * 1.5 (assuming all elements are
unique); since each bucket is a single pointer (unlike xpath_node,
which is two pointers), we need somewhere between size * 0.75 and
size * 1.5 temporary storage.
The resulting filtering is stable - we remove elements that we have seen
before but we don't change the order - and is actually significantly
faster than sorting was.
With a large union operation, before this change it took ~56 ms per 100
query invocations to remove duplicates, and after this change it takes
~20ms.
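A simplified sketch of the approach using the standard library (pugixml
uses its own hash table and allocator internally):

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Stable deduplication: keep the first occurrence of each element and
// preserve the original order
template <typename T>
void remove_duplicates_stable(std::vector<T*>& items)
{
    std::unordered_set<T*> seen;
    size_t write = 0;

    for (size_t read = 0; read < items.size(); ++read)
        if (seen.insert(items[read]).second)
            items[write++] = items[read];

    items.resize(write);
}
```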
Fixes #254.