Add a memory allocation failure test for concat with a very large list
and make sure every single axis is covered with and without a
predicate, and with and without a previous step.
Some compilers don't handle NaNs properly.
Some compilers don't implement fmod in an IEEE-compatible way.
Some compilers have exception handling codegen bugs (DMC...).
To get more benefit from the constant predicate/filter optimization, we rewrite
[position()=expr] predicates into [expr] for numeric expressions. Right now
the rewrite only applies to entire expressions - it may be beneficial to split
complex expressions like [position()=constant and expr] into [constant][expr],
but that is more complicated.
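As a minimal sketch of the equivalence this rewrite relies on (the sample
document and variable names below are made up for illustration and are not
part of the change), [position()=2] and [2] select the same node through the
public pugixml API:

    #include "pugixml.hpp"
    #include <cassert>

    int main()
    {
        // Hypothetical sample document, used only to illustrate the rewrite.
        pugi::xml_document doc;
        doc.load_string("<list><item>a</item><item>b</item><item>c</item></list>");

        // [position()=2] is a numeric comparison against position(), so the
        // optimizer may treat it like the plain numeric predicate [2].
        pugi::xpath_node explicit_form = doc.select_single_node("/list/item[position()=2]");
        pugi::xpath_node numeric_form  = doc.select_single_node("/list/item[2]");

        assert(explicit_form.node() == numeric_form.node());
    }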
last() does not depend on the node set contents, so it is "constant" as far as
our optimization is concerned and can be evaluated once.
If a filter/predicate expression is a constant, we don't need to evaluate it
for every node set element - we can evaluate it once and either pick the right
element or keep/discard the entire collection.
If the expression is 1, we can stop after the first node when evaluating the
node set - queries like following::item[1] are now significantly faster.
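As an illustration of the kind of query that benefits (the inline document is
a made-up example), the [1] predicate is a numeric constant, so evaluation of
the following:: axis can stop at the first match instead of materializing the
whole node set:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string(
            "<root><a/><item>first</item><item>second</item><item>third</item></root>");

        // With the constant predicate [1], the evaluator can stop as soon as
        // the first node on the following:: axis matches.
        pugi::xpath_node hit = doc.select_single_node("//a/following::item[1]");

        std::cout << hit.node().child_value() << "\n"; // prints "first"
    }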
Additionally, this change refactors filters/predicates to carry extra
metadata describing the expression type in the _test field, which is filled in
during optimization.
Note that predicate_constant selection is currently very simple (but it
captures the most common use cases, except perhaps [last()]).
Some steps relied on step_push rejecting null inputs; this is no longer
the case. Additionally, stepping now filters null inputs more rigorously.
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1069 99668b35-9821-0410-8761-19e4c4f06640
Sometimes when evaluating a node set we don't need the entire set - only the
first element in document order, or any single element. In the absence of
iterator support we can still use this information to short-circuit
traversals.
This does not have any effect on straightforward node collection queries,
but it frequently improves the performance of complex queries with predicates
etc. The XMark benchmark gets 15x faster, with some queries enjoying a 100x
speedup on a 10 MB dataset due to a significant complexity improvement.
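As a rough sketch of the distinction (not a description of the internals; the
tiny inline document stands in for a large input such as the XMark data), the
first query below only needs the first node in document order, while the
second has to build the complete collection:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<site><item id='i1'/><item id='i2'/><item id='i3'/></site>");

        // Only the first node in document order is needed here - the situation
        // the short-circuit evaluation targets.
        pugi::xpath_node first = doc.select_single_node("//item");

        // A full node-set query still has to visit and collect everything.
        pugi::xpath_node_set all = doc.select_nodes("//item");

        std::cout << first.node().attribute("id").value()
                  << " of " << all.size() << " items\n"; // i1 of 3 items
    }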
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1067 99668b35-9821-0410-8761-19e4c4f06640
Use the descendant-or-self::node() transformation for the self, descendant and
descendant-or-self axes. The self axis should be semi-frequent; the descendant
axes should not really be used with //, but if they ever are, the complexity
of the step becomes quadratic, so it's better to optimize this if possible.
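For reference, '//X' is shorthand for '/descendant-or-self::node()/child::X',
and a step after '//' that itself uses a descendant axis makes a naive
evaluation visit descendants of every descendant before duplicates are
removed. In this made-up example both queries select the same three item
elements:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<root><group><item/><item/></group><item/></root>");

        // Both forms must yield the same node set (three unique item elements);
        // the second one is the shape where the quadratic behavior can appear.
        size_t a = doc.select_nodes("//item").size();
        size_t b = doc.select_nodes("//descendant-or-self::item").size();

        std::cout << a << " " << b << "\n"; // prints "3 3"
    }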
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1063 99668b35-9821-0410-8761-19e4c4f06640
When looking for an attribute by name, finding the first match means
we can stop looking, since attribute names are unique within an element.
This makes some queries 40% faster.
Another very common pattern in XPath queries is finding an attribute with
a specified value using a predicate (@name = 'value'). While we perform an
optimal amount of traversal in that case, there is substantial overhead in
evaluating the nodes, saving and restoring the stack state, pushing the
attribute node into a set, etc. Detecting this pattern lets us use an
optimized code path, resulting in up to a 2x speedup for some queries.
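A sketch of the pattern in question, using a hypothetical config document -
the [@name = 'value'] comparison is the exact shape the optimized path
targets:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string(
            "<config><entry name='timeout' value='30'/>"
            "<entry name='retries' value='5'/></config>");

        // Finding an element by the value of one of its attributes: the
        // predicate compares a single attribute against a string constant.
        pugi::xml_node entry =
            doc.select_single_node("/config/entry[@name = 'retries']").node();

        std::cout << entry.attribute("value").value() << "\n"; // prints "5"
    }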
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1061 99668b35-9821-0410-8761-19e4c4f06640
The actual condition for the optimization is invariance with respect to the
context list - this covers both position() and last().
Instead of splitting the posinv concept, we simply treat last() as part of
non-posinv expressions - this requires sorting for boolean predicates that
depend on last() but do not depend on position(). These cases should be
very rare.
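A small example of such a predicate (the document is made up): last() equals
the size of the context list, so the predicate below depends on last() but not
on position(), and it keeps either every node or none:

    #include "pugixml.hpp"
    #include <iostream>

    int main()
    {
        pugi::xml_document doc;
        doc.load_string("<r><item/><item/><item/></r>");

        // last() is 3 for the /r/item context list, so the first predicate is
        // true for every item and the second is false for every item.
        size_t kept = doc.select_nodes("/r/item[last() > 2]").size();
        size_t none = doc.select_nodes("/r/item[last() > 5]").size();

        std::cout << kept << " " << none << "\n"; // prints "3 0"
    }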
git-svn-id: https://pugixml.googlecode.com/svn/trunk@1060 99668b35-9821-0410-8761-19e4c4f06640