Commit 8625d7e5 authored by Tim Peters

PyNode_AddChild(): Backporting an aggressive over-allocation policy
when a parse node grows a very large number of children.  This sidesteps
platform realloc() disasters on several platforms.
parent 9f02d5b8
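To see why proportional growth matters here, the following is a rough standalone sketch, not part of the patch: the helper new_roundup() mirrors the new policy for illustration only. It counts how many realloc() calls the old round-to-a-multiple-of-3 policy and the new policy would issue while a single node grows to 100000 children (roughly the shape of test_longexp.py); the old policy reallocates tens of thousands of times, the new one only a few dozen.

#include <stdio.h>

/* Old policy: round the child count up to a multiple of XXX == 3. */
#define OLD_ROUNDUP(n) ((n) == 1 ? 1 : ((n) + 3 - 1) / 3 * 3)

/* New policy, mirroring XXXROUNDUP/fancy_roundup below, minus the
 * int-overflow guard, which this small range never hits. */
static int
new_roundup(int n)
{
        int result;
        if (n <= 1)
                return n;
        if (n <= 128)
                return (n + 3) & ~3;
        result = 256;
        while (result < n)
                result <<= 1;
        return result;
}

int
main(void)
{
        int nch, old_reallocs = 0, new_reallocs = 0;

        for (nch = 0; nch < 100000; nch++) {
                /* A realloc happens whenever room for nch children is no
                 * longer enough for nch + 1. */
                if (OLD_ROUNDUP(nch) < nch + 1)
                        old_reallocs++;
                if (new_roundup(nch) < nch + 1)
                        new_reallocs++;
        }
        printf("old policy: %d reallocs, new policy: %d reallocs\n",
               old_reallocs, new_reallocs);
        return 0;
}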
@@ -4,6 +4,12 @@ Release date: dd-mmm-2002
Core and builtins
- Source that creates parse nodes with an extremely large number of
  children (e.g., test_longexp.py) triggers problems with the
  platform realloc() on several platforms (e.g., MacPython and
  Win98).  This has been fixed via a more-aggressive overallocation
  strategy.

- Fixed a bug with a continue inside a try block and a yield in the
  finally clause.  [SF bug 567538]
@@ -27,7 +33,7 @@ Core and builtins
- String methods lstrip(), rstrip() and strip() now take an optional
  argument that specifies the characters to strip.  For example,
  "Foo!!!?!?!?".rstrip("?!") -> "Foo".  In addition, "200L".strip("L")
"Foo!!!?!?!?".rstrip("?!") -> "Foo". In addition, "200L".strip("L")
  will return "200".  This is useful for replacing code that assumed
  longs will always be printed with a trailing "L".
......
@@ -18,25 +18,63 @@ PyNode_New(int type)
        return n;
}
#define XXX 3 /* Node alignment factor to speed up realloc */
#define XXXROUNDUP(n) ((n) == 1 ? 1 : ((n) + XXX - 1) / XXX * XXX)
/* See comments at XXXROUNDUP below.  Returns -1 on overflow. */
static int
fancy_roundup(int n)
{
        /* Round up to the closest power of 2 >= n. */
        int result = 256;
        assert(n > 128);
        while (result < n) {
                result <<= 1;
                if (result <= 0)
                        return -1;
        }
        return result;
}
/* A gimmick to make massive numbers of reallocs quicker.  The result is
 * a number >= the input.  For n=0 we must return 0.
 * For n=1, we return 1, to avoid wasting memory in common 1-child nodes
 * (XXX are those actually common?).
 * Else for n <= 128, round up to the closest multiple of 4.  Why 4?
 * Rounding up to a multiple of an exact power of 2 is very efficient.
 * Else call fancy_roundup() to grow proportionately to n.  We've got an
 * extreme case then (like test_longexp.py), and on many platforms doing
 * anything less than proportional growth leads to exorbitant runtime
 * (e.g., MacPython), or extreme fragmentation of user address space (e.g.,
 * Win98).
 * This would be straightforward if a node stored its current capacity.  The
 * code is tricky to avoid that.
 */
#define XXXROUNDUP(n) ((n) == 1 ? 1 :                  \
                       (n) <= 128 ? (((n) + 3) & ~3) : \
                       fancy_roundup(n))
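For reference, the rounding boundaries can be spelled out as a small check. This helper is not part of the patch; it assumes the XXXROUNDUP macro and fancy_roundup() above are in scope, along with <assert.h>.

/* Not part of the patch: a quick sanity check of the rounding
 * boundaries, assuming the definitions above are in scope. */
static void
check_roundup(void)
{
        assert(XXXROUNDUP(0) == 0);     /* empty nodes stay empty */
        assert(XXXROUNDUP(1) == 1);     /* 1-child nodes waste nothing */
        assert(XXXROUNDUP(2) == 4);     /* small nodes: multiple of 4 */
        assert(XXXROUNDUP(128) == 128);
        assert(XXXROUNDUP(129) == 256); /* first power-of-2 step */
        assert(XXXROUNDUP(300) == 512); /* proportional growth from here */
}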
int
PyNode_AddChild(register node *n1, int type, char *str, int lineno)
{
        register int nch = n1->n_nchildren;
        register int nch1 = nch+1;
        register node *n;
        const int nch = n1->n_nchildren;
        int current_capacity;
        int required_capacity;
        node *n;
        if (nch == INT_MAX || nch < 0)
                return E_OVERFLOW;
        if (XXXROUNDUP(nch) < nch1) {
        current_capacity = XXXROUNDUP(nch);
        required_capacity = XXXROUNDUP(nch + 1);
        if (current_capacity < 0 || required_capacity < 0)
                return E_OVERFLOW;
        if (current_capacity < required_capacity) {
                n = n1->n_child;
                nch1 = XXXROUNDUP(nch1);
                PyMem_RESIZE(n, node, nch1);
                PyMem_RESIZE(n, node, required_capacity);
                if (n == NULL)
                        return E_NOMEM;
                n1->n_child = n;
        }
        n = &n1->n_child[n1->n_nchildren++];
        n->n_type = type;
        n->n_str = str;
......
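The XXXROUNDUP comment notes that all this would be straightforward if a node stored its current capacity. Purely for contrast, and not part of the patch (struct node has no such field), the growth check inside PyNode_AddChild might read roughly like this with a hypothetical n_capacity member:

/* Hypothetical variant only: assumes a made-up n_capacity field in
 * struct node, which the real patch deliberately avoids adding. */
if (n1->n_nchildren + 1 > n1->n_capacity) {
        int new_capacity = XXXROUNDUP(n1->n_nchildren + 1);
        node *n = n1->n_child;
        if (new_capacity < 0)
                return E_OVERFLOW;
        PyMem_RESIZE(n, node, new_capacity);
        if (n == NULL)
                return E_NOMEM;
        n1->n_child = n;
        n1->n_capacity = new_capacity;
}

Recomputing XXXROUNDUP(nch) on each call instead lets the parser keep node small, at the cost of the slightly trickier capacity comparison shown in the patch.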