On 04/18/2012 01:53 AM, Eric Blake wrote:
On 04/17/2012 11:49 AM, Eric Blake wrote:
> On 04/17/2012 11:45 AM, Guannan Ren wrote:
>> The result is indeterminate for a NULL argument to the python
>> functions below. It's better to return a negative value in
>> these situations.
>>
>> PyObject_IsTrue will segfault if the argument is NULL
>> PyFloat_AsDouble(NULL) is -1.000000
>> PyLong_AsUnsignedLongLong(NULL) is 0.000000
>> ---
>> python/typewrappers.c | 26 +++++++++++++++++++++++---
>> 1 files changed, 23 insertions(+), 3 deletions(-)
> ACK.
I spoke too soon. On looking at it more, I'm wondering if all callers
do the right thing in this case. It would probably be better if we
explicitly raised a python exception before returning -1, so that
callers can blindly assume that a return of -1 means that the correct
python error is already present.
That is, instead of:

    if (!obj)
        return -1;

should we instead be doing something like:

    if (!obj) {
        PyErr_SetString(PyExc_TypeError, "unexpected type");
        return -1;
    }
Or are we guaranteed that the only way obj is NULL is if the caller
already encountered an error, so there is already a python error set?
Generally, when an obj is set to NULL by a C/Python API function
such as PyTuple_GetItem, the error has already been set; the python
binding just needs to return NULL so the exception propagates to the
python caller. But if the value was set to NULL by ourselves
(a little crazy :) ), then no error is set.
The negative return value lets the python binding API return NULL to
the python caller so the error gets reported.
For a NULL set by ourselves, I think we should try to avoid it in the
code.