My Panoply Python SDK GitHub issue


#1

I had some dependency issues installing your SDK on Windows with Anaconda Python. After (apparently) fixing those, I’m still unable to push data to an existing table in our Panoply db.

https://github.com/panoplyio/panoply-python-sdk/issues/12


#2

I believe the issue you encountered is simply that your script did not flush the data out of the buffer. Based on the code you attached to the issue (if that is indeed the whole script you ran), the script ended and exited before flushing the data into Panoply.
The SDK holds the data in an internal buffer and only clears that buffer based on time or size thresholds.
This is the reason we added a sleep in the tests.

Note that the SDK is built for a long-running (effectively never-ending) process, such as event tracking from your server, so in those cases the sleep is not really needed. When you want to push specific data from a short-lived script, the sleep or a flush is required; otherwise the script will simply end before the buffer is flushed.
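For example, a minimal sketch of such a short-lived script (the credentials and table name here are placeholders):

import time
import panoply

# connect and queue one record; the SDK buffers it rather than sending it immediately
conn = panoply.SDK( "APIKEY", "APISECRET" )
conn.write( "my_events", { "foo": "bar" } )

# keep the process alive long enough for the SDK's background flush to run before exit
time.sleep( 5 )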

Can you retry that using the sleep in your code as well?

I will reply in the GitHub issue as well.


#3

Thanks, @alon

I tried running it this way, but I don’t see anything in the table yet. Feedback in the console is at the bottom.

# -*- coding: utf-8 -*-
"""
Spyder Editor
"""
import time
import panoply

# connect to Panoply and queue a single record for the target table
conn = panoply.SDK( "id", "secret" )
conn.write( "tablename0", { "foo": "bar" } )

# print the SQS queue URL the SDK is writing to
print(conn.qurl)

# give the SDK time to flush its buffer before the script exits
time.sleep(5)

runfile('E:/CrossTie/Panoply/PythonSDK.py', wdir='E:/CrossTie/Panoply')
https://sqs.us-east-1.amazonaws.com/id...id/sdk-panoply-mvx...bt9
SENDING NOW


#4

@moranbuying I see the issue you've encountered. The "tablename0" table already exists with a field that wasn't created by one of Panoply's processes. It is highly recommended to let the platform build the tables.

If you drop the existing table, or send the data via the SDK to a different table, it should work for you.

As you are sending just a single event, you should see it populated in the table within 20 minutes of sending it. The more events you send, the faster the ingestion, because we group incoming events into batches to optimize the ingestion process.


#5

Working now, thanks @alon!


#6

@moranbuying Perfect!


#7

@alon, is there any reason to sleep after every XX amount of data that goes through conn.write( targettables, dict )? I have previously slept after uploading every page of a JSON API, and I'd rather sleep once after all the pages to save time. Do you have a recommendation?


#8

@moranbuying
There is no reason to sleep after every XX amount of data. The only time you should sleep is before your script ends and exits. This sleep is necessary to allow the SDK to reach the idle time at which it flushes the buffer it holds into the queue. If the script just exits, it might not flush the buffer, and you will end up with missing data in Panoply.
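For instance, a rough sketch of that pattern, assuming a hypothetical fetch_page helper that stands in for your JSON API calls:

import time
import panoply

def fetch_page( page ):
    # placeholder for your JSON API call; return a list of dicts, or [] when there are no more pages
    return [ { "page": page, "foo": "bar" } ] if page <= 3 else []

conn = panoply.SDK( "id", "secret" )

page = 1
while True:
    records = fetch_page( page )
    if not records:
        break
    for record in records:
        # buffered write; no sleep needed between pages
        conn.write( "targettable", record )
    page += 1

# a single sleep at the very end gives the SDK time to flush its buffer before the script exits
time.sleep( 10 )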