Python Cache

In part 2 the chart app was successfully loaded with historical data using appcfg.py's bulkloader tool and the remote_api module. However, queries over time series data quickly become expensive once we start dealing with months and years of it, so we should use some kind of caching mechanism to keep datastore operations from slowing down our chart display.

App Engine's powerful built-in object caching framework is memcache. With memcache we first check whether the data we're after is already in memory; if so, we retrieve it from there, removing the need for any datastore query. If not, we fetch it with GQL, as done in part 2 of the tutorial.

So for our 5 day data, the datastore code in main.py would change along these lines.
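A minimal sketch of the check-memcache-first pattern (the Rate model, the cache key and the one-hour expiry here are assumptions, not the post's actual code):

```python
from google.appengine.api import memcache
from google.appengine.ext import db

CACHE_KEY = 'eonia_5day'  # hypothetical cache key


def get_five_day_rates():
    # Check memcache first; a hit means no datastore work at all.
    rates = memcache.get(CACHE_KEY)
    if rates is None:
        # Cache miss: fall back to the GQL query from part 2.
        rates = db.GqlQuery('SELECT * FROM Rate '
                            'ORDER BY date DESC').fetch(5)
        # Store for an hour so subsequent requests hit the cache.
        memcache.add(CACHE_KEY, rates, time=3600)
    return rates
```

Whatever writes new fixings (e.g. the cron job from part 1) would also want to call memcache.delete(CACHE_KEY) so the chart doesn't serve a stale cache entry for up to an hour.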

Read More

[Image: Rutherford Financial chart]

Here I will walk through pre-populating the Google App Engine datastore with historical data in CSV format, using the Python bulkloader tool to handle the data transformation and import.

In the last tutorial we saw how to create a financial chart with Google App Engine & update it periodically via some CSS-based web scraping. The bulkloader tool bundled with the appcfg.py script will enable us to visualise long term trends *right now*, without needing to wait on our datastore growing one day at a time. Again the 3 Month EONIA Index Swap from the European Banking Federation's Euribor site will be used as sample historical data.
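The bulkloader is driven by a small config module containing a Loader subclass that maps CSV columns onto model properties. A minimal sketch, assuming a Rate model with date and value properties and a dd/mm/yyyy date column (the real model and column layout depend on your app from part 1):

```python
# eonia_loader.py -- bulkloader config; names are assumptions
import datetime

from google.appengine.ext import db
from google.appengine.tools import bulkloader


class Rate(db.Model):
    date = db.DateProperty()
    value = db.FloatProperty()


class RateLoader(bulkloader.Loader):
    def __init__(self):
        # Each tuple maps a CSV column to a property and a converter,
        # in the same order as the columns appear in the file.
        bulkloader.Loader.__init__(self, 'Rate', [
            ('date', lambda d: datetime.datetime.strptime(d, '%d/%m/%Y').date()),
            ('value', float),
        ])


loaders = [RateLoader]
```

It is then run through appcfg.py, along the lines of:

```
appcfg.py upload_data --config_file=eonia_loader.py --filename=eonia.csv --kind=Rate <app-directory>
```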

So we need to grab the EONIA historical data. It's already available as CSV, but I additionally needed to:

Read More

[Image: 5 day chart of 3 month EONIA]

This is a short three-part tutorial series that will guide you through creating & hosting your own financial charts on Google App Engine.

To begin, we'll see how simple it is to create a web scraper that uses CSS selectors and string manipulation to grab whatever data you want from a website; you can adapt the code to target whichever financial instrument you require. We will use a Python app instance on Google App Engine to host the code, and its scheduled tasks API to grab the required data periodically and persist it to the datastore.

lxml is a Python library for parsing HTML and XML which, through its cssselect module, lets you query documents with CSS selectors. This makes it ideal for web developers and designers who are already familiar with the selector syntax.

If you haven't already done so, sign up for Google App Engine, create a new Python 2.7 application and download the SDK. Also ensure you have Python 2.7 installed locally, together with a 2.7-compatible version of the lxml library.

For the purposes of this example we'll regularly poll the Euribor site and scrape the daily EONIA (Euro OverNight Index Average) 3 Month Swap fixing.
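The scraping itself boils down to a handful of lines. Here's a sketch of the idea, not the post's actual code: the URL and the CSS selector are placeholders, since the real ones depend on the Euribor page's markup.

```python
import urllib2

from lxml import html
from lxml.cssselect import CSSSelector


def fetch_eonia_3m_fixing():
    # Fetch and parse the page (URL is a placeholder).
    page = urllib2.urlopen('http://www.euribor-ebf.eu/').read()
    tree = html.fromstring(page)
    # Query with a CSS selector (also a placeholder) to find the
    # table cell holding the 3 month swap fixing.
    cells = CSSSelector('table.rates td.swap-3m')(tree)
    if not cells:
        return None
    # String manipulation: strip whitespace, convert to a number.
    return float(cells[0].text_content().strip())
```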

Read More

[Image: ye olde Python logo]

Or reasons why Python is cool #273.

There are many snippets of code on the internet detailing how to go about dynamic class creation when you, the programmer, know the name and module location of the class in advance, e.g. at initial deploy time, or via some parameter entered by the user when they want to instantiate such a class.

However, if I want to create objects without knowing their module or class in advance, a little more thinking is required. Here is a simple tutorial that will leave you with a package that, when updated with new modules, will introspect & instantiate their classes on demand.
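As a taste of where this ends up, here is a minimal sketch of the introspection step, assuming a hypothetical plugins package to scan:

```python
import inspect
import importlib
import pkgutil


def instantiate_classes(package):
    # Import every module found in the package and instantiate
    # each class defined there.
    instances = []
    for _, name, _ in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module('%s.%s' % (package.__name__, name))
        for _, cls in inspect.getmembers(module, inspect.isclass):
            # Skip classes merely imported into the module.
            if cls.__module__ == module.__name__:
                instances.append(cls())
    return instances

# Usage (plugins is a hypothetical package on your path):
#   import plugins
#   objects = instantiate_classes(plugins)
```

Dropping a new module into the package is then enough for its classes to be picked up on the next call, with no registration step.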

Read More