
Planet Python

Last update: June 15, 2016 07:50 AM

June 15, 2016


Nicola Iarocci

EveGenie makes Eve schema generation a breeze

Released by the nice folks at Drud, EveGenie is a tool for making Eve schema generation easier. Eve’s schema definitions are full of features, but can take a good amount of time to create when dealing with lots of complex resources. From our experience, it’s often helpful to describe an endpoint in JSON before creating […]

June 15, 2016 07:19 AM

June 14, 2016


Dataquest

Building a data science portfolio: How to set up a data science blog

This is the second in a series of posts on how to build a Data Science Portfolio. If you like this and want to know when the next post in the series is released, you can subscribe at the bottom of the page.

You can read the first post in this series here: Building a data science portfolio: Storytelling with data.

Blogging can be a fantastic way to demonstrate your skills, learn topics in more depth, and build an audience. There are quite a few examples of data science and programming blogs that have helped their authors land jobs or make important connections. Blogging is one of the most important things that any aspiring programmer or data scientist should be doing on a regular basis.

Unfortunately, one very arbitrary barrier to blogging can be knowing how to set up a blog in the first place. In this post, we’ll cover how to create a blog using Python, how to create posts using Jupyter notebook, and how to deploy the blog live using Github Pages. After reading this post, you’ll be able to create your own data science blog, and author posts in a familiar and simple interface.

...

June 14, 2016 09:00 PM


Robin Wilson

Conda revisions: letting you ‘rollback’ to a previous version of your environment

I now use Anaconda as my primary Python distribution – and my company has also adopted it for use on all of its developer machines as well as its servers – so I like to think I’m a relatively knowledgeable user. However, the other day I came across a wonderful feature that I’d never known about before…revisions!

The best way to explain is by a quick example. If you run conda list --revisions, you’ll get an output like this:

2016-06-10 20:20:37  (rev 10)
    +affine-2.0.0.post1
    +click-6.6
    +click-plugins-1.0.3
    +cligj-0.4.0
    +rasterio-0.35.1
    +snuggs-1.3.1

2016-06-10 20:22:19  (rev 11)
     libpng  {1.6.17 -> 1.6.22}

2016-06-10 20:25:49  (rev 12)
    -gdal-2.1.0

In this output you can see a number of specific versions (or revisions) of this environment (in this case the default conda environment), along with the date/time they were created, and the differences (installed packages shown as +, uninstalled shown as - and upgrades shown as ->). If you want to revert to a previous revision you can simply run conda install --revision N (where N is the revision number). This will ask you to confirm the relevant package uninstallation/installation – and get you back to exactly where you were before!

So, I think that’s pretty awesome – and really handy if you screw things up and want to go back to a previously working environment. I’ve got a few other hints for you though…

Firstly, if you ‘revert’ to a previous revision then you will find that an ‘inverse’ revision is created, simply doing the opposite of what the previous revision did. For example, if your revision list looks like this:

2016-06-14 21:12:34  (rev 1)
    +mkl-11.3.3
    +numpy-1.11.0
    +pandas-0.18.1
    +python-dateutil-2.5.3
    +pytz-2016.4
    +six-1.10.0

2016-06-14 21:13:08  (rev 2)
    +cycler-0.10.0
    +freetype-2.6.3
    +libpng-1.6.22
    +matplotlib-1.5.1
    +pyparsing-2.1.4

and you revert to revision 1 by running conda install --revision 1, and then run conda list --revisions again, you’ll get this:

2016-06-14 21:13:08 (rev 2)
+cycler-0.10.0
+freetype-2.6.3
+libpng-1.6.22
+matplotlib-1.5.1
+pyparsing-2.1.4

2016-06-14 21:15:45 (rev 3)
-cycler-0.10.0
-freetype-2.6.3
-libpng-1.6.22
-matplotlib-1.5.1
-pyparsing-2.1.4

You can see that the changes for revision 3 are just the inverse of revision 2.

One more thing: I’ve found out that all of this data is stored in the history file in the conda-meta directory of your environment (CONDA_ROOT/conda-meta for your default environment and CONDA_ROOT/envs/ENV_NAME/conda-meta for any other environment). You don’t want to know why I went searching for this file (it’s a long story involving some stupidity on my part), but it’s got some really useful contents:

==> 2016-06-07 22:41:06 <==
# cmd: /Users/robin/anaconda3/bin/conda create --name hotbar python=2.7
openssl-1.0.2h-1
pip-8.1.2-py27_0
python-2.7.11-0
readline-6.2-2
setuptools-22.0.5-py27_0
sqlite-3.13.0-0
tk-8.5.19-0
wheel-0.29.0-py27_0
zlib-1.2.8-3
# create specs: ['python 2.7*']
==> 2016-06-07 22:46:28 <==
# cmd: /Users/robin/anaconda3/envs/hotbar/bin/conda install matplotlib numpy scipy ipython jupyter mahotas statsmodels scikit-image pandas gdal tqdm
-sqlite-3.13.0-0
+appnope-0.1.0-py27_0
+backports-1.0-py27_0
...

Usefully, it doesn’t just give you the list of what was installed, uninstalled or upgraded – it also gives you the commands you ran! If you want, you can extract these commands with a bit of command-line magic:

cat ~/anaconda3/envs/hotbar/conda-meta/history | grep '# cmd' | cut -d" " -f3-

/Users/robin/anaconda3/bin/conda create --name hotbar python=2.7
/Users/robin/anaconda3/envs/hotbar/bin/conda install matplotlib numpy scipy ipython jupyter mahotas statsmodels scikit-image pandas gdal tqdm
/Users/robin/anaconda3/envs/hotbar/bin/conda install -c conda-forge rasterio

(For reference, the command-line magic gets the content of the history file, searches for all lines starting with # cmd, and then splits the line by spaces and extracts everything from the 3rd group onwards)
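
If you’d rather do the same extraction from Python instead of the shell, a minimal sketch (using the history file path from the example above, which you’d adjust for your own environment) could look like this:

# Print the conda commands recorded in an environment's history file.
# The path is taken from the example above; change it to point at your
# own environment's conda-meta directory.
history_path = '/Users/robin/anaconda3/envs/hotbar/conda-meta/history'

with open(history_path) as history:
    for line in history:
        if line.startswith('# cmd:'):
            # everything after the '# cmd: ' prefix is the command that was run
            print(line[len('# cmd: '):].rstrip())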

I find environment.yml files to be a bit of a pain sometimes (they’re not always cross-platform compatible – see this issue), so this is quite useful as it actually gives me the commands that I ran to create the environment.

June 14, 2016 08:28 PM


qutebrowser development blog

Day 7: Fixing things

Handling callbacks

I did a small experiment today, trying to make it easier to use QtWebEngine functions which expect a callback, like QWebEnginePage::runJavaScript.

Normally, you'd call such a function like this:

def calc(view):
    def cb(data):
        print("data: {}".format(data))
    view.page().runJavaScript('1 + 1', cb)

What I wanted is to (ab)use Python's yield statement instead (as a coroutine), so I can yield a callback to call, and the rest of the function would run after it:

@wrap_cb
def calc(view):
    data = yield view.page().runJavaScript, '1 + 1'
    print("data: {}".format(data))

This worked fine, and the wrap_cb decorator looks like this:

import functools


def wrap_cb(func):
    @functools.wraps(func)
    def inner(*args):
        gen = func(*args)
        arg = next(gen)

        def _send(*args):
            try:
                gen.send(*args)
            except StopIteration:
                pass

        if callable(arg):
            cb_func = arg
            args = []
        else:
            cb_func, *args = arg
        cb_func(*args, _send)

    return inner

In the end I decided not to use it though, because it felt like too much magic.

It definitely was an interesting experiment, and I'm a step closer to wrapping my head around how coroutines work.

QtWebEngine branch

Yesterday, I branched off a qtwebengine branch, and started refactoring everything so there would be a clearly defined interface which hides the implementation details of a single tab in qutebrowser (QWebView or QWebEngineView).

This means even the current QtWebKit backend broke, which is why the work is still in a branch. I got both QtWebKit and QtWebEngine to run enough to show you a nice screenshot, but as soon as you do anything except opening a URL (like scrolling, or going back/forward), qutebrowser crashes.

Today I worked on getting everything running with QtWebKit first again, and expanding the API of a tab. Here's what's working so far:

  • Scrolling
  • Going back/forward
  • :debug-dump-page (needed for tests)
  • :jseval (needed for tests)
  • Caret browsing

Everything apart from that is either broken or untested - but it's a start!

Seeing the first tests pass definitely was a satisfying feeling :)

June 14, 2016 07:23 PM


Weekly Python Chat

Python Variable Scope

Ever wondered why creating global variables is so tricky? Why does it seem like changing variables just works sometimes but other times it doesn't?

We'll answer those questions and a couple of others in this chat on how Python's variable scope rules work.
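
To give a taste of the kind of behavior we'll dig into, here is a small, self-contained illustration (not from the chat itself) of why assignment inside a function changes the rules:

counter = 0

def read_only():
    # Reading a global name without assigning to it "just works"
    print(counter)

def broken():
    # Assigning to 'counter' makes it local to this function,
    # so this line raises UnboundLocalError when called
    counter += 1

def fixed():
    global counter      # explicitly opt in to rebinding the module-level name
    counter += 1

read_only()             # prints 0
fixed()                 # counter is now 1
try:
    broken()
except UnboundLocalError as exc:
    print(exc)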

June 14, 2016 05:00 PM


Wesley Chun

Using the new Google Sheets API


Introduction

In this post, we're going to demonstrate how to use the latest generation Google Sheets API. Launched at Google I/O 2016 (full talk here), the Sheets API v4 can do much more than previous versions, bringing it to near-parity with what you can do with the Google Sheets UI (user interface) on desktop and mobile. Below, I'll walk you through a Python script that reads the rows of a relational database representing customer orders for a toy company and pushes them into a Google Sheet. We'll make two other API calls as well: one to create a new Google Sheet and another to read the rows back out of a Sheet.

Earlier posts demonstrated the structure and "how-to" use Google APIs in general, so more recent posts, including this one, focus on solutions and use of specific APIs. Once you review the earlier material, you're ready to start with authorization scopes then see how to use the API itself.

Google Sheets API authorization & scopes

Previous versions of the Google Sheets API (formerly called the Google Spreadsheets API) were part of a group of "GData APIs" that implemented the Google Data (GData) protocol, an older, less-secure, REST-inspired technology for reading, writing, and modifying information on the web. The new API version falls under the more modern set of Google APIs requiring OAuth2 authorization and whose use is made easier with the Google APIs Client Libraries.

The current API version features a pair of authorization scopes: read-only and read-write. As usual, we always recommend you use the most restrictive scope possible that allows your app to do its work. You'll request fewer permissions from your users (which makes them happier), and it also makes your app more secure, possibly preventing modifying, destroying, or corrupting data, or perhaps inadvertently going over quotas. Since we're creating a Google Sheet and writing data into it, we must use the read-write scope:
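
SCOPES = 'https://www.googleapis.com/auth/spreadsheets'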

Using the Google Sheets API

Let's look at some code that reads rows from a SQLite database and creates a Google Sheet with that data. Since we covered the authorization boilerplate fully in earlier posts and videos, we're going straight to creating a Sheets service endpoint. The API string to use is 'sheets' and the version string to use is 'v4' as we call the apiclient.discovery.build() function:

SHEETS = discovery.build('sheets', 'v4', http=creds.authorize(Http()))

With the SHEETS service endpoint in hand, the first thing to do is to create a brand new Google Sheet. Before we use it, one thing to know about the Sheets API is that most calls require a JSON payload representing the data & operations you wish to perform, and you'll see this as you become more familiar with it. Creating a new Sheet is pretty simple: you don't have to provide anything at all, in which case you'd pass in an empty dict as the body, but a better bare minimum is a name for the Sheet, and that's what data is for:

data = {'properties': {'title': 'Toy orders [%s]' % time.ctime()}}

Notice that a Sheet's "title" is part of its "properties," and we also happen to add the timestamp as part of its name. With the payload complete, we call the API with the command to create a new Sheet [spreadsheets().create()], passing in data in the (eventual) request body:

res = SHEETS.spreadsheets().create(body=data).execute()

Alternatively, you can use the Google Drive API (v2 or v3) to create a Sheet, but you would also need to pass in the Google Sheets (file) MIME type:

data = {
    'name': 'Toy orders [%s]' % time.ctime(),
    'mimeType': 'application/vnd.google-apps.spreadsheet',
}
res = DRIVE.files().create(body=data).execute()  # insert() for v2

The general rule of thumb is that if you're only working with Sheets, you can do all the operations with its API, but if you're creating files other than Sheets or performing other Drive file or folder operations, you may want to stick with the Drive API. You can also use both, or any other Google APIs, for more complex applications. We'll stick with just the Sheets API for now. After creating the Sheet, grab and display some useful information for the user:

SHEET_ID = res['spreadsheetId']
print('Created "%s"' % res['properties']['title'])

You may be wondering: Why do I need to create a Sheet and then make a separate API call to add data to it? Why can't I do this all when creating the Sheet? The answer (to this likely FAQ) is you can, but you would need to construct and pass in a JSON payload representing the entire Sheet—meaning all cells and their formatting—a much larger and more complex data structure than just an array of rows. (Don't believe me? Try it yourself!) This is why we have all of the spreadsheets().values() methods... to simplify uploading or downloading of only values to or from a Sheet.

Now let's turn our attention to the simple SQLite database file (db.sqlite) available from the Google Sheets Node.js codelab. The next block of code just connects to the database with the standard library sqlite3 package, grabs all the rows, adds a header row, and filters out the last two (timestamp) columns:

FIELDS = ('ID', 'Customer Name', 'Product Code', 'Units Ordered',
          'Unit Price', 'Status', 'Created at', 'Updated at')
cxn = sqlite3.connect('db.sqlite')
cur = cxn.cursor()
rows = cur.execute('SELECT * FROM orders').fetchall()
cxn.close()
rows.insert(0, FIELDS)
data = {'values': [row[:6] for row in rows]}

When you have a payload (array of row data) you want to stick into a Sheet, you simply pass in those values to spreadsheets().values().update() like we do here:

SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID,
    range='A1', body=data, valueInputOption='RAW').execute()

The call requires a Sheet's ID and command body as expected, but there are two other fields: the range of cells to write to (or, as in our case, just its "upper left" corner), in A1 notation, and valueInputOption, which indicates how the data should be interpreted: writing the raw values ("RAW") or interpreting them as if a user were entering them into the UI ("USER_ENTERED"), possibly converting strings & numbers based on the cell formatting.

Reading rows out of a Sheet is even easier, the spreadsheets().values().get() call needing only an ID and a range of cells to read:

print('Wrote data to Sheet:')
rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
    range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)

The API call returns a dict which has a 'values' key if data is available, otherwise we default to an empty list so the for loop doesn't fail.

If you run the code (entire script below) and grant it permission to manage your Google Sheets (via the OAuth2 prompt that pops up in the browser), the output you get should look like this:

$ python3 sheets-toys.py # or python (2.x)
Created "Toy orders [Thu May 26 18:58:17 2016]" with this data:
['ID', 'Customer Name', 'Product Code', 'Units Ordered', 'Unit Price', 'Status']
['1', "Alice's Antiques", 'FOO-100', '25', '12.5', 'DELIVERED']
['2', "Bob's Brewery", 'FOO-200', '60', '18.75', 'SHIPPED']
['3', "Carol's Car Wash", 'FOO-100', '100', '9.25', 'SHIPPED']
['4', "David's Dog Grooming", 'FOO-250', '15', '29.95', 'PENDING']
['5', "Elizabeth's Eatery", 'FOO-100', '35', '10.95', 'PENDING']

Conclusion

Below is the entire script for your convenience which runs on both Python 2 and Python 3 (unmodified!):

'''sheets-toys.py -- Google Sheets API demo
created Jun 2016 by +Wesley Chun/@wescpy
'''
from __future__ import print_function
import argparse
import sqlite3
import time

from apiclient import discovery
from httplib2 import Http
from oauth2client import file, client, tools

SCOPES = 'https://www.googleapis.com/auth/spreadsheets'
store = file.Storage('storage.json')
creds = store.get()
if not creds or creds.invalid:
    flags = argparse.ArgumentParser(parents=[tools.argparser]).parse_args()
    flow = client.flow_from_clientsecrets('client_id.json', SCOPES)
    creds = tools.run_flow(flow, store, flags)

SHEETS = discovery.build('sheets', 'v4', http=creds.authorize(Http()))
data = {'properties': {'title': 'Toy orders [%s]' % time.ctime()}}
res = SHEETS.spreadsheets().create(body=data).execute()
SHEET_ID = res['spreadsheetId']
print('Created "%s"' % res['properties']['title'])

FIELDS = ('ID', 'Customer Name', 'Product Code', 'Units Ordered',
          'Unit Price', 'Status', 'Created at', 'Updated at')
cxn = sqlite3.connect('db.sqlite')
cur = cxn.cursor()
rows = cur.execute('SELECT * FROM orders').fetchall()
cxn.close()
rows.insert(0, FIELDS)
data = {'values': [row[:6] for row in rows]}

SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID,
    range='A1', body=data, valueInputOption='RAW').execute()
print('Wrote data to Sheet:')
rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
    range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)

You can now customize this code for your own needs, for a mobile frontend, devops script, or a server-side backend, perhaps accessing other Google APIs. If this example is too complex, check the Python quickstart in the docs, which is way simpler, only reading data out of an existing Sheet. If you know JavaScript and are ready for something more serious, try the Node.js codelab where we got the SQLite database from. That's it... hope you find these code samples useful in helping you get started with the latest Sheets API!

EXTRA CREDIT: Feel free to experiment and try cell formatting or other API features. Challenge yourself as there's a lot more to Sheets than just reading and writing values! 

June 14, 2016 12:25 PM


Caktus Consulting Group

PyCon 2016 Recap

PyCon, beyond being the best community event for Python developers, is also an event that we happily began thinking about eleven months ago. Almost as soon as PyCon 2015 ended, we had the good fortune of planning the look and feel of PyCon 2016 with organizer extraordinaires Ewa Jodlowska, Diana Clark, and new this year, Brandon Rhodes. Our team has loved working with the organizers on the PyCon websites for the past three years now. They’re great people who always prioritize the needs of PyCon attendees, whether that’s babysitting services or a smooth PyCon web experience.

Seeing the PyCon 2016 Artwork

The Caktus team arrived in Portland and were almost immediately greeted with large-scale versions of the artwork our team made for PyCon. Seeing it on arrival, throughout the event, and especially during the keynotes was surreal.

PyCon 2016 sponsor banner

Getting ready for the tradeshow

Our team got ready for the booth first, ahead of the PyCon Education Summit and Sponsor Workshops where we had team members speaking. Here’s the booth before everyone came to grab t-shirts and PyCon tattoos and to learn more about us.

The Caktus booth at PyCon before the festivities begin.

Here’s a closeup of our live RapidPro dashboard too.

The RapidPro live dashboard Caktus built for PyCon.

Supporting our team members

This year, at the PyCon Education Summit, Rebecca Conley spoke about expanding diversity in tech by increasing early access to coding education. Erin Mullaney and Rebecca Muraya spoke at a Sponsor Workshop on RapidPro, UNICEF’s SMS application platform. Sadly, we didn’t get a picture of Rebecca C, but Erin shared this picture of herself and Rebecca M. on Twitter.

Erin and Rebecca M. after giving their RapidPro talk at PyCon.

Tradeshow time!

PyCon, for our booth team, is always intense. Here’s a taste of the crowds across three days.

A busy crowd around the Caktus booth.

The excitement, of course, included a giveaway. Here’s the winner of our BB8 Sphero Ball raffle prize, Adam Porad of MetaBrite, with our Sales Director, Julie White:

PyCon attendee wins the Caktus BB8 Sphero giveaway.

So many talks

With our office almost empty and most of our team at PyCon, there were a lot of talks we went to, too many to list here (don’t worry, we’re going to start highlighting the talks in our annual PyCon Must See Series). We do want to highlight one of the best things about the talks: the representation of women, as described by the PyCon Diversity chair.

Across three packed days, here are some of the topics we got to learn more about: real-time train detection, inclusivity in the tech community, and better testing with less code. With the videos now available, we can still catch all the great talks even if we couldn’t be there.

PyLadies auction

One of the highlights of PyCon is definitely the PyLadies auction. Every year, it’s a raucous event that’s just plain fun. This year, we contributed original concept art for the PyCon 2016 logo. It went for $650 to Jacob Kaplan-Moss, the co-creator of Django. Since we’re a Django shop, there definitely was quite a bit of excited fandom for us.

Jacob Kaplan-Moss holds his winning auction item: Caktus' early concept art for the PyCon 2016 logo

And we can’t leave without a cookie selfie

Whoever came up with the cookie selfie idea is brilliant. Here’s Technical Director Mark Lavin with his cookie selfie.

Hope to see you next year!

In the meantime, make sure to return to our blog for our annual PyCon Must See Series.

June 14, 2016 12:00 PM


Python Software Foundation

The PSF's Growing Success

In honor of the 2016-2017 board of directors' first board meeting today, I wanted to share the PSF's growing success with the public!

For as long as I have been with the PSF, our goal has been to encourage people all around the world to learn and use Python. We have done this by funding conferences, workshops, and dev work. Due to the success of our community, each year more and more people have become aware of the PSF and our mission. This success is hard to measure. More in-depth research could be done on how the PSF's mission has bettered the world, but for now, let us start with a simple, tangible measurement: money.

Turning gut feelings into metrics
Besides our treasurer, Kurt Kaiser, most of us have not paid much attention to these metrics. Even though members of the PSF have received yearly reports from Kurt, sometimes that snapshot does not show the progression across several years. I have been helping with grant management since 2012, and recently it felt like the board mailing list was receiving much more traffic than when I first joined the board as Secretary. In April I decided to scrape https://www.python.org/psf/records/board/resolutions/ to the best of my ability. Luckily, Kurt was able to help me extract that data from the PSF’s accounting system. Below is a graph depicting the data from those reports. The reporting only goes back to 2010 as, prior to that, our accounting was done elsewhere and the transitioned info is not as detailed as the accounting we keep now.

If you would like to see a higher resolution copy, click here:
https://www.dropbox.com/s/od9jy5i2cyi1b1k/psf_history.png?dl=0



I did a comparison of grants disbursed from the 2014-2015 term to the 2015-2016 term and noticed that our disbursements increased by approximately $65,000. When I compared the 2013-2014 term to the 2014-2015 term, I saw that the grant disbursement also increased by approximately $65,000! As I mentioned above, this was surprising to me because I was under the impression that we only recently started receiving many more requests. Therefore, I also plotted the average grant size, which shows a spike in 2013-2014 and has since returned to its former level. In conclusion: we gave out more money between 2013-2014 and 2014-2015, but that money primarily went to larger grants. The total amount we disburse continues to increase, but that money is spread across more grants, explaining the visibly increased volume of requests.
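
As a rough sketch of the kind of comparison described above (the CSV file and column names here are assumptions for illustration, not the PSF's actual accounting export), the per-term totals and average grant sizes could be computed like this:

import csv
from collections import defaultdict

totals = defaultdict(float)   # total dollars disbursed per board term
counts = defaultdict(int)     # number of grants per board term

# Assumed input: one row per grant, with a 'term' column like '2014-2015'
# and an 'amount' column in dollars.
with open('grants.csv') as f:
    for row in csv.DictReader(f):
        totals[row['term']] += float(row['amount'])
        counts[row['term']] += 1

for term in sorted(totals):
    average = totals[term] / counts[term]
    print('%s: total $%.0f, average grant $%.0f' % (term, totals[term], average))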

A growing trend?
Personally, I feel this is a huge milestone for the PSF and our community. If we continue in this pattern, the 2016-2017 term might give out over $300,000 USD to fund Python education all around the world! I am astonished by this comparison, especially since when I started, the disbursements totaled a little over $40,000. If this is an indication, we will have to continue expanding our staff as well as look into software that can help us better manage these tasks!

For now though, let's keep up the great work!

June 14, 2016 11:34 AM


Python Insider

Python 2.7.12 release candidate available

The first release candidate of Python 2.7.12, the next bugfix release in the Python 2.7.x series, is now available for download.

June 14, 2016 02:06 AM


Omaha Python Users Group

June Meeting Canceled

Due to several complications, the June meeting has been canceled.

Jeff M. will be presenting in July so mark your calendar.

June 14, 2016 01:24 AM


Python Insider

Python 3.6.0 alpha 2 preview release is now available

Python 3.6.0a2 has been released.  3.6.0a2 is the second of four planned alpha releases of Python 3.6, the next major release of Python.  During the alpha phase, Python 3.6 remains under heavy development: additional features will be added and existing features may be modified or deleted.  Please keep in mind that this is a preview release and its use is not recommended for production environments.  Python 3.6.0 is planned to be released by the end of 2016.  The next alpha, 3.6.0a3, is planned for 2016-07-11.

June 14, 2016 12:06 AM

June 13, 2016


Ian Ozsvald

Will we see “[module] on Python 3.4+ is free but only paid-support for Python 2.7”?

I’m curious about the transition in our ecosystem from Python 2 to Python 3. On stage at our monthly PyDataLondon meetups I’m known to badger folk to take the step and upgrade to reduce the support burden on developers. The transition gathers pace but it still feels slow. I’ve noted my recommendations for moving to Python 3+ at the end. See also the reddit discussion.

I’m wondering – when will we see the point where open source projects say “We support Python 3.x for free but if you want bugs fixed for Python 2.7, you’ll have to pay“? I’m not saying “if”, but “when”. There’s already one example below and others will presumably follow.

In the last couple of years a slew of larger projects have dropped or are dropping support for Python 2.6 – numpy (some discussion), pandas, scipy, matplotlib, NLTK, astropy, dask, ipython, django, numba, twisted, scrapy. Good – Python 2.6 was deprecated when 2.7 was released in 2010 (that’s 6 years ago!).

The position of the matplotlib and django teams is clearly “Python 2.7 and Python 3.4+”. Django states that Python 2.7 will be supported until the 2020 sunset date:

“As a final heads up, Django 1.11 is likely to be the last version to support Python 2.7 as it will be supported until the end of Python 2 upstream support in 2020. We’ve adopted a Python version support policy…”

We can expect the larger projects to support legacy userbases with a mix of Python 2.7 and 3.4+ for 3.5 years (at least until 2020). After this we should expect projects to start to drop 2.7 support, some (hopefully) more aggressively than others.

What about smaller projects? Several have outright dropped Python 2.7 support already – errbot (2016 is the last Python 2.7-supported year), nikola, python-thumbnails – or never supported it – wordfreq, featherweight. Which others have I missed? Update: JupyterHub (cheers Thomas) too.

More interestingly David MacIver (of Hypothesis) stated a while back that he’d support Python 2.7 for free but Python 2.6 would be a paid support option. He’s also tagged (regardless of version) a bunch of bugs that can be fixed for a fee. Viewflow is another – Python 3.4 is free for non-commercial use but a commercial license or support for Python 2.7 requires a fee. Asking for money to support old, PITA or difficult options seems mightily sensible. I guess we’ll see this first for tools that have a good industrial userbase who’d be used to paying for support (like Viewflow).

Aaron Meurer (lead dev on SymPy) has taken the position that library leaders should pledge for a switch to Python 3.x only by 2020. The pledge shows that scikit-bio is about to go Python 3-only and that IPython 6.x+ will be Python 3 only (from 2017). Increasingly we’ll see new libraries adding the shiny features for their Python 3 branch only.

What next? I imagine most new smaller projects will be Python 3.4+ (probably 3.5+ only soon), they’ll have no legacy userbase to support. They could widen their potential userbase by supporting Python 2.7 but this window only exists for 3 years and those users will have to upgrade anyhow. So why bother going backwards?

Once users notice that cooler new toys are Python 3.4+ only they’ll want to upgrade (e.g. NetworKit is Python 3.3+ only for high volume graph network analysis). They’ll only hold back if they’re supporting legacy internal systems (which will be the case for an awful lot of people). We’ll see this more as we get closer to 2020. What about after 2020?

I guess many companies will be slow to jump to Python 3 (it’d take a lot of effort for no practical improvement), so I’d imagine separate groups will start to support Python 2.7 libraries as forks. Hopefully the main library developers will drop support fairly quickly, to stop open source (cost-free) developers having a tax on their time supporting both platforms.

Separate evidence – Drupal 6 adopted a commercial-support-for-old-versions policy (thanks @chx). It is also worth noting that Ubuntu 16.04 LTS ships without Python 2. Microsoft and Brett Cannon have discussed the benefits of moving to Python 3+ recently.

My recommendations (coming from a Senior Industrial Data Scientist with 15+ years commercial experience and 10+ years using Python):

Graham and I did a lightning talk on jumping to Python 3 six months back, there’s a lot of new features in Python 3.4+ that will make your life easier (and make your code safer, so you burn less time hunting for problems). Jake also discussed the general problem for scientists back in 2013, it’ll be lovely when we get past this (now-very-boring) discussion.


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

June 13, 2016 01:25 PM

Statistically Solving Sneezes and Sniffles – a Work in Progress Report at PyDataLondon 2016

This is a Work in Progress report, presented this morning at my PyDataLondon 2016 conference. A group of 4 of us are modelling a year’s worth of self-reported data from my wife around her allergies – we’re learning to model which environmental conditions cause her sneezes such that she might have more control over her antihistamine use. Join the email updates list for low-volume updates about this project.

I really should have warned my audience that I was about to photograph them (honest – they seemed to enjoy the talk!):

Emily created the Allergy Tracker (open src) iPhone app a year ago; she logs every sneeze, antihistamine, alcoholic drink, runny nose and more. She’s sneezed for 20 years and, by heck, we wondered if we could apply some Data Science to the problem to see if her symptoms correlate with weather, food and pollution. I’m pleased to say we’ve made some progress – it looks like humidity is connected to her propensity to use an antihistamine.

This talk (co-presented with Giles Weaver) discusses the data, the app, our approach to analysis and our tools (including Jupyter, scikit-learn, R, Anaconda and Seaborn) to build a variety of machine learned models to try to model antihistamine usage against external factors. Here are the slides:

Now we’re moving forward to a couple of other participants (we’d like a few more to join us – if you’re on iOS and in London and can commit to 3 months consistent usage we’ll try to tell you what drives your sneezes). We also have academic introductions so we can validate our ideas (and/or kick them into the ground and try again!).

This is the second full day of the conference – we have 330 attendees and we’ve had 2 great keynote speakers and a host of wonderful talks and tutorials (yesterday). Tonight we have our conference party. I’m super happy with how things are progressing – many thanks to all of our speakers, volunteers, Bloomberg and our sponsors for making this work so well.

Update – featured in Mode Analytics #23.


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight, sign-up for Data Science tutorials in London. Historically Ian ran Mor Consulting. He also founded the image and text annotation API Annotate.io, co-authored SocialTies, programs Python, authored The Screencasting Handbook, lives in London and is a consumer of fine coffees.

June 13, 2016 01:19 PM


Doug Hellmann

socketserver — Creating Network Servers — PyMOTW 3

The socketserver module is a framework for creating network servers. It defines classes for handling synchronous network requests (the server request handler blocks until the request is completed) over TCP, UDP, UNIX streams, and UNIX datagrams. It also provides mix-in classes for easily converting servers to use a separate thread or process for each request. … Continue reading socketserver — Creating Network Servers — PyMOTW 3
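
As a quick illustration of the kinds of classes the module provides (a sketch of my own, not taken from the PyMOTW article), a threaded TCP echo server needs only a request handler plus the ThreadingMixIn:

import socketserver

class EchoHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # self.request is the TCP socket connected to the client
        data = self.request.recv(1024)
        self.request.sendall(data)

# ThreadingMixIn gives each incoming request its own thread
class ThreadedTCPServer(socketserver.ThreadingMixIn, socketserver.TCPServer):
    pass

if __name__ == '__main__':
    server = ThreadedTCPServer(('localhost', 9999), EchoHandler)
    server.serve_forever()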

June 13, 2016 01:00 PM


Mike Driscoll

PyDev of the Week: Georg Brandl

This week we welcome Georg Brandl (@birkenfeld) as our PyDev of the Week. Georg is a core developer of the Python language and has been for over 10 years. He is also a part of the Pocoo team. You can see what projects get him excited by checking out his Github profile. Let’s take a few moments to learn more about him!

Can you tell us a little about yourself (hobbies, education, etc):

I studied physics and during my PhD project was working at a large German research center, where software is a big part of conducting research, and external scientists come and use our facility for their own experiments. Meanwhile I’m still at the same place, but working on that software as my main job. We’re using Python extensively, basically giving our guests a highly fancified Python shell as the main user interface.

Why did you start using Python?

I was near the end of high school and looking for something new after getting tired of Windows and working with the early .NET and PHP. Somehow the simplicity and the friendly community were very appealing. It was the time when everybody started writing Wikis, so I developed my own in Python and got something working faster than I thought possible. I started contributing to the CPython project itself quickly, and became a core developer about 10 years ago.

What other programming languages do you know and which is your favorite?

As a CPython developer, I got to know C pretty well, although I often wish I hadn’t. It’s necessary and helpful quite often (especially since I’m working in a heavily Unix-oriented environment where .NET and Java codebases are few and far between). The situation is reversed with Haskell: I learned it a while ago and loved its concepts and beauty, but never got to use it anywhere really. (That’s probably a good thing because it didn’t ruin the beauty with messy day-to-day details.)

Recently I got interested in Rust, and love it because it combines a lot of the beautiful concepts of Haskell with the pragmatic choices of Python, and has the static typing that every programmer working on large Python codebases dreams of at least every now and then. It’s still a young language, but I think it can complement Python (still my favorite) very well.

What projects are you working on now?

My open-source time right now is pretty limited, but I try to continue helping development along in my projects — CPython, Sphinx, Pygments mainly. Especially for Sphinx, which was on my shoulders alone for a long time, I am very happy that for a few years now we have an active maintenance team.

Which Python libraries are your favorite (core or 3rd party)?

I’ll have to say I love the whole scientific ecosystem, and I’d like to highlight numpy in particular; these guys are doing amazing work and numpy often is just taken for granted. Also Matplotlib, not so much because of visibility but because it has been invaluable for me during my scientific career. Finally, I love pytest and its plugins for writing test suites quickly and with minimal tedium.

Is there anything else you’d like to say?

Like probably a lot of other core developers I’d like to invite everyone to see if they’d be interested in “giving back” (really, learning a whole lot while contributing to Python). Come by the core-mentorship list or on IRC, and we’ll be happy to chat and work with you! When we switch to Git/Github later this year, it’ll be even easier 🙂

Thanks for doing the interview!

June 13, 2016 12:30 PM


Emmanuel Leblond

Learning async programming the hard way

This weekend, I noticed diesel.io is no longer reachable. I guess it's time to organize this project's funeral.

First, a word about the deceased, from its PyPI page:

diesel is a framework for easily writing reliable and scalable network applications in Python. It uses the greenlet library layered atop asynchronous socket I/O in Python to achieve benefits of both the threaded-style (linear, blocking-ish code flow) and evented-style (no locking, low overhead per connection) concurrency paradigms. It’s design is heavily inspired by the Erlang/OTP platform.

Announced in 2010, the project was well received (+500 stars on github) but eventually failed to carve out its place in the crowded world of Python async frameworks.

In late 2013, Bump, the company that developed diesel for its own needs, was bought by Google; its products were discontinued and the team merged in.

This put a neat stop to the project, which entered hibernation: diesel's contributions

It was only a year later that I got interested in the project.

Why? Maybe it's because I've always had a weakness for lost causes (to paraphrase Rhett Butler).

What really caught my eye was that diesel offers an implementation of Flask and its ecosystem (leaving aside the extensions that make use of the network, whose blocking nature couldn't play nicely with an async framework).

The drawback, however, was that diesel was Python 2 only and, as I stated earlier, its community was sparse, if not vanished.

Given that I had some free time (that's the reason I started researching async frameworks in the first place), I chose the hard way: porting the project to Python 3!

Who knows? Maybe seeing someone investing time in their project could re-motivate its creators, and with some posts & benchmarks on reddit we could even bring new people in...

Dowski, the maintainer of the project, was really friendly and helpful and shared my hopes for the project. So I started working on the port, and a month later the work was done.

However, my attempt to shake the sleeping giant fell short: too little free time, missing motivation, and Diesel4 (the new major version with Python 3 support) didn't get released on PyPI; in fact it didn't even make its way to master :'-(

Put this way, it seems harsh: I worked hard and no one will ever use my work (not even me!). But in fact it's quite the opposite!

One of the biggest grievances Armin Ronacher had with Python 3 is the str/bytes separation, which makes porting low-level code pretty hard. Well, diesel was full of such things (protocols, transport, socket communication, etc.), so porting the code was far more than just a 2to3 pass; I actually had to read and understand all the code!

This project gave me the opportunity to go deep into async programming, to understand the reactor pattern and hack on an implementation of it, to discover greenlet, to play with the redis and mongodb protocols, and to discover that someone finally did something to replace the awful logging module.

With asyncio rising as the new unified standard for async programming in Python, diesel is doomed no matter what. So it's time to move on and let it rest in peace, but I would suggest you go pay your respects to it anyway; may its code teach you as much as it taught me.


PS: After 2 years of inactivity, Dowski's blog got a new entry:

I think it's time to make things again.

Dowski

Is it really the end of diesel?

June 13, 2016 12:00 AM


hypothesis.works articles

Testing Configuration Parameters

A lot of applications end up growing a complex configuration system, with a large number of different knobs and dials you can turn to change behaviour. Some of these are just for performance tuning, some change operational concerns, some have other functions.

Testing these is tricky. As the number of parameters goes up, the number of possible configurations goes up exponentially. Manual testing of the different combinations quickly becomes completely unmanageable, not to mention extremely tedious.

Fortunately, this is somewhere where property-based testing in general and Hypothesis in particular can help a lot.
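
For example, a single Hypothesis property can sweep across the whole configuration space. In this minimal sketch the knob names, make_app, and is_healthy are illustrative assumptions rather than a real application:

from hypothesis import given, strategies as st

# Describe the configuration space as a strategy: every draw is one
# complete combination of knobs.
configs = st.fixed_dictionaries({
    'cache_size': st.integers(min_value=0, max_value=10000),
    'use_compression': st.booleans(),
    'log_level': st.sampled_from(['DEBUG', 'INFO', 'WARNING', 'ERROR']),
})

@given(configs)
def test_app_starts_with_any_config(config):
    app = make_app(config)      # assumed factory for the application under test
    assert app.is_healthy()     # assumed invariant that must hold for every config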

Read more...

June 13, 2016 12:00 AM

June 12, 2016


Python Insider

Python 3.5.2rc1 and Python 3.4.5rc1 are now available

Python 3.5.2rc1 and Python 3.4.5rc1 are now available for download.

You can download Python 3.5.2rc1 here, and you can download Python 3.4.5rc1 here.

June 12, 2016 11:18 PM


Vasudev Ram

Driving Python function execution from a text file

By Vasudev Ram



Beagle Voyage image attribution

Dynamic and adaptive systems are interesting, since they are more flexible with respect to their environment. Python itself is a dynamic language. Here is a technique which can make Python programs even more dynamic or adaptive.

This post shows a simple technique for driving, or dynamically configuring, via an external text file, which specific functions (out of some possible ones) get executed in a program. As such, you can consider it a sort of configuration technique [1] for externally configuring the behavior of programs at run time, without needing to change their source code each time.

[1] There are, of course, many other configuration techniques, some of which are more powerful. But this one is very easy, since it only needs a dict and a text file.

Here is the code for the program, func_from_file.py:
# Driving Python function execution from a text file.
# Author: Vasudev Ram:
# https://vasudevram.github.io,
# http://jugad2.blogspot.com
# Copyright 2016 Vasudev Ram

def square(n): return n * n
def cube(n): return n * square(n)
def fourth(n): return square(square(n))

# 1) Define the fns dict literally ...
#fns = {'square': square, 'cube': cube, 'fourth': fourth}
# 2a) ... or programmatically with a dict comprehension ...
fns = { fn.func_name : fn for fn in (square, cube, fourth) }
# OR:
# 2b)
# fns = { fn.__name__ : fn for fn in (square, cube, fourth) }
# The latter approach (2a or 2b) scales better with more functions,
# and reduces the chance of typos in the function names.

with open('functions.txt') as fil:
    for line in fil:
        print
        line = line[:-1]
        if line.lower() not in fns:
            print "Skipping invalid function name:", line
            continue
        for item in range(1, 5):
            print 'item: ' + str(item) + ' : ' + line + \
                '(' + str(item) + ') : ' + str(fns[line](item)).rjust(3)

And here is an example text file used to drive the program, functions.txt:

$ type functions.txt
fourth
cube
fourth_power

Running the program with:

python func_from_file.py

gives this output:
$ python func_from_file.py

item: 1 : fourth(1) : 1
item: 2 : fourth(2) : 16
item: 3 : fourth(3) : 81
item: 4 : fourth(4) : 256

item: 1 : cube(1) : 1
item: 2 : cube(2) : 8
item: 3 : cube(3) : 27
item: 4 : cube(4) : 64

Skipping invalid function name: fourth_power

The map image at the top of the post is of the Voyage of the Beagle, the ship in which Charles Darwin made his expedition. His book, On the Origin of Species, "is considered to be the foundation of evolutionary biology", and "Darwin's concept of evolutionary adaptation through natural selection became central to modern evolutionary theory, and it has now become the unifying concept of the life sciences."

Note to my readers: In my next post, I'll continue on the topic of randomness, which I started on in this post:

The many uses of randomness.

- Vasudev Ram - Online Python training and consulting

Signup to hear about my new courses and products.

My Python posts     Subscribe to my blog by email

My ActiveState recipes

June 12, 2016 04:04 AM


Podcast.__init__

Episode 61 - Sentry with David Cramer

Visit our site to listen to past episodes, support the show, join our community, and sign up for our mailing list.

Summary

As developers we all have to deal with bugs sometimes, but we don’t have to make our users deal with them too. Sentry is a project that automatically detects errors in your applications and surfaces the necessary information to help you fix them quickly. In this episode we interviewed David Cramer about the history of Sentry and how he has built a team around it to provide a hosted offering of the open source project. We covered how the Sentry project got started, how it scales, and how to run a company based on open source.

Brief Introduction

Linode Sponsor Banner

Use the promo code podcastinit20 to get a $20 credit when you sign up!

sentry-horizontal-black.png

Stop hoping your users will report bugs. Sentry’s real-time tracking gives you insight into production deployments and information to reproduce and fix crashes. Use the code podcastinit at signup to get a $50 credit!

Interview with David Cramer

  • Introductions
  • How did you get introduced to Python? - Chris
  • What is Sentry and how did it get started? - Tobias
  • What led you to choose Python for writing Sentry and would you make the same choice again? - Tobias
  • Error reporting needs to be super lightweight in order to be useful. What were some implementation challenges you faced around this issue? - Chris
  • Why would a developer want to use a project like Sentry and what makes it stand out from other offerings? - Tobias
  • When would someone want to use a different error tracking service? - Tobias
  • Can you describe the architecture of the Sentry project, both in terms of the software design and the infrastructure necessary to run it? - Tobias
  • What made you choose Django versus another Python web framework, and would you choose it today? - Chris
  • What languages and platforms does Sentry support, and how does a developer integrate it into their application? - Tobias
  • One of the big discussions in open source these days is around maintainability, and a common approach is to have a hosted offering to pay the bills for keeping the project moving forward. How has your experience been with managing the open source community around the project in conjunction with providing a stable and reliable hosted service for it? - Tobias
  • Are there any benefits to using the hosted offering beyond not having to manage the service on your own? - Tobias
  • Have you faced any performance challenges implementing Sentry's server side? - Chris
  • What advice can you give to people who are trying to get the most utility out of their usage of Sentry? - Tobias
  • What kinds of challenges have you encountered in the process of adding support for such a wide variety of languages and runtimes? - Tobias
  • Capturing the context of an error can be immensely useful in finding and solving i

Keep In Touch

Picks

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA
June 12, 2016 01:43 AM

June 11, 2016


Weekly Python StackOverflow Report

(xxiii) stackoverflow python report

These are the ten most rated questions at Stack Overflow last week.
Between brackets: [question score / answers count]
Build date: 2016-06-11 18:34:52 GMT


  1. How can I make sense of the `else` statement in Python loops? - [137/13]
  2. How to classify blurry numbers with openCV - [9/2]
  3. How to generate permutations of a list without “moving” zeros. in Python - [7/3]
  4. Bytecode optimization - [7/2]
  5. sorting points to form a continuous line - [7/2]
  6. How do Python Recursive Generators work? - [6/3]
  7. Why must we declare that a variable is a global variable BEFORE the assignment of that variable? - [6/3]
  8. Modifying a dict during iteration - [6/2]
  9. Convert exception error to string - [6/1]
  10. Selecting values from a JSON file in Python - [6/1]

June 11, 2016 06:35 PM


Python Software Foundation

Unconference Day at CubaConf

Note: This is the third post on my trip in April to Havana, Cuba to attend the International Open Software Convention, CubaConf.
The second day of CubaConf, structured as an unconference, was equally as lively as the first. In an unconference, the audience actively participates by proposing topics and then voting on which ones will be presented. This was especially effective as a way of conforming to the conference’s purpose to explore ways in which open software can be most effectively used in poorer nations, like Cuba, and how it can contribute to development. I was impressed by the number of audience members who came prepared to give a talk and who lined up at the front of the room to pitch their ideas. The suggestions were recorded on a white board, and at the end of the session we voted for the agenda.
Proposed unconference topics
Before moving on to the unconference talks, we heard an already scheduled keynote. Etiene Dalcol, a Brazilian software engineer, told us of her experiences and observations within the tech scene in Brazil. 
Early on in her career, Dalcol created a web framework, Sailor, in ten days. She said that although it was lousy, people began to contribute and to request more features, indicating their thirst for locally grown tech. Sailor is now much improved and popular, and will be participating for its second time in Google’s Summer of Code.
Dalcol then talked about her experiences working on the programming language, Lua. She used the history of Lua to illustrate what she sees as a hindrance to tech development in Latin America. Lua, created in 1993 in Brazil, was never marketed in Brazil. In fact, it wasn’t until 2015 that the first Portuguese language book on Lua was published. According to Dalcol, this type of suppression of local efforts contributes to a belief, prevalent even among Brazilian engineers, that Silicon Valley tech is superior. 
Etiene Dalcol
To combat this situation, her advice for Latin American developers is to stay true to their own unique perspectives and needs; to develop software that will solve local problems and to offer products that reflect their own cultures–to do what can’t be done anywhere else. In fact, according to Dalcol, this approach will produce software that will in turn be of benefit to other cultures. She cited advice she had found useful as a musician--don't play Chopin to Europeans; rather offer them what you know but they don't.
Dalcol also spoke of her experiences as a woman in tech, a theme that was addressed head-on later in the conference in its final keynote on women in open source (more on this later). 
Dalcol’s advice was clearly reflected in many of the afternoon’s unconference talks. We heard from participants about projects that were actively benefiting their communities. 
One such talk, by a developer from Costa Rica, was about the use of Open Street Map. OSM allows collaborators to create interactive maps geared to specific purposes. Examples included Ecuador’s rapid mapping of areas of damage caused by the devastating earthquake that had just happened April 26. By the following Sunday,  just five days after the quake, nine cities had been completely mapped, providing crucial information for emergency responders, survivors, and reconstruction workers.
Other OSM projects discussed included a public transport map in Nicaragua and a map of  humanitarian services in Costa Rica. There have even been open street mapping parties in Indonesia to develop useful maps. The speaker invited participants to join his open source mapping workshop to be held the next day.
Open Street Mappers
Another day two talk, by Tony Wasserman on evaluating technology for business needs, was well received by many who had entrepreneurial intent and appreciated a run-down of the factors that lead businesses to adopt some software products over others.
At day’s end, we gathered in the main room for Lightning Talks and announcements. It was clear that the day had generated a great deal of excitement that would carry over to the next day’s sprints and to projects that would continue beyond the conference.

I would love to hear from readers. Please send feedback, comments, or blog ideas to me at [email protected].

June 11, 2016 06:06 PM


Dan Stromberg

from-table

I've put from-table here.

It's a small python3 script that knows how to extract one or more HTML tables as CSV data.  You can give it a URL or a file.  It can extract to stdout or to a series of numbered filenames (one file per table).
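
The script itself is linked above. Purely to illustrate the general idea (this is not Dan's code), a similar extraction can be sketched with pandas; the URL below is just a placeholder:

# Pull every <table> from a page and write each one out as a numbered CSV file.
import pandas as pd

url = 'https://example.com/page-with-tables'   # placeholder URL
tables = pd.read_html(url)                     # one DataFrame per HTML table
for i, table in enumerate(tables):
    table.to_csv('table_%d.csv' % i, index=False)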

I hope folks find it useful.

June 11, 2016 05:48 PM


PythonClub - A Brazilian collaborative blog about Python

Python asyncio Course: Lesson 01 - Iterators and Generators

Understanding the concepts of Iterators and Generators.

First lesson: https://www.youtube.com/watch?v=xGoEpCaachs

Slides: http://carlosmaniero.github.io/curso-asyncio/aula01/

GitHub: http://github.com/carlosmaniero/ http://github.com/carlosmaniero/curso-asyncio

http://carlosmaniero.github.io/
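
As a minimal, stand-alone reminder of the two concepts this first lesson covers (this snippet is not taken from the course materials):

# An iterator is any object that implements __iter__ and __next__.
class CountDown:
    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self

    def __next__(self):
        if self.current <= 0:
            raise StopIteration
        self.current -= 1
        return self.current + 1

# A generator function produces the same behavior with far less code.
def count_down(start):
    while start > 0:
        yield start
        start -= 1

print(list(CountDown(3)))    # [3, 2, 1]
print(list(count_down(3)))   # [3, 2, 1]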

June 11, 2016 03:30 PM


codeboje

How to Build a nice looking Product Box for Amazon Affiliate Sites

Lately, I've been venturing into building niche sites. Besides the business side, I am also always curious what techniques others use to create their sites.

I noticed a lot of them use Wordpress, and when they invest money, people even buy expensive Wordpress themes and plugins. There are some good ones out there, for sure, but when you don't use Wordpress you are pretty much stuck doing it yourself.

But one can learn a lot from these plugins, e.g. Thrive or Amazon Simple Affiliate.

One of those things is how to display products nicely. Inspired by that, a side note on HarryVSInternet about Embed.ly, and the fact that I use Amazon, I started to build a small helper: the Amazon Product Box Builder.

amzn_product_box.jpg

It is free to use and does not even require a backlink for the generated box. But I'd appreciate any feedback and, of course, don't forget to share it. You need a site using Bootstrap 3, or you can style the box yourself.

You can just use the tool and leave the article, or if you'd like to know about the technical side, continue reading.

The Technical Part

The little helper consists of a server side that queries Amazon and builds the box, and a client side with the input form and the display of the result.

You need an Amazon Associate account aka Affiliate account and must also be signed up with the Amazon Advertising API (part of AWS). Sadly you have to use your root AWS account for the Advertising API; an IAM account didn't work for me.

Server Side

The server side is written in Python, Flask, Jinja and Amazon Simple Product API. It has just one single route. It parses the input values from our form, translates everything into a query for Amazon, and then renders the product we got with the template.

@app.route('/', methods=['POST'])
def buildbox():
    try:
        asin =  request.form['asin']
        associatetag = request.form['associatetag']
        if not associatetag:
            associatetag = ASSOCIATE_TAG
        marketplace = request.form['marketplace']

        amazon = AmazonAPI(AMAZON_ACCESS_KEY, AMAZON_SECRET_KEY, associatetag, region = marketplace.upper())
        product = amazon.lookup(ItemId=asin, )
        x = product.get_attribute('ProductGroup')

        link_name = 'Buy on Amazon'
        if marketplace == 'de':
            link_name = 'Bei Amazon kaufen'
        ret_value = render_template('std_box.html', product=product, buy_link_title= link_name)
        status = 200

    except urllib.error.HTTPError as e:

        if e.code == 400:
            status = 401
            ret_value = 'Amazon return a 400. Is your associate tag valid for the selected market?'
        else:
            status = 500
            ret_value  = 'Sorry, an error happend. Please try later again.'
    except AsinNotFound:
        status = 404
        ret_value = 'ASIN not found'

    resp = Response(ret_value, status=status)
    # set add header for testing
    return resp

I reduced the code to the essentials and think it explains itself.

Client Side

The client is a single HTML page with some Javascript handling the form input, making an AJAX call to the backend and displaying its result. I used Bootstrap 3 to make the box and the site look good.

It uses jQuery, Bootstrap 3 and clipboard.js for the copy-to-clipboard functionality. I used the Bootstrap starter example.

$(function(){

  new Clipboard('#copy2clip');

  $("#apbbform").submit(function(e){
    $('#errorBox').hide();
    $('#errorMsg').empty();

    $.ajax({
      url: '/amzn_box/',
      data: $('form').serialize(),
      type: 'POST',
      success: function(data,textStatus,jqXHR ){
        console.log(data);
        $('#result_code').text(data);
        $('#result_display').html(data);
      },
      error: function(jqXHR, textStatus, error){
        $('#errorMsg').text(jqXHR.responseText);
        $('#errorBox').show();
        console.log(jqXHR.responseText);
      }
    });
    e.preventDefault();
  });
});

Conclusion

It is not that time-consuming to create small tools. I built the tool in a few hours, and the only hurdle was installing it on my WebFaction account last night. lxml didn't want to install, and I had to find the cause. The fix was to install an older version which was compatible with the server I am still on.

June 11, 2016 03:16 PM