
Aaron Parecki

  • Automatically uploading screenshots to a Micropub Media Endpoint

    Wed, Nov 2, 2016 12:40pm -07:00

    I often want to share screenshots in text chat from my computer, and I don't like sharing Dropbox links since they load a huge HTML+JS app just to show an image.

    Since I already have a Micropub Media Endpoint that apps like Quill use to upload photos, I thought that would be a good place to also share screenshots from. The Media Endpoint is a simple HTTP API. It accepts a file upload, and returns the URL to the file in the Location header.

    OSX has what's called "Folder Actions" where the system will trigger a script to run any time a file is added to a folder. I created an Automator script that will upload the newly added file to the media endpoint, and copy the resulting URL onto the clipboard.

    The curl command I used is below. Of course you'll want to change the values of the access token and endpoint URL for yourself:

    curl -i -F "file=@$1" -H "Authorization: Bearer xxx" https://media.aaronpk.com/endpoint.php \
      | grep Location: | sed -En 's/^Location: (.+)/\1/p' | tr -d '\r\n' | pbcopy

    To activate the Folder Action, save this in:

    ~/Library/Workflows/Applications/Folder Actions/

    Now, open Finder to the folder where your screenshots are saved, and right click the folder and navigate to "Folder Actions Setup".

    Attach the script to the folder in the window that appears. Your script should appear in the list once you've saved it to your "Folder Actions" folder in ~/Library.

    Now, whenever you take a screenshot (or drop any other file in that folder), the Automator workflow will run and will upload your image to your media endpoint! After a few seconds, you should see a popup notification that the URL to your file is on your clipboard.


    Portland, Oregon
    2 replies 1 mention
    #screenshot #micropub #media-endpoint #indieweb
    Wed, Nov 2, 2016 12:40pm -07:00
  • Brainstorming "verified" IndieWeb checkins

    Fri, Oct 14, 2016 6:31pm -07:00

    Checkins can easily be faked. The Foursquare app does a reasonable job of preventing fake (and accidentally fake) checkins, but it's still possible. If checkins weren't posted on Foursquare, but instead on each person's own website, the possibility of fake checkins would be much greater. What would it look like to have a way for a venue to know (and republish) checkins that it knows were real?

    The checkin post would need to include some piece of information that could only have been discovered by physically being at the venue.

    What if:

    • The venue has a website that receives Webmentions and supports verified checkins.
    • The venue has a TV screen inside that shows who is checked in there. (It's not that crazy, I promise)

    The venue's TV screen always displays a 4-digit code and instructs people to include that code in their checkin post on their website.

    When someone posts a checkin on their site, they link to the venue URL, so their site sends a Webmention to the venue's website.

    The venue's Webmention receiver then looks at the checkin post, and verifies that the special code is included and that it has not already been used.

    By providing people this code in the physical location, there is no way for someone to know the code unless they were actually there!
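    The venue-side verification step is small enough to sketch. This is a hypothetical illustration (the function and variable names are mine, not from any spec), assuming the venue's Webmention receiver has already fetched the HTML of the checkin post:

```python
# Hypothetical sketch of the venue-side check: the receiver looks for the
# code currently shown on the venue's screen in the fetched checkin post,
# and rejects codes that have already been claimed.

def verify_checkin(post_text, displayed_code, used_codes):
    """Return True if the post contains the displayed code and that code
    has not been claimed by an earlier checkin."""
    if displayed_code not in post_text:
        return False  # the poster never saw the screen
    if displayed_code in used_codes:
        return False  # the code was already claimed
    used_codes.add(displayed_code)  # one checkin per code
    return True

used = set()
post = '<p>Checked in at <a href="https://venue.example/">The Venue</a> (code 4821)</p>'
print(verify_checkin(post, "4821", used))  # True: code present and unused
print(verify_checkin(post, "4821", used))  # False: code already claimed
```

    Rotating the displayed code periodically would also bound how long a leaked code stays useful.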

    Portland, Oregon
    2 replies 1 mention
    #checkins #indieweb #foursquare
    Fri, Oct 14, 2016 6:31pm -07:00
  • Multi-Camera Portable Live Video Rig

    Sat, Oct 8, 2016 7:40pm -07:00

    Here's a walkthrough of my multi-camera live switching rig that I use to record conferences and events. I've spent a while finding small enough parts so that the entire setup can fit on my bike. I intentionally created this rig with dedicated components so that no computer is required for any part of this workflow.

    Video Switcher

    The heart of this whole operation is my favorite switcher, the Roland V-1HD. This little device is a powerhouse! It has 4 HDMI inputs (so you need an SDI to HDMI converter to get an SDI cable back into it), and analog audio inputs.

    It can do picture-in-picture or split screen composites, which is useful if you want to combine the presenter's slides with the video. Kind of amazing for a device of this size and price point.

    The analog inputs are super useful as well, since I can run the audio from the Zoom H6 mixer directly into this device, which gets the good audio feed onto the recording and HDMI output that goes to the livestream.

    Cameras

    I use three Canon Vixia HF G20 cameras. These are relatively inexpensive camcorder-style cameras with a mini HDMI output. They also record to an SD card in the camera which I use as a backup recording.

    I usually have one camera right next to me, connected via the short cable that comes with the camera, and one camera about 20 feet behind me, connected with a 25 foot mini HDMI to HDMI cable. The camera produces a strong enough HDMI signal to carry the 25 feet, but that's about the maximum length that will work.

    The third camera I connect with either an HDMI to SDI converter so that I can place it 100+ feet away, or via wireless HDMI. The wireless HDMI transmitter adds a slight delay, so it's not ideal, but is often easier to get a camera in a far corner of the room that way.

    HDMI Scaler

    Capturing the presenters' computers has always been a challenge. The video switcher requires all inputs have the same resolution, so since my cameras are running at 1080, I need the computers to be outputting that same signal as well. I've had some amount of luck telling the presenters to switch their display settings to 1080 when they plug in to the projector, but as you can imagine, that is not always successful.

    The Decimator MD-HX Scaler will convert an HDMI input at whatever resolution the computer outputs into a consistent 1080 signal, and output it on both HDMI and SDI. This is perfect, as it lets me pass the HDMI signal through to the projector, and run a long SDI cable to the video switcher.

    Audio

    The Zoom H6 six-track recorder is the audio hub of this setup. This allows me to record ambient room audio with the built-in microphones, an XLR or 1/4" feed from the house audio, and also run one or two of my own microphones as needed.

    I prefer to get a feed from the house sound if possible, since it means the house crew will be handling mixing and leveling the stage mics. If that's not an option, then I can run my own mic to the stage and capture that audio separate from what's used to amplify the speaker in the room.

    I typically use an Audio-Technica Shotgun microphone and/or a Shure Boundary Condenser microphone.

    One thing that makes this so powerful is the Zoom H6 has built-in effects such as a compressor, so I can get a strong audio signal into the video mixer. Another bonus feature is this device can act as a USB interface in case you need to use it to get mic feeds into a computer.

    I also record all tracks individually onto an SD card in the device, so that I have the raw audio files if I need to re-mix it later when publishing the final videos.

    Monitor

    I stumbled across this amazing 13" HDMI monitor from GeChic. It's super compact, about the size of a 13" MacBook Air, and only 1cm thick, so it packs up great in a backpack along with a laptop. It supports full 1920x1080 resolution, and is super bright and crisp. This makes a perfect monitor for viewing the multi-camera preview out that the Roland V-1HD provides.

    Recorder

    The program output from the switcher is fed into this Atomos Ninja Blade HDMI Recorder. This acts as a monitor so I can see the final output, and also records the mixed video to a Samsung 850 PRO 512GB 2.5" SSD. It also has an HDMI pass-through that feeds into the H.264 encoder.

    H.264 Encoder

    The last piece in the puzzle is getting the final video feed broadcast to the Internet. I take the HDMI output from the recorder and feed it into the Teradek VidiU H.264 encoder. This is a fantastic little device that's a dedicated encoder, with both a wired and wireless network connection.

    I typically broadcast to YouTube Live, although this can actually push to any arbitrary RTMP endpoint.

    End Result

    Below are some links to playlists of videos from events I've recorded with various iterations of this rig.

    • DonutJS September 2016
    • ACT-W Conference 2016
    • Responsive Field Day 2015
    • Sprocket Podcast Live 2015
    Portland, Oregon
    1 like
    #video #livestream
    Sat, Oct 8, 2016 7:40pm -07:00
  • First draft of Private Webmention sending

    Fri, Sep 30, 2016 2:31pm -07:00

    The thing I was most excited about at IndieWebCamp Brighton was coming up with a Private Webmention extension to Webmention. The version we outlined in Brighton was drastically simplified from previous iterations of potential ways to send private Webmentions.

    Nearly a week after speccing it out, I now have a first draft implementation of sending. My goal this week was to finish implementing sending private Webmentions, to get some real-world feedback on the spec.

    Telegraph

    First, since I use the Telegraph API to send Webmentions, I had to add the ability for it to pass through the new "code" and "realm" values. The neat thing about doing it this way is if you want to use Telegraph to send Webmentions for you, you don't have to give it access to your private posts.

    Private Posts

    I then had to add the concept of private posts to p3k. One of the challenges I've been facing with p3k is adding the concept of user accounts and other people logging in. To do so, I would need some sort of user database (likely treating the person's URL as the unique identifier for their identity in my system), and then would need to associate users with posts to keep track of who can see what. Then the next challenge would be writing the queries to return different items in various feeds people are viewing when logged in. This all sounded terribly complicated, and there were a number of implementation decisions I didn't want to make just yet.

    I decided to scrap that whole idea and do the simplest possible thing instead. I realized that the way the Private Webmention spec is written, I don't actually need "user accounts" to send a private Webmention at all. Instead, all I need to do is to be able to generate and verify tokens that can fetch a specific page. I don't need these tokens associated with users or even domain names.

    This simplified my implementation a lot. It meant a relatively small amount of self-contained code to generate the authorization codes and access tokens. The access tokens are locked to a specific post URL, so each token issued can only be used to view a specific post. This is obviously not useful as a generic login mechanism, but it's absolutely sufficient to have a Webmention receiver verify a private Webmention!
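    As an illustration of that idea (a sketch, not the actual p3k code), a token locked to a single post URL can be as simple as an HMAC over that URL, verified by recomputing it:

```python
# Illustrative sketch: the token is an HMAC over the post URL, so it
# verifies only for that one post. SECRET is any private server-side key.
import hmac, hashlib

SECRET = b"server-side-secret"  # assumption: some private key on the server

def issue_token(post_url):
    return hmac.new(SECRET, post_url.encode(), hashlib.sha256).hexdigest()

def token_valid_for(token, post_url):
    # Recompute and compare in constant time
    return hmac.compare_digest(token, issue_token(post_url))

t = issue_token("https://example.com/2016/09/30/private-post")
print(token_valid_for(t, "https://example.com/2016/09/30/private-post"))  # True
print(token_valid_for(t, "https://example.com/some-other-post"))          # False
```

    No user database is needed: the server only has to be able to re-derive the token for the URL being fetched.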

    Also worth noting is that my implementation does not currently take advantage of the "realm" value. This means every private Webmention I send will require the receiver to obtain a new access token. Once I add the concept of user accounts and mapping posts to users, I'll be able to generate a "realm" value for each user so they can reuse access tokens to fetch additional posts. This is a good future optimization, but not necessary for a first draft implementation.

    Future Work

    Next up I need to implement receiving private Webmentions. Since I use webmention.io to handle receiving Webmentions, I'll be adding support for receiving private Webmentions to it.

    I am also looking forward to others implementing receiving private Webmentions so that I can start sending some! If you're interested, take a look at the spec, as well as the implementation guide. Hop in our chat if you're not already there and feel free to ask questions!

    Portland, Oregon
    8 likes 3 replies 1 mention
    #webmention #indiewebcamp #indieweb #private #p3k
    Fri, Sep 30, 2016 2:31pm -07:00
  • Micropub CR

    Tue, Aug 23, 2016 10:00am -07:00

    I'm excited to announce that Micropub is now a W3C Candidate Recommendation!

    Three Years of Incubation and Selfdogfooding

    Micropub began in 2013 when I outlined a simple API to create blog posts and short notes for my website, then implemented it on my server and in several new clients, and started using it day-to-day. Micropub aims to be simple to understand and implement, built on top of existing standards such as OAuth 2.0 and the Microformats 2 vocabulary.

    Designed for Incremental Implementation

    Micropub is also intended to be implemented incrementally. You can start by implementing just the basics of creating simple posts, and then expand your implementation to support additional properties of posts, and later expand to enable editing posts as well.
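    At the basic end, a create request is just a form-encoded POST with an h=entry property and a Bearer token, per the spec. The endpoint URL and token below are placeholders:

```python
# Minimal Micropub create request, sketched with the stdlib.
# "https://example.com/micropub" and "xxx" are placeholders for your own
# Micropub endpoint and access token.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({"h": "entry", "content": "Hello from my Micropub client!"})
req = Request(
    "https://example.com/micropub",   # your server's Micropub endpoint
    data=body.encode(),
    headers={
        "Authorization": "Bearer xxx",  # token obtained via your auth flow
        "Content-Type": "application/x-www-form-urlencoded",
    },
    method="POST",
)
# urlopen(req) would return 201 Created with the new post's URL in the
# Location header.
print(body)  # h=entry&content=Hello+from+my+Micropub+client%21
```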

    Widespread Interoperability Across Numerous Implementations

    One of the benefits of supporting Micropub is that it allows you to leverage other people's work in building an interface to create posts on your own website. By 2014, there were already six independent server implementations: five created by individuals for their own websites, as well as a plugin for the Known content management system. In addition to the client I wrote, there were four other people who built their posting interfaces using Micropub, which meant that anybody else with a Micropub server could sign in and use them!

    Over the next several months, more and more people built out Micropub support in their blogging systems, including plugins for WordPress and Drupal! I continued to build Micropub clients like OwnYourGram, which imports your Instagram photos to your website, and Teacup, which I use to track everything I eat and drink, even posting from my watch.

    See also: Complete list of live Micropub implementations

    Openly Iterated and Formally Standardized

    I gave a talk on Micropub at Open Source Bridge in 2015, when we had just started prototyping clients and servers that could start editing posts.

    At the beginning of 2016, we published the First Public Working Draft of Micropub under the W3C Social Web Working Group. For the past several months, we've been iterating on the spec, refining the language, clarifying how to edit and delete posts, and working on ways to ensure a good user experience for applications that post photos and videos.

    W3C Official Call For Micropub Implementations

    Last week the W3C announced that Micropub is now a Candidate Recommendation, and is inviting a wider audience to implement it and provide feedback.

    Stay tuned for updates as I build out the test suite and debugging tools to help you build Micropub clients and servers. They will be launched at micropub.rocks in the coming months!

    Portland, Oregon
    58 likes 7 reposts 4 replies 17 mentions
    #w3c #socialwg #micropub #indieweb
    Tue, Aug 23, 2016 10:00am -07:00
  • Centered and Cropped Thumbnails with CSS

    Sat, Aug 13, 2016 11:30am -07:00

    When working on my photo album thumbnail navigation for this site, I wanted a way to show a square thumbnail of a photo, centered and cropped from the original version. I wanted this to work without pre-rendering a square version on the server, and without using background image tricks.

    I found a great technique for doing exactly this at jonathannicol.com/blog/2014/06/16/centre-crop-thumbnails-with-css, adapted from the WordPress media library. The code below is modified slightly from this example.

    Markup

    <div class="album-thumbnails">
      <a href="photo-1.jpg">
        <img src="photo-1.jpg">
      </a>
      <a href="photo-2.jpg">
        <img src="photo-2.jpg">
      </a>
    </div>
    

    CSS

    .album-thumbnails a {
      /* set the desired width/height and margin here */
      width: 14%;
      height: 88px;
      margin-right: 1px;
    
      position: relative;
      overflow: hidden;
      display: inline-block;
    }
    .album-thumbnails a img {
      position: absolute;
      left: 50%;
      top: 50%;
      height: 100%;
      width: auto;
      -webkit-transform: translate(-50%,-50%);
          -ms-transform: translate(-50%,-50%);
              transform: translate(-50%,-50%);
    }
    .album-thumbnails a img.portrait {
      width: 100%;
      height: auto;
    }
    

    For my use, I am showing a series of 7 square thumbnails on one line, so I set the width to 14% (about 1/7) so that they fit on one line. Since my container is about 620px wide, I set the height to a fixed 88px so that the thumbnails are approximately square.

    The neat thing about this is that when the container shrinks when viewed on smaller devices, the widths of the thumbnails will shrink as well. This does mean the thumbnails will no longer be square on narrow screens, but I'm okay with that result. You can also use a fixed pixel height and width if you don't want them to shrink at all.

    Javascript

    You may notice that the last CSS rule requires a class of "portrait" on image tags where the image is portrait orientation. You can either add that class server-side, or use the Javascript below to add the class when viewed.

    document.addEventListener("DOMContentLoaded", function(event) { 
    
      var addImageOrientationClass = function(img) {
        if(img.naturalHeight > img.naturalWidth) {
          img.classList.add("portrait");
        }
      }
    
      // Add "portrait" class to thumbnail images that are portrait orientation
      var images = document.querySelectorAll(".album-thumbnails img");
      for(var i=0; i<images.length; i++) {
        if(images[i].complete) {
          addImageOrientationClass(images[i]);
        } else {
          images[i].addEventListener("load", function(evt) {
            addImageOrientationClass(evt.target);
          });
        }
      }
    
    });
    
    Portland, Oregon
    4 likes 3 reposts 1 reply 1 mention
    #css #layout
    Sat, Aug 13, 2016 11:30am -07:00
  • Signed git commits with Tower

    Fri, Jul 29, 2016 10:18am -07:00

    My favorite Git client is Tower. I wanted to find a way to sign my git commits despite that not being a supported feature of Tower. Turns out it only took a couple configuration options to make it work.

    First, set up your GPG key however you normally do it. I use GPG Tools for OSX, as well as Keybase. Follow GitHub's instructions for adding your GPG key to your account.

    Configure your git client to always sign commits:

    git config --global commit.gpgsign true

    Try to sign a commit from the command line before trying it with Tower. Once you're able to successfully sign commits from the command line, you can set it up to work with Tower.

    Add no-tty to your GPG configuration, to allow Tower to use it:

    echo no-tty >> ~/.gnupg/gpg.conf

    You'll need to specify the absolute path to the gpg program in order for Tower to be able to find it.

    git config --global gpg.program /usr/local/bin/gpg

    Now when you make a commit from Tower, you should be prompted to unlock your key with your passphrase from GPG Tools, and if you save it in your keychain it should continue to work seamlessly.

    Now, whenever you make a commit and push it to GitHub, you should see the "verified" mark next to your commits!

    Portland, Oregon
    5 replies
    #git #tower #gpg
    Fri, Jul 29, 2016 10:18am -07:00
  • This Year in the IndieWeb

    Mon, Jul 18, 2016 7:30pm -07:00

    It's been an exciting year in the IndieWeb so far!

    June 2016: IndieWeb Summit, Portland

    IndieWeb Summit 2016

    In June, we held our main event in Portland, newly called IndieWeb Summit to differentiate it from the IndieWebCamps happening all over the world.

    One of our attendees, Julie Anne, is an amazing photographer and took some great pictures of the event!

    • Day 1 Photos
    • Day 2 Photos

    We livestreamed the morning keynotes and second day demos, so you can watch them online!

    • State of the IndieWeb
    • Cutting Edge IndieWeb
    • Day 2 Demos

    Some people wrote some great blog posts afterwards.

    • gRegor Morrill
    • Kyle Mahan
    • Jim Pick
    • Tantek Çelik

    May 2016: Düsseldorf, Germany

    Düsseldorf 2016

    We had a great IndieWebCamp in Düsseldorf in May, adjacent to the beyond tellerrand conference.

    Julie Anne took some amazing photos of this one as well!

    Steffen Rademacker wrote a great post afterwards and included some of his own photos as well.

    April 2016: Nürnberg, Germany

    Nürnberg 2016

    • More Photos

    March 2016: MIT, Cambridge

    MIT 2016

    January 2016: New York

    New York 2016

    December 2015: San Francisco

    San Francisco 2015

    November 2015: MIT, Cambridge

    MIT 2015

    July 2015: Portland, Brighton and Edinburgh

    PDX 2015

    Brighton 2015

    Edinburgh 2015

    Homebrew Website Club

    Between IndieWebCamp Portland 2015 and IndieWeb Summit 2016, the community organized 102 Homebrew Website Club events across 12 cities worldwide: San Francisco, Los Angeles, Portland, Bellingham, Montréal, Detroit, Brighton, Washington DC, Edinburgh, Göteborg, Nürnberg, and Malmö.

    IndieWeb at the W3C

    This year, thanks to a lot of effort on the part of everyone participating in the W3C Social Web Working Group, the Webmention specification that started from the IndieWeb community has been published as a Candidate Recommendation by the W3C!

    w3.org/TR/webmention

    We developed a Webmention test suite so you can test your implementations as well!

    The Micropub specification has also been published by the W3C as a Working Draft, and a couple more specs are on the way!

    New Logo and Website!

    Portland, Oregon, USA
    #indieweb
    Mon, Jul 18, 2016 7:30pm -07:00
  • Where was I when I took this photo?

    Sat, Jul 16, 2016 8:59pm -07:00

    My DSLR camera doesn't have GPS, so my photos normally don't include the location where I took them. I used to use the Eye-Fi card that did geotagging, but that is no longer supported in the new "mobi" line. I could get an external GPS unit for my camera, but that sounds cumbersome and would only work with that one camera.

    Since I already track everywhere I go, I figured I could use this data to geotag my photos when I upload them to Flickr. It turns out, due to the limitations of Exif, the metadata format that digital cameras use to store information about photos, it wasn't so easy.

    Adventures in Exif

    Exif lets the camera write arbitrary text data into a jpg when it saves it. There are a handful of standard properties that most cameras write, such as the time the photo was taken, the camera settings such as shutter speed, f-stop, etc, and GPS location if the camera knows where it is. My thought was that if I know when the photo was taken, I can find out where I was at that time, and then add the GPS data to the photo.

    Unfortunately, the format for storing dates in Exif does not support specifying a timezone offset. The format for dates is YYYY:MM:DD HH:MM:SS. Without the timezone offset, this series of numbers corresponds to many different actual points in time, depending on which timezone you interpret it as. So what I need is a way to turn the camera time into a specific point in time in order to find out where I was at that time.
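    A quick sketch of the problem: parsing the Exif date succeeds, but the result carries no timezone, so the same string can refer to very different absolute instants. The datetimes below are just example values:

```python
# Parse an Exif-style timestamp: the result is a "naive" datetime with no
# timezone attached.
from datetime import datetime, timedelta, timezone

exif = "2016:05:12 16:00:00"
naive = datetime.strptime(exif, "%Y:%m:%d %H:%M:%S")
print(naive.tzinfo)  # None -- no timezone information survives in Exif

# The same wall-clock reading in New York (-04:00) vs. France (+02:00)
# corresponds to absolute instants 6 hours apart:
ny = naive.replace(tzinfo=timezone(timedelta(hours=-4)))
fr = naive.replace(tzinfo=timezone(timedelta(hours=+2)))
print(int((ny - fr).total_seconds() // 3600))  # 6
```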

    A + B = C

    I realized that since I have a complete log of my GPS coordinates, I should have enough information to piece this together. Essentially the question I am asking is "where was I when my clock read 7:00pm on July 16 2016?" Note that there are two parts to the answer: my location, and the absolute point in time. It's kind of like solving an equation where there are three variables and you know two of them. The three variables are: my location, the clock time, and the timezone offset. If we knew my location and the clock time, we could find the timezone offset. If we knew the timezone offset and the clock time, then we could find my location. 

    Where was I when my clock read "7:00pm on July 16 2016"?

    If we knew what timezone I was in, then "7:00pm on July 16, 2016" becomes a single reference to an absolute point in time. But we don't know what timezone I was in yet, so there are actually 24 possible absolute points in time this could be. (I'm simplifying this problem slightly by ignoring the 30-minute offset timezones.)

    The solution is to find my location (which includes the absolute point in time) at all 24 possible points in time, find the timezone offset that corresponds to each location, then find the location where its timezone offset matches the candidate offset. Below is an example:

    Offset-less time in question: 2016-05-12 16:00:00

    This could be any of the absolute points in time:

    • 2016-05-12 16:00:00 -23:00
    • 2016-05-12 16:00:00 -22:00
    • ...
    • 2016-05-12 16:00:00 -08:00
    • 2016-05-12 16:00:00 -07:00
    • 2016-05-12 16:00:00 -06:00
    • 2016-05-12 16:00:00 -05:00
    • 2016-05-12 16:00:00 -04:00
    • 2016-05-12 16:00:00 -03:00
    • ...
    • 2016-05-12 16:00:00 +00:00
    • 2016-05-12 16:00:00 +01:00
    • 2016-05-12 16:00:00 +02:00
    • ...
    • 2016-05-12 16:00:00 +22:00
    • 2016-05-12 16:00:00 +23:00

    (I left out some of the less common timezone offsets I frequent for the sake of clarity in this example.) Now let's query my GPS database to find out what my local time actually was at each of these points in time:

    Potential Time               Time from GPS                Location
    2016-05-12 16:00:00 -23:00   2016-05-13 10:59:03 -04:00   New York
    2016-05-12 16:00:00 -22:00   2016-05-13 10:00:00 -04:00   New York
    ...
    2016-05-12 16:00:00 -08:00   no data
    2016-05-12 16:00:00 -07:00   2016-05-12 19:00:00 -04:00   New York
    2016-05-12 16:00:00 -06:00   2016-05-12 17:59:21 -04:00   New York
    2016-05-12 16:00:00 -05:00   2016-05-12 16:59:53 -04:00   New York
    2016-05-12 16:00:00 -04:00   2016-05-12 15:59:57 -04:00   New York
    2016-05-12 16:00:00 -03:00   2016-05-12 14:52:46 +02:00   France
    ...
    2016-05-12 16:00:00 +00:00   2016-05-12 14:52:46 +02:00   France
    2016-05-12 16:00:00 +01:00   2016-05-12 14:52:46 +02:00   France
    2016-05-12 16:00:00 +02:00   2016-05-12 14:52:46 +02:00   France
    ...
    2016-05-12 16:00:00 +22:00   2016-05-11 19:15:41 +02:00   Düsseldorf
    2016-05-12 16:00:00 +23:00   2016-05-11 18:46:26 +02:00   Düsseldorf
    (Note that the times aren't an exact match, because my GPS device doesn't log a point every second. In reality it's more like every second when I'm moving and have a good GPS lock, and when I'm not moving, it records less data. Also on plane flights I sometimes lose the GPS signal part way through the flight which is why many of the rows in this case show the same time from my GPS.)

    As you can see by comparing the potential timezone on the left with the actual timezone on the right, there are two offsets that match (the -04:00 and +02:00 rows), so we need to determine which is the correct one. This happens when I am traveling on a plane and cross timezones very quickly.

    If we take the two candidates and look at the actual time difference in seconds between the timestamps described, the answer becomes obvious.

    Potential Time               Time from GPS                Difference
    2016-05-12 16:00:00 -04:00   2016-05-12 15:59:57 -04:00   3 seconds
    (unixtime 1463083200)        (unixtime 1463083197)
    2016-05-12 16:00:00 +02:00   2016-05-12 14:52:46 +02:00   4034 seconds
    (unixtime 1463061600)        (unixtime 1463057566)

    From this, I can conclude that when my clock read "2016-05-12 16:00:00", the absolute time was "2016-05-12 16:00:00 -04:00", and I was in New York.

    Most of the time only one offset matches, so this last step isn't necessary. It's only when I quickly cross timezones that there are potentially more than one match.
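    The whole matching step can be sketched in a few lines. This is an illustration only: resolve and fake_gps_log are hypothetical names, and the GPS lookup is stubbed with the New York/France example above rather than a real database query:

```python
# Sketch of resolving an offset-less clock time against a GPS log.
from datetime import datetime, timedelta, timezone

def fake_gps_log(utc_time):
    """Stand-in for the GPS database: before ~19:00 UTC on May 12 I was in
    France (UTC+2); after that, New York (UTC-4)."""
    if utc_time < datetime(2016, 5, 12, 19, 0, tzinfo=timezone.utc):
        return (+2, "France")
    return (-4, "New York")

def resolve(naive_clock, lookup):
    """Try every whole-hour offset; keep those where the offset of the place
    I was actually in matches the candidate offset."""
    matches = []
    for offset in range(-23, 24):
        candidate_utc = naive_clock.replace(
            tzinfo=timezone(timedelta(hours=offset))).astimezone(timezone.utc)
        actual_offset, place = lookup(candidate_utc)
        if actual_offset == offset:
            matches.append((offset, place))
    return matches

clock = datetime(2016, 5, 12, 16, 0, 0)
print(resolve(clock, fake_gps_log))  # [(-4, 'New York'), (2, 'France')]
```

    With this stub there are two matches, mirroring the plane-crossing case above; the smaller time difference between the candidate instant and the nearest GPS point would break the tie.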

    Turning this into an API

    Since I want to be able to use this to geotag photos, it makes sense to include it as an API in the same system that stores my GPS logs. I encapsulated this logic in my GPS server, Compass, with a simple API that returns the answer given an offset-less time. Now I can use it in my geotagging script!

    Portland, Oregon
    12 likes 9 replies 1 mention
    #time #timezone #geotag #gps #p3k
    Sat, Jul 16, 2016 8:59pm -07:00
  • The Sad State of Wifi SD Cards

    Fri, Jul 15, 2016 1:51pm -07:00

    I've been a long-time fan of the Eye-Fi SD cards. My primary use for them is to have all my photos automatically uploaded to Flickr from my camera. It turns out I'm lazy and having to manually copy photos off an SD card and upload them is too much work.

    I've had the Eye-Fi Pro X2 card for years. I have it configured to upload everything to Flickr marked "private". I recently got an email saying that they are discontinuing the X2 product line in favor of their new "mobi" line, which will essentially brick the cards. Like many others, I was upset by this news.

    Eyefi Mobi

    Their new "mobi" line seems to be completely different, and heavily promotes subscribing to the Eye-Fi cloud service, something that I have no interest in. I don't want to use their tools to store and manage my photos. I want to send them to Flickr, or even better, my own website. Sadly their new cloud service doesn't even support uploading to Flickr.

    I started looking into other options, but the state of wifi-enabled SD cards is pretty terrible right now. There are a handful of other brands of cards, but they all are limited to downloading photos directly to an iPhone/Android, rather than uploading from the card to something on the Internet.

    The one promising card I found is the Toshiba FlashAir, which lets you write custom code that runs on the card itself. I wrote up my initial experiments with it, which were only mildly successful. I tried to pick up that work again, but did not have any luck. There's almost no visibility into the code that's running, so it's very hard to debug. I decided it's not worth it to sink any more time into making that card work.

    I decided to again look into the new Eye-Fi card to see what it's actually all about. It seems that my initial understanding of it was completely wrong. I managed to get an Eyefi Mobi Pro card for $36, including a year of their cloud service, so in the worst case I can write that off as paying $3/mo for a year of their service.

    What I Learned

    After some experiments, I learned that everything I read about the new Mobi card was actually totally wrong! Here is my understanding of the difference between the two cards.

    Eyefi Pro X2

    The card connects to a configured wifi network, and uploads the photos to the Eyefi servers. The Eyefi servers then upload to Flickr, or whatever I've configured. The upside is that the card can upload to the internet without my computer or phone helping. The downside is that it requires Eyefi servers to be involved in the process. Also, they are shutting down these servers in September, presumably because they never figured out a way to make people want to pay for them.

    Eyefi mobiPRO

    The card connects to a configured wifi network. If my computer is also on that same network, the app on my computer will download the photos from the card. If I have an Eyefi Cloud account, my computer will upload the photos there as well. The upside is that I don't need the Eyefi servers in order to use the card. The downside is that the card can only upload photos when my computer is on the same network.

    So for now, I'll try out this Mobi card and see if it ends up being useful even though it can't connect to the internet on its own.

    My wish is for a wifi SD card that can join a wifi network and upload to an FTP/HTTP server itself, without going through a third-party cloud service and without another device helping it out.

    Portland, Oregon
    #eyefi #flashair #wifi #sdcard #photography
    Fri, Jul 15, 2016 1:51pm -07:00

Hi, I'm Aaron Parecki, co-founder of IndieWebCamp. I maintain oauth.net, write about OAuth, and am the editor of the W3C Webmention and Micropub specifications.

I've been tracking my location since 2008, and write down everything I eat and drink. I've spoken at conferences around the world about owning your data, OAuth, quantified self, and explained why R is a vowel.

© 1999-2016 by Aaron Parecki. Powered by p3k. This site supports Webmention.
Except where otherwise noted, text content on this site is licensed under a Creative Commons Attribution 3.0 License.