I aspire to travel light. I can usually live out of a backpack. The Nintendo Switch is a recent addition to my kit. It’s heavy but totally worth the weight. Anything that adds 400 grams to my bag has to carry its weight, and the Switch delivers. Not only does it melt away those too-tired-to-code hours of airport delays, it turns out that the Joy-Con controllers make wonderful presentation remotes on macOS!
Here’s how to turn a fun game controller into a boring slide clicker.
Hold down the tiny pairing button on your Joy-Con for a few seconds. The lights should blink.
The Joy-Con will show up in the Bluetooth device list. Click through the strange pairing menu to pair it.
Open USB Overdrive, go to the Status tab, and verify that it sees the Joy-Con.
Switch back to the Settings tab. Press the buttons on the Joy-Con to see what they map to, then assign them to whatever keys your presentation app uses to advance slides.
When you’re done with slides and ready to game, slide the Joy-Con back onto your Switch. Once you turn the console on, it will re-pair the Joy-Con with the Switch and disconnect it from your computer. You must re-pair it with your computer to click through slides again, but the USB Overdrive mapping should stick.
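As an aside, if you want to confirm the pairing from a terminal, the third-party blueutil tool works; it isn’t part of the steps above, and I’m assuming you install it with Homebrew.
# Install the CLI Bluetooth helper (assumption: Homebrew is available).
$ brew install blueutil
# The Joy-Con should show up in both lists once it's paired and connected.
$ blueutil --paired
$ blueutil --connected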
Was this entry useful? Did I say something wrong? Let me know on Twitter.
Last week I had a rough day. Well, not really rough… mostly it just made me embarrassed to be a technologist. My role in this debacle was kind of fun.
I spent most of a day trying to get a video file off of a Samsung SDR-3100N DVR that’s hooked up to the security cameras at my condo. I managed to get the files off, and convert them into a usable format, but it was pretty painful. Here’s a writeup, just in case someone else has to deal with this mess.
We’ve been using this Samsung SDR-3100N for years, but we haven’t ever had to access the video. Security cameras are a bit like disaster recovery. You don’t know if they work until you really need the video. Last week we needed the video.
Last week a couple of police officers knocked on our door. We learned that some jerkfaces broke into a bunch of bars near my house… on Christmas. It seems that they drove their vehicles past my condo’s security cameras, and the police wanted the video footage to aid in their investigation.
While they patiently watched, I attempted to access the video. The mobile app didn’t work. I grabbed a laptop. The Wi-Fi access point didn’t work. I plugged into the switch with an Ethernet cable. The DHCP server on the security camera network didn’t work. I plugged the laptop directly into the DVR and sniffed for ARP traffic. There wasn’t any. It seems that entropy had set in and broken every piece of technology attached to the security cameras.
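For the record, this is roughly what that last check looks like; the en0 interface name and the 192.168.1.x subnet are guesses for illustration, not the DVR’s actual configuration.
# With DHCP dead, give the laptop a static address on the subnet the DVR
# is presumed to use (both values here are assumptions).
$ sudo ifconfig en0 inet 192.168.1.50 netmask 255.255.255.0
# Then listen for ARP traffic from the DVR. Total silence is a bad sign.
$ sudo tcpdump -i en0 -n arp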
Not wanting to waste their time, I promised to send them the video files. We exchanged contact info, and my pretend forensics work began.
If you just want to get video off of your DVR, skip over this section. It was unsuccessful.
I unhooked the security DVR, plugged it into my working home network, and fired it up. It seemed OK. It booted with a harsh beep noise, and a green LED. I saw it get an IP address. It’s alive!
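If you’re wondering how to spot a headless box like this on your network, something like the following works; the subnet is an assumption and nmap is an extra tool, not anything the DVR requires.
# A quick ping sweep to find the new device (adjust the subnet to match yours).
$ nmap -sn 192.168.1.0/24
# Or just check the ARP cache after the DVR has been on the network for a bit.
$ arp -a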
The mobile app connected and was able to see live video, but it could not find any previously recorded video.
The mobile app was pretty light on administrative features, so I could not investigate the root cause. But the DVR is supposed to have a web interface, so I typed its IP address into Chrome and gave it a poke. It demanded that I access it with IE or Safari (although the JavaScript it loaded seemed to indicate that Firefox is also acceptable). I fired up Safari and gave it another poke. A login page! And beyond that… a message that I needed to install Microsoft Silverlight :(
But here’s the kicker: it wanted me to install an unsigned Silverlight binary delivered from the DVR itself.
I downloaded a signed Silverlight binary from Microsoft, but the site still complained that I needed to install its fishy Silverlight binary. It turns out that Safari’s security settings disable Silverlight on all websites by default. I enabled it for all websites by toggling ‘unsafe mode’, and gave it one more try.
This got me one step further. I was greeted by two loading spinners that displayed no content. This is where I stopped. I didn’t feel like reverse engineering their ancient web app. There has to be a more sensible way.
Maybe the software is just too old for modern browsers. If that’s the case, there has to be a firmware update, right? Nope. The Samsung support webpage does not even have a listing for this DVR, which they sold until 2013.
Poking around a bunch of forums didn’t turn up much either. As far as I can tell, they’ve never released any firmware updates for this product… ever. I couldn’t even find a mention of firmware updates in the paper instruction manual.
The security DVR had spent its whole life in headless mode: it was never connected to a monitor. But it certainly has that capability. You can plug it into either a BNC composite display or a VGA monitor to watch live feeds directly. This is perfect if you want to role-play one of those action movie scenes where a ninja knocks out security cameras one at a time while the security guard’s panic increases from dropped doughnut to spilled coffee. However, this option is not perfect for me. I have zero display devices capable of either VGA or BNC composite. But I do have a TV that can display RCA composite video. Rather than walk a block to buy an adapter, I hacked this up… and it worked!
When I carefully placed this delicate disaster near my TV, I could watch it boot up!
It was a dead end. After many minutes of booting up, it only displayed SAMSUNG across my TV in big letters. Fiddling with the remote control or a USB mouse didn’t do anything.
This is when I flipped from lawful neutral to chaotic neutral. It’s also when I reached for a screwdriver.
I opened up the DVR and found a SATA hard disk inside. I plugged it into a USB write-blocker and connected it to my mac. diskutil reported it to be a Linux disk, but neither ext-fuse nor Paragon’s EXTfs driver could mount it. I’m not sure what’s up with that.
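For the curious, this is how macOS describes the disk even when it can’t mount it; the disk2s1 identifier below is a placeholder, yours will differ.
# List external disks and their partition types.
$ diskutil list external
# Show details for a specific slice (replace disk2s1 with your identifier).
$ diskutil info disk2s1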
But I had another option: Kali Linux on the laptop I took to DEFCON. And sure enough, it was able to read the disk. Oddly, it had two partitions, both named NATALIE. The big NATALIE partition contained 500GB of video in ssf files, all dated 2001. It seems that the real-time clock battery died at some point. The mobile app probably would have worked if I had paged back to 2001. It’d be nice if the mobile app had told me that.
Anyway, I copied the files to another disk drive, this one formatted with exFAT so I could access them from my macOS laptop. Neither Preview nor VLC could open the ssf files, but ffprobe displayed a lot of warnings and details about a single h264 video stream.
$ ffprobe 010404035310_000.ssf
... lots of warnings ...
Input #0, h264, from '010404035310_000.ssf':
Duration: N/A, bitrate: N/A
Stream #0:0: Video: h264 (Constrained Baseline), yuv420p(progressive), 352x240, 25 fps, 25 tbr, 1200k tbn, 50 tbc
And ffmpeg was able to copy the stream to a more typical container.
$ ffmpeg -i ./010404035310_000.ssf -c copy 010404035310_000.mp4
... ~5000 lines of warnings ...
frame=513808 fps=48048 q=-1.0 Lsize= 1915044kB time=05:42:32.42 bitrate= 763.3kbits/s speed=1.92e+03x
video:1912815kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.116570%
Note the exponential notation on the speed. Aren’t modern disks wonderful?
And the resulting mp4 file works in VLC, but looks terribly corrupted. It seems that all of the video cameras have been smashed into a single h264 video stream, and players get really confused. It looks like compressed P-frames refer to the wrong complete I-frames. After trying a bunch of different video players and editors, Adobe Premiere seems to read the video with the least corruption.
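If you want to see the frame mess for yourself, ffprobe can dump the sequence of frame types; this is a diagnostic aside, not a fix.
# Print the frame types (I, P, B) of the video stream in the remuxed file.
$ ffprobe -v error -select_streams v:0 -show_frames -show_entries frame=pict_type -of csv 010404035310_000.mp4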
I tinkered around with ffmpeg but was unable to coerce it into reshuffling the frames. It didn’t like the idea of extracting a raw P-frame or attaching it to a different I-frame. But you can remove the corruption by filtering out the confused P-frame references: use ffmpeg to pull out just the I-frames (aka keyframes).
$ ffmpeg -i 010404035310_000.mp4 -vf select="eq(pict_type\,PICT_TYPE_I)" -vsync 0 iframe%03d.jpg
... cpb: bitrate max/min/avg: 0/0/200000 buffer size: 0 vbv_delay: -1
frame= 4 fps=0.0 q=5.6 Lsize=N/A time=00:00:06.03 bitrate=N/A speed=16.8x
video:468kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
This yields a bunch of sharp jpg files. This is also where I stopped my analysis.
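If you wanted to take this one step further than I did, the keyframes can be stitched into a quick review clip; the 5 fps rate is an arbitrary choice for skimming, not anything recovered from the original footage.
# Re-encode the extracted I-frames as a short, scrubbable clip.
$ ffmpeg -framerate 5 -i iframe%03d.jpg -c:v libx264 -pix_fmt yuv420p keyframes.mp4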
I was able to access the video, and review footage until I found what I was looking for. I also changed the real-time clock battery in the DVR, so the mobile app should work better now.
But, that was a pretty rough experience. I can’t imagine what a typical non-technical user would do, or even a technical user who didn’t have a day to kill tinkering with a broken-by-design security DVR. Samsung: shame on you.
I’d love to replace this DVR with something better, but all of the options I can find seem to suffer from the same terrible software and lack of updates. Now I understand why magic cloud cameras, like the Nest camera, are so appealing.
I’m moving my data out of Amazon Drive (formerly known as Amazon Cloud Drive). I have 4TB across 1,000,000 files. I’ve struggled to download my data, but I found some tricks to make it easier. Here they are in a handy listicle.
Every couple of years, someone announces an unlimited capacity cloud storage product targeted at consumers. Then, inevitably, a few jerks with multiple terabytes of data swoop in and ruin the deal for everyone.
I’m one of those jerks. I’ve migrated 4TB of data from one consumer cloud storage provider to another over the course of several years.
With Amazon’s prices increasing significantly, I gave myself a challenge: download all of my data using only the official sync client. This blog entry describes the lessons I learned from that process.
When you install Amazon Drive, you can select a folder to use for synchronization. On macOS the default is ~/Amazon Drive, and the setup configurator prevents you from pointing it to a removable disk. This is a bummer because modern computers have smaller, faster boot disks. None of my computers have a boot disk bigger than 1TB.
Changing the configuration file in ~/Library/Application Support/Amazon Cloud Drive seems to make the sync client angry, but there is another way: symbolic links.
# Dangerously stop the sync client with this shell-fu, or just quit from the menu
$ ps -ef | grep 'Amazon Drive' | awk '{print $2}' | xargs -n1 kill
# Delete the old target (leave the ~ outside the quotes so it expands)
$ rm -rf ~/'Amazon Drive'
# Swap in your removable storage
$ ln -s /Volumes/4tb ~/'Amazon Drive'
The client is happy to sync down to a removable disk behind a symlink, but I have no idea what will happen if the disk is removed while the client is working (I found out… it purges all client metadata, and you have to start over). So, don’t do that.
Your sync host computer must have an SSD boot disk. The sync client has very poor performance when managing metadata on spinning disks.
My primary computer is a laptop. I often carry it with me. This means it’s disconnected from the Internet, and not a great sync host.
I had this brilliant idea of dusting off an old mac mini from 2011, putting the Amazon Drive sync client on it, and letting it churn away for a few days to recover all of my data.
This did not work. First, Amazon Drive spent two days Preparing. After that, file synchronization proceeded at about 10 files per minute, regardless of their size. There were a few spikes of CPU and network usage, but nothing that explained the glacial pace. At this pace, it would not finish until early October.
I did what any engineer would do, and whipped out dtrace. A little probing found the problem. The sync client was doing a staggering number of tiny, scattered I/O operations. This probably has something to do with their heavy use of SQLite. Check this out:
~/Library/Application Support/Amazon Cloud Drive$ ls -l
-rw-r--r-- 1 mim eng 758280192 Jul 31 00:58 amzn1.account.MSSM74Z-cloud.db
-rw-r--r-- 1 mim eng 32768 Jul 31 12:00 amzn1.account.MSSM74Z-cloud.db-shm
-rw-r--r-- 1 mim eng 212966952 Jul 31 14:55 amzn1.account.MSSM74Z-cloud.db-wal
-rw-r--r-- 1 mim eng 4096 May 28 14:24 amzn1.account.MSSM74Z-download.db
-rw-r--r-- 1 mim eng 32768 Jul 31 12:00 amzn1.account.MSSM74Z-download.db-shm
-rw-r--r-- 1 mim eng 2171272 Jul 31 14:00 amzn1.account.MSSM74Z-download.db-wal
-rw-r--r-- 1 mim eng 129 May 28 14:25 amzn1.account.MSSM74Z-settings.json
-rw-r--r-- 1 mim eng 81358848 Jul 31 14:56 amzn1.account.MSSM74Z-sync.db
-rw-r--r-- 1 mim eng 65536 Jul 31 14:31 amzn1.account.MSSM74Z-sync.db-shm
-rw-r--r-- 1 mim eng 44982192 Jul 31 14:56 amzn1.account.MSSM74Z-sync.db-wal
-rw-r--r-- 1 mim eng 4096 May 28 14:24 amzn1.account.MSSM74Z-uploads.db
-rw-r--r-- 1 mim eng 32768 Jul 31 12:00 amzn1.account.MSSM74Z-uploads.db-shm
-rw-r--r-- 1 mim eng 2171272 Jul 31 14:00 amzn1.account.MSSM74Z-uploads.db-wal
-rw-r--r-- 1 mim eng 352 Jul 31 13:01 app-settings.json
-rw-r--r-- 1 mim eng 368 May 28 14:24 refresh-token
-rw-r--r-- 1 mim eng 32 May 28 14:23 serial-number
~/Library/Application Support/Amazon Cloud Drive$ sqlite3 amzn1.account.MSSM74Z-cloud.db 'select count(*) from nodes;'
1077668
~/Library/Application Support/Amazon Cloud Drive$
Yeah, that’s over a gigabyte of SQLite databases! Some tables have more than a million records. Count queries take a few seconds, and toggling an option in the client can sometimes trigger millions of SQLite queries across multiple databases. This had the read head of my spinning disk thrashing back and forth. Fortunately, random access penalties are much lower on SSDs.
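For anyone who wants to poke at this themselves, here’s a sketch of the kind of dtrace one-liner that exposes the I/O size distribution; it’s illustrative rather than the exact probe from my session, and recent macOS versions may need SIP relaxed before dtrace will attach.
# Bucket the requested read/write sizes for the sync client by PID.
# arg2 of read(2)/write(2) is the byte count; lots of tiny buckets = thrashing.
$ sudo dtrace -p <pid> -n 'syscall::read:entry,syscall::write:entry /pid == $target/ { @[probefunc] = quantize(arg2); }'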
The client is more stable when attempting to sync fewer files in one batch. Sync at most 100,000 files at a time, allow it to finish, and then sync another batch.
If you try to sync too many files at once, the client gets CPU and memory hungry, slows down, and becomes unstable. If the sync request is over 1,000,000 files, the client may start crashing on launch. Once this happens, you must delete the SQLite databases, and start over.
Don’t copy files into the sync client’s target path. That means no trying to help it along by copying in previous partial download attempts. Let the client sync every file down itself.
Copying files into the sync path confused my sync client, and it deleted a bunch of stuff from Amazon Drive. If you suspect this has happened, don’t panic. You have a few days to restore files from the web interface. Sign in, navigate to the trash, and restore deleted files from there.
At this pace, I’ll be able to download all of my data before the new rates kick in for me. Yay!
In retrospect, I should have written my own sync client against the API, or tried to get the possibly-banned rclone client working. However, I did enjoy the adventure of exploring how the sync client works.
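For anyone tempted by that route, here’s a hedged sketch of what the rclone approach would look like, assuming an Amazon Drive remote named amzn can still be configured at all (its API access was reportedly revoked, so it may simply not work).
# One-time interactive setup of a remote named "amzn" (if the backend still works).
$ rclone config
# Pull everything down to the external disk, with progress output.
$ rclone copy amzn:/ /Volumes/4tb --transfers 8 --checkers 16 --progress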
With this migration wrapping up, I’ve given up on consumer cloud storage products. They’re too painful to use for large volumes of data. It’s time to switch to an enterprise storage product so I can use real APIs to move data around, and benefit from SLAs and deprecation policies.
I shared this post around, and got some great feedback on r/DataHoarder, the subreddit for people who laugh at my meager 4TB of accumulated data.
Here are their proposed solutions:
Thanks for the advice, Redditors! :)