Free Beta Invites: Project Search and Replace for Unity

Are you a developer/artist/designer working in Unity? Would you like to work faster? Would you like the ability to better search and replace text, strings, and more? If so, join my closed beta!

My plugin for Unity, shown in the video below, allows you to search and replace within Unity with ease.

I’m getting ready to launch in the Unity Asset Store, but beforehand I’d love to see how it works for others and what people think of it.


This is an editor tool, currently for *Unity 5 and above*. It is an editor window, meaning there is no code to write. It should be quite artist-friendly.

This is a project-wide search and replace tool, meaning: do not become a tester if you are not using version control or do not have frequent backups. This is also beta software that so far Works On My Machine™ but has not been tested in the wild. The usual legal limitation-of-liability language applies.

Becoming A Tester

Have you read all of the above and want to test this out still? Great! Glad to have you. Please email me at and I will add you to the testing team and provide you a build. Space is limited, I’m not looking for a large number of people.

Thanks for your interest!

Unity Tip: Advanced SerializedProperties

If you are writing editor code, it is common to use the SerializedObject and SerializedProperty interface to view the serialized form of data. How this data looks and acts is somewhat documented, but it has a couple of quirks. For this reason I’ve created a small utility class for viewing the representation of a SerializedObject in Unity. I will talk a bit about SerializedProperties. If you are interested in the viewer code, the Gist is at the bottom of the post.


And here are some of my interesting findings using my viewer.

Strings…are Arrays!

You may want to iterate over all arrays in an object. Turns out that strings… are also serialized as arrays of chars! This may not be a surprise to those with a C background, but what is also interesting is that *both* representations are present when iterating:


This is quite important to understand when using SerializedProperty.IsArray:

Strings will return true for being arrays.
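So an “iterate all arrays” loop needs an explicit guard. Here is a minimal sketch of that check (this is my illustration of the quirk, not code from the viewer itself):

```csharp
using UnityEditor;
using UnityEngine;

public static class ArrayFinder
{
    // Logs every "real" array on an object, skipping strings,
    // which also report isArray == true but have propertyType String.
    public static void LogArrays(Object target)
    {
        SerializedObject so = new SerializedObject(target);
        SerializedProperty prop = so.GetIterator();
        while (prop.Next(true))
        {
            bool isRealArray = prop.isArray
                && prop.propertyType != SerializedPropertyType.String;
            if (isRealArray)
                Debug.Log(prop.propertyPath + " has " + prop.arraySize + " elements");
        }
    }
}
```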


When you are iterating over SerializedProperties, the structure of the data is not intuitive, and knowing it is highly important. SerializedProperty.Next() will take you to the ‘next’ property… but what does that mean? What defines ‘next-ness?’ Unfortunately the docs could be improved a bit on this.

You may think that .Next(false) would return false once it finished iterating over the current depth of items, because you are saying you don’t want to enter children. This is not the case: the SerializedProperty will continue on to the ‘next’ property in the list. The correct way to determine the ‘end’ of a series of SerializedProperties is to use SerializedProperty.GetEndProperty() and SerializedProperty.EqualContents() to compare against it.
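That pattern looks something like this in practice; a sketch (under my reading of the docs described above) for visiting only the direct children of a property without running past its end:

```csharp
using UnityEditor;
using UnityEngine;

public static class PropertyWalker
{
    // Logs the immediate children of 'parent' and stops at its end property,
    // rather than running on into the next sibling in the object.
    public static void LogChildren(SerializedProperty parent)
    {
        SerializedProperty it = parent.Copy();
        SerializedProperty end = parent.GetEndProperty();
        bool enterChildren = true;
        while (it.Next(enterChildren) && !SerializedProperty.EqualContents(it, end))
        {
            Debug.Log(it.propertyPath);
            enterChildren = false; // stay at this depth after the first step down
        }
    }
}
```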


I initially wrote this utility to view Materials because they are more verbose than most UnityEngine.Objects, and the serialized format is pretty much NOTHING like what you see in the editor. I also added the ability to highlight search strings, and a button that selects objects so I could easily go to references:


Custom Data

I have also used this information to understand how SerializedProperties work with custom data types. Let’s say you have a class such as this:

 public class StringData
 {
     public string foo;
     public string bar;
 }

And you declare a member variable like so:

public StringData[] data;

What does the representation look like? Does Unity declare ‘this is an array of StringData’ in some form? The answer is no. There is no concept of the final data type within the serialization; the data is instead stored as generic data types such as lists, which you can see in the SerializedPropertyViewer output:


You can see that the type of data is called ‘Generic Mono’ and given the name of the member property that it is a part of. What is interesting is that you can use the SerializedProperty.propertyPath to update and change values of items in custom data types. For example you can update the value for the 2nd StringData’s Foo value with the following:

 SerializedProperty secondFoo = serializedObject.FindProperty("data.Array.data[1].foo");
 secondFoo.stringValue = "new value";

Once you know the path, which isn’t necessarily obvious, it becomes simpler to manipulate these values in the editor.
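If hand-assembling paths feels fragile, the same element can also be reached through the array property itself. A sketch, assuming the member is named `data` as above:

```csharp
// Equivalent to the propertyPath approach, without spelling out the path.
SerializedProperty data = serializedObject.FindProperty("data");
SerializedProperty secondFoo =
    data.GetArrayElementAtIndex(1).FindPropertyRelative("foo");
secondFoo.stringValue = "new value";
serializedObject.ApplyModifiedProperties(); // persist the change
```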

Finding Leaks

Another useful aspect of viewing the serialized properties is finding weird references that drive up memory usage. To give you an idea: I had an issue where an editor script kept serializing a texture onto a superclass of an object. BUT this variable had the form:

 private Texture2D mTex;

So guess what? These textures were being loaded at runtime but I had no idea how or why! I couldn’t see them in the editor, but they were loading! I finally found the references being serialized and manually removed them from the YAML.

Show Me The Code!

Here’s code, and a quick screengrab of usage. Enjoy!

Prototype Postmortem: PixelCraft

I have been working towards a couple prototypes for different possible games. My hope is that one of them will have the potential for a complete game. My direction is intentionally unorthodox, because I do not want to re-tread the ground of previous games. This also means that things don’t always work out like you want them to. 🙂

One of the ideas is PixelCraft: a game where you draw pixel art. You complete ‘waves’ and as you complete levels, you uncover more of the art. The idea is to make it feel more like drawing than tracing. You start with raw shapes and sculpt at them, drawing pixels then removing them. Starting loose and tightening as you progress.

I liked the idea of making a game that mirrored the act of drawing without being a traditional drawing game. One of the intents was to ‘reveal’ the drawing slowly instead of providing the final image; providing an image of what you might be drawing would feel more like copying or tracing.

My first attempt involved showing you the first few pixels, and then revealing the ones around you as you painted.


I then continued with the ability to remove pixels…WITH FIRE.


This worked well in the sense that it kept the act of drawing accessible. You knew where to fill in, and you could continue to draw. It didn’t provide the ability to mess up, i.e. to ‘lose’, if we’re going to take the act of drawing into the Game Department. So it was a very winnable game, to the point that it was… kinda boring.

And so I put together a few ‘waves’ of pixels to draw a slime:

And then nyan cat:

When creating the cat level I wanted to see what different content would look like, and how much effort would be involved in making more of it. I thought that from a meta-game perspective you could potentially unlock more levels: Mexican Nyan Cat or Afro Nyan Cat. So much nyan nyan nyannnnn.

After showing it to a few people it was clear that the game wasn’t coming through. It had a few benefits: it was easy to play, a drawing was quick to complete with some reward in doing so, and there was a progression of complexity as the levels added colors. BUT it had no lose condition, and it felt too much like tracing. The fun factor wasn’t obvious.

I decided to try an alternate style and see how things would work. And so the Pixel Ray was born:


The idea is that you can draw freeform: make mistakes, fill areas, or erase. What you’re intending to draw is shown by waving the ‘Pixel Ray’ over the image. It shows you the image temporarily, and after a while the image fades out. You need to remember what was shown, or draw over it while it’s still visible, then re-apply the Pixel Ray to reveal the image again.

This approach added a large amount of complexity to the game, and it took a good amount of iteration to get it to feel right. Because making mistakes was now expected, there needed to be proper support for fixing bad input. This required communicating to the user what the expected image should look like. The user had to be able to use any color introduced so far, instead of being locked into the ‘correct’ colors.

It also became obvious that drawing individual pixels required the user to be very, very zoomed in. And once you’re zoomed in, you need a way to pan around. So the Pan Tool was born. Initially the game was zoomed out more; that zoom level is the one shown when using the Pixel Ray.

Even with these features, it was very, very easy to draw accidentally on a phone. Add to this the fact that you can’t easily see where your finger is painting. I added a ‘mouse fudge’ feature that moved the pixel drawing area up, but that turned drawing into a guessing game, so a ‘crosshair’ needed to be added. The user still needed a way to know where they were about to draw, and there is no way for the program to know before the finger touches down. Ultimately I added an animation to the crosshair: it transitions in for one second and then starts painting. This let me tap to get a feel for where the finger would paint and hone in on the correct location. You’ll notice that I tap a lot in the videos to get a feel for where to draw, and this actually works really well.

Pixel Ray levels required a lot more effort, and there is a skill to it. It almost feels like you could also score your ability on how few mistakes are made. I say ‘ability’ and not ‘drawing ability’ because the ability to precisely place these pixels feels like a separate skill from drawing.

The amount of time required for the Pixel Ray version of Nyan Cat was 6m 22s, compared to 2m 07s for the ‘Paint Wave’ version. Three times longer to complete…and that’s if you’ve played it a lot 😛 . The time felt a little long. I do feel like there were points where I was actually getting into the zone and able to focus and improve my skill at the game.

There are a few unsolved problems, some technical but mostly from a design perspective. First is that the game has little in the way of retention. What makes me want to continue? One of the ideas is to unlock more content, but I don’t think that is enough. There would need to be an ability to unlock different play styles, ‘wave’ types, and explore different ways of painting.

Another is replayability. What makes me want to revisit a level? It seems like improving accuracy could be a reason. Different color variants? Taco Nyan Cat? That would be better served by the ability to unlock skills/tools that let you cruise through the level faster and get the satisfaction of skill progression. But what those tools or skills are, I have not yet been able to unlock inside my head. 🙂

It was a positive experience to get this prototype made and explore the concept. I am sad to say that it is not a very good game…but that’s OK. That’s why you prototype. You learn…and you move on!

Unity Tip: Working with Resources.Load() Strings

Resources.Load() is the way to load ‘baked’ data (i.e. data that ships inside your Unity build). This is done via strings. Messy, messy strings. And what format are these strings?

The path is relative to any Resources folder inside the Assets folder of your project, extensions must be omitted.

This path value is surprisingly easy to get wrong! You can forget a folder, you can misspell the name, etc. So here is a simple script you can use to copy the correct value to the clipboard with a right-click or with cmd-alt-c. You can then paste this value wherever you need it, and not worry about it again! Enjoy.
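A minimal sketch of the idea (not the full script; the menu path is illustrative, and %&c is Unity’s shortcut notation for cmd-alt-c):

```csharp
using System.IO;
using UnityEditor;
using UnityEngine;

public static class CopyResourcesPath
{
    // Right-click an asset (or press cmd-alt-c) to copy its
    // Resources.Load()-ready path to the clipboard.
    [MenuItem("Assets/Copy Resources Path %&c")]
    private static void Copy()
    {
        string path = AssetDatabase.GetAssetPath(Selection.activeObject);
        int index = path.IndexOf("Resources/");
        if (index < 0)
        {
            Debug.LogWarning("Asset is not inside a Resources folder: " + path);
            return;
        }
        // Keep only the part after "Resources/" and drop the extension,
        // since Resources.Load() wants neither.
        string relative = path.Substring(index + "Resources/".Length);
        relative = Path.ChangeExtension(relative, null);
        EditorGUIUtility.systemCopyBuffer = relative;
        Debug.Log("Copied: " + relative);
    }
}
```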



Unity Tip: Importing Textures

When working with textures in Unity you often have a few specific requirements on how they should be imported. Compression, mip-maps, filter mode, and wrap are my most commonly set flags. I’ve taken to writing a small chunk of editor code to set the properties of my images without manually flipping the bits. This means I can select my images, right-click, and set the texture settings to exactly what I want with one click:


The full script is below. I iterate over the currently selected objects in the editor, checking whether each one is a Texture2D (Sprites live inside Texture2Ds, so you can select those as well). I then create a TextureImporter object for the texture, which lets you set the ‘bake’ settings. You can see I’m doing some non-standard things, partly because this project uses an 8-bit style: I want the filter mode set to ‘Point’, and since the images are tiny I don’t think texture compression is worthwhile, so I set the format to true color. Among a few other details.
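A condensed sketch of that approach (not the full script; the settings shown are my reading of the 8-bit choices described above, using the Unity 5-era TextureImporter API):

```csharp
using UnityEditor;
using UnityEngine;

public static class TextureSettingsMenu
{
    // Select one or more textures, then right-click to apply
    // the project's standard import settings in a single step.
    [MenuItem("Assets/Set Pixel Art Texture Settings")]
    private static void Apply()
    {
        foreach (Object obj in Selection.objects)
        {
            if (!(obj is Texture2D)) continue; // Sprites come along via their Texture2D
            string path = AssetDatabase.GetAssetPath(obj);
            TextureImporter importer = (TextureImporter)AssetImporter.GetAtPath(path);
            importer.filterMode = FilterMode.Point;   // crisp 8-bit pixels
            importer.mipmapEnabled = false;           // tiny images, no mips needed
            importer.textureFormat = TextureImporterFormat.AutomaticTruecolor; // skip compression
            AssetDatabase.ImportAsset(path);          // re-import with the new settings
        }
    }
}
```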


Game Development And Timelines

I have been working with Galatea Studios here in Tucson as we work on some exciting and unique projects. As we move forward with new projects Monty posed the question to me: “Can we define a general timeline and expectations for game development?” The end result is a rough ‘order of operations’ document defining terminology, timeline, and technology on how to approach game development.

Of course there is no silver bullet. It is hard to quantify some things, and one size does not fit all. This probably fits the size of a small to medium studio. This info is also skewed by my personal experiences in what I’ve seen work, and the pitfalls I’ve encountered. I hope this information sheds some light on process in game development. Enjoy!

Important Concepts


Who does what on a project? (of course, not an exhaustive list)

Engineers – Writes software.

  • Writes client software.
  • Writes server software.
  • Provides support via tools and documentation to the rest of the team.
  • Provides specifications to art and design.
  • Maintains functioning builds that always compile.
  • Peer reviews code to maintain quality.

Artists – Develops concept, UI and in-game art.

  • Creating art to the specifications of the game.
  • Checking in the art and testing that it displays in-game as expected.

QA Testers – Checks and makes sure that builds work as expected.

  • Smokes important builds.
  • Logs bugs.
  • Verifies that bugs are fixed satisfactorily.
  • Defines tests that make sure that all aspects of the game work before release.

Designers – Develops the mechanics and systems necessary for the game.

  • Maintains a wiki of information that defines how the game should work.
  • Defines new features, systems, powerups, units, etc.
  • Creates flavor text and descriptions for Art.
  • Plays the game a lot to ensure a good level of progression and flow.
  • Ensures the game is balanced and does not suffer from loopholes or flaws in the mechanics.
  • Uses tools provided by engineering and art from the artists to create content such as levels, units, and systems.

Producers – Keeps the pipeline full and SHIP IT

  • Identifies blocked areas and does whatever necessary to unblock team members.
  • Cracks the whip on team members to get things done.
  • Maintains the schedule based on feedback from the team.
  • Prioritizes all work so that all members are working on the most important stuff.
  • Constantly communicates to the team the current workload so that everyone knows exactly what they should be doing.
  • Communicates with stakeholders and translates their needs into something actionable by the team.

Core Game Loop

A core loop is the thing the player will be doing over and over in a game. This should be something that is fun and challenging, ultimately resulting in the user winning or losing the game. Players should experience ‘flow’ through the core game loops.

Core Game Loop Examples

  • Playing a card in a card game.
  • Stalking and killing an enemy in an FPS.
  • Jumping on a platform in a platformer.

The core loop frequently involves improving the skill of the player. An example of this ‘skill loop’ is learning how to place your jump in a platformer, then learning how to place your jump when you are on ice.

Meta Loop

A ‘meta’ loop ties together all the player’s interactions and provides a larger scope to the play. Without the meta there is little reason for the player to return and continue playing.

Meta Loop Examples

  • Using dropped loot to upgrade your character.
  • Crafting items found to collect new items.
  • Progressing a story forward.
  • Going for an even higher score.

High Concept

A High Concept defines the context and narrative that the game takes place in.

High Concept Examples for a Sports Game

  • Fantasy sports with lots of gore.
  • Kids on a playground.
  • Soldiers on break in Afghanistan.
  • Kids playing in a slum in Brazil.


Pitch

A pitch is a short explanation of a game idea, with or without a high concept. It typically covers:

  • Core Game Loop
  • Win/Lose Scenarios
  • Basic Descriptions of units.
  • Potential for expansion.

Pitch Example

Angry Road

You play a flock of birds that you fling across an infinitely long busy road. You avoid the ever more dangerous path of cars and increase your high score until you are hit. Cars move in left/right directions. The frequency and intensity of traffic increases as you progress. You have multiple types of birds at your disposal. Some birds can destroy certain types of traffic. Virtual currency is earned for each playthrough. Special birds can be bought between levels and utilized to increase your distance on your next run. Game could also be level-based instead of high-score based?

Content Pipeline

One of the more complicated aspects of game development can be getting content into the game. Frequently the content pipeline is ‘give it to a developer and they can add it to the game’. Developing a better content pipeline is the best way to utilize a team and should always be a focus when developing features.

Important Technologies

  • Issue Tracker – Make sure you’ve got a methodology to define work granularly and assign it to team members. Options: Assembla/Trac/Unfuddle/Phabricator/Jira
  • Build Server – Set up Jenkins/Team City/etc. to build nightly and keep your builds functional at all times.
  • Version Control – Having some kind of version control that all team members are comfortable with is highly important.
  • Wiki – A location for all parties to add information and look back at it later.

Phase 1: Preproduction

Preproduction is the phase where we experiment, try different things, and figure out ‘where is the fun’.

  • Define Technologies
    • What are we using for the client? Sometimes this is defined by the platform (example: html/js)
    • Is there a server component? How complicated is it?
    • What third party services might we want to use?
    • Target Platforms?
    • 2D? 3D?
  • Pitch Ideas
    • Core Game Loop – What are people going to do that is really fun?
    • Meta Loop – What are people going to do to make them want to come back?
    • End Goal – How is this going to make money or provide value to the stakeholders?
  • Develop Prototypes
    • Code it up in 1-2 days. Move on to the next one.
    • You don’t need to code it! Paper prototyping can be significantly faster depending on the genre. Once you’ve iterated on paper a bit then code it.
    • Use asset store assets and cubes. Don’t worry about art.
    • Focus on the Core Game Loop, and any mechanics you can build around it.
  • Decide on a High Concept
    • Define a narrative that the player can immerse themselves in.
    • What is the context of the narrative?
      • Coach
      • Player
      • Hero’s Journey
      • Fallen Hero
      • Soldier
    • Style Guide – Image Search the style you are after.
      • Realistic
      • Toon
      • Voxel
      • 8-Bit
      • Build a portfolio of reference images.
  • Create Concept Art
    • Build out some rough concepts for:
      • In-Game Art
      • Characters
      • ‘Hero’ artwork.

Things to note:

  • Avoid creating production artwork. Mock things up fast, use things from asset stores.
  • Avoid writing big design docs. Think of a number of small pitches, then prototype the most promising. If you’ve written more than half a page it’s too long.
  • Don’t focus on one idea because you are ‘sure’ it is great. An idea can sound great and flop in execution. Don’t get married to a concept. Prototype multiple ideas to be sure.
  • Breadth over depth: Don’t spec out 100 enemies and 10 player classes. Chances are all that hard work is wasted because the final game won’t match the concept.
  • The Xmas Tree of Death – Stay focused on a singular vision. Don’t compromise and ‘ornament’ the game with useless features to please stakeholders. It will just confuse the purpose of the game.

Focusing On The Fun

“Easy to pick up, hard to master.”

Use metaphors, narrative, and story to transport the player.

Don’t ‘fix’ mechanics by making them more complicated. This isn’t more fun, just more work.

More game ‘variants’ isn’t necessarily more fun, just more work.

Pre-production Deliverables

What do we want to get out of Preproduction?

  • A prototype that defines the core game loop.
  • An immersive high concept that players will want to inhabit.
  • A ‘style guide’ that shows examples of the high concept and art style we’re after.
  • Concept art that shows the rough direction we’re going in.

Phase 2: Production

Getting to First Playable

Now that we’ve explored the design space and settled on a direction, it is time to get to First Playable as soon as possible. First Playable is where you’ve got something that actually feels like a game. This involves a full ‘slice’ of functionality of the Core Game Loop.

Scheduling is the most complicated aspect of Production. It will probably go something like this:

  • Engineering is super busy developing the First Playable and is a major bottleneck for everyone else.
  • Designers can get started with designing some systems, mechanics, etc. but are generally bottlenecked.
  • Artists are bottlenecked on most in-game art and UI.
  • QA has literally nothing to do.
  • Production probably won’t have much to do until First Playable except crack the whip on Engineering.

Full Blown Production

Congrats! You’ve got your First Playable, time to enter Full Blown Production. Since you’ve got a general idea of how the game works, you can now have the Content Pipeline conversation, i.e. how are non-developers going to produce content for the game? Developers can now add supporting content creation tools to their workload.

Production scheduling now looks like this:

  • Engineers are adding features and tools to allow other team members to build content.
  • Artists are adding artwork to the game directly.
  • Designers are utilizing the tools made by engineering and incorporating artwork to create units, systems, levels, etc.
  • QA might have a little to do.
  • Production is juggling all these balls to get as much in as possible.

Important Engineering Initiatives

  • Content Pipeline.
  • Adding Features.
  • Implement Sound.
  • Implement the ability to mute sound. 🙂
  • Get a build server building nightlies.
  • Implement a basic localization system, do not sprinkle your code with strings.
  • Develop some kind of architecture that fits your needs.
  • Tools, tools, and more tools will keep your engineers from being bottlenecks.

Important Design Initiatives

  • Defining specifications for art.
  • Creating systems and units.
  • Playing the game and improving balance and progression.

Important Art Initiatives

  • Implementing in-game art.
  • Mocking up UI concepts.
  • Creating UIs in-game which can then be ‘made live’ by Engineers.

Important Production Initiatives

  • Prioritizing features that will have the most important impact.
  • Unblocking team members by getting the resources they need to get the job done.
  • Maintaining the quality bar, and putting systems into place to make sure it stays up.
  • Identifying inefficiencies in day-to-day work and eliminating them.

Phase 3: Post Production

Congrats! You’ve entered Post Production. How do you know you’ve entered Post Production? There are a few indicators:

  • The game is feature complete but not bug free.
  • Most tasks are polish tasks.
  • The Artists and Designers are now the bottleneck, not the Engineers.
  • QA has lots to do.
  • It is getting harder to give Engineers something to do.

The team should keep working on the existing features and get it out the door as soon as possible. Avoid getting into ‘analysis paralysis’ or adding features you *think* your players want. If you ship you will know *exactly* what they want because they will not be quiet about it.

Phase 4: Live Ops

Congrats! You’ve squashed all your bugs! Or at least the P1s, and let’s hope no one notices the others. Time to ship! You are now supporting a Live Game. This is the point where:

  • Hopefully you made a game people really enjoy and your players are telling you what they want.
  • Fix all the bugs you didn’t catch (oh yes this will happen).
  • Add new features you didn’t know you needed until you shipped.
  • Improve your content creation tools to deliver more content, faster.

If you’ve read this far, wow! I would be interested in your experience working in teams on games. Sound like mine? Completely different? Send me a tweet and let me know.

Stereoscopes and Cardboard Part 2: Virtual Stereoscope

In Part 1 I learned about the history of stereoscopic photography. Here in Part 2 I am attempting to display stereoscope cards from the 1890s on my futuristic Google Cardboard.

I took some time to calibrate the images as best I could in Photoshop and place them in a 3D environment. At first I tried to make something that looks like this:


But this is, of course, a ‘one-camera’ approach. What I really want is to display the image for each camera (aka eye).


This variation was an attempt to display the image ‘directly ahead’ for each eye (i.e. offset by 0.3 world units for each camera). I also put each image on its own layer, and made each camera display only its image. Unfortunately I had a couple of snafus in my code: the ‘toggle mask’ meant I had accidentally swapped the images between eyes, and the offset was funky.

Finally I figured I just needed to put the images right on top of each other, and display the left image for the left eye and the right image for the right. Too simple, right?
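In Unity terms, the per-eye setup comes down to layers and culling masks. A sketch of the idea (not my actual project code; it assumes two layers named LeftEye and RightEye exist in the project):

```csharp
using UnityEngine;

public class StereoCardSetup : MonoBehaviour
{
    public Camera leftCamera;
    public Camera rightCamera;
    public GameObject leftImage;
    public GameObject rightImage;

    void Start()
    {
        // Put each half of the card on its own layer...
        leftImage.layer = LayerMask.NameToLayer("LeftEye");
        rightImage.layer = LayerMask.NameToLayer("RightEye");

        // ...then hide the opposite eye's layer from each camera.
        leftCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("RightEye"));
        rightCamera.cullingMask &= ~(1 << LayerMask.NameToLayer("LeftEye"));
    }
}
```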

Either that’s two cameras or one camera that’s had a lot of coffee.

You may also notice that I’ve added some UI:


These calibration controls allowed me to change the X and Z values of the pictures to see if moving them would better calibrate the image. I can move them closer and farther away, and increase/decrease the gap between them. The ‘Y’ value is fixed at 1.0 world units, which is effectively ‘head height’.

I was able to determine that, overall, the calibration I did in Photoshop was ‘good enough’. The X gap could be slightly better, but that may be subjective.

So how is the experience?

Upon the 3rd build with the calibration controls I saw this:


Which, if you have a Google Cardboard on, looks really, really cool!! My jaw kind of dropped when I saw it work. Not only did the 3D work almost perfectly on boot, it is like traveling through time and seeing another era!

I initially wrote that I was skeptical the parallax would have much of an effect, and boy, was I wrong. All kinds of details pop out, and you get a feel for the people and things of the world. The pumpkins are round. The corn, which didn’t have much pattern before, suddenly coalesced into rows. The girl and the man in the background are quite alive and visible now!

Now having said that, the standard issues of current-gen VR do exist, specifically view persistence and the ‘screen door’ effect. I was initially thinking that displaying the image in a 3D world and being able to ‘look around’ it would allow for a better experience. But at times it seemed like head tracking (i.e. the gyroscope) was adding blur, and that perhaps looking at the image ‘off axis’, instead of directly at it as in a traditional stereoscope, was causing distortion.

So I decided to turn off head tracking, essentially turning the Google Cardboard into a more traditional stereoscope. It turned out (surprise) that while head tracking was adding a bit of blur, it always stabilised. And looking at the image off axis? This wasn’t a problem; it actually improved the ability to see more details! To look at the same details in a traditional stereoscope you would have to move the image farther away, allowing you to see the edges of the image without distortion. This also meant the image was smaller and harder to see. With head tracking on, you can see the edges of the image close up and with less distortion.

It is quite fun to think that no one has viewed these antique images in quite the same way and detail that I’m able to see now. It is also somewhat irritating that my only way to convey this to you is through words. It is quite possible that only I am interested in such quirky things. But if I am not alone, perhaps there will be a Part 3 to this series someday! I hope that I can scan and calibrate all 26 images I have bought and include them in an app for Google Cardboard. It should not be a complicated endeavor, but it is not for today.

Stereoscopes and Cardboard Part 1: Digital Archaeology

Here in Part 1, I attempt to learn more about the history of stereoscopic photography. Part 2 involves converting this old photography into a Google Cardboard virtual reality app.

I was in a rather large antique shop in Globe, AZ, snooping around, and I came across some rather interesting photographs:


These are stereoscopic images taken in the late 1800s onwards. The selection they had, which I now own, generally fell into three categories: nature, cities, and the Sears-Roebuck offices.

These stereoscope cards are, unsurprisingly, made to be put into a stereoscope, most commonly the Holmes Stereoscope, which was intentionally left unpatented so that an economical and simple stereoscope could be made.

Thousands of stereoscopes and many, many card sets were made. Some of these sets were available in book-shaped containers, with one hundred or more cards.

The images I bought were all thick white cards with photographs glued onto them. The Sears-Roebuck cards were simply printed copies.

A closeup that shows the halftone pattern of the lithographic black and white photo print.

Regardless, they were quite fascinating to me in many ways, partially because of what was considered worthy photographic subject matter. Here is where Sears stores its ‘talking machine records’.


The back of the card reveals that…yes…this was a subtle sales ploy:

“If you are interested in a talking machine don’t fail to read about our offers in our General Catalogue”

I mean, if you own a stereoscope, chances are you are the sort of chap who’d love a talking machine, yes?

While these are interesting cards, my favorites were the higher quality photo cards. These cards were generally more interesting, and are genuine photos from the late 1800s, most likely circa 1890. The gorgeous sepias and continuous tones make these stand out.

When the frost is on the pumpkin, and the fodder’s in the shock.

Yes, this is…I don’t really know what this is. I mean, first these two are kissing:


These two are sneaking a kiss in after their work is done (aka When the frost is on the pumpkin, and the fodder’s in the shock). I’m assuming this is supposed to be more funny than titillating, and at least she seems to be laughing. These two however are not:


But hey, I’d look surly if I was stuck dancing with the dog, too.

So my first thought about these photos was…are these really stereoscopic? I do not actually own a stereoscope! But I do have a scanner and a few Google Cardboards for various projects. But before I go ahead and create an app for viewing these quaint (what better word for them than quaint?) images I figured I should verify they actually are stereoscopic.

At first glance, they do look like the same image. The imagery is rather small (the stereoscope has a magnifying effect). The images were cut out and glued by hand, making it more difficult to determine if the images were simply cut out a little to the left, and then to the right.

When I scanned the images and digitally superimposed them, I had no ‘registration point’ with which to determine the location of the images. There also seemed to be a bit of vertical misalignment, which I had to adjust for.

But for both my ‘talking machine’ and ‘kissing farmers’ image it became quite obvious that the images were indeed stereoscopic.


The row of corn is the most prominent detail, showing the parallax effect between the two cameras. It also seems that the parallax between the cameras is exaggerated, made much larger than the parallax that naturally occurs between a person’s eyes. Perhaps this was on purpose, or perhaps it was simply because the camera equipment required was so large.


While neither card was very well ‘registered’ (i.e. calibrated for stereoscopic display), the talking machine card was definitely worse due to its rotation. The kissing farmers were vertically misaligned, but the talking machine was both vertically off and slightly rotated. I can only imagine this made it quite difficult to get the correct stereoscopic effect.

But it seems to me that the usage of stereoscopic imagery was gratuitous and did not add much to the imagery. I’m not sure what value there is to seeing a hallway full of vintage vinyl in stereo, unless maybe you are a classic vinyl junkie. Perhaps the camera lenses of the time did not provide enough distortion to give the images a sense of perspective, and the calibration of the cards may have posed issues as well. I am very interested to see whether there is something more to these images in 3D, something that becomes immersive and fun.

And immersion is on my mind because now I will be attempting to bring these images into a 3D environment using Google Cardboard and Unity. Stay tuned for Part 2!

Fixing Time Machine Files

Every once in a while I’ll grab something off my Time Machine backup. But restoring from Time Machine can sometimes fail, for example when you are restoring a file from another computer.

I then have to resort to copying things over from the Time Machine hard drive, which causes weird permissions issues. And even if you change the owner and permissions in Finder…it’s STILL broken!

Thankfully it is possible to fix this. I found this link about ACLs (access control lists), which explains that OS X has a hidden, non-obvious permissions system layered on top of the usual Unix permissions. And this link about file attributes points out that on the command line you can’t even see that a file has ACL entries if it also has an extended attribute set!

So how to fix a file?

First, you can clear the ACL like so in Terminal:

sudo chmod -N $MYFILE

And you can remove ALL extended attributes (maaaay need to be careful with this one) like so:

xattr -c $MYFILE

This will convert your Time Machine file back into a ‘normal’ file.
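If you have a batch of restored files to fix, the two commands above can be wrapped up in a small script. Here is a rough Python sketch; the `fix_tm_file` function and its `dry_run` flag are just my own illustrative names, and the underlying `chmod -N` and `xattr` commands are macOS-only:

```python
# Sketch: fix a file restored from Time Machine by clearing its ACL
# entries and ALL extended attributes (the two Terminal commands above).
import subprocess

def fix_tm_file(path, dry_run=False):
    """Return the commands used; run them unless dry_run is set."""
    commands = [
        ["sudo", "chmod", "-N", path],  # clear the ACL entries
        ["xattr", "-c", path],          # remove ALL extended attributes
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # stop if a command fails
    return commands

if __name__ == "__main__":
    # Dry run so nothing is touched; on a Mac, pass your real path
    # and dry_run=False to actually fix the file.
    for cmd in fix_tm_file("time_machine_file.txt", dry_run=True):
        print(" ".join(cmd))
```

The dry-run mode is handy for sanity-checking which files the script will touch before handing it `sudo`.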

And how do you diagnose that this is the issue? If you run:

ls -al
-rw-r--r--@  1 chris  staff      740 Nov 16 11:50 time_machine_file.txt

See the @? As mentioned in the article above, this @ obscures the ‘+’ that would normally show, the one that tells you an ACL controls this file. You can find out with:

ls -le .
-rw-r--r--@  1 chris  staff      740 Nov 16 11:50 time_machine_file.txt
 0: group:everyone deny write,delete,append,writeattr,writeextattr,chown

Oh hey, look at that! ‘group:everyone deny write’, that might be a problem. And now you know the fix!

Fixing Cmd-tab for Full Screen in Hearthstone on Mac

One of the more annoying things about Hearthstone on Mac is Blizzard’s decision/oversight not to allow cmd-tab to work in full-screen mode. Considering it’s written in Unity, and I’m a Unity game dev, I knew exactly what was going on: their ‘Mac Full Screen Mode’ setting was set not to ‘Full Screen Window With Menu Bar and Dock’ but to ‘Full Screen Window’. This means you have to exit full screen in order to switch applications, which for me was a major pain in the ass (especially while net decking! 🙂 ).

And so I realized: hey, I know how to fix this. It’s just data. I could find the setting in the binary data and flip the bit. I created a simple test project where the only change was this setting, then diffed the mainData file and found this:


Oh look! I can switch a 1 to a 2 and fix the issue. OK, so the next step was to open Hearthstone’s mainData. Let’s just say this required some basic snooping to find the correct offset. The offset in my mainData was 10 32-bit words past two 0xFF words that were followed by what looked like a lot of bit flags. So I found the two 0xFF words in Blizzard’s mainData, counted 10 words over, flipped a 1 to a 2, and bam: cmd-tab works.

How do?

Perhaps you would like to turn cmd-tab on as well? I will attempt to give some simple steps:

  • Download Hex Fiend here.
  • Get the mainData file from /Applications/Hearthstone/ and back it up.
  • Copy the file to another location, and open the file in Hex Fiend.
  • Choose Edit->Jump To Offset. At the time of this writing the offset is 4884. Move your cursor until the status bar on the bottom says ‘4884 out of XX bytes’.
  • Change the 01 to a 02 at this location.
  • Save the file.
  • Move the modified file to the location mentioned above and start Hearthstone.
  • Observe that you can now Cmd-Tab.
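If you’d rather script the edit than click through a hex editor, here is a rough Python sketch of the same byte flip. The `flip_fullscreen_byte` name is my own, and the 4884 offset and 01→02 values are the ones from the steps above, which will go stale as soon as the game is patched, so keep that backup:

```python
# Sketch: flip the 'Mac Full Screen Mode' byte in a COPY of mainData.
OFFSET = 4884  # same offset as Edit->Jump To Offset in Hex Fiend

def flip_fullscreen_byte(path, offset=OFFSET):
    with open(path, "r+b") as f:
        f.seek(offset)
        found = f.read(1)
        if found != b"\x01":
            # Wrong offset, a new patch, or an already-modified file: bail out.
            raise ValueError(f"expected 01 at {offset}, found {found.hex()}")
        f.seek(offset)
        f.write(b"\x02")  # 'Full Screen Window' -> '...With Menu Bar and Dock'

if __name__ == "__main__":
    import os, tempfile
    # Demo on a throwaway file rather than a real mainData.
    demo = tempfile.NamedTemporaryFile(delete=False)
    demo.write(bytes(OFFSET) + b"\x01" + b"\x00" * 8)
    demo.close()
    flip_fullscreen_byte(demo.name)
    with open(demo.name, "rb") as f:
        f.seek(OFFSET)
        print(f.read(1).hex())  # prints "02"
    os.unlink(demo.name)
```

Run it against a copy of mainData, then move the copy into place as in the steps above; the guard on the expected 01 byte keeps it from scribbling on a patched file.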


Your Mileage May Vary

Needless to say, this information will most likely go out of date as soon as the game is patched. Additionally, Blizzard had this feature in the Beta but removed it at launch, and my assumption is that they had a reason for it (probably full-screen compatibility issues on crappy graphics cards). But hey, if it’s not that, maybe they can flip this bit and make us Mac players happy campers?