1. Persistence of scores and stats across platforms. Saving character data and scores across the two platforms is easy, but what are the gameplay implications? Will certain encounters or scenarios lend themselves better to mouse input than to touch input? Of course. So is it wise to mix the two worlds when a single platform has a competitive edge? Maybe not.
2. Having to choose favorites when introducing or removing a mechanic. No matter how hard one tries, there will be times when a certain feature, game mechanic, or even a whole encounter relies on a type of input that is impossible, or trivialized, with either a mouse pointer or touch input. There are ways to treat touch input purely as a mouse pointer by forcing the user to hold down on the screen and drag the avatar, but is it worth it? One mechanic I've been mulling over has obstacles advancing toward the player, but on a touch device users can essentially "jump" over them, because a touchscreen doesn't track input unless there's contact with the screen. A user could tap anywhere on the screen and have the avatar teleport there. On PC, the mouse would have to drag the avatar through the openings, which presents a fun challenge. (There are ways around this on PC too, by dragging the mouse pointer out of the game area and back in somewhere else, but that wouldn't give you a precise teleport tactic without a third-party app to position the mouse.) In this particular scenario, I'll probably program a workaround where the avatar actually "follows" the input instead of a precise "sprite position = input position" kind of system. This would also allow for interesting new mechanics, like slowing the avatar down when it's hit with a frozen effect, so you have to lead it through obstacles to get somewhere rather than just cutting through everything.
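The "follow" idea above can be sketched roughly like this (names and values are my own assumptions, not the game's actual code): each frame the avatar moves toward the input position at a capped speed, so a tap on the far side of the screen can't teleport it past obstacles, and a frozen effect can simply scale the speed down.

```javascript
// Follow-based movement: the avatar steps toward the input at a capped
// speed instead of snapping to it. `speedScale` lets status effects
// (e.g. a freeze at 0.5) slow the avatar without extra logic.
function stepAvatar(avatar, target, dt) {
  const dx = target.x - avatar.x;
  const dy = target.y - avatar.y;
  const dist = Math.hypot(dx, dy);
  const maxStep = avatar.maxSpeed * avatar.speedScale * dt;
  if (dist <= maxStep) {
    // Close enough: land exactly on the input position.
    avatar.x = target.x;
    avatar.y = target.y;
  } else {
    // Otherwise take one capped step along the direction to the input.
    avatar.x += (dx / dist) * maxStep;
    avatar.y += (dy / dist) * maxStep;
  }
  return avatar;
}

// Example: a frozen avatar (half speed) chasing a distant touch point
// for one 60 fps frame -- it barely moves instead of teleporting.
const avatar = { x: 0, y: 0, maxSpeed: 200, speedScale: 0.5 };
stepAvatar(avatar, { x: 300, y: 400 }, 1 / 60);
```

Because the same function runs for mouse drags and touch taps, both platforms share one movement model; only the cap differs from the old snap-to-position behavior.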
3. UI layout. UI layout is important regardless of platform, but things that work on PC simply will not work on a mobile platform, and the problems aren't as obvious as you'd think until you're testing on both platforms at the same time. When your thumb is constantly on the screen, you have to keep in mind that visibility is going to be hindered, especially when all of the touch input happens in the field of play: there are no UI buttons, you point directly at what you want to kill. I've already programmed the UI around this to an extent, mainly by offsetting pop-ups like damage numbers and HP bars to the left so you can see them with your thumb down (remind me to add a left-handed option to flip that). Managing the size of everything is also important. Apple publishes basic guidelines on how big a button should be, and you can roughly translate that to enemy and avatar size, but if you're touching something to kill it, you want to see what's happening to that monster and can't, because it's obscured by your finger or thumb. I've prototyped one thing to alleviate this: a magnifying glass that gives you a view of the action off to the side, but it has to look good to fit in with what's going on. I think I do have a good way to implement it, so I'm not too concerned with it yet; I'm actually more concerned with how potential wrappers like Cordova or CocoonJS will treat it (right now, by crashing).
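The pop-up offsetting described above might look something like this minimal sketch (the function, offset value, and handedness flag are all my assumptions for illustration): shift pop-ups away from the side the thumb covers, and flip the horizontal sign for left-handed players.

```javascript
// Thumb-aware pop-up placement: a right-handed thumb tends to cover the
// area at and below-right of the touch point, so damage numbers and HP
// bars are nudged up and to the left; left-handed mode mirrors that.
const POPUP_OFFSET = 64; // px; assumed value, would be tuned per device

function popupPosition(touch, leftHanded) {
  const dx = leftHanded ? POPUP_OFFSET : -POPUP_OFFSET;
  return { x: touch.x + dx, y: touch.y - POPUP_OFFSET };
}

popupPosition({ x: 200, y: 300 }, false); // → { x: 136, y: 236 }
popupPosition({ x: 200, y: 300 }, true);  // → { x: 264, y: 236 }
```

The same helper can position every occlusion-sensitive element, so adding the left-handed option later is a one-flag change rather than a per-widget rewrite.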