While noodling about with PICO-8, I thought I’d have a go at POKEing random and semi-random values into the screen memory. As you do.
A long time ago, I used to do this sort of thing with the Spectrum, writing very simple assembly routines (and assembling them by hand!) that made fancy screen wipes and transferred images from memory to the screen and stuff. So fancy were they that I lost the comp.sys.sinclair Crap Games Competition one year for being too clever.
In case you’re wondering, POKE is a command that lets you store a value directly into memory without using variables or pointers or other things. On the Spectrum, it was pretty much the quickest way of outputting pixels to the screen outside of actual machine code, and was also used for modifying code. In fact, POKEing became the way of cheating. POKE a bigger number into the Lives counter, or POKE zero into the part of RAM that holds “how much should I decrease energy by?”. Devices like the Romantic Robot Multiface existed mainly for the purpose of enabling this functionality with ease – press a button, POKE, done. You filthy, filthy cheat.
No cheating here though, just a load of sort of pleasing GIFs of the output I’ve been producing. There’s something ethereal about them, if you can get past the pointlessness and eyebleed they suggest.
There’s also the command PEEK, which lets you see the contents of a memory location – useful to check if something is there already.
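In modern terms, PEEK and POKE are nothing more than reads and writes against a flat array of bytes. A toy Python equivalent, just to illustrate the idea (the 64K size and the function names are mine):

```python
# A toy 64K address space, standing in for the machine's RAM.
memory = bytearray(0x10000)

def poke(address, value):
    """Store a byte (0-255) directly at an address, BASIC-style."""
    memory[address] = value & 0xFF

def peek(address):
    """Read back whatever byte is currently at an address."""
    return memory[address]
```

On the real machine there were no bounds checks, of course – POKE the wrong address and the whole computer could fall over.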
For these silly experiments, I was POKEing values into the memory range 0x6000 to 0x7FFF – the 8K of RAM that makes up the PICO-8 screen. Each 8-bit value is two adjacent pixels, reversed: the left 4 bits are the pixel on the right, and the right 4 bits are the pixel on the left. It’s a little confusing, but if you saw how the Spectrum arranged its screen you’d cry.
This means each pixel can have a value from 0 to 15, or 0000 to 1111 in binary. 16 colours to match the 16 colours of the PICO-8 palette. All very interesting, but all I was going to do was bung random stuff in there.
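To make the pixel-pair layout concrete, here’s a minimal Python sketch of how two 4-bit colour values pack into one screen byte (the function names are mine, purely for illustration):

```python
def pack_pixels(left, right):
    """Pack two 4-bit colours (0-15) into one PICO-8 screen byte.
    Note the reversal: the LEFT pixel lives in the LOW nibble."""
    return ((right & 0x0F) << 4) | (left & 0x0F)

def unpack_pixels(byte):
    """Recover the (left, right) pixel pair from a screen byte."""
    return byte & 0x0F, byte >> 4
```

So POKEing 0x21 into address 0x6000 paints colour 1 in the top-left pixel and colour 2 in the one next to it.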
When my PC hard drive reached critical capacity last week, and I was figuring out where all the space had gone, I found one of the major bit-thieves was Spotify. This folder (in Windows 7 at least):
had grown to be over 12GB. 12GB! For a streaming service where everything is online? That can’t be right. Even more odd is how this folder isn’t even the Spotify cache folder, which by default is here:
and for me was only a gig in size. The user interface for Spotify does let you reconfigure the location of this Storage folder, so you can move it to another drive to make a bit of space, but this doesn’t help with the larger issue of the Data folder – which can’t be configured like this.
Luckily, there is still a solution. Firstly, close Spotify completely (making sure it’s not still running in the system tray), then go to this folder:
Note that’s the Roaming folder, not the Local folder where the Data and Storage folders are. In here, there’s a file called “prefs”. Open this in a text editor – Wordpad is better than Notepad because the file contains UNIX style carriage returns and Notepad doesn’t cope well with them.
At the end of the file (although it doesn’t seem to matter where, so long as it’s on a line by itself), add this:
The number is the maximum size, in megabytes, you want to give over to this cache. So for 1GB, use 1024; for half a gig, 512, and so on. I’m sure you can figure that bit out for yourself!
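I won’t swear to the exact line from memory, but for what it’s worth, my recollection is that it’s the storage.size setting – treat the key name as an assumption and check it against a current source before relying on it:

```
storage.size=1024
```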
Save the file, making sure your text editor doesn’t add a rogue file extension when you do so, and then go back to the Data folder mentioned earlier and delete the contents. No, really. It’s fine. Or move them somewhere if you’re scared.
Open up Spotify again, and if it’s all working, you’re set! It isn’t clear why this folder fills up when there’s a perfectly good folder already there for caching purposes, but at least there’s a way of stopping it growing out of control.
The day Google did what that large-bottomed lady failed to do.
Google AMP, or rather, the Google AMP Cache, is rolling out to users right now. It’s been in use for Google News searches for a little while, but now general Google searches are becoming infected by it, and there’s no way to turn it off.
The intention of the AMP project is noble enough: Make mobile pages work faster. On the webmaster side of the project, some work needs to be done in order to make mobile versions of their pages AMP compliant. For many folk, this is little more than triggering a plugin for their CMS, but for those who code sites a little closer to the metal, there are specific AMP HTML pages to create and check. You know how HTML5 and the likes of Bootstrap helped unify devices, so they only need a single page regardless of screen type or viewport size? Well, it seems AMP reverses that.
I don’t pretend to understand it all. But I don’t need to in order to find faults with Google AMP Cache. What this does is (as the name implies) cache AMP pages. It rolls them up and spits them out quickly to your phone when you access them.
The Google AMP Cache is a proxy-based content delivery network for delivering all valid AMP documents. It fetches AMP HTML pages, caches them, and improves page performance automatically. When using the Google AMP Cache, the document, all JS files and all images load from the same origin that is using HTTP 2.0 for maximum efficiency.
Which would be good, only it isn’t. When you use Google on your mobile device to search now, AMP pages are preferred in the results list so generally appear at the top – even if the content is “better” on a non-AMP page. When you tap the link, you get Google’s cache of the page, and herein lie most of the issues.
It’s cached, so inherently isn’t necessarily the newest content. You also don’t get the correct link from the page – the URL bar shows a Google URL. For example, instead of:
If you then decide to pass this link on to someone not on a mobile device, then you end up passing on the AMP’d link instead, only it doesn’t work. Just copy and paste that second link into your desktop web browser URL bar and see. Not only do you not get taken to the page, you get sent to a page of search results for which the top match isn’t even the correct site 1:
It’s even worse than that. Without hacking apart the AMP Cache URL, you can’t even find a link to the correct “real” page to pass on or save. The cached pages also tend to strip out certain content, such as adverts or input forms. This may be a bonus, or may be because of the ineptitude of the webmaster, but it doesn’t matter either way: Content is not served up correctly and that is a problem.
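That said, the hacking-apart isn’t hard if you fancy it: at the time of writing, cache URLs appear to be just the original host and path glued onto google.com/amp/. A rough Python sketch under that assumption (the function name and example URL are mine, and it ignores wrinkles like the /amp/s/ prefix used for HTTPS origins):

```python
from urllib.parse import urlparse

def deamp(url):
    """Best-effort recovery of the original page from a Google AMP Cache URL.
    Assumes the google.com/amp/<host>/<path> layout; other URLs pass through unchanged."""
    parsed = urlparse(url)
    if "google." in parsed.netloc and parsed.path.startswith("/amp/"):
        return "http://" + parsed.path[len("/amp/"):]
    return url
```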
But things are worse still. Because the Google AMP Cache is, by their own definition, “a proxy-based content delivery network” it can be used to bypass web filters and restrictions. Page blocked by your school? Just access the AMP Cache version of it on your mobile device. In fact, you’ll bypass the filter automatically and inadvertently, potentially breaching an acceptable use policy.
The worst bit of all? You can’t turn it off. There’s no switch in your browser or your Google account settings. You can block access to google.com/amp (or .co.uk/amp, or other country-specific variations), but that stops search from working properly. You can ask webmasters to disable AMP support, but there are so many using it now that isn’t going to happen. I do wonder if many webmasters were hoodwinked into this: they saw the benefits of AMP, so embraced it, and now Google have screwed them over by forcing the cache and breaking their content. How does advert revenue work now for those people, if the adverts are cached? Clickthroughs and hits? Did webmasters realise this was the endgame? When I looked into AMP a while back for WordPress, I certainly didn’t. Is there a legal issue with Google AMP Cache essentially cloning your content and serving it up from their server? It’s a mess.
And what if you do manage to convince a webmaster to turn it off? What happens then? This: 404s everywhere. That’s Google’s answer.
The situation now is that mobile search, via Google, is effectively broken just so we can get a page on the screen a few milliseconds faster. This is not progress.
Note that it’s Google who redirected to this search – I didn’t stupidly just put the URL in the search box!
Like many Your Sinclair readers, I have a strange attraction to one of the Spectrum’s most creative games: Advanced Lawnmower Simulator. In the past, I’ve written versions of it for the Game Boy, Game Boy Advance, Amstrad NC100, the Codebug, the BBC Microbit, a Casio calculator, and various other things.
Today I bring to you my latest port: a version for the PICO-8 virtual console. It’s called “Advanced Lawnmower Simul8or” because of the 8, you see.
The PICO-8 doesn’t actually exist. It’s a fabricated retro console with 8-bit style limitations intentionally imposed, and a built-in development environment complete with sound tracker, sprite and map editors, and a “compiler” of sorts which uses Dark Magic to somehow squeeze your program into a PNG file for distribution. Like this one:
I’ve posted a few games on here before, like Hug Arena and Duck Duck on the Loose, but this is the first I’ve created myself. The code is horrible. The game barely registers as one. Play it now!
I’ve only ever been to Kempston once, and that was to go to Sainsbury’s in the middle of the night. I’ve driven through it a handful of times, though, and without wishing to offend those who live there, it is pretty unremarkable. Sinclair is not, as far as I’m aware, a location anywhere in the UK or any of the countries I’ve ever visited. With this deduction, I presume the comparison requested is between the two competing joystick interface standards for the Sinclair Spectrum.
That certainly makes more sense, anyway.
Notice that I said “joystick interface” rather than “joystick”. Although Sinclair, and probably Kempston, branded sticks existed, and games always referred to “Kempston Joystick” and so on, it was actually the interface that made the difference.
Unlike many other machines at the time, the Spectrum didn’t have any joystick ports built in, and wouldn’t until the +2 model came along. Instead, an interface device needed to be plugged into the computer’s edge connector, and joysticks were plugged into that. The joysticks themselves were actually standard Atari-compatible one-button sticks with 9-pin connectors, and could be used with either interface (or indeed most of the less common alternatives, like AGF and Fuller), as well as with many other computers and consoles of the era.
To the player, that meant that there was no real difference between the two. In my experience, more (most, in fact) games supported Kempston as a default option, but the Sinclair interface was actually quite clever: It replicated number keys on the keyboard. If a game didn’t support joysticks but did support redefining the keys, then you could still use a stick. Of course, the Cursor joystick interface did the same, but that isn’t up for discussion. In addition, the Sinclair interface offered the capability to use two sticks at once – one replicating the keys 1-5, the other 6-0.
At a technical level, and I’m barely a machine code coder so bear with me, communicating with the interfaces was just as simple for each: you read hardware port 0x1f for Kempston, and either 0xf7fe or 0xeffe for the two Sinclair ports. However, as a BASIC programmer (which I’m infinitely more proficient at – I mean, have you not seen Octopus Lite?), reading joystick ports is slightly more complicated and unusual. Not so with Sinclair sticks: they register as keypresses, so a quick INKEY$ is all you need.
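For completeness, the Kempston byte is easy to pick apart once you’ve read it: one active-high bit per input, in the order right, left, down, up, fire from bit 0. A little Python sketch of the decoding (the function is mine, just to show the bit layout):

```python
def decode_kempston(value):
    """Decode a byte read from Kempston port 0x1f into the set of active inputs.
    Bit layout, low to high: right, left, down, up, fire (all active high)."""
    names = ["right", "left", "down", "up", "fire"]
    return {name for bit, name in enumerate(names) if value & (1 << bit)}
```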
However. It was much, much easier, and cheaper, to get hold of a Kempston interface (not least because the official Sinclair ones also did other things besides just provide joystick sockets) so that’s what most Spectrum owners did. Kempston became the de facto standard for Speccy joystick interfaces, with some games equating the term “Joystick support” with support for the Kempston interface, and so effectively Kempston was best just for that reason.
Sinclair may have been more useful, but Kempston was King.
OBS (Open Broadcaster Software) is a great program for streaming and recording gameplay on a PC. I’ve used it quite a lot, but it has frustrated me for a while that the Mac version has – through no fault of OBS – no ability to capture game audio. On the PC, you can capture “desktop audio” or “what U hear”, but that’s not an option on OS X. On older Macs, you could run a 3.5mm to 3.5mm cable from the headphone socket to the line in socket, but on newer Macs there’s one socket that does both directions, so that’s out.
A year or so ago I tried to get either of two solutions working: Soundflower, which sort of did but was really fiddly, and WavTap which never seemed to work at all. Recent versions of OS X have actually prevented Soundflower from working as intended at all, so there was no (free) solution and I stopped using OBS on a Mac. Until last week.
Soundflower was still in my system settings, albeit unused, and I’d never got round to uninstalling it. It irritated me every time I saw it there, but I was always busy doing something else and kept forgetting to remove it. This time, however, I finally looked up how to uninstall it, and in the process came across a new utility which effectively replaces it, and works: IShowU Audio Capture. There’s a full (paid-for) software package called IShowU, but all you need for this purpose is the audio capture part, which is free.
How to make IShowU work with OBS
Firstly, download IShowU Audio Capture from this link, and install it as shown. You don’t need Step 6 yet, so do 1-5 and come back.
Done? Hello again!
So, step 6 is going into System Preferences > Sound and choosing IShowU Audio Capture as your sound input device, which will work, but keep reading for an additional tweak.
All you need to do now is open up OBS and choose a new Audio Input Capture source (click the + under the Sources box), then choose IShowU Audio Capture as the device. That’s it!
Only… there’s a snag. This will indeed capture all “desktop audio” (so you’ll probably want to close or mute email notifications and so on when streaming or recording), but crucially it won’t actually output any sound to your speakers or headphones so you’ll be playing mute. This might not be a problem, but if it is, read on.
Open up the Audio MIDI Setup app from Applications > Utilities. In here we’re going to create a multi-output device, so you can output your desktop audio to both IShowU and headphones/speakers/whatever at the same time.
Click the + in the bottom left, and choose “Create Multi-Output Device”. Then, in the right-hand pane for this new device, make sure you tick “Built-in Output” and “IShowU Audio Capture”. Leave drift correction set to Built-in Output.
Close that, and head back over to System Preferences > Sound. You’ll now have an output option for your new Multi-Output Device. Before you choose it, make sure you set your volume level how you want it: you can’t adjust the volume of a multi-output device!
With the volume set, choose Multi-Output Device as your, er, output device, and you’ll notice the volume slider grey-out. It’s time to go back to OBS and configure the Audio Input Capture there – same as before, choosing IShowU Audio Capture.
You’re done, although you might want to remember to choose your usual output settings in System Preferences when you’re finished recording!
“I reckon I’ll stay in this evening, I’ve got a feeling something’s going to happen!”
Like we’re supposed to believe that the lovely Debbie is 1) really into computers, 2) owns a modem, and 3) is actually a woman.
In the early 90s, bulletin board systems like this didn’t have women on them, and anyone who appeared to be a woman was almost always a man. That’s not being sexist – that’s just how it was. Geeky men with thick-rimmed glasses and facial bumfluff, but enough money to dial these incredibly expensive phone numbers for hours at a time.
I wasn’t one of them, of course, because I didn’t wear glasses and had nowhere near enough money. I did, for a time around 1993-4, have a modem for my Spectrum, and again briefly in 1995 had one for my Amiga 500. I can’t recall what I dialled for the former, and the latter was a board called Digisomething. Digidrive? I’ve no idea.
Digiwhatever gave me my first personal email address (I’d used email at school years before, but never had my own account), which I never used. It had the usual (for the time) set of message boards, chatrooms (which were always empty), text files full of jokes and hacking guides and stuff, download areas (to get Amiga public domain software, and no doubt some viruses as a bonus), and games you could play within the system. One was Hunt the Wumpus; another was a sort of MUD you could create rooms in as you went. It was incredible at the time. To get an idea of the sort of stuff that got put on these things, some of it is archived here.
A year or so later I had free internet access at university and joined a telnet BBS called Room 101. It was “run” by some geeks at the University of Ripon (which isn’t where I went), but in reality was probably loaded onto a computer sat in a cupboard somewhere and the staff didn’t know about it. It was almost certainly accidentally turned off in around 2001 and since nobody knew it was even there, was never turned back on.
Like Digiwhojilly, it had games and chat rooms and things, but because it was more widely used, mainly by students across the country, it was much busier. It was fun while it lasted, and didn’t come with a massive phone bill.
Amazingly, there was one actual real actual genuine girl on there, called Justine. I know she was a girl because I rang her twice while drunk. The second time I called she was out and I spoke to her mum at 2am. It was awkward. “So how do you know Justine?” “We spoke on a computer, er, thingie.” “And why are you calling at 2am?” “I’m drunk”.
That’s how Debbie met a stranger. No, I’m not Debbie – Justine is.
More amazingly, considering the internet is a global thing, and Room 101 was at the very least UK-wide (although I do remember at least one American on there), I was in a computer room at three in the morning or some other ungodly hour chatting on Room 101. Slowly it dawned on me that the guy I was chatting to was sat behind me. Then I found out he lived in the same halls of residence as me in an adjacent corridor. What are the chances?
But really, Debbie in that advert? Advertising in nerdy computer magazines that bored women go on some online chatroom? In 1993? No way.
ADDENDUM: It was DigiBank! Also known as Digital Databank, and closed in 1998. Turns out it was a mainly Acorn BBS, but they definitely did Amiga stuff.
Upon replacing two aging fileservers with two shiny new ones, this time running Windows Server 2012 R2 rather than Server 2008, I came across a strange issue regarding File Server Resource Manager notification emails (you know, like “your quota is full” warnings and so on). As in, it can’t send them, instead giving this message when you attempt to generate a test email:
Failed to send the test e-mail due to the following error: Cannot send the email due to an error. Check the application event log for more information.
Sure enough, when I checked the application event log on the server running FSRM, I found this entry for SRMSVC with event ID 12306:
A File Server Resource Manager Service email action could not be run.
Error: IFsrmEmailExternal::SendMail, 0x8004531c, Mailbox unavailable. The server response was: 5.7.1
Client does not have permissions to send as this sender
What’s odd about this, is that it suggests that the email address I’d given FSRM as a “send-as” email address, can’t send emails via our Exchange 2010 server. On our old file servers, it worked just fine with exactly the same settings. It seems Server 2012 R2 has a new requirement!
To fix it, I created a new “Equipment Mailbox” on the Exchange Server (using the Exchange Management Console) called “FSRM-ServerName”, and gave it the email address “FSRM-ServerName@domain.tld”. Then, using the Exchange Management Shell, I gave the server the rights to use that mailbox:
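The grant itself is a Send-As permission for the server’s computer account on the new mailbox – something along these lines, where the mailbox name and account are placeholders for your own, and the exact syntax is worth checking against the Exchange 2010 documentation:

```powershell
# Grant the file server's computer account Send-As rights on the FSRM mailbox.
# "FSRM-ServerName" and "DOMAIN\SERVERNAME$" are placeholders for your own names.
Add-ADPermission -Identity "FSRM-ServerName" -User "DOMAIN\SERVERNAME$" -ExtendedRights "Send-As"
```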
It now all works as expected (and of course I repeated the task for the other server too), but it’s still a mystery to me why this is unnecessary for File Server Resource Manager on Server 2008 but required on 2012 R2, when surely the issue lies at the Exchange Server end of things?