Putting AWS To Work On My Hobby
I'm sure I've never mentioned it, but I'm a licensed amateur radio operator, commonly called a ham. KC1KCE is my call sign. OK, those who know me are currently mumbling to themselves, "He won't shut up about it." I can hear you. I promise, I'll keep the radio discussions to a minimum in this article. Mainly, I want to talk about three topics: AWS as a development platform; PostgreSQL as a data storage engine, or, more specifically, AWS Aurora hosting PostgreSQL; and the postGIS extension to get geographical information into and out of my database. I'm targeting these topics for a few reasons. One, I want to show how easy it is to get things done within AWS, all kinds of things. Two, one of the true powers of PostgreSQL is its extensions, and postGIS is a showpiece of an extension. My storage, data manipulation, and reports are barely going to scratch the surface of all that you can do with postGIS, but the fact that you can run this on AWS Aurora, well, that's a win we should talk about. Finally, heck, I just wanted an excuse to play with radios at work, and I've finally found one (sorry Kirsty, but I'm really working when I'm fiddling with that radio at my desk, honest).
To do all this, I combined several different tools, which I'll go over. Also, I want to be 100% up front: I used Claude to assist me while building all this stuff out. I know Python, a little, but my skills there need a lot of work. By using Claude, I got over a lot of hurdles. I also let Claude build some, but not all, of the database and database code, just to speed things up. Some of its database suggestions were wrong. Some were poor. But quite a few made it into the project. Everything is visible here on GitHub.
I promised a minimum of radio discussion, but I do need to talk about the data I'm manipulating: what it is, and why we might care about this kind of data.
Automatic Packet Reporting System (APRS)
One of the things I enjoy about the ham hobby is the wide variety of things you can do. Sure, I can sit on a park bench in Oklahoma with a wire tossed up in a tree and talk to the Azores (true story), but another thing I like to do is hike. Sometimes around my property, sometimes other places. One of the things I take with me on hikes is my radio (a Bridgecom Systems Maverick) because it can transmit APRS packets over the airwaves. What's APRS sending, I can hear you say? Well, let me tell you. It can send a bunch of information (there are even bots connected to it that can send more), but the main thing we're going to worry about here is geolocation information, because my radio has a Global Positioning System (GPS) receiver built into it.
Here's how it works. My radio receives timing signals from the GPS satellites. That allows it to determine my latitude, longitude, altitude, speed, and some other data. On an interval, it transmits this information set, blind, into the ether, on a specified frequency (144.39 MHz in the USA, 144.8 MHz in most of Europe, 145.825 MHz for the ISS and some satellites; there are others). In and around many of our communities there are radio receivers tuned to that frequency, but also connected to the internet. These are called iGates. An iGate will receive the signal from my radio and transmit my information, over the internet, to a service located at aprs.fi. Here's an example of my last trip on APRS.fi:
No, that wasn't a hike. I was letting the radio beacon from the car. That's a drive from where I work out to home. While the radio sends a signal every 75 seconds (configurable), I won't always get picked up by an iGate. However, you get the basics. Radio gets location from GPS. Radio sends location over radio waves. Internet. Pretty pictures. Ta-da!
This can be used in a variety of ways, not the least being: when I am hiking, I tell someone that, if I don't come home, they should check aprs.fi for my location. I guarantee, in most circumstances, this'll show something close.
However, I wanted an easier way to look at just my information (and an excuse to move data around that involves radios). I wanted to see stuff like the maximum speed I recorded, the highest altitude, other things along those lines. To do that, I needed to be able to query my data. Now, aprs.fi is cool as can be. One of the things they've done is make an API so you can retrieve your information. However, I needed a way to keep that information around for querying, to show off things like, say, the last 10 trips, as well as the other things I mentioned. Hmmm... how does one keep data around? And, wow, geographical information. Might I even need a Geographical Information System to manage it? Hmmm...
OK, OK. Hush now. Yeah, I decided to get this information into a PostgreSQL database so I could play with it using postGIS. Further, I'm going to show you how I did all of it. Let's get started by talking about AWS Aurora PostgreSQL.
AWS Aurora PostgreSQL and postGIS
"Why PostgreSQL?" has an easy enough answer: Because. OK, mainly because I wanted a chance to showcase a bit of how postGIS works. Why PostgreSQL on AWS Aurora? Well, let's be really honest here: I'm not putting much of a load on this system. In fact, it's exceedingly light. So I don't need some of what AWS Aurora offers, like the ability to scale reads very quickly to improve performance on the fly, fast failovers that take advantage of the underlying storage system, and, heck, a pretty amazing storage system for fast overall performance. This example database needs none of that. My main goal was to demonstrate just how much plain vanilla PostgreSQL can be used to set up a database using one of the most popular extensions, postGIS, all within AWS Aurora PostgreSQL.
Since my needs are minimal, I went with as minimal an instance of AWS Aurora as I possibly could:
With that provisioned, otherwise using all defaults, I just had to create a database, aprs_reporter. For those who don't know, once you have AWS Aurora provisioned, it's basically just a PostgreSQL cluster (not a perfect 1:1 mapping, but basically). Creating a database involved the complex syntax of CREATE DATABASE. Done. No, my worry was: how was I going to get postGIS on there? Could I get postGIS on there? Second question first: AWS has a published list of supported extensions by version of PostgreSQL. Went there, did a search. postGIS is available. We're in business. Now, what crazed syntax would I have to follow to get it installed?
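Quick aside: in case you want that "complex" database creation spelled out, here it is in its entirety. This really is the whole thing; I took the defaults for everything else:

CREATE DATABASE aprs_reporter;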
The standard syntax to install postGIS on my local container would look like this:
CREATE EXTENSION IF NOT EXISTS postgis;
Sure, depending on the install of PostgreSQL, you may need to get the binary there first. However, most installations of Postgres already have it available. I just validated this on my TimeScale (yes, TimeScale) container. Worked great. How about AWS Aurora? Gotta be a few hoops to jump through, right?
SAME. SYNTAX.
Yep. That easy. Not all extensions are installed on a given instance of AWS Aurora PostgreSQL, so you may need to follow these instructions. However, pretty painless.
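If you want to double-check before you run it, the standard PostgreSQL catalog view works on Aurora as well. This is just a quick sanity check, nothing Aurora-specific about it:

-- What postGIS-flavored extensions can this instance install?
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name LIKE 'postgis%';

-- And, after the CREATE EXTENSION, confirm it took
SELECT postgis_full_version();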
OK. With that, on with the project. Again, I'm keeping this project intentionally simple for a bunch of reasons, not the least of which is that I hope to take it on the road and do presentations with it, and simpler is better when presenting. I currently only have three tables in the database. One is for the APRS data, the next is an aggregation storage location, and the last is a place to store region information for reuse (so I don't have to do location lookups on every call to the database):
CREATE TABLE reports.location_info (
    location_info_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,
    loc_class text NOT NULL,
    loc_name text NOT NULL,
    loc_type character(1) NOT NULL,
    loc_time timestamp with time zone NOT NULL,
    lasttime timestamp with time zone NOT NULL,
    lat numeric(9,5) NOT NULL,
    lon numeric(10,5) NOT NULL,
    gis_point public.geography(Point,4326) NOT NULL,
    course numeric(5,2) NOT NULL,
    speed numeric(6,2) NOT NULL,
    altitude numeric(8,2) NOT NULL,
    symbol text NOT NULL,
    srccall text NOT NULL,
    dstcall text NOT NULL,
    loc_comment text,
    loc_path text,
    phg text,
    status text,
    status_lasttime timestamp with time zone,
    distance_from_previous numeric(10,2),
    time_since_previous interval,
    actual_speed_kmh numeric(6,2),
    is_moving boolean
);

ALTER TABLE reports.location_info
    ADD CONSTRAINT location_info_pkey PRIMARY KEY (location_info_id);

CREATE TABLE reports.journey_stats (
    journey_date date NOT NULL,
    total_distance_km numeric(10,2),
    max_speed_kmh numeric(6,2),
    total_moving_time interval,
    unique_locations integer,
    bounding_box public.geography,
    first_transmission timestamp with time zone,
    last_transmission timestamp with time zone
);

ALTER TABLE reports.journey_stats
    ADD CONSTRAINT journey_stats_pkey PRIMARY KEY (journey_date);

CREATE TABLE reports.location_lookup (
    location_id bigint NOT NULL GENERATED ALWAYS AS IDENTITY,
    center_point public.geography(Point,4326) NOT NULL,
    lat numeric(9,5) NOT NULL,
    lon numeric(10,5) NOT NULL,
    city text,
    state text,
    country text NOT NULL DEFAULT 'USA'::text,
    radius_km numeric(6,2) DEFAULT 5.0,
    point_count integer DEFAULT 0,
    first_seen timestamp with time zone DEFAULT now(),
    last_geocoded timestamp with time zone,
    geocode_source text DEFAULT 'manual'::text
);

ALTER TABLE reports.location_lookup
    ADD CONSTRAINT location_lookup_pkey PRIMARY KEY (location_id);
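One addition I'd suggest if your table grows beyond hobby scale (this index isn't in my DDL above, so treat it as a suggestion rather than part of the project): geography columns like gis_point can take a GiST index, which is what lets the spatial functions avoid full scans.

-- Spatial index on the geography column; postGIS functions such as
-- ST_DWithin can use this instead of scanning every row
CREATE INDEX location_info_gis_point_idx
    ON reports.location_info
    USING gist (gis_point);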
Could I normalize this better... OK, at all? Sure. Not the point. The main thing, the one I'm excited about, is just the one, simple column, gis_point, in the reports.location_info table. That's the point of the magic. Because it lets me do things like this:
CREATE VIEW reports.route_track (loc_name, track_date, route_line, point_count, total_km) AS
SELECT loc_name,
       date(lasttime) AS track_date,
       (public.st_makeline((gis_point)::public.geometry ORDER BY lasttime))::public.geography AS route_line,
       count(*) AS point_count,
       sum(distance_from_previous) AS total_km
FROM reports.location_info
WHERE (loc_name = 'KC1KCE-8'::text)
GROUP BY loc_name, (date(lasttime));

ALTER VIEW reports.route_track OWNER TO postgres;
That, you may be able to tell, is more than one route. Oh yeah, and that's part of the web page I built (with Claude).
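Just for a taste of what that view and that one column buy me, here are the kinds of queries I can now run. These are sketches I'm writing for this article rather than code lifted from the project, but st_length is a bog standard postGIS function, and on geography values it returns meters:

-- How long was each of my last 10 routes, and how many points made it into each?
SELECT track_date,
       round((public.st_length(route_line) / 1000.0)::numeric, 2) AS route_km,
       point_count
FROM reports.route_track
ORDER BY track_date DESC
LIMIT 10;

-- Top recorded speed and highest altitude, straight off the raw table
SELECT max(speed) AS max_speed,
       max(altitude) AS max_altitude
FROM reports.location_info
WHERE loc_name = 'KC1KCE-8';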
I could keep going, showing more of the code from PostgreSQL. However, I think the point is made. Those are bog-standard CREATE TABLE statements. The view is just a query. The postGIS functions are available to me. All this in Aurora PostgreSQL. So, let's talk about where Claude did most of the heavy lifting.
AWS Lambda Functions & EventBridge
Lambda functions have been around on AWS for a long time, having launched in 2014. The basic model is simple enough: I don't want a server. I don't even want a full-blown service. Rather, I want an exceedingly small service that just does, maybe, one thing. For my purposes, I needed three things. First, I wanted to poll the aprs.fi API to retrieve my transmitted APRS data. Next, I needed a way to call all the procedures and functions in the database. And finally, I had to do some geocoding, looking up the area where the individual lat/lon points are located, so I can do things like this:
I was able to set that functionality up through three Lambda functions: aprs_fi_poller, aprs_api, and aprs_geocoder. The functions themselves are just Python code. As with anything else in AWS, they can all be deployed and managed through the command line. In fact, I used that quite a bit while working with Claude in Visual Studio. However, I also had to do a lot of the work myself, especially when the AI would get things wrong, occasionally hallucinate, or lose context. I used a mix of the command line and the GUI. The GUI was handy because it gave me a great reference point, letting me see a lot of information about a function, all in one place:
The hardest part in all of this was getting the custom layer added to AWS so the functions could connect to the PostgreSQL database. I'm not sure I ever got a good, straight answer out of the AI as to whether I was doing this the easiest way, and I went to multiple AIs with the question. Ultimately though, I got it to work.
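To give you a feel for the shape of the thing, here's a stripped-down sketch of what a poller like aprs_fi_poller can look like. To be clear, this is my illustration for the article, not the project code: the endpoint and field names follow the public aprs.fi API documentation, psycopg2 is assumed to come from that custom layer, and the environment variable names are placeholders:

import json
import os
import urllib.request

import psycopg2  # supplied by the custom Lambda layer


def lambda_handler(event, context):
    # Ask aprs.fi for my recent position reports
    url = (
        "https://api.aprs.fi/api/get"
        f"?name={os.environ['CALLSIGN']}&what=loc&format=json"
        f"&apikey={os.environ['APRS_FI_KEY']}"
    )
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)

    # Write each report to Aurora; postGIS builds the point from lon/lat.
    # Real code would also skip reports it has already stored.
    conn = psycopg2.connect(
        host=os.environ["DB_HOST"],
        dbname="aprs_reporter",
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
    )
    with conn, conn.cursor() as cur:
        for e in payload.get("entries", []):
            cur.execute(
                """
                INSERT INTO reports.location_info
                    (loc_class, loc_name, loc_type, loc_time, lasttime,
                     lat, lon, gis_point, course, speed, altitude,
                     symbol, srccall, dstcall)
                VALUES (%s, %s, %s, to_timestamp(%s), to_timestamp(%s),
                        %s, %s,
                        public.ST_SetSRID(
                            public.ST_MakePoint(%s, %s), 4326)::public.geography,
                        %s, %s, %s, %s, %s, %s)
                """,
                (
                    e["class"], e["name"], e.get("type", "l"),
                    int(e["time"]), int(e["lasttime"]),
                    e["lat"], e["lng"],
                    e["lng"], e["lat"],  # note: ST_MakePoint takes lon, lat
                    e.get("course", 0), e.get("speed", 0),
                    e.get("altitude", 0), e.get("symbol", ""),
                    e["srccall"], e["dstcall"],
                ),
            )
    conn.close()
    return {"statusCode": 200}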
Security setup was easy enough. Next, I needed a scheduler. I had to be careful here because I absolutely want to play nice with aprs.fi and stay well inside its call limits on the API. Further, when I'm broadcasting APRS, I don't want to hog the radio waves either. I chose to transmit my location every 75 seconds, but I'm only polling aprs.fi every 3 minutes. That means my data doesn't have the granularity of the data in aprs.fi, but I'm OK with that. I wanted to make something different out of the data, like that Locations Visited view up above, or this example of the coverage area of all the points (I may have to rework this one when I start traveling with the radio all over Europe and the US):
In addition to the 3-minute timer for polling the API, I needed to load up the geocoding information. I picked hourly on that because, really, how many locations am I likely to be in over an hour? With that settled, I used EventBridge to schedule the Lambda functions. That part of the whole project was silly simple. For SQL Server people, it was like using SQL Agent. Dumb, straightforward, not even remotely hard to figure out.
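If you'd rather script the schedule than click through the console, the same thing takes a handful of boto3 calls. Again, this is a sketch: the rule name, statement ID, and the region/account in the ARN are placeholders, not values from the project:

import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

# A rule that fires every 3 minutes, matching my aprs.fi polling interval
rule = events.put_rule(
    Name="aprs-fi-poll-3-minutes",
    ScheduleExpression="rate(3 minutes)",
    State="ENABLED",
)

# Point the rule at the poller function
events.put_targets(
    Rule="aprs-fi-poll-3-minutes",
    Targets=[{
        "Id": "aprs-fi-poller",
        "Arn": "arn:aws:lambda:REGION:ACCOUNT:function:aprs_fi_poller",
    }],
)

# Let EventBridge invoke the function
lambda_client.add_permission(
    FunctionName="aprs_fi_poller",
    StatementId="allow-eventbridge-3-minute-poll",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)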
Finally, it's all hosted through a web page. You can check it out here. Let me know if you have any suggestions or improvements.
In the end, it looks like this:
- GPS satellites transmit signals that my radio receives in order to determine its position.
- My radio transmits APRS data.
- The iGate picks up my transmission.
- The iGate sends the data over the internet to aprs.fi.
- I call the aprs.fi API from a Lambda function in AWS.
- The Lambda function writes to an AWS Aurora PostgreSQL database with the postGIS extension.
Conclusion
This project was a blast. I'm so excited that I finally figured out how to cross the streams between playing with data and playing with radios, my two biggest passions. Combine that with my joy of taking a hike (more frequently around my 13 acres than anywhere else, as you may note when you look at the web site), and it was the perfect project. However, the main goals here were to show off postGIS and AWS. I can't emphasize enough just how powerful PostgreSQL is because of its extensions, and postGIS is one of the two extensions I'm most enamored with lately (the other being TimeScale). If you're contemplating a move to PostgreSQL, the technical reason for the move is the extensions. Hopefully this shows how easy they can be to implement and integrate within a database. AWS itself was a joy to work with on this project. I was frankly a little surprised at how easily Aurora let me work using just standard SQL and pl/pgsql. The rest was easy too, even though, freely admitted, I cheated and used Claude to do a bunch of the work. I hope I also demonstrated the ease of getting going in AWS Aurora PostgreSQL.
See, not very much radio at all.