Work on S.A.D.I. continues.
Right now, I have the bot hooked into the Star Citizen API (community edition) and have created commands so you can ask S.A.D.I. for ship and star system information.

This is good. However, to be a true corporate bot, S.A.D.I. needs to put her H.R. hat on and do some persistent enumeration.
On Join
When S.A.D.I. is added to a new server (not that I plan on making this a public bot, but best to pretend I will for completeness’ sake), she’ll pull the guild information of her new home. Surprise! Internally, your Discord server is referred to as a “guild”. This contains pretty much everything about the server that you’d want to know, like the ID, the name, all of the channels (text, voice, and groups), all the roles, and all of the users.
Some of this data is sent to a database. First, she’ll check the database for a record with the guild ID. If one exists, she’s been on this server before and is rejoining, so she just updates the “last updated” field with a new date-time stamp; otherwise, she adds the guild ID and name to the database. Then she’ll collect all of the users (just the relevant info like ID, name, the nickname for this guild, and avatar URL) and do the same check-update-add. Then she’ll repeat the process for guild roles, and finally she’ll link the user records to the guild records so we can map users to their roles. Technically this last bit is a fallback and probably not necessary. If a website were to use Discord’s OAuth login abilities, I suspect we’d have access to the logged-in user’s roles and all that, but I haven’t gotten that far and am not sure what kind of info the OAuth system will let us query on a website. In the meantime, we’ll have the data in the database.
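A rough sketch of that check-update-add idea, assuming discord.js and a promise-returning query() helper (more on promises below); the table and column names here are hypothetical:

client.on('guildCreate', function (guild) {
  query('SELECT 1 FROM guilds WHERE guild_id = ?', [guild.id])
    .then(function (rows) {
      if (rows.length > 0) {
        // She's been here before: just refresh the timestamp
        return query('UPDATE guilds SET last_updated = NOW() WHERE guild_id = ?', [guild.id]);
      }
      // New home: record the guild ID and name
      return query('INSERT INTO guilds (guild_id, name, last_updated) VALUES (?, ?, NOW())', [guild.id, guild.name]);
    })
    .catch(console.error);
  // The same pattern repeats for members, roles, and the user-role links
});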
Database
Here’s a bit of a problem: it’s surprisingly difficult to find a place to host a database for this purpose. My bot code is currently written and hosted on repl.it, but they don’t do database hosting. I have looked into AWS and Azure, but both require some Olympian-level technical hoop-jumping just to get a public database set up, and then there’s the cost. For development, that’s not a bill I want to foot right now, or really ever if this takes off. I have found a few “ephemeral database hosts” for free, but “free” comes with a cost: those services will delete your database after a certain period, hence “ephemeral”.
My current solution is to use the MySQL database hosting here at the ol’ blog host. This is not optimal. My training is primarily in SQL Server, and there’s just enough difference between the two platforms to cause me headaches; even so, this is not what my host was meant for. The upside is that I could prototype a site on my host that uses the same database. An alternative would be to host it myself, but then we’re talking about setting up a new machine at home to do it. I’d have total control over it, but for right now the web host DB is better than nothing.
Promises and MySQL.js
From a technical standpoint, there is a library for Node called “mysql.js” which handles connecting to and querying the database. Fine. Dandy. Just what I needed. Unfortunately, it’s asynchronous, which means it fires off the query and keeps right on going, executing the next line of code even if the query hasn’t returned data yet. That makes things really difficult to work with: the only solution out of the box is to execute additional code in the internal function that fires when the query completes, and that’s a load of crap in 2019…even worse for the impending 2020.
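Here’s the shape of that out-of-the-box pattern, a minimal sketch that assumes an already-configured connection object and a made-up table:

connection.query('SELECT name FROM guilds', function (err, results) {
  if (err) throw err;
  // `results` is only safe to use inside this function
});
// Code down here runs immediately, likely before the query has returned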
One method for dealing with async operations is the “callback”: a function that is passed into another function and is called when the receiving function completes its mission.
var _myCallback = function (result) {
  // Use the eventual results in here
};

function asyncFunction(arg1, arg2, callback) {
  // ...async work happens here...
  // When the process completes:
  callback(asyncProcessResults);
}
Personally, I like this because it can lead to an organized codebase, but apparently some people take it to the nth degree, nesting callbacks within callbacks and landing in “callback hell”: a chain of functions that has to be traced, possibly through all corners of the codebase, whenever there’s an error. I can understand that kind of headache; see the sketch after this paragraph.
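Something like this, with made-up function names purely to show the shape of the nesting:

getGuild(guildId, function (guild) {
  getMembers(guild, function (members) {
    getRoles(guild, function (roles) {
      // ...each step buried one level deeper than the last
    });
  });
});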
Another way is using the await system, but that seems a little too bare metal for my needs, so instead I’m looking into the promise. A promise in JavaScript is what it sounds like: a contract stating that after a process completes, something will happen. If so, the promise is kept; if not, the promise is broken.
var _myPromise = new Promise(function (resolve, reject) {
  // processing goes here; call resolve(result) on success or reject(error) on failure
});

_myPromise
  .then(function (rslt) { /* Success! Use the result */ })
  .catch(function (rjct) { /* Oops! Not-success. */ });
This is a hell of a lot cleaner! We execute the promise and let it do its thing. If the process succeeds, we deal with the results in the then. If it fails, or if we trap for an error, we deal with the fallout in catch. It’s tidier than chaining callbacks, though it is a bit more obfuscated as to what’s going on under the hood.
The problem is that the mysql.js library doesn’t implement promises or async-await; the callback pattern shown earlier is all it offers out of the box. In order to use the results in a way that doesn’t involve stacking subsequent queries that need info from previous queries, I’ll have to either import the forked “promise-mysql” library or implement a promise wrapper around the base mysql query mechanism. I found an article on the second option and I think I’ll go with that: promise-mysql relies on yet another library just for creating promises, which Node now handles natively, and that seems like overkill.
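For the record, here’s a minimal sketch of what that wrapper might look like, assuming the standard mysql createConnection/query API; the connection settings, table name, and someGuildId are all placeholders:

var mysql = require('mysql');

var connection = mysql.createConnection({
  host: 'localhost',      // placeholder settings
  user: 'sadi',
  password: 'secret',
  database: 'sadi_db'
});

// Wrap the callback-based query in a promise so results can be chained
function query(sql, params) {
  return new Promise(function (resolve, reject) {
    connection.query(sql, params, function (err, results) {
      if (err) return reject(err);
      resolve(results);
    });
  });
}

// Dependent queries now chain with .then instead of nesting callbacks
query('SELECT name FROM guilds WHERE guild_id = ?', [someGuildId])
  .then(function (rows) { /* use rows, fire the next query */ })
  .catch(function (err) { console.error(err); });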