The index.html template is the default home page and had its own styling. Based
on feedback from Nic regarding the layout of the page-entry HTML templates,
I've added the same HTML tags and CSS classes to the index.html template. This
is me getting ahead of a change she is going to ask for in the future. It is a
minor change, so reverting it won't be a problem.
The original 404 page was a self-contained HTML template, which meant it didn't
use the render function like the rest of the routes in the web package. I
changed the merge-pathnames call to render because I wanted the 404 page to use
the Djula templating engine. By doing this, the 404 page can use the main.css
file and the insert-snippet Lisp code block.
The original 404 page was the default one generated by the Caveman2 project
generator. This change brings the template more in line with the
ritherdon-archive design language.
The page.html template's text was filling the entire width of the browser
window, regardless of the window's size. Nic wanted the page entries to match
the max-width used by the archive-entry template. This change wraps the
content-area of the page.html template in the same CSS classes and HTML tags
used in the archive-entry.html template.
It looks like the Caveman2 project generator has a typo regarding the
robots.txt file in the static-files path list (in the app.lisp file). The list
has 'robot' instead of 'robots', which means search-engine crawlers can't reach
the robots.txt file because the site returns a 404. This change fixes that.
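For reference, the fix is a one-character change to the static-file matcher in app.lisp. A sketch of the corrected form -- the surrounding regex is based on the default Caveman2 skeleton, so a generated project may differ slightly:

```lisp
;; app.lisp -- static-file matcher from the Caveman2 skeleton.
;; The skeleton ships with "/robot\\.txt$"; adding the missing 's'
;; lets crawlers actually fetch /robots.txt.
(:static
 :path (lambda (path)
         (if (ppcre:scan "^(?:/images/|/css/|/js/|/robots\\.txt$|/favicon\\.ico$)" path)
             path
             nil))
 :root *static-directory*)
```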
This is a hard-coded change which addresses the bug (#1) pointing to the wrong
URL for the Meilisearch server when running in production, but it will need
further refactoring. I've already created an issue (#2) in the issue tracker in
preparation for that work.
I fixed typos and added placeholders (e.g. '<INSERT USERNAME HERE>') in places
where you need to add your own data for your server/system.
This is tag-along code from porting the storage package over from another
project. It has never got in the way or caused any errors, hence not dealing
with it until now. I've commented it out with the intention of deleting it if
no use for it develops as the site gets closer to going into production.
The first part is just a minor change to get Emacs to indent the defpackage
forms in an orderly fashion.
The second part refers to the 'main' function. I've changed the server it
specifies from Hunchentoot to Woo. I've been using Woo throughout development,
so I'm more confident with the system using it when it goes into production.
The 'main' function is used instead of 'start' when running the website as a
systemd service on the production server.
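As a sketch, the relevant part of such a 'main' function is just the :server argument passed to Clack. The clackup options used here are the standard Clack API; the variable name and port are assumptions borrowed from the Caveman2 skeleton, not necessarily this project's real values:

```lisp
;; Minimal sketch of a `main' entry point for a systemd-supervised
;; deployment, assuming *appfile-path* holds the app file as in the
;; Caveman2 skeleton.
(defun main ()
  (clack:clackup *appfile-path*
                 :server :woo        ; was :hunchentoot
                 :port 5000
                 :use-thread nil))   ; block in the foreground so
                                     ; systemd can supervise the process
```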
I used Caveman2's project maker and it adds packages, systems, exports, etc.
with strings. I changed how they are referenced here by replacing the string
quotes with '#:'. It annoys me how Emacs indents/aligns the system and package
forms in a wonky way when you don't use '#:'. I finally got fed up with it and
changed it. Overall, this is a minor change.
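For illustration, the change looks like this -- the package and export names below are made up, not the project's real ones:

```lisp
;; Before: string designators, as generated by the project maker.
(defpackage "RITHERDON-ARCHIVE.EXAMPLE"
  (:use "CL")
  (:export "SOME-FUNCTION"))

;; After: uninterned-symbol designators, which Emacs indents more
;; predictably and which sidestep case-sensitivity issues.
(defpackage #:ritherdon-archive.example
  (:use #:cl)
  (:export #:some-function))
```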
The <a> tag would stretch across the top of the page, which made it hard to
click in an 'empty' space on the page. It was also confusing when the page
jumped to the home page because you clicked without paying attention.
The code now establishes the 'server' variable in the various Djula HTML
templates (grabbed from the database), so the code which generated the old
'server' variable and its data has been commented out or deleted.
This variable contains the URL for the Meilisearch instance this site calls out
to. It is grabbed from the site's database and passed to the Djula templates
providing Meilisearch-based features. The 'server' variable is called here so
it's easy for the developer to see how and when it's called.
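The pattern described above can be sketched like this; 'get-site-setting' is a hypothetical accessor for the site-settings table, and the route and template names are placeholders rather than the project's real ones:

```lisp
;; Sketch: fetch the Meilisearch URL from the database and hand it to
;; the Djula template as the `server' variable.
(defroute "/search" ()
  (render #P"search.html"
          (list :server (get-site-setting :search-url))))  ; hypothetical
```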
The filter code was moved to its own file and these two templates utilise that
code, so they now must load it themselves. The reason for moving the code out
is to stop the browser's console printing errors -- because the filter JS code
was trying to run on pages which didn't have the correct HTML.
I've left the old code as a comment just in case I need to reverse course. The
function now gets the (Meilisearch) search URL from the database (site-settings
table). This should make it easier to pass the URL around between the back-end
and front-end.
This file contains the code for the filtering feature in pages.html and
archive.html -- a basic search feature which works by filtering the list of
entries on the page.
It was originally in main.js but I moved it here because it was producing
errors on pages which didn't have the filter markup in the HTML template. This
change allows each template to load it only when the template actually uses it.
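The core of such a filter can be sketched as a pure function plus a guard, so the script stays quiet even if it ends up loaded on a page without the filter markup (the element ID below is an assumption, not the project's real one):

```javascript
// Case-insensitive substring filter over a list of entry titles.
function filterEntries(entries, query) {
  const needle = query.trim().toLowerCase();
  if (needle === "") return entries;
  return entries.filter((title) => title.toLowerCase().includes(needle));
}

// Guard against pages without the filter markup -- this is what stops
// the console errors described above. (Browser-only; skipped in Node.)
if (typeof document !== "undefined") {
  const input = document.querySelector("#filter-input"); // assumed ID
  if (input !== null) {
    input.addEventListener("input", () => {
      // Re-render the entry list using filterEntries(...)
    });
  }
}
```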
This feature is for updating the search-url used by this site to call out to
the Meilisearch service (which provides the search database for this website).
It doesn't touch or alter anything on the Meilisearch instance itself.
search-url is part of the site-settings class. It helps tell the system which
URL to use for the Meilisearch instance this website's search features call out
to.
I also replaced some 'logged-in' permission checks with 'administrator' checks
in several defroutes (mostly 'danger-zone' routes).
This is the back-end functionality which allows users to upload Snapshots (as
.zip files) to the /snapshots directory.
The route accepts multi-file uploads and ignores any file which is not a .zip
file or which has the same name as one of the Snapshots already in the
/snapshots directory.
Technically, the user can upload several files at once which are not .zip files
and the alert message will relay a 'success' message, even though nothing was
added to the system. This is because the system is reporting that the upload
went without errors, not how valid each file was. The system doesn't have
anything built in which would allow a multi-faceted alert-message approach to
work.
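The accept/ignore logic can be sketched like this; the helper names (uploaded-files, snapshot-exists-p, save-snapshot) are hypothetical stand-ins for whatever the route's real code does:

```lisp
;; Sketch of the per-file checks described above: each uploaded file is
;; silently skipped unless it is a .zip and its name is not already
;; taken in /snapshots.
(dolist (file (uploaded-files))                    ; hypothetical accessor
  (let ((name (file-name file)))                   ; hypothetical accessor
    (when (and (string-equal (pathname-type (pathname name)) "zip")
               (not (snapshot-exists-p name)))     ; hypothetical check
      (save-snapshot file name))))                 ; hypothetical writer
```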
Another thing to note here is the lack of checks on the contents of a .zip
Snapshot file. Basically, there aren't any. I am unsure how many moving parts
are going to be in these Snapshots in the future, and hard-coding checks for
directories and file names seems a bit premature (maybe unpredictable?). The
HTML template responsible for the front-end of the Snapshot features clearly
states it is a 'danger zone' section of the site. So, there is an expectation
(hopefully) of 'if you don't know what you're doing, then don't touch it'.
Hello, person of the future. I was really wrong with that assumption, wasn't I?
This feature provides the ability to unzip a .zip file (the expected file format
users must upload Snapshots with) and store the contents of the .zip file in the
/snapshots directory.
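In Common Lisp this kind of extraction is commonly done with the zip library (available via Quicklisp); a minimal sketch, assuming each Snapshot gets its own subdirectory under /snapshots -- the function name and path handling are assumptions about the project's layout:

```lisp
;; Unzip an uploaded Snapshot into its own directory under snapshots/,
;; using the `zip' library's UNZIP function.
(defun extract-snapshot (zip-path)
  (let ((target (merge-pathnames
                 (format nil "snapshots/~A/" (pathname-name zip-path)))))
    (ensure-directories-exist target)
    (zip:unzip zip-path target)))
```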
This commit is just the front-end. The back-end, at the time of this commit,
has not been implemented.
The form allows users to upload 'Snapshots' to the website -- with the
intention of then restoring the website from that back-up.
The server needs to be restarted after restoring the website from a Snapshot.
This commit has code which establishes whether the website is running on
localhost and informs the user to restart the server manually (most likely in
SLIME). If the website is running in production under systemd, the service will
need to be restarted that way (the user doesn't have access to SBCL or SLIME in
that context). So, a Bash script will need to be written, and that script will
need to be called using (most likely) utils:run-bash-command.
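Such a script would be small; a sketch, where the service name is a guess and would need replacing with whatever the production unit is actually called:

```shell
#!/bin/bash
# restart-site.sh -- restart the site's systemd service after a restore.
# "ritherdon-archive.service" is a hypothetical unit name.
sudo systemctl restart ritherdon-archive.service
```

From the Lisp side, this would presumably be invoked with something along the lines of (utils:run-bash-command "./restart-site.sh"), per the plan above.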
At the moment, I haven't got far enough into developing this website to have
established a systemd service or to be running it outside my local dev machine.
So, I have left a TODO comment here stating the production side of the defroute
is not implemented yet.
This is part of a multi-part commit to port the string-is-nil-or-empty?
function from the utils package to the validation package. The code has
'utils:string-is-nil-or-empty?' dotted around a lot, so this took a few commits
to port. There is a chance I've missed it in some obscure places, so don't be
surprised if you see a future commit relaying something similar to this one.
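For context, the function itself is small; a plausible definition (the real one may differ in detail), now living in the validation package instead of utils:

```lisp
(defun string-is-nil-or-empty? (value)
  "Return T when VALUE is NIL or a string of length zero."
  (or (null value)
      (zerop (length value))))
```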
This route zips up the specified Snapshot and moves it to the /storage/media
directory. I was originally planning on having the user download the Snapshot
at this point, but I decided to change how this works.
I decided to go with the 'zip up a Snapshot and move it to /storage/media'
approach because I didn't want to re-implement the 'download' functionality
outside of the /storage features. Maintaining two 'download' sections is not
something I want to be doing -- that is what the /storage section is for
(sorting out the downloading). Doing it this way also adds another place for
the site's data to be recovered from.