Nextcloud has a nice home page called the Dashboard, which has calendar items and a to-do list on it. But ever since Calendar app version 4.5, it has been broken for items sourced outside of Nextcloud. In other words, if you create a calendar item on your smartphone and sync it into Nextcloud, you can see the item on the Calendar web page, but it will be missing from the Dashboard home page. The solution is to downgrade the Calendar app to version 4.4.5.
Steps to perform:
In the Nextcloud admin interface, find the Calendar app and disable it.
In the Nextcloud admin interface, select the Disabled apps section, then enable (but do not update) the Calendar 4.4.5 app.
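(If you manage the server from a shell, roughly the same thing can be done with occ. This is only a sketch, assuming Nextcloud lives in /var/www/html/nextcloud and the web server runs as www-data; the 4.4.5 release tarball still has to be unpacked into the apps folder by hand.)

sudo -u www-data php /var/www/html/nextcloud/occ app:disable calendar
# swap apps/calendar for the unpacked Calendar 4.4.5 release here
sudo -u www-data php /var/www/html/nextcloud/occ app:enable calendar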
And now, when you go back to your Dashboard home page, your calendar will have all the items on it.
You do get to apply this fix after every update.
Technically, this post title is somewhat misleading: sync is not broken. What is broken is that items synced in from CalDAV sources apparently carry something that causes the Dashboard page to skip them. It just looks like sync is broken because you knew the items were on your calendar, but when you look at the Dashboard for today, they are missing. I suppose a better title would be “Temporary fix for Nextcloud calendar (some) items missing from Dashboard”.
I have several CDs (Compact Discs, not Certificates of Deposit) of music that I like. When I popped them into my PC, I got several folders of files I could copy from. I chose to copy the .ogg files because I liked the idea of using an encoding format without weird licensing issues.
Apple has foiled that plan. If I try to play a playlist on an Apple device, the .ogg files get skipped because (apparently) Apple doesn’t feel like playing nice with the Open Source community. They may have more money than God, but adding another codec – that doesn’t have license issues – to their devices isn’t something they are going to spend money on.
When I work on-premises in the office, my co-workers are often noisy and annoying. I want to pop in my AirPods and play background music to drown out their inane chatter. I don’t want to carry the music files on my device, but I do have a Nextcloud server at home that can stream the audio from the Music app web page. I can log in on my iPhone and play the playlist.
But because it’s an iPhone, it auto-skips the Ogg Vorbis files. This doesn’t happen when I’m at home playing the same playlists on Linux or Windows.
So now I get to re-copy the files from the physical media to my NAS (network-attached storage), which in this case is a Synology.
First, I get to delete the files with the .ogg file extension. That takes two steps (for example):
exiftool -p '$filename' -if '$album =~ /WOW Worship: Yellow \(disc 1\)/' *.ogg > wow_worship_ogg_file_list

This generates a file, wow_worship_ogg_file_list, which has the matching file names in a list, one per line.
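Then the actual delete, feeding that list back to rm. This is a sketch; eyeball the list first, because rm does not ask twice:

xargs -d '\n' rm -v < wow_worship_ogg_file_list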
Second, after having cleared out the disk space, I can copy from the physical CD to my NAS. That takes a while, and after it is done, the file names aren’t wonderful. “Rename music files to their title” to the rescue.
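(The heart of that rename is something like the loop below. This is only a sketch, not the actual script: it reads the Title tag with exiftool and renames each file to match, refusing to overwrite a name that already exists.)

for f in *.ogg *.mp3; do
  title=$(exiftool -s3 -Title "$f")                    # read just the Title tag value
  [ -n "$title" ] && mv -n -- "$f" "$title.${f##*.}"   # keep the extension; -n refuses to overwrite
done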
Except, of course, for a duplicate file name. I have an MP3 file I bought from Amazon (published by Monstercat) with the same title as one of the files from the WOW Worship CD. I would prefer to rename the Monstercat file, but really, if I’m going to be running the “rename music files to their title” command often, I need to change the Title inside the .mp3 file. If I don’t, the next time I run it, it will attempt to rename the file to a duplicate name that is already in use.
Exiftool apparently won’t write a new Title here; I think it can, depending on the file type. I wonder if the weird license problems of MP3 are at the root of the problem. Whatever: the answer was to add the id3v2 program and use it instead.
id3v2 -t 'Title by Artist' file.mp3
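To double-check that the tag actually changed, id3v2 can read it back:

id3v2 -l file.mp3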
Now the “rename music files to their title” script moves the one file to its new name, and the other file to a simpler name than what came off the CD.
In my last post I whined that it takes a while before OpenSuSE gets the updated Discord app. I checked today, and the update was there. I only have three days to catch up on: nice!
Thank you to Wojciech Kazubski for updating the app in the repository.
I think instead of creating a new post each time Discord breaks (does an update) and again when the fixed version shows up, I’ll just make a list here.
2023-12-27 Discord client update came through, and this time it matches the server version. I was down fifteen days, but I’ve been out of town for six days.
2023-12-19 Discord update came through, but maybe it’s too late? Yep, still down. OpenSuSE updated to version .38-1.1, but the Discord app wants to install version .39.
In my previous post, I explained that every time Discord publishes an app update, I’m locked out* until someone fixes it in the OpenSuSE repositories. Four days ago, it showed up in my list of updates, and hooray! I was back in Discord after eight days of being locked out.
Today, another update is published. Sigh.
*“Locked out” is a poor term: I am voluntarily opting-out. I boycott Flatpak and Snap Apps because I dislike those technologies.
Every time Discord updates their app, I don’t use Discord for many days. This is a bummer because I have a circle of friends on the Internet I’ve known for 20+ years, and Discord is where we have settled (so far). So when Discord does an update, I go through a dry spell of not being in contact with them.
The OpenSuSE folk tell me I should install the Flatpak version of Discord or the Discord Snap app. I’m not a fan of either Flatpak or Snap. Snap fouled up a machine months ago (and it’s still broken to this day) when I installed it. Flatpak seems like a (no better) replacement for my RPM repositories. Worse, it duplicates storage and doesn’t integrate unless I convert everything over; so that just creates more points for stuff to break. But at least the single developer doing the Flatpak doesn’t have to integrate, so that’s not his problem.
I can use the web version of the Discord app. It has keystroke conflicts with the web browser, though, and because I use temporary containers for everything, it treats me like a new user who didn’t see that sticky post from five years ago….
So, I wait until someone asks the Discord people to update the RPM in the OpenSuSE repository. Eventually it happens. Time before last, it was eleven days.
Ooof. This one kicked my ass for a really long time. The question is “How to connect the Home Assistant Media folder to an SMB share?” There’s a wizard, but what to enter for the Remote share entry is murky.
A part of this is pretty obvious, but the other part is not. Of course, I tried the wizard first, but I didn’t enter the Remote share entry correctly. I tried reading the documentation, but it wasn’t much more than “For the Remote share entry, put in the remote share.” Home Assistant would always fail to mount the share, and the error message was (essentially) “It didn’t work”. Sigh.
I had previously created an SMB share on my Synology NAS, and could map to it just fine from my main Linux desktop, from my Nextcloud instance, and from Windows machines I have here on my home network. I knew from my Nextcloud install (adding it to /etc/fstab) that the vers=3.0 option was important.
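(For comparison, the line in /etc/fstab on that box looks something like this; the mount point and credentials path here are stand-ins, not my real ones.)

# /etc/fstab sketch: mount point and credentials path are placeholders
//mysynology.domain.tld/sharename_smb/data  /media/nasfiles  cifs  vers=3.0,credentials=/root/.smbcredentials  0  0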
A search turned up a YouTube video about editing the /config/configuration.yaml file and running a shell command. It mentioned that vers=3.0 was important. Maybe this is what I need? This turned out to be a rabbit hole (but no rodent with a mean streak a mile wide at the end 1).
Since the system-launched shell command wasn’t working, I tried the next logical step: running it from an actual command line. It didn’t work either. I think that is because of Docker, the impermanence of the terminal shell, and the sandboxing for security.
I installed a terminal app in Home Assistant, but whenever I tried the same mount command that worked on Linux, it would fail on Home Assistant with “permission denied”. Not really helpful. In fact, it seems actively unhelpful, because if I read between the lines 2 I see “your password is wrong” – which it wasn’t. “Permission denied” is the error message you get when your password expires and the credentials file has the old password. Of course, I knew my password was correct: but if I were someone brand new to this, I would have been misled by my own thinking.
Here is the mount command that does work in Linux but not in Home Assistant:
mount -t cifs -o vers=3.0,credentials=/config/.smbcredentials //mysynology.domain.tld/sharename_smb/data /media/nasfiles34
The problem that I was running into was that the Home Assistant documentation never tells you what it wants for “Remote share”. The dialog box says “This is the name of the share on your storage server” – but that doesn’t help, because it doesn’t specify what to put in. That’s why I’m writing this post: if you have a mount command that does work elsewhere, the pieces you need are here.
Over on the Synology, it told me the share name was smb://mysynology.domain.tld/sharename_smb/data
That does not work here in Home Assistant.
Here are the settings that do work:
So, from the mount command above, the Server entry is mysynology.domain.tld and the Remote share entry is sharename_smb/data
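Laid out side by side, the split is roughly:

//mysynology.domain.tld/sharename_smb/data
  Server:        mysynology.domain.tld
  Remote share:  sharename_smb/data   (no smb:// prefix, no leading slash)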
Phew. This was a long time in figuring out, as I tried all sorts of stuff for the Remote share entry.
host name and share name changed to protect the innocent. Not that any of this is on the public Internet, but why tempt the random bored teenager? They can be pretty clever and persistent.
Yes, I had to create a directory named config off the root of my Linux box and copy the .smbcredentials file to it, so that the mount command would be an exact replica of what would have gone in the shell command in /config/configuration.yaml.
I set up Nextcloud on a new instance of Debian, and thought I had added all the pieces for memory cache and file cache, and had set up cron to run php -f /var/www/html/nextcloud/cron.php correctly. But in the Administration Overview screen I was still seeing this:
Last background job execution ran 2 hours ago. Something seems wrong.
The database is used for transactional file locking. To enhance performance, please configure memcache, if available.
But I had installed Redis and APCu and configured them … so what was wrong?
I should mention that I’m using php 8.2. Apparently, with that new version of php, the APCu code now needs an additional setting that wasn’t needed before.
Find your way to /etc/php/8.2/mods-available and edit the apcu.ini file. Add this:
apc.enable_cli=1
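A quick way to confirm the CLI actually picked up the setting (it should report On):

php -i | grep apc.enable_cli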
Finally! I have the green check mark: All checks passed.
How to test whether your cron job is going to run correctly:
I had to add the sudo package to Debian, because the basic server build did not come with it. What sudo lets me do is switch user and run the command: first I specify the same user that Apache is going to use, www-data, and then I run the PHP interpreter on the file /var/www/html/nextcloud/cron.php
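Put together, the test command looks like this:

sudo -u www-data php -f /var/www/html/nextcloud/cron.php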
Prior to the change, it errored out with a rather ugly OCP\HintException: [0]: Memcache \OC\Memcache\APCu not available for local cache (Is the matching PHP module installed and enabled?)
Now, after the change, it simply runs without reporting anything (everything ran successfully).
This seems to have solved some of the frustration I’ve had; Audacity is better now. The NextCloud client is up to date too.
Way back when, I had tried Tumbleweed, but at some point I moved away from it. If the reason I moved away reappears, I’ll note the why of it. I’m pretty sure it was because LibreOffice got upgraded and broke KDE.
I recently did an “upgrade” from OpenSuSE Leap 15.3 to 15.4. As expected, it did not go well.
I ended up doing a manual install (as if the disk were new, except for /home), and then re-installing every application I need. Thankfully, there aren’t that many I need.
But I didn’t add any weird repositories. Today I happen to need to use Audacity. Hmmm. The version on this machine is 2.2.2; the current version is 3.3. Well, that would explain why the Noise Gate plugin isn’t present.
I did add some weird repository to get the latest version (there appear to be seven of them). Nope. Doesn’t work.
I happen to be running NextCloud. Every time I start the machine, it warns me that the desktop client is out of date. Okay, I’d like to add a repo, please. Nope. Only manual installs, uncivilized as that practice is, are what is done here.
I suspect that repositories are considered difficult, so the decision was to do away with them over time: let programmers define flatpaks and snaps instead. I kinda hate flatpaks and snaps; but what I’ve got here isn’t working, either.
Another new irritating thing is that I use “focus follows mouse”. Every time I’m on a Windows machine (one day a week), I’m reminded how nice it is to wave the mouse over the window I want to work on, and that’s the window with current focus. Lately, however, this stops working after a while. Time to reboot. What is this, MS Windows?
Did I mention that about four times in the last three weeks (out of multiple times a day), the power down function doesn’t? It appears to go mostly down, but leaves the motherboard running. I’m trying to save electricity here, since rates went way up, and if I’m not using the machine, there is zero good reason to be burning electricity wastefully. Power up takes less than 20 seconds, so why not?
Well, because sometimes the machine doesn’t go fully down. I later want to power it up, but it’s locked up in the mostly-down state. I have to go to the back of the machine and flip the switch on the power supply. That could just be a Linux thing instead of an OpenSuSE thing, though.
I wasn’t fond of the idea of using Snap, but I recognize that might be my dislike of change speaking. I needed to add a domain name to my Let’s Encrypt SSL certificate, and all signs said to install the Snap version of Certbot. Okay, maybe I’m in the wrong and should just get with the program.
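For the record, the dance goes something like this (domain names made up; with --expand you re-list every name, old and new, and certbot updates the existing certificate instead of complaining):

sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --expand -d example.com -d www.example.com -d new.example.com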
And now, since adding snapd to my Ubuntu machine, every time I go to update packages to keep things up-to-date security-wise, I get a kernel upgrade warning about an upgrade that never seems to install. Thank you, Snap folk, for breaking my system. I so very much appreciate adding your stuff and creating trouble in my life. Don’t know where I’d be without you.
All that really happens is that after every update, I get “Pending kernel upgrade” “Newer kernel available”
“The currently running kernel version is 6.1.10-x86_64-linode159 which is not the expected kernel version 5.15.0-73-generic.”
“Restarting the system to load the new kernel will not be handled automatically, so you should consider rebooting.” Thank you. Do you have any more ideas that don’t work? I’ll try those too.
I suspect that because the running kernel is newer, it’s just some entry somewhere that says I’ve got an older version installed. Nothing I easily found told me where to fix that though.
All I’m really doing is complaining that I didn’t have this problem prior to installing snapd to support the Let’s Encrypt certbot.
2024-08-10 – I finally got this fixed a couple of weeks ago. Looks like I did:
dpkg uname -a
grep ^deb /etc/apt/sources.list                 # confirm which apt repositories are enabled
cat /etc/*release                               # confirm the installed release
sudo apt -s install linux-generic-hwe-22.04     # simulate installing the HWE kernel metapackage first
sudo apt install linux-generic-hwe-22.04        # then actually install it
sudo apt -s purge ?config-files                 # simulate purging packages left in the config-files state
uname -a                                        # check which kernel is running
reboot now