Post by D. Hugh Redelmeier via talk
| Desktop freezing can be an issue if you don't get the ratios in line with
| the hw.
Really? swappiness values (except for 0) should only affect
performance. Not something binary like freezing. Of course really
really really bad performance can appear to be freezing.
Perhaps I should have said stuttering instead.
Evan's question was framed in some general observations, albeit related
to apparent latency issues and some apparently negative effects. I assumed
some bad performance there and also, perhaps incorrectly, assumed an onboard
video card which draws down on system RAM.
Evan's question was simple enough: "I have a desktop that, to me, seems like
it's running to swap just a little too often and coming back to RAM just a
little slower than I'd like. What are the ill effects, on a desktop, of
lowering swappiness down from the default of 60?"
There wasn't much helpful commentary about those perceived issues, so I
answered with one link which I thought to be, if not absolutely true,
perhaps demonstrably so, however anecdotal or subjective.
It's always helpful to know the exact hardware in use; however, vague
specifications in questions do lead to vague speculations, like mine.
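For what it's worth, swappiness is cheap to experiment with and easy to undo,
so Evan could try a lower value without committing to it. A minimal sketch
(the value 10 and the sysctl.d file name are just examples, not
recommendations):

    # current value (Fedora's default is 60)
    cat /proc/sys/vm/swappiness

    # try a lower value until the next reboot
    sudo sysctl -w vm.swappiness=10

    # make it persistent (file name is arbitrary)
    echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/99-swappiness.conf

    # watch the si/so columns to see whether the box is actually swapping
    # (Ctrl-C to stop)
    vmstat 5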
Post by D. Hugh Redelmeier via talk
| I've been considering this a bit for my own desktop. In order to mitigate
| possible write amplification issues on my SSD running RH F27, at this time
| I have 8 GiB RAM with a 4 GiB swap.
Post by D. Hugh Redelmeier via talk
swappiness should not have a direct effect on write amplification.
The obvious way of affecting write amplification is by
overprovisioning (as we've discussed before).
I did provide some not insubstantial unallocated space as you described in
that previous thread: 15 GiB on a 512 GiB SSD. The Fedora automatic installer
only allowed 56 MB on the M.2 install. I scheduled an fstrim for once a week
based on the info you posted.
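For anyone following along, the weekly trim is just the stock util-linux
timer on Fedora (a cron entry would do as well). Something like:

    # enable the weekly timer shipped with util-linux
    sudo systemctl enable --now fstrim.timer

    # confirm when it last ran and when it will run next
    systemctl list-timers fstrim.timer

    # or run a one-off trim on all mounted filesystems that support it
    sudo fstrim -av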
Post by D. Hugh Redelmeier via talk
swappiness could and would change the number of writes. Except in
pathological cases I would not expect swap to challenge the lifetime
of an SSD.
I'm unsure about the term pathological in respect of electro-mechanical
devices. However, it is a very human trait to humanize objects so that the
terms of reference are ... well, more humane. Ships are she, land is
mother/father, etc. Even lower-order mammals are not immune to our liberal
use of metaphor.
As an example, someone might say we took the cat out for a sail on the last
dog day of summer. (That was hopefully yesterday for this year.) If you
don't sail, you don't know a cat is a diminutive of catamaran, a particular
type of pontoon sailboat. The dog days of summer are the days so hot even
the most active dogs just lie around in the heat.
So I think of performance this way. If you drive your car at 65 mph, you
get one set of outcomes: higher gas consumption, higher parts wear and
greater risk of accidents. If you lower your speed to 50 mph, you reduce,
not insignificantly, those apparent adverse effects. I think most people
understand that fairly easily, but how many of us could actually prove that
fact mathematically? Anecdotally, that is an entirely different kettle of
fish.
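Though for swap and SSD writes specifically there are counters one can
actually read, so the question needn't stay anecdotal. A rough sketch,
assuming smartmontools is installed and the SSD shows up as /dev/nvme0
(adjust for your own device):

    # pages swapped out since boot (multiply by page size for bytes)
    grep pswpout /proc/vmstat

    # lifetime writes reported by the drive itself
    sudo smartctl -a /dev/nvme0 | grep -i 'data units written'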
Post by D. Hugh Redelmeier via talk
| Apparently, on more modern systems, if
| you have greater than 2 GiB RAM you are in a better position to play around
| with the checks and balances.
There is no magic number independent of workload.
That is true enough, but people often coin their performance metrics
casually. Simple rule-of-thumb advice is far easier to source than
qualified details with reasonable, rational explanations, like yours.
Post by D. Hugh Redelmeier via talk
I have an inexpensive RHEL 7 OpenVZ instance in the cloud with 256G of
"RAM" that seems to work fine. No desktop or X, of course.
2G seems to have become too low for me to pleasantly web browse on
Fedora 28. It still works but can get sluggish with web pages that I
visit (a few tabs of Ars Technica, for instance).
swappiness is not a check, only a balance (except for 0).
For most ordinary workloads, swappiness should only matter when memory
gets tight. 2G would be a good example.
swapping behaviour generally follows a hockey-stick curve. Anything
you can do to stay left of where it takes off is worthwhile. Anything
moving you further left from that isn't very important.
(Write amplification also involves a (different) hockey stick curve.)
Another interesting thing to play with might be zswap. I've never
tried it but some claim it is quite effective. Essentially, with
zswap: swap has a compressed cache in memory; actual writes may be
avoided. To me this feels like a way of bumping things towards the
left on the hockey stick curve. So it should matter on some systems
and workloads and not on some others.
<https://en.wikipedia.org/wiki/Zswap>
Thanks for that link. I also came across this the other day. Extending your
metaphor, perhaps this is the right-curved hockey stick of swappiness?
https://packages.debian.org/wheezy/dphys-swapfile
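If anyone wants to try zswap, it can be toggled at runtime provided the
kernel was built with it (Fedora's stock kernels appear to be). A minimal
sketch, not a recommendation:

    # is zswap compiled in, and is it currently enabled?
    cat /sys/module/zswap/parameters/enabled

    # turn it on until the next reboot
    echo 1 | sudo tee /sys/module/zswap/parameters/enabled

    # make it persistent by adding zswap.enabled=1 to the kernel command line
    sudo grubby --update-kernel=ALL --args="zswap.enabled=1"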
Post by D. Hugh Redelmeier via talk
| On my M.2 Nvram in the same system running RH F28 where I let the system
| installer choose for me, I am experiencing certain display widget redrawing
| issues in that the widget is blacked out until the mouse hovers over it.
That should have nothing to do with swappiness. Unless there is some
pretty odd bug.
| However at this time and considering all the microcode updates for my MB in
| the last eight months, I'm unsure whether this is connected to the display
| drivers or the swap parameters.
One never knows what it is connected to until one tracks it down. But
swappiness is the last place I'd look. Microcode updates might be
the second last place.
The first place I'd look is the video driver. Those are very
complicated and too often buggy. Consider switching between X and
Wayland to see if the problem follows you.
KDE Plasma seems to run fine on F27, as do MATE and GNOME. I'm not booting
much from F28 on the small stick at this time, too much else on the go. I
confess my original intention was to use the M.2 for its original design
purpose, as a caching drive for a fixed disk running Windows 10. I'm almost
completely unaware of the M$ desktop these days and I thought I'd have a
look. However, Spectre, Meltdown and the drop in prices for SSDs changed
that plan.
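When I do get back to poking at the redraw glitch, it's at least quick to
confirm which session type a login is using, so I can tell whether the
problem follows me from X to Wayland as you suggest:

    # prints "x11" or "wayland" for the current graphical session
    echo $XDG_SESSION_TYPE

    # the same information via logind
    loginctl show-session "$XDG_SESSION_ID" -p Type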
Post by D. Hugh Redelmeier via talk
Of course the zeroth step would be to discover how to create the glitch
reliably.
From my understanding, celestial navigation was somewhat problematic until
Hindu scholars and mathematicians came up with a record of the zero
constant, in order to describe azimuth.
Post by D. Hugh Redelmeier via talk
| I haven't had much time to hack around lately, but as I stabilize this
| build, I definitely don't use hibernate and suspend as they draw heavily on
| swap.
As I understand it (imperfectly) hibernate uses swap (and common sense
says you need enough swap to hold all dirty pages) and suspend uses no
more than the system did before suspension. So only hibernate draws
heavily on swap.
Generally speaking, hibernate isn't done often enough to challenge SSD
lifetimes.
You really need to think quantitatively to understand what matters in
performance. That includes SSD lifetime issues. Hockey stick curves
drive one into non-linear systems, something a little harder to deal
with. "The Tipping Point"
Thanks for all your tips, most helpful and enlightening, as always. Your
descriptions especially help to wade through the somewhat archaic and
obscure technical language, which can be so problematic, especially when you
want stuff to just work as designed, not to mention as advertised.