Friday, October 6, 2023

How Your Correspondent Spent the Year So Far

Alternatively: Why so quiet?

The year began with a hangover. Normally, the holiday period at the day gig is quiet; everyone gears up for the new year by spending what vacation they've squirreled away. This year was different.

I had a couple of potential projects come up in that usually relaxing period. Then, the first few months of the year added to the pile. Every time I looked at my phone, it seemed, another new project had landed on it.

Now, on the one hand, this is great. Job security is a thing, right? On the other hand, you start to look at the calendar, count up the hours and the days and wonder, hey, does anyone understand how much work these will take?

Let's back up. I spend my time at the intersection of science and engineering. I came from multi-discipline and to multi-discipline I go. I'm learning what it means to be a generalist, in many senses: development at several levels has entered the picture.

The research part is supposed to be a given. That's my baseline. It's also, I realize, the time that I need to protect, while still being available for the service aspect of the job.

The first five months of the year, I did my thing on more or less the usual schedule. One week on the road, one or two weeks at home. Since June though, it's been more of one month on the road, one month at home.

I say a month on the road; it's more complex than that. It's much more like being a touring musician: Monday through Friday on the road, weekends at home. Then pile up as much downtime at home as I can get for a month, rinse and repeat.

Some years ago, one of my mentors talked about daily practice. He meant it in quite the same sense as we both understood it as sometime musicians: there is a daily rhythm that needs to be there, of the practice of the work.

Out of rhythm, out of time, and shortly out of sorts on all kinds of axes. Personally, I'd love to be able to say "Hey, I learned that already" and not revisit it given all the other things I could be digging into.

I do myself and others a disservice when I think that way. So, in between and alongside, I've begun digging back into daily practice. I'm behind myself, but I can see a road ahead, or a trail, asking for footprints.

I also need to remind myself not to get a whole list of goals and dreams and things to add to my work. Otherwise I'll just bury myself in all of those pieces which I haven't been able to get to. There's enough of that already, no need to do it to myself.

One of the elements of generalization I'm working on is learning to be a novice again. The point, I remind myself, is not to be the expert. The point is to learn to talk to the experts, understand and collate and merge.

As ever, a work in progress.

Saturday, May 20, 2023

They Might Be Giants

waiting for the show to begin, and marveling at how so many in line for this show look and sound so much like we all did, lo' those many years ago when we first caught TMBG.
it's a good crowd all 'round, sold out and ready to roll.
a few board issues later, a good night was had by all...

Friday, April 28, 2023

A Confusion So Common

Brad DeLong expresses a type of confusion that is so common that it has its own literature. Specifically, he's worried that he doesn't understand what practitioners mean when they write things out using some of the tools of quantum mechanics; in particular, some of the quick and dirty algebraic manipulations that practicing physical scientists throw around when using that most mysterious of objects, the wave function.

It's always a good idea to go back and look at what's going on under the hood. First, remember the first rule: to the best of our understanding, the fundamental particles are all both wave and particle. Photon, electron, all the others, to any degree that we can measure, all are both tiny little particles. And they are waves.

So, anything we do to describe these particles must carry the same fundamental duality. A wave function that describes such an object must carry both particle and wave information, simultaneously, if it is to do its mathematical job. Otherwise, it's not up to the task.

So what then is the mathematical object we write as |A)? |A), our potential wave function, is a complex function. That is, its values are complex numbers. As such, the object (A| is the complex conjugate of |A). If these were real numbers, (A| would play the role of an inverse to |A), in the sense that their product comes out to 1.

Which leads to the next object: (A|A), a single, real number. Often, depending on normalization convention, (A|A) = 1.

If |A) were a relatively simple function, that by itself would be enough. But because of the first rule, it's a little more dramatic than that. (A|A) means then something more complicated than simply multiplying |A) by its complex conjugate. What it means more fully is, multiply |A) by its complex conjugate, then integrate the result. If A is a function of space and time, we integrate over space and time to get 1.

If A is a function of momentum and frequency, then we integrate over momentum and frequency. But the operations involved are the same. Multiply, and then integrate.
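As a quick numerical sketch of that multiply-then-integrate rule (the Gaussian packet and the grid below are illustrative choices of mine, not anything from a particular problem):

```python
import numpy as np

# Discretize space and build a Gaussian wave packet with a phase,
# standing in for the complex-valued function |A>.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
A = np.exp(-x**2 / 2.0) * np.exp(1j * 3.0 * x)

# Normalize so that <A|A> = integral of conj(A) * A dx = 1.
A = A / np.sqrt(np.sum(np.abs(A)**2) * dx)

# "Multiply by the complex conjugate, then integrate."
inner = np.sum(np.conj(A) * A) * dx
print(inner.real)   # ~1.0
```

Swapping the position grid for a momentum grid changes nothing in the bookkeeping: multiply, then integrate.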

Of course, the first rule means that this isn't the end. |A) is also a matrix. And (A| is then the conjugate transpose of |A). In which case, (A|A) means multiply the matrix A by its conjugate transpose, which gives a matrix, and then take the trace of that resulting matrix. The trace is then a single number, usually 1 due to normalization.
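The matrix version of the same bookkeeping can be sketched in a few lines (the random vector here is an arbitrary stand-in for |A), and the size 4 is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random complex column vector standing in for |A>.
A = rng.normal(size=(4, 1)) + 1j * rng.normal(size=(4, 1))
A = A / np.linalg.norm(A)        # normalize

bra_A = A.conj().T               # (A| is the conjugate transpose
outer = A @ bra_A                # multiplying gives a 4x4 matrix
print(np.trace(outer).real)      # the trace comes back as 1
```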

This goes even further. |A) is also a field, and an operator. But that comes later.

First, let's talk about H. H, the Hamiltonian, is, for the particles we know of at least, a special function (operator, matrix, field) of its own. In particular, H|A), which means to take the operator H and act upon the function A, gives the energy of the system as E|A).

More specifically, if (A|H|A) means operate H on A, multiply the result by the complex conjugate of A, and then integrate, then the result is E, the average energy of our particle. Or, in matrix language, multiply the matrix H by the matrix A, multiply the result by the conjugate transpose of A, and then take the trace. The result is E, the energy of the particle. (A|H|A) = E.
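A minimal numerical sketch of (A|H|A) = E, using an arbitrary small Hermitian matrix as a stand-in Hamiltonian (the numbers are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# A random Hermitian matrix standing in for the Hamiltonian H,
# and a normalized state |A>; both are arbitrary stand-ins.
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2.0
A = rng.normal(size=n) + 1j * rng.normal(size=n)
A = A / np.linalg.norm(A)

# <A|H|A>: act with H on A, multiply by the conjugate of A, sum.
E = np.vdot(A, H @ A)            # np.vdot conjugates its first argument
print(E.real)                    # real, because H is Hermitian
```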

H, the Hamiltonian, is the operator which measures the energy of the system. Or, alternatively, if we perform an experiment on a particle and measure its energy in a given experimental setup, then H is the theoretic function that we seek which, when operating on a test function, gives the same E as our experiment did. In which case, we speak of H as defining the system. There are other details about H.

One of them is that H also generates the dynamic information of a system, not just its average energy. That object looks like exp(iHt), where exp is the exponential, i is the imaginary number (i.e. the square root of -1), and t is time. Then exp(iHt)|A) is the dynamic representation of |A); alternatively, exp(iHt) acting on |A) generates |A(t)), the propagation of A into the future (or the past).
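That propagation is easy to sketch numerically, again with an arbitrary Hermitian stand-in for H. (Note: the textbook convention writes the propagator as exp(-iHt/hbar); the sketch below uses that sign, with hbar set to 1.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# Hermitian stand-in for H (arbitrary, for illustration).
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2.0

# Build the propagator exp(-iHt) from H = U diag(w) U-dagger.
t = 0.7
w, U = np.linalg.eigh(H)
prop = U @ np.diag(np.exp(-1j * w * t)) @ U.conj().T

# |A(t)> = exp(-iHt) |A(0)>
A0 = rng.normal(size=n) + 1j * rng.normal(size=n)
A0 = A0 / np.linalg.norm(A0)
At = prop @ A0

print(np.linalg.norm(At))              # still 1: evolution is unitary
print(np.vdot(At, H @ At).real)        # same energy as at t = 0
```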

Either way, the algebra involved always looks like some version of (A|H|A), the multiplication of two matrices, followed by multiplication by the complex conjugate and taking the trace.

Now, let's go back to H|A) = E|A). H is an operator. E is a diagonal matrix of scalar, real numbers.

Or, to put it another, equivalent way, |A) is the matrix which diagonalizes the Hamiltonian. Thus, the wave function is an operator in and of itself. This is where a detailed linear algebra book, one that goes all the way through orthogonality, similarity and unitary transformations, and so on, begins.

This is also where practitioners can get funny looks when people ask "what is the wave function?" In practical terms, the wave function here is "all of space", or more particularly any of a broad class of functions which measure (or span) space in a particular way. This is a particular generalization of the way in which position means "any real number" in the equations of classical physics.

To ask after the "nature" of a wave function is to ask after the "nature" of numbers. They're the same thing, just written and collected in slightly different ways as needed for the use.
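The "matrix which diagonalizes the Hamiltonian" statement can be checked directly; here's a sketch with an arbitrary Hermitian stand-in for H:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Hermitian stand-in for the Hamiltonian (arbitrary numbers).
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2.0

# eigh returns the energies E and the unitary matrix of
# eigenvectors A: exactly "the matrix which diagonalizes H".
E, A = np.linalg.eigh(H)

# A-dagger H A is diagonal, with the energies on the diagonal.
D = A.conj().T @ H @ A
print(np.allclose(D, np.diag(E)))      # True
```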

This property of A has some interesting side effects: A can have a simple, easy to write down structure for small scale systems. But that structure can be drastically different at larger scale. So much so that "two-level model" is either a curse or a blessing depending on area of work. Or the time of day, phase of the moon, color of the wine...

All of this is really back to the first rule. Which is that we have to keep track of both particle and wave nature simultaneously. Specifically, we have to deal with functions like A(r,k). r here is position, k is wave number (momentum with certain conditions). All of the notation is a reminder that we must always be careful about when, and in what order, we do something like B(r,k)A(r,k), a multiplication that could be over r, or k, or both, followed by an integration. Or a sum. If you write it out in detail, with full notation, it's tedious, painful, and you're guaranteed to lose track the farther into the work that you go.

Eventually, if you try and do everything in full detail at every stage of derivation, you are guaranteed to screw it up. So first Heisenberg, then Dirac, came up with different shorthand methods. Which just confuses things, because any of the notations can be written as any of the others. And, more unfortunately, Schrödinger's detailed methods involve so many elements that the shorthand has become the common method of representation, even when its use confuses everyone involved, expert and non-expert alike.

This is the point where, if you've heard of it, the "shut up and calculate" school of thought stops, more or less. And, for all intents and purposes, that's sufficient. Assuming I haven't just made your confusion worse, the thumbnail description above gives the nuts and bolts elements. For many problems, there's not really any need to go any further.

But there are problems for which this explanation isn't enough. Feynman, Dyson, Bohm, all of them useful and, for some very significant problems, essential to go any farther. Who knows yet whether or how Many-Worlds will lead further, but it's one of the current cases where folks have tackled the basics again. There's always something there to think about anew.

And get confused over. Duality all the way up and down.

Wednesday, March 22, 2023

Penrose vs. einstein

A few thoughts on the new toy theoreticians just received...

This one is probably lost already: it's Penrose because named after a person. It's einstein because ein stein, not Einstein. Even odds that the original namers didn't even notice the pun until two or three others read the paper. Some of us aren't allowed to name our discoveries without adult supervision...

The Penrose tilings require multiple shapes. The ein stein requires only one. This is where the magic lives. It's also going to be the "huh? But what about..." moment for a lot of the innocent.

Local repeats here are not periodicities; rather, they are similar to tossing multiple heads in a row with a fair coin. This is an area where visual intuition clashes with the algebra.

So how is this useful? Oh my word. Give us all time; every theorist has a bag full of toy models and questions to sort through at the moment, checking fit. In my old world, glasses, liquids, gases, and plasmas should all be getting checked over like a teenager with a hand-me-down car and a bucket full of paint.

Wednesday, February 15, 2023

Thus Machine Learning

Whence then the new way forward? I'm thinking what we're looking at is similar to when windowing operating systems came in, then the web, search engines, then yes finally Facebook and Twitter. What do all of these little twists and turns have in common?

Whatever else, each of these little steps gave people a way to interact with and through computers that they otherwise didn't have. To use a computer prior to windowing OS's meant a blinking cursor that gave no information whatsoever. You had to ask someone to show you the way. Windows at least had a funny little pointer and some clicky responses. Same thing with the web versus the internet over telnet and BBS's. Yahoo and Google in turn made it possible to find more of these funny little visual objects.

Then Facebook and Twitter made it possible to talk to other people. Forums and blog comments, sure, but just like DOS and VAX and Unix all existed and were perfectly cromulent before Windows and MacOS...

Point being, machine learning systems give folks another route to interact with and via computer. It's already here: Siri, Google Home, they're useful as hell if you play with them. And of course I've hit the limits of what they'll let me get away with; if I find a way to trick the little beast into giving me voice access to its operating system, oh boy are we on our way. But I'll settle for what it has steadily grown more capable of.

The weird part being that, just as with all these other analogous steps, we'll see folks treating them as both bigger and smaller changes than they are. Bigger as in no, Francis, we're not any closer to Skynet today than yesterday; smaller as in discounting the flood of crap that's already being felt at short story markets. I've no doubt at all that there are plenty of quick-built autobooks on Amazon as we speak.

Some of these are better steps than you might think. If nothing else, form letters now might have a little personality. And yes, this is a big help to folks who approach anything longer than a text message with anxiety (and that's far more people than care to admit it). Auto-completing a word-fragment or two at a time is distracting to me, though.

But I recognize well that there are many people for whom writing is a chore, at best. We're already living through a round of the "death of email", an accident of the fact that text messages and tweets require a more terse approach. Yes, it's somewhat of an irony that this will result in more email that doesn't get read or is misunderstood through clicking away halfway through the first paragraph, but what are you gonna do?

Visual artists may yet end up with something similar to Spotify, some sort of clearing house approach with guaranteed pennies per month for access to train the latest and greatest art program. But there'll be pain to get there and no guarantees anyone would take the practical route. Not to mention that there's apparently not even a hint of extending Discord or Getty Images to such a possibility. Just sue and pray.

Of writers I've my doubts that something similar could reach either proposal or acceptance. There's too large a gap between the haves and the have-nots, and a waiting-on-my-lottery-ticket-to-pay-off attitude among most of the have-nots. Point being... nah, there's no point.

Will it stop you from writing? Or making art of any kind? That's the only answer that matters. Twenty years from now the kids will use it to create new forms of art. The current short story editors rejecting machine-written work out of hand are right to do so. Now.

Their successors will need to have different attitudes. Because the future writers certainly will. Can you imagine a radio station refusing to play music that has samples in it? That's where we're headed; I just wonder who's gonna blow up a stack of computers in Comiskey Park?

Actually, let me go ahead and say that point again: Sampling and re-mixing have been part of music for going on 60 years now (and yes it really did start with the Beatles, if only accidentally. The Who and then Pink Floyd circa Syd Barrett did it on purpose). If your playlist includes rap, contemporary music of pretty much all varieties, or electronic music of any era, you'd best be looking in the mirror before you dismiss machine-generated writing or visual art. Because it takes a lotta damn gall to hike up your skirt and start screaming now that they're coming for your art after musicians have been forced to accept the same phenomenon without recourse or acknowledgement from the rest of the art community. But I guess a little bit of hypocrisy goes a long way? Solidarity baby, at least for the write sort?

Of course I'm going to take that pot shot. The class snobbery at the heart of publishing (English language) is as viciously small-minded as it's ever been. Some few niches have been carved out for those who've taken advantage of the e-book opportunity; will they even now draw up the bridge behind them? Historically that's the way to bet. It's time to practice our sneers folks, it's always best to make sure the younger folks have ways to easily memorialize their elders. It's a sign of respect what what?

What about me? My biggest issue right now is that I'm struggling through a year of burnout and now recovery. The rise of machine learning is interesting for a variety of reasons; it means as little to why I put my fingers to the keyboard as programmed music does to what I do with the guitars and other instruments sitting around the house, i.e. not a whole hell of a lot. Excepting inspiration of course, but that's a story of a different horse.

Sunday, January 15, 2023

Ok, Machine Learning

You are a scholar and a teacher. You're worried about these AI chat systems; you don't necessarily care that your students are using the thing. What you really care about is that if they do ask one of these systems a question, they get the right answer.

And, for your own research, you wonder if you can get a good answer to your own questions. How do you tell if they're any good?

First you go and feed one of these systems your own homework question, right?

Do not try this, at least not first thing out of the gate. You can be fooled by your own head if you try and "grade" the results without knowing what the system is doing.

Instead, try this. Ask it a question that looks and sounds like something Wikipedia can answer, then see if it does two things: do the answers correspond to the Wikipedia page relevant to the question? And, just as importantly, does it use only the answers found in that Wikipedia page?

The first test is of course for accuracy. Note, I don't mean that the answer is quote-for-quote from Wikipedia; in fact, it's better here if it doesn't pull quotes directly. I just mean, do the facts and assertions match up to those of the Wikipedia page?

The second test is for completeness. Extra information here is not extra credit by default, and should be discounted unless you are dealing with a field you know well enough to find that information in a trustworthy, publicly available digital source. Only trustworthy, creditable information that's publicly available and verifiable independent of the chat system should be included.

And yes, you should also try this with "known shitty" internet questions. If you start seeing lunatic fringe answers in the results you know the system in question has not been evaluated completely for Garbage In, Garbage Out. Not all data sets are valid for the purpose presented.

You should also try this with other questions that, though you aren't necessarily expert in them, you can readily track down both the Wikipedia page and the top 10 or 20 field-standard references for. This is a test for breadth of knowledge: has the system been built to fool you in particular?

And then, if you're ready for finding out if the system really knows its stuff, find out if it can do the same thing with a well-known review article in your field or one you're interested in learning...

You are an artist. Really, you're intrigued by whether these systems can work for you. And, deep down maybe you're worried that it's using your own art somehow. How do you know if the system is useful, first? How do you know that it's actually doing something artistically worthwhile, and not just copying in a hidden way?

First thing you do is feed it a prompt for one of your own artworks, right?

Don't do this first. Wait a bit on fishing for your stuff and try something else. Your eyes will play tricks on you.

Instead, try this: ask the system to reproduce your favorite Van Gogh. Or Rembrandt. Or whomever, just make it a public-domain piece that you know well. One that you've studied yourself.

How did it do? Now, find out if it can do Jackson Pollock, or Andy Warhol. And yes, I'm serious: if it has Jackson's or Andy's work in its dataset, it should be able to reliably get to a named artwork. If not?

It's restricted in some way from reproducing that newer work. This can be good or bad depending on your view on copyright, but know that this means that, artistically, there's a hole in its view of the world somewhere. Whether or not it's useful for your purpose I'll leave to your artistic mind.

Depending on how well it did with a newer, name artist, now is also the time to ask it if it's capable of producing one of your works. Then, if you're interested in how well it works under the hood, go on to find out how it combines two well-known works to produce something you haven't seen before. Here's where you get to judge whether or not it can do something useful for you. What would have happened had Annie Leibovitz been able to work with Ansel Adams? How would Picasso have done the Sistine Chapel? What would Van Gogh's Forty Views Of Fuji look like?

You're a pro musician: you're booked. Can you use one of these systems to compose, produce? How do you know they're doing something useful and not just sampling?

First, ask it to reproduce a piece you know, and not one of your own. Bach, the Beatles, listen widely and deeply.

Did it work for all of your tests? Get wild: pick one and ask it to change the key. After that, ask it for a different rhythm.

Note: depending on what the algorithm is doing, these two questions in particular can be either very easy, or very nearly impossible. If they do work, the system is doing it properly (i.e. signal analysis is involved at the important levels). If not, it's sampling in an obscured way, in which case you can ask it for your own works with a completely different purpose in mind.

The point being: an expert system that is only sampling (Type 1) has its uses. However, an expert system that can actually morph something properly (Type 2), like a key change or a samba to four on the floor rhythm change, now that's a different tool entirely. And, fundamentally, there's a very real difference in what's going on under the hood between the two: a sampling machine that reproduces one of your own works is straight up copying.

A music-signal analysis expert system can get to your work through a different route entirely. It sounds weird, but this kind of system may indeed know you well enough to reproduce something you wrote without directly copying.

In fact, this applies to the artist, the musician, and the scholar as well: if you find a system that can quote you, or that can reproduce one of your works, whether it's a Copier (Type 1) or an Analyzer (Type 2) matters. Type 2 systems are the most useful, the most properly constructed, and the most likely to be capable of reproducing your work without directly copying it.

At least in the immediate gold-rush mentality that always accompanies new tech, I suspect we'll see quite a few Type 1, Copier, systems: it's one of the easiest ways to take computational and data-analysis shortcuts that let those in a hurry produce something that can fool people into thinking they're dealing with a Type 2, Analyzer, system. But as with sampling as it already exists, Type 1 systems that can reliably re-word known information very much have uses, if in a quite different manner than Type 2 systems.

Wednesday, January 4, 2023

Alas, Machine Learning

Thoughts and ramblings for my own purposes.

Under the hood, it's both computational and communications bound as ever. In practical terms, this means there's a point coming where mass computational bounds will kick in. What's economically viable to build for computers limits us all, but then that's in effect what ML was built to address, in some ways. Still kicks in, just at a new level.

I wonder what the effective "word" length here is, or will be? Think of letters, then words, then sentences as 1 point, 2-point, ... n-point basis functions. Are paragraphs then the n+1 limit? Essay length? Not in the sense of not being able to construct longer systems, more in the sense of repetitiveness, enforced periodicity by basis set limit rather than formal limit. "Perception of the machine" falls here.

Some years back, generic sports articles based simply on the line scores began to be generated this way, similarly with AP style news reports. "So and so won, so and so lost, here's the breakdown" kind of thing. Certain kinds of traffic and weather reports could as well be generated this way. Web pages, summaries.

Where does the error creep in, and how do you work with or around it? Garbage in, garbage out always applies. In a purely numerical context, new algorithms can always be measured. How do you ensure accuracy here? Replicability, too?

In a couple of the major fields involved, when asked long ago I made the comparison to the periodic table: meaning that what was missing was an empirical map. How does X relate to Y? Everything is foggy and dim; is it even possible to lay out a map in such flickering shadow and light? Here then is ML coming in with at least a possible construction.

Which is of course where the formal part began, or one way into it. Here's this arbitrary data set we know nought of. How does it relate to itself? What can we do with this arbitrarily large volume of presumptive knowledge that we don't yet understand?

Suppose you had a library accumulated by a sage since passed on. The sage was mysterious, crusty and cranky, and disinclined to tell anyone of their methods. Now your hands pore over old manuscripts in forgotten tongues, all organized, clearly, but in some fashion our old friend forgot to teach us. What do we do? We don't speak any of these languages, we don't know how our friend did it, what they meant by putting this scroll next to this codex next to this little sheet of paper much scratched and stained.

Let us consult the crystal; can it tell us where and how and why each text fits with another? Can it summarize for us what is contained there, and, better, which questions we can ask of which text? What if we could, then, summon forth both a librarian to organize, to systematize, and a scholar to help us understand what we have? And, perhaps, if we're dreaming, a new sage to add to the collection of knowledge?

This last is, formally, where we break down. "Artificial Intelligence" was/is market speak. Machine learning is what the experts preferred, though whether the distinction continues to be respected with mass adoption, I dunno. That aside, the difference is that the first two questions relate to transformations contained within a data set.

The third relates both to generalization between data sets, and to generalization beyond data sets. Crudely, interpolation versus extrapolation, though just like diagonalization versus singular value decomposition the equivalence is there. Still and all... asking for something new becomes the frontier.

Just like Wikipedia, scholarly communities will be obligated to query, refine, and strengthen a given instance, out of self-defense. You'll need to make sure that if such a thing is out there it's giving correct answers. This took a long time to even begin happening with Wikipedia, and it's only done now in narrow instances. Professional obligations will expand; disciplines unused to programming should now understand that they'll need to require it.

Just like every other stage of computational development: does the computer do what I need done? Calculator, spreadsheet, web, can I get the answer I need? Can I trust the answer? It's a tool, how do I use it?

Listen: transformative work is transformative. That this allows automated transformation is irrelevant. The copyright office recognizes this; it also recognizes that the person using the computer to transform my work into something else, no matter what work they've put into the computer, isn't creating in the same way they would have if they had written it themselves. Thus, at present, ML-generated works are not copyrightable.

This has many implications.

First, the cheap copy, where someone takes one of my works, changes just enough of it to fly under the radar at Amazon or wherever, and tries to cash in, becomes untenable in the long run. Why would you need to do that if you can just ask an instance to generate a new work? Even if it's incorporating my work into the melange, so what; that's what would happen anyway, just in bits on a computer rather than the memories of the next generation of artists.

Second, it means there's going to be an almighty fight when the media conglomerates realize what uncopyrightable means in this context. Right now, the media conglomerates appear to recognize that their catalogue has significant value in the brand new future.

And it does. WarnerBros or Disney or whoever appear to sit on the gold mine for training the next generation of ML machines to spit out branded media.

Sounds great. Each house will be able to perpetuate their secret sauce, down to the actors and voices and music and images... too bad for them it's not creation in the artistic sense, and thus, for now, uncopyrightable. Neither is it something they can prevent others from doing. At least not if they actually want someone to view their product in the first place. If you can use today's actors in perpetuity, so can anyone else, sayeth the copyright office.

Which of course means that the media conglomerates are going to raise high holy hell when they figure it out. Gods preserve us. You thought they bowed when Mickey was threatened, look out.

For movies and music, assuming that no one manages to completely screw up all of copyright law by doing something "novel", I suspect the fine line that makes this work economically for the conglomerates is finding someone who can use the ML systems to generate as a part of something larger. In other words, ML systems as an element of a broader, complete artistic creation process. Like sampling only with broader extent than audio.

At the same time, there will now also be video and written-story equivalents of Muzak, generated for airplane seatbacks or waiting rooms or whatever.

So, video and audio; Dylan and Simon and all the rest selling their catalogues, Cameron and Avatar 2, the last great cash grabs available before the previous financial landscape changes irrevocably.

What then of text? I'll use Stephen King as an example here, not because I know anything of what he or his heirs are planning, but because he's one of the primary household names in the written word.

Suppose that someone involved recognizes that King's life's work represents not just a present value, but a future value: in an ML world, all of King's works become the basis for future works, long after the author has left us.

If the copyright office says, great, fine, but it's still not copyrightable, is this life's work valuable in the instance of generating ML work in the future?

It is if you've heirs then capable of their own transformations and creative contributions to the eventual new work. Or, failing that, well able to hire it done. If we accept that conglomerates will find producers and directors who can successfully generate "based on" work to be monetized, then so too can estates find a combination of writers to generate "based on" novels and stories.

Only now, without even the need to go digging for half-finished trunk books, or outlines, or notes, or all the other ways they've done it in the past. The computer can generate that outline to order. And the estate can commission it, or ask a son or daughter or...

So, then, thus: if there are now multiple generations of writers who "grew up" as Star Wars or Star Trek or "insert media here" writers for hire, the future will hold estate-trademarked Stephen King media writers and Dean Koontz media writers. (Remember: you can't stop someone else from using already-published works to do their own transformations, but that is exactly what trademarks help with, if used properly.) Think what Brandon Sanderson did with the Wheel of Time, but now perpetually and at much larger scale. No longer half a dozen books at best but, like Tom Clancy's estate, over and over again as needed. At least for the 70 years after the original author passes.

And this applies not just to someone of the stature of King or Clancy or Koontz. Imagine what will happen with the Song of Ice and Fire. Or the Name of the Wind. Or even your own works, you little writer you. Maybe there's room here, not just for your heirs to keep a little bit of money coming their way, but even to extend it a little. We can all build a little business for our kids to work in, even if how they do it doesn't quite resemble the way we did it.

So: there are creative ways that ML will be used to jump the uncopyrightable hurdle. Book it, it's already happening. And thus, the financial landscape will change, not burn down.

This provides opportunity. Protection, in that the silly cheap-copy bullshit will likely fall away as unnecessary. And yeah, they'll be using your work, but transformative is transformative; you were never protected from that. It's actually better to have your work be part of a much broader library that new work is built from. Then it's part of a stew, not a sushi bar. And the sort of Tom Clancy/Frank Herbert perpetual zombiehood now becomes a tool that any reasonably savvy artist can use for their heirs and assigns.

It's kind of a big deal, ain't it? And in a good way if you're ready for it.

The doom and gloomers here are missing the forest for the trees, especially on one big thing: there's always someone better than you at what you do. So what? That great orators exist doesn't stop me from speaking to those I must needs talk to. That Andrés Segovia played and was recorded stops me not at all from picking up my guitar. If I need to sketch, I don't let all the much better artists and draftsfolk out there prevent me from doing up my little cartoons.

Art is communication. If you are not simply to be a consumer of art, you will have your place to go to when you need it. You have a way to let your voice ring out. No one can prevent it.

And, perhaps, just maybe, and with care, the computer will show you some more new options for how to pass your voice on to others. Find that inner 15 year old that doesn't give a shit what their parents say, doesn't look for a moment to whether it's worth anything, doesn't know or care who's done it before, damnit they're gonna make their own art come hell or high water.

That voice? That hand? That story, that song? It's always yours. It's always you. Embrace it no matter what. Let the worry warts go bother someone else.

Sunday, January 1, 2023

Just Gotta Love The Great Computer Randomizations


Ah, that wonderful feeling: some update blobbed a configuration file somewhere. One that I haven't touched in (scanning file dates) six years now. One that I set a customization flag in by hand the last time it drove me crazy that I couldn't get a GUI widget to give me the environment I prefer, damnit, not the as-shipped one.

It's really amazing how fast you get out of the habit of looking under the hood when you don't have a daily. I say fast; six years isn't fast. Well, it used to not be, but that's one of those things you can understand intellectually, not at the gut level. Not until you look up and see the dust on the proverbial bookshelf, anyway.

One of this bit twirler's bad habits is having ingrained a handful of key-bindings into the subconscious long ago, and then never revisiting the muscle memory because it was faster at any given moment to just go with the habits ingrained. It's something like having learned scales with a given rhythmic pattern and then never having gone back and re-learned them with another. I get locked into something that isn't necessarily the best use of keystrokes, but isn't sufficiently troublesome to make me take the time to code a different way.

I like to think I should have done something like that for book formatting, for instance. I could and should sit down and write out the various steps, script them, and automate the process, at least for the steps where automation is the better choice. I just haven't yet. I'll get to it... eventually. There's always something else, though, you see.

And look at me. I did actually want to talk about Machine Learning today. I blame Noah Smith; he put up a Substack post that set out some things I like, and a couple I don't, about where Machine Learning is at just this moment. And then here I am, first of the year, getting caught up arguing with my computer.

Maybe that's it. Maybe the computer's trying to keep me away from discussing the topic. Hmm, I'll have to meditate on that.

If so, I'd like to think that it's my computer telling me that, if I am going to be spending time at the keyboard, wouldn't I rather be fictioning?

Yes, yes I would. We are our bits then, aren't we? Inside and outside the brain case. Kind of cool, ain't it? And here we thought we'd all need to directly use our brain stem for signal transport. Understanding really does only come in small chunks.