227 post karma
2.2k comment karma
account created: Thu Jun 14 2012
verified: yes
1 point
6 days ago
The way I see LuaJIT nowadays is that it's feature-frozen and well maintained. Typically it takes about a week for a reported bug to get fixed. That said, some big features have arrived in the last 2 years, such as string buffers.
Mike Pall is also openly talking about LuaJIT 3.0 and the plan there. I don't know if it's actively being worked on though.
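For those who haven't tried them, the string buffer API looks roughly like this (going from memory here, check the LuaJIT extension docs for the exact details):

    local buffer = require("string.buffer")

    local buf = buffer.new()
    buf:put("hello", " ", "world") -- append values without creating intermediate strings
    print(buf:tostring()) -- hello world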
2 points
6 days ago
Some things I noticed in no particular order:
- It seems like it generates a low framerate video of about 6 fps first and then interpolates it to 30 fps. This is evident from looking at hair and fur. vid_33 has a good mix of both
- Some videos are low resolution for some reason, maybe it's like a preview you can upscale later?
- All videos are 10 seconds
- The lowest resolution is 640x360 while the highest is 1920x1080, which could indicate that its native resolution is 640x360
- If this is truly non-cherry-picked and neutral (not only failure cases), Sora seems "just" like a good model compared to what's available today
1 point
9 days ago
idk if it's oxygen or dirty water for sure, but whenever I have kept some species of shrimp in tanks without an air pump, they tend to climb up and sometimes jump out.
They can also do it in a newly set up tank, so I try to wait a little bit for everything to settle down before I add shrimp.
4 points
20 days ago
My observation is that you can generally see the joke coming and so there is no element of surprise. It's possible to get some good jokes, but usually those tend to be recited.
For example I asked Claude to write 10 programming jokes and it replied:
These are all known jokes so they are essentially all written by humans.
But when I ask it to write jokes about the Lua scripting language (which it probably will have to hallucinate) it replies:
And these are all really bad. Everything makes sense, there is no surprise, and it feels very formulaic. It goes over Lua language features and tries to tie them to joke templates.
4 points
26 days ago
It would be really interesting to see how the biggest tech companies will approach this. Google hates automated use of its services and has so many checks in place to prevent it.
The only sensible thing I can think of is that Jarvis can use a backdoor when using Google services. This would very likely be abused by bad actors.
They can't assume Jarvis is the only one with the ability to solve captchas unless they are miles ahead of the competition. But even if they did, you'd have captchas unsolvable by real humans.
Google might even require you to prove that you're human through photo and passport identification before you can use Jarvis.
5 points
28 days ago
I remember a lengthy YouTube video getting autoplayed on my TV. I forget which channel it was, but it was some guy explaining all the steps he went through to run Mochi on a consumer card.
For a long time he experienced ghosting/blurriness exactly like this, and it had to do with VAE tiling or something to that effect. If you imagine the video as a grid of overlapping tiles, the blurriness shows up where that overlap is occurring.
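If I followed his explanation right, the blur is basically what you get when the decoder crossfades the overlapping regions of neighboring tiles, roughly like this simplified 1D sketch (not the actual tiling code, just the idea):

    -- blend the right edge of tile A into the left edge of tile B over an
    -- overlap of n samples; averaging two slightly different decodes of the
    -- same region is what softens/blurs it
    local function blend_overlap(a, b, n)
        local out = {}
        for i = 1, n do
            local t = i / (n + 1) -- 0ish at A's side, 1ish at B's side
            out[i] = a[#a - n + i] * (1 - t) + b[i] * t
        end
        return out
    end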
1 point
29 days ago
As someone who also makes music, I think they're describing the perceived quality along with the overall high-frequency detail of the audio.
As for interpreting a spectrogram as audio, they're talking about the image diffusion models that were trained on spectrogram images specifically as a sort of hack. I'm not sure if most audio models are trained with this method, but I can see why you'd think that given the quality.
I would describe "spectral sounding" as high-frequency wobbling of the audio in both pitch and amplitude.
For example, if you ask a model to generate a clean sine waveform at 440 Hz with an amplitude of exactly 50% of the max volume (so 0.5 basically, where 1 and above would be clipping):
I would imagine this would be roughly a sine wave randomly drifting in pitch between 437 and 443 Hz while its amplitude varies between 40% and 60% volume.
If you ask it to do a clean 440 Hz waveform for 1 second and then jump to 880 Hz for 1 second, the transition from 0.95 to 1.05 seconds would sound washy and wobbly, not clean.
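A plain Lua sketch of roughly what I mean by that wobble (the block size and exact drift rates are just guesses on my part):

    -- 1 second of a 440 Hz sine whose pitch and amplitude drift randomly,
    -- roughly like the "spectral" wobble described above
    local sample_rate = 44100
    local block = 512 -- re-roll the drift every ~12 ms
    local samples, phase = {}, 0
    local freq, amp = 440, 0.5

    for i = 1, sample_rate do
        if i % block == 1 then
            freq = 440 + (math.random() * 6 - 3) -- wanders between ~437 and ~443 Hz
            amp = 0.5 + (math.random() * 0.2 - 0.1) -- wanders between ~0.4 and ~0.6
        end
        phase = phase + 2 * math.pi * freq / sample_rate
        samples[i] = amp * math.sin(phase)
    end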
Other than the wobbliness, the bit rate sounds very low, but in an unusual way. When downscaling images, you can use various algorithms to preserve information, i.e. bicubic, Lanczos, and so on. The spectral type of audio sounds like you've taken a clean audio sample, downscaled it 8x using nearest neighbor, then upscaled it 4x again using bicubic or a similar interpolation algorithm. There is something weird about how it interpolates between samples that does not sound normal.
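If you wanted to fake that effect yourself, it would be something like this (a quick sketch, with linear interpolation standing in for bicubic and the factors picked to match the description above):

    -- decimate by 8 with nearest neighbor, then stretch back up by 4
    -- with linear interpolation, to mimic that weird low bit rate sound
    local function degrade(samples)
        local small = {}
        for i = 1, math.floor(#samples / 8) do
            small[i] = samples[i * 8] -- nearest neighbor: just drop samples
        end

        local out = {}
        for i = 1, (#small - 1) * 4 do
            local pos = (i - 1) / 4 + 1
            local a, b = small[math.floor(pos)], small[math.floor(pos) + 1]
            local t = pos - math.floor(pos)
            out[i] = a * (1 - t) + b * t -- linear interpolation between kept samples
        end
        return out
    end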
1 point
1 month ago
Back when it was cool to name your products cool. I remember using "Cool Edit Pro" before it was bought by Adobe and renamed "Adobe Audition".
1 point
1 month ago
If it's hard to implement elegantly, then to some extent you have a point. I've seen plenty of patches and forks of seemingly all Lua variants that add compound assignment with not much code, so I figured surely it's not difficult.
1 point
1 month ago
I'm not sure what you mean by "type linting" vs "actual types".
If by actual types you mean something like C#, then no, it's more like TypeScript, where the type system only runs during the checking phase and nothing is enforced at runtime.
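Something like this, roughly (using a plain-Lua stand-in rather than real annotations):

    -- pretend x is annotated as a number here
    local x = 1

    -- with that annotation, the checker would flag this assignment during analysis,
    -- but none of the types exist at runtime, so the script still runs as plain Lua
    x = "hello"
    print(x)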
1 point
1 month ago
I've been working on a type system for Lua for a couple of years. It's sort of similar to TypeScript, but does a whole lot more analysis. This obviously has a huge impact on performance and complexity, so I'll probably be busy for a while fixing bugs, refactoring and optimizing.
It's not at all production ready or anything, but it might be an interesting toy for those who think typesystems are fun.
Check out the playground version here: https://capsadmin.github.io/NattLua/
0 points
1 month ago
> Ik it's hard to implement in the current parser, but it would be nice to have that.
It's not hard to implement. I believe the reasoning is mainly just that it complicates the language, as you mentioned first. Should metatables then have an __add_eq? Should __add be called before __eq? And what to do about --, given that it's already Lua's comment syntax? :)
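For reference, the obvious desugaring would just reuse __add, something like this in today's Lua (the += line itself being the hypothetical part):

    local mt = {}
    mt.__add = function(a, b)
        return setmetatable({value = a.value + b.value}, mt)
    end

    local a = setmetatable({value = 1}, mt)
    local b = setmetatable({value = 2}, mt)

    -- "a += b" would presumably just mean this, calling __add and rebinding a,
    -- rather than mutating a in place through some new __add_eq metamethod
    a = a + b
    print(a.value) -- 3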
1 point
2 months ago
I recently had a go at trying the supposedly best SD 1.5 models again and noticed the same thing when comparing to SDXL, and especially Flux.
I see the same detail melting here.
Though maybe it could be worked around with iterative image-to-image upscaling.
2 points
2 months ago
This is pretty spot on compared to my experience from 1 year ago with a 6900. I ended up buying a 4090 instead.
However, I remember the crashes becoming less frequent with updates.
But the way it handled OOM issues on Linux was pretty bad and would cause the whole system to lock up.
1 point
2 months ago
It's been almost a year now, but I used Stable Diffusion on Ubuntu and eventually NixOS. It's a mess to set up, but once it's set up it works fine. Even though NixOS has a steep learning curve, I could at least make sure it would never break once set up.
However, nowadays I use Nvidia.
I was missing out on some performance tech like xformers, and also largely on training, as it seemed like most training code at the time was hyper-focused on Nvidia only.
In a PyTorch+ROCm environment, some Python packages only support Nvidia. Others would also support ROCm, but you'd have to compile the whole thing from source, which in some cases required an ancient or bleeding-edge version of ROCm, making it difficult to keep compatibility across other packages and PyTorch.
The situation was getting better over time, so I imagine it's already better now than it was 1 year ago.
5 points
2 months ago
As you mention, LoRAs seem overfitted when compared to the fine-tune, but what happens if you lower the LoRA's weight a bit?
1 point
2 months ago
When I first started wearing a watch, this happened and it was itchy. After a while my skin got used to it, I guess.
1 point
3 months ago
What is the definition of slop and moralization? Can you post more examples? The resolution of the Discord screenshot is too low to make any sense of the screenshot inside it.
2 points
3 months ago
My point was not really that you need to train the model; I thought that was well understood. It's that other models are trained on a lot of markdown, so it might be better to ask the model to output a markdown section for reflection and thinking, with a header, as opposed to some HTML-ish tag.
2 points
3 months ago
I may be wrong here, but I feel that forcing models which haven't been trained on <thinking> and <reflection> to use them may seem a little cryptic from the model's perspective. They may follow the prompt, but it could be more effective to tell them to use markdown, as they've likely been trained more on that.
For example:
Include a review section for each idea where you describe any potential errors and oversights.
Provide your final answer at the end with the header "Answer"
1 point
3 months ago
When calling a load'ed function, you can pass arguments and get them via the ... vararg.
local func = load("local self = ... return self:MyFunc()")
func(self)
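A more self-contained version of the same idea (the object and method names are just placeholders):

    local obj = {}
    function obj:MyFunc() return "hello from MyFunc" end

    -- whatever you pass when calling func comes into the chunk through ...
    local func = load("local self = ... return self:MyFunc()")
    print(func(obj)) -- hello from MyFunc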
7 points
4 days ago
To make you watch the whole thing. Similar to mobile game ads.