00:02 sena_kun joined 00:03 Altai-man_ left 00:07 nebuchadnezzar left 00:25 [Coke] left 00:27 [Coke] joined, [Coke] left, [Coke] joined 01:10 MasterDuke left 01:55 Kaiepi left 01:58 Kaiepi joined
timotimo hrm. i'm unboxing the enum values at spesh time now (if they're constant and known, of course), but it looks like that part of the optimization happens too late to propagate values forward towards being used as flags for readuint 01:58
02:01 Altai-man_ joined
timotimo doing it post_inline as well seems fine 02:01
02:03 sena_kun left
timotimo how do i even implement coerce_ui and coerce_iu when all we have for known values is a .i value that's MVMint64, but also, bit-per-bit, as long as we don't do some specific ops on the values? 02:12
04:02 sena_kun joined 04:03 Altai-man_ left 05:10 nebuchadnezzar joined 06:01 Altai-man_ joined 06:03 sena_kun left
nwc10 good *, #moarvm 06:10
06:12 squashable6 left 06:13 squashable6 joined 07:57 zakharyas joined 08:02 sena_kun joined 08:04 Altai-man_ left 08:51 nebuchadnezzar left 09:12 nebuchadnezzar joined
jnthn morning o/ 09:32
tellable6 2020-07-01T08:05:07Z #raku <SmokeMachine> jnthn I'm still just wondering, sorry if interrupting, but would it make sense if there was something like RakuAST::Statement::Conditional that RakuAST::Statement::If and RakuAST::Statement::With would extend?
2020-07-01T08:10:29Z #raku <SmokeMachine> jnthn the same for RakuAST::Statement::Elsif and RakuAST::Statement::Orwith
hey jnthn, you have a message: gist.github.com/b46293967dd7cd9af3...3647684c58
nwc10 \o 09:38
09:47 nebuchadnezzar left 09:53 nebuchadnezzar joined 10:01 Altai-man_ joined 10:04 sena_kun left
jnthn So...where was that ASAN barf that I was going to look at today... 10:08
Ah, paste.scsys.co.uk/591952 10:11
10:12 MasterDuke joined
jnthn And...it doesn't blow up 10:14
Lots of leaks though :)
nwc10 do we "blame" timotimo for fixing something? 10:17
the harness reported it in the list of errors when I ran all the tests today in parallel
but I can't recreate any technicolor barfage when I run it individually today 10:18
or even failures
Geth MoarVM/new-disp: 26142fa6c2 | (Jonathan Worthington)++ | 3 files
Free memory associated with dispatch recordings

jnthn That's nearly all of the leak barf
Woulda been nice to provoke the other failure though 10:32
Unless it really is fixed, but that you saw it in the parallel run this morning makes me suspect that it just learned to play hide and seek better...
Hm, and it's in legacy args processing too 10:33
Though still the output makes no sense to me 10:34
Geth MoarVM/new-disp: 9ffe441e86 | (Jonathan Worthington)++ | src/core/callsite.c
Fix a leak when we drop all callsite args

jnthn And that nails the last new-disp related ASAN leak complaint I've seen
Geth MoarVM/new-disp: 3cb8874d6b | (Jonathan Worthington)++ | src/core/callstack.c
Free dispatch recording on exception too

jnthn Of course, right after saying it was the last one, I spot another :)
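The ASAN leak-hunting above can be reproduced along these lines; a hedged sketch, assuming MoarVM's Configure.pl `--asan` switch and a placeholder test file (verify both against your checkout):

```shell
# Build MoarVM with AddressSanitizer enabled (Configure.pl has an --asan
# switch; --debug keeps the stack traces readable).
perl Configure.pl --debug --asan --prefix=install
make -j4 install

# Leak detection is on by default with ASAN on Linux; halt_on_error=1
# stops at the first report so each leak gets a clean, isolated trace.
# "some-test.moarvm" is a placeholder for whichever test triggers it.
ASAN_OPTIONS=detect_leaks=1:halt_on_error=1 \
    install/bin/moar some-test.moarvm
```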
nwc10 jnthn: no, strictly and to be clear "harness reported test not OK" 11:01
reported test not 100% passes
er, actually Parse errors: No plan found in TAP output
but there were also all those whatever-it-was syntax error thingies that I saw and you could never reproduce 11:02
that made no sense
I no longer see those
jnthn That's something at least 11:10
ooh, lunch time
11:28 zakharyas left 12:10 mst left, mst joined, ChanServ sets mode: +o mst 12:16 bartolin_ left 12:29 sena_kun joined, Altai-man_ left
nwc10 jnthn: when running tests in parallel, the harness reports "Parse errors: No plan found in TAP output" for all of t/02-rakudo/03-corekeys-6d.t t/02-rakudo/03-cmp-ok.t t/04-nativecall/01-argless.t t/07-pod-to-text/02-input-output.t t/07-pod-to-text/01-whitespace.t 12:29
there is no extra output on STDERR
tests fail like this:
t/04-nativecall/01-argless.t .................................... Dubious, test
returned 1 (wstat 256, 0x100)
No subtests run
odd, I seem to be seeing different behaviour between my two terminal windows
so ... what differs in the ENV vars
12:29 bartolin joined
jnthn Mmmm....that was a good bowl of red curry 12:29
nine nine loves red curry 12:32
12:43 MasterDuke left
jnthn I'm lucky to have a place reliably doing a good one about 5 minutes from the office. :) 12:49
moritz moritz is more one for fried noodles or fried rice 12:54
jnthn nwc10: In spectest I get t/spec/S17-promise/start.t doing similar, but it runs fine alone. grmbl. 12:56
nwc10 my hunch is that it is something related to load average
and hence speeds
but does that mean that spesh can complete earler-or-later, and that might then inject changes into some other running code at different points in its execution?
jnthn Potentially, yes
Spesh runs on a background thread, so unless the blocking env var is set, its behavior will vary, and in a threaded program it will still vary even with that, due to the timing differences caused by the threads themselves
nwc10 but I do have this set: MVM_SPESH_BLOCKING=1
so, hmmm
jnthn Will see if I can catch one later. Going to try and get the reliable failures sorted out first.
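The spesh knobs mentioned in this thread can be summarized as a shell sketch; variable names are per MoarVM's environment-variable docs, but verify against your version:

```shell
# Spesh runs on a background thread, so its effects land at varying points
# in the program's execution. These make runs more deterministic:
#
#   MVM_SPESH_BLOCKING=1  - main thread blocks while spesh works
#                           (timing still varies in threaded programs)
#   MVM_SPESH_NODELAY=1   - specialize immediately, skip warm-up thresholds
#   MVM_SPESH_DISABLE=1   - turn spesh off entirely, to rule it out

MVM_SPESH_BLOCKING=1 MVM_SPESH_NODELAY=1 perl6 t/spec/S17-promise/start.t
```

As noted above, even with blocking set, a multi-threaded test can still fail intermittently, since the threads themselves introduce timing differences.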
13:22 zakharyas joined
Geth MoarVM/new-disp: f8cef2ff69 | (Jonathan Worthington)++ | 3 files
Implement captureexistsnamed on MVMCapture

nwc10 oh my, there are a lot of code point names in Unicode 13:32
138008 13:34
oh wait, 138007 13:35
jnthn And it's only growing... :) 13:46
.oO( or that's what the medium predicted... )
Hm, I think I'm missing something in the new dispatcher 13:51
In some cases, we need to do some late-bound work to decide on a candidate, e.g. more than we can ever reasonably guard. 13:53
And it'd be nice to be able to decide on the candidate, and then have it invoked, without having an intermediate thingy on the call stack 13:55
This'd potentially be useful for megamorphic cases too 13:56
Just not quite sure exactly what it'd look like 13:57
14:01 Altai-man_ joined 14:04 sena_kun left
timotimo rr has a "chaos mode" that does something to scheduling that's supposed to provoke trouble much more often 14:21
14:33 brrt joined 14:39 MasterDuke joined
brrt \o 15:25
jnthn o/ 15:27
brrt we're finally having rain again 15:33
timotimo we had cold for two days, now it's heat again, but not as strong 15:38
jnthn 29C outside at the moment. 15:42
Correction: 30C 15:43
[Coke] only 27C here. (but I'm in the US, so the AC has been on for weeks :) 15:44
jnthn Office air conditioning has a hole through the wall to eject air nowadays. Not a perfect setup, but works way better than tossing the pipe out of an ajar door. It's keeping it to...well, the thermostat says 24C, which may be correct, but it feels a little better than that, maybe 'cus the humidity is lower
The current bit of design work is quite headache-inducing even in the cool air... :) 15:45
I've realized that a multi-dispatch that needs late-bound stuff is probably just the same as a megamorphic callsite in that both want to do a little bit of work to get over the megamorphic "hump" 15:46
My figuring is that many places will be megamorphic only along one dimension of the dispatch. 15:47
That is, you may well end up with a site that sees numerous types to dispatch a method on, but once we reach the point of having found one, it'll be a Method object that wants a code ref inside of it unwrapped 15:48
Ditto with later-bound multi-dispatches that depend on values, unpacking, etc. 15:49
I'm still not too happy with the lack of answers for polyvariance too 15:52
OO::Monitors is a classic example; the callsame after the lock acquisition is megamorphic at the location of the callsame, but keyed on the root of the dispatch that we're resuming it's probably monomorphic 15:54
Maybe resumptions thus want their ICs hung off the IC of the root of the dispatch generally. 15:55
And that brings me back to wondering if resumption of dispatch and megamorphic dispatch are both looking for some common mechanism. 15:56
Which I've been going around in circles on for the last hour or so. :)
16:02 sena_kun joined
nine That megamorphism at the callsite but monomorphism when looking at the larger picture is also true for lots of call chains 16:02
But that's probably out of scope if you try to stay sane during this dispatch rework :) 16:04
16:04 Altai-man_ left 16:56 zakharyas left 17:15 nine left, leont_ left, AlexDaniel left, dogbert17 left, japhb left, gugod left, MasterDuke left, camelia left, avar left, nativecallable6 left, linkable6 left, releasable6 left, unicodable6 left, kawaii left, tbrowder left, BinGOs left, mtj_ left, reportable6 left, tellable6 left, bisectable6 left, evalable6 left, committable6 left, statisfiable6 left, Geth left, hoelzro left, Altreus left, moon-child left, jnthn left, nebuchadnezzar left, squashable6 left, greppable6 left, bloatable6 left, shareable6 left, benchable6 left, sourceable6 left, notable6 left, quotable6 left, coverable6 left, lizmat left, raku-bridge left, harrow left, leedo left, TimToady left, Voldenet left, robertle left, xiaomiao left, AlexDaniel` left, vrurg left, nwc10 left, jjatria left, SmokeMachine left, Util left, moritz left, ChanServ left, brrt left, [Coke] left, samcv left, synthmeat left, elcaro left, vesper11 left 17:16 sena_kun left, mst left, Kaiepi left, sivoais left, timotimo left, krunen left, eater left, chansen_ left, rypervenche left, rba left, jpf1 left, bartolin left, klapperl left 17:19 ilogger2 joined 17:29 leont_ joined, AlexDaniel joined, dogbert17 joined, japhb joined, gugod joined 18:02 Altai-man_ joined 18:05 brrt joined 18:15 MasterDuke joined 18:37 MasterDuke left 19:40 brrt left 19:50 AlexDaniel` joined 20:03 sena_kun joined 20:04 Altai-man_ left 22:02 Altai-man_ joined 22:04 sena_kun left 23:11 AlexDani` joined 23:12 AlexDaniel left