Re: re-engineering overloading and rebindable syntax

2019-12-05 Thread MarLinn

Hi,

On 05/12/2019 10.53, Richard Eisenberg wrote:

Con:
  - worse error messages, which would now refer to the desugared code instead 
of the user-written code.


I can think of several major parts that seem useful for solving this. 
(Please excuse the very abstract and imprecise descriptions.)


1. Most "plugins" will probably still need a way to annotate the AST in 
some way to inform their error messages (i.e. a SrcSpan?). These 
annotations would be expand-only for all other plugins, the type 
checker, and all other possible transformations; their only job is to 
inform their originator about what it did in case of an error. Ideally 
this would be done in an extensible way.


2. Some stages might want to create suggested ASTs to help the user 
understand the error, so this option should be part of an error 
structure bubbling back up through the "plugins". But the originators 
of said ASTs should not need to know about annotations other than their 
own, so they cannot be expected to faithfully add them.


3. A "plugin" needs the ability to analyse an erroneous AST inside an 
error structure and discover how it might have been constructed by said 
plugin. Crucially, it might want to analyse the suggested ASTs with 
potentially missing or misleading annotations.
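
To make these three parts a bit more concrete before going on, here is 
a very rough sketch of the shapes I have in mind. All names are 
invented for illustration; this is not a design:

    import Data.Dynamic (Dynamic)

    -- Hypothetical source location, standing in for GHC's real SrcSpan.
    data SrcSpan = SrcSpan FilePath Int Int deriving Show

    -- (1) An annotation, useful only to its originator.
    data Ann = Ann
      { annOrigin  :: String    -- which "plugin" attached this
      , annSpan    :: SrcSpan   -- the user-written code it refers to
      , annPayload :: Dynamic   -- private data, opaque to everyone else
      }

    -- (2)+(3) An error structure bubbling back up, carrying suggested
    -- ASTs that may lack annotations and therefore need re-analysis.
    data DesugarError ast = DesugarError
      { errSpan      :: SrcSpan
      , errMessage   :: String
      , errSuggested :: [ast]   -- possibly unannotated suggestions
      }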


That third ability has me curious… if a plugin could do that well, it 
could also quite possibly re-sugar any random piece of AST. In other 
words there might come a day when GHC can tell a user what their code 
might look like with different language extensions applied. And that's 
just the tip of an iceberg of tooling possibilities.


So this might be a pipe dream for now. But practically speaking, I 
expect most programmers to lean in one of two directions: to rely on 
annotations heavily and ignore any suggested ASTs, or to go all-in on 
the analysing part and annotate as little as possible. So if even one or 
two implementers choose this second route, the pipe dream might become a 
partial reality in the not-so-long term.




Re: Treatment of unknown pragmas

2018-10-18 Thread MarLinn

  
  

  
I think it makes a lot of sense to have a standard way for third-parties
to attach string-y information to Haskell source constructs. While it's
not strictly speaking necessary to standardize the syntax, doing
so minimizes the chance that tools overlap and hopefully reduces
the language ecosystem learning curve.

  
  
This sounds exactly like the existing ANN pragma, which is what I've wanted LiquidHaskell to move towards for a long time. What is wrong with using the ANN pragma?



As far as I understand it, ANNs are not as generally useful as pure 
pragmas. For example, I don't think you can attach an ANN to an 
import.*
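
For reference, the forms an ANN can take today look roughly like this 
(a sketch from memory, so details may be off):

    module Example where

    -- ANN can target the module, a type, or a binding, with any
    -- Data-able expression as payload. Notably, an import is not a
    -- possible target.
    {-# ANN module ("tool: some setting" :: String) #-}

    data T = T
    {-# ANN type T ("a note about T" :: String) #-}

    foo :: Int
    foo = 42
    {-# ANN foo (Just True) #-}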

Speaking of imports: would it be a viable idea to define and import
pragmas on a source level? What I mean is something like this:
Option A:
-- In, say, Language.Lint.Pragma:

{-# PRAGMA LINT  LintSetting #-} -- Tells GHC that the pragma "LINT" exists and optionally adds information for the type checker
{-# PRAGMA HLINT LintSetting #-} -- Define as many as you want per tool
:

-- At usage site:

{-# IMPORT Language.Lint.Pragma #-}  -- Could also be more specific, like "USES_TOOL"
:
{-# LINT defaultLintSetting{ … } #-}
:

Option B:
-- In Language.Lint.Pragma:

{-# DEFINE LINT :: LintSetting #-}   -- Same as above, different choice of syntax
{-# ALIAS HLINT LINT #-} -- As long as I'm inventing syntax I might as well go a bit further
:
-- At usage site:

import Language.Lint.Pragma  -- Pragmas are imported implicitly, just like instances. An import is needed anyway if pragma type ≠ String and/or types are checked
:
{-# LINT defaultLintSetting{ … } #-}
:
Is that too complicated for tool users? Is it easier than 
{-# OPTIONS_GHC -Wno-pragma=HLINT #-}? Would it make preprocessing 
harder because imports would have to be parsed first? Could the 
"import" in option A be implicit if a tool is used via a GHC command 
line? I don't know enough about either the use of tools or the 
implementation of GHC to answer this. But the idea looks like it might 
be a compromise where tool pragmas could be added outside the GHC 
source.
Cheers,
  MarLinn
PS: * Why would I want to annotate imports?
Because many of my experiments aren't bigger than one file. It would be 
nice if I could make them self-contained and "portable" between 
development devices. In other words, the goal is to embed as much cabal 
data as possible into my one source file. I can already add quite a few 
things via pragmas and a shebang, but I haven't found an elegant way to 
add the list of packages the code depends on. What I imagine is that I 
might annotate groups of imports with the package they come from, 
similar to what -XPackageImports allows. The related project in my 
ever-growing backlog is a shim that would extract information like this 
from annotations and pass it on to cabal. Such a tool probably wouldn't 
be widely used, so there should be no need to extend GHC for it. But it 
would be nice to have GHC check the spelling, and possibly the types, 
of such annotations, especially whenever I don't use my as-of-yet 
non-existent tool.
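
For illustration, continuing the invented syntax from above, such 
import annotations might look like this (purely hypothetical):

    {-# IMPORTS_FROM base #-}         -- hypothetical pragma: the imports
    import Data.List (sortOn)         -- below come from package "base"
    import Control.Monad (unless)

    {-# IMPORTS_FROM containers #-}   -- …and these from "containers"
    import qualified Data.Map as Map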
PPS: It feels like pragmas and Template Haskell
might merge into one thing somewhere in the future. Might be worth
contemplating that when designing features.

  



Re: Proposal: Professionalizing GHC Development

2018-04-01 Thread MarLinn

Could you clarify? I see two promising proposals in this:

A) Redefining proof-of-work to mean one has to compile a GHC instead of 
computing some obscure hashes only nerds care about
B) GHC will be compiled via contracts in the blockchain, to make sure 
all mistakes remain attributable


I like both ideas, but maybe you had something different in mind?

Or maybe we can combine both. Nested blockchains. Recursion! I wonder if 
there's a lens for that already…



On 2018-04-01 07:33, David Kraeutmann wrote:

Leveraging the blockchain to compile GHC is a great idea!

Unfortunately the proof-of-work algorithm is still just wasted cycles.

On Sun, 1 Apr 2018, 07:28, wrote:


Overall this is a great proposal; glad we're finally modernizing!
Still, it's got a pretty steep price tag - maybe we can offset
costs with an I.C.O.? ("GHC Coin"?)


> On 1 Apr 2018, at 00:56, Gershom B wrote:
>
> Fellow Haskellers,
>
> Recently there has been much work into creating a better and more
> professional GHC development process, including in the form of DevOps
> infrastructure, scheduled releases and governance, etc. But much
> remains to be done. There continues to be concern about the lack of
> use of industry-standard tools. For example, GHC development is tied
> to Phabricator, which is a custom product originally developed for
> in-house use by an obscure startup. GHC development is documented on
> a wiki still -- ancient technology, not appropriate for 2018. Wiki
> syntax for documentation needs to be replaced by the only modern
> standard -- github flavored markdown. Trac itself is ancient
> technology, dating to 2003, well before anybody knew how to program
> real software. It provides no support for all the most important
> aspects of software development -- Kanban boards, sprint management,
> or even burndown charts.
>
> What is necessary is an integrated solution that holistically
> addresses all aspects of development, fostering a DevOps culture,
> embracing cloud-first, agile-first, test-first, disrupt-first
> principles, and with an ironclad SLA. Rather than homegrown
> solutions, we need a GHC development process that utilizes tools and
> procedures already familiar to regular developers. Cross-sectional
> feature comparison analysis yields a clear front-runner -- Visual
> Studio Team Services.
>
> VSTS is a recognized Leader in the Gartner Magic Quadrant for
> Enterprise Agile Planning tools. It lets us migrate from custom git
> hosting to a more reliable source control system -- Team Foundation
> Version Control. By enforcing the locking of checked-out files, we
> can prevent the sorts of overlap between different patches that occur
> in the current distributed version management system, and coordinate
> tightly between developers, enabling and fostering T-shaped skills.
> Team Build also lets us migrate from antiquated makefiles to modern,
> industry-standard technology -- XML descriptions of build processes
> that integrate automatically with tracking of PBIs (product backlog
> items), and one-button release management.
>
> In terms of documentation, rather than deal with the subtleties of
> different markdown implementations and the confusing world of
> restructured text, we can utilize the full power of Word, including
> SharePoint integration as well as Office 365 capabilities, and
> integration with Microsoft Teams, the chat-based workspace for
> collaboration. This enables much more effective cross-team
> collaboration with product and marketing divisions.
>
> One of the most exciting features of VSTS is powerful extensibility,
> with APIs offered in both major programming paradigms in use today --
> JVM and .NET. The core organizational principle for full application
> lifecycle management is a single data construct -- the "work item"
> which documentation informs us "represents a thing," which can be
> anything that "a user can imagine." The power of work items comes
> through their extensible XML representation. Work items are combined
> into a Process Template, with such powerful Process Templates
> available as Agile, Scrum, and CMMI. VSTS will also allow us to
> analyze GHC Developer team performance with an integrated reporting
> data warehouse that uses a cube.
>
> Pricing for up to 100 users is $750 a month. Individual developers
> can also purchase subscriptions to Visual Studio Professional for $45
> a month. I suggest we start directing resources towards a transition.
> I imagine all work to accomplish this could be done within a year,
> and by next April 1, the GHC development 

Re: Long standing annoying issue in ghci

2017-12-08 Thread MarLinn
I opened an issue on the Haskeline github 
(https://github.com/judah/haskeline/issues/72).


But it seems to be completely Haskeline-side, so I'm not sure if it's 
worth re-opening the one for ghci? As missing documentation maybe?
(BTW, I found this on the wiki: https://wiki.haskell.org/GHCi_in_colour. 
Might be a good place to put it, if linked.)


If you want to, here are my test cases rewritten as ghci prompts:

    -- single line, positioning error
    :set prompt " \ESC[36m%\ESC[0m "
    -- single line, works
    :set prompt " \ESC[36m\STX%\ESC[0m\STX "
    -- multiline, bad output
    :set prompt "\ESC[32m\STX–––\ESC[0m\STX\n \ESC[36m\STX%\ESC[0m\STX "
    -- multiline, works but is inconsistent
    :set prompt "\ESC[32m–––\ESC[0m\n \ESC[36m\STX%\ESC[0m\STX "

In my tests, the positioning errors consistently happen if there are any 
"unclosed" escape sequences on the last line of the prompt, regardless 
of its length. Escape sequences on previous lines consistently create 
"weird characters" but don't influence the positioning, again regardless 
of their lengths. That makes sense, as the two sets of lines seem to be 
handled quite differently.


Are multiline prompts even used by a lot of people? I like mine because 
it gives me both a list of modules and a consistent cursor position. 
But maybe I'm the exception?


Cheers.

On 2017-12-07 23:15, cheater00 cheater00 wrote:


Interesting. Would you mind reopening the issue and providing a buggy 
example? And alerting the haskeline maintainers? How does it work on a 
one-line prompt that is so long it wraps?



On Thu, 7 Dec 2017 23:11, MarLinn <monkle...@gmail.com> wrote:



> Here's what I use:
>
> :set prompt "\ESC[46m\STX%s>\ESC[39;49m\STX "
>
> I believe \STX is a signal to haskeline for control sequences.
> Documentation is here:
> https://github.com/judah/haskeline/wiki/ControlSequencesInPrompt
Note: If you're using a multi-line prompt, things may be different
again. I don't know what the rules are, but I found that if I put \STX
on any but the last line of prompts I get weird characters. The same
goes for any \SOH you might want to add for some reason.

Cheers,
MarLinn





Re: Long standing annoying issue in ghci

2017-12-07 Thread MarLinn



Here's what I use:

:set prompt "\ESC[46m\STX%s>\ESC[39;49m\STX "

I believe \STX is a signal to haskeline for control sequences.
Documentation is here:
https://github.com/judah/haskeline/wiki/ControlSequencesInPrompt
Note: If you're using a multi-line prompt, things may be different 
again. I don't know what the rules are, but I found that if I put \STX 
on any but the last line of prompts I get weird characters. The same 
goes for any \SOH you might want to add for some reason.


Cheers,
MarLinn



Re: Can I get the internal name of a package at runtime?

2017-10-15 Thread MarLinn

Hi Daniel,

that looks very interesting. I think it'll take some time to understand 
what's going on, but I already got some good parts. And even if I won't 
end up using it, this seems like a good way to learn some stuff. So 
thanks a lot!


For now I don't need the full power, so I think I'll take what I learned 
here and stick to a simple, hacky solution along the lines of Brandon's 
suggestions. Like enumerating object files in some specified directory, 
then mapping


    readCreateProcessWithExitCode
      (shell $ "readelf --symbols --wide " ++ path
            ++ " | grep closure | tr --squeeze-repeats ' '"
            ++ " | cut --delimiter=' ' --fields=9") ""

over them and z-encoding back and forth to discover what the heck I'm 
actually loading.

Elegant? No. Secure? No. Portable? …sufficiently. Works? …hopefully.
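
Spelled out, a minimal sketch of that hack might look like this 
(function name invented; assumes GNU readelf, grep, tr and cut on the 
PATH, and does no error handling):

    import System.Directory (listDirectory)
    import System.FilePath ((</>), takeExtension)
    import System.Process (readCreateProcessWithExitCode, shell)

    -- List the closure symbols of every object file in one directory.
    -- The z-decoding step is still missing, as is shell quoting.
    closureSymbols :: FilePath -> IO [(FilePath, String)]
    closureSymbols dir = do
      files <- listDirectory dir
      let objects = [ dir </> f | f <- files, takeExtension f == ".o" ]
      mapM (\path -> do
              (_code, out, _err) <- readCreateProcessWithExitCode
                (shell $ "readelf --symbols --wide " ++ path
                      ++ " | grep closure | tr --squeeze-repeats ' '"
                      ++ " | cut --delimiter=' ' --fields=9") ""
              return (path, out))
           objects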

I got the feeling there is no good non-hacky way. Somewhere there's 
always some extra C code or something. I'm just glad my current goal is 
just to load object files, not to compile user-supplied "scripts" into 
a running project or something.


So thanks again to you all!

Cheers,
MarLinn

On 2017-10-15 02:30, Daniel Gröber wrote:

Hi,

I think you might be interested in my rts-loader package,
particularly the [mkSymbol 
function](https://github.com/DanielG/rts-loader/blob/master/System/Loader/RTS.hs#L275).

It should demonstrate how you can construct the symbol names for
dynamic loading and how to coax the information needed out of Cabal,
see the README and the rest of the source.

--Daniel

On Sat, Oct 14, 2017 at 09:59:03PM +0200, MarLinn wrote:

That sounds reasonable, but also like there *can not be* a way to obtain
that hash at runtime. And therefore, no way to discover the true package
name.

Which in turn makes discovery and loading of plug-ins a bit harder. Well, I
guess it's for a good reason so I'll have to work around it. Good to know.

Thanks for helping out!

Cheers,
MarLinn


On 2017-10-14 20:11, Brandon Allbery wrote:

On Sat, Oct 14, 2017 at 12:48 PM, MarLinn <monkle...@gmail.com> wrote:

 So the "actual" package name seems to be
 "Plugin-0.0.0.0-2QaFQQzYhnKJSPRXA7VtPe".
 That leaves the random(?) characters behind the version number to
 be explained.


ABI hash of that specific package build, which is needed because
compiling with different optimization levels etc. will change what part
of the internals gets exposed in the .hi file for inlining into other
modules; mismatches there lead to *really* weird behavior. (If you're
lucky, it'll "just" be a type mismatch in code you didn't write, because
it came from the .hi file. If unlucky, it compiles but dumps core at
runtime.)

--
brandon s allbery kf8nh sine nomine associates
allber...@gmail.com ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net



Re: Can I get the internal name of a package at runtime?

2017-10-14 Thread MarLinn
That sounds reasonable, but also like there *can not be* a way to obtain 
that hash at runtime. And therefore, no way to discover the true package 
name.


Which in turn makes discovery and loading of plug-ins a bit harder. 
Well, I guess it's for a good reason so I'll have to work around it. 
Good to know.


Thanks for helping out!

Cheers,
MarLinn


On 2017-10-14 20:11, Brandon Allbery wrote:
On Sat, Oct 14, 2017 at 12:48 PM, MarLinn <monkle...@gmail.com> wrote:


So the "actual" package name seems to be
"Plugin-0.0.0.0-2QaFQQzYhnKJSPRXA7VtPe".
That leaves the random(?) characters behind the version number to
be explained.


ABI hash of that specific package build, which is needed because 
compiling with different optimization levels etc. will change what 
part of the internals gets exposed in the .hi file for inlining into 
other modules; mismatches there lead to *really* weird behavior. (If 
you're lucky, it'll "just" be a type mismatch in code you didn't 
write, because it came from the .hi file. If unlucky, it compiles but 
dumps core at runtime.)


--
brandon s allbery kf8nh sine nomine associates
allber...@gmail.com ballb...@sinenomine.net

unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net




Re: Can I get the internal name of a package at runtime?

2017-10-14 Thread MarLinn

Hi Edward,

thank you.
That knowledge revealed that the "Ozi" part was actually the version number.

So the "actual" package name seems to be 
"Plugin-0.0.0.0-2QaFQQzYhnKJSPRXA7VtPe".
That leaves the random(?) characters behind the version number to be 
explained.
But at least now I can exploit the fact that a 
"libHSPlugin-0.0.0.0-2QaFQQzYhnKJSPRXA7VtPe.a" file is generated. So if 
I don't find the complete answer I still have a more portable way for 
discovery than inspecting headers.


That's quite useful.
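
For reference, the handful of decoding rules at play in this name fit 
in a few lines (only these cases; the full table is on the wiki page 
Edward linked):

    -- Decode just the z-encoding cases that appear above.
    zDecode :: String -> String
    zDecode ('z':'m':rest) = '-' : zDecode rest   -- zm encodes '-'
    zDecode ('z':'i':rest) = '.' : zDecode rest   -- zi encodes '.'
    zDecode ('z':'z':rest) = 'z' : zDecode rest   -- zz encodes a literal 'z'
    zDecode (c:rest)       = c   : zDecode rest
    zDecode []             = []

    -- zDecode "Pluginzm0zi0zi0zi0zm2QaFQQzzYhnKJSPRXA7VtPe"
    --   == "Plugin-0.0.0.0-2QaFQQzYhnKJSPRXA7VtPe"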

Cheers,
MarLinn


On 2017-10-14 18:01, Edward Z. Yang wrote:

Hi MarLinn,

The mangled name is "z-encoded".  It is documented here:
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/SymbolNames

Edward

Excerpts from MarLinn's message of 2017-10-14 17:35:28 +0200:

Hi.

I'm experimenting with plug-ins right now. I did manage to dynamically
load functions at runtime. The caveat: Something (cabal? ghc?) mangles
the package names. For example, to load a function called "theFunction"
from a module called "Callee" in a package "Plugin", I had to address it
via the name
"Pluginzm0zi0zi0zi0zm2QaFQQzzYhnKJSPRXA7VtPe_Callee_theFunction_closure".
O…K. Most parts of that are clear, and thanks for making my package
cooler by appending a "z", but who is this Ozi guy and why is he rapping
about modems? Without knowing Ozi, the only way I found to get at this
magic string is to manually look at the actual ELF-header of the
compiled module. While that might be a robust way, it seems neither
portable nor elegant.

The "plugins" library failed too, probably for the same reason. (Or it's
under-documented. Probably both.) The "dynamic-loader" library does
something via c, therefore no.

Which brings me to the question: Is there any way for a module to get at
its own internal package name? Or even at the internal name of an
external package? If not, can I somehow recreate the magic mangling at
runtime? At first I thought the functions in the "Module", "Name" etc
modules of GHC might help – but it seems I either need an existing Name
(that I have no idea how to get) or I have to create one (with no idea
what magic mangler to call).

I'm asking this question here rather than on café as I feel that if
there is a solution, it's probably buried in the details of GHC.

Thanks for any suggestions,
MarLinn





Can I get the internal name of a package at runtime?

2017-10-14 Thread MarLinn

Hi.

I'm experimenting with plug-ins right now. I did manage to dynamically 
load functions at runtime. The caveat: Something (cabal? ghc?) mangles 
the package names. For example, to load a function called "theFunction" 
from a module called "Callee" in a package "Plugin", I had to address it 
via the name 
"Pluginzm0zi0zi0zi0zm2QaFQQzzYhnKJSPRXA7VtPe_Callee_theFunction_closure". 
O…K. Most parts of that are clear, and thanks for making my package 
cooler by appending a "z", but who is this Ozi guy and why is he rapping 
about modems? Without knowing Ozi, the only way I found to get at this 
magic string is to manually look at the actual ELF-header of the 
compiled module. While that might be a robust way, it seems neither 
portable nor elegant.


The "plugins" library failed too, probably for the same reason. (Or it's 
under-documented. Probably both.) The "dynamic-loader" library does 
something via c, therefore no.


Which brings me to the question: Is there any way for a module to get at 
its own internal package name? Or even at the internal name of an 
external package? If not, can I somehow recreate the magic mangling at 
runtime? At first I thought the functions in the "Module", "Name" etc 
modules of GHC might help – but it seems I either need an existing Name 
(that I have no idea how to get) or I have to create one (with no idea 
what magic mangler to call).


I'm asking this question here rather than on café as I feel that if 
there is a solution, it's probably buried in the details of GHC.


Thanks for any suggestions,
MarLinn



Re: Operating on HsSyn

2017-07-28 Thread MarLinn

By

    (parser . prettyPrint . parser) = id

I meant

    (prettyPrint . parser . prettyPrint) = id

for a valid input.

Simplifying, (parser ∷ String → something), and (prettyPrint ∷ something 
→ String).


Therefore, (parser . prettyPrint . parser ∷ String → something) and 
(prettyPrint . parser . prettyPrint ∷ something → String).


Therefore, both criteria could only apply for (something ~ String). But 
as pretty printing adds quotation marks, not even that is true.


There are four formulations that might be applicable:

1. parser . prettyPrint ≍ id

2. prettyPrint . parser ≍ id -- ∷ String → String, useless here

3. prettyPrint . parser . prettyPrint ≍ prettyPrint

4. parser . prettyPrint . parser ≍ parser

(You could go beyond, to prettyPrint . parser . prettyPrint . parser ≍ 
prettyPrint . parser, etc…)

I don't think 1 (or 2) follows from either of the last two. But 1 does 
imply them. So it is a stronger criterion than both, and therefore 
probably not the one to choose. Assuming the parser is internally 
consistent, 3 only says something about the internal consistency of the 
pretty printer, while 4 says something about the relationship of the 
pretty printer to the parser. Thus 4 looks like the best candidate for 
a criterion, possibly with 3 as a secondary target.
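
As an aside, criterion 4 is also easy to test mechanically. A sketch, 
assuming some hypothetical parse and pretty-print functions and an AST 
type with Eq and Show instances:

    import Test.QuickCheck

    -- Criterion 4 as a property: re-parsing the pretty-printed AST
    -- must give back the same AST. Inputs that don't parse are
    -- discarded.
    prop_criterion4 :: (Eq ast, Show ast)
                    => (String -> Maybe ast)  -- parser; Nothing = error
                    -> (ast -> String)        -- pretty printer
                    -> String -> Property
    prop_criterion4 parse pp src =
      case parse src of
        Nothing  -> discard
        Just ast -> parse (pp ast) === Just ast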


Cheers,
MarLinn



Re: Telemetry

2016-12-09 Thread MarLinn via ghc-devs
Pretty random idea: what if GHC exposed measurement points for 
performance and telemetry, but a separate tool handled the read-out, 
configuration, upload etc.? That would keep the telemetry from being 
built-in, while still providing a way to get *some* information.


Such a support tool might be interesting for other projects, too, or 
even for slightly different use cases like monitoring servers. The 
question is whether such a tool would bring enough benefit to enough 
projects to get buy-in and attract contributors. And just separating it 
out doesn't solve the underlying issues, of course, so attracting 
contributors and buy-in might be even harder than it already is for 
"normal" projects. Close ties to GHC might improve that, but I doubt 
the effect would be big.


Additionally, this approach would just shift many of the questions over 
to the Haskell Platform and/or Stack instead of addressing them – or 
even further, onto that volatile front-line space where inner-community 
conflict roared recently. It wouldn't be the worst place to address 
them, but I would hesitate to throw yet another potential point of 
contention onto that burned field.


Basically: I like that idea, but I might just have proven it fruitless 
anyway.



Cheers,
MarLinn


Re: Telemetry (WAS: Attempt at a real world benchmark)

2016-12-09 Thread MarLinn via ghc-devs



It could tell us which language features are most used.


Language features are hard if they are not available in separate libs. 
If in libs, then IIRC debian is packaging those in separate packages, 
again you can use their popularity contest. 


What in particular makes them hard? Sorry if this seems like a stupid 
question to you, I'm just not that knowledgeable yet. One reason I can 
think of would be that we would want attribution, i.e. did the developer 
turn on the extension himself, or is it just used in a lib or template – 
but that should be easy to solve with a source hash, right? That source 
hash itself might need a bit of thought though. Maybe it should not be a 
hash of a source file, but of the parse tree.


The big issue is (a) design and implementation effort, and (b) 
dealing with the privacy issues. I think (b) used to be a big deal, 
but nowadays people mostly assume that their software is doing 
telemetry, so it feels more plausible.  But someone would need to 
work out whether it had to be opt-in or opt-out, and how to actually 
make it work in practice.


Privacy here is a complete can of worms (keep in mind you are dealing 
with a lot of different legal systems); I strongly suggest not to even 
think about it for a second. Your note "but nowadays people mostly 
assume that their software is doing telemetry" may perhaps be true in 
the sick world of mobile apps, but I guess it is not true in the world 
of developing secure and security-related applications for either 
server usage or embedded.


My first reaction to "nowadays people mostly assume that their software 
is doing telemetry" was to amend it with "* in the USA" in my mind. But 
yes, mobile is another place. Nowadays I do assume most software uses 
some sort of phone-home feature, but that's because it's on my To Do 
list of things to search for on first configuration. Note that I am 
using "phone home" instead of "telemetry" because some companies hide it 
in "check for updates" or mix it with some useless "account" stuff. 
Finding out where it's hidden and how much information they give about 
the details tells a lot about the developers, as does opt-in vs opt-out. 
Therefore it can be a reason to not choose a piece of software or even 
an ecosystem after a first try. (Let's say an operating system almost 
forces me to create an online account on installation. That not only 
tells me I might not want to use that operating system, it also sends a 
marketing message that the whole ecosystem is potentially toxic to my 
privacy because they live in a bubble where that appears to be 
acceptable.) So I do have that aversion even in non-security-related 
contexts.


I would say people are aware that telemetry exists, and developers in 
particular. I would also say developers are aware of the potential 
benefits, so they might be open to it. But what they care and worry 
about is /what/ is reported and how they can /control/ it. Software 
being Open Source is a huge factor in that, because they know that, at 
least in theory, they could vet the source. But the reaction might still 
be very mixed – see Mozilla Firefox.


My suggestion would be a solution that gives the developers the feeling 
of making the choices, and puts them in control. It should also be 
compatible with configuration management so that it can be integrated 
into company policies as easily as possible. Therefore my suggestions 
would be:

 * Opt-in. Nothing takes away the feeling of being in control more than
   perceived "hijacking" of a device with "spy ware". This also helps
   circumvent legal problems because the users or their employers now
   have the responsibility.

 * The switches to turn it on or off should be in a configuration file.
   There should be several staged configuration files: one for a
   project, one for a user, one system-wide. This is for compatibility
   with configuration management. Configurations higher up the
   hierarchy override ones lower in the hierarchy, but they can't force
   telemetry to be on – at least not the sensitive kind.

 * There should be several levels, or a set of options that can be
   switched on or off individually, for fine-grained control. All
   should be very well documented. Once integrated and documented, they
   can never change without also changing the configuration flag that
   switches them on.
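
To make the override rule concrete, a toy model (all names invented):

    -- Each layer of the hierarchy can veto (switch off) what another
    -- layer enabled, but cannot force telemetry on.
    data TelemetryConfig = TelemetryConfig
      { perfCounters :: Bool   -- coarse performance measurements
      , featureUsage :: Bool   -- which language features are used
      } deriving Show

    override :: TelemetryConfig -> TelemetryConfig -> TelemetryConfig
    override outer inner = TelemetryConfig
      { perfCounters = perfCounters outer && perfCounters inner
      , featureUsage = featureUsage outer && featureUsage inner
      }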

There still might be some backlash, but a careful approach like this 
could soothe the minds.


If you are worried that we might get too little data this way, here's 
another thought, leading back to performance data: The most benefit in 
that regard would come from projects that are built regularly, on 
different architectures, with sources that can be inspected and with an 
easy way to get diffs. In other words, projects that live on github and 
travis anyway. Their main

Re: Separating typechecking and type error reporting in two passes?

2016-11-30 Thread MarLinn via ghc-devs



But you are right that when the programmer sits there and waits for a
result, that’s when snappiness is important.


I had a random idea based on this observation:
(With a certain flag set) the compiler could follow the existing 
strategy until it has hit the first n errors, possibly with n=1. Then 
it could switch off the context overhead, and all subsequent errors 
could be deferred or not fleshed out. Or, alternatively, the proposed 
new strategy is used, but the second pass only looks at the first n 
errors.

Benefit: correct code stays on the fast path, but error reporting 
doesn't add too much overhead. My experience when using the compiler to 
have a conversation about errors was that I was correcting one or two 
errors at a time, then re-compiling. I discarded all the extra 
information about the other errors anyway, at least most of the time. I 
don't know if that is a usual pattern, but if it is we might as well 
exploit it.
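
As a toy illustration of that flow, with a silly stand-in "checker" 
that flags lines containing a '?':

    type Line = Int

    -- Pass 1: cheap, positions only.
    cheapPass :: String -> [Line]
    cheapPass src = [ i | (i, l) <- zip [1 ..] (lines src), '?' `elem` l ]

    -- Pass 2: expensive, full message; only run where requested.
    expensivePass :: String -> Line -> String
    expensivePass src i =
      "error at line " ++ show i ++ ": " ++ lines src !! (i - 1)

    -- Report only the first n errors in full detail.
    reportFirstN :: Int -> String -> [String]
    reportFirstN n src = map (expensivePass src) (take n (cheapPass src))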


This idea could already benefit from a separation, but we can go 
further. What if, in interactive sessions, you only got the result of 
the first pass at first: no details, only a list of error positions. In 
some cases, that is all you need to find a dumb typo. It also doesn't 
clutter the screen with loads of fluff while still giving you a basic 
idea of how much is wrong. Now what if you could then instruct the 
system to do the second pass at places you choose, interactively? In 
other words, the conversation would be even more conversational.
Of course the benefits are debatable, and this is not something that's 
going to happen soon anyway. But for me the idea alone is an argument 
for the proposed new separation, because it would give us the 
flexibility to think of features like this.


Cheers,
MarLinn


Re: Making (useful subsets of) bytecode portable between targets

2016-11-25 Thread MarLinn via ghc-devs

On 2016-11-25 12:11, Simon Marlow wrote:
We basically have two worlds: first, the compile-time world. In this 
world, we need all the packages and modules of the current package 
built for the host platform. Secondly, we need the runtime world, with 
all the packages and modules of the current package cross-compiled for 
the target platform.


Maybe this separation and the preceding discussion of the two possible 
solutions suggest a usable approach to the architecture of this future 
system?


First, let me reframe the "runner" idea. In a real-world environment, 
this seems like a viable solution either with two separate machines or 
with a VM nested in the main build machine. In both cases, we would need 
two parts of the compiler, communicating over customized channels.
The cross-compiler approach is more or less just a variation on this 
with far less overhead.

So why not build an architecture that supports both solutions?

In practice, this would mean we need a tightly defined but flexible API 
between at least two "architecture plugins" and one controller that 
could run on either side. To me, this sounds more like a build system 
than a mere compiler. And I'm okay with that, but I don't think 
GHC+Cabal alone can or should shoulder the complexity. There are nice, 
working build systems out there that could take over the role of the 
controller, so all GHC and Cabal would have to offer are parsing, 
modularized steps, and nice hooks. In other words, a kind of 
meta-language to describe compiler deployments – and Haskell is great 
for describing languages.
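
To sketch the shape such an API might have, in entirely made-up terms:

    -- Hypothetical: one instance per "world" (host or target), with
    -- the controller (possibly an external build system) coordinating
    -- a host-world plugin and a target-world plugin over some channel.
    class ArchPlugin p where
      archName      :: p -> String
      compileModule :: p -> FilePath -> IO FilePath  -- source to object
      runCompiled   :: p -> FilePath -> IO ()        -- e.g. for TH splices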


Here's yet another idea I'd like to add, although it is rather silly. 
The idea of a meta-language that describes a conversion structure seems 
very close to what Pandoc is doing for documents. And while Pandoc's 
architecture and history make it a bit static, GHC can still learn from 
it. Maybe, someday, there could even be a bigger, even more over-arching 
build language that describes the program, the documentation, and the 
deployment processes of the whole system?


Cheers,
MarLinn


Re: Request for feedback: deriving strategies syntax

2016-09-28 Thread MarLinn via ghc-devs

On 2016-09-28 04:06, Richard Eisenberg wrote:
+1 on `stock` from me. Though I was all excited to get my class next 
semester jazzed for PL work by explaining that I had slipped a new 
keyword `bespoke` into a language. :)


Maybe there's still a spot you can slip it in, e.g. bespoke error 
messages. ;)



I agree that "stock" is an acceptable alternative.

MarLinn