Help on reading a YAML file using dyaml

2023-08-25 Thread vino via Digitalmars-d-learn

Hi All,

  Request your help on reading a yaml file using dyaml.

input.yaml
```
name: "This is test Program"
program:
prg: "whoami"
args: "/?"
env:
config:
flag:
workdir:
shellPath:
```
Program:
```
import dyaml;
import std.process;
import std.stdio;

void main () {

    // Load the YAML document.
    Node config = Loader.fromFile("input.yaml").load();

    string program = config["program"]["prg"].as!string;
    string[] args = config["program"]["args"].as!(string[]);
    string[string] env = config["program"]["env"].as!(string[string]);

    // Renamed from `config` so it does not shadow the Node above.
    std.process.Config cfg = config["program"]["config"].as!(Config);
    std.process.Config.Flags flag = config["program"]["flag"].as!(Config.Flags);

    string workdir = config["program"]["workdir"].as!string;
    string shell = config["program"]["shellPath"].as!string;

    writeln(args);
    writeln(env);
    writeln(cfg);
    writeln(flag);
}
```
From,
Vino



Config and Config.Flags difference

2023-08-25 Thread vino via Digitalmars-d-learn

Hi All,

   Request your help in understanding the difference between the 
two below, if possible with an example.


Config.suppressConsole
Config.Flags.suppressConsole

From,
Vino.




Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Joe--- via Digitalmars-d-learn

On Wednesday, 23 August 2023 at 14:43:33 UTC, Sergey wrote:

On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:

I use

foreach(s; taskPool.parallel(files, numParallel))
{ L(s); } // L(s) represents the work to be done.


If you make the L function return “ok”, for example, when a file 
is successfully downloaded, you can try to use TaskPool.amap.


The other option is probably to use std.concurrency.
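
For illustration, a minimal sketch of that amap approach; downloadOne and files are hypothetical stand-ins for the real work:

```d
import std.parallelism : taskPool;
import std.stdio : writeln;

// Hypothetical wrapper: returns "ok" on success, something else on failure.
string downloadOne(string url)
{
    // ... perform the download here ...
    return "ok";
}

void main()
{
    auto files = ["fileA", "fileB", "fileC"];
    // amap maps downloadOne over the range in parallel and collects the results.
    string[] results = taskPool.amap!downloadOne(files);
    writeln(results);
}
```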


Any idea why it is behaving the way it is?


Cool pattern or tragic?

2023-08-25 Thread Guillaume Piolat via Digitalmars-d-learn
The idea is to deliberately mark as @system functions that need 
special scrutiny to use, regardless of their memory safety. 
These are functions that would typically be named `assumeXXX`.




```d
class MyEncodedThing
{
    Encoding encoding;

    /// Unsafe cast of encoding.
    void assumeEncoding(Encoding encoding) /* here */ @system /* here */
    {
        this.encoding = encoding;
    }
}

char* assumeZeroTerminated(char[] str) @system
{
    return str.ptr;
}

```

That way, @safe code will still need to manually @trust them.
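
For illustration, a minimal sketch of the call site, reusing the assumeZeroTerminated definition above; the caller name useIt is made up:

```d
void useIt(char[] buffer) @safe
{
    // The caller takes responsibility: this is only valid if the buffer
    // really is zero-terminated, so the escape hatch is explicit.
    char* p = () @trusted { return assumeZeroTerminated(buffer); }();
    // ... hand p to a C API, etc. ...
}
```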




Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Joe--- via Digitalmars-d-learn

On Wednesday, 23 August 2023 at 14:43:33 UTC, Sergey wrote:

On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:

I use

foreach(s; taskPool.parallel(files, numParallel))
{ L(s); } // L(s) represents the work to be done.


If you make the L function return “ok”, for example, when a file 
is successfully downloaded, you can try to use TaskPool.amap.


The other option is probably to use std.concurrency.


I think I might know what is going on but not sure:

The tasks are split up into batches and each batch gets a thread. 
What happens then is that some long task will block its entire 
batch and there will be no re-balancing of the batches.



"A work unit is a set of consecutive elements of range to be 
processed by a worker thread between communication with any other 
thread. The number of elements processed per work unit is 
controlled by the workUnitSize parameter. "


So the question is how to rebalance these work units?

E.g., when a worker thread is done with its batch, it should look 
to help finish the remaining batches rather than terminating and 
leaving all the work for the last thread.


This seems like a flaw in the design. E.g., if one happens to 
have n batches and every batch but one has tasks that finish 
instantly, then essentially one has no parallelization.




Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Ali Çehreli via Digitalmars-d-learn

On 8/25/23 14:27, j...@bloow.edu wrote:

> "A work unit is a set of consecutive elements of range to be processed
> by a worker thread between communication with any other thread. The
> number of elements processed per work unit is controlled by the
> workUnitSize parameter. "
>
> So the question is how to rebalance these work units?

Ok, your question brings me back from summer hibernation. :)

This is what I do:

- Sort the tasks in decreasing time order; the ones that will take the 
most time should go first.


- Use a work unit size of 1.

The longest running task will start first. You can't get better than 
that. When I print some progress reporting, I see that most of the time 
N-1 tasks have finished and we are waiting for that one longest running 
task.
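
For illustration, a minimal sketch of this setup; the Task struct and its estimatedCost field are hypothetical stand-ins for whatever predicts run time:

```d
import std.algorithm : sort;
import std.parallelism : taskPool;

struct Task
{
    string name;
    long estimatedCost; // hypothetical metric: bigger means longer-running
}

void runAll(Task[] tasks)
{
    // Longest-running tasks first, so the slowest one starts immediately.
    sort!((a, b) => a.estimatedCost > b.estimatedCost)(tasks);

    // Work unit size of 1: each worker takes exactly one task at a time,
    // so a slow task never traps a batch of unstarted tasks behind it.
    foreach (ref t; taskPool.parallel(tasks, 1))
    {
        // ... do the work for t ...
    }
}
```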


Ali
"back to sleep"


Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Adam D Ruppe via Digitalmars-d-learn

On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:

to download files from the internet.


Are they particularly big files? You might consider using one of 
the other libs that does it all in one thread. (i ask about size 
cuz mine ive never tested doing big files at once, i usually use 
it for smaller things, but i think it can do it)


The reason why this causes me problems is that the downloaded 
files, which are cached to a temporary file, stick around and 
do not free up space (think of it just as using memory), and this 
can cause problems some of the time.


this is why im a lil worried about my thing, like do they have to 
be temporary files or can it be memory that is recycled?




Re: Cool pattern or tragic?

2023-08-25 Thread Richard (Rikki) Andrew Cattermole via Digitalmars-d-learn

I do something similar with my error types.

I have a method called assumeOkay. It'll assert if it isn't ok.

There is also unsafeGetLiteral for slice based types.

All @system.
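
For illustration, a rough sketch of that kind of error type; every name here is hypothetical rather than the actual API:

```d
struct Result(T)
{
    private T value;
    private string error; // empty means the operation succeeded

    bool isOk() const @safe { return error.length == 0; }

    // Deliberately @system: the caller asserts the happy path and skips checking.
    T assumeOkay() @system
    {
        assert(isOk(), error);
        return value;
    }
}
```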


Re: Cool pattern or tragic?

2023-08-25 Thread Jonathan M Davis via Digitalmars-d-learn
On Friday, August 25, 2023 3:00:08 PM MDT Guillaume Piolat via Digitalmars-d-
learn wrote:
> The idea is to deliberately mark @system functions that need
> special scrutiny to use, regardless of their memory-safety.
> Function that would typically be named `assumeXXX`.
>
>
>
> ```d
> class MyEncodedThing
> {
>  Encoding encoding;
>
>  /// Unsafe cast of encoding.
>  void assumeEncoding (Encoding encoding) /* here */ @system /*
> here */
>  {
>  this.encoding = encoding;
>  }
> }
>
> char* assumeZeroTerminated(char[] str) @system
> {
>  return str.ptr;
> }
>
> ```
>
> That way, @safe code will still need to manually @trust them.

Well, if no attribute inference is involved, then @system isn't required.
However, explicitly marking it @system makes it so that you won't
accidentally make it @safe by later introducing attribute inference or by
adding something like @safe: or @safe {} to the code. It also makes it clear
that the @system is intentional rather than simply the result of no one
having decided to put @safe or @trusted on it.
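
For illustration, a minimal sketch of that scenario (assuming the label is applied at module scope): the explicit @system keeps the function @system even after an @safe: label is added above it.

```d
@safe:

// Because of the explicit @system below, the @safe: label above does not
// silently turn this into a @safe function that anyone can call unchecked.
char* assumeZeroTerminated(char[] str) @system
{
    return str.ptr;
}
```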

So, it is arguably good practice to mark functions @system if they're
intended to be @system rather than leaving it up to the defaults.

Either way, if the code using those functions is going to be able to use
@trusted correctly, the documentation should probably be very clear about
what the @system function is doing - at least if you're not in an
environment where everyone is expected to look at the code itself rather
than at the documentation.

- Jonathan M Davis





Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Joe--- via Digitalmars-d-learn

On Friday, 25 August 2023 at 21:31:37 UTC, Ali Çehreli wrote:

On 8/25/23 14:27, j...@bloow.edu wrote:

> "A work unit is a set of consecutive elements of range to be
processed
> by a worker thread between communication with any other
thread. The
> number of elements processed per work unit is controlled by
the
> workUnitSize parameter. "
>
> So the question is how to rebalance these work units?

Ok, your question brings me back from summer hibernation. :)

This is what I do:

- Sort the tasks in decreasing time order; the ones that will 
take the most time should go first.


- Use a work unit size of 1.

The longest running task will start first. You can't get better 
than that. When I print some progress reporting, I see that 
most of the time N-1 tasks have finished and we are waiting for 
that one longest running task.


Ali
"back to sleep"



I do not know the amount of time they will run. They are files 
that are being downloaded, and I neither know the file size nor 
the download rate (in fact, the actual download happens 
externally).


While I could use a work unit size of 1, the problem then is that 
I would be downloading N files at once, and that will cause other 
problems if N is large (and sometimes it is).


There should be a "work unit size" and a "max simultaneous 
workers". Then I could set the work unit size to 1 and say the 
max simultaneous workers to 8 to get 8 simultaneous downloads 
without stalling.
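
For what it's worth, std.parallelism appears to allow separating the two already: a TaskPool can be constructed with an explicit worker-thread count, and the work unit size is a separate argument to parallel. A minimal sketch, with downloadOne and the count of 8 as placeholders:

```d
import std.parallelism : TaskPool;

void downloadOne(string file) { /* the actual download wrapper */ }

void downloadAll(string[] files)
{
    // A pool with exactly 8 worker threads (placeholder count)...
    auto pool = new TaskPool(8);
    scope (exit) pool.finish(true); // block until outstanding work completes

    // ...and a work unit size of 1, so each worker takes one file at a time.
    foreach (file; pool.parallel(files, 1))
        downloadOne(file);
}
```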







Re: parallel threads stalls until all thread batches are finished.

2023-08-25 Thread Joe--- via Digitalmars-d-learn

On Friday, 25 August 2023 at 21:43:26 UTC, Adam D Ruppe wrote:

On Wednesday, 23 August 2023 at 13:03:36 UTC, Joe wrote:

to download files from the internet.


Are they particularly big files? You might consider using one 
of the other libs that does it all in one thread. (i ask about 
size cuz mine ive never tested doing big files at once, i 
usually use it for smaller things, but i think it can do it)


The reason why this causes me problems is that the downloaded 
files, which are cached to a temporary file, stick around and 
do not free up space (think of it just as using memory), and 
this can cause problems some of the time.


this is why im a lil worried about my thing, like do they have 
to be temporary files or can it be memory that is recycled?


The downloading is simply a wrapper that provides some caching to 
a RAM drive and management of other things; it doesn't have any 
clue how or what is being downloaded. It passes a link to 
something like youtube-dl or yt-dlp and has it do the download.


Everything works great except for the bottleneck when things are 
not balancing out. It's not a huge deal, since it does work and, 
for the most part, gets everything downloaded, but it sort of 
defeats the purpose of having multiple downloads (which is much 
faster, since each download seems to be throttled).


Increasing the work unit size will make the problem worse, while 
reducing it to 1 will flood the downloads (e.g., having 200 or 
even 2000 downloads at once).


Ultimately this seems like a design flaw in TaskPool, which 
should automatically rebalance the work and not treat the number 
of threads as identical to the number of work units (that is, 
length/workUnitSize).


E.g., suppose we have 1000 tasks and set the work unit size to 
100. This gives 10 work units, and 10 workers will be spawned 
(I'm not sure whether this is limited to the total number of CPU 
threads or not).


What would be nice is to be able to set the work unit size to 1, 
which gives 1000 work units, but limit the concurrent workers to, 
say, 10. So at any time we would have 10 workers, each working on 
one element. When one finishes, it can be repurposed for any 
unfinished task.


The second case is preferable, since there should be no issues 
with balancing but one still gets 10 workers. The stalling comes 
from the algorithm design and not from anything innate in the 
problem or workload itself.




Package resusage issue.

2023-08-25 Thread vino via Digitalmars-d-learn

Hi All,

 I am trying to use the package resusage, and I am getting the 
below error message. I googled the error and tried all of the 
solutions provided, but I am still facing the same issue, hence 
requesting your help.



Solutions tried
```
dism.exe /image:C: /cleanup-image /revertpendingactions
dism.exe /online /cleanup-image /startcomponentcleanup
```
Error
```
std.windows.syserror.WindowsException@resusage\common.d(25): 
Could not open process: The parameter is incorrect. (error 87)

```

From,
Vino