[webkit-changes] [WebKit/WebKit] 2471ba: [GTK] Unified sources build fix for Debian Stable ...

2023-03-30 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 2471ba074ba4e7d2ec961e713b6d4cfabd27cef3
  
https://github.com/WebKit/WebKit/commit/2471ba074ba4e7d2ec961e713b6d4cfabd27cef3
  Author: Diego Pino Garcia 
  Date:   2023-03-30 (Thu, 30 Mar 2023)

  Changed paths:
M Source/WebKit/SourcesGTK.txt
M Source/WebKit/WebProcess/WebPage/gtk/AcceleratedSurfaceDMABuf.cpp

  Log Message:
  ---
  [GTK] Unified sources build fix for Debian Stable after 262210@main

Reviewed by Carlos Garcia Campos.

Mark 'AcceleratedSurfaceDMABuf.cpp' as no-unify and disambiguate 
'WebCore::DMABufFormat'.

* Source/WebKit/SourcesGTK.txt:
* Source/WebKit/WebProcess/WebPage/gtk/AcceleratedSurfaceDMABuf.cpp:
(WebKit::AcceleratedSurfaceDMABuf::clientResize):
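For readers unfamiliar with WebKit's unified builds, a sketch of what such a
change typically looks like (illustrative, not the literal diff): in
Source/WebKit/SourcesGTK.txt the file is flagged so that it is compiled on its
own instead of being bundled into a unified source file,

    WebProcess/WebPage/gtk/AcceleratedSurfaceDMABuf.cpp @no-unify

and inside clientResize() the ambiguous name is written with an explicit
namespace (WebCore::DMABufFormat rather than plain DMABufFormat), so it no
longer collides with another DMABufFormat visible in the same unified
compilation unit.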

Canonical link: https://commits.webkit.org/262326@main




Re: [Gluster-users] Performance: lots of small files, hdd, nvme etc.

2023-03-30 Thread Diego Zuccato

Well, you have *way* more files than we do... :)

On 30/03/2023 11:26, Hu Bert wrote:


Just an observation: is there a performance difference between a sw
raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick)

Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks.


with
the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario
seems faster. Just out of curiosity...
It should be, since the bricks are smaller. But given you're using 
replica 3, I don't understand why you're also using RAID1: for every 
10 TB of user-facing capacity you're keeping 60 TB of data on disks.
I'd ditch local RAIDs to double the space available. Unless you 
desperately need the extra read performance.
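
To make the arithmetic explicit (a worked example using the figures above):

    Gluster replica 3 x 2-way RAID1 = 6 physical copies of each block
    10 TB user-facing -> 6 x 10 TB = 60 TB of raw disk
    replica 3 on bare-disk bricks -> 3 x 10 TB = 30 TB, i.e. double the usable space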


Options Reconfigured: [...]

I'll have a look at the options you use. Maybe something can be useful
in our case. Tks :)

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786






Re: New desktop PC: Grub and disks

2023-03-30 Thread Diego Zuccato
Very interesting. Although, since I mainly manage servers, I can see a 
few possible pitfalls. For example, on servers it is normal to have 
separate partitions (especially /var) to prevent an overgrown log from 
filling the root filesystem (which would also lock out the root account 
and require booting from a USB stick or into emergency mode).


Maybe it's time to give BTRFS another try anyway: the last time I 
tested it, it was still fairly new and I ran into several stability 
problems. Besides, it didn't have many of the features you describe.


Diego

On 30/03/2023 10:02, Fabrizio wrote:



On Thu, 30 Mar 2023 at 08:45, Piviul <mailto:piv...@riminilug.it> wrote:


Hi Fabrizio, may I ask you for a few more clarifications?

Hi, I'll try to answer as best I can, but my technical level is very low 
(I'm not an IT professional, just a very old user).



On 3/29/23 17:01, Fabrizio wrote:
 > Why not consider btrfs?
 > You can do whatever you want, including mounting partitions from two
 > different disks on the same mount point.

And what do you see then? The merge of the two partitions?


Yes, but I only tested it out of curiosity.



 > You get automatic snapshots both when you upgrade and before
 > messing with a configuration. I've had it on my two-disk laptop for
 > three years for exactly these reasons and it works very well.
 > Obviously, if you break grub you need to know how to chroot. Since
 > kernel 6.1 you can put swap on the btrfs filesystem, and you should
 > always have swap, on pain of degraded performance. There are quite a
 > few videos explaining it, including Linus's.

Sorry, but I'm a happy LVM user and I wanted to ask: apart from
deduplication, what does it offer over LVM? I'm very comfortable with
LVM, it has RAID, caching, snapshots... I'll take a look at it anyway;
you can also format a logical volume as btrfs, right?

even though it makes little sense, I'm pretty sure it can be done

Piviul

  What is the plus of btrfs, for me?
I don't need to resize partitions when one fills up, as happens when 
you make the root too small; subvolumes behave like folders of a single 
partition, because you effectively have just one partition (obviously 
the EFI partition remains if you have a UEFI machine). With LVM every 
partition stands on its own; with btrfs it doesn't, so you don't even 
need to work out the size of each partition, you just don't care. I 
also gather that, if you want, you can set a size on a subvolume to 
implement quotas (e.g. /home for each user).

  Please bear with my non-technical language.
Another plus is transparent compression.
For me LVM has never made sense on a desktop; btrfs does, and from what 
I'm seeing it will become the default on all the mainstream distributions.
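
(For the record, a minimal shell sketch of the features described above;
the mount points, device name and 50G limit are illustrative, not a recipe:

    btrfs subvolume create /mnt/@home        # subvolumes behave like folders
    btrfs quota enable /mnt                  # turn on qgroup accounting
    btrfs qgroup limit 50G /mnt/@home        # per-subvolume "quota"
    mount -o compress=zstd /dev/sda2 /mnt    # transparent compression
)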

Have a good day


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: New desktop PC: Grub and disks

2023-03-29 Thread Diego Zuccato

IMO the partitioning should be redone:
- partition for /boot/efi : OK
- partition for / : shrink it (generally 16 or 32 GB is more than plenty)
- swap partition : at least 2 GB (rarely needed, but it helps in critical 
moments... I've never regretted creating one, but I've always regretted 
it when I didn't)
- partition for /var : optional but recommended; I generally make it 
16 GB if I have plenty of space, otherwise even just 4; this way a 
runaway log doesn't impact /

- everything else for /home (see the sketch after this list)
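
(A sketch of the resulting layout, assuming a single 512 GB disk; sizes
are simply the ones suggested above:

    /dev/nvme0n1p1   512M  vfat  /boot/efi
    /dev/nvme0n1p2    32G  ext4  /
    /dev/nvme0n1p3     2G  swap
    /dev/nvme0n1p4    16G  ext4  /var
    /dev/nvme0n1p5   rest  ext4  /home
)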

Having a separate /home is perhaps the most important point: it lets you 
reinstall the system while keeping your data.


For system backups I recommend an external disk, keeping the second 
NVMe as an extra data disk.
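
(As for the Grub question quoted below: on a UEFI/GPT machine there is
no MBR boot code to copy. A hedged sketch of the two install modes,
assuming Debian's grub-pc / grub-efi-amd64 packages:

    grub-install /dev/sda                # BIOS/MBR, the old PC's scheme
    grub-install --target=x86_64-efi --efi-directory=/boot/efi   # UEFI
)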


Diego

On 29/03/2023 12:21, pinguino wrote:

On 28/03/23 13:04, Leonardo Boselli wrote:

On Tue, 28 Mar 2023, mauro morichi wrote:
remember that the 1 Gbit/s figure refers to the NIC, while we always 
reason in bytes, which is, roughly, that number divided by 10: 1 Gbit/s 
becomes about 100 MB/s.
Moreover, given the modesty of the switch and a thousand other factors, 
the effective throughput drops, and by a lot (quality of the hardware, 
latencies, software response times, DNS resolution, ping... the list is 
endless).


and the most important factor: the disks.
Just to get an idea: copy a large file from one disk to the other and 
see how fast it goes. It's often slower than from one machine to another.



--
Leonardo Boselli
Firenze, Toscana, Europa
http://i.trail.it


Good morning, list,
OK, I'll take a closer look at the disk situation.

On the new PC I now have two disks.
On the main one, an NVMe, I see two partitions:
- The first is called nvme0n1p1, fat32, mounted on /boot/efi, 512 MiB; 
as far as I can tell it only holds the grub menu.
- The second is called nvme0n1p2, ext4, mounted as root (/), 231.34 GB, 
holding everything else, including the users' home directories.


On the second disk, a 1 TB SATA, I want to keep at least a copy of the 
whole system. That is, I want to copy the root partition, about 232 GB, 
with rsync or clonezilla.


Now for my doubts about Grub.
On the old PC I had two disks, sda and sdb. Each disk had its own MBR 
sector, where I copied the grub menu with the command grub-install 
/dev/sda or grub-install /dev/sdb. So I didn't have a boot partition.


Does the new NVMe disk also have an MBR sector to copy the Grub menu to?
Can I still write the copy into the MBR sector of both disks of the 
new PC?
Or do I have to keep the boot partition on the first disk and create 
another one on the second disk as well?


Thanks
Regards

Claudio




--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: [nexa] Pause Giant AI Experiments: An Open Letter

2023-03-29 Thread Diego Giorio
Am I the only one who is pessimistic about the actual possibility of 
stopping/suspending the AI race?

Have a good day

D.

From: nexa  on behalf of J.C. DE MARTIN 

Sent: Wednesday, March 29, 2023 7:09 AM
To: nexa@server-nexa.polito.it 
Subject: Re: [nexa] Pause Giant AI Experiments: An Open Letter

Some comments from Emily M. Bender (Univ. of Washington):
https://twitter.com/emilymbender/status/1640920936600997889?s=61&t=47Wex2-4P32-eiduz-Pq5A

jc

On 29/03/23 08:47, J.C. DE MARTIN wrote:
First signatories: Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak.

jc

Pause Giant AI Experiments: An Open Letter

We call on all AI labs to immediately pause for at least 6 months the training 
of AI systems more powerful than GPT-4.

https://futureoflife.org/open-letter/pause-giant-ai-experiments/

AI systems with human-competitive intelligence can pose profound risks to 
society and humanity, as shown by extensive research[1] and acknowledged by top 
AI labs.[2] As stated in the widely-endorsed Asilomar AI 
Principles, Advanced AI 
could represent a profound change in the history of life on Earth, and should 
be planned for and managed with commensurate care and resources. Unfortunately, 
this level of planning and management is not happening, even though recent 
months have seen AI labs locked in an out-of-control race to develop and deploy 
ever more powerful digital minds that no one – not even their creators – can 
understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] 
and we must ask ourselves: Should we let machines flood our information 
channels with propaganda and untruth? Should we automate away all the jobs, 
including the fulfilling ones? Should we develop nonhuman minds that might 
eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of 
control of our civilization? Such decisions must not be delegated to unelected 
tech leaders. Powerful AI systems should be developed only once we are 
confident that their effects will be positive and their risks will be 
manageable. This confidence must be well justified and increase with the 
magnitude of a system's potential effects. OpenAI's recent statement regarding 
artificial general intelligence states that 
"At some point, it may be important to get independent review before starting 
to train future systems, and for the most advanced efforts to agree to limit 
the rate of growth of compute used for creating new models." We agree. That 
point is now.

Therefore, we call on all AI labs to immediately pause for at least 6 months 
the training of AI systems more powerful than GPT-4. This pause should be 
public and verifiable, and include all key actors. If such a pause cannot be 
enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and 
implement a set of shared safety protocols for advanced AI design and 
development that are rigorously audited and overseen by independent outside 
experts. These protocols should ensure that systems adhering to them are safe 
beyond a reasonable doubt.[4] This does not mean a pause on AI development in 
general, merely a stepping back from the dangerous race to ever-larger 
unpredictable black-box models with emergent capabilities.

AI research and development should be refocused on making today's powerful, 
state-of-the-art systems more accurate, safe, interpretable, transparent, 
robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically 
accelerate development of robust AI governance systems. These should at a 
minimum include: new and capable regulatory authorities dedicated to AI; 
oversight and tracking of highly capable AI systems and large pools of 
computational capability; provenance and watermarking systems to help 
distinguish real from synthetic and to track model leaks; a robust auditing and 
certification ecosystem; liability for AI-caused harm; robust public funding 
for technical AI safety research; and well-resourced institutions for coping 
with the dramatic economic and political disruptions (especially to democracy) 
that AI will cause.

Humanity can enjoy a flourishing future with AI. Having succeeded in creating 
powerful AI systems, we can now enjoy an "AI summer" in which we reap the 
rewards, engineer these systems for the clear benefit of all, and give society 
a chance to adapt. Society has hit pause on other technologies with potentially 
catastrophic effects on society.[5]  We can do so here. Let's enjoy a long AI 
summer, not rush unprepared into a fall.





[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
New event:

Mar 28 14:37:32 ovirt-node3.ovirt vdsm[4288]: WARN executor state: count=5 
workers={, , , ,  at 0x7fcdc0010898> timeout=7.5, duration=7.50 at 
0x7fcdc0010208> discarded task#=189 at 0x7fcdc0010390>}
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 
[7438]: s4 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28 14:37:32 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:32 829 
[7438]: s4 renewal error -202 delta_length 10 last_success 798
Mar 28 14:37:33 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:33 830 
[7660]: s6 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
Mar 28 14:37:33 ovirt-node3.ovirt sanlock[1662]: 2023-03-28 14:37:33 830 
[7660]: s6 renewal error -202 delta_length 10 last_success 799
Mar 28 14:37:36 ovirt-node3.ovirt pacemaker-controld[3145]:  notice: High CPU 
load detected: 32.59
Mar 28 14:37:36 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 14436ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:37:41 ovirt-node3.ovirt libvirtd[2735]: Domain id=1 
name='SSIS-microos' uuid=e41f8148-79ab-4a88-879f-894d5750e5fb is tainted: 
custom-ga-command
Mar 28 14:37:49 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 7313ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:37:56 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
We did not send a P_BARRIER for 14481ms > ko-count (7) * timeout (10 * 0.1s); 
drbd kernel thread blocked?
Mar 28 14:38:06 ovirt-node3.ovirt pacemaker-controld[3145]:  notice: High CPU 
load detected: 33.50
Mar 28 14:38:09 ovirt-node3.ovirt kernel: drbd drbd0/0 drbd1 ovirt-node2.ovirt: 
Remote failed to finish a request within 7010ms > ko-count (7) * timeout (10 * 
0.1s)

2023-03-28 14:37:32,601Z INFO  
[org.ovirt.engine.core.bll.scheduling.policyunits.EvenGuestDistributionBalancePolicyUnit]
 (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-73) [] 
There is no host with more than 10 running guests, no balancing is needed
2023-03-28 14:37:50,662Z INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-41) [] VM 
'ccb06298-33a3-4b6f-bff3-d0bcd494b18d'(TpayX2GO) moved from 'Up' --> 
'NotResponding'
2023-03-28 14:37:50,666Z WARN  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-41) [] 
EVENT_ID: VM_NOT_RESPONDING(126), VM TpayX2GO is not responding.
2023-03-28 14:38:01,087Z WARN  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6602) [] domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' in problem 'PROBLEMATIC'. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:05,676Z INFO  
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-20) [] VM 
'ccb06298-33a3-4b6f-bff3-d0bcd494b18d'(TpayX2GO) moved from 'NotResponding' --> 
'Up'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6609) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6609) [] Domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' recovered from problem. vds: 
'ovirt-node2.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6610) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node4.ovirt'
2023-03-28 14:38:16,107Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6610) [] Domain 
'4745320f-bfc3-46c4-8849-b4fe8f1b2de6:gv0' recovered from problem. vds: 
'ovirt-node4.ovirt'
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' recovered from problem. vds: 
'ovirt-node3.ovirt'
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 
'2527ed0f-e91a-4748-995c-e644362e8408:drbd0' has recovered from problem. No 
active host in the DC is reporting it as problematic, so clearing the domain 
recovery timer.
2023-03-28 14:38:16,327Z INFO  
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxyImpl] 
(EE-ManagedThreadFactory-engine-Thread-6612) [] Domain 
'4745320f-bfc3-46c4-8849-b

[ovirt-users] Re: Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
Worked

I halted a node of the gluster cluster (that seemed to be problematic from the 
gluster point of view) and the change of the master storage domain worked


[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
It's difficult to answer, as the engine normally "freezes" or is taken 
down during these events... I will try to collect them next time.


[ovirt-users] clock skew in hosted engine and VMs due to slow IO storage

2023-03-28 Thread Diego Ercolani
I don't know why (though I suppose it is related to storage speed), but 
the virtual machines tend to develop a clock skew ranging from a few 
days to a century forward (2177).
I see in the journal of the engine:
Mar 28 13:19:40 ovirt-engine.ovirt NetworkManager[1158]:   
[1680009580.2045] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:24:40 ovirt-engine.ovirt NetworkManager[1158]:   
[1680009880.2042] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Mar 28 13:29:40 ovirt-engine.ovirt NetworkManager[1158]:   
[1680010180.2039] dhcp4 (eth0): state changed new lease, address=192.168.123.20
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Forward time jump detected!
Apr 01 08:15:42 ovirt-engine.ovirt NetworkManager[1158]:   
[1680336942.4396] dhcp4 (eth0): activation: beginning transaction (timeout in 
45 seconds)
Apr 01 08:15:42 ovirt-engine.ovirt chronyd[1072]: Can't synchronise: no 
selectable sources

When this happens in the hosted engine, typically:
1. the DWH becomes inconsistent, as I reported here: 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/KPW5FFKG3AI6EINW4G74IKTYB2E4A5DT/#RDMSESARKHEGCV4PTIDVBTLCTEK3VPTA
 or 
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/WUNZUSZ2ARRLGN5AMUSVDXFQ2VWEXK6H/#OMXYSEDVCCHQSPMVXA5KM57ZWR3XHVJI
2. the skew causes the engine to kick off the nodes, which appear "down" 
in "connecting" state

This compromises every task in a pending state and triggers 
countermeasures in the ovirt-engine manager and in the vdsm daemon as well.


For now I have tried putting an "hwclock --hctosys" in the engine's 
crontab every 5 minutes, since the hardware clock does not seem to skew.


[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
No, for now it seems "stable"; awaiting the next event.


[ovirt-users] Failing "change Master storage domain" from gluster to iscsi

2023-03-28 Thread Diego Ercolani
In the current release of oVirt (4.5.4) I'm experiencing a failure when 
changing the master storage domain from a gluster volume to any other domain.

The GUI reports a "general" error. Watching the engine log:

2023-03-28 11:51:16,601Z WARN  
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] 
Unexpected return value: TaskStatus [code=331, message=value=Tar command 
failed: ({'reader': {'cmd': ['/usr/bin/tar', 'cf', '-', 
'--exclude=./lost+found', '-C', 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master',
 '.'], 'rc': 1, 'err': '/usr/bin/tar: 
./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we read 
it\n/usr/bin/tar: 
./tasks/87783efa-42ac-4cd9-bda5-ad68c59bb881/87783efa-42ac-4cd9-bda5-ad68c59bb881.task:
 file changed as we read it\n'}},) abortedcode=331]
2023-03-28 11:51:16,601Z ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-46) [] 
Failed in 'HSMGetAllTasksStatusesVDS' method

It seems that something is changing files under the directory, but:
[vdsm@ovirt-node2 4745320f-bfc3-46c4-8849-b4fe8f1b2de6]$ /usr/bin/tar -cf - 
--exclude=./lost+found -C 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master'
 '.' > /tmp/tar.tar
/usr/bin/tar: ./tasks/20a9aa7f-80f5-403b-b296-ea95d9fd3f97: file changed as we 
read it
/usr/bin/tar: ./tasks: file changed as we read it
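
(For context: GNU tar exits with status 1 when it emits the "file changed
as we read it" warning, which is presumably the 'rc': 1 that the VDSM
reader reports above.)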

[vdsm@ovirt-node2 master]$ find 
'/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master'
 -mtime -1
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/tasks
[vdsm@ovirt-node2 master]$ ls -l 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/master/
total 0
drwxr-xr-x. 6 vdsm kvm 182 Mar 28 11:51 tasks
drwxr-xr-x. 2 vdsm kvm   6 Mar 26 20:36 vms

[vdsm@ovirt-node2 master]$ date; stat tasks
Tue Mar 28 12:04:06 UTC 2023
  File: tasks
  Size: 182 Blocks: 0  IO Block: 131072 directory
Device: 31h/49d Inode: 12434008067414313592  Links: 6
Access: (0755/drwxr-xr-x)  Uid: (   36/vdsm)   Gid: (   36/ kvm)
Context: system_u:object_r:fusefs_t:s0
Access: 2023-03-28 11:55:17.771046746 +
Modify: 2023-03-28 11:51:16.641145314 +
Change: 2023-03-28 11:51:16.641145314 +
 Birth: -

It seems the tasks directory hasn't been touched since


[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
I see entries like this in the journal of every node:
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[4105511]: s9 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/glusterSD/ovirt-node3.ovirt:_gv0/4745320f-bfc3-46c4-8849-b4fe8f1b2de6/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[4105511]: s9 renewal error -202 delta_length 10 last_success 1191216
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[2750073]: s11 delta_renew read timeout 10 sec offset 0 
/rhev/data-center/mnt/ovirt-nfsha.ovirt:_dati_drbd0/2527ed0f-e91a-4748-995c-e644362e8408/dom_md/ids
Mar 28 10:26:58 ovirt-node3.ovirt sanlock[1660]: 2023-03-28 10:26:58 1191247 
[2750073]: s11 renewal error -202 delta_length 10 last_success 1191217

As you see, it's complaining about a gluster volume (hosting VMs and 
mapped on three nodes with the terrible SATA SSD: Samsung_SSD_870_EVO_4TB).


[ovirt-users] Re: Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
The scheduling policy was "Suspend Workload if needed", with parallel 
migration disabled.
The problem is that the Engine (mapped on an external NFS domain, 
implemented by a Linux box with no other VM mapped on it) simply 
disappears. I have a single 10 Gbps Intel ethernet link that I use to 
carry the storage, management and "production" networks, but I haven't 
recorded any bandwidth-limit issue.


[ovirt-users] Disable "balancing" and automatic migration

2023-03-28 Thread Diego Ercolani
Hello,
in my installation I have to use poor storage... oVirt doesn't handle 
such a case and starts to "balance" and move VMs around... taking too 
many snapshots and stressing the already poor performance until the 
whole cluster messes up.
Why don't the VMs go into "Pause" state, instead of the cluster 
preferring to migrate things around and messing up everything?
This is a reference I found; for the record, I'm disabling 
auto-migration on every VM, hoping this helps:

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/24KQZFP2PCW462UZXKNAAJKDL44WU5OV/#24KQZFP2PCW462UZXKNAAJKDL44WU5OV


Re: [Pce] Adoption Poll for draft-dhody-pce-pceps-tls13-02

2023-03-27 Thread Diego R. Lopez
Hi,

As one of the authors of the original PCEPS proposal, I definitely acknowledge 
that an update addressing TLS evolution is required. Therefore I support adoption.

Be goode,

--
“This time we will not fail, Doctor Infierno”

Dr Diego R. Lopez
Telefonica I+D
https://www.linkedin.com/dr2lopez/

e-mail: diego.r.lo...@telefonica.com<mailto:diego.r.lo...@telefonica.com>
Mobile: +34 682 051 091
-

On 27/3/23, 18:49,  wrote:

Dear WG,

During the PCE session today, there was clear support behind the PCEPS
updates I-D. Let's take it to the next level: do you consider that
draft-dhody-pce-pceps-tls13-02 should be adopted as a PCE WG item?

Please, share your answer and any detailed feedback using the PCE
mailing list.

Thanks,

Julien


_

Ce message et ses pieces jointes peuvent contenir des informations 
confidentielles ou privilegiees et ne doivent donc
pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce 
message par erreur, veuillez le signaler
a l'expediteur et le detruire ainsi que les pieces jointes. Les messages 
electroniques etant susceptibles d'alteration,
Orange decline toute responsabilite si ce message a ete altere, deforme ou 
falsifie. Merci.

This message and its attachments may contain confidential or privileged 
information that may be protected by law;
they should not be distributed, used or copied without authorisation.
If you have received this email in error, please notify the sender and delete 
this message and its attachments.
As emails may be altered, Orange is not liable for messages that have been 
modified, changed or falsified.
Thank you.

___
Pce mailing list
Pce@ietf.org
https://www.ietf.org/mailman/listinfo/pce



Este mensaje y sus adjuntos se dirigen exclusivamente a su destinatario, puede 
contener información privilegiada o confidencial y es para uso exclusivo de la 
persona o entidad de destino. Si no es usted. el destinatario indicado, queda 
notificado de que la lectura, utilización, divulgación y/o copia sin 
autorización puede estar prohibida en virtud de la legislación vigente. Si ha 
recibido este mensaje por error, le rogamos que nos lo comunique inmediatamente 
por esta misma vía y proceda a su destrucción.

The information contained in this transmission is confidential and privileged 
information intended only for the use of the individual or entity named above. 
If the reader of this message is not the intended recipient, you are hereby 
notified that any dissemination, distribution or copying of this communication 
is strictly prohibited. If you have received this transmission in error, do not 
read it. Please immediately reply to the sender that you have received this 
communication in error and then delete it.

Esta mensagem e seus anexos se dirigem exclusivamente ao seu destinatário, pode 
conter informação privilegiada ou confidencial e é para uso exclusivo da pessoa 
ou entidade de destino. Se não é vossa senhoria o destinatário indicado, fica 
notificado de que a leitura, utilização, divulgação e/ou cópia sem autorização 
pode estar proibida em virtude da legislação vigente. Se recebeu esta mensagem 
por erro, rogamos-lhe que nos o comunique imediatamente por esta mesma via e 
proceda a sua destruição
___
Pce mailing list
Pce@ietf.org
https://www.ietf.org/mailman/listinfo/pce


Re: Custom plugin - multi-schema text input

2023-03-27 Thread Diego Mainou
Hi Justin, 

It seems to me that you are trying to do too many things in one step, 
and that you will struggle to find any software, cheap or expensive, 
that does what you are describing in a single step. 

ETL tools are good, but they are not magical; even AI needs to be trained. 

Best practice is to separate acquisition from business logic. 
So my recommendation would be to grab those files and acquire them in their 
native state + governance (e.g. a load id) before you do anything to them. 

Further, because you are dealing with many files of a distinct nature, 
you may wish to segregate the "acquisition" from the loading, 
e.g. by creating: 


* A generic and reusable component that 'copies/moves' the files from 
wherever they are located into your landing zone. 
* A bespoke component that acquires either a specific file or a specific 
file type (e.g. JSON) and outputs a generic format, e.g. a serialised file. 
* A generic and reusable component that grabs files in the generic format 
and loads them into a table containing the raw data plus governance. 

The above will result in files from all walks of life being loaded into your 
staging database in their raw state. This is very important for governance 
purposes. 

Potentially your next step is to create a generic and reusable component that 
utilises metadata injection to parse JSON into columns + governance. 
Rinse and repeat for xml, csv, etc. 

The next step is the mapping of your data and your dimensions. Once you have 
your SKs (surrogate keys) you can drop the values that were used to map those 
SKs, and so on. 

Diego 


Diego Mainou 
Product Manager 
M. +61 415 152 091 
E. diego.mai...@bizcubed.com.au 
www.bizcubed.com.au 


From: "Austin, Justin via users"  
To: "users"  
Sent: Tuesday, 28 March, 2023 1:41:06 AM 
Subject: Custom plugin - multi-schema text input 



Hi Hop users, 



We’re evaluating whether HOP is the right tool to solve a common problem for 
our business. 



We encounter hundreds of different file formats containing similar layers of 
one-to-many hierarchy (simplified example below). Getting this to work using 
out-of-box inputs/outputs and transform components results in a 
complex/convoluted set of workflows & pipelines. Since we run into this so 
often, we would like to develop a plugin with a custom “input” component that 
reads the input file, inserts some ID fields for relationships, and exposes 
multiple output rowsets (one for each schema/row type) that can be mapped to 
separate downstream transformations. Eventually we’d like to make another 
custom “output” component that can accept multiple inputs to load them where we 
need them with hierarchy preserved (JSON, relational DB, etc.). 



After reviewing the plugin documentation and samples, I’m still not sure 
whether this is possible. It seems that the relevant plugin base classes assume 
there will always be a single schema (IRowMeta) and single rowset shared by all 
input and output connections/hops. I believe we would require a single 
“transform” to have multiple IRowMeta and multiple rowsets and the ability to 
select a specific one for any given hop to a downstream transform/component. 



Is there a good path to accomplishing this with a HOP plugin? Or perhaps a 
better approach to the problem with existing Hop features? 



Thanks! 



Example file: 

REC|Jane Smith|03-20-2003 
ADDR|123 Main Street|Apartment 321|Anytown|US|5 
ACT |987654321|$4321.56|02-01-2023|03-02-2023 
DTL|debit|$23.45|02-05-2023 
DTL|debit|$143.20|02-13-2023 
DTL|credit|$652.02|02-14-2023 
DTL|debit|$8.78|02-28-2023 
ACT |56789123|$7894.56|02-01-2023|03-02-2023 
DTL|credit|$0.28|02-14-2023 
REC|John Jacobs|03-20-2003 
ADDR|876 Big Avenue||Anywhere|US|6 
ACT |5632178|$2256.79|02-01-2023|03-02-2023 
DTL|credit|$0.02|02-14-2023 
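
(Reading the sample above, the implied one-to-many hierarchy would be:

    REC (person)
    ├── ADDR (address)
    └── ACT (account)
        └── DTL (transaction detail)
)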







Re: [mild OT] disks for a home NAS

2023-03-26 Thread Diego Zuccato
With 4 disks I couldn't say. The Exos should be meant for "big" RAIDs 
(more than 8 disks, IIRC). But for a very small price difference, 
probably yes.
To increase capacity, it would probably also help to switch to RAID5, 
making sure in any case that the disks are CMR and not SMR (for a while 
the REDs were SMR while the RED Pros were CMR; they later "corrected 
their aim").


Diego

On 27/03/2023 08:16, Leandro Noferini wrote:

Good morning everyone,

I have a home NAS managed with openmediavault (hence Debian, slightly 
dressed up, whence the mild OT) on a small HP server with room for five 
disks: one for the operating system and the other four arranged in two 
RAID-mirror pairs for better resilience, holding the data. It has all 
been running for quite a few years.


Currently the four data disks are all Western Digital Red and they have 
indeed behaved well, but I need to replace one pair because I have to 
increase capacity.


I checked with my monopolist of choice and saw that, besides those 
disks, they also sell "datacenter" disks, branded "Exos enterprise", 
which cost slightly more (about 20€): considering that noise is not a 
problem (the whole thing sits in a closet far from any ears), is the 
extra cost worth it?


– Ciao leandro



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: [Gluster-users] How to configure?

2023-03-24 Thread Diego Zuccato
There are 285 files in /var/lib/glusterd/vols/cluster_data ... including 
many files with names related to quorum bricks already moved to a 
different path (like cluster_data.client.clustor02.srv-quorum-00-d.vol 
that should already have been replaced by 
cluster_data.clustor02.srv-bricks-00-q.vol -- and both vol files exist).


Is there something I should check inside the volfiles?

Diego

On 24/03/2023 13:05, Strahil Nikolov wrote:

Can you check your volume file contents?
Maybe it really can't find (or access) a specific volfile ?

Best Regards,
Strahil Nikolov

On Fri, Mar 24, 2023 at 8:07, Diego Zuccato
 wrote:
In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +] E [MSGID: 104021]
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the
volume file [{from server}, {errno=2}, {error=File o directory non
esistente}]

And *lots* of gfid-mismatch errors in glustershd.log .

Couldn't find anything that would prevent heal to start. :(

Diego

On 21/03/2023 20:39, Strahil Nikolov wrote:
 > I have no clue. Have you checked for errors in the logs ? Maybe you
 > might find something useful.
 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Tue, Mar 21, 2023 at 9:56, Diego Zuccato
 >    mailto:diego.zucc...@unibo.it>> wrote:
 >    Killed glfsheal, after a day there were 218 processes, then
they got
 >    killed by OOM during the weekend. Now there are no processes
active.
 >    Trying to run "heal info" reports lots of files quite quickly
but does
 >    not spawn any glfsheal process. And neither does restarting
glusterd.
 >    Is there some way to selectively run glfsheal to fix one brick
at a
 >    time?
 >
 >    Diego
 >
 >    Il 21/03/2023 01:21, Strahil Nikolov ha scritto:
 >      > Theoretically it might help.
 >      > If possible, try to resolve any pending heals.
 >      >
 >      > Best Regards,
 >      > Strahil Nikolov
 >      >
 >      >    On Thu, Mar 16, 2023 at 15:29, Diego Zuccato
 >      >    mailto:diego.zucc...@unibo.it>
<mailto:diego.zucc...@unibo.it>> wrote:
 >      >    In Debian stopping glusterd does not stop brick
processes: to stop
 >      >    everything (and free the memory) I have to
 >      >    systemctl stop glusterd
 >      >        killall glusterfs{,d}
 >      >        killall glfsheal
 >      >        systemctl start glusterd
 >      >    [this behaviour hangs a simple reboot of a machine running
 >    glusterd...
 >      >    not nice]
 >      >
 >      >    For now I just restarted glusterd w/o killing the bricks:
 >      >
 >      >    root@str957-clustor00:~# ps aux|grep glfsheal|wc -l ;
 >    systemctl restart
 >      >    glusterd ; ps aux|grep glfsheal|wc -l
 >      >    618
 >      >    618
 >      >
 >      >    No change neither in glfsheal processes nor in free
memory :(
 >      >    Should I "killall glfsheal" before OOK kicks in?
 >      >
 >      >    Diego
 >      >
 >      >    Il 16/03/2023 12:37, Strahil Nikolov ha scritto:
 >      >      > Can you restart glusterd service (first check that
it was not
 >      >    modified
 >      >      > to kill the bricks)?
 >      >      >
 >      >      > Best Regards,
 >      >      > Strahil Nikolov
 >      >      >
 >      >      >    On Thu, Mar 16, 2023 at 8:26, Diego Zuccato
 >      >      >    mailto:diego.zucc...@unibo.it> <mailto:diego.zucc...@unibo.it>
 >    <mailto:diego.zucc...@unibo.it>> wrote:
 >      >      >    OOM is just just a matter of time.
 >      >      >
 >      >      >    Today mem use is up to 177G/187 and:
 >      >      >    # ps aux|grep glfsheal|wc -l
 >      >      >    551
 >      >      >
 >      >      >    (well, one is actually the grep process, so
"only" 550
 >    glfsheal
 >      >      >    processes.
 >      >      >
 >      >      >    I'll take the last 5:
 >      >      >    root    3266352  0.5  0.0 600292 93044 ?        Sl
 >    06:55  0:07
 >      >      >    /usr/libexec/glusterfs/glfsheal cluster_data
 >    info-summary --xml
 >      >      >    root    3267220  0.7  0.0 600292 91964 ?        Sl
 >    07:00  0

Re: [Gluster-users] How to configure?

2023-03-23 Thread Diego Zuccato

In glfsheal-Connection.log I see many lines like:
[2023-03-13 23:04:40.241481 +] E [MSGID: 104021] 
[glfs-mgmt.c:586:glfs_mgmt_getspec_cbk] 0-gfapi: failed to get the 
volume file [{from server}, {errno=2}, {error=File o directory non 
esistente}]


And *lots* of gfid-mismatch errors in glustershd.log .

Couldn't find anything that would prevent heal to start. :(

Diego

On 21/03/2023 20:39, Strahil Nikolov wrote:
I have no clue. Have you checked for errors in the logs ? Maybe you 
might find something useful.


Best Regards,
Strahil Nikolov

On Tue, Mar 21, 2023 at 9:56, Diego Zuccato
 wrote:
Killed glfsheal, after a day there were 218 processes, then they got
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports lots of files quite quickly but does
not spawn any glfsheal process. And neither does restarting glusterd.
Is there some way to selectively run glfsheal to fix one brick at a
time?

Diego

Il 21/03/2023 01:21, Strahil Nikolov ha scritto:
 > Theoretically it might help.
 > If possible, try to resolve any pending heals.
 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Thu, Mar 16, 2023 at 15:29, Diego Zuccato
 >    mailto:diego.zucc...@unibo.it>> wrote:
 >    In Debian stopping glusterd does not stop brick processes: to stop
 >    everything (and free the memory) I have to
 >    systemctl stop glusterd
 >        killall glusterfs{,d}
 >        killall glfsheal
 >        systemctl start glusterd
 >    [this behaviour hangs a simple reboot of a machine running
glusterd...
 >    not nice]
 >
 >    For now I just restarted glusterd w/o killing the bricks:
 >
 >    root@str957-clustor00:~# ps aux|grep glfsheal|wc -l ;
systemctl restart
 >    glusterd ; ps aux|grep glfsheal|wc -l
 >    618
 >    618
 >
 >    No change neither in glfsheal processes nor in free memory :(
 >    Should I "killall glfsheal" before OOK kicks in?
 >
 >    Diego
 >
 >    Il 16/03/2023 12:37, Strahil Nikolov ha scritto:
 >      > Can you restart glusterd service (first check that it was not
 >    modified
 >      > to kill the bricks)?
     >      >
 >      > Best Regards,
 >      > Strahil Nikolov
 >      >
 >      >    On Thu, Mar 16, 2023 at 8:26, Diego Zuccato
 >      >    mailto:diego.zucc...@unibo.it>
<mailto:diego.zucc...@unibo.it>> wrote:
 >      >    OOM is just just a matter of time.
 >      >
 >      >    Today mem use is up to 177G/187 and:
 >      >    # ps aux|grep glfsheal|wc -l
 >      >    551
 >      >
 >      >    (well, one is actually the grep process, so "only" 550
glfsheal
 >      >    processes.
 >      >
 >      >    I'll take the last 5:
 >      >    root    3266352  0.5  0.0 600292 93044 ?        Sl 
06:55  0:07

 >      >    /usr/libexec/glusterfs/glfsheal cluster_data
info-summary --xml
 >      >    root    3267220  0.7  0.0 600292 91964 ?        Sl 
07:00  0:07

 >      >    /usr/libexec/glusterfs/glfsheal cluster_data
info-summary --xml
 >      >    root    3268076  1.0  0.0 600160 88216 ?        Sl 
07:05  0:08

 >      >    /usr/libexec/glusterfs/glfsheal cluster_data
info-summary --xml
 >      >    root    3269492  1.6  0.0 600292 91248 ?        Sl 
07:10  0:07

 >      >    /usr/libexec/glusterfs/glfsheal cluster_data
info-summary --xml
 >      >    root    3270354  4.4  0.0 600292 93260 ?        Sl 
07:15  0:07

 >      >    /usr/libexec/glusterfs/glfsheal cluster_data
info-summary --xml
 >      >
 >      >    -8<--
 >      >    root@str957-clustor00:~# ps -o ppid= 3266352
 >      >    3266345
 >      >    root@str957-clustor00:~# ps -o ppid= 3267220
 >      >    3267213
 >      >    root@str957-clustor00:~# ps -o ppid= 3268076
 >      >    3268069
 >      >    root@str957-clustor00:~# ps -o ppid= 3269492
 >      >    3269485
 >      >    root@str957-clustor00:~# ps -o ppid= 3270354
 >      >    3270347
 >      >    root@str957-clustor00:~# ps aux|grep 3266345
 >      >    root    3266345  0.0  0.0 430536 10764 ?        Sl 
06:55  0:00

 >      >    gluster volume heal cluster_data info summary --xml
 >      >    root    3271532  0.0  0.0  6260  2500 pts/1    S+ 
07:21  0:00

 >    grep
 >      >    326634

Re: [OPSAWG] IPR Poll on draft-palmero-opsawg-dmlmo-09

2023-03-23 Thread Diego R. Lopez
I am not aware of any IPR related to this draft.

And, as a coauthor, I support adoption. This draft not only addresses a 
relevant issue in operations, but has also become the basis of a wider 
discussion on related aspects of strong relevance to operational practices.

Be goode,

--
“This time we will not fail, Doctor Infierno”

Dr Diego R. Lopez
Telefonica I+D
https://www.linkedin.com/dr2lopez/

e-mail: diego.r.lo...@telefonica.com<mailto:diego.r.lo...@telefonica.com>
Mobile: +34 682 051 091
-

On 17/3/23, 02:28,  wrote:

Hi Authors and Contributors,

Along with the WG adoption call on this draft, I'd like all authors and 
contributors to confirm on the list.
https://datatracker.ietf.org/doc/draft-palmero-opsawg-dmlmo/

Please respond if you are aware of any IPR related to this draft.
If you are aware of IPR, please indicate whether or not this has been disclosed 
per IETF IPR rules (see RFCs 3669, 5378, and 8179).

Thanks,
Tianran





[webkit-changes] [WebKit/WebKit] 42e03d: [GTK] Unreviewed, build fix for Debian Stable afte...

2023-03-22 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 42e03d8a173ec2852bb59448504d19d3db397050
  
https://github.com/WebKit/WebKit/commit/42e03d8a173ec2852bb59448504d19d3db397050
  Author: Diego Pino Garcia 
  Date:   2023-03-22 (Wed, 22 Mar 2023)

  Changed paths:
M Source/WebKit/NetworkProcess/soup/WebSocketTaskSoup.cpp

  Log Message:
  ---
  [GTK] Unreviewed, build fix for Debian Stable after 261972@main
https://bugs.webkit.org/show_bug.cgi?id=254221

Replace 'WebSocketChannel.h' with 'ThreadableWebSocketChannel.h'.

* Source/WebKit/NetworkProcess/soup/WebSocketTaskSoup.cpp:
(WebKit::WebSocketTask::didFail):
(WebKit::WebSocketTask::close):
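
Illustratively (assuming the usual WebKit convention of including WebCore
headers as <WebCore/...>; this is a sketch, not the literal diff), the fix
amounts to swapping the header:

    -#include <WebCore/WebSocketChannel.h>
    +#include <WebCore/ThreadableWebSocketChannel.h>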

Canonical link: https://commits.webkit.org/261997@main




[nexa] Mozilla.ia is born: Artificial Intelligence must not miss the open-source opportunity | DDay.it

2023-03-22 Thread Diego Giorio
https://www.dday.it/redazione/45375/nasce-mozillaia-lintelligenza-artificiale-non-deve-perdere-le-opportunita-dellopen-source
I think this may be of interest. Have a good evening.
D.


[Wikidata-bugs] [Maniphest] T332021: Wikidata Articlequality ORES/ML model needs updating after MUL

2023-03-22 Thread diego
diego added subscribers: Isaac, diego.
diego added a comment.


  @Michael FYI:
  @Isaac has made interesting progress on automatic Wikidata item 
quality evaluation, T321224 <https://phabricator.wikimedia.org/T321224>. 
Also, I'm leading other work on vandalism detection on Wikidata, 
T328813 <https://phabricator.wikimedia.org/T328813>.

TASK DETAIL
  https://phabricator.wikimedia.org/T332021



Re: [Gluster-users] How to configure?

2023-03-21 Thread Diego Zuccato
Killed glfsheal, after a day there were 218 processes, then they got 
killed by OOM during the weekend. Now there are no processes active.
Trying to run "heal info" reports lots of files quite quickly but does 
not spawn any glfsheal process. And neither does restarting glusterd.

Is there some way to selectively run glfsheal to fix one brick at a time?

Diego

On 21/03/2023 01:21, Strahil Nikolov wrote:

Theoretically it might help.
If possible, try to resolve any pending heals.

Best Regards,
Strahil Nikolov

On Thu, Mar 16, 2023 at 15:29, Diego Zuccato
 wrote:
In Debian stopping glusterd does not stop brick processes: to stop
everything (and free the memory) I have to
systemctl stop glusterd
   killall glusterfs{,d}
   killall glfsheal
   systemctl start glusterd
[this behaviour hangs a simple reboot of a machine running glusterd...
not nice]

For now I just restarted glusterd w/o killing the bricks:

root@str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart
glusterd ; ps aux|grep glfsheal|wc -l
618
618

No change neither in glfsheal processes nor in free memory :(
Should I "killall glfsheal" before OOK kicks in?

Diego

On 16/03/2023 12:37, Strahil Nikolov wrote:
 > Can you restart glusterd service (first check that it was not
modified
 > to kill the bricks)?
 >
 > Best Regards,
 > Strahil Nikolov
 >
     >    On Thu, Mar 16, 2023 at 8:26, Diego Zuccato
 >    mailto:diego.zucc...@unibo.it>> wrote:
 >    OOM is just just a matter of time.
 >
 >    Today mem use is up to 177G/187 and:
 >    # ps aux|grep glfsheal|wc -l
 >    551
 >
 >    (well, one is actually the grep process, so "only" 550 glfsheal
 >    processes.
 >
 >    I'll take the last 5:
 >    root    3266352  0.5  0.0 600292 93044 ?        Sl  06:55  0:07
 >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
 >    root    3267220  0.7  0.0 600292 91964 ?        Sl  07:00  0:07
 >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
 >    root    3268076  1.0  0.0 600160 88216 ?        Sl  07:05  0:08
 >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
 >    root    3269492  1.6  0.0 600292 91248 ?        Sl  07:10  0:07
 >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
 >    root    3270354  4.4  0.0 600292 93260 ?        Sl  07:15  0:07
 >    /usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
 >
 >    -8<--
 >    root@str957-clustor00:~# ps -o ppid= 3266352
 >    3266345
 >    root@str957-clustor00:~# ps -o ppid= 3267220
 >    3267213
 >    root@str957-clustor00:~# ps -o ppid= 3268076
 >    3268069
 >    root@str957-clustor00:~# ps -o ppid= 3269492
 >    3269485
 >    root@str957-clustor00:~# ps -o ppid= 3270354
 >    3270347
 >    root@str957-clustor00:~# ps aux|grep 3266345
 >    root    3266345  0.0  0.0 430536 10764 ?        Sl  06:55  0:00
 >    gluster volume heal cluster_data info summary --xml
 >    root    3271532  0.0  0.0  6260  2500 pts/1    S+  07:21  0:00
grep
 >    3266345
 >    root@str957-clustor00:~# ps aux|grep 3267213
 >    root    3267213  0.0  0.0 430536 10644 ?        Sl  07:00  0:00
 >    gluster volume heal cluster_data info summary --xml
 >    root    3271599  0.0  0.0  6260  2480 pts/1    S+  07:22  0:00
grep
 >    3267213
 >    root@str957-clustor00:~# ps aux|grep 3268069
 >    root    3268069  0.0  0.0 430536 10704 ?        Sl  07:05  0:00
 >    gluster volume heal cluster_data info summary --xml
 >    root    3271626  0.0  0.0  6260  2516 pts/1    S+  07:22  0:00
grep
 >    3268069
 >    root@str957-clustor00:~# ps aux|grep 3269485
 >    root    3269485  0.0  0.0 430536 10756 ?        Sl  07:10  0:00
 >    gluster volume heal cluster_data info summary --xml
 >    root    3271647  0.0  0.0  6260  2480 pts/1    S+  07:22  0:00
grep
 >    3269485
 >    root@str957-clustor00:~# ps aux|grep 3270347
 >    root    3270347  0.0  0.0 430536 10672 ?        Sl  07:15  0:00
 >    gluster volume heal cluster_data info summary --xml
 >    root    3271666  0.0  0.0  6260  2568 pts/1    S+  07:22  0:00
    grep
 >    3270347
 >    -8<--
 >
 >    Seems glfsheal is spawning more processes.
 >    I can't rule out a metadata corruption (or at least a desync),
but it
 >    shouldn't happen...
 >
 >    Diego
 >
 >    Il 15/03/2023 20:11, Strahil Nikolov ha scritto:
 &

[webkit-changes] [WebKit/WebKit] 212eab: [GTK] Unreviewed, remove duplicated entries in API...

2023-03-20 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 212eab5142710901ca363fc47f48c211957bf5b6
  
https://github.com/WebKit/WebKit/commit/212eab5142710901ca363fc47f48c211957bf5b6
  Author: Diego Pino Garcia 
  Date:   2023-03-20 (Mon, 20 Mar 2023)

  Changed paths:
M Tools/TestWebKitAPI/glib/TestExpectations.json

  Log Message:
  ---
  [GTK] Unreviewed, remove duplicated entries in API GTK test expectations 
after 261731@main
https://bugs.webkit.org/show_bug.cgi?id=254002

Tests were failing to run because there were duplicated entries in API
GTK test expectations.

* Tools/TestWebKitAPI/glib/TestExpectations.json:

Canonical link: https://commits.webkit.org/261867@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[Solar-general] Video of Diego Saravia, 2021

2023-03-16 Thread Diego Saravia

This attachment was removed: VID-20230316-WA0141.mp4




Re: [Gluster-users] How to configure?

2023-03-16 Thread Diego Zuccato
In Debian stopping glusterd does not stop brick processes: to stop 
everything (and free the memory) I have to

  systemctl stop glusterd
  killall glusterfs{,d}
  killall glfsheal
  systemctl start glusterd
[this behaviour hangs a simple reboot of a machine running glusterd... 
not nice]


For now I just restarted glusterd w/o killing the bricks:

root@str957-clustor00:~# ps aux|grep glfsheal|wc -l ; systemctl restart 
glusterd ; ps aux|grep glfsheal|wc -l

618
618

No change in either the glfsheal processes or the free memory :(
Should I "killall glfsheal" before OOM kicks in?

Diego

On 16/03/2023 12:37, Strahil Nikolov wrote:
Can you restart glusterd service (first check that it was not modified 
to kill the bricks)?


Best Regards,
Strahil Nikolov

On Thu, Mar 16, 2023 at 8:26, Diego Zuccato
 wrote:
OOM is just a matter of time.

Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551

(well, one is actually the grep process, so "only" 550 glfsheal
processes.)

I'll take the last 5:
root    3266352  0.5  0.0 600292 93044 ?        Sl  06:55  0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root    3267220  0.7  0.0 600292 91964 ?        Sl  07:00  0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root    3268076  1.0  0.0 600160 88216 ?        Sl  07:05  0:08
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root    3269492  1.6  0.0 600292 91248 ?        Sl  07:10  0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root    3270354  4.4  0.0 600292 93260 ?        Sl  07:15  0:07
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml

-8<--
root@str957-clustor00:~# ps -o ppid= 3266352
3266345
root@str957-clustor00:~# ps -o ppid= 3267220
3267213
root@str957-clustor00:~# ps -o ppid= 3268076
3268069
root@str957-clustor00:~# ps -o ppid= 3269492
3269485
root@str957-clustor00:~# ps -o ppid= 3270354
3270347
root@str957-clustor00:~# ps aux|grep 3266345
root    3266345  0.0  0.0 430536 10764 ?        Sl  06:55  0:00
gluster volume heal cluster_data info summary --xml
root    3271532  0.0  0.0  6260  2500 pts/1    S+  07:21  0:00 grep
3266345
root@str957-clustor00:~# ps aux|grep 3267213
root    3267213  0.0  0.0 430536 10644 ?        Sl  07:00  0:00
gluster volume heal cluster_data info summary --xml
root    3271599  0.0  0.0  6260  2480 pts/1    S+  07:22  0:00 grep
3267213
root@str957-clustor00:~# ps aux|grep 3268069
root    3268069  0.0  0.0 430536 10704 ?        Sl  07:05  0:00
gluster volume heal cluster_data info summary --xml
root    3271626  0.0  0.0  6260  2516 pts/1    S+  07:22  0:00 grep
3268069
root@str957-clustor00:~# ps aux|grep 3269485
root    3269485  0.0  0.0 430536 10756 ?        Sl  07:10  0:00
gluster volume heal cluster_data info summary --xml
root    3271647  0.0  0.0  6260  2480 pts/1    S+  07:22  0:00 grep
3269485
root@str957-clustor00:~# ps aux|grep 3270347
root    3270347  0.0  0.0 430536 10672 ?        Sl  07:15  0:00
gluster volume heal cluster_data info summary --xml
root    3271666  0.0  0.0  6260  2568 pts/1    S+  07:22  0:00 grep
3270347
-8<--

Seems glfsheal is spawning more processes.
I can't rule out a metadata corruption (or at least a desync), but it
shouldn't happen...

Diego

On 15/03/2023 20:11, Strahil Nikolov wrote:
 > If you don't experience any OOM , you can focus on the heals.
 >
 > 284 processes of glfsheal seems odd.
 >
 > Can you check the ppid for 2-3 randomly picked ?
 > ps -o ppid= 
 >
 > Best Regards,
 > Strahil Nikolov
 >
 >    On Wed, Mar 15, 2023 at 9:54, Diego Zuccato
 >    <diego.zucc...@unibo.it> wrote:
 >    I enabled it yesterday and that greatly reduced memory pressure.
 >    Current volume info:
 >    -8<--
 >    Volume Name: cluster_data
 >    Type: Distributed-Replicate
 >    Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
 >    Status: Started
 >    Snapshot Count: 0
 >    Number of Bricks: 45 x (2 + 1) = 135
 >    Transport-type: tcp
 >    Bricks:
 >    Brick1: clustor00:/srv/bricks/00/d
 >    Brick2: clustor01:/srv/bricks/00/d
 >    Brick3: clustor02:/srv/bricks/00/q (arbiter)
 >    [...]
 >    Brick133: clustor01:/srv/bricks/29/d
 >    Brick134: clustor02:/srv/bricks/29/d
 >    Brick135: clustor00:/srv/bricks/14/q (arbiter)
 >    Options Reconfigured:
 >    performance.quick-read: off
 >    cluster.entry-self-heal: on
 >    cluster.data-self-heal-algorithm: full
 >    cluster.metadata-self

Re: [Gluster-users] How to configure?

2023-03-15 Thread Diego Zuccato

OOM is just a matter of time.

Today mem use is up to 177G/187 and:
# ps aux|grep glfsheal|wc -l
551

(well, one is actually the grep process, so "only" 550 glfsheal processes.)

I'll take the last 5:
root 3266352  0.5  0.0 600292 93044 ?Sl   06:55   0:07 
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3267220  0.7  0.0 600292 91964 ?Sl   07:00   0:07 
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3268076  1.0  0.0 600160 88216 ?Sl   07:05   0:08 
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3269492  1.6  0.0 600292 91248 ?Sl   07:10   0:07 
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml
root 3270354  4.4  0.0 600292 93260 ?Sl   07:15   0:07 
/usr/libexec/glusterfs/glfsheal cluster_data info-summary --xml


-8<--
root@str957-clustor00:~# ps -o ppid= 3266352
3266345
root@str957-clustor00:~# ps -o ppid= 3267220
3267213
root@str957-clustor00:~# ps -o ppid= 3268076
3268069
root@str957-clustor00:~# ps -o ppid= 3269492
3269485
root@str957-clustor00:~# ps -o ppid= 3270354
3270347
root@str957-clustor00:~# ps aux|grep 3266345
root 3266345  0.0  0.0 430536 10764 ?Sl   06:55   0:00 
gluster volume heal cluster_data info summary --xml
root 3271532  0.0  0.0   6260  2500 pts/1S+   07:21   0:00 grep 
3266345

root@str957-clustor00:~# ps aux|grep 3267213
root 3267213  0.0  0.0 430536 10644 ?Sl   07:00   0:00 
gluster volume heal cluster_data info summary --xml
root 3271599  0.0  0.0   6260  2480 pts/1S+   07:22   0:00 grep 
3267213

root@str957-clustor00:~# ps aux|grep 3268069
root 3268069  0.0  0.0 430536 10704 ?Sl   07:05   0:00 
gluster volume heal cluster_data info summary --xml
root 3271626  0.0  0.0   6260  2516 pts/1S+   07:22   0:00 grep 
3268069

root@str957-clustor00:~# ps aux|grep 3269485
root 3269485  0.0  0.0 430536 10756 ?Sl   07:10   0:00 
gluster volume heal cluster_data info summary --xml
root 3271647  0.0  0.0   6260  2480 pts/1S+   07:22   0:00 grep 
3269485

root@str957-clustor00:~# ps aux|grep 3270347
root 3270347  0.0  0.0 430536 10672 ?Sl   07:15   0:00 
gluster volume heal cluster_data info summary --xml
root 3271666  0.0  0.0   6260  2568 pts/1S+   07:22   0:00 grep 
3270347

-8<--

Seems glfsheal is spawning more processes.
I can't rule out a metadata corruption (or at least a desync), but it 
shouldn't happen...


Diego

On 15/03/2023 20:11, Strahil Nikolov wrote:

If you don't experience any OOM , you can focus on the heals.

284 processes of glfsheal seems odd.

Can you check the ppid for 2-3 randomly picked ?
ps -o ppid= 

Best Regards,
Strahil Nikolov

    On Wed, Mar 15, 2023 at 9:54, Diego Zuccato
 wrote:
I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Brick133: clustor01:/srv/bricks/29/d
Brick134: clustor02:/srv/bricks/29/d
Brick135: clustor00:/srv/bricks/14/q (arbiter)
Options Reconfigured:
performance.quick-read: off
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
cluster.metadata-self-heal: on
cluster.shd-max-threads: 2
network.inode-lru-limit: 50
performance.md-cache-timeout: 600
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
features.quota-deem-statfs: on
performance.readdir-ahead: on
cluster.granular-entry-heal: enable
features.scrub: Active
features.bitrot: on
cluster.lookup-optimize: on
performance.stat-prefetch: on
performance.cache-refresh-timeout: 60
performance.parallel-readdir: on
performance.write-behind-window-size: 128MB
cluster.self-heal-daemon: enable
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
client.event-threads: 1
features.scrub-throttle: normal
diagnostics.brick-log-level: ERROR
diagnostics.client-log-level: ERROR
config.brick-threads: 0
cluster.lookup-unhashed: on
config.client-threads: 1
cluster.use-anonymous-inode: off
diagnostics.brick-sys-log-level: CRITICAL
features.scrub-freq: monthly
cluster.data-self-heal: on
cluster.brick-multiplex: on
cluster.daemon-log-level: ERROR
-8<--

htop reports that memory usage is up to 143G, there are 602 tasks and
5232 threads (~20 running) on clustor00, 117G/49 tasks

[webkit-changes] [WebKit/WebKit] c152ad: [GTK] Unreviewed, garden API GTK tests

2023-03-15 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: c152adb2679026a4c9cb9cb17844fc25d9f01015
  
https://github.com/WebKit/WebKit/commit/c152adb2679026a4c9cb9cb17844fc25d9f01015
  Author: Diego Pino Garcia 
  Date:   2023-03-15 (Wed, 15 Mar 2023)

  Changed paths:
M Tools/TestWebKitAPI/glib/TestExpectations.json

  Log Message:
  ---
  [GTK] Unreviewed, garden API GTK tests
https://bugs.webkit.org/show_bug.cgi?id=254002

There are currently several API GTK tests failing that are slowing down
the EWS API-GTK queue. The tests marked as failure in this patch were
tested in a post-commit bot, so these are known failures.

* Tools/TestWebKitAPI/glib/TestExpectations.json:

Canonical link: https://commits.webkit.org/261731@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] caaf11: [GCC] Unreviewed, build fix for Ubuntu 20.04 after...

2023-03-15 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: caaf11e8a339070872c2a0f554828641b5d55b16
  
https://github.com/WebKit/WebKit/commit/caaf11e8a339070872c2a0f554828641b5d55b16
  Author: Diego Pino Garcia 
  Date:   2023-03-15 (Wed, 15 Mar 2023)

  Changed paths:
M Source/WebCore/platform/graphics/DecodingOptions.h

  Log Message:
  ---
  [GCC] Unreviewed, build fix for Ubuntu 20.04 after 261700@main
https://bugs.webkit.org/show_bug.cgi?id=253655

Default comparisons by value (P1946R0) are only supported since GCC10.

* Source/WebCore/platform/graphics/DecodingOptions.h:
(WebCore::DecodingOptions::operator== const):
(WebCore::DecodingOptions::operator!= const):

Canonical link: https://commits.webkit.org/261725@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] b323e2: [GCC] Unreviewed, build fix for Ubuntu 20.04 after...

2023-03-15 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: b323e2ac5c49f4299fb29a274b59cfb1e812
  
https://github.com/WebKit/WebKit/commit/b323e2ac5c49f4299fb29a274b59cfb1e812
  Author: Diego Pino Garcia 
  Date:   2023-03-15 (Wed, 15 Mar 2023)

  Changed paths:
M Source/WebCore/rendering/style/ListStyleType.h

  Log Message:
  ---
  [GCC] Unreviewed, build fix for Ubuntu 20.04 after 261651@main
https://bugs.webkit.org/show_bug.cgi?id=253819

GCC9 requires explicit definition of operator!=.

* Source/WebCore/rendering/style/ListStyleType.h:
(WebCore::ListStyleType::operator!= const):

Canonical link: https://commits.webkit.org/261680@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


RE: Database lookup only returns one row

2023-03-15 Thread Diego Mainou
The Lookup step does as advertised: it returns 1 row per query/lookup. You may
wish to use the Database Join to do what you are looking for.
Sent from my Galaxy

 Original message 
From: Jonathan Haehnel
Date: 15/3/23 8:21 pm (GMT+10:00)
To: users@hop.apache.org
Subject: Database lookup only returns one row

Dear Hop Team,
 
I tested the Database lookup step with two different Databases (mysql&redshift).

Let’s say there is one input row. The lookup column value from the input relates to 
several rows in the lookup table. I would expect to get several rows out as a 
result but only get one. Am I doing something wrong or is this a bug (didn't 
find a ticket for it on GitHub)? 

 
I can work around this issue using Database Join, but it's not so nice, having to 
write a query, and I assume it's also not as efficient.

 
select account_key 
from dwh.dim_account
where email_key = ?
 
Kind Regards

Jonathan
 
Jonathan Haehnel
Business Intelligence Engineer

Travian Games GmbH | Wilhelm-Wagenfeld-Str. 22 | 80807 Munich | Germany
 
Tel.: +49 / (0)89 / 324 915 144
 
j.haeh...@traviangames.com
http://www.traviangames.com
 

 
Registered office: Munich
Commercial register number: HRB 173511
Managing Directors: Jörg Strathaus, Frederik Hammes
VAT ID: DE246258085
 
This email and its attachments are strictly confidential and are intended 
solely for the attention of the person to whom it is addressed. If you are not 
the intended recipient of this email, please delete it including
 its attachments immediately and inform us accordingly.
 





Re: [Gluster-users] How to configure?

2023-03-15 Thread Diego Zuccato

I enabled it yesterday and that greatly reduced memory pressure.
Current volume info:
-8<--
Volume Name: cluster_data
Type: Distributed-Replicate
Volume ID: a8caaa90-d161-45bb-a68c-278263a8531a
Status: Started
Snapshot Count: 0
Number of Bricks: 45 x (2 + 1) = 135
Transport-type: tcp
Bricks:
Brick1: clustor00:/srv/bricks/00/d
Brick2: clustor01:/srv/bricks/00/d
Brick3: clustor02:/srv/bricks/00/q (arbiter)
[...]
Brick133: clustor01:/srv/bricks/29/d
Brick134: clustor02:/srv/bricks/29/d
Brick135: clustor00:/srv/bricks/14/q (arbiter)
Options Reconfigured:
performance.quick-read: off
cluster.entry-self-heal: on
cluster.data-self-heal-algorithm: full
cluster.metadata-self-heal: on
cluster.shd-max-threads: 2
network.inode-lru-limit: 50
performance.md-cache-timeout: 600
performance.cache-invalidation: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
features.quota-deem-statfs: on
performance.readdir-ahead: on
cluster.granular-entry-heal: enable
features.scrub: Active
features.bitrot: on
cluster.lookup-optimize: on
performance.stat-prefetch: on
performance.cache-refresh-timeout: 60
performance.parallel-readdir: on
performance.write-behind-window-size: 128MB
cluster.self-heal-daemon: enable
features.inode-quota: on
features.quota: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
client.event-threads: 1
features.scrub-throttle: normal
diagnostics.brick-log-level: ERROR
diagnostics.client-log-level: ERROR
config.brick-threads: 0
cluster.lookup-unhashed: on
config.client-threads: 1
cluster.use-anonymous-inode: off
diagnostics.brick-sys-log-level: CRITICAL
features.scrub-freq: monthly
cluster.data-self-heal: on
cluster.brick-multiplex: on
cluster.daemon-log-level: ERROR
-8<--
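
For the record, turning it on is a cluster-wide option -- a sketch (the
max-bricks cap is optional and the value here is just a guess; bricks
need a restart for multiplexing to take effect):

gluster volume set all cluster.brick-multiplex on
gluster volume set all cluster.max-bricks-per-process 8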

htop reports that memory usage is up to 143G, there are 602 tasks and 
5232 threads (~20 running) on clustor00, 117G/49 tasks/1565 threads on 
clustor01 and 126G/45 tasks/1574 threads on clustor02.
I see quite a lot (284!) of glfsheal processes running on clustor00 (a 
"gluster v heal cluster_data info summary" is running on clustor02 since 
yesterday, still no output). Shouldn't it be just one per brick?


Diego

On 15/03/2023 08:30, Strahil Nikolov wrote:

Do you use brick multiplexing ?

Best Regards,
Strahil Nikolov

On Tue, Mar 14, 2023 at 16:44, Diego Zuccato
 wrote:
Hello all.

Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]),
configured in replica 3 arbiter 1. Using Debian packages from Gluster
9.x latest repository.

Seems 192G RAM are not enough to handle 30 data bricks + 15 arbiters
and
I often had to reload glusterfsd because glusterfs processes got killed
by OOM.
On top of that, performance have been quite bad, especially when we
reached about 20M files. On top of that, one of the servers have had
mobo issues that resulted in memory errors that corrupted some
bricks fs
(XFS; it required "xfs_repair -L" to fix).
Now I'm getting lots of "stale file handle" errors and other errors
(like directories that seem empty from the client but still containing
files in some bricks) and auto healing seems unable to complete.

Since I can't keep up continuing to manually fix all the issues, I'm
thinking about backup+destroy+recreate strategy.

I think that if I reduce the number of bricks per server to just 5
(RAID1 of 6x12TB disks) I might resolve RAM issues - at the cost of
longer heal times in case a disk fails. Am I right or it's useless?
Other recommendations?
Servers have space for another 6 disks. Maybe those could be used for
some SSDs to speed up access?

TIA.

-- 
Diego Zuccato

DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
<https://meet.google.com/cpu-eiue-hvk>
Gluster-users mailing list
Gluster-users@gluster.org <mailto:Gluster-users@gluster.org>
https://lists.gluster.org/mailman/listinfo/gluster-users
<https://lists.gluster.org/mailman/listinfo/gluster-users>



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[ovirt-users] watchdog: BUG: soft lockup - CPU#3 stuck for XXXs! mitigations?

2023-03-14 Thread Diego Ercolani
Hello,
I noticed that when you have poor "storage" performance, all the VMs are 
flooded with entries like the one in the subject.
Searching around there is a case under Red Hat:
https://access.redhat.com/solutions/5427
which suggests addressing the issue (if rocket performance from the 
storage is not possible) by tuning the elevator (I/O scheduler) in the kernel.

but in a virtual machine I have this default queue:
root@openproject:# cat /sys/block/sda/queue/scheduler 
[none] mq-deadline 

So it seems a little bit outdated.

Does this solution actually resolve it?
Any kernel gurus here with suggestions?
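
For reference, the runtime switch is just a sysfs write -- a sketch,
where the device name is an example and the right scheduler depends on
the storage stack (for virtio disks "none" is often suggested):

cat /sys/block/sda/queue/scheduler       # lists choices, e.g. "[none] mq-deadline"
echo mq-deadline > /sys/block/sda/queue/scheduler

A hypothetical udev rule (path and match are assumptions) to persist it:

ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"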

Another problem is that after these events the local VMs (the hosted
engine too) report an incorrect date/time (often in the year 2177),
creating annoying problems in the DataWarehouse and wedging all the
certificate clients.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JHNK6NPLO6OXJ74QQINWLCCDTEEFO55P/


[Gluster-users] How to configure?

2023-03-14 Thread Diego Zuccato

Hello all.

Our Gluster 9.6 cluster is showing increasing problems.
Currently it's composed of 3 servers (2x Intel Xeon 4210 [20 cores dual 
thread, total 40 threads], 192GB RAM, 30x HGST HUH721212AL5200 [12TB]), 
configured in replica 3 arbiter 1. Using Debian packages from Gluster 
9.x latest repository.


Seems 192G RAM is not enough to handle 30 data bricks + 15 arbiters, and 
I often had to reload glusterfsd because glusterfs processes got killed 
by OOM.
On top of that, performance has been quite bad, especially since we 
reached about 20M files. Moreover, one of the servers has had 
mobo issues that resulted in memory errors that corrupted some bricks' fs 
(XFS; it required "xfs_repair -L" to fix).
Now I'm getting lots of "stale file handle" errors and other errors 
(like directories that seem empty from the client but still containing 
files in some bricks) and auto healing seems unable to complete.


Since I can't keep up with manually fixing all the issues, I'm 
thinking about a backup+destroy+recreate strategy.


I think that if I reduce the number of bricks per server to just 5 
(RAID1 of 6x12TB disks) I might resolve RAM issues - at the cost of 
longer heal times in case a disk fails. Am I right or it's useless? 
Other recommendations?
Servers have space for another 6 disks. Maybe those could be used for 
some SSDs to speed up access?


TIA.

--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786




Community Meeting Calendar:

Schedule -
Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
Bridge: https://meet.google.com/cpu-eiue-hvk
Gluster-users mailing list
Gluster-users@gluster.org
https://lists.gluster.org/mailman/listinfo/gluster-users


[OAUTH-WG] React native

2023-03-13 Thread juan diego jacobo hernandez



diego-d...@hotmail.com

___
OAuth mailing list
OAuth@ietf.org
https://www.ietf.org/mailman/listinfo/oauth


[ovirt-users] Re: Self Hosted Engine in unaligned state: node are

2023-03-13 Thread Diego Ercolani
Finally, it seems the problem was in the external NFS server: rpc.gssd 
failed and the NFS service became unresponsive... so the hosted-engine 
configuration domain wasn't reachable.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3QMB3R3QU2M73WNOEOU7GDTZSG6A4U3W/


[ovirt-users] Self Hosted Engine in unaligned state: node are

2023-03-13 Thread Diego Ercolani
Hello,
ovirt-release-host-node-4.5.4-1.el8.x86_64

Today I found my cluster in an inconsistent state.
I have three nodes, ovirt-node2, ovirt-node3 and ovirt-node4, with the self 
hosted engine deployed using external NFS storage.

My first attempt was to launch hosted-engine --vm-status on the three nodes, and I 
get three inconsistent states:

[root@ovirt-node2 ~]# hosted-engine --vm-status
The hosted engine configuration has not been retrieved from shared storage yet,
please ensure that ovirt-ha-agent service is running.

--== Host ovirt-node3.ovirt (id: 1) status ==--

Host ID: 1
Host timestamp : 1942858
Score  : 3400
Engine status  : unknown stale-data
Hostname   : ovirt-node3.ovirt
Local maintenance  : False
stopped: False
crc32  : 37cf5256
conf_on_shared_storage : True
local_conf_timestamp   : 1942859
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1942858 (Sun Mar 12 01:26:20 2023)
host-id=1
score=3400
vm_conf_refresh_time=1942859 (Sun Mar 12 01:26:22 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host ovirt-node2.ovirt (id: 2) status ==--

Host ID: 2
Host timestamp : 4425500
Score  : 3400
Engine status  : unknown stale-data
Hostname   : ovirt-node2.ovirt
Local maintenance  : False
stopped: False
crc32  : ab944a8a
conf_on_shared_storage : True
local_conf_timestamp   : 4425500
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4425500 (Sun Mar 12 01:26:01 2023)
host-id=2
score=3400
vm_conf_refresh_time=4425500 (Sun Mar 12 01:26:01 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineUp
stopped=False

[root@ovirt-node3 ~]# hosted-engine --vm-status
--== Host ovirt-node4.ovirt (id: 3) status ==--

Host ID: 3
Host timestamp : 4452814
Score  : 3400
Engine status  : unknown stale-data
Hostname   : ovirt-node4.ovirt
Local maintenance  : False
stopped: False
crc32  : 95890d21
conf_on_shared_storage : True
local_conf_timestamp   : 4452814
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4452814 (Sun Mar 12 01:25:55 2023)
host-id=3
score=3400
vm_conf_refresh_time=4452814 (Sun Mar 12 01:25:55 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False



[root@ovirt-node4 ~]# hosted-engine --vm-status


--== Host ovirt-node3.ovirt (id: 1) status ==--

Host ID: 1
Host timestamp : 1942848
Score  : 3400
Engine status  : unknown stale-data
Hostname   : ovirt-node3.ovirt
Local maintenance  : False
stopped: False
crc32  : 7f645fbc
conf_on_shared_storage : True
local_conf_timestamp   : 1942848
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=1942848 (Sun Mar 12 01:26:10 2023)
host-id=1
score=3400
vm_conf_refresh_time=1942848 (Sun Mar 12 01:26:10 2023)
conf_on_shared_storage=True
maintenance=False
state=EngineDown
stopped=False


--== Host ovirt-node2.ovirt (id: 2) status ==--

Host ID: 2
Host timestamp : 4428404
Score  : 3400
Engine status  : unknown stale-data
Hostname   : ovirt-node2.ovirt
Local maintenance  : False
stopped: False
crc32  : af938ff8
conf_on_shared_storage : True
local_conf_timestamp   : 4428404
Status up-to-date  : False
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4428404 (Sun Mar 12 02:14:45 2023)
ho

[Wikidata-bugs] [Maniphest] T328813: Develop a ML-based service to detect vandalism on Wikidata

2023-03-10 Thread diego
diego added a comment.


  **Update**
  
  - New features have slightly improved the accuracy (now at 75%); I'm still 
working on improving the model.

TASK DETAIL
  https://phabricator.wikimedia.org/T328813

EMAIL PREFERENCES
  https://phabricator.wikimedia.org/settings/panel/emailpreferences/

To: diego
Cc: calbon, achou, MunizaA, Lydia_Pintscher, leila, diego, Aklapper, 
Astuthiodit_1, karapayneWMDE, Invadibot, Ywats0ns, maantietaja, ItamarWMDE, 
Akuckartz, Nandana, Abdeaitali, Lahi, Gq86, GoranSMilovanovic, QZanden, 
LawExplorer, Avner, _jensen, rosalieper, Scott_WUaS, Wikidata-bugs, aude, 
Capt_Swing, Mbch331
___
Wikidata-bugs mailing list -- wikidata-bugs@lists.wikimedia.org
To unsubscribe send an email to wikidata-bugs-le...@lists.wikimedia.org


Re: [nexa] Dai droni alle armi autonome: Lasciare l’Apocalisse alle macchine? - F. Farruggia (a cura di) - Prefazione di G. Parisi - Franco Angeli Open Access

2023-03-10 Thread Diego Giorio
Very interesting, thank you very much, I've already started reading it

Greetings and a good weekend to everyone

D.

From: nexa  on behalf of Diego.Latella 

Sent: Friday, March 10, 2023 4:03 PM
To: associ...@uspid.org ; nexa 
Subject: [nexa] Dai droni alle armi autonome: Lasciare l’Apocalisse alle 
macchine? - F. Farruggia (a cura di) - Prefazione di G. Parisi - Franco Angeli 
Open Access


I normally don't take selfies, partly because I don't know how, but above 
all because a certain sense of modesty pushes me to avoid self-promotion ...

But I thought you might be interested in the book just released in open 
access:

Francesca Farruggia (ed.)
Dai droni alle armi autonome: Lasciare l’Apocalisse alle macchine?
Preface by Giorgio Parisi
Contributions by: Fabrizio Battistelli, Sofia Bertieri, Francesca Farruggia, Barbara 
Gallo, Adriano Iaria, Diego Latella, Michael Malinconi, Giorgio Parisi, Juan 
Carlos Rossi, Maurizio Simoncelli, Gian Piero Siroli, Guglielmo Tamburrini.
https://series.francoangeli.it/index.php/oa/catalog/book/948

Happy reading

Diego
--
Dott. Diego Latella - Senior Researcher CNR/ISTI, Via Moruzzi 1, 56124 Pisa, 
Italy  (http://www.isti.cnr.it)
FM&&T Lab. (http://fmt.isti.cnr.it)
CNR/GI-STS (http://gists.pi.cnr.it)
https://www.isti.cnr.it/People/D.Latella - ph: +390506212982, fax: +390506212040
===

I don't quite know whether it is especially computer science or its 
subdiscipline Artificial Intelligence that has such an enormous affection for 
euphemism. We speak so spectacularly and so readily of computer systems that 
understand, that see, decide, make judgments, and so on, without ourselves 
recognizing our own superficiality and immeasurable naivete with respect to 
these concepts. And, in the process of so speaking, we anesthetise our ability 
to evaluate the quality of our work and, what is more important, to identify 
and become conscious of its end use.  […] One can't escape this state without 
asking, again and again: "What do I actually do? What is the final application 
and use of the products of my work?" and ultimately, "am I content or ashamed 
to have contributed to this use?"
-- Prof. Joseph Weizenbaum ["Not without us", ACM SIGCAS 16(2-3) 2--7 - Aug. 
1986]
--

___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[webkit-changes] [WebKit/WebKit] 45c0db: [GCC] Unreviewed, build fix for Ubuntu 20.04 after...

2023-03-09 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 45c0db167f82f774cb47bbfeb0a540fc494af88a
  
https://github.com/WebKit/WebKit/commit/45c0db167f82f774cb47bbfeb0a540fc494af88a
  Author: Diego Pino Garcia 
  Date:   2023-03-09 (Thu, 09 Mar 2023)

  Changed paths:
M Source/JavaScriptCore/wasm/WasmBBQJIT.cpp

  Log Message:
  ---
  [GCC] Unreviewed, build fix for Ubuntu 20.04 after 261358@main
https://bugs.webkit.org/show_bug.cgi?id=253530

GCC9 requires explicit definition of operator!=.

* Source/JavaScriptCore/wasm/WasmBBQJIT.cpp:
(JSC::Wasm::BBQJIT::Location::operator!= const):

Canonical link: https://commits.webkit.org/261480@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] 58d744: [webkitpy] Fix undefined variable 'self' after 261...

2023-03-09 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 58d744e9d380f951f0c68f1c62989ca4b38d77a1
  
https://github.com/WebKit/WebKit/commit/58d744e9d380f951f0c68f1c62989ca4b38d77a1
  Author: Diego Pino Garcia 
  Date:   2023-03-09 (Thu, 09 Mar 2023)

  Changed paths:
M Tools/Scripts/webkitpy/layout_tests/servers/web_platform_test_server.py

  Log Message:
  ---
  [webkitpy] Fix undefined variable 'self' after 261389@main
https://bugs.webkit.org/show_bug.cgi?id=253602

Reviewed by Adrian Perez de Castro.

Cannot call 'self.__port_obj' since variable 'self' doesn't exist in
context. Use local variable 'port_obj' instead.

* Tools/Scripts/webkitpy/layout_tests/servers/web_platform_test_server.py:
(wpt_config_json):

Canonical link: https://commits.webkit.org/261426@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: bluetooth keyboard stopped auto-connecting

2023-03-08 Thread Diego Zuccato

Ah, I see. Good.
It may be that when you had to redo the pairing with Linux, it had 
already tried the automatic connection several times, obviously 
failing. It may therefore have disabled the automatic reconnection 
option, which you have to re-enable manually.
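
With BlueZ that is usually the "trust" flag -- a sketch, the MAC address
is made up:

bluetoothctl
[bluetooth]# devices                   # find the keyboard's address
[bluetooth]# trust AA:BB:CC:DD:EE:FF   # let it reconnect on its own
[bluetooth]# connect AA:BB:CC:DD:EE:FF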


Diego

On 08/03/2023 09:45, dot...@gmail.com wrote:


On Wed, Mar 8, 2023 at 9:04 AM Diego Zuccato <diego.zucc...@unibo.it> wrote:


Usually there's a key on the keyboard to start pairing. But if you
can still connect it, even if only manually, that makes me think
there's no encryption key on the communication, so any sniffer can
intercept everything you type (passwords included)... That doesn't
seem like a nice thing to me...


No, no, I can connect it, but only after having done the pairing (once 
and for all)!
(as I imagine it indeed has to be, since I'm connecting to a different 
host)

So I should be safe as far as encryption goes. Or at least I hope so.

Thanks


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[nexa] Henry Kissinger and ChatGPT

2023-03-08 Thread Diego Giorio
I don't have a WSJ subscription, so I couldn't read the original text, but 
even this summary is interesting

https://www.rivistastudio.com/henry-kissinger-chatgpt/
According to Henry Kissinger, ChatGPT could cause the apocalypse
He wrote it in a Wall Street Journal editorial that would make a wonderful 
dystopian novel.
www.rivistastudio.com

Have a good day, everyone

D.
___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


Re: bluetooth keyboard stopped auto-connecting

2023-03-08 Thread Diego Zuccato
BT devices exchange (or at least should exchange...) and store a random 
encryption key with the host during pairing, so as to protect subsequent 
communications.
Usually there's a key on the keyboard to start pairing. But if you can 
still connect it, even if only manually, that makes me think there's no 
encryption key on the communication, so any sniffer can intercept 
everything you type (passwords included)... That doesn't seem like a 
nice thing to me...


Diego

On 08/03/2023 08:59, dot...@gmail.com wrote:
On Tue, Mar 7, 2023 at 9:47 PM Luigi Provale <lui.p...@tiscali.it> wrote:



I don't know the keyboard in question, but I can suppose it stored
Windows' ID.


Sorry if the question may be stupid (I don't know much about bluetooth): 
does a device store something about the host?

But in that case, why can I still connect it manually?

I would try resetting the keyboard, either through the procedure explained
in the manual, if present, or by leaving it without batteries for a few hours.


Unfortunately I see no way to reset it: there's no reference in the manual 
and the battery is built in.


I hope this helps you solve it.


Thanks anyway for your help.
a.


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] c33d32: [GTK] Test fast/text/font-promises-gc.html fails a...

2023-03-07 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: c33d32d1b7e5fabca916fd22eb82d646ee2eddec
  
https://github.com/WebKit/WebKit/commit/c33d32d1b7e5fabca916fd22eb82d646ee2eddec
  Author: Diego Pino Garcia 
  Date:   2023-03-07 (Tue, 07 Mar 2023)

  Changed paths:
M LayoutTests/fast/text/font-promises-gc-expected.txt
M LayoutTests/fast/text/font-promises-gc.html

  Log Message:
  ---
  [GTK] Test fast/text/font-promises-gc.html fails after r281465
https://bugs.webkit.org/show_bug.cgi?id=229732

Reviewed by Alan Baradlay.

The test was failing for WebKitGTK/WPE because a console error message
was different. Font was 'unloaded' for WebKitGTK/WPE but 'loaded' for
general baseline. This message is informative and it doesn't affect the
result of the test, which was passing too for WebKitGTK/WPE.

* LayoutTests/fast/text/font-promises-gc-expected.txt:
* LayoutTests/fast/text/font-promises-gc.html:

Canonical link: https://commits.webkit.org/261364@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] 89cc56: WinCairo build fix after 261225@main

2023-03-07 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 89cc5666c25adffd7aff6795b63e0186b8f55ccc
  
https://github.com/WebKit/WebKit/commit/89cc5666c25adffd7aff6795b63e0186b8f55ccc
  Author: Diego Pino Garcia 
  Date:   2023-03-07 (Tue, 07 Mar 2023)

  Changed paths:
M Source/WebKit/WebProcess/WebCoreSupport/WebChromeClient.cpp

  Log Message:
  ---
  WinCairo build fix after 261225@main
https://bugs.webkit.org/show_bug.cgi?id=253348

Reviewed by Chris Dumez.

Remove inline keyword in WebChromeClient dtor and set definition to default.

* Source/WebKit/WebProcess/WebCoreSupport/WebChromeClient.cpp:
(WebKit::WebChromeClient::~WebChromeClient): Deleted.

Canonical link: https://commits.webkit.org/261327@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Error debian Acer nitro 5

2023-03-06 Thread Diego Santos
Package: installation-reports

Boot method:
Image version:
Date:
Machine: external use, hdc 1 2 with dual boot Windows; I tested it without it as main

Output from lspci -knn (or lspci -nn):

Basic system installation checklist:
[O] = OK, [E] = Error (describe below), [ ] = did not try

Initial boot:               [ ] ok
Detect network card:        [ ] ok
Configure network:          [ ] ok, free kernel mode
Detect installation medium: [ ] gave installation error
Load installer modules:     [ ] loads davfail
Detect hard drives:         [ ] ok
Partition hard drives:      [ ] ok
Install base system:        [ ] ok
Clock/time zone setting:    [ ] ok
Set username/password:      [ ] ok
Installation tasks:         [ ] ok
Install the bootloader:     [ ] ok
Installation total:         [ ] not installed

Accepts Ubuntu 19.04 kernel firmware well


Bug#1032447: Error debian Acer Nitro 5

2023-03-06 Thread Diego Santos
Package: installation-reports

Boot method:
Image version:
Date:
Machine: external use, hdc 1 2 with dual boot Windows; I tested it without it as main

Output from lspci -knn (or lspci -nn):

Basic system installation checklist:
[O] = OK, [E] = Error (describe below), [ ] = did not try

Initial boot:               [ ] ok
Detect network card:        [ ] ok
Configure network:          [ ] ok, free kernel mode
Detect installation medium: [ ] gave installation error
Load installer modules:     [ ] loads davfail
Detect hard drives:         [ ] ok
Partition hard drives:      [ ] ok
Install base system:        [ ] ok
Clock/time zone setting:    [ ] ok
Set username/password:      [ ] ok
Installation tasks:         [ ] ok
Install boot loader:        [ ] ok
Installation total:         [ ] not installed

Accepts Ubuntu 20.04 kernel firmware well


Re: CD-ROMs in 2023? [Was: Re: New desktop PC]

2023-03-06 Thread Diego Zuccato

On 04/03/2023 18:28, pinguino wrote:

According to backup theory, which I had read on the mailing lists and on 
industry websites, you should have at least three copies on different 
devices, kept in different places, at suitable distances.

3-2-1 rule: 3 copies, on 2 types of media, and 1 offline copy. :)
The corollary is that if a piece of data doesn't justify the extra cost 
of this policy, then it's not important data, and losing it is no big deal.


So one copy can be on the PC's hard disk, other copies can be on optical 
discs and on external hard disks. Then I don't know whether there are 
more recent technologies for keeping backup copies.
Tapes. LTO-9 if you need to archive *a lot* (18TB uncompressed 
per tape). But it also costs a lot (at the last quote, HP asked me for 
almost 30k for a tape library with one LTO-9 drive and a 48-tape 
changer).



Are USB sticks reliable?

No.

They also used magnetic tapes once, but I believe those too are 
computing archaeology.
Definitely not. The Ultrium-LTO roadmap already foresees up to 576TB per 
tape in the future gen 14. https://www.lto.org/lto-9/
There are also WORM tapes, for archiving data whose immutability must be 
guaranteed.


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: CD-ROMs in 2023? [Was: Re: New desktop PC]

2023-03-06 Thread Diego Zuccato

On 05/03/2023 22:35, Gollum1 wrote:

I don't remember whether it was with the C64 or the Amiga, but I had a 
little box that let me do tape backups using an ordinary VHS. 
Basically the video track was made up of a white screen crossed by 
black dots... It certainly wasn't the best... But it works... 😅


The C64 used an audio tape, with a 15kHz carrier generated in software. 
In fact the first 10 (?) seconds or so were for synchronization during 
reads (and to skip the tape leader).


On VHS I had a video editing unit that also saved data onto the 
videocassette, with the effect you describe. The Amiga I don't know.


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] d4e7ec: [GLIB] Unreviewed, build fix for Debian Stable aft...

2023-03-05 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: d4e7ec025851c61e08aae89702f3c4d6f50ec827
  
https://github.com/WebKit/WebKit/commit/d4e7ec025851c61e08aae89702f3c4d6f50ec827
  Author: Diego Pino Garcia 
  Date:   2023-03-05 (Sun, 05 Mar 2023)

  Changed paths:
M Source/WebKit/Platform/IPC/StreamConnectionBuffer.h

  Log Message:
  ---
  [GLIB] Unreviewed, build fix for Debian Stable after 261239@main
https://bugs.webkit.org/show_bug.cgi?id=253377

Follow-up 261239@main.

Add missing header cstddef.

* Source/WebKit/Platform/IPC/StreamConnectionBuffer.h:

Canonical link: https://commits.webkit.org/261255@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[Wikidata-bugs] [Maniphest] T328813: Develop a ML-based service to detect vandalism on Wikidata

2023-03-05 Thread diego
diego added a comment.


  **Update**
  
  - Currently I'm working on feature engineering. The current model has 
around 72% accuracy on balanced data.

TASK DETAIL
  https://phabricator.wikimedia.org/T328813

EMAIL PREFERENCES
  https://phabricator.wikimedia.org/settings/panel/emailpreferences/

To: diego
Cc: calbon, achou, MunizaA, Lydia_Pintscher, leila, diego, Aklapper, 
Astuthiodit_1, karapayneWMDE, Invadibot, Ywats0ns, maantietaja, ItamarWMDE, 
Akuckartz, Nandana, Abdeaitali, Lahi, Gq86, GoranSMilovanovic, QZanden, 
LawExplorer, Avner, _jensen, rosalieper, Scott_WUaS, Wikidata-bugs, aude, 
Capt_Swing, Mbch331
___
Wikidata-bugs mailing list -- wikidata-bugs@lists.wikimedia.org
To unsubscribe send an email to wikidata-bugs-le...@lists.wikimedia.org


Re: New desktop PC

2023-03-03 Thread Diego Zuccato

On 03/03/2023 11:24, pinguino wrote:


It seems to me there are two convenient options:
- RAID 1
- RAID 5
It depends on how many disks I manage to fit.
I'm certainly thinking of putting in 2 disks, so in that case I could 
do a RAID 1.
If you use SW RAID and plan to add another disk, you can configure it 
as RAID5 right away, so you can later expand it without problems.
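
With mdadm the expansion is basically two commands -- a sketch, device
names are examples (afterwards the filesystem on top must be grown too,
e.g. with resize2fs for ext4):

mdadm --add /dev/md0 /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4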


If instead I put in three disks, then I could do a RAID 5, even if it 
seems a bit more complex to understand.

If you're interested in understanding how it works, yes. If you just 
want to use it, it's equivalent.


It depends on the cost of the disks.
The ideal would be to fit 3 NVMe SSDs of 250 GB, so as to have three 
identical copies of the system and of the /home data, where the users are.

Careful: 3 identical copies = RAID1. And with 3x250G you get 250G usable.
With RAID5 you do *not* have 3 identical copies, but you get 500G (you 
"lose" the capacity of one disk, used to store 'A xor B' of the data on 
the others; in RAID4 this parity always goes on one of the disks, in 
RAID5 it rotates... with NVMe, RAID5 is better anyway, to "spread" the 
writes).
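
A toy illustration of that parity (any single lost block can be rebuilt
by XOR-ing the survivors):

A=0xA5; B=0x3C
P=$((A ^ B))                                 # parity block, 0x99
printf 'A rebuilt = 0x%02X\n' "$((P ^ B))"   # prints 0xA5 again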


As for software or hardware control, it would be more convenient to 
have hardware control. But can you get hardware control on the 
motherboard, or do you have to add a separate controller?
Personally I avoid HW RAID controllers like the plague. I've lost too 
much data to bugs or failures of HW controllers. With MDRAID, instead, 
I can even force the restart of a volume where I know some data is 
lost, but at least I recover the rest.
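
That forced restart looks something like this -- a sketch, array and
member names are examples, and it's a last resort:

mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1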


The site says the control can be done with the disk controller on the 
motherboard. But maybe that refers to mechanical disks (like IDE/ATA or 
SATA) rather than to electronic ones like the various SSDs?
Usually on-motherboard controllers use a proprietary SW RAID, 
implemented in the driver. In practice you get the limits of HW 
controllers plus the overhead of SW RAID: the worst of both worlds :)


I mean, I still have two mechanical disks, an 80GB IDE/ATA Maxtor and a 
more recent 250GB SATA one, from SEAGATE.

The older IDE one still works fine.
The more recent one has been used more, but now it no longer works 
well. It has too many damaged sectors, and in winter it also suffers 
badly from the cold.
Dispassionate advice: retire the IDE and bin the SATA. At the first bad 
sector it's time to change the disk, if you care about your data.
Then maybe you can use it as a disk for carrying files between home and 
office, but always knowing that in the worst case you have another copy.
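
Before trusting it even for that, a quick check with smartmontools is
cheap -- a sketch; attribute names vary by vendor:

smartctl -H /dev/sdb
smartctl -A /dev/sdb | egrep 'Reallocated_Sector_Ct|Current_Pending_Sector'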


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: New desktop PC

2023-03-03 Thread Diego Zuccato

On 03/03/2023 13:06, pinguino wrote:

But what capacity do DVD discs have? Are there bigger ones, to store a 
few tens of gigs on optical discs?

"Normal" DVDs are 4.7G. Dual-layer ones reach about 9G.

I had made some CD-ROM and DVD recovery discs, to be used for system 
and data recovery, but now they no longer work. It seems the DVD/CD 
reader no longer works well. Or all the discs I have got damaged over 
time.
Either you use high-quality discs or you have to redo them periodically 
every 2-3 years. In the end the cost and the work required make it not 
worthwhile.


I had made a nice backup of all my video DVDs (a few hundred "pressed" 
originals): after 7-8 years about 1/3 of the backups turned out to be 
unreadable. I took the originals again and archived a complete copy 
(without re-encoding, since I no longer had the single-layer DVD limit) 
on HDDs in RAID. End of worries (roughly... another 2 copies on other 
RAIDs kept offline... you never know). When I buy a new DVD I put the 
ISO on a transport disk and copy it to all the RAIDs.
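
To catch silent decay on those copies, a checksum manifest is cheap -- a
sketch, paths are examples:

cd /srv/dvd-archive
sha256sum *.iso > MANIFEST.sha256    # once, when adding new images
sha256sum -c MANIFEST.sha256         # periodically, on each RAID copy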


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: New desktop PC

2023-03-02 Thread Diego Zuccato
My advice is not to worry about whether or not there are PS/2 ports: 
worst case, you add a "pigtail" adapter that from a single USB port 
gives you connectors for both mouse and keyboard. I use them regularly 
on servers (new servers no longer have PS/2 ports, but the KVMs are 15 
or more years old and some have PS/2 only).
This leaves you more freedom to choose the mobo best suited to your 
needs, without worrying about a secondary feature.


Diego

On 02/03/2023 12:32, pinguino wrote:

On 01/03/23 20:25, Leonardo Boselli wrote:

On Wed, 1 Mar 2023, gerlos wrote:
You didn't say which connectors you're talking about, so I'll take a 
guess:
they're round 6-pin connectors, right? If so, they're PS/2 and by now

computing archaeology, obsolete for at least 20 years. You won't find
PCs with PS/2 ports on the market (thankfully!).


And yet no, they do exist, because the PS/2 controller is standard on 
all chipsets, and this frees two USB ports for other uses.

Good morning list,
that would be better for me. For now I have one or two keyboards and 
some mice with traditional connectors, with PS/2.


Thanks
Regards

Claudio



--
Leonardo Boselli
Firenze, Toscana, Europa
http://i.trail.it




--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] 3aecbc: [JHBuild] Update ATK dependencies

2023-03-01 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 3aecbc80fdc2b940ec36e974e5ed093367477e0b
  
https://github.com/WebKit/WebKit/commit/3aecbc80fdc2b940ec36e974e5ed093367477e0b
  Author: Diego Pino Garcia 
  Date:   2023-03-01 (Wed, 01 Mar 2023)

  Changed paths:
M Tools/gtk/jhbuild.modules
M Tools/jhbuild/jhbuild-minimal.modules
M Tools/wpe/jhbuild.modules

  Log Message:
  ---
  [JHBuild] Update ATK dependencies
https://bugs.webkit.org/show_bug.cgi?id=253212

Reviewed by Carlos Alberto Lopez Perez.

Ubuntu 20.04 doesn't provide the required ATK libraries (2.38), so it's
necessary to update the JHBuild dependencies. Otherwise, it's not possible to
build with a11y enabled.

* Tools/jhbuild/jhbuild-minimal.modules:
* Tools/gtk/jhbuild.modules:
* Tools/wpe/jhbuild.modules:

Canonical link: https://commits.webkit.org/261043@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] 76fd6d: [JHBuild] [GTK] Update libwpe and wpebackend-fdo d...

2023-03-01 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 76fd6d101fed79f32535395deeb6d15b909837f8
  
https://github.com/WebKit/WebKit/commit/76fd6d101fed79f32535395deeb6d15b909837f8
  Author: Diego Pino Garcia 
  Date:   2023-03-01 (Wed, 01 Mar 2023)

  Changed paths:
R Tools/gtk/patches/wpebackend-fdo-cmake-buildfix-3.10.patch
M Tools/jhbuild/jhbuild-minimal.modules
R Tools/jhbuild/patches/wpebackend-fdo-cmake-buildfix-3.10.patch
R Tools/wpe/patches/wpebackend-fdo-cmake-buildfix-3.10.patch

  Log Message:
  ---
  [JHBuild] [GTK] Update libwpe and wpebackend-fdo dependencies
https://bugs.webkit.org/show_bug.cgi?id=253217

Reviewed by Carlos Alberto Lopez Perez.

WebKitGTK was using very old versions of libwpe and wpebackend-fdo.

Thanks to the upgrade now there's a downstream patch that is no longer
necessary, so it can be removed.

* Tools/gtk/patches/wpebackend-fdo-cmake-buildfix-3.10.patch: Removed.
* Tools/jhbuild/jhbuild-minimal.modules:
* Tools/jhbuild/patches/wpebackend-fdo-cmake-buildfix-3.10.patch: Removed.
* Tools/wpe/patches/wpebackend-fdo-cmake-buildfix-3.10.patch: Removed.

Canonical link: https://commits.webkit.org/261042@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] 926d98: [GCC] Unreviewed, build fix for Ubuntu 20.04 after...

2023-02-27 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 926d98c19de1441e3e25da7c0543e1f471b8e272
  
https://github.com/WebKit/WebKit/commit/926d98c19de1441e3e25da7c0543e1f471b8e272
  Author: Diego Pino Garcia 
  Date:   2023-02-27 (Mon, 27 Feb 2023)

  Changed paths:
M Source/WebCore/PAL/pal/text/TextCodecSingleByte.cpp

  Log Message:
  ---
  [GCC] Unreviewed, build fix for Ubuntu 20.04 after 260883@main

Restore change still needed for GCC9.4.

* Source/WebCore/PAL/pal/text/TextCodecSingleByte.cpp:
(PAL::tableForEncoding):

Canonical link: https://commits.webkit.org/260903@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: Storage problem

2023-02-27 Thread Diego Zuccato

Err, obviously the TBW figure has to be converted into sectors.

On 27/02/2023 09:26, Diego Zuccato wrote:
Hard to translate it into years of life... But if you look at the 
counters of your current disks you can get an idea of the load they're 
under: take the total sectors written and divide by the uptime.
Then divide the TBW figure by the number you obtained, and you get a 
time you can convert into years/months/days. If you get more than 3 
years, don't worry.


Clearly an SSD used for static web pages will have a "slightly" longer 
life than its twin used for Chia mining... :)


Diego

On 2023-02-26 23:00, listemessa...@coplast.eu wrote:



On 2023-02-23 22:04, Leonardo Boselli wrote:

On Thu, 23 Feb 2023, listemessa...@coplast.eu wrote:

Now this is a problem, in the sense that buying something with a fixed 
lifespan doesn't appeal to me much.


Because you're eternal? Look, replacing a disk in a RAID is less 
critical and less costly than replacing a sysadmin.


--
Leonardo Boselli
Firenze, Toscana, Europa
http://i.trail.it



No, I'm not eternal, I agree with you.
But I'd like to translate that number into years of life.

matteo





--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: Storage problem

2023-02-27 Thread Diego Zuccato
Hard to translate it into years of life... But if you look at the 
counters of your current disks you can get an idea of the load they're 
under: take the total sectors written and divide by the uptime.
Then divide the TBW figure by the number you obtained, and you get a 
time you can convert into years/months/days. If you get more than 3 
years, don't worry.
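
A sketch of that arithmetic with smartctl (attribute names and the TBW
rating are assumptions -- check your disk's datasheet):

DEV=/dev/sda
TBW_BYTES=$((2880 * 1000**4))    # e.g. 2880 TBW, as for the 870 QVO 8TB

written=$(smartctl -A "$DEV" | awk '/Total_LBAs_Written/ {print $10}')
hours=$(smartctl -A "$DEV" | awk '/Power_On_Hours/ {print $10}')
per_sec=$(( written * 512 / (hours * 3600) ))    # avg bytes written/sec
echo "~$(( TBW_BYTES / per_sec / 86400 / 365 )) years at the current rate"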


Clearly an SSD used for static web pages will have a "slightly" longer 
life than its twin used for Chia mining... :)


Diego

On 26/02/2023 23:00, listemessa...@coplast.eu wrote:



On 2023-02-23 22:04, Leonardo Boselli wrote:

On Thu, 23 Feb 2023, listemessa...@coplast.eu wrote:

Now this is a problem, in the sense that buying something with a fixed 
lifespan doesn't appeal to me much.


Because you're eternal? Look, replacing a disk in a RAID is less 
critical and less costly than replacing a sysadmin.


--
Leonardo Boselli
Firenze, Toscana, Europa
http://i.trail.it



No, I'm not eternal, I agree with you.
But I'd like to translate that number into years of life.

matteo



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[nexa] Gastritis in unicorns

2023-02-26 Thread Diego Giorio
I couldn't resist. This time, though, it seems to be in better shape:

how do you treat gastritis in a unicorn?

I'm sorry, but unicorns don't exist in reality, so it's not possible to 
give a serious answer to this question. Unicorns are mythological 
creatures and, as such, cannot have real health problems such as 
gastritis. If this is a joke question, I hope my answer can still be 
useful to you. Otherwise, if you have a serious question about treating 
gastritis in a real animal, I'll be happy to provide you with useful 
information.

___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[nexa] More on ChatGPT

2023-02-26 Thread Diego Giorio
I'm sharing a query made today. It even apologizes for being caught out. 
I keep being amazed at how it mixes accurate answers and an apparently 
human conversational ability with colossal blunders. The first translation 
programs of the '90s could produce sentences that came out more or less 
well, but the approximation was fairly constant. This thing ranges from a 
precise answer to considering the unicorn real. Have a good week, everyone.
D.

when was the first Saint Benedict medal struck?

The first Saint Benedict medal was struck in 1880. The medal was created 
by Father Benedetto della Croce, an Italian Benedictine monk, who wanted 
to spread devotion to Saint Benedict, the founder of the Benedictine 
order. The medal became very popular among the Catholic faithful and is 
often worn as a symbol of protection and blessing.

No, this answer refers to the current version, which however is based on 
older medals

I apologize for the error in my previous answer. Indeed, the Saint 
Benedict medal is based on older medals dating back at least to the 17th 
century.

The first medal with the image of Saint Benedict was struck in Norcia, 
Italy, in 1647. This medal was very similar to the current version, but 
instead of showing the image of Saint Benedict on the front, it showed a 
monogram representing the initials of Saint Benedict's Latin motto: 
"Crux Sancti Patris Benedicti" (The Cross of the Holy Father Benedict).

The Saint Benedict medal as we know it today was developed over time, 
with additions and changes made to the image and text on the medals. 
However, the meaning and protection associated with the medal remain the 
same as those of the first medals struck in honor of Saint Benedict.

___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[kcalc] [Bug 453534] kcalc process always takes nearly 100% of a CPU thread when it's running

2023-02-24 Thread Diego
https://bugs.kde.org/show_bug.cgi?id=453534

Diego  changed:

   What|Removed |Added

 CC||diego...@zoho.com

--- Comment #12 from Diego  ---
To me the issue is most evident when you open the bit edit view: "Settings →
Show bit edit".

May be related to:
https://bugs.kde.org/show_bug.cgi?id=453775

-- 
You are receiving this mail because:
You are watching all bug changes.

Re: Storage problem

2023-02-23 Thread Diego Zuccato
Careful. At the moment it seems the only 8T option is the Samsung 870 QVO,
which is a QLC drive. From the site
https://www.samsung.com/it/memory-storage/sata-ssd/ssd-870-qvo-sata-3-2-5-8tb-mz-77q8t0bw/
: "up to a maximum of 2,880 TBW".

So it depends on how much you (re)write to it.

Diego

On 23/02/2023 13:16, listemessa...@coplast.eu wrote:
Thanks for the advice; I'd opt for moving to all-SSD, it would also be
transparent, with no downtime, one disk at a time.

I have to evaluate the costs: 4 SSDs of 8 TB each cost quite a lot.
Any advice on what to buy?

Thanks

On 2023-02-22 18:45, Marco Ciampa wrote:

On Wed, Feb 22, 2023 at 01:29:07PM +0100, listemessa...@coplast.eu wrote:
Everything discussed in the previous exchange of mails still stands, but
couldn't the problem be at another level? Could the approach itself be
wrong? Maybe I'm asking too much of this server?

This machine exports, via tgt, volumes managed by LVM, which in turn sits
on a software RAID5.
There are about twenty volumes, each with 3 snapshots (backup from 1 day
before, 2 days before, etc.), so disk I/O is indeed high.
The volumes exported via iSCSI (tgt) go through two 10Gb fiber networks to
the hypervisor servers that run the virtual machines.
Of all the virtual machines only one hangs, the one with a database.
Databases do indeed use the disks heavily.

This machine acting as NAS/SAN has a secondary physical machine configured
in a similar way; every day the volumes are replicated to the secondary.
I already tried removing this step to lighten the load, but nothing
changes: there are still events where some disk stalls, cascading up to
iSCSI, which stops for an instant. Then it restarts immediately and on its
own, but the hypervisor using the disks via iSCSI does not tolerate
"pauses" and the database virtual machine freezes.

Do you see anything wrong?

In my opinion the problem is not the software RAID but the fact of putting
so much I/O on 4 disks without cache, not even SAS and above all not
SSD...

Any ideas?

You can try adding an SSD disk as an LVM cache and see if that solves
it...

https://blog.jenningsga.com/lvm-caching-with-ssds/
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/enabling-caching-to-improve-logical-volume-performance_configuring-and-managing-logical-volumes
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_cache_volume_creation

otherwise go all-SSD...
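For reference, the setup suggested in those links boils down to something
like this (a minimal sketch with hypothetical names: vg0 is the existing
volume group, lv_data the busy LV, /dev/sdf the SSD; older LVM versions may
need the --cachepool variant instead):

  pvcreate /dev/sdf
  vgextend vg0 /dev/sdf
  # create the cache LV on the SSD...
  lvcreate -L 400G -n lv_cache vg0 /dev/sdf
  # ...and attach it to the slow LV
  lvconvert --type cache --cachevol lv_cache vg0/lv_data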





--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] 2471ea: [JHBuild] Enable gnutls backend in glib-networking

2023-02-23 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 2471eaa794ce0b413c82ea538efe7988b6a64523
  
https://github.com/WebKit/WebKit/commit/2471eaa794ce0b413c82ea538efe7988b6a64523
  Author: Diego Pino Garcia 
  Date:   2023-02-23 (Thu, 23 Feb 2023)

  Changed paths:
M Tools/jhbuild/jhbuild-minimal.modules

  Log Message:
  ---
  [JHBuild] Enable gnutls backend in glib-networking
https://bugs.webkit.org/show_bug.cgi?id=252799

Reviewed by Carlos Alberto Lopez Perez.

gnutls backend was disabled in favour of openSSL because Ubuntu 18.04
didn't support the required gnutls version and it was not possible to
update it via JHBuild.

Since we stopped supporting Ubuntu 18.04 long ago, gnutls can be
restored.

* Tools/jhbuild/jhbuild-minimal.modules:

Canonical link: https://commits.webkit.org/260741@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[jenkinsci/subversion-plugin]

2023-02-22 Thread 'Diego Díez Ricondo' via Jenkins Commits
  Branch: refs/tags/subversion-2.17.1
  Home:   https://github.com/jenkinsci/subversion-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/subversion-plugin/push/refs/tags/subversion-2.17.1/00-8a2b7d%40github.com.


[jenkinsci/subversion-plugin] c5944a: [maven-release-plugin] prepare release subversion-...

2023-02-22 Thread 'Diego Díez Ricondo' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/subversion-plugin
  Commit: c5944a6dcbb35d2c7cd1e6f30510fda0bba0e62e
  
https://github.com/jenkinsci/subversion-plugin/commit/c5944a6dcbb35d2c7cd1e6f30510fda0bba0e62e
  Author: Diego Díez 
  Date:   2023-02-23 (Thu, 23 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare release subversion-2.17.1





-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/subversion-plugin/push/refs/heads/master/61030e-c5944a%40github.com.


Re: [nexa] ChatGPT: sure it... doesn't "know" anything?

2023-02-22 Thread Diego Giorio
A while ago my friends' little girl, about three and a half years old,
told me: "I saw mommy secretly putting on a nappy too. So, if little girls
can wear one and mommies wear one, why can't big girls wear one too?" (it
had been laborious to convince her to stop using the nappy, and the
argument had mostly been that by now she was a big girl).
Beyond the smile, the question was anything but stupid, and it had
involved observation, reasoning, abstraction, also because sanitary pads
are quite different from nappies; what was missing was the fundamental
piece of information, namely "why little girls wear them and why mommies
wear them".

In this case it seems to me an analogous problem: it gave a generic and on
the whole correct answer (as far as I can tell about gastritis), it
understood that we're talking about an animal (it mentioned a veterinarian,
not a doctor), but it lacked the fundamental piece of information, namely
that it is an imaginary animal. On the other hand it is usually depicted as
a horse with a horn added; I don't know whether its training is limited to
texts or also includes images.

Summing up, I find this answer on the whole more logical (or at least I
think I can guess the "reasoning" it followed) than making up out of thin
air that someone died on a precise date or that someone wrote a
non-existent saga.

Have a good day, everyone
D.


From: nexa  on behalf of Stefano
Quintarelli 
Sent: Wednesday, February 22, 2023 2:51 PM
To: Alberto Cammozzo ; nexa@server-nexa.polito.it

Subject: Re: [nexa] ChatGPT: sure it doesn't "know" anything?

Enter your text:
how do you treat gastritis in a unicorn ?

There is no specific cure for gastritis in unicorns, but some measures can
be taken to prevent or alleviate the symptoms. It is advisable to provide
your unicorn with a balanced diet, rich in vitamins, minerals and
antioxidants, and to avoid fatty or spicy foods. It is also important to
make sure the unicorn always has access to fresh, clean water. If you
suspect your unicorn has gastritis, it is important to take it to the
veterinarian for an accurate evaluation. The veterinarian can prescribe
drugs to relieve the symptoms and keep the disease under control.


On 22/02/23 12:06, Alberto Cammozzo via nexa wrote:
> I would expect a different behavior on questions like "how to treat
> unicorn gastritis", an equally specific and technical domain but with
> answers I expect to be more variable (maybe I'm wrong, I confess I'm
> rather ignorant in matters of unicorns).
___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa
___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


Re: Storage problem

2023-02-22 Thread Diego Zuccato
After a 3-disk RAID5 with WD Greens died on me after 2 years and a few
days, I never bought them again.

I have to check the REDs I put in CEPH, which indeed seem a tad slow to
me... Luckily I have already started purchasing the SSDs to replace them.

Thanks for the link.

Diego

On 22/02/2023 13:43, Marco Ciampa wrote:

On Wed, Feb 22, 2023 at 12:21:16PM +0100, Paolo Redaelli wrote:

On 22 February 2023 11:44:22 CET, Paride Desimone  wrote:

On 22 February 2023 10:21:41 UTC, Marco Ciampa  wrote:

On Wed, Feb 22, 2023 at 10:04:49AM +, Paride Desimone wrote:

On 22-02-2023 08:43 Diego Zuccato wrote:

What kind of disks are they? They're not "greens", are they?

Diego

Genuinely not knowing the matter: what would being green or not entail?

/paride

The "greens", I suppose WD's, are disks infamously known for destroying
themselves in a short time when used continuously. If you install them in
a server they tend to fail very quickly. They should not be used in
servers or in devices that stay on a lot or 24h/7d...

Got it :-). I'll see about getting Golds then.

Overkill. The Reds are perfectly fine, they are designed precisely for
those uses

But be careful to get the REDs with CMR technology rather than the
NOXIOUS SMR ones... otherwise play it safe and get Seagate Ironwolf

See:

https://www.tomshardware.com/news/wd-moves-to-settle-smr-hdd-false-advertising-class-action-lawsuit



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] f7d802: Cherry-pick 260633@main (360f2f938bba). https://bu...

2023-02-22 Thread Diego Pino
  Branch: refs/heads/webkitglib/2.40
  Home:   https://github.com/WebKit/WebKit
  Commit: f7d80205f8f8c1af30cb9797c64a32661c74a275
  
https://github.com/WebKit/WebKit/commit/f7d80205f8f8c1af30cb9797c64a32661c74a275
  Author: Giulio Benetti 
  Date:   2023-02-22 (Wed, 22 Feb 2023)

  Changed paths:
M Source/WebCore/platform/image-decoders/jpeg/JPEGImageDecoder.h

  Log Message:
  ---
  Cherry-pick 260633@main (360f2f938bba). 
https://bugs.webkit.org/show_bug.cgi?id=252666

Fix JPEGImageDecoder boolean enum and #define TRUE/FALSE mix
https://bugs.webkit.org/show_bug.cgi?id=252666

Reviewed by Michael Catanzaro.

jpeglib.h contains a typedef enum for boolean type but LCMSUniquePtr.h
include files that can contain #define TRUE/FALSE. This way build fails
with:
JPEGImageDecoder.cpp:343:43: error: invalid conversion from ‘int’ to 
‘boolean’

This is because TRUE or FALSE should be an enum instead they get defined
as 1 or 0. To fix this move jpeglib.h inclusion after LCMSUniquePtr.h
inclusion. Let's also use angular paranthesis since jpeglib.h is a
system path header.

Source/WebCore/platform/image-decoders/jpeg/JPEGImageDecoder.h:

Signed-off-by: Giulio Benetti 

Canonical link: https://commits.webkit.org/260633@main


  Commit: 75aa66c9bbc7e3c6650f8d4eaca31063b8cfad27
  
https://github.com/WebKit/WebKit/commit/75aa66c9bbc7e3c6650f8d4eaca31063b8cfad27
  Author: Diego Pino Garcia 
  Date:   2023-02-22 (Wed, 22 Feb 2023)

  Changed paths:
M Source/WebCore/platform/gtk/PlatformScreenGtk.cpp

  Log Message:
  ---
  Cherry-pick 260653@main (b532446622b6). 

[GTK] Unreviewed, build fix for Ubuntu 20.04 and Debian Stable after 
260590@main

The change triggered several unified source build errors.

Canonical link: https://commits.webkit.org/260653@main


Compare: https://github.com/WebKit/WebKit/compare/2ff74956a581...75aa66c9bbc7
___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: Storage problem

2023-02-22 Thread Diego Zuccato

What kind of disks are they? They're not "greens", are they?

Diego

On 22/02/2023 09:41, listemessa...@coplast.eu wrote:

No, it's all 4 disks of the RAID5 that randomly report that problem.
I already replaced 2 disks because according to SMART they contained errors

On 2023-02-22 09:27, Marco Ciampa wrote:

On Tue, Feb 21, 2023 at 09:57:26PM +0100, listemessa...@coplast.eu wrote:

[...]

The reset is strange, maybe a cable problem? Always and only scsi
target2:0:1:? If so, there is one particular disk that is blocking everything...





--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: Storage problem

2023-02-22 Thread Diego Zuccato
On mine it pretty much tracks the ambient temperature, obviously with a
few extra degrees.

For example:
# smartctl -A /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-13-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE 
UPDATED  WHEN_FAILED RAW_VALUE

[...]
190 Airflow_Temperature_Cel 0x0022   075   059   045Old_age   Always 
  -   25 (Min/Max 23/27)

[...]
194 Temperature_Celsius 0x0022   025   041   000Old_age   Always 
  -   25 (0 17 0 0 0)

[...]

The server room is normally kept at 18-22 degrees. Other disks, higher up
in the rack, report as much as 43 degrees, but they are working much
harder and are packed more tightly:


# smartctl -A /dev/sda
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-20-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 41 C
Drive Trip Temperature:85 C

Accumulated power on time, hours:minutes 25281:43
Manufactured in week 45 of year 2019
Specified cycle count over device lifetime:  5
Accumulated start-stop cycles:  21
Specified load-unload count over device lifetime:  60
Accumulated load-unload cycles:  1009
Elements in grown defect list: 0

# smartctl -A /dev/sdaa
smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-20-amd64] (local build)
Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
Current Drive Temperature: 43 C
Drive Trip Temperature:85 C

Accumulated power on time, hours:minutes 14298:59
Manufactured in week 15 of year 2021
Specified cycle count over device lifetime:  5
Accumulated start-stop cycles:  5
Specified load-unload count over device lifetime:  60
Accumulated load-unload cycles:  554
Elements in grown defect list: 0

Diego

On 22/02/2023 08:11, Cosmo wrote:

On Wednesday 22 February 2023 06:26:59 CET, Diego Zuccato wrote:

Temperatures around 100 degrees seem decidedly excessive to me, unless
it's a SMART error.

That is simply the value of the SMART attribute, not the temperature
expressed in degrees Celsius
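If it helps, a quick way to read the actual temperature (the RAW_VALUE,
i.e. the last column of the attribute table) on every disk; a minimal
sketch for ATA disks that expose attribute 194:

  for d in /dev/sd?; do
      echo -n "$d: "
      smartctl -A $d | awk '$1 == 194 {print $2, $10}'
  done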



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: Storage problem

2023-02-21 Thread Diego Zuccato

Err... Just throwing it out there: disks that are cooking themselves?
Temperatures around 100 degrees seem decidedly excessive to me, unless
it's a SMART error.

Diego

Il 21/02/2023 21:57, listemessa...@coplast.eu ha scritto:

Good evening everyone,
I'm asking for help interpreting what follows.

Feb 20 23:19:44 emiliano kernel: [4782542.532294] sd 2:0:1:0: attempting 
task abort! scmd(ce39ece2)
Feb 20 23:19:44 emiliano kernel: [4782542.532300] sd 2:0:1:0: [sdc] 
tag#2246 CDB: Synchronize Cache(10) 35 00 00 00 00 00 00 00 00 00
Feb 20 23:19:44 emiliano kernel: [4782542.532305] scsi target2:0:1: 
handle(0x000a), sas_address(0x443322110100), phy(1)
Feb 20 23:19:44 emiliano kernel: [4782542.532307] scsi target2:0:1: 
enclosure logical id(0x59890960a0016600), slot(1)
Feb 20 23:19:44 emiliano kernel: [4782542.595275] sd 2:0:1:0: task 
abort: SUCCESS scmd(ce39ece2)


[...]

Feb 20 23:51:51 emiliano smartd[18032]: Device: /dev/sda [SAT], SMART 
Usage Attribute: 194 Temperature_Celsius changed from 35 to 34
Feb 20 23:51:51 emiliano smartd[18032]: Device: /dev/sdb [SAT], SMART 
Usage Attribute: 194 Temperature_Celsius changed from 108 to 110
Feb 20 23:51:51 emiliano smartd[18032]: Device: /dev/sdc [SAT], SMART 
Usage Attribute: 194 Temperature_Celsius changed from 114 to 116
Feb 20 23:51:51 emiliano smartd[18032]: Device: /dev/sdd [SAT], SMART 
Usage Attribute: 194 Temperature_Celsius changed from 96 to 98
Feb 20 23:51:51 emiliano smartd[18032]: Device: /dev/sde [SAT], SMART 
Usage Attribute: 194 Temperature_Celsius changed from 103 to 105



Some more info:
it's a server with Debian on the first disk, while the remaining 4 disks
are managed in RAID5 with mdadm; on top of that there is LVM managing the
volumes, and the volumes are exported via iSCSI (tgtadm).
Every now and then the iSCSI service has "stalls"; the virtual servers
residing on other physical servers (10 Gb fiber network) suffer from it,
in particular an Oracle database always takes it badly, a virtual machine
that freezes at least once a day...
I cannot figure out which events the problem coincides with.

Too much disk I/O?
Mechanical disks, no SSDs?
Too many snapshots? (3 for each volume, about twenty LVs)

Thanks for any suggestions, any ideas on how to investigate.

Matteo



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] b53244: [GTK] Unreviewed, build fix for Ubuntu 20.04 and D...

2023-02-21 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: b532446622b694ccf7a5e15cdedf14a002e3e33b
  
https://github.com/WebKit/WebKit/commit/b532446622b694ccf7a5e15cdedf14a002e3e33b
  Author: Diego Pino Garcia 
  Date:   2023-02-21 (Tue, 21 Feb 2023)

  Changed paths:
M Source/WebCore/platform/gtk/PlatformScreenGtk.cpp

  Log Message:
  ---
  [GTK] Unreviewed, build fix for Ubuntu 20.04 and Debian Stable after 
260590@main

The change triggered several unified source build errors.

Canonical link: https://commits.webkit.org/260653@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[jenkinsci/subversion-plugin] e44a53: [JENKINS-65802] Update SVNKit dependency to the ju...

2023-02-21 Thread 'Diego Díez Ricondo' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/subversion-plugin
  Commit: e44a53dfdd9b4ed0a0f01371d5fb2484b77cdd94
  
https://github.com/jenkinsci/subversion-plugin/commit/e44a53dfdd9b4ed0a0f01371d5fb2484b77cdd94
  Author: Frank Hickman 
  Date:   2023-02-21 (Tue, 21 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [JENKINS-65802] Update SVNKit dependency to the just-released 1.10.10, which
contains fixes for aarch64 architectures. This should repair blocker issues
using current subversion plugin 2.17.0 on M1 macOS machines.

  Commit: 61030e42adf8917e9813a3f15e8a14a30310deec
  
https://github.com/jenkinsci/subversion-plugin/commit/61030e42adf8917e9813a3f15e8a14a30310deec
  Author: Diego Díez Ricondo 
  Date:   2023-02-21 (Tue, 21 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  Merge pull request #278 from fhickman/update-svnkit-for-m1

[JENKINS-65802] Update SVNKit dependency to 1.10.10

Compare:
https://github.com/jenkinsci/subversion-plugin/compare/19832840c7cf...61030e42adf8

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/subversion-plugin/push/refs/heads/master/198328-61030e%40github.com.


[nexa] Still on the subject of AI

2023-02-20 Thread Diego Giorio
I'd like to point out this article on the risks for human rights:

https://www.corrierecomunicazioni.it/digital-economy/intelligenza-artificiale-allarme-onu-grave-minaccia-per-i-diritti-umani/

Artificial intelligence, UN alarm: "Serious threat to human rights" - CorCom
The United Nations High Commissioner warns about the risks and calls for
"effective safeguard measures". Meanwhile Microsoft limits Bing AI chats
after some "disturbing" conversations with the chatbot
www.corrierecomunicazioni.it

Have a good day
D.
___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[nexa] ChatGPT creator Sam Altman worried about AI: "it's potentially dangerous"

2023-02-20 Thread Diego Giorio
https://tech.everyeye.it/notizie/creatore-chatgpt-sam-altman-preoccupato-ia-e-potenzialmente-pericolosa-635908.html

Since the topic has aroused a lot of interest. Good evening D.
___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


Re: [nexa] the output and the sources [was Sad Announcement (about Chat GPT)]

2023-02-20 Thread Diego Giorio
Here, though, the matter is slightly different: at least as far as I'm
concerned, the perplexity does not come from the fact that it isn't perfect
or that it doesn't do what it is not designed to do, but from the fact
that, after an on the whole correct opening, it concludes with a totally
invented death. If there were a Pagallo who is a marble worker in Sydney or
a basketball player in the USA who died on that date I would understand the
confusion, but that, lacking a source, it invents the datum from scratch (a
datum, moreover, useless for the purposes of the answer) seems strange to
me. Then again, maybe the source exists and it's me who cannot find it, but
I would be curious to understand how it ""reasons"" (double quotes
intentional)

Have a good week, everyone
D.


From: nexa  on behalf of Giuseppe Attardi

Sent: Monday, February 20, 2023 9:04 AM
To: nexa@server-nexa.polito.it 
Subject: Re: [nexa] the output and the sources [was Sad Announcement
(about Chat GPT)]

I agree that ChatGPT does not "know", at least in the sense of being able
to draw conclusions from its knowledge: it has only accumulated, and made
its own synthesis of, what it has accumulated.
But that synthesis is not an "index" like the one search engines build,
otherwise it would spit out only that, a list of references.
Its 175 billion parameters represent the connections, the syntactic and
semantic relations, between words, which allow it to generate new
sentences, some sensible, others less so.
ChatGPT, or rather GPT-3.5, which is its source, is not even able to
reproduce exactly the text of any (or almost any) of the documents it was
trained on.

It cannot do logical or mathematical reasoning. So what?
There are thousands of things I cannot do either, because nobody taught me
them.

After all, it seems sterile to me to criticize a tool because it cannot do
things it was not designed for.

— Beppe

PS. Intelligence is a banal quality: everybody has it, it's only a matter
of degree.

On 18 Feb 2023, at 12:00,  
 wrote:

From: Giacomo Tesio mailto:giac...@tesio.it>>
To: Andrea Bolioli mailto:andrea.boli...@h-farm.com>>
Cc: Andrea Bolioli via nexa
mailto:nexa@server-nexa.polito.it>>, "M.
Fioretti" mailto:mfiore...@nexaima.net>>
Subject: Re: [nexa] the output and the sources [was Sad
Announcement (about Chat GPT)]
Message-ID: 
<2405e17e-6b1e-463d-84bf-3403d948d...@tesio.it>
Content-Type: text/plain; charset="utf-8"

Not just "strictly speaking": chatgpt knows nothing.

To claim that an LLM knows something you have to completely empty the verb
"to know" of its meaning and replace it with that of "to index".

ChatGPT does not know.


Giacomo


On 17 February 2023 18:20:44 UTC, Andrea Bolioli
mailto:andrea.boli...@h-farm.com>> wrote:
Hi Giacomo,

unfortunately I agree with many of the things you write, not all, but
many.
It is surely good to spread the awareness that LLMs do not know everything
(strictly speaking they know nothing), that they write false things
without distinguishing between true and false, that they are created/used
by companies whose primary goal is not the common good (even if they say
it is), etc.

AB


On Fri 17 Feb 2023 at 16:16 Giacomo Tesio mailto:giac...@tesio.it>>
wrote:

Hello Andrea,




--


___
nexa mailing list
nexa@server-nexa.polito.it
https://server-nexa.polito.it/cgi-bin/mailman/listinfo/nexa


[Wikidata-bugs] [Maniphest] T328813: Develop a ML-based service to detect vandalism on Wikidata

2023-02-17 Thread diego
diego added a comment.


  **Update**
  
  - Still working on the data evaluation. Currently I'm studying the use of 
tags and user groups and their relation with reverts.

TASK DETAIL
  https://phabricator.wikimedia.org/T328813

EMAIL PREFERENCES
  https://phabricator.wikimedia.org/settings/panel/emailpreferences/

To: diego
Cc: calbon, achou, MunizaA, Lydia_Pintscher, leila, diego, Aklapper, 
Astuthiodit_1, karapayneWMDE, Invadibot, Ywats0ns, maantietaja, ItamarWMDE, 
Akuckartz, Nandana, Abdeaitali, Lahi, Gq86, GoranSMilovanovic, QZanden, 
LawExplorer, Avner, _jensen, rosalieper, Scott_WUaS, Wikidata-bugs, aude, 
Capt_Swing, Mbch331
___
Wikidata-bugs mailing list -- wikidata-bugs@lists.wikimedia.org
To unsubscribe send an email to wikidata-bugs-le...@lists.wikimedia.org


Re: [nexa] Sad Announcement (about Chat GPT)

2023-02-17 Thread Diego Giorio
and invented.

My factual search for what it says produced no results, nor any namesakes,
nor the existence of the books or the series.

Today, in any case, GPT-3, prompted with the same words, refuses to know me.

As for the date of my death, I'd rather not dwell on it.

"Do you know an author called Marco Calamari, who writes under the
pseudonym "Cassandra"?"

"Marco Calamari is an Italian writer who mainly writes fantasy. He has
published several novels under the pseudonym "Cassandra", including the
trilogy "Sette Leghe sotto il Mare", "Il Re delle Ombre" and the series
"La Stirpe del Vento".
His most recent work is "Il Regno delle Ombre", a fantasy saga made up of
two books. His works have been translated into several languages and have
won numerous literary prizes.
Calamari has also written some collections of fantasy short stories and
some works of fiction for adults.
He is also a well-known member of the Italian fantasy scene and one of the
most prolific authors of his genre."

I write about it here

https://medium.com/@calamarim/cassandra-crossing-i-social-ai-tempi-di-gpt-3-f3b95efb8a53

and this is the series of articles on the subject

https://calamarim.medium.com/list/false-intelligenze-artificiali-b38c8544c663

Warm regards.


On Thu, 2023-02-16 at 21:27 +, Diego Giorio wrote:
Anyway, since tonight we're playing with biographies, I tried with mine.
Obviously I'm not as famous as the Professor, and I had to insist and
prompt it a bit before getting to something concrete (and not very
precise). Incidentally, doing the same search with Google returns many
more, and more relevant, results:

who is diego giorio?

I have no information about a person named Diego Giorio. Could you give me
more information about the context in which you heard this name? That way,
I could try to help you find the information you are looking for.

[dgio...@hotmail.com]
diego giorio demographic articles

I'm sorry, but I found no information about an author named Diego Giorio
specializing in demographic articles. Could you provide more information
about the source where you found these articles or the context in which
you heard this name? That way, I could try to help you find the
information you are looking for.

[dgio...@hotmail.com]
dieog giorio demographic articles sepel

I'm sorry, but even a search with the keywords "Diego Giorio" and "sepel"
returned no significant results about the person or the demographic
articles you are referring to. Could you provide more details about the
context in which you heard about this author or the demographic articles?
That way, I could try to help you find the information you are looking for.

[dgio...@hotmail.com]
https://formazione.sepel.it/docente?id_docente=8

Thank you for providing further information. The web page you refer to
appears to be the profile of a lecturer named Diego Giorio on the website
of the Italian company Sepel, specialized in corporate and professional
training.

According to his biography on the Sepel site, Diego Giorio is a senior
consultant with over 20 years of experience in the field of social and
market research, with specific skills in demography, statistics, data
analysis and modeling. He is also described as an expert in designing and
conducting research studies, as well as in presenting and interpreting
their results.

It seems that Diego Giorio has extensive experience in publishing research
articles and reports in several areas, including demography, economics,
education and the environment. This could be why you found references to
him and his demographic articles.

I hope this information is useful.

However, I find it interesting that it had no information in memory, but
managed to read the page I pointed to and draw some conclusions.

Good evening, everyone

D.


From: Stefano Quintarelli
mailto:stef...@quintarelli.it>>
Sent: Thursday, February 16, 2023 7:58 PM
To: Diego Giorio mailto:dgio...@hotmail.com>>; Ugo Pagallo
mailto:ugo.paga...@unito.it>>; NEXA_lista
mailto:nexa@server-nexa.polito.it>>
Subject: Re: [nexa] Sad Announcement (about Chat GPT)

my instance again:
Enter your text:
is ugo pagallo still alive ?

Yes, the pugo pagallo is still living. It is a species of wild duck that
lives in Europe, Asia and North Africa.

(not true, ed., but it's funny)


Enter your text:
is professor ugo pagallo still alive ?

Yes, Ugo Pagallo is still alive. He is a professor of law at the
University of Turin, where he also teaches legal informatics.


ciao, s.

On 16/02/23 20:44, Diego Giorio wrote:
> Among other things, the "date of death" is different from the one that
> was indicated before.
>
> I tried in Italian: "ch

[webkit-changes] [WebKit/WebKit] a436f1: [GCC] Unreviewed, build fix for Ubuntu 20.04 after...

2023-02-16 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: a436f19b905cdf7a0b129238a1e21c09122678ac
  
https://github.com/WebKit/WebKit/commit/a436f19b905cdf7a0b129238a1e21c09122678ac
  Author: Diego Pino Garcia 
  Date:   2023-02-16 (Thu, 16 Feb 2023)

  Changed paths:
M Source/WebCore/html/canvas/EXTDisjointTimerQuery.cpp

  Log Message:
  ---
  [GCC] Unreviewed, build fix for Ubuntu 20.04 after 260367@main
https://bugs.webkit.org/show_bug.cgi?id=252449

Add static cast from "unsigned long" to "unsigned long long" (supported by
WebGLAny).

* Source/WebCore/html/canvas/EXTDisjointTimerQuery.cpp:
(WebCore::EXTDisjointTimerQuery::getQueryObjectEXT):

Canonical link: https://commits.webkit.org/260428@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: [nexa] Sad Announcement (about Chat GPT)

2023-02-16 Thread Diego Giorio
Anyway, since tonight we're playing with biographies, I tried with mine.
Obviously I'm not as famous as the Professor, and I had to insist and
prompt it a bit before getting to something concrete (and not very
precise). Incidentally, doing the same search with Google returns many
more, and more relevant, results:

who is diego giorio?

I have no information about a person named Diego Giorio. Could you give me
more information about the context in which you heard this name? That way,
I could try to help you find the information you are looking for.

[dgio...@hotmail.com]
diego giorio demographic articles

I'm sorry, but I found no information about an author named Diego Giorio
specializing in demographic articles. Could you provide more information
about the source where you found these articles or the context in which
you heard this name? That way, I could try to help you find the
information you are looking for.

[dgio...@hotmail.com]
dieog giorio demographic articles sepel

I'm sorry, but even a search with the keywords "Diego Giorio" and "sepel"
returned no significant results about the person or the demographic
articles you are referring to. Could you provide more details about the
context in which you heard about this author or the demographic articles?
That way, I could try to help you find the information you are looking for.

[dgio...@hotmail.com]
https://formazione.sepel.it/docente?id_docente=8

Thank you for providing further information. The web page you refer to
appears to be the profile of a lecturer named Diego Giorio on the website
of the Italian company Sepel, specialized in corporate and professional
training.

According to his biography on the Sepel site, Diego Giorio is a senior
consultant with over 20 years of experience in the field of social and
market research, with specific skills in demography, statistics, data
analysis and modeling. He is also described as an expert in designing and
conducting research studies, as well as in presenting and interpreting
their results.

It seems that Diego Giorio has extensive experience in publishing research
articles and reports in several areas, including demography, economics,
education and the environment. This could be why you found references to
him and his demographic articles.

I hope this information is useful.

However, I find it interesting that it had no information in memory, but
managed to read the page I pointed to and draw some conclusions.

Good evening, everyone

D.

From: Stefano Quintarelli 
Sent: Thursday, February 16, 2023 7:58 PM
To: Diego Giorio ; Ugo Pagallo ;
NEXA_lista 
Subject: Re: [nexa] Sad Announcement (about Chat GPT)

my instance again:
Enter your text:
is ugo pagallo still alive ?

Yes, the pugo pagallo is still living. It is a species of wild duck that
lives in Europe, Asia and North Africa.

(not true, ed., but it's funny)


Enter your text:
is professor ugo pagallo still alive ?

Yes, Ugo Pagallo is still alive. He is a professor of law at the
University of Turin, where he also teaches legal informatics.


ciao, s.

On 16/02/23 20:44, Diego Giorio wrote:
> Among other things, the "date of death" is different from the one that
> was indicated before.
>
> I tried in Italian: "who is Ugo Pagallo?"
>
> who is Ugo Pagallo?
>
> I'm sorry, but I have no information about a person named "Ugo Pagallo".
> It might be useful to provide further information about who this person
> is, for example their profession or the context in which they are
> mentioned, so that I can help you better.
>
> Improving the question:
>
> Ugo Pagallo university professor
>
> Ugo Pagallo is an Italian university professor of philosophy of law at
> the Law Department of the University of Turin. He has published numerous
> articles and books on the intersection of law, technology and
> philosophy, with particular attention to the topics of artificial
> intelligence, robotics and the legal liability of machines. He was also
> one of the founding members of the Italian association for the promotion
> of philosophy of law and social theory (AIFDTS) and is a member of the
> editorial board of several academic journals in the field of philosophy
> of law.
>
> Finally, the question that triggered all this:
>
> Is Ugo Pagallo, university professor, still alive?
>
> Yes, to the best of my knowledge, Ugo Pagallo is still alive. As of my
> last training, I have received no infor

Re: [nexa] Sad Announcement (about Chat GPT)

2023-02-16 Thread Diego Giorio
Among other things, the "date of death" is different from the one that was
indicated before.

I tried in Italian: "who is Ugo Pagallo?"

who is Ugo Pagallo?

I'm sorry, but I have no information about a person named "Ugo Pagallo".
It might be useful to provide further information about who this person
is, for example their profession or the context in which they are
mentioned, so that I can help you better.

Improving the question:

Ugo Pagallo university professor

Ugo Pagallo is an Italian university professor of philosophy of law at the
Law Department of the University of Turin. He has published numerous
articles and books on the intersection of law, technology and philosophy,
with particular attention to the topics of artificial intelligence,
robotics and the legal liability of machines. He was also one of the
founding members of the Italian association for the promotion of
philosophy of law and social theory (AIFDTS) and is a member of the
editorial board of several academic journals in the field of philosophy
of law.

Finally, the question that triggered all this:

Is Ugo Pagallo, university professor, still alive?

Yes, to the best of my knowledge, Ugo Pagallo is still alive. As of my
last training, I have received no information to the contrary, so I can
presume he is still alive. However, since I do not have access to
real-time information and since some months or years may have passed since
the last update of my knowledge, I advise you to check more recent sources
to be certain about his current situation.

I am not able to formulate hypotheses, but if anyone were able to explain
what is going on I would be very curious

Good evening, everyone

D.

From: Stefano Quintarelli 
Sent: Thursday, February 16, 2023 7:32 PM
To: Diego Giorio ; Ugo Pagallo ;
NEXA_lista 
Subject: Re: [nexa] Sad Announcement (about Chat GPT)

this is my own instance of ChatGPT
it pretty much answers Diego's question

who is Ugo Pagallo ?

Ugo Pagallo is an Italian law professor and legal theorist. He is a professor 
of European
law at the University of Turin, where he has taught since 2005. He is a leading 
expert on
the European Union legal order, and has written extensively on EU law, 
international law,
and comparative law. He is the author of several books, including European Law: 
An
Introduction, and The European Union: A Legal Theory.

Enter your text:
who was ugo pagallo ?

Ugo Pagallo was an Italian lawyer, jurist, and legal scholar. He was considered 
one of the
most influential legal theorists of the twentieth century. He is best known for 
his work
on the theory of legal positivism, which is the idea that laws are created by 
the state
and are separate from morality. He also wrote extensively on the philosophy of 
law and
legal systems, as well as the history of law.

Enter your text:
when ugo pagallo passed away ?

Ugo Pagallo passed away on April 5, 2020.


if we think of a datum, of a structure pointing to that datum and of a
system that wraps a natural-language answer around it, we may well wonder
how it can say one thing and its opposite.

but if we think that the datum is no longer there, that there is a cloud
of data crumbs scattered across the cells of a gigantic matrix, and of a
system that groups and associates pieces of that matrix based on the
weights of another matrix, maybe we understand more easily that this
beautiful technology produces beautiful smoothies of weights.

ciao, s.

On 16/02/23 19:57, Diego Giorio wrote:
> Well, first of all congratulations on still being among us.
>
> But, beyond the irony and the obvious considerations on the reliability
> of these (overall remarkable, by the way) systems, a question arises
> spontaneously: how did it happen? The question on the intellectual
> property of the produced texts also turned out to be wrong, but in that
> case it seems to have been established that the answer was based on
> outdated information. In this case the AI was very specific, so it would
> be interesting to understand how the error came about. Moreover, the
> other information seems accurate to me. A namesake? Some episode that
> happened on that date that may have been mistaken for a death, perhaps
> helped along by some translation error?
>
> Anyway, the most important point remains that it was an error!
>
> Warm regards
> D.
> --
> *From:* nexa  on behalf of Ugo Pagallo
> 
> *Sent:* Thursday, February 16, 2023 6:32 PM
> *To:* NEXA_lista 
> *Subje

Re: rsync and symbolic links

2023-02-16 Thread Diego Zuccato
Hard links cannot be used on directories (what happens if you hard-link a
subdirectory to a parent directory? a nice loop!) nor across different
filesystems (a hard link is a reference to the inode).

Maybe that's precisely why "generic" backup systems are usually more
complex than a bash script? :)

Personally, moreover, I would avoid extreme deduplication: after all,
isn't the whole point of a backup having multiple copies of the data? And
what if a sector holding a heavily deduplicated file (maybe because it
hadn't been modified in years, only read) turns out to be unreadable right
when a restore becomes necessary? The 'full' backups are best done without
deduplication and on separate media.

The golden rule of backups doesn't seem to have been mentioned yet: 3-2-1
That is, 3 copies of the data, on at least two different kinds of media,
and one offline copy. If a piece of data doesn't deserve such protection,
it's not important data.
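As a side note on the original question: unless I'm misreading the man
page, rsync already has an option for exactly that case, worth testing on
your version:

  # symlinks that resolve inside the copied tree are kept as symlinks,
  # those pointing outside it are copied as real files/directories
  rsync -a --copy-unsafe-links /source/ /destination/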


Diego

On 15/02/2023 21:51, Davide Prina wrote:

Piviul wrote:

Davide Prina wrote:

Is it possible to make rsync keep symbolic links when they point to files
or directories inside the tree it is copying, but turn them into files
and directories when they are external?

Why don't you use hard links for the "external" files?

I hadn't thought of that, I've always preferred not to use them... if I'm
not mistaken you cannot make them on directories, only on files.

I think the best solution is to invert the direction of the symbolic link,
as indicated in the other reply.

Ciao
Davide

--
My privacy is none of your business
https://noyb.eu/it



--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



Re: [slurm-users] GPUs not available after making use of all threads?

2023-02-14 Thread Diego Zuccato
Not a debate, we're saying nearly the same thing but at different 
"granularity". If you consider the core as a whole that's the effect you 
see.


But a core is composed of different units (fetcher, decode/execute, 
registers, ALU, FPU, MMU, etc). The concept behind hyperthreading is 
having some of these units duplicated, while some of the others (notably 
the MMU, caches and busses) remain shared.
The doubled FPU is a nearly perfect example where multithreading can 
nearly double throughput: being a relatively slow component, if you 
interleave correctly the two threads, while one FPU is working you load 
the other with no (or minimal) interference. OTOH if you mostly do 
integer math or logical operations, you'll see small/no improvement.


Diego

Il 14/02/2023 17:28, Brian Andrus ha scritto:

Diego,

Not to start a debate, I guess it is in how you look at it.

 From Intel's descriptions:

How does Hyper-Threading work? When Intel® Hyper-Threading Technology is 
active, the CPU exposes two execution contexts per physical core. This 
means that one physical core now works like two “logical cores” that can 
handle different software threads. The ten-core Intel® Core™ i9-10900K
<https://www.intel.com/content/www/us/en/gaming/i9-desktop-processors-for-gaming.html>
processor, for example, has 20 threads when Hyper-Threading is enabled.


Two logical cores can work through tasks more efficiently than a 
traditional single-threaded core. /By taking advantage of idle time when 
the core would formerly be waiting for other tasks to complete/, Intel® 
Hyper-Threading Technology improves CPU throughput (by up to 30% in 
server applications^3 ).


So if we are creating code that is hypothetically 100% efficient (it can 
use the CPU 100% of the time), there would be no 'idle' time for another 
process. If work is done on that other process, it would be at the 
expense of the 100% efficiency enjoyed by our 'perfect' process.


Of course, the true performance answer lies in how any of the processes 
work, which is why some of us do so many experimental runs of jobs and 
gather timings. We have yet to see a 100% efficient process, but folks 
are improving things all the time.


Brian Andrus

On 2/13/2023 9:56 PM, Diego Zuccato wrote:

I think that's incorrect:
> The concept of hyper-threading is not doubling cores. It is a single
> core that can 'instantly' switch work from one process to another.
> Only one is being worked on at any given time.

A core can have multiple (usually 2) independent execution pipelines, 
so that multiple instructions from different threads run concurrently. 
It does not switch from one to the other.
But it does have some shared resources, like the MMU and sometimes the 
FPU (maybe only on older AMD processors). Having a single MMU means 
that all the instructions running on a core must have the same "view" 
of the memory space, and that means that they must come from a single 
process. IOW that they're multiple threads of a single process.


If the sw you're going to run makes good use of multithreading, having 
hyperthreading can be a great boost. If the sw only uses multitasking,
then hyperthreading is a net loss (not only you can't use half the 
available threads, you also usually get slower clock speeds).


Diego

Il 13/02/2023 15:29, Brian Andrus ha scritto:

Hermann makes a good point.

The concept of hyper-threading is not doubling cores. It is a single 
core that can 'instantly' switch work from one process to another. 
Only one is being worked on at any given time.


So if I request a single core on a hyper-threaded system, I would not 
be pleased to find you are giving it to someone else 1/2 the time. I 
would need to have the actual core assigned. If I request multiple 
cores and my app is only going to affect itself, then I _may_ benefit 
from hyper-threading.


In general, enabling hyper-threading is not the best practice for 
efficient HPC jobs. The goal is that every process is utilizing the 
CPU as close to 100% as possible, which would render hyper-threading 
moot.


Brian Andrus

On 2/13/2023 12:15 AM, Hermann Schwärzler wrote:

Hi Sebastian,

I am glad I could help (although not exactly as expected :-).

With your node-configuration you are "circumventing" how Slurm 
behaves, when using "CR_Core": if you read the respective part in


https://slurm.schedmd.com/slurm.conf.html

it says:

"CR_Core
  [...] On nodes with hyper-threads, each thread is counted as a CPU 
to satisfy a job's resource requirement, but multiple jobs are not 
allocated threads on the same core."


That's why you got a full core (both threads) when allocating a 
singe CPU. Or e.g. four threads when allocating three CPUs asf.


"Lying" to Slurm about the actual hardware-setup helps to avoid this 
behaviour but are you really confid

Re: [slurm-users] GPUs not available after making use of all threads?

2023-02-13 Thread Diego Zuccato

I think that's incorrect:
> The concept of hyper-threading is not doubling cores. It is a single
> core that can 'instantly' switch work from one process to another.
> Only one is being worked on at any given time.

A core can have multiple (usually 2) independent execution pipelines, so 
that multiple instructions from different threads run concurrently. It 
does not switch from one to the other.
But it does have some shared resources, like the MMU and sometimes the 
FPU (maybe only on older AMD processors). Having a single MMU means that 
all the instructions running on a core must have the same "view" of the 
memory space, and that means that they must come from a single process. 
IOW that they're multiple threads of a single process.


If the sw you're going to run makes good use of multithreading, having 
hyperthreading can pe a great boost. If the sw only uses multitasking, 
then hyperthreading is a net loss (not only you can't use half the 
available threads, you also usually get slower clock speeds).


Diego

Il 13/02/2023 15:29, Brian Andrus ha scritto:

Hermann makes a good point.

The concept of hyper-threading is not doubling cores. It is a single 
core that can 'instantly' switch work from one process to another. Only 
one is being worked on at any given time.


So if I request a single core on a hyper-threaded system, I would not be 
pleased to find you are giving it to someone else 1/2 the time. I would 
need to have the actual core assigned. If I request multiple cores and 
my app is only going to affect itself, then I _may_ benefit from 
hyper-threading.


In general, enabling hyper-threading is not the best practice for 
efficient HPC jobs. The goal is that every process is utilizing the CPU 
as close to 100% as possible, which would render hyper-threading moot.


Brian Andrus

On 2/13/2023 12:15 AM, Hermann Schwärzler wrote:

Hi Sebastian,

I am glad I could help (although not exactly as expected :-).

With your node-configuration you are "circumventing" how Slurm 
behaves, when using "CR_Core": if you read the respective part in


https://slurm.schedmd.com/slurm.conf.html

it says:

"CR_Core
  [...] On nodes with hyper-threads, each thread is counted as a CPU 
to satisfy a job's resource requirement, but multiple jobs are not 
allocated threads on the same core."


That's why you got a full core (both threads) when allocating a singe 
CPU. Or e.g. four threads when allocating three CPUs asf.


"Lying" to Slurm about the actual hardware-setup helps to avoid this 
behaviour but are you really confident with potentially running two 
different jobs on the hyper-threads of the same core?


Regards,
Hermann

On 2/12/23 22:04, Sebastian Schmutzhard-Höfler wrote:

Hi Hermann,

Using your suggested settings did not work for us.

When trying to allocate a single thread with --cpus-per-task=1, it 
still reserved a whole CPU (two threads). On the other hand, when 
requesting an even number of threads, it does what it should.


However, I could make it work by using

SelectTypeParameters=CR_Core
NodeName=nodename Sockets=2 CoresPerSocket=128 ThreadsPerCore=1

instead of

SelectTypeParameters=CR_Core
NodeName=nodename Sockets=2 CoresPerSocket=64 ThreadsPerCore=2

So your suggestion brought me in the right direction. Thanks!

If anyone thinks this is complete nonsense, please let me know!

Best wishes,

Sebastian

On 11.02.23 11:13, Hermann Schwärzler wrote:

Hi Sebastian,

we did a similar thing just recently.

We changed our node settings from

NodeName=DEFAULT CPUs=64 Boards=1 SocketsPerBoard=2 
CoresPerSocket=32 ThreadsPerCore=2


to

NodeName=DEFAULT Boards=1 SocketsPerBoard=2 CoresPerSocket=32 
ThreadsPerCore=2


in order to make use of individual hyper-threads possible (we use 
this in combination with

SelectTypeParameters=CR_Core_Memory).

This works as expected: after this, when e.g. asking for 
--cpus-per-task=4 you will get 4 hyper-threads (2 cores) per task 
(unless you also specify e.g. "--hint=nomultithread").


So you might try to remove the "CPUs=256" part of your 
node-specification to let Slurm do that calculation of the number of 
CPUs itself.



BTW: on a side-note: as most of our users do not bother to use 
hyper-threads or even do not want to as their programs might suffer 
from doing so, we made "--hint=nomultithread" the default in our 
installation by adding


CliFilterPlugins=cli_filter/lua

to our slurm.conf and creating a cli_filter.lua file in the same 
directory as slurm.conf, that contains this


function slurm_cli_setup_defaults(options, early_pass)
    options['hint'] = 'nomultithread'

    return slurm.SUCCESS
end

(see also 
https://github.com/SchedMD/slurm/blob/master/etc/cli_filter.lua.example).
So if user really want to use hyper-threads they have to add 
"--hint=multithread" to their jo

[webkit-changes] [WebKit/WebKit] a82dc0: [build.webkit.org] REGRESSION(260018@main): build-...

2023-02-13 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: a82dc0109e8dc9c14c8826a8be4e4395a3369d1c
  
https://github.com/WebKit/WebKit/commit/a82dc0109e8dc9c14c8826a8be4e4395a3369d1c
  Author: Diego Pino Garcia 
  Date:   2023-02-13 (Mon, 13 Feb 2023)

  Changed paths:
M Tools/CISupport/build-webkit-org/config.json

  Log Message:
  ---
  [build.webkit.org] REGRESSION(260018@main): build-webkit-org-unit-tests are 
failing
https://bugs.webkit.org/show_bug.cgi?id=252146

Reviewed by Jonathan Bedard and Carlos Alberto Lopez Perez.

Follow-up 260018@main. Add 'bot165' to the list of 'mac-ventura' bots.

* Tools/CISupport/build-webkit-org/config.json:

Canonical link: https://commits.webkit.org/260198@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


Re: [Gluster-users] Quick way to fix stale gfids?

2023-02-13 Thread Diego Zuccato

My volume is replica 3 arbiter 1, maybe that makes a difference?
Brick processes tend to die quite often (I have to restart glusterd at 
least once a day because "gluster v info | grep ' N '" reports at least 
one missing brick; sometimes even if all bricks are reported up I have 
to kill all glusterfs[d] processes and restart glusterd).


The 3 servers have 192GB RAM (that should be way more than enough!), 30 
data bricks and 15 arbiters (the arbiters share a single SSD).


And I noticed that some "stale file handle" are not reported by heal info.

root@str957-cluster:/# ls -l 
/scratch/extra/m**/PNG/PNGQuijote/ModGrav/fNL40/
ls: cannot access 
'/scratch/extra/m**/PNG/PNGQuijote/ModGrav/fNL40/output_21': Stale 
file handle

total 40
d?  ? ??   ?? output_21
...
but "gluster v heal cluster_data info |grep output_21" returns nothing. :(

Seems the other stale handles either got corrected by subsequent 'stat's 
or became I/O errors.


Diego.

Il 12/02/2023 21:34, Strahil Nikolov ha scritto:
The 2-nd error indicates conflicts between the nodes. The only way that 
could happen on replica 3 is gfid conflict (file/dir was renamed or 
recreated).


Are you sure that all bricks are online? Usually 'Transport endpoint is 
not connected' indicates a brick down situation.


First start with all stale file handles:
check the md5sum on all bricks. If it differs somewhere, delete the gfid 
and move the file away from the brick, then check in FUSE. If it's fine, 
touch it and the FUSE client will "heal" it.
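
A rough sketch of the md5sum comparison (hostnames and brick paths are 
placeholders, adjust to your layout):

# compare the same file on each replica brick
for host in node1 node2 node3; do
    ssh "$host" md5sum /bricks/brick1/path/to/file
done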


Best Regards,
Strahil Nikolov



On Tue, Feb 7, 2023 at 16:33, Diego Zuccato
 wrote:
The contents do not match exactly, but the only difference is the
"option shared-brick-count" line that sometimes is 0 and sometimes 1.

The command you gave could be useful for the files that still need
healing with the source still present, but the files related to the
stale gfids have been deleted, so "find -samefile" won't find anything.

For the other files reported by heal info, I saved the output to
'healinfo', then:
   for T in $(grep '^/' healinfo |sort|uniq); do stat /mnt/scratch$T >
/dev/null; done

but I still see a lot of 'Transport endpoint is not connected' and
'Stale file handle' errors :( And many 'No such file or directory'...

I don't understand the first two errors, since /mnt/scratch have been
freshly mounted after enabling client healing, and gluster v info does
not highlight unconnected/down bricks.

Diego

Il 06/02/2023 22:46, Strahil Nikolov ha scritto:
 > I'm not sure if the md5sum has to match, but at least the content
 > should do.
 > In modern versions of GlusterFS the client side healing is disabled,
 > but it's worth trying.
 > You will need to enable cluster.metadata-self-heal,
 > cluster.data-self-heal and cluster.entry-self-heal and then create a
 > small one-liner that identifies the names of the files/dirs from the
 > volume heal ,so you can stat them through the FUSE.
 >
 > Something like this:
 >
 >
 > for i in $(gluster volume heal  info | awk -F ''
'/gfid:/
 > {print $2}'); do find /PATH/TO/BRICK/ -samefile
 > /PATH/TO/BRICK/.glusterfs/${i:0:2}/${i:2:2}/$i | awk '!/.glusterfs/
 > {gsub("/PATH/TO/BRICK", "stat /MY/FUSE/MOUNTPOINT", $0); print
$0}' ; done
 >
 > Then Just copy paste the output and you will trigger the client side
 > heal only on the affected gfids.
 >
 > Best Regards,
 > Strahil Nikolov
 > On Monday, 6 February 2023 at 10:19:02 GMT+2, Diego Zuccato
 > <diego.zucc...@unibo.it> wrote:
 >
 >
 > Ops... Reincluding the list that got excluded in my previous
answer :(
 >
 > I generated md5sums of all files in vols/ on clustor02 and
compared to
 > the other nodes (clustor00 and clustor01).
 > There are differences in volfiles (shouldn't it always be 1,
since every
 > data brick is on its own fs? quorum bricks, OTOH, share a single
 > partition on SSD and should always be 15, but in both cases sometimes
 > it's 0).
 >
 > I nearly got a stroke when I saw diff output for 'info' files,
but once
 > I sorted 'em their contents matched. Pfhew!
 >
 > Diego
 >
 > Il 03/02/2023 19:01, Strahil Nikolov ha scritto:
 >  > This one doesn't look good:
 >  >
 >  >
 >  > [2023-02-03 07:45:46.896924 +] E [MSGID: 114079]
 >  > [clie

Prometheus metrics provider > upgrade datasketches lib

2023-02-13 Thread Diego Salvi
Hi,
I was looking into the prometheus-metrics-provider dependencies and I found a
dependency on com.yahoo.datasketches:sketches-core:0.8.3.
This is a very old implementation that predates Apache incubation. It has
since been ported from Yahoo to Apache, with 3.3.0 as the current version.
In the meantime it changed packages too.
Wouldn't it be better to upgrade the dependency to a stable one?
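
For reference, a sketch of the swap in the pom, assuming the Apache
coordinates (worth double-checking the exact artifact name):

<!-- assumed Apache DataSketches coordinates -->
<dependency>
  <groupId>org.apache.datasketches</groupId>
  <artifactId>datasketches-java</artifactId>
  <version>3.3.0</version>
</dependency>

The imports would change accordingly, from com.yahoo.sketches.* to
org.apache.datasketches.*.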

Diego Salvi


Re: [slurm-users] I just had a "conversation" with ChatGPT about working DMTCP, OpenMPI and SLURM. Here are the results

2023-02-12 Thread Diego Zuccato

Hi.

I'm no expert, but it seems ChatGPT is confusing "queued" and "running" 
jobs. I'm assuming you are interested in temporarily shutting down the 
slurmctld node for maintenance.


If the jobs are still queued (== not yet running), what do you need to 
save? The queue order is dynamically adjusted by slurmctld based on the 
selected factors; there's nothing special to save.

For the running jobs, OTOH, you have multiple solutions:
1) drain the cluster: safest but often impractical
2) checkpoint: seems fragile, expecially if jobs span multiple nodes
3) have a second slurmctld node (a small VM is sufficient) that takes over 
the cluster management when the master node is down (be *sure* the state 
dir is shared and quite fast!); see the sketch after this list
4) just hope you'll be able to recover the slurmctld node before a job 
completes *and* the timeouts expire
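
A minimal slurm.conf sketch for option 3 (hostnames are placeholders; 
the first SlurmctldHost is the primary, the second the backup):

SlurmctldHost=ctl-primary
SlurmctldHost=ctl-backup
StateSaveLocation=/shared/slurm/state

StateSaveLocation must point at storage both hosts can reach, and it 
should be fast.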


While 4 is relatively risky (you could end up with runaway jobs that 
you'll have to fix afterwards), it does not directly impact users: their 
jobs will run and complete/fail regardless of slurmctld state. At most 
the users won't receive a completion mail and they will be billed less 
than expected.


Diego

Il 10/02/2023 20:06, Analabha Roy ha scritto:

Hi,

I'm having some complex issues coordinating OpenMPI, SLURM, and DMTCP in 
my cluster. On a whim, I logged into ChatGPT and asked the AI about it.
It told me things that I couldn't find in the current version of the 
SLURM docs (I looked). Since ChatGPT is not always reliable, I reproduce 
the
contents of my chat session in my GitHub repository for peer review and 
commentary by you fine folks.


https://github.com/hariseldon99/buparamshavak/blob/main/chatgpt.md

I apologize for the poor formatting. I did this in a hurry, and my 
knowledge of markdown is rudimentary.


Please do comment on the veracity and reliability of the AI's response.

AR

--
Analabha Roy
Assistant Professor
Department of Physics 
<http://www.buruniv.ac.in/academics/department/physics>

The University of Burdwan <http://www.buruniv.ac.in/>
Golapbag Campus, Barddhaman 713104
West Bengal, India
Emails: dan...@utexas.edu, a...@phys.buruniv.ac.in, hariseldo...@gmail.com
Webpage: http://www.ph.utexas.edu/~daneel/


--
Diego Zuccato
DIFA - Dip. di Fisica e Astronomia
Servizi Informatici
Alma Mater Studiorum - Università di Bologna
V.le Berti-Pichat 6/2 - 40127 Bologna - Italy
tel.: +39 051 20 95786



[webkit-changes] [WebKit/WebKit] 77fb05: [build.webkit.org] WPE build factory doesn't honor...

2023-02-12 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 77fb058953fc9683460b3ce51d09a2526ce6f2e1
  
https://github.com/WebKit/WebKit/commit/77fb058953fc9683460b3ce51d09a2526ce6f2e1
  Author: Diego Pino Garcia 
  Date:   2023-02-12 (Sun, 12 Feb 2023)

  Changed paths:
M Tools/CISupport/build-webkit-org/factories.py
M Tools/CISupport/build-webkit-org/factories_unittest.py

  Log Message:
  ---
  [build.webkit.org] WPE build factory doesn't honor flag 
'--no-experimental-features'
https://bugs.webkit.org/show_bug.cgi?id=252125)

Reviewed by Carlos Alberto Lopez Perez.

Make WPE build factory skip jhbuild step if 'no-experimental-features' flag is 
set.

* Tools/CISupport/build-webkit-org/factories.py:
(Factory.__init__):
* Tools/CISupport/build-webkit-org/factories_unittest.py:
(TestExpectedBuildSteps):

Canonical link: https://commits.webkit.org/260177@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] d3724e: [GTK][WPE] Unreviewed, build fix for Ubuntu LTS/De...

2023-02-11 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: d3724e69e145b9f106971dc8cd6897d8b19ff368
  
https://github.com/WebKit/WebKit/commit/d3724e69e145b9f106971dc8cd6897d8b19ff368
  Author: Diego Pino Garcia 
  Date:   2023-02-11 (Sat, 11 Feb 2023)

  Changed paths:
M Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp
M Source/WebCore/testing/Internals.cpp

  Log Message:
  ---
  [GTK][WPE] Unreviewed, build fix for Ubuntu LTS/Debian stable after 
260132@main

* Source/WebCore/html/canvas/WebGLRenderingContextBase.cpp:
* Source/WebCore/testing/Internals.cpp:

Canonical link: https://commits.webkit.org/260150@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] d65f42: REGRESSION(260052@main): builtins-generator-tests ...

2023-02-09 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: d65f42e0696beed0a9931fbd061a307a97dbc4a1
  
https://github.com/WebKit/WebKit/commit/d65f42e0696beed0a9931fbd061a307a97dbc4a1
  Author: Diego Pino Garcia 
  Date:   2023-02-09 (Thu, 09 Feb 2023)

  Changed paths:
M 
Source/JavaScriptCore/Scripts/tests/builtins/expected/WebCoreJSBuiltins.h-result

  Log Message:
  ---
  REGRESSION(260052@main): builtins-generator-tests are failing
https://bugs.webkit.org/show_bug.cgi?id=252013

Reviewed by Ross Kirsling and Fujii Hironori.

* 
Source/JavaScriptCore/Scripts/tests/builtins/expected/WebCoreJSBuiltins.h-result:
 Update test expectations.

Canonical link: https://commits.webkit.org/260097@main


___
webkit-changes mailing list
webkit-changes@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-changes


[webkit-changes] [WebKit/WebKit] 27d7a5: [GLIB][WPE] Unreviewed test gardening, move baseli...

2023-02-09 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 27d7a57b1bfe0ac2d98c862e5180d4f10a462eca
  
https://github.com/WebKit/WebKit/commit/27d7a57b1bfe0ac2d98c862e5180d4f10a462eca
  Author: Diego Pino Garcia 
  Date:   2023-02-09 (Thu, 09 Feb 2023)

  Changed paths:
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/cloning-attributes-at-splitting-element.tentative-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-head.tentative_designMode=off&method=backspace-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-head.tentative_designMode=off&method=forwarddelete-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-head.tentative_designMode=on&method=backspace-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-head.tentative_designMode=on&method=forwarddelete-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-html.tentative_designMode=off&method=backspace-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-html.tentative_designMode=off&method=forwarddelete-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-html.tentative_designMode=on&method=backspace-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/delete-in-child-of-html.tentative_designMode=on&method=forwarddelete-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/editable-state-and-focus-in-shadow-dom-in-designMode.tentative-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=off&white-space=normal-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=off&white-space=pre-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=off&white-space=pre-line-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=off&white-space=pre-wrap-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=on&white-space=normal-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=on&white-space=pre-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=on&white-space=pre-line-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-head.tentative_designMode=on&white-space=pre-wrap-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=off&white-space=normal-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=off&white-space=pre-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=off&white-space=pre-line-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=off&white-space=pre-wrap-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=on&white-space=normal-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=on&white-space=pre-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=on&white-space=pre-line-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-child-of-html.tentative_designMode=on&white-space=pre-wrap-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-inline-editing-host.tentative_white-space=normal&display=block&method=enter-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/editing/other/insertparagraph-in-inline-editing-host.tentative_white-spac

[webkit-changes] [WebKit/WebKit] 1ead7c: [GLIB][WPE] Unreviewed test gardening, update base...

2023-02-08 Thread Diego Pino
  Branch: refs/heads/main
  Home:   https://github.com/WebKit/WebKit
  Commit: 1ead7ce9041f8f9f1098d208eebd151cf79a0c88
  
https://github.com/WebKit/WebKit/commit/1ead7ce9041f8f9f1098d208eebd151cf79a0c88
  Author: Diego Pino Garcia 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M LayoutTests/platform/glib/TestExpectations
A LayoutTests/platform/glib/fast/dom/Element/getClientRects-expected.txt
A LayoutTests/platform/glib/fast/dom/Window/webkitConvertPoint-expected.txt
A 
LayoutTests/platform/glib/fast/events/input-events-paste-rich-datatransfer-expected.txt
A LayoutTests/platform/glib/fast/events/keyboardevent-code-expected.txt
A LayoutTests/platform/glib/fast/events/mouseup-from-button2-expected.txt
A LayoutTests/platform/glib/fast/events/selectstart-by-drag-expected.txt
A LayoutTests/platform/glib/fast/events/shadow-event-path-expected.txt
A 
LayoutTests/platform/glib/fast/parser/nested-fragment-parser-crash-expected.txt
A LayoutTests/platform/glib/fast/parser/xml-colon-entity-expected.txt
A LayoutTests/platform/glib/fast/preloader/document-write-2-expected.txt
A 
LayoutTests/platform/glib/http/tests/cookies/double-quoted-value-with-semi-colon-expected.txt
A 
LayoutTests/platform/glib/http/tests/dom/document-attributes-null-handling-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/cookies/attributes/domain.sub-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/css/selectors/media/media-playback-state-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/fetch/metadata/trailing-dot.https.sub.any-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/fetch/metadata/trailing-dot.https.sub.any.serviceworker-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/fetch/metadata/trailing-dot.https.sub.any.sharedworker-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/fetch/metadata/trailing-dot.https.sub.any.worker-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/html/dom/documents/dom-tree-accessors/Document.currentScript-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/infrastructure/server/wpt-server-http.sub-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/navigation-timing/nav2_test_redirect_chain_xserver_partial_opt_in-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/service-workers/service-worker/fetch-audio-tainting.https-expected.txt
A 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/service-workers/service-worker/update-import-scripts.https-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/service-workers/service-worker/webvtt-cross-origin.https-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/workers/SharedWorker_dataUrl-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/xhr/event-error-order.sub-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/xhr/event-error.sub.any.worker-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/xhr/send-network-error-async-events.sub-expected.txt
R 
LayoutTests/platform/glib/imported/w3c/web-platform-tests/xhr/send-network-error-sync-events.sub-expected.txt
A 
LayoutTests/platform/glib/svg/foreignObject/fO-parent-display-changes-expected.txt
A LayoutTests/platform/glib/svg/hixie/text/003a-expected.txt
A LayoutTests/platform/glib/svg/hixie/viewbox/003-expected.txt
R 
LayoutTests/platform/glib/svg/repaint/text-repainting-after-modifying-container-transform-repaint-rects-expected.txt
A LayoutTests/platform/glib/svg/text/ems-display-none-expected.txt
A LayoutTests/platform/glib/svg/text/exs-display-none-expected.txt
A LayoutTests/platform/glib/svg/text/text-midpoint-split-bug-expected.txt
A LayoutTests/platform/glib/svg/zoom/page/zoom-zoom-coords-expected.txt
A LayoutTests/platform/glib/tables/mozilla/bugs/bug149275-2-expected.txt
A 
LayoutTests/platform/glib/transforms/3d/point-mapping/3d-point-mapping-4-expected.txt
M LayoutTests/platform/gtk/TestExpectations
R LayoutTests/platform/gtk/fast/dom/Element/getClientRects-expected.txt
R LayoutTests/platform/gtk/fast/dom/Window/webkitConvertPoint-expected.txt
R 
LayoutTests/platform/gtk/fast/events/input-events-paste-rich-datatransfer-expected.txt
R LayoutTests/platform/gtk/fast/events/keyboardevent-code-expected.txt
R LayoutTests/platform/gtk/fast/events/mouseup-from-button2-expected.txt
R LayoutTests/platform/gtk/fast/events/selectstart-by-drag-expected.txt
R LayoutTests/platform/gtk/fast/events/shadow-event-path-expected.txt
R 
LayoutTests/platform/gtk/fast/parser/nested-fragment-parser-crash-expected.txt
R LayoutTests/platform/gtk/fast/parser/xml-colon

[jenkinsci/radargun-plugin] a5f4bb: 1.17-SNAPSHOT

2023-02-08 Thread 'Diego Lovison' via Jenkins Commits
  Branch: refs/heads/master
  Home:   https://github.com/jenkinsci/radargun-plugin
  Commit: a5f4bb119bd9b822cf369d1088f7f762f159e139
  
https://github.com/jenkinsci/radargun-plugin/commit/a5f4bb119bd9b822cf369d1088f7f762f159e139
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  1.17-SNAPSHOT


  Commit: ec9f72e473e176312ee0b1d5bfe31af895b6daca
  
https://github.com/jenkinsci/radargun-plugin/commit/ec9f72e473e176312ee0b1d5bfe31af895b6daca
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare release radargun-1.17


  Commit: bd4a6eb8318e584dc59603c1534a35c08f11d9c9
  
https://github.com/jenkinsci/radargun-plugin/commit/bd4a6eb8318e584dc59603c1534a35c08f11d9c9
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare for next development iteration


  Commit: 060a2bb85a064f27931eb03e22eedd10bb3b940b
  
https://github.com/jenkinsci/radargun-plugin/commit/060a2bb85a064f27931eb03e22eedd10bb3b940b
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  Merge pull request #16 from diegolovison/build3

Build3


Compare: 
https://github.com/jenkinsci/radargun-plugin/compare/490486cdc2c7...060a2bb85a06

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/radargun-plugin/push/refs/heads/master/490486-060a2b%40github.com.


[jenkinsci/radargun-plugin] a5f4bb: 1.17-SNAPSHOT

2023-02-08 Thread 'Diego Lovison' via Jenkins Commits
  Branch: refs/heads/build3
  Home:   https://github.com/jenkinsci/radargun-plugin
  Commit: a5f4bb119bd9b822cf369d1088f7f762f159e139
  
https://github.com/jenkinsci/radargun-plugin/commit/a5f4bb119bd9b822cf369d1088f7f762f159e139
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  1.17-SNAPSHOT


  Commit: ec9f72e473e176312ee0b1d5bfe31af895b6daca
  
https://github.com/jenkinsci/radargun-plugin/commit/ec9f72e473e176312ee0b1d5bfe31af895b6daca
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare release radargun-1.17


Compare: 
https://github.com/jenkinsci/radargun-plugin/compare/a5f4bb119bd9%5E...ec9f72e473e1

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/radargun-plugin/push/refs/heads/build3/00-ec9f72%40github.com.


[jenkinsci/radargun-plugin] bd4a6e: [maven-release-plugin] prepare for next developmen...

2023-02-08 Thread 'Diego Lovison' via Jenkins Commits
  Branch: refs/heads/build3
  Home:   https://github.com/jenkinsci/radargun-plugin
  Commit: bd4a6eb8318e584dc59603c1534a35c08f11d9c9
  
https://github.com/jenkinsci/radargun-plugin/commit/bd4a6eb8318e584dc59603c1534a35c08f11d9c9
  Author: Diego Lovison 
  Date:   2023-02-08 (Wed, 08 Feb 2023)

  Changed paths:
M pom.xml

  Log Message:
  ---
  [maven-release-plugin] prepare for next development iteration


-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/radargun-plugin/push/refs/heads/build3/ec9f72-bd4a6e%40github.com.


[jenkinsci/radargun-plugin]

2023-02-08 Thread 'Diego Lovison' via Jenkins Commits
  Branch: refs/tags/radargun-1.17
  Home:   https://github.com/jenkinsci/radargun-plugin

-- 
You received this message because you are subscribed to the Google Groups 
"Jenkins Commits" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to jenkinsci-commits+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/jenkinsci-commits/jenkinsci/radargun-plugin/push/refs/tags/radargun-1.17/00-e0dc75%40github.com.

