Re: [LincolnTalk] Favorite auto body repair shops

2024-05-09 Thread Joan Kimball
State Road Autobody in Wayland.  They are really good, reasonable and
helpful.  I am not going anywhere else. Joan

On Wed, May 8, 2024, 2:56 PM Emily Haslett  wrote:

> Hi Lincoln friends,
>
> Do you have an auto body shop you love?
>
> With thanks!
> Emily Haslett
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



Re: Default database script for a brand new install

2024-05-07 Thread Joan Moreau

Hi Adam

thank you for your answer

But I do not see any SQL script there.

Where exactly is the initial schema needed to start Fineract?

I am surely not the only person just trying to start the engine.

Thank you in advance




On 7 May 2024 22:56:14 Ádám Sághy  wrote:

Hi

You can find it in multiple places :)

fineract-provider/src/main/resources/db/changelog is the common one:
- the “tenant-store” directory contains the Liquibase scripts that build 
the tenant-store tables and entries (the fineract_tenants database)
- the “tenant” directory contains the Liquibase scripts that build the 
tenant tables (the fineract_default database)


Also, we have started to modularize Fineract, so you might find additional 
Liquibase scripts in the modules (like: 
fineract-loan/src/main/resources/db/changelog/tenant/module/loan)


Liquibase is enabled by default, so it should be executed when you run 
the application as a jar or via bootRun, unless you are executing 
Fineract with the “liquibase-only” profile or the FINERACT_LIQUIBASE_ENABLED 
environment variable is set to FALSE.
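That default-on behaviour can be sketched in shell. The variable name comes from the message above; the echoed messages are purely illustrative:

```shell
# Sketch: Liquibase migrations run unless FINERACT_LIQUIBASE_ENABLED is
# explicitly set to FALSE. With the variable unset, the default applies.
FINERACT_LIQUIBASE_ENABLED="${FINERACT_LIQUIBASE_ENABLED:-TRUE}"
if [ "$FINERACT_LIQUIBASE_ENABLED" = "FALSE" ]; then
  echo "Liquibase disabled: tables such as m_adhoc will never be created"
else
  echo "Liquibase enabled: changelogs will run on startup"
fi
```

Running Fineract with the variable unset therefore creates the schema on first start; only an explicit FALSE skips it.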


I hope it helps.

The above details apply to the latest versions of Fineract 
(apache/develop, and 1.8.0 and 1.9.0 if I remember correctly).


Regards,
Adam


On 7 May 2024, at 16:47, Joan Moreau  wrote:

Hi

I would really appreciate knowing where to find the MariaDB schema to load 
before starting the process.


For now, it asks for tables that of course do not exist, as they were never created.

Thank you very much in advance

On 6 May 2024 16:38:01 Joan Moreau  wrote:

I copy-pasted all the variables, fixing _tenants into _default.
I still get:
Caused by: java.sql.SQLSyntaxErrorException: (conn=385204) Table 
'fineract_tenants.m_adhoc' doesn't exist
    at org.mariadb.jdbc.export.ExceptionFactory.createException(ExceptionFactory.java:289)
    at org.mariadb.jdbc.export.ExceptionFactory.create(ExceptionFactory.java:378)

I really need to know where the initial schema is, to create the required 
tables in MariaDB.

Thank you in advance

On 5/6/24 16:19, Joan Moreau wrote:

You sent me a link to XML files.

I just need
(a) the actual variables to set (the real ones, not some approximate names)
(b) the location of the damn default schema that creates the initial tables


On 2024-05-06 16:09, VICTOR MANUEL ROMERO RODRIGUEZ wrote:

Hello Joan,

Apache Fineract uses Liquibase, take a look at:

https://github.com/apache/fineract/tree/develop/fineract-provider/src/main/resources/db/changelog
https://docs.liquibase.com/home.html

The Apache Fineract variables for MariaDB on Docker are the same as the ones 
used for native installations, so just try them. Please read the files included in 
the PR that I shared; there is a startup script in it.


Regards



On Mon, 6 May 2024 at 2:04, Joan Moreau () wrote:
Note: I am not using docker but a normal server with a normal MariaDB 
(serving multiple applications)


On 5/6/24 16:02, Joan Moreau wrote:
Also, the variables you mentioned do not match the variables available in the 
source code.


In addition to the start-up schema, where do I find the actual variables to 
set to start Fineract?


On 5/6/24 16:00, Joan Moreau wrote:

Thank you, but that does not answer the question:
- Where do I find the initial database schema to create the initial tables?
Thank you very much

On 5/5/24 10:20, VICTOR MANUEL ROMERO RODRIGUEZ wrote:

Hello,

1. No, you can use any user that you require (root, fineract, mariadbuser, 
customuser, exampleuser, etc.).
2. The database names are defined in these variables (please notice the 
difference between tenantS and tenant):

FINERACT_TENANTS_DB_NAME - for the tenants database
FINERACT_TENANT_DEFAULT_DB_NAME - for the default database
3. You are setting the database values in a mixed way: you have set the 
default-tenant variables to the tenants database and the tenants variables to 
the default database.
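As a minimal sketch of point 3, here are the two variables set the straight way (the values are the default database names named in the thread):

```shell
# Tenant-store database vs. per-tenant default database:
# do not swap these two values.
export FINERACT_TENANTS_DB_NAME=fineract_tenants          # tenant store
export FINERACT_TENANT_DEFAULT_DB_NAME=fineract_default   # default tenant data
echo "$FINERACT_TENANTS_DB_NAME / $FINERACT_TENANT_DEFAULT_DB_NAME"
```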


By the way, if you are using the develop branch or a previous version (since 
1.6), MariaDB is the supported database (PostgreSQL as well since version 
1.7.0); if you use MySQL you could face some issues.


If you need to use MariaDB, check this Docker Compose file as a reference:

https://github.com/apache/fineract/blob/develop/docker-compose.yml

And these are the variables linked to that Docker Compose file:

https://github.com/apache/fineract/blob/develop/config/docker/env/fineract.env
https://github.com/apache/fineract/blob/develop/config/docker/env/fineract-common.env
https://github.com/apache/fineract/blob/develop/config/docker/env/fineract-mariadb.env

In this PR you can see how to start the Apache Fineract jar file:

https://github.com/apache/fineract/pull/3879/files

java -jar fineract-provider-*.jar -Duser.home=/tmp -Dfile.encoding=UTF-8 
-Duser.timezone=UTC -Djava.security.egd=file:/dev/./urandom


Best regards

Victor




On Sat, 4 May 2024 at 8:55, Joan Moreau () wrote:

Hi

Where is the default schema of the MySQL database stipulated?

I face 3 issues

1 - The software seems to need to create databases as root (?)
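On the root issue: before the first start, the two schemas have to exist and the configured user needs rights on them, since Liquibase connects as that user rather than as root. A hypothetical preparation step (database and user names taken from the thread; the grant scope is an assumption, not something the docs prescribe):

```sql
-- Run once as an administrative user before the first Fineract start.
CREATE DATABASE IF NOT EXISTS fineract_tenants;
CREATE DATABASE IF NOT EXISTS fineract_default;
GRANT ALL PRIVILEGES ON fineract_tenants.* TO 'fineract'@'localhost';
GRANT ALL PRIVILEGES ON fineract_default.* TO 'fineract'@'localhost';
FLUSH PRIVILEGES;
```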

Re: [LincolnTalk] What am I doing wrong with my hummingbird feeder?

2024-05-06 Thread Joan Kimball
We got a new red feeder from Audubon and hummingbirds love it.  The 4
feeder tubes look like flowers. (They did not find the old one very
interesting.)  The feeder should be red, not the water. I have read that
red food dye is not good for birds. Onward,
Joan

On Mon, May 6, 2024, 12:55 PM Rob Haslinger  wrote:

> So a week ago I hung up a hummingbird feeder. I’ve seen three of the
> little guys come and check it out and then fly away without drinking. So I
> must be doing something wrong. I did 1 part sugar to 4 parts water. I’ve
> also been changing it out so I don’t think the nectar has gone bad.
>
> Anyone have any tips or tricks?
>
> Thanks!!
> Rob Haslinger
> South Great Road
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



Re: Default database script for a brand new install

2024-05-06 Thread Joan Moreau

The easy way would be just to know:

(1) what variables to set

(2) with which values, for a normal user (no Docker, no fancy stuff, just 
a Unix server and a MariaDB)


(3) where that damn MariaDB schema to create is, to enable the app to start





Re: Default databse script for a brand new install

2024-05-06 Thread Joan Moreau
Also, the variable you mentioned do not fit the variables available in 
the source code



In addition to teh start-up schema, where to have the actual variable to 
set to start Fineract ?



On 5/6/24 16:00, Joan Moreau wrote:


Thank you but that does not answer the question:

- Where to find the initial database schema  to create the initial 
tables ?


Thank you very much


On 5/5/24 10:20, VICTOR MANUEL ROMERO RODRIGUEZ wrote:

Hello,

1. No, you can use any user that you require (root, fineract, 
mariadbuser, customeuser, exampleuser... etc).
2. Database names are defined in these variables (please notices that 
there is a difference tenantS and tenant ) :

FINERACT_TENANTS_DB_NAME - For tenants database
FINERACT_TENANT_DEFAULT_DB_NAME - for the default database
3. You are setting the values of the database in a mixed way, you 
have set variables for the default tenant as tenants and tenants as 
default database.


By the way if you are using the develop branch or previous version 
(since 1.6)  mariaDB is the supported database (also PostgreSQL since 
version 1.7.0), if you use MySQL you could face some issues.


If you require to use MariaDB check this docker compose as reference:

https://github.com/apache/fineract/blob/develop/docker-compose.yml

And these are the variables linked to that docker compose.

https://github.com/apache/fineract/blob/develop/config/docker/env/fineract.env
https://github.com/apache/fineract/blob/develop/config/docker/env/fineract-common.env
https://github.com/apache/fineract/blob/develop/config/docker/env/fineract-mariadb.env

In this PR you can take a look how to start the Apache Fineract jar file

https://github.com/apache/fineract/pull/3879/files

java -jar fineract-provider-*.jar -Duser.home=/tmp 
-Dfile.encoding=UTF-8 -Duser.timezone=UTC 
-Djava.security.egd=file:/dev/./urandom


Best regards

Victor



El sáb, 4 may 2024 a las 8:55, Joan Moreau () escribió:

Hi

Where is stipulated the default schema of the mysql database ?

I face 3 issues

1 - The software seems to need to create databases as root (?)

2 - It seems databases MUST be named fineract_tenants and
fineract_default. How to change that behavior ?

3 - When I put the database user to have rights on those 2
databases, it ends-up with an error :

liquibase.exception.DatabaseException: (conn=7381) Table
'fineract_tenants.m_adhoc' doesn't exist [Failed SQL: (1146)
ALTER TABLE `fineract_tenants`.`m_adhoc` CHANGE `IsActive`
`is_active` TINYINT]


How to initiate properly a brand new install ?

I have the following script to start fineract:

#!/bin/bash
cd /data/microfinance
export FINERACT_HIKARI_USERNAME=fineract
export FINERACT_HIKARI_PASSWORD=mypassword
export FINERACT_SERVER_SSL_ENABLED=false
export FINERACT_SERVER_PORT=8080
export FINERACT_HIKARI_DRIVER_SOURCE_CLASS_NAME=org.mariadb.jdbc.Driver
export FINERACT_HIKARI_JDBC_URL="jdbc:mariadb://localhost:3306/fineract_tenants?serverTimezone=UTC=false=time_zone=UTC"
export FINERACT_DEFAULT_TENANTDB_PORT=3306
export FINERACT_DEFAULT_TENANTDB_UID=fineract
export FINERACT_DEFAULT_TENANTDB_TIMEZONE=GMT+0
export FINERACT_DEFAULT_TENANTDB_HOSTNAME=localhost
export FINERACT_DEFAULT_TENANTDB_NAME=fineract_tenants
export FINERACT_DEFAULT_TENANTDB_PWD=mypassword
export FINERACT_USER=fineract
export FINERACT_GROUP=fineract
export FINERACT_DEFAULT_TENANTDB_DESCRIPTION=GJ_Microfinance

java -Dloader.path=/data/mmicrofinance/libs/ -jar fineract-provider.jar


Thank you



Default database script for a brand new install

2024-05-04 Thread Joan Moreau

Hi

Where is stipulated the default schema of the mysql database ?

I face 3 issues

1 - The software seems to need to create databases as root (?)

2 - It seems databases MUST be named fineract_tenants and 
fineract_default. How to change that behavior ?


3 - When I grant the database user rights on those 2 databases, it 
ends up with an error:


liquibase.exception.DatabaseException: (conn=7381) Table 
'fineract_tenants.m_adhoc' doesn't exist [Failed SQL: (1146) ALTER TABLE 
`fineract_tenants`.`m_adhoc` CHANGE `IsActive` `is_active` TINYINT]


How do I properly initialize a brand new install?

I have the following script to start fineract:

#!/bin/bash
cd /data/microfinance
export FINERACT_HIKARI_USERNAME=fineract
export FINERACT_HIKARI_PASSWORD=mypassword
export FINERACT_SERVER_SSL_ENABLED=false
export FINERACT_SERVER_PORT=8080
export FINERACT_HIKARI_DRIVER_SOURCE_CLASS_NAME=org.mariadb.jdbc.Driver
export FINERACT_HIKARI_JDBC_URL="jdbc:mariadb://localhost:3306/fineract_tenants?serverTimezone=UTC=false=time_zone=UTC"
export FINERACT_DEFAULT_TENANTDB_PORT=3306
export FINERACT_DEFAULT_TENANTDB_UID=fineract
export FINERACT_DEFAULT_TENANTDB_TIMEZONE=GMT+0
export FINERACT_DEFAULT_TENANTDB_HOSTNAME=localhost
export FINERACT_DEFAULT_TENANTDB_NAME=fineract_tenants
export FINERACT_DEFAULT_TENANTDB_PWD=mypassword
export FINERACT_USER=fineract
export FINERACT_GROUP=fineract
export FINERACT_DEFAULT_TENANTDB_DESCRIPTION=GJ_Microfinance

java -Dloader.path=/data/mmicrofinance/libs/ -jar fineract-provider.jar

Thank you

Re: [LincolnTalk] Car mechanic

2024-04-29 Thread Joan Kimball
We also like the mechanics at Dougherty's.  It's important to keep all our
options open.

Joan

On Mon, Apr 29, 2024, 8:54 AM Deb Wallace  wrote:

> Judy -- Marconi's on Rt. 117, formerly Joey's Auto. He was trained by Joey
> who sold him the business after he retired. Marconi is an excellent
> mechanic and always fair. 781-259-9794
>
> Deb
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



[sumo-user] pois2inductionLoops error

2024-04-25 Thread Joan Carmona Mercadé via sumo-user
Hi,

I'm trying to convert a shapefile of points into detectors. First, I use 
polyconvert to convert the shapefile into an additional XML file of  
structures.
Then, I try to use the pois2inductionLoops.py utility to convert pois file into 
detectors, but I'm getting the following error:

Reading net...
Reading PoIs...
Traceback (most recent call last):
  File "C:\Program Files (x86)\Eclipse\Sumo\tools\purgatory\pois2inductionLoops.py", line 45, in <module>
    pois = sumolib.poi.readPois(sys.argv[2])
AttributeError: module 'sumolib' has no attribute 'poi'

Any idea?
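The purgatory script relies on an older sumolib layout, but the PoI file it reads is plain additional XML, so the conversion can be sketched with the standard library alone. This is a minimal, assumption-laden sketch (the lane id, detector frequency, and output file name are placeholders; real usage would map each PoI's x,y onto an actual lane position):

```python
import xml.etree.ElementTree as ET

def pois_to_induction_loops(poi_xml, out_file, lane="edge0_0", freq=60):
    """Read <poi> elements from an additional-file string and emit one
    <inductionLoop> per PoI.  The lane and pos mapping here is a
    placeholder: a real converter must project each PoI onto a lane."""
    root = ET.fromstring(poi_xml)
    add = ET.Element("additional")
    for i, poi in enumerate(root.iter("poi")):
        ET.SubElement(add, "inductionLoop", {
            "id": "det_%s" % poi.get("id", str(i)),
            "lane": lane,
            "pos": poi.get("x", "0"),   # placeholder: x used as lane position
            "freq": str(freq),
            "file": out_file,
        })
    return ET.tostring(add, encoding="unicode")

sample = '<additional><poi id="p1" x="12.5" y="3.0"/></additional>'
print(pois_to_induction_loops(sample, "detectors.out.xml"))
```

The sketch only shows the XML-to-XML shape of the conversion; it does not replace the geometry lookup that the real tool performs against the network.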

Thanks!

Joan
___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: Sieve not getting recompiled

2024-04-20 Thread Joan Moreau via dovecot

I changed it to the following to stick to the docs:

sieve = file:/mails/%d/%n/sieve/
sieve_after = file:/mails/sieve/after.sieve
sieve_default = file:/mails/sieve/before.sieve
sieve_before = file:/mails/sieve/before.sieve

Still no scripts are compiled or executed (and it was working fine before!)


On 2024-04-21 09:21, Joan Moreau wrote:


Hi

I have

sieve = /mails/%d/%n/sieve/roundcube.sieve
sieve_after = /mails/sieve/after.sieve
sieve_before = /mails/sieve/before.sieve
sieve_dir = /mails/%d/%n/sieve/
sieve_global_dir = /mails/sieve/

But sieve scripts are not compiled and not executed

It was working until I removed the setting "sieve_global_path"

Is there something I don't understand?

Thank you

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Sieve not getting recompiled

2024-04-20 Thread Joan Moreau via dovecot

Hi

I have

sieve = /mails/%d/%n/sieve/roundcube.sieve
sieve_after = /mails/sieve/after.sieve
sieve_before = /mails/sieve/before.sieve
sieve_dir = /mails/%d/%n/sieve/
sieve_global_dir = /mails/sieve/

But sieve scripts are not compiled and not executed

It was working until I removed the setting "sieve_global_path"

Is there something I don't understand?

Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Re: exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

That resolves the first bug, but now I get:

Error: link(/xxx/dovecot.list.index.log, /xxx/dovecot.list.index.log.2) 
failed: Operation not permitted
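The failing call is link(2): dovecot's dotlocks and log rotation rely on hard links, and exFAT does not support them. A small stdlib-only probe (the directory argument is whatever mount you want to test) reproduces the same failure mode:

```python
import os
import tempfile

def supports_hardlinks(directory):
    """Return True if hard links work on the filesystem holding
    `directory`.  On exFAT/FAT32 os.link raises OSError, which is the
    same 'Operation not permitted' that dovecot reports."""
    fd, src = tempfile.mkstemp(dir=directory)
    os.close(fd)
    dst = src + ".lnk"
    try:
        os.link(src, dst)  # the syscall dovecot's file_create_locked() uses
        os.unlink(dst)
        return True
    except OSError:
        return False
    finally:
        os.unlink(src)

print(supports_hardlinks(tempfile.gettempdir()))
```

Running this against the exFAT mount point should return False, which would confirm the filesystem (not dovecot) is the limiting factor.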


On 2024-04-21 02:02, Aki Tuomi via dovecot wrote:


Try setting lock_method = dotlock

Aki
On 20/04/2024 15:32 EEST Joan Moreau via dovecot
 wrote:

I tried and get the following:

Error: Couldn't create mailbox list lock /xxx/mailboxes.lock:
file_create_locked(/xxx/mailboxes.lock) failed:
link(/xxx/mailboxes.locka94f3757318b0b90, /xxx/mailboxes.lock)
failed:
Operation not permitted

On 2024-04-20 17:39, Aki Tuomi via dovecot wrote:

On 20/04/2024 12:27 EEST Joan Moreau via dovecot
 wrote:

Hi

Would placing my storage on a exfat partition work ? If no,
why ?

Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org

I can't see any reason why not. As long as it behaves like
POSIX
filesystem.

Aki

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Re: exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

I tried and get the following:

Error: Couldn't create mailbox list lock /xxx/mailboxes.lock: 
file_create_locked(/xxx/mailboxes.lock) failed: 
link(/xxx/mailboxes.locka94f3757318b0b90, /xxx/mailboxes.lock) failed: 
Operation not permitted


On 2024-04-20 17:39, Aki Tuomi via dovecot wrote:


On 20/04/2024 12:27 EEST Joan Moreau via dovecot
 wrote:

Hi

Would placing my storage on a exfat partition work ? If no, why ?

Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org

I can't see any reason why not. As long as it behaves like POSIX 
filesystem.


Aki

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


thread->detach() creates confusion of dovecot

2024-04-20 Thread Joan Moreau via dovecot

Hi

When I try to "detach" 
(https://en.cppreference.com/w/cpp/thread/thread/detach) a thread 
running inside a plugin, the dovecot core seems to interfere with it: it 
tries to close the thread for some unknown reason and usually ends up 
crashing.


What is the cause of this?

Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


exfat not supported ?

2024-04-20 Thread Joan Moreau via dovecot

Hi

Would placing my storage on an exFAT partition work? If not, why?

Thank you
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


[sumo-user] shortest path exercise

2024-04-18 Thread Joan Carmona Mercadé via sumo-user
Hi,

I'm trying to find the shortest path between points A and B. I got my network 
from OSM. Then I create a simulation with a single trip between A and B with no 
middle edges.

When I ask Google, it gives me a route, but SUMO gives me a very different 
one. Then I modify the trip and add a middle edge from the route that Google 
gave me; SUMO then gives me the same route as Google.

If I check the values of the output variables routeLength and duration, both values 
are greater in the first case (with no middle point). So I don't understand 
why SUMO is giving me that route if it is longer and takes more time than the 
one with the middle edge.

What other parameters may I be forgetting?
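One thing worth checking is the routing weight: SUMO routes by expected travel time on the empty network by default, not by distance, so a geometrically longer route over faster edges can win, and Google may additionally weigh live traffic. The effect can be reproduced with a toy stdlib-only Dijkstra (the network, lengths, and speeds below are made up for illustration):

```python
import heapq

def dijkstra(graph, src, dst, weight):
    """graph: {node: [(neighbor, length_m, speed_mps), ...]}.
    `weight` picks the edge cost, e.g. length or length/speed."""
    pq = [(0.0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, speed in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + weight(length, speed), nxt, path + [nxt]))
    return float("inf"), []

# Toy network: A->B is short but slow; A->C->B is longer but fast.
g = {
    "A": [("B", 1000, 8.3), ("C", 900, 27.8)],
    "C": [("B", 900, 27.8)],
}
dist, p1 = dijkstra(g, "A", "B", lambda l, s: l)      # shortest by length
time, p2 = dijkstra(g, "A", "B", lambda l, s: l / s)  # shortest by time (SUMO-like)
print(p1, p2)  # ['A', 'B'] ['A', 'C', 'B']
```

With time as the weight the detour via C wins even though it is 800 m longer, which is the same trade-off SUMO's default routing makes.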

Thanks,
Joan

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


[protobuf] Best solution to merge ByteStrings

2024-04-12 Thread Joan Balagueró
Hello,

I have a protobuf like this in a ByteString variable 'cp':

1: {
1: 30
2: {
2: 100
}
}
   
   
And I also have another ByteString 'hotel' with this content:

 1: 87354
 2: {
(...) a lot of content here
 }

 
Now I need:

1. Put the 'hotel' ByteString under a "Hotels" tag with key = 2:

2: {
 1: 87354
 2: {
(...) a lot of content here
 }
}
  
  
2. And finally "concat" these two ByetString 'cp' and 'hotels' and put them 
under another tag "Success" with key = "1":

1: {
1: {
1: 30
2: {
2: 100
} 
}
2: {
1: 87354
2: {
(...) a lot of content here
}
}
}

I have solved this and it's working. What I did:

// Add hotel under '2' (hotels) and "concat" cp + hotels
UnknownFieldSet.Builder b2 = UnknownFieldSet.newBuilder().mergeFrom(cp).mergeLengthDelimitedField(2, hotel);

// Add 'b2' to '1' ('Success')
UnknownFieldSet.Builder b1 = UnknownFieldSet.newBuilder().mergeLengthDelimitedField(1, b2.build().toByteString());


But since I'm a newbie in protobuf, I would like to know if my solution is 
optimal or if there is a better way to solve this.
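For what it's worth, `mergeLengthDelimitedField` is a standard way to wrap already-serialized bytes under a new field number without reparsing them: at the wire level it just prepends a tag byte and a varint length. A stdlib-only Python sketch of that framing (the `cp` and `hotel` payloads below are dummy placeholders, not the real ByteStrings):

```python
def varint(n):
    """Encode a non-negative int as a protobuf base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def wrap(field_number, payload):
    """Frame serialized bytes as a length-delimited field (wire type 2)."""
    tag = (field_number << 3) | 2
    return varint(tag) + varint(len(payload)) + payload

cp = b"..."          # placeholder for the serialized 'cp' ByteString
hotel = b"........"  # placeholder for the serialized 'hotel' ByteString

hotels = wrap(2, hotel)          # step 1: put 'hotel' under key 2
success = wrap(1, cp + hotels)   # step 2: concatenate and wrap under key 1
print(success.hex())
```

Concatenating serialized fields is valid protobuf, which is why the `mergeFrom(cp)` + `mergeLengthDelimitedField(2, hotel)` combination works without decoding the hotel payload.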

Any suggestion would be really appreciated.

Thanks,

Joan.

-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to protobuf+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/protobuf/707ed7d8-5718-4034-a5dc-503baf1ff126n%40googlegroups.com.


Re: [Evergreen-general] removal of vendor from paid support wiki page

2024-04-10 Thread Joan Kranich via Evergreen-general
Thank you for this information Rogan.

Joan

On Wed, Apr 10, 2024 at 8:47 AM Rogan Hamby via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

> We have been reviewing the vendors on the paid support listing
> of the wiki:
> https://wiki.evergreen-ils.org/doku.php?id=faqs:evergreen_companies#catalyte_inc
>
> One vendor, Catalyte, no longer offers Evergreen services, does not
> provide a link back to the Evergreen community web site, and has not responded
> to an email inquiry, so it is being removed. This is being posted because
> we agreed as a procedure that the general list would be notified of any
> removals.
>
>
> Rogan Hamby (he/him/his), MLIS
>
> Data and Project Manager
>
> Equinox Open Library Initiative
>
> rogan.ha...@equinoxoli.org
>
> 1-770-709- x5570
>
> www.equinoxOLI.org
>
>
> ___
> Evergreen-general mailing list
> Evergreen-general@list.evergreen-ils.org
> http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general
>


-- 

Joan Kranich (she/her/hers)
Library Applications Manager, C/W MARS, Inc.

--

jkran...@cwmars.org | www.cwmars.org

508-755-3323 x 1
___
Evergreen-general mailing list
Evergreen-general@list.evergreen-ils.org
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general


[nysbirds-l] Eclipse Bird (& more) Behavior

2024-04-09 Thread Joan Collins
eper
for the first time during the event.  Nearly 70 minutes past totality I
noted it had finally become quiet and we boated back.  At our dock, there
was a Common Loon across the lake (FOS).

 

This is the first total solar eclipse I've ever experienced and it is nearly
impossible to describe how the light changed (and hard to capture in
photos).  It was fascinating!  I expected it would get dark gradually, but
it was really abrupt!  And after the 3+ minutes of total darkness, it
abruptly became light, but again, hard to describe the light.  The surgeon
noted that it was like someone suddenly shining a bright flashlight on us!

 

Here are a couple observations from our younger son and his family at their
Willsboro home (near Lake Champlain):  (the photo they sent me of their 3
small (one a baby) children in eclipse glasses was adorable!).  They have a
lot of chickens!  They roam around outside all day and head into the coop
when it begins to get dark.  My son said the chickens started to head for
the coop, but it got dark so fast that they didn't make it and they looked
lost!  (I read a similar account of chicken behavior from a prior eclipse -
not having enough time to actually get to the coop!)  He said they also
noted that crickets started up during totality!  My nearly 4-year-old grandson
was very animated about the whole event with me over the phone!  I think he
will actually remember it.

 

I did take photos with my cell phone and camera.  If I get any up on
Facebook, I'll send a link.

 

I hope everyone got to experience this remarkable event.  I can now
understand why people become eclipse chasers around the world!

 

Joan Collins

Long Lake, NY


--

(copy & paste any URL below, then modify any text "_DOT_" to a period ".")

NYSbirds-L List Info:
NortheastBirding_DOT_com/NYSbirdsWELCOME_DOT_htm
NortheastBirding_DOT_com/NYSbirdsRULES_DOT_htm
NortheastBirding_DOT_com/NYSbirdsSubscribeConfigurationLeave_DOT_htm

ARCHIVES:
1) mail-archive_DOT_com/nysbirds-l@cornell_DOT_edu/maillist_DOT_html
2) surfbirds_DOT_com/birdingmail/Group/NYSBirds-L
3) birding_DOT_aba_DOT_org/maillist/NY01

Please submit your observations to eBird:
ebird_DOT_org/content/ebird/

--

Re: Back to the Future Initiative

2024-04-02 Thread Joan Bagley
How do I undo something that is happening with all my Apache OpenOffice documents? When I 
open a document with multiple pages, all the pages appear on one 8 1/2 x 11 page and I 
cannot edit or add to things like my journal. Help!


Sent from Yahoo Mail for iPhone


On Monday, April 1, 2024, 3:23 AM, David  wrote:

Arrigo Marchiori wrote:
> Dear All,
>
> The Apache OpenOffice Development Team is proud to announce that the
> next releases will introduce an important change: a text-only user
> interface.
>
> Read more:
> https://openoffice.apache.org/blog/back-to-the-future-initiative.html
>
> :-)
You are standing on the shoulders of giants of the WP world such as 
Multimate. I await the release with bated breath :-)

-
To unsubscribe, e-mail: users-unsubscr...@openoffice.apache.org
For additional commands, e-mail: users-h...@openoffice.apache.org






Re: Separate index get dovecot lost

2024-03-30 Thread Joan Moreau
> To do that kind of a change, mailbox migration is required. 
 
Meaning ?

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Separate index get dovecot lost

2024-03-29 Thread Joan Moreau
Hi
I have a large number of emails (~TB) and want to put the indexes on a
separate, fast drive.

Initially, I have
mail_location = mdbox:/files/mail/%d/%n

If I put
mail_location = mdbox:/files/mail/%d/%n:INDEX=/data/mailindexes/%d/%n
then dovecot gets totally lost and tries to read the mailbox content and tree
from the INDEX location instead of the original location.

What is wrong ?
Thank you


___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


ISO Mount NFS Version Cloudstack 4.18 + xcpng 8.2.1

2024-03-24 Thread Joan g
Hi Community,

My CloudStack deployment has an NFS server that supports NFS v4 only. The
secondary storage is attached successfully and we are able to deploy VMs.
It gets mounted to XenServer as vers=4.1.

The actual issue happens when we try to mount an ISO to a deployed VM.
It always tries NFS v3 and the ISO mount fails.
I tried setting *secstorage.nfs.version* to 4 and it does not have any
effect.

Xen logs:
Mar 24 16:34:51 xcp01 SM: [13233] sr_create {'sr_uuid':
'e16d9494-dd48-85e2-9166-6cbb12ffbdd4', 'subtask_of':
'DummyRef:|f271073d-ba31-4127-b375-dc78301167cf|SR.create', 'args': ['0'],
'host_ref': 'OpaqueRef:1f4335df-4651-4fa7-b249-9b97f770801d',
'session_ref': 'OpaqueRef:e9724b23-9f13-49cc-a582-b25978e23390',
'device_config': {'SRmaster': 'true', 'location':
'10.30.11.43:/Secondary/template/tmpl/1/207'},
'command': 'sr_create', 'sr_ref':
'OpaqueRef:c9453484-6e40-4d3c-a942-7098fae868e3'}
Mar 24 16:34:51 xcp01 SM: [13233] _testHost: Testing host/port:
10.30.11.43,2049
Mar 24 16:34:51 xcp01 SM: [13233] ['/usr/sbin/rpcinfo', '-s', '10.30.11.43']
Mar 24 16:34:51 xcp01 SM: [13233]   pread SUCCESS
Mar 24 16:34:51 xcp01 SM: [13233] ['mount.nfs',
'10.30.11.43:/Secondary/template/tmpl/1/207',
'/var/run/sr-mount/e16d9494-dd48-85e2-9166-6cbb12ffbdd4', '-o',
'soft,proto=tcp,vers=3,acdirmin=0,acdirmax=0']
Mar 24 16:34:51 xcp01 SM: [13233] FAILED in util.pread: (rc 32) stdout: '',
stderr: 'mount.nfs: Protocol not supported
Mar 24 16:34:51 xcp01 SM: [13233] '
Mar 24 16:34:52 xcp01 SM: [13233] ['mount.nfs',
'10.30.11.43:/Secondary/template/tmpl/1/207',
'/var/run/sr-mount/e16d9494-dd48-85e2-9166-6cbb12ffbdd4', '-o',
'soft,proto=tcp,vers=3,acdirmin=0,acdirmax=0']
Mar 24 16:34:52 xcp01 SM: [13233] FAILED in util.pread: (rc 32) stdout: '',
stderr: 'mount.nfs: Protocol not supported


Any guidance on this?

Joan


[sumo-user] Importing network from OSM data

2024-03-21 Thread Joan Carmona Mercadé via sumo-user
Hi Community,

I'm importing a network from OSM data.

The resulting network seems good, but I notice that it generates many undesirable 
connections between lanes of the same edge in opposite directions. Here is an 
example (marked in yellow):

[cid:image001.png@01DA7B73.87C4FB60]

The question is: is there any option in netconvert that avoids such connections? 
Or must I remove them manually?

Thanks,

Joan

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


Re: [Evergreen-general] [External] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi Eva,

Thank you for the Part suggestion.  That has been suggested by some of our
librarians.  The problem is that C/W MARS uses a lot of Parts.  We use
Parts for our periodical issues (which are added as items), travel guides, and
other things with parts.  We also use Parts for DVD sets that are separated into
multiple items.  This becomes complicated.  All suggestions are welcome, though,
so we can determine the best path forward.

Joan

On Wed, Mar 20, 2024 at 1:14 PM Cerninakova Eva via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

>
> Have you considered using the functionality for parts? I believe this
> would make it possible to keep one record and at the same time allow
> patrons to choose paperback or hardcover when placing a hold.
>
>
>
>
>
>
>
> Mgr. Eva Cerniňáková, Ph.D.
> vedoucí Knihovny Jabok
>
> Jabok - Vyšší odborná škola sociálně pedagogická a teologická
> +420 211 222 409
> cer...@jabok.cz
> knihovna.jabok.cz
> Salmovská 8, Praha 2, 120 00
>
>
>
>
>
>
>
> Dne st 20. 3. 2024 18:09 uživatel Tiffany Little via Evergreen-general <
> evergreen-general@list.evergreen-ils.org> napsal:
>
>> PINES uses the same record if everything else about the item is the same
>> aside from the binding.
>>
>>
>> https://pines.georgialibraries.org/dokuwiki/doku.php?id=cat:matching_criteria#special_cases
>>
>>
>> [image: logo with link to Georgia Public Library Service website]
>> <https://georgialibraries.org/>
>>
>> Tiffany Little
>>
>> *PINES Bibliographic Projects Manager*
>>
>> --
>>
>> Georgia Public Library Service
>>
>> 2872 Woodcock Blvd, Suite 250 | Atlanta, GA 30341
>>
>> (404) 235-7161 | tlit...@georgialibraries.org
>>
>>
>> Join our email list <http://georgialibraries.org/subscription> for
>> stories of Georgia libraries making an impact in our communities.
>>
>>
>> On Wed, Mar 20, 2024 at 1:06 PM Frasur, Ruth via Evergreen-general <
>> evergreen-general@list.evergreen-ils.org> wrote:
>>
>>> Evergreen Indiana uses separate records if the paperback and hardcover
>>> are dramatically different in terms of pagination/additional contents.  We
>>> use the same record if the main difference is just the cover type.
>>>
>>>
>>>
>>> Ruth Frasur Davis (she/they)
>>>
>>> Coordinator
>>>
>>> *Evergreen Indiana Library Consortium*
>>>
>>> *Evergreen Community Development Initiative*
>>>
>>> Indiana State Library
>>>
>>> 140 N. Senate Ave.
>>>
>>> Indianapolis, IN 46204
>>>
>>> (317) 232-3691
>>>
>>>
>>>
>>> *From:* Evergreen-general <
>>> evergreen-general-boun...@list.evergreen-ils.org> *On Behalf Of *Terran
>>> McCanna via Evergreen-general
>>> *Sent:* Wednesday, March 20, 2024 1:01 PM
>>> *To:* Evergreen Discussion Group <
>>> evergreen-general@list.evergreen-ils.org>
>>> *Cc:* Terran McCanna 
>>> *Subject:* Re: [Evergreen-general] [External] Paperback vs. Hardcover
>>> Records
>>>
>>>
>>>
>>>  This is an EXTERNAL email. Exercise caution. DO NOT open
>>> attachments or click links from unknown senders or unexpected email. 
>>> --
>>>
>>> PINES uses separate records.
>>>
>>>
>>>
>>> On Wed, Mar 20, 2024, 12:57 PM Szwagiel, Will via Evergreen-general <
>>> evergreen-general@list.evergreen-ils.org> wrote:
>>>
>>> Good afternoon Joan,
>>>
>>>
>>>
>>> Like you, our bibliographic records can contain both paperback and
>>> hardcover versions of a book.  We actually encourage this, as well, as part
>>> of our cataloging best practices to try and cut down on duplicate records
>>> and to make sure that as many items as possible are available for patrons
>>> on a single re

Re: [Evergreen-general] [External] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi Tiffany,

Thank you for the link to your documentation.

And thanks everyone for your really helpful responses!  I appreciate it.

Joan


On Wed, Mar 20, 2024 at 1:09 PM Tiffany Little via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

> PINES uses the same record if everything else about the item is the same
> aside from the binding.
>
>
> https://pines.georgialibraries.org/dokuwiki/doku.php?id=cat:matching_criteria#special_cases
>
>
> [image: logo with link to Georgia Public Library Service website]
> <https://georgialibraries.org/>
>
> Tiffany Little
>
> *PINES Bibliographic Projects Manager*
>
> --
>
> Georgia Public Library Service
>
> 2872 Woodcock Blvd, Suite 250 | Atlanta, GA 30341
>
> (404) 235-7161 | tlit...@georgialibraries.org
>
>
> Join our email list <http://georgialibraries.org/subscription> for
> stories of Georgia libraries making an impact in our communities.
>
>
> On Wed, Mar 20, 2024 at 1:06 PM Frasur, Ruth via Evergreen-general <
> evergreen-general@list.evergreen-ils.org> wrote:
>
>> Evergreen Indiana uses separate records if the paperback and hardcover
>> are dramatically different in terms of pagination/additional contents.  We
>> use the same record if the main difference is just the cover type.
>>
>>
>>
>> Ruth Frasur Davis (she/they)
>>
>> Coordinator
>>
>> *Evergreen Indiana Library Consortium*
>>
>> *Evergreen Community Development Initiative*
>>
>> Indiana State Library
>>
>> 140 N. Senate Ave.
>>
>> Indianapolis, IN 46204
>>
>> (317) 232-3691
>>
>>
>>
>> *From:* Evergreen-general <
>> evergreen-general-boun...@list.evergreen-ils.org> *On Behalf Of *Terran
>> McCanna via Evergreen-general
>> *Sent:* Wednesday, March 20, 2024 1:01 PM
>> *To:* Evergreen Discussion Group <
>> evergreen-general@list.evergreen-ils.org>
>> *Cc:* Terran McCanna 
>> *Subject:* Re: [Evergreen-general] [External] Paperback vs. Hardcover
>> Records
>>
>>
>>
>>  This is an EXTERNAL email. Exercise caution. DO NOT open attachments
>> or click links from unknown senders or unexpected email. 
>> --
>>
>> PINES uses separate records.
>>
>>
>>
>> On Wed, Mar 20, 2024, 12:57 PM Szwagiel, Will via Evergreen-general <
>> evergreen-general@list.evergreen-ils.org> wrote:
>>
>> Good afternoon Joan,
>>
>>
>>
>> Like you, our bibliographic records can contain both paperback and
>> hardcover versions of a book.  We actually encourage this, as well, as part
>> of our cataloging best practices to try and cut down on duplicate records
>> and to make sure that as many items as possible are available for patrons
>> on a single record.  There will be instances, however, where we suggest
>> using a separate record, but that is usually based on the content of the
>> book, not whether it is paperback or hardcover.
>>
>>
>>
>> The majority of our member libraries are fine with this, but we do
>> occasionally receive requests to separate paperbacks and hardcovers,
>> because some libraries have patrons who only want one or the other.  One
>> recommendation we have made is for libraries to use call numbers and/or
>> shelving locations to identify if a specific item is paperback.  For
>> example, one member library puts "Apb" for "Adult Paperback" at the
>> beginning of the call numbers for mass market paperback books.
>>
>>
>>
>> This may not help patrons as much when placing holds, because they cannot
>> place item level holds, but it allows staff to easily identify a paperback
>> version so they can place an item hold for the patron.  Staff would just
>> have to encourage patrons to come to them to place the hold, so the staff
>> can place the item level hold for the patron.
>>
>>
>>
>> It is admittedly not a perfect solution, but because we have combined
>> paper

Re: [Evergreen-general] [External] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi Elizabeth,

When we place a metarecord hold, all print that is not large print has
the format "book", so we would need to add formats.  This is a
good consideration that I've been thinking about.

Thank you for your help.

Joan

On Wed, Mar 20, 2024 at 1:06 PM Elizabeth Davis via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

> Hello
>
>
>
> PaILS uses the same record for hardcover and trade paper editions.  We
> encourage a separate record for mass market paperback.  I’d have to test it
> but would the multi-format hold work for instances where you have one record
> for the hardcover and one for the paperback?
>
>
>
>
>
>
>
> *Elizabeth Davis* (she/her), *Support & Project Management Specialist*
>
> *Pennsylvania Integrated Library System **(PaILS) | SPARK*
>
> (717) 256-1627 | elizabeth.da...@sparkpa.org
> 
> support.sparkpa.org | supp...@sparkpa.org
>
>
>
> *From:* Evergreen-general <
> evergreen-general-boun...@list.evergreen-ils.org> *On Behalf Of *Terran
> McCanna via Evergreen-general
> *Sent:* Wednesday, March 20, 2024 1:01 PM
> *To:* Evergreen Discussion Group  >
> *Cc:* Terran McCanna 
> *Subject:* Re: [Evergreen-general] [External] Paperback vs. Hardcover
> Records
>
>
>
> PINES uses separate records.
>
>
>
> On Wed, Mar 20, 2024, 12:57 PM Szwagiel, Will via Evergreen-general <
> evergreen-general@list.evergreen-ils.org> wrote:
>
> Good afternoon Joan,
>
>
>
> Like you, our bibliographic records can contain both paperback and
> hardcover versions of a book.  We actually encourage this, as well, as part
> of our cataloging best practices to try and cut down on duplicate records
> and to make sure that as many items as possible are available for patrons
> on a single record.  There will be instances, however, where we suggest
> using a separate record, but that is usually based on the content of the
> book, not whether it is paperback or hardcover.
>
>
>
> The majority of our member libraries are fine with this, but we do
> occasionally receive requests to separate paperbacks and hardcovers,
> because some libraries have patrons who only want one or the other.  One
> recommendation we have made is for libraries to use call numbers and/or
> shelving locations to identify if a specific item is paperback.  For
> example, one member library puts "Apb" for "Adult Paperback" at the
> beginning of the call numbers for mass market paperback books.
>
>
>
> This may not help patrons as much when placing holds, because they cannot
> place item level holds, but it allows staff to easily identify a paperback
> version so they can place an item hold for the patron.  Staff would just
> have to encourage patrons to come to them to place the hold, so the staff
> can place the item level hold for the patron.
>
>
>
> It is admittedly not a perfect solution, but because we have combined
> paperbacks and hardcovers on single records for so long, trying to split
> them up now would simply be unfeasible.  And even if we began instructing
> catalogers to use separate records moving forward, that would still leave
> countless existing records in the catalog with both paperbacks and
> hardcovers on them.
>
>
>
> *William C. Szwagiel*
>
> NC Cardinal Project Manager
>
> State Library of North Carolina
>
> william.szwag...@ncdcr.gov | 919.814.6721
>
> https://statelibrary.ncdcr.gov/services-libraries/nc-cardinal
> <https://urldefense.proofpoint.com/v2/url?u=https-3A__statelibrary.ncdcr.gov_services-2Dlibraries_nc-2Dcardinal=DwMFaQ=euGZstcaTDllvimEN8b7jXrwqOf-v5A_CdpgnVfiiMM=LRHEWfG7tKtoSjGM1XBmJX5tlkBCMt3lnyKxcaVacsw=kjPccD-EhhqnmskWzrp3GLzuWBZxnfCLqsKtt5gujNc5U-uEWr8LMFuzty892oD5=qaTgbSaMM5WJ5Tpq9J4HiytjSLhXS36iM3CB0mUn3Bk=>
>
> 109 East Jones Street  | 4640 Mail Service Center
>
> Raleigh, North Carolina 27699-4600
>
> The State Library is part of the NC Department of Natural & Cultural
> Resources.
>
> *Email correspondence to and from this address is subject to the North
> Carolina Public Records Law and may be disclosed to third parties.*
>
>
> --
>
> *From:* Evergreen-general <
> evergreen-general-boun...@list.evergreen-ils.org> on behalf of Joan
> Kranich via Evergreen-general 
> *Sent:* Wednesday, March 20, 2024 12:43 PM
> *To:* Evergreen Discussion Group  >
> *Cc:* Joan Kranich 
> *Subject:* [External] [Evergreen-general] Paperback vs. Hardcover Records
>
>
>
> *CAUTION:* External email. Do not click links or open attachments unless
> verified. Report suspicious emails with the Report Message button located
> on your Outlook menu bar on the Ho

Re: [Evergreen-general] [External] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi Terran,

Thank you!

We're thinking separate records will need more work to make the record
specifically for paperback.  We are considering this.

Joan

On Wed, Mar 20, 2024 at 1:01 PM Terran McCanna via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

> PINES uses separate records.
>
> On Wed, Mar 20, 2024, 12:57 PM Szwagiel, Will via Evergreen-general <
> evergreen-general@list.evergreen-ils.org> wrote:
>
>> Good afternoon Joan,
>>
>> Like you, our bibliographic records can contain both paperback and
>> hardcover versions of a book.  We actually encourage this, as well, as part
>> of our cataloging best practices to try and cut down on duplicate records
>> and to make sure that as many items as possible are available for patrons
>> on a single record.  There will be instances, however, where we suggest
>> using a separate record, but that is usually based on the content of the
>> book, not whether it is paperback or hardcover.
>>
>> The majority of our member libraries are fine with this, but we do
>> occasionally receive requests to separate paperbacks and hardcovers,
>> because some libraries have patrons who only want one or the other.  One
>> recommendation we have made is for libraries to use call numbers and/or
>> shelving locations to identify if a specific item is paperback.  For
>> example, one member library puts "Apb" for "Adult Paperback" at the
>> beginning of the call numbers for mass market paperback books.
>>
>> This may not help patrons as much when placing holds, because they cannot
>> place item level holds, but it allows staff to easily identify a paperback
>> version so they can place an item hold for the patron.  Staff would just
>> have to encourage patrons to come to them to place the hold, so the staff
>> can place the item level hold for the patron.
>>
>> It is admittedly not a perfect solution, but because we have combined
>> paperbacks and hardcovers on single records for so long, trying to split
>> them up now would simply be unfeasible.  And even if we began instructing
>> catalogers to use separate records moving forward, that would still leave
>> countless existing records in the catalog with both paperbacks and
>> hardcovers on them.
>>
>> *William C. Szwagiel*
>>
>> NC Cardinal Project Manager
>>
>> State Library of North Carolina
>>
>> william.szwag...@ncdcr.gov | 919.814.6721
>>
>> https://statelibrary.ncdcr.gov/services-libraries/nc-cardinal
>>
>> 109 East Jones Street  | 4640 Mail Service Center
>>
>> Raleigh, North Carolina 27699-4600
>>
>> The State Library is part of the NC Department of Natural & Cultural
>> Resources.
>>
>> *Email correspondence to and from this address is subject to the North
>> Carolina Public Records Law and may be disclosed to third parties.*
>>
>>
>> --
>> *From:* Evergreen-general <
>> evergreen-general-boun...@list.evergreen-ils.org> on behalf of Joan
>> Kranich via Evergreen-general 
>> *Sent:* Wednesday, March 20, 2024 12:43 PM
>> *To:* Evergreen Discussion Group <
>> evergreen-general@list.evergreen-ils.org>
>> *Cc:* Joan Kranich 
>> *Subject:* [External] [Evergreen-general] Paperback vs. Hardcover Records
>>
>> CAUTION: External email. Do not click links or open attachments unless
>> verified. Report suspicious emails with the Report Message button located
>> on your Outlook menu bar on the Home tab.
>>
>> Hi,
>>
>> In C/W MARS a bibliographic record may contain items for paperback and
>> for hardcover.  We have had some recommendations to separate paperback
>> items from hardcover items.
>>
>> This is a change on the cataloging side but also with how holds would be
>> filled.
>>
>> Do any of you use separate bibliographic records for paperback vs.
>> hardcover or do you have another workflow to make it easy for staff and
>> patrons to place holds to be filled by one format vs. the other?
>>
>> Thank you.
>>
>> Joan
>>
>> --
>>
>> Joan Kranich (she/her/hers)
>> Library Applications Manager, C/W MARS, Inc.
>>
>> --
>>
>> [image: icon] jkran...@cwmars.org | [image: icon]www.cwmars.org
>>
>> [image: icon] 508-755-3323 x 1
>>
>> --
>>
>> Email correspondence to and from this address may be subject to the North
>> Carolina Public Records Law and may be disclosed to third parties by a

Re: [Evergreen-general] [External] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi Will,

Thank you for your suggestions and very helpful details!

Some of our libraries indicate paperback in the call number.  The shelving
location is an interesting thought because we can filter by the location.

Joan

On Wed, Mar 20, 2024 at 12:57 PM Szwagiel, Will <
william.szwag...@dncr.nc.gov> wrote:

> Good afternoon Joan,
>
> Like you, our bibliographic records can contain both paperback and
> hardcover versions of a book.  We actually encourage this, as well, as part
> of our cataloging best practices to try and cut down on duplicate records
> and to make sure that as many items as possible are available for patrons
> on a single record.  There will be instances, however, where we suggest
> using a separate record, but that is usually based on the content of the
> book, not whether it is paperback or hardcover.
>
> The majority of our member libraries are fine with this, but we do
> occasionally receive requests to separate paperbacks and hardcovers,
> because some libraries have patrons who only want one or the other.  One
> recommendation we have made is for libraries to use call numbers and/or
> shelving locations to identify if a specific item is paperback.  For
> example, one member library puts "Apb" for "Adult Paperback" at the
> beginning of the call numbers for mass market paperback books.
>
> This may not help patrons as much when placing holds, because they cannot
> place item level holds, but it allows staff to easily identify a paperback
> version so they can place an item hold for the patron.  Staff would just
> have to encourage patrons to come to them to place the hold, so the staff
> can place the item level hold for the patron.
>
> It is admittedly not a perfect solution, but because we have combined
> paperbacks and hardcovers on single records for so long, trying to split
> them up now would simply be unfeasible.  And even if we began instructing
> catalogers to use separate records moving forward, that would still leave
> countless existing records in the catalog with both paperbacks and
> hardcovers on them.
>
> *William C. Szwagiel*
>
> NC Cardinal Project Manager
>
> State Library of North Carolina
>
> william.szwag...@ncdcr.gov | 919.814.6721
>
> https://statelibrary.ncdcr.gov/services-libraries/nc-cardinal
>
> 109 East Jones Street  | 4640 Mail Service Center
>
> Raleigh, North Carolina 27699-4600
>
> The State Library is part of the NC Department of Natural & Cultural
> Resources.
>
> *Email correspondence to and from this address is subject to the North
> Carolina Public Records Law and may be disclosed to third parties.*
>
>
> --
> *From:* Evergreen-general <
> evergreen-general-boun...@list.evergreen-ils.org> on behalf of Joan
> Kranich via Evergreen-general 
> *Sent:* Wednesday, March 20, 2024 12:43 PM
> *To:* Evergreen Discussion Group  >
> *Cc:* Joan Kranich 
> *Subject:* [External] [Evergreen-general] Paperback vs. Hardcover Records
>
> CAUTION: External email. Do not click links or open attachments unless
> verified. Report suspicious emails with the Report Message button located
> on your Outlook menu bar on the Home tab.
>
> Hi,
>
> In C/W MARS a bibliographic record may contain items for paperback and for
> hardcover.  We have had some recommendations to separate paperback items
> from hardcover items.
>
> This is a change on the cataloging side but also with how holds would be
> filled.
>
> Do any of you use separate bibliographic records for paperback vs.
> hardcover or do you have another workflow to make it easy for staff and
> patrons to place holds to be filled by one format vs. the other?
>
> Thank you.
>
> Joan
>
> --
>
> Joan Kranich (she/her/hers)
> Library Applications Manager, C/W MARS, Inc.
>
> --
>
> [image: icon] jkran...@cwmars.org | [image: icon]www.cwmars.org
>
> [image: icon] 508-755-3323 x 1
>
> --
>
> Email correspondence to and from this address may be subject to the North
> Carolina Public Records Law and may be disclosed to third parties by an
> authorized state official.
>


-- 

Joan Kranich (she/her/hers)
Library Applications Manager, C/W MARS, Inc.

--

[image: icon] jkran...@cwmars.org | [image: icon]www.cwmars.org

[image: icon] 508-755-3323 x 1
___
Evergreen-general mailing list
Evergreen-general@list.evergreen-ils.org
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general


[Evergreen-general] Paperback vs. Hardcover Records

2024-03-20 Thread Joan Kranich via Evergreen-general
Hi,

In C/W MARS a bibliographic record may contain items for paperback and for
hardcover.  We have had some recommendations to separate paperback items
from hardcover items.

This is a change on the cataloging side but also with how holds would be
filled.

Do any of you use separate bibliographic records for paperback vs.
hardcover or do you have another workflow to make it easy for staff and
patrons to place holds to be filled by one format vs. the other?

Thank you.

Joan

-- 

Joan Kranich (she/her/hers)
Library Applications Manager, C/W MARS, Inc.

--

[image: icon] jkran...@cwmars.org | [image: icon]www.cwmars.org

[image: icon] 508-755-3323 x 1
___
Evergreen-general mailing list
Evergreen-general@list.evergreen-ils.org
http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general


[OSList] Re: Loving you...

2024-03-17 Thread Elwin and Joan via OSList
 There is now an Open Space at our kitchen table...the place where Harrison sits 
along with Curtis the cat...while en route Down East each year.
one olive or two??
eg

On Sunday, March 17, 2024 at 02:21:03 PM EDT, Suzanne Daigle via OSList 
 wrote:  
 
 Dear precious friends and colleagues, 
Last night Barry Owen reached out to me to say that his dad had passed away... 
peacefully a few hours earlier in Camden Maine.  
He briefly described that Harrison had prepared his family, friends and 
colleagues well for this moment. Staying in touch and connecting, opening and 
holding the space. 
And that space will always be open, said Barry.  "He's still holding the space 
now...
Earlier in the day, Barry had also shared a love note with me that he wrote to 
his dad. I asked if he/we could post it here feeling that the message speaks to 
all of us — his Global Open Space family. He heartily agreed. 
Barry was back on the job this morning, in a way, honoring his father's wishes 
that none of us work too hard or make too much of a big deal of him leaving us. 
 Of course we know that's impossible for a billion reasons. 
But for now as we sit together in this big global circle, grieving and missing 
him, with huge blank white sheets in the middle of the virtual floor, each of 
us expressing our thoughts, feelings and memories, captured in a very special 
future Book of Proceedings, HERE...the love note that Barry sent to his 
dad. I am but the humble messenger.  Suzanne
The Title:  Loving You
Good morning Dad! I hope you are feeling the love and comfort from the many 
millions (BILLIONS) of people whose lives have been enriched by your presence 
throughout your 88 years while here on Earth.
I wish for you a passing of comfort, Love, Peace, Joy, HAPPINESS, to have lived 
your life to your FULLEST.
You taught us not to work so damned HARD.
Right People. Right Place. Right Time. All the right things have happened... the Lord 
knows you will pass when "it" is over!
--THANK YOU for "The Law of 2 Feet"...BumbleBees...Butterflies...Space invaders...Noses out 
of joint.
I Love you Dad! 

Barry






OSList mailing list -- everyone@oslist.org
To unsubscribe send an email to everyone-le...@oslist.org
See the archives here: https://oslist.org/empathy/list/everyone.oslist.org  OSList mailing list -- everyone@oslist.org
To unsubscribe send an email to everyone-le...@oslist.org
See the archives here: https://oslist.org/empathy/list/everyone.oslist.org

Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-14 Thread Joan Moreau via dovecot
Thanks Eduardo

I am trying to avoid closing/reopening a file pointer to the exact same file
between each call to the plugin.



On 14 March 2024 20:08:37 Eduardo M KALINOWSKI via dovecot
 wrote:

 On 14/03/2024 02:49, Joan Moreau via dovecot wrote:
  No, you don't understand
  There is a core process (/usr/bin/dovecot) running all the
  time. So I want to
  allocate a memory block, the core process keeps it and it is
  retrievable by the
  plugin when loaded again
  At exit of /usr/bin/dovecot, it just does a "delete()" of
  the said allocation

 While I cannot help you with plugin writing or dovecot internals,
 this 
 does seem like an example of the XY problem[0]. Perhaps if you
 provide a 
 high level description of what you're attempting to do someone might 
 come up with a way to achieve that.

 [0] https://en.wikipedia.org/wiki/XY_problem

 -- 
 Eduardo M KALINOWSKI
 edua...@kalinowski.com.br

 ___
 dovecot mailing list -- dovecot@dovecot.org
 To unsubscribe send an email to dovecot-le...@dovecot.org

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
No, you don't understand.
There is a core process (/usr/bin/dovecot) running all the time. So I want to
allocate a memory block, the core process keeps it, and it is retrievable by the
plugin when loaded again.
At exit of /usr/bin/dovecot, it just does a "delete()" of the said allocation


On 2024-03-14 13:25, Aki Tuomi via dovecot wrote:
 Hi!

 Sorry but that's just not possible, there is no "core" where to create
 such an object. There is no "dovecot" where to store things.

 When a user logs in, dovecot executes /usr/libexec/dovecot/imap and
 transfers the connection fd there. Then plugins and stuff are loaded,
 and the user does what he does, and then plugins and stuff are
 unloaded and the process exits and no longer exists in memory.

 You are clearly asking about memory persistence between sessions, and
 this can be done with

 a) services (internal or external), such as redis, sql, or something
 else
 b) storing things to disk

 Aki

  On 13/03/2024 18:45 EET Joan Moreau via dovecot
   wrote:

   
  No, I am not referring to that

  I want to create an object at first call in memory

  that object would be retrievable at second and further
  calls of the
  plugin, as long as dovecot is running

  On 2024-03-13 16:29, Aki Tuomi via dovecot wrote:

    Not really no. You should use e.g. the dict interface
    for storing this kind
   of stateful data. When deinit is called the
   calling core process will
   likely die too.

   Aki

   On 13/03/2024 10:19 EET Joan Moreau
wrote:

   Keep a pointer in memory retrievable each time a
   plugin is called

    So the plugin keeps the memory and does not have to restart
    everything at each
    call

   On 12 March 2024 08:53:38 Aki Tuomi via dovecot
   
   wrote:

   On 11/03/2024 10:42 EET Joan Moreau
wrote:

   Hi
   Is it possible, from a plugin perspective, to
   create and recover a
   pointer in the core process (i.e. memory not lost
   between 2 calls to
   the plugin, even after the "deinit" of the
   plugin" ) ?

   Thanks
   Hi Joan!

   May I ask what you are attempting to achieve in
   more detail?

   Aki
   ___
   dovecot mailing list -- dovecot@dovecot.org
   To unsubscribe send an email to dovecot-
   le...@dovecot.org
    ___
  dovecot mailing list -- dovecot@dovecot.org
  To unsubscribe send an email to dovecot-le...@dovecot.org

Re: [EXT] Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
No, I am not referring to that.
I want to create an object at first call in memory;
that object would be retrievable at second and further calls of the plugin, as
long as dovecot is running.




On 2024-03-13 16:29, Aki Tuomi via dovecot wrote:
 Not really no. You should use e.g. the dict interface for storing this
 kind of stateful data. When deinit is called the calling core process
 will likely die too.

 Aki

  On 13/03/2024 10:19 EET Joan Moreau  wrote:


  Keep a pointer in memory retrievable each time a plugin is
  called

  So the plugin keeps the memory and does not have to restart
  everything at each call



  On 12 March 2024 08:53:38 Aki Tuomi via dovecot
   wrote:

On 11/03/2024 10:42 EET Joan Moreau
 wrote:


Hi
Is it possible, from a plugin
perspective, to create and recover a
pointer in the core process (i.e.
memory not lost between 2 calls to the
plugin, even after the "deinit" of the
plugin" ) ?

Thanks

   Hi Joan!

   May I ask what you are attempting to achieve in
   more detail?

   Aki
   ___
   dovecot mailing list -- dovecot@dovecot.org
   To unsubscribe send an email to dovecot-
   le...@dovecot.org
 ___
 dovecot mailing list -- dovecot@dovecot.org
 To unsubscribe send an email to dovecot-le...@dovecot.org

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org
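For illustration, Aki's option (b), storing things to disk between load/unload cycles, can be sketched as follows. This is Python pseudocode for the idea only, not a Dovecot plugin (real plugins are written in C and would normally use the dict interface Aki mentions); the state-file location and record layout here are assumptions:

```python
import json
import os
import tempfile

# Hypothetical location for the persisted plugin state (an assumption).
STATE_FILE = os.path.join(tempfile.gettempdir(), "plugin_state_demo.json")

def plugin_init():
    """On load, recover state saved by a previous plugin instance, if any."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as fh:
            return json.load(fh)
    return {"calls": 0}

def plugin_deinit(state):
    """On unload, persist state so a later init can recover it."""
    with open(STATE_FILE, "w") as fh:
        json.dump(state, fh)

# Start clean, then simulate two separate load/unload cycles
# (two short-lived imap processes in real life).
if os.path.exists(STATE_FILE):
    os.remove(STATE_FILE)

for _ in range(2):
    state = plugin_init()
    state["calls"] += 1
    plugin_deinit(state)

print(state["calls"])  # → 2
```

The point is that the memory itself cannot survive the process; only data written to an external store (file, dict backend, Redis, SQL) can be recovered by the next instance.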


Re: How to get a memory pointer in the core process

2024-03-13 Thread Joan Moreau via dovecot
Keep a pointer in memory retrievable each time a plugin is called

So the plugin keeps the memory and does not have to restart everything at each call



On 12 March 2024 08:53:38 Aki Tuomi via dovecot  wrote:

  On 11/03/2024 10:42 EET Joan Moreau  wrote:


  Hi
  Is it possible, from a plugin perspective, to create and
  recover a pointer in the core process (i.e. memory not lost
  between 2 calls to the plugin, even after the "deinit" of
  the plugin)?

  Thanks

 Hi Joan!

 May I ask what you are attempting to achieve in more detail?

 Aki
 ___
 dovecot mailing list -- dovecot@dovecot.org
 To unsubscribe send an email to dovecot-le...@dovecot.org

___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


How to get a memory pointer in the core process

2024-03-11 Thread Joan Moreau via dovecot
Hi
Is it possible, from a plugin perspective, to create and recover a pointer in
the core process (i.e. memory not lost between 2 calls to the plugin, even
after the "deinit" of the plugin)?

Thanks
___
dovecot mailing list -- dovecot@dovecot.org
To unsubscribe send an email to dovecot-le...@dovecot.org


[sumo-user] Network conversion

2024-03-11 Thread Joan Carmona Mercadé via sumo-user
Hello Community,

I'm trying to export a SUMO network to a VISUM environment.
Is there any way of exporting a SUMO network to some format that VISUM 
understands?

Thanks,
Joan

___
sumo-user mailing list
sumo-user@eclipse.org
To unsubscribe from this list, visit 
https://www.eclipse.org/mailman/listinfo/sumo-user


RE: [EXTERNAL] [PestList] nibbled pages in a ledger

2024-03-01 Thread 'Bacharach, Joan' via MuseumPests
Hello Nancy;

Please check out the National Park Service Conserve O Gram [COG] on identifying 
mouse and rat damage in museum collections at  3/12-COG-Rodent-Damage.pdf 
(nps.gov)<https://www.nps.gov/museum/publications/conserveogram/3-12-COG-Rodent-Damage.pdf>
 and COG 3/11: Identifying Museum Insect Pest Damage 
(nps.gov)<https://www.nps.gov/museum/publications/conserveogram/03-11.pdf>.  
These two COGs include illustrations of damage that help with identifying the 
pests that cause the damage.

You might also take a look at the COG 2/ 8: Hantavirus Disease Health and 
Safety Update 
(nps.gov)<https://www.nps.gov/museum/publications/conserveogram/02-08.pdf> that 
provides information on this rare but serious respiratory virus transmitted by 
mice and rats.

Best,
Joan

Joan Bacharach
Senior Curator
Museum Management Program
National Park Service

Museum Management Program Website<https://www.nps.gov/museum/>  |  
www.nps.gov/museum<http://www.nps.gov/museum>
Treasured Landscapes NPS Art Collections 
(nps.gov)<https://www.nps.gov/museum/exhibits/landscape_art/index.html>
The Hidden Worlds of the National 
Parks<https://artsandculture.google.com/project/national-park-service>



From: pestlist@googlegroups.com  On Behalf Of 
Hingst, Volker
Sent: Friday, March 1, 2024 4:22 AM
To: pestlist@googlegroups.com
Subject: [EXTERNAL] [PestList] nibbled pages in a ledger




 This email has been received from outside of DOI - Use caution before clicking 
on links, opening attachments, or responding.


Dear Nancy,

I also suspect mouse or rodent damage.
Please wear cotton gloves if you are handling original objects.

Best regards

V o l k e r   H i n g s t

Diplom-Restaurator Volker Hingst
IPM-Koordinator
---
LVR - Archivberatungs- und Fortbildungszentrum
Technisches Zentrum / Papierrestaurierung

Ehrenfriedstr.19
50259 Pulheim
Tel.: 02234-9854-236
Fax:  0221-8284-2479
E-Mail: volker.hin...@lvr.de<mailto:volker.hin...@lvr.de>
Internet: www.afz.lvr.de<http://www.afz.lvr.de/>


From: 'bugman22' via MuseumPests 
mailto:pestlist@googlegroups.com>>
Sent: Friday, 1 March 2024 01:16
To: pestlist@googlegroups.com<mailto:pestlist@googlegroups.com>
Subject: Re: [PestList] nibbled pages in a ledger

***Note: This email from 
pestlist+bncbdum5qf7zmcbbjv4qsxqmgqercyb...@googlegroups.com<mailto:pestlist+bncbdum5qf7zmcbbjv4qsxqmgqercyb...@googlegroups.com>
 comes from an external source. Only open content that you 
trust.***

Nancy -

Mice.  You can actually see their two incisors' marks for each bite.

Tom Parker

On Thursday, February 29, 2024 at 06:19:32 PM EST, Jenner, Nancy@Parks 
mailto:nancy.jen...@parks.ca.gov>> wrote:



Hello-I'm attaching two photos of damage to a ledger book at one of our parks.  
The ledger is calf skin covered, the damage is to the paper.



I have my suspicion, but I don't want to prejudice anyone as to whom the 
culprit might be for this damage.   Can I ask for ID suggestions?



Thanks very much!



--Nancy







Nancy Jenner, Curator II

California State Parks

Statewide Museum Collections Center


--
You received this message because you are subscribed to the Google Groups 
"MuseumPests" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
pestlist+unsubscr...@googlegroups.com<mailto:pestlist+unsubscr...@googlegroups.com>.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pestlist/BY5PR09MB542540F1F56998983286903BB05F2%40BY5PR09MB5425.namprd09.prod.outlook.com<https://groups.google.com/d/msgid/pestlist/BY5PR09MB542540F1F56998983286903BB05F2%40BY5PR09MB5425.namprd09.prod.outlook.com?utm_medium=email_source=footer>.
--
You received this message because you are subscribed to the Google Groups 
"MuseumPests" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
pestlist+unsubscr...@googlegroups.com<mailto:pestlist+unsubscr...@googlegroups.com>.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pestlist/290264290.2914359.1709252131269%40mail.yahoo.com<https://groups.google.com/d/msgid/pestlist/290264290.2914359.1709252131269%40mail.yahoo.com?utm_medium=email_source=footer>.
--
You received this message because you are subscribed to the Google Groups 
"MuseumPests" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
pestlist+unsubscr...@googlegroups.com<mailto:pestlist+unsubscr...@googlegroups.com>.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pestlist/26011f9d7d0e4c6db24dc11e404b967b%40lvr.de<https://groups.google.com/d/msgid/pestlist/26011f9d7d0e4c6db24dc11e404b967b%40lvr.de?utm_medium=email_source=footer>.

-- 
You received this message because you are subscribed to the Google Groups 
"M

[protobuf] Split 1 protobuf into N protobufs

2024-02-29 Thread Joan Balagueró
Hello,

I'm new to protobuf, and I was wondering if the following is possible. I 
have a protobuf response coming from an api that contains a list of hotels. 
If we imagine this response in xml, it would be something as follows:
<hotels><hotel>H1</hotel><hotel>H2</hotel></hotels>

I need to split this into 2 pieces:
<hotel>H1</hotel>
<hotel>H2</hotel>

Is it possible to do this if the response is protobuf? Can I put "marks" on 
this protobuf in any way? Something like using "*" below:
*<hotel>H1</hotel>*<hotel>H2</hotel>*

So I can know where a hotel starts and ends just by checking this mark or 
field, and then be able to extract portions of the proto byte array to 
build each piece.

What I want to avoid is to parse the proto, create a java object with the 
whole response, then split each response and convert each one into a 
protobuf again. The idea would be to get this split by only streaming the 
proto.

Thanks,
Joan.
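For what it's worth, a length-delimited submessage already carries its own "mark" on the wire: a field tag followed by a byte-length prefix. Under the assumption that the hotels are a repeated message field (here field number 1 — an assumption, since the actual .proto is not shown), the response bytes can be split by scanning tags and lengths without deserializing anything. A rough Python sketch:

```python
def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result = shift = 0
    while True:
        b = buf[pos]
        pos += 1
        result |= (b & 0x7F) << shift
        if not b & 0x80:
            return result, pos
        shift += 7

def split_repeated(buf, field_number=1):
    """Slice out each occurrence of a repeated length-delimited field."""
    pieces = []
    pos = 0
    while pos < len(buf):
        tag, pos = read_varint(buf, pos)
        fn, wire_type = tag >> 3, tag & 0x07
        if wire_type == 2:                    # length-delimited (messages, strings)
            length, pos = read_varint(buf, pos)
            chunk = bytes(buf[pos:pos + length])
            pos += length
            if fn == field_number:
                pieces.append(chunk)          # already a valid serialized submessage
        elif wire_type == 0:                  # varint field: skip its value
            _, pos = read_varint(buf, pos)
        elif wire_type == 1:                  # 64-bit field: skip 8 bytes
            pos += 8
        elif wire_type == 5:                  # 32-bit field: skip 4 bytes
            pos += 4
        else:
            raise ValueError("unsupported wire type %d" % wire_type)
    return pieces

# Two length-delimited field-1 entries with payloads b"H1" and b"H2".
print(split_repeated(b"\x0a\x02H1\x0a\x02H2"))  # → [b'H1', b'H2']
```

Each returned chunk is itself a complete serialized Hotel message, so no re-parsing and re-serializing of the whole response is needed — which is exactly the streaming split described above.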



-- 
You received this message because you are subscribed to the Google Groups 
"Protocol Buffers" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to protobuf+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/protobuf/8a767294-779b-4e6b-9942-2a5f1b90d29en%40googlegroups.com.


Re: Slow Metrics output in GUI

2024-02-27 Thread Joan g
Just cleaning up the vm_stats table in the cloudstack 'cloud' db:

> truncate table vm_stats;

and setting vm.stats.max.retention.time to a lower value addressed our
issues.

Joan

On Tue, Feb 27, 2024 at 12:19 AM Andrei Mikhailovsky
 wrote:

> Interesting.
>
> Joan, do you mind sharing how you are doing it?
>
> Thanks
>
> - Original Message -
> > From: "Joan g" 
> > To: "users" 
> > Sent: Monday, 26 February, 2024 18:06:58
> > Subject: Re: Slow Metrics output in GUI
>
> > I am facing the same problem in my 4.17.2 version. We are manually
> clearing
> > the stats table to  make the instance list page load faster :(
> >
> >
> > Joan
> >
> > On Mon, 26 Feb, 2024, 22:24 Andrei Mikhailovsky,
> 
> > wrote:
> >
> >> Hello everyone,
> >>
> >> My setup: ACS 4.18.1.0 on Ubuntu 20.04.6. Two management servers and
> mysql
> >> active-active replication.
> >>
> >>
> >> I seem to have a very slow response on viewing vms. It takes about 20
> >> seconds for the vm data to show when I click on any vm under Compute >
> >> Instances. When I click on various vm tabs (like NICs, Disks, Details,
> etc)
> >> the only tab that takes about 15-20 seconds to refresh is the Metrics
> tab.
> >> When the spinner stops I get the following message: "No data to show for
> >> the selected period." Also this information is shown in red colour: The
> >> Control Plane Status of this instance is Offline. Some actions on this
> >> instance will fail, if so please wait a while and retry. When I click on
> >> the 12 or 24 hours tab it takes a bit of time, but it does show the
> tables
> >> and the message in red colour is not shown.
> >> On mysql server I see the mysql process is using over 100% cpu (with 0%
> >> iowait) while ACS tries to retrieve the Metrics data. Also, the
> >> cloudstack-management server cpu usage goes to 200-400%.
> >>
> >>
> >> I've tried all the obvious (restarting management servers, stopping one
> of
> >> the management servers, restarting host servers).
> >>
> >> Does anyone know what is the issue? why does it take so long to retrieve
> >> the vm data and metrics? I don't remember having this problem before
> 4.18.
> >>
> >> Many thanks for any pointers.
> >>
> >> cheers
> >>
> >> Andrei
> >>
> >>
> >>
> >>
> >>
> >>
> >>
>
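For reference, a retention-style cleanup deletes only samples older than the window that vm.stats.max.retention.time controls, rather than truncating everything. A small illustration of the idea, using SQLite in place of the MySQL 'cloud' database (the table layout here is a simplified assumption, not CloudStack's actual schema):

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
# Simplified stand-in for CloudStack's vm_stats table (assumed layout).
conn.execute(
    "CREATE TABLE vm_stats (id INTEGER PRIMARY KEY, vm_id INTEGER, timestamp TEXT)"
)

now = datetime(2024, 2, 27, 12, 0, 0)
samples = [
    (1, now - timedelta(days=10)),  # older than the retention window
    (2, now - timedelta(days=2)),
    (3, now - timedelta(hours=1)),
]
conn.executemany(
    "INSERT INTO vm_stats (vm_id, timestamp) VALUES (?, ?)",
    [(vm, ts.isoformat(" ")) for vm, ts in samples],
)

# Keep only the last 7 days of samples, mirroring a 7-day retention setting.
cutoff = (now - timedelta(days=7)).isoformat(" ")
deleted = conn.execute("DELETE FROM vm_stats WHERE timestamp < ?", (cutoff,)).rowcount
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM vm_stats").fetchone()[0]
print(deleted, remaining)  # → 1 2
```

Unlike a blunt TRUNCATE, a timestamped DELETE keeps recent metrics available, which is roughly what lowering vm.stats.max.retention.time achieves on a running system.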


Re: Slow Metrics output in GUI

2024-02-26 Thread Joan g
I am facing the same problem in my 4.17.2 version. We are manually clearing
the stats table to make the instance list page load faster :(


Joan

On Mon, 26 Feb, 2024, 22:24 Andrei Mikhailovsky, 
wrote:

> Hello everyone,
>
> My setup: ACS 4.18.1.0 on Ubuntu 20.04.6. Two management servers and mysql
> active-active replication.
>
>
> I seem to have a very slow response on viewing vms. It takes about 20
> seconds for the vm data to show when I click on any vm under Compute >
> Instances. When I click on various vm tabs (like NICs, Disks, Details, etc)
> the only tab that takes about 15-20 seconds to refresh is the Metrics tab.
> When the spinner stops I get the following message: "No data to show for
> the selected period." Also this information is shown in red colour: The
> Control Plane Status of this instance is Offline. Some actions on this
> instance will fail, if so please wait a while and retry. When I click on
> the 12 or 24 hours tab it takes a bit of time, but it does show the tables
> and the message in red colour is not shown.
> On mysql server I see the mysql process is using over 100% cpu (with 0%
> iowait) while ACS tries to retrieve the Metrics data. Also, the
> cloudstack-management server cpu usage goes to 200-400%.
>
>
> I've tried all the obvious (restarting management servers, stopping one of
> the management servers, restarting host servers).
>
> Does anyone know what is the issue? why does it take so long to retrieve
> the vm data and metrics? I don't remember having this problem before 4.18.
>
> Many thanks for any pointers.
>
> cheers
>
> Andrei
>
>
>
>
>
>
>
>


Re: Cloudstack DB using 3 Node Galrea Cluster.

2024-02-26 Thread Joan g
Thank you Kiran for the detailed information. For me the replication is
fine; it's failing only on a new install or upgrade of cloudstack, which
calls for many schema changes. I think during install or upgrade we may
need to disable percona replication.

Regards, Joan

On Mon, 26 Feb, 2024, 18:10 Kiran Chavala, 
wrote:

> Hi Joan
>
> You can refer this article
>
>
> https://severalnines.com/blog/how-deploy-high-availability-cloudstackcloudplatform-mariadb-galera-cluster/
>
>
> I had these in my notes from when I tried setting up percona-xtradb; hope
> it's useful to you.
>
> 
> Install 2 ubuntu nodes for percona-xtradb cluster
>
> On ubuntu node 1
>
> $ sudo apt update
>
> $ sudo apt install gnupg2
>
> $ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release
> -sc)_all.deb
>
> $ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
>
> $ sudo apt update
>
> $ sudo apt install percona-server-server-5.7
>
>
>
> cat >>/etc/mysql/my.cnf <<EOF
> [mysqld]
>
> wsrep_provider=/usr/lib/libgalera_smm.so
> wsrep_cluster_name=democluster
> wsrep_cluster_address=gcomm://
> wsrep_node_name=ubuntuvm01
> wsrep_node_address=172.42.42.101
> wsrep_sst_method=xtrabackup-v2
> wsrep_sst_auth=repuser:reppassword
> pxc_strict_mode=ENFORCING
> binlog_format=ROW
> default_storage_engine=InnoDB
> innodb_autoinc_lock_mode=2
>
> EOF
>
>
>
> $systemctl start mysql
>
> login to mysql on node 1 and execute the following commands
>
>
> mysql -uroot -p -e "create user repuser@localhost identified by
> 'reppassword'"
> mysql -uroot -p -e "grant reload, replication client, process, lock tables
> on *.* to repuser@localhost"
> mysql -uroot -p -e "flush privileges"
>
>
>
> On Ubuntu Node 2
>
>
> $ sudo apt update
>
> $ sudo apt install gnupg2
>
> $ wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release
> -sc)_all.deb
>
> $ sudo dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
>
> $ sudo apt update
>
> $ sudo apt install percona-server-server-5.7
>
>
> cat >>/etc/mysql/my.cnf<<EOF
> [mysqld]
>
> wsrep_provider=/usr/lib/libgalera_smm.so
>
> wsrep_cluster_name=democluster
>
> wsrep_cluster_address= gcomm://172.42.42.101,172.42.42.102
>
> wsrep_node_name=ubuntuvm02
>
> wsrep_node_address=172.42.42.102
>
> wsrep_sst_method=xtrabackup-v2
>
> wsrep_sst_auth=repuser:reppassword
>
> pxc_strict_mode=ENFORCING
>
> binlog_format=ROW
>
> default_storage_engine=InnoDB
>
> innodb_autoinc_lock_mode=2
>
> EOF
>
>
>
> $systemctl start mysql
>
>
>
>
> Login back to node 1 check the status of the xtradb cluster
>
> mysql> show status like 'wsrep%';
>
>
> mysql>use mysql
> mysql>GRANT ALL ON *.* to root@'%' IDENTIFIED BY 'password';
> mysql>GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password'
> WITH GRANT OPTION;
> mysql>FLUSH PRIVILEGES;
> mysql> SELECT host FROM mysql.user WHERE User = 'root';
> mysql>SET GLOBAL pxc_strict_mode=PERMISSIVE;
>
>
>
> Regards
> Kiran
>
> From: Joan g 
> Date: Saturday, 24 February 2024 at 12:29 AM
> To: users@cloudstack.apache.org 
> Subject: Cloudstack DB using 3 Node Galrea Cluster.
> Hi Community,
>
> I need some suggestions  on using 3 node Mariadb *Galera Cluster or percona
> xtradb* for Cloudstack Databases.
>
> In My setup the Databases are behind a LB and write happens only to a
> single node
>
> With new Cloudstack 4.18.1 install  initial database migration is always
> failing because of schema update/sync issues with other DB nodes.
>
> Logs in Mysql err::
> 2024-02-23T12:55:15.521278Z 17 [ERROR] [MY-010584] [Repl] Replica SQL:
> Error 'Duplicate column name 'display'' on query. Default
>  database: 'cloud'. Query: 'ALTER TABLE cloud.guest_os ADD COLUMN display
> tinyint(1) DEFAULT '1' COMMENT 'should this guest_os b
> e shown to the end user'', Error_code: MY-001060
>
> Due to this Cloudstack initialisation is always failing.
>
> Can someone point me with a suggested method for DB HA
>
> Jon
>
>
>
>


Cloudstack DB using 3 Node Galera Cluster.

2024-02-23 Thread Joan g
Hi Community,

I need some suggestions on using a 3-node MariaDB *Galera Cluster or Percona
XtraDB* for the CloudStack databases.

In my setup the databases are behind an LB and writes happen only to a
single node.

With a new CloudStack 4.18.1 install, the initial database migration always
fails because of schema update/sync issues with the other DB nodes.

Logs in Mysql err::
2024-02-23T12:55:15.521278Z 17 [ERROR] [MY-010584] [Repl] Replica SQL:
Error 'Duplicate column name 'display'' on query. Default
 database: 'cloud'. Query: 'ALTER TABLE cloud.guest_os ADD COLUMN display
tinyint(1) DEFAULT '1' COMMENT 'should this guest_os b
e shown to the end user'', Error_code: MY-001060

Due to this Cloudstack initialisation is always failing.

Can someone point me to a suggested method for DB HA?

Jon


Re: Update Password For Existing XEN Host

2024-02-14 Thread Joan g
Hi Kiran & Hari,

Thank you. The API helped

Joan

On Thu, 15 Feb, 2024, 10:26 Harikrishna Patnala, <
harikrishna.patn...@shapeblue.com> wrote:

> I see there is an API for this
>
> https://cloudstack.apache.org/api/apidocs-4.19/apis/updateHostPassword.html
>
> Can you please try that
>
> Regards,
> Harikrishna
>
> From: Joan g 
> Date: Wednesday, 14 February 2024 at 7:22 PM
> To: users@cloudstack.apache.org 
> Subject: Update Password For Existing XEN Host
> Hello Community,
>
> My cloudstack Setup is  Using Xen server. Due to security reasons i was
> forced to change my xen root password.
>
> Can someone let me know how I can update the password for my host in
> cloudstack DB ?
>
> Following blow link does not help
>
> http://docs.cloudstack.apache.org/projects/archived-cloudstack-administration/en/latest/hosts.html#changing-host-password
>
> Regards,
> Joan
>
>
>
>
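For reference, the suggested API can also be driven from CloudMonkey. A hedged invocation sketch (the host UUID is a placeholder; parameter names are taken from the linked 4.19 API documentation, which also lists an optional `update_passwd_on_host` flag controlling whether the password is changed on the host itself or only in the DB):

```shell
# Sketch, assuming cmk (CloudMonkey) is already configured against the
# management server; <host-uuid> is a placeholder for the real host id.
cmk update hostpassword hostid=<host-uuid> username=root password='NewSecret' \
    update_passwd_on_host=false
```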


Update Password For Existing XEN Host

2024-02-14 Thread Joan g
Hello Community,

My CloudStack setup is using XenServer. For security reasons I was
forced to change my Xen root password.

Can someone let me know how I can update the password for my host in the
CloudStack DB?

The following link does not help:
http://docs.cloudstack.apache.org/projects/archived-cloudstack-administration/en/latest/hosts.html#changing-host-password

Regards,
Joan


RE: [PestList] IPM Policies and Procedures

2024-02-13 Thread 'Bacharach, Joan' via MuseumPests
Greetings;

Please see the U.S. National Park Service IPM policies and procedures in 
Chapter 5: Biological Infestations 
(nps.gov)<https://www.nps.gov/museum/publications/MHI/CHAP5.pdf> of the NPS 
Museum Handbook, Part I, Museum Collections.

This Handbook, at National Park Service - Museum Management Program 
(nps.gov)<https://www.nps.gov/museum/publications/MHI/mushbkI.html> provides 
policies, procedures and guidance for the preservation, protection, and care of 
all disciplines and materials represented in National Park Service collections.

Joan

Joan Bacharach
Senior Curator
Museum Management Program
National Park Service

Museum Management Program Website<https://www.nps.gov/museum/>  |  
www.nps.gov/museum<http://www.nps.gov/museum>
The Hidden Worlds of the National 
Parks<https://artsandculture.google.com/project/national-park-service>



From: pestlist@googlegroups.com  On Behalf Of 
Breitung, Eric
Sent: Monday, February 12, 2024 2:40 PM
To: pestlist@googlegroups.com
Subject: RE: [External] - RE: [PestList] IPM Policies and Procedures

Can I suggest that those willing to share their policies do so to this group 
email or directly with Rachael Arenstein indicating that it's okay to put them 
on the website as a way to update the information that's on the site?  It's 
something that can happen at the next Museum Pest working group meeting, 
perhaps?

Eric


--
Eric Breitung
Scientific Research
212 396 5390

The Metropolitan Museum of Art
1000 Fifth Avenue
New York, NY 10028
@metmuseum<https://www.instagram.com/metmuseum>
metmuseum.org<http://www.metmuseum.org/>



From: pestlist@googlegroups.com<mailto:pestlist@googlegroups.com> 
mailto:pestlist@googlegroups.com>> On Behalf Of 
Jenner, Nancy@Parks
Sent: Monday, February 12, 2024 1:40 PM
To: pestlist@googlegroups.com<mailto:pestlist@googlegroups.com>
Subject: [External] - RE: [PestList] IPM Policies and Procedures

I second this request!  If anyone feels comfortable sharing a recent version of 
their IPM plan to this group or to the MuseumPests site, I think it would be 
very much appreciated.

--Nancy

Nancy Jenner, Curator II
California State Parks
Statewide Museum Collections Center

From: 'lrestemyer' via MuseumPests 
mailto:pestlist@googlegroups.com>>
Sent: Monday, February 12, 2024 10:25 AM
To: MuseumPests mailto:pestlist@googlegroups.com>>
Subject: [PestList] IPM Policies and Procedures

Hello,

My institution is working on refining our IPM policies and procedures, and I 
was wondering if anyone would be willing to share their IPM policies/procedures 
with us. The MuseumPests website has been a great starting point, but many of 
those are a decade old or more. With how fast and often IPM evolves, I wanted 
to look at more recent documents to make sure we don't miss anything.

I would especially be interested in any natural history museum's policies, but 
any documents would be of great help.

Please send your documents to lrestem...@hmns.org<mailto:lrestem...@hmns.org> - 
any documents sent will be kept in confidence.

Thank you,
Lacey Restemyer
Houston Museum of Natural Science


--
You received this message because you are subscribed to the Google Groups 
"MuseumPests" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to 
pestlist+unsubscr...@googlegroups.com<mailto:pestlist+unsubscr...@googlegroups.com>.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/pestlist/7653c542-f79f-48fc-b7b2-f288d641549fn%40googlegroups.com<https://groups.google.com/d/msgid/pestlist/7653c542-f79f-48fc-b7b2-f288d641549fn%40googlegroups.com?utm_medium=email_source=footer>.


Re: VM utilisation issues

2024-02-06 Thread Joan g
Palash,

Is CPU capping enabled in the compute offering?

We are observing similar issues on VMs with CPU capping.

Joan.

On Wed, 7 Feb, 2024, 10:33 Palash Biswas,  wrote:

> Hi Daan,
>
> We measure by using Zabbix Agent inside the VM. Zabbix Agent is calling
> these keys for accurate CPU Utilisation.
>
> Windows: perf_counter["\Processor(_Total)\% Processor Time",1]
> Linux: mpstat 1 1 | awk '/Average:/ {print 100-$NF}'
>
>
> This VM has 32Core, 128GB RAM
>
> Regards,
> Palash
>
>
> On Tue, 6 Feb 2024 at 11:18 PM, Daan Hoogland 
> wrote:
>
> > Palash,
> > Are you logged in to the VM while measuring? That might explain (part) of
> > the difference.
> > What utilisation are you referring to? (cpu mem)
> > In the case of CPU; how many cpus are you measuring? (i.e. 95 ≃ 50*2)
> >
> > On Tue, Feb 6, 2024 at 3:29 PM Palash Biswas 
> wrote:
> >
> > > Hi Community,
> > >
> > > I have a VM. When I monitor from inside the VM, it shows
> > > utilisation of 95% and above.
> > >
> > > But from Cloudstack Metrics, it shows only 50% Max Utilisation.
> > >
> > > How does Cloudstack monitor the VM Metrics (CPU and Memory). Is there
> any
> > > way to make it more accurate?
> > >
> > > My VM stats interval (vm.stats.interval) is set to 60000 milliseconds
> > > (60 seconds)
> > >
> > > Regards,
> > > Palash
> > >
> > >
> > >
> >
> > --
> > Daan
> >
>
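Daan's point about the CPU count can be made concrete: a guest agent reporting aggregate busy time across all vCPUs and a view normalised per vCPU differ by exactly that factor. A small illustration (an assumed normalisation for the sake of the example, not CloudStack's actual internal formula):

```java
public class CpuPercent {
    // Convert an aggregate busy percentage (0..100 * vcpus) into a
    // per-VM percentage (0..100) by dividing by the vCPU count.
    static double normalize(double totalBusyPercent, int vcpus) {
        return totalBusyPercent / vcpus;
    }

    public static void main(String[] args) {
        // One of two vCPUs fully busy: guest tools may show ~100% on that
        // core while the normalised per-VM figure is 50%.
        System.out.println(normalize(100.0, 2)); // prints 50.0
    }
}
```

So a 95% reading inside a 2-vCPU guest and a ~50% reading in the CloudStack metrics view can both be "correct", just normalised differently.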


[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Description: 
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 (or 11) and in both cases we have observed that bug AVRO-1981 is still 
active and deserialization fails for nullable fields that have a logical type: 
happens at least for both "uuid" (in version 1.11.3, previous one just ignores 
this type) and "timestamp-millis" (in both versions).


The fields are defined as follows:

{code:json}
//(...)
{
  "name": "requestId",
  "type": [
"null",
{
  "type": "string",
  "logicalType": "uuid"
}
  ],
  "default": null
}
//(...)
{
  "name": "upperTimeLimit",
  //"doc" field
  "type": [
"null",
{
  "type": "long",
  "logicalType": "timestamp-millis"
}
  ],
  "default": null
}
{code}
 

The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
private static final org.apache.avro.Conversion[] conversions;
//(...)
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

Both the definition and the initialization of the conversions field are missing.

  was:
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 (or 11) and in both cases we have observed that bug AVRO-1981 is still 
active and deserialization fails for nullable fields that have a logical type: 
happens at least for both "uuid" (in version 1.11.3, previous one just ignores 
this type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
private static final org.apache.avro.Conversion[] conversions;
//(...)
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

Both the definition and the initialization of the conversions field are missing.


> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 

[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Description: 
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 (or 11) and in both cases we have observed that bug AVRO-1981 is still 
active and deserialization fails for nullable fields that have a logical type: 
happens at least for both "uuid" (in version 1.11.3, previous one just ignores 
this type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
private static final org.apache.avro.Conversion[] conversions;
//(...)
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

Both the definition and the initialization of the conversions field are missing.

  was:
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
private static final org.apache.avro.Conversion[] conversions;
//(...)
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

Both the definition and the initialization of the conversions field are missing.


> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
> (tried with both), code being generated either in Java 8 or 11.
> Faulty java generation happens in both maven running locally on intellij idea 
> and on jenkins pipelines.
> Issue happens in both windows and linux.
>Reporter: Joan Soto Targa
>Priority: Major
>
> In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling 
> to Java 8 (or 11) and in both cases we have observed that bug AVRO-1981 is 
> still active and deserialization fails for nullable fields that have a 
> lo

[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Environment: 
apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 (tried with both), code 
being generated either in Java 8 or 11.

Faulty java generation happens in both maven running locally on intellij idea 
and on jenkins pipelines.

Issue happens in both windows and linux.

  was:
apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 (tried with both), code 
being generated in Java 8.

Faulty java generation happens in both maven running locally on intellij idea 
and on jenkins pipelines.

Issue happens in both windows and linux.


> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
> (tried with both), code being generated either in Java 8 or 11.
> Faulty java generation happens in both maven running locally on intellij idea 
> and on jenkins pipelines.
> Issue happens in both windows and linux.
>Reporter: Joan Soto Targa
>Priority: Major
>
> In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling 
> to Java 8 and in both cases we have observed that bug AVRO-1981 is still 
> active and deserialization fails for nullable fields that have a logical 
> type: happens at least for both "uuid" (in version 1.11.3, previous one just 
> ignores this type) and "timestamp-millis" (in both versions).
>  
> The error we get for the uuid case when attempting to deserialize is:
> {noformat}
> java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast 
> to class java.util.UUID
> {noformat}
> In the generated class file we should be getting:
> {code:java}
> private static final org.apache.avro.Conversion[] conversions;
> //(...)
> static {
> MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
> MODEL$.addLogicalTypeConversion(new 
> TimeConversions.TimestampMillisConversion());
> ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
> DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
> conversions = new org.apache.avro.Conversion[] {
> null,
> null,
> new org.apache.avro.Conversions.UUIDConversion(),
> null,
> new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
> null
> };
> WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
> READER$ = MODEL$.createDatumReader(SCHEMA$);
> }
> {code}
> but we get:
> {code:java}
> static {
> MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
> MODEL$.addLogicalTypeConversion(new 
> TimeConversions.TimestampMillisConversion());
> ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
> DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
> WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
> READER$ = MODEL$.createDatumReader(SCHEMA$);
> }
> {code}
> Both the definition and the initialization of the conversions field are 
> missing.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
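A common runtime workaround for this class of generated-code bug (a hedged sketch, not part of the ticket; it assumes the Avro 1.11.x Java API and needs the Avro library on the classpath) is to register the missing conversions on the SpecificData model handed to the reader, so deserialization does not rely on the generated class's conversions field:

```java
import org.apache.avro.Schema;
import org.apache.avro.Conversions;
import org.apache.avro.data.TimeConversions;
import org.apache.avro.specific.SpecificData;
import org.apache.avro.specific.SpecificDatumReader;

// Sketch: build a reader whose model knows the uuid and timestamp-millis
// conversions, independently of what the generated class wired up.
public final class LogicalTypeWorkaround {
    public static <T> SpecificDatumReader<T> readerFor(Class<T> generatedClass) {
        SpecificData model = new SpecificData(generatedClass.getClassLoader());
        model.addLogicalTypeConversion(new Conversions.UUIDConversion());
        model.addLogicalTypeConversion(
                new TimeConversions.TimestampMillisConversion());
        Schema schema = model.getSchema(generatedClass);
        return new SpecificDatumReader<>(schema, schema, model);
    }
}
```

Whether this sidesteps the ClassCastException depends on the code path in use; the proper fix remains for the compiler to emit the conversions array as AVRO-1981 intended.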


[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Description: 
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
private static final org.apache.avro.Conversion[] conversions;
//(...)
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

Both the definition and the initialization of the conversions field are missing.

  was:
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}



> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
> (tried with both), code being generated in Java 8.
> Faulty java generation happens in both maven running locally on intellij idea 
> and on jenkins pipelines.
> Issue happens in both windows and linux.
>Reporter: Joan Soto Targa
>Priority: Major
>
> In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling 
> to Java 8 and in both cases we have observed that bug AVRO-1981 is still 
> active and deserialization fails for nullable fields that have a logical 
> type: happens at least for both "uuid" (in version 1.11.3, previous one just 
> ignores this type) and "timestamp-millis" (in both versions).

[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Description: 
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the generated class file we should be getting:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}


  was:
In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the class code, we should be getting:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}



> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
> (tried with both), code being generated in Java 8.
> Faulty java generation happens in both maven running locally on intellij idea 
> and on jenkins pipelines.
> Issue happens in both windows and linux.
>Reporter: Joan Soto Targa
>Priority: Major
>
> In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling 
> to Java 8 and in both cases we have observed that bug AVRO-1981 is still 
> active and deserialization fails for nullable fields that have a logical 
> type: happens at least for both "uuid" (in version 1.11.3, previous one just 
> ignores this type) and "timestamp-millis" (in both versions).
>  
> The error we get for the uuid case when attempting to deserialize is:
> {noformat}
> java.lang.ClassCastException: class org.apache.avro.util.Utf8 canno

[jira] [Created] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)
Joan Soto Targa created AVRO-3934:
-

 Summary: Generated Java code still fails with union containing 
logical type
 Key: AVRO-3934
 URL: https://issues.apache.org/jira/browse/AVRO-3934
 Project: Apache Avro
  Issue Type: Bug
  Components: java, logical types
Affects Versions: 1.11.3, 1.9.2
 Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
(tried with both), code being generated in Java 8.

Faulty java generation happens in both maven running on intellij idea and on 
jenkins pipelines.

Issue happens in both windows and linux.
Reporter: Joan Soto Targa


In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling to 
Java 8 and in both cases we have observed that bug AVRO-1981 is still active 
and deserialization fails for nullable fields that have a logical type: happens 
at least for both "uuid" (in version 1.11.3, previous one just ignores this 
type) and "timestamp-millis" (in both versions).

 
The error we get for the uuid case when attempting to deserialize is:
{noformat}
java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast to 
class java.util.UUID
{noformat}

In the class code, we should be getting:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
conversions = new org.apache.avro.Conversion[] {
null,
null,
new org.apache.avro.Conversions.UUIDConversion(),
null,
new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
null
};
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}

but we get:
{code:java}
static {
MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
MODEL$.addLogicalTypeConversion(new 
TimeConversions.TimestampMillisConversion());
ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
READER$ = MODEL$.createDatumReader(SCHEMA$);
}
{code}
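The cast failure can be reproduced with a stdlib-only sketch (no Avro involved; `UnionCastDemo` and `getUuidField` are illustrative stand-ins, not the actual generated code): when no conversion entry exists for the union branch, the field still holds the raw character-sequence value, and the typed accessor's cast to {{java.util.UUID}} throws exactly the reported exception.

```java
import java.util.UUID;

public class UnionCastDemo {
    // Illustrative stand-in for the generated getter: with no conversion
    // registered for the union branch, the field still holds the raw
    // character-sequence value, and this cast fails with the same
    // ClassCastException reported above.
    static UUID getUuidField(Object rawUnionValue) {
        return (UUID) rawUnionValue;
    }

    public static void main(String[] args) {
        // A CharSequence stands in for org.apache.avro.util.Utf8 here.
        Object raw = new StringBuilder("123e4567-e89b-12d3-a456-426614174000");
        String result;
        try {
            getUuidField(raw);
            result = "no exception";
        } catch (ClassCastException e) {
            result = "ClassCastException";
        }
        System.out.println(result); // ClassCastException
    }
}
```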




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (AVRO-3934) Generated Java code still fails with union containing logical type

2024-02-02 Thread Joan Soto Targa (Jira)


 [ 
https://issues.apache.org/jira/browse/AVRO-3934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joan Soto Targa updated AVRO-3934:
--
Environment: 
apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 (tried with both), code 
being generated in Java 8.

Faulty java generation happens in both maven running locally on intellij idea 
and on jenkins pipelines.

Issue happens in both windows and linux.

  was:
apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 (tried with both), code 
being generated in Java 8.

Faulty java generation happens in both maven running on intellij idea and on 
jenkins pipelines.

Issue happens in both windows and linux.


> Generated Java code still fails with union containing logical type
> --
>
> Key: AVRO-3934
> URL: https://issues.apache.org/jira/browse/AVRO-3934
> Project: Apache Avro
>  Issue Type: Bug
>  Components: java, logical types
>Affects Versions: 1.9.2, 1.11.3
> Environment: apache maven, avro maven plugin, avro v1.11.3 or 1.9.2 
> (tried with both), code being generated in Java 8.
> Faulty java generation happens in both maven running locally on intellij idea 
> and on jenkins pipelines.
> Issue happens in both windows and linux.
>Reporter: Joan Soto Targa
>Priority: Major
>
> In our company we're using both 1.9.2 and 1.11.3 versions of avro compiling 
> to Java 8 and in both cases we have observed that bug AVRO-1981 is still 
> active and deserialization fails for nullable fields that have a logical 
> type: happens at least for both "uuid" (in version 1.11.3, previous one just 
> ignores this type) and "timestamp-millis" (in both versions).
>  
> The error we get for the uuid case when attempting to deserialize is:
> {noformat}
> java.lang.ClassCastException: class org.apache.avro.util.Utf8 cannot be cast 
> to class java.util.UUID
> {noformat}
> In the class code, we should be getting:
> {code:java}
> static {
> MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
> MODEL$.addLogicalTypeConversion(new 
> TimeConversions.TimestampMillisConversion());
> ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
> DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
> conversions = new org.apache.avro.Conversion[] {
> null,
> null,
> new org.apache.avro.Conversions.UUIDConversion(),
> null,
> new org.apache.avro.data.TimeConversions.TimestampMillisConversion(),
> null
> };
> WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
> READER$ = MODEL$.createDatumReader(SCHEMA$);
> }
> {code}
> but we get:
> {code:java}
> static {
> MODEL$.addLogicalTypeConversion(new Conversions.UUIDConversion());
> MODEL$.addLogicalTypeConversion(new 
> TimeConversions.TimestampMillisConversion());
> ENCODER = new BinaryMessageEncoder(MODEL$, SCHEMA$);
> DECODER = new BinaryMessageDecoder(MODEL$, SCHEMA$);
> WRITER$ = MODEL$.createDatumWriter(SCHEMA$);
> READER$ = MODEL$.createDatumReader(SCHEMA$);
> }
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: Custom auto-vacuum toggle disabled on partitioned tables

2024-01-30 Thread Joan
Sorry, this is the menu, should be fine now (the cursor is just a pointer
in this case)
[image: ksnip_tmp_rLNmEs.png]


Missatge de Aditya Toshniwal  del dia
dl., 29 de gen. 2024 a les 12:51:

> Hi Joan,
>
> The image you shared is broken. Please share the correct one.
>
> On Mon, Jan 29, 2024 at 4:10 PM Joan  wrote:
>
>> Hi, I am using pgadmin4 8.2 to connect to a postgres 14 server. I wanted
>> to enable the custom auto-vacuum on a partitioned table because with the
>> ordinary vacuum the table gets slower over time.
>> I've found that the toggle is disabled (while I can change it on
>> regular tables); I can do a manual vacuum from the maintenance menu, but
>> having it automated is more convenient.
>> [image: ksnip_tmp_ejSoUK.png]
>> In the documentation I've found that the vacuum process should walk over
>> the partitioned tables as if they were normal tables and vacuum them
>> when needed.
>>
>> Regards
>>
>
>
> --
> Thanks,
> Aditya Toshniwal
> pgAdmin Hacker | Sr. Software Architect | *enterprisedb.com*
> <https://www.enterprisedb.com/>
> "Don't Complain about Heat, Plant a TREE"
>


Custom auto-vacuum toggle disabled on partitioned tables

2024-01-29 Thread Joan
Hi, I am using pgadmin4 8.2 to connect to a postgres 14 server. I wanted to
enable the custom auto-vacuum on a partitioned table because with the
ordinary vacuum the table gets slower over time.
I've found that the toggle is disabled (while I can change it on
regular tables); I can do a manual vacuum from the maintenance menu, but
having it automated is more convenient.
[image: ksnip_tmp_ejSoUK.png]
In the documentation I've found that the vacuum process should walk over the
partitioned tables as if they were normal tables and vacuum them when
needed.

Regards


[nysbirds-l] Solo Bohemian Waxwing

2024-01-03 Thread Joan Collins
Happy New Year to everyone!

 

There is a solo Bohemian Waxwing in Long Lake.  (I don’t recall ever seeing
just one!)  Emily and Brian Farr texted me photos yesterday afternoon of a
Bohemian Waxwing feeding in their Japanese Apple Tree.  They said it has
been in their yard for a week.  The bird can be seen from the road (owners
were fine with me posting).  Their home is across from Stewart’s Shop on
Route 28N to the right of the Hoss’s Country Corners buildings.  It is the
first residential home and has feeders to the left of the house.  The fruit
tree is located to the left of the garage behind the home.  Interesting that
they have two other fruit trees that the bird isn’t interested in!  The
homeowners said they donated one of the Japanese Apple trees to the Long
Lake Library years ago – a tree I’ve kept an eye on this winter!  If the
bird runs out of fruit at their home, it may move to this fruit tree in
front of the library a short distance from their home.  I was talking to the
homeowners for an hour and the Bohemian Waxwing just sits in the tree and
occasionally grabs an apple!  The only time it left the tree was when it was
chased by Blue Jays.

 

Quick update on finches: There is a large irruption of Pine Siskins in the
Adirondacks that started in early fall.  Purple Finches and Amer.
Goldfinches are still around.  Both Red and White-winged Crossbills are
around in patchy areas.  A flock of 8 White-winged Crossbills have been
feeding near Sabattis Bog in Long Lake and I found a pair along Route 30
near John Dillon Park in Long Lake.  A Red Crossbill was singing recently at
the Round Lake Trailhead on Sabattis Road.  I counted 41 White-winged
Crossbills during the Saranac Lake CBC on 12/30/23 (Route 55, Oregon Plains,
Bigelow Road, and about a mile of the bog trail – ½ north and ½ south).
Three different males along Oregon Plains Road were singing.  My Pine Siskin
count was 455 and that was conservative!

 

A few climate change notes: Our first frost in Long Lake was October 31 this
year – about 6 weeks later than it used to be a couple decades ago.  I had
hanging flowers alive into November – in the past they would always be dead
by mid-September.  On a warmish evening on November 17, 2023, I was driving
home on Route 28N dodging frogs in the road – just remarkable and
surprising.  As we watched the Bohemian Waxwing earlier today, we made note
of the open ground with no snow cover, and lamented the disappearance of
winter.

 

Joan Collins

Long Lake, NY

 

 


--

(copy & paste any URL below, then modify any text "_DOT_" to a period ".")

NYSbirds-L List Info:
NortheastBirding_DOT_com/NYSbirdsWELCOME_DOT_htm
NortheastBirding_DOT_com/NYSbirdsRULES_DOT_htm
NortheastBirding_DOT_com/NYSbirdsSubscribeConfigurationLeave_DOT_htm

ARCHIVES:
1) mail-archive_DOT_com/nysbirds-l@cornell_DOT_edu/maillist_DOT_html
2) surfbirds_DOT_com/birdingmail/Group/NYSBirds-L
3) birding_DOT_aba_DOT_org/maillist/NY01

Please submit your observations to eBird:
ebird_DOT_org/content/ebird/

--

Cannot build OOT from macos

2023-12-25 Thread Joan Solà Ortega
Hi!

I have been using gnuradio for a while on my macos M2, including the creation 
and compilation of out-of-tree modules or OOT.

Today, after a few months of inactivity, I tried to re-compile my OOT blocks. 
Something must have changed in the system during this time, since I cannot 
compile now.

I post the error message, and then my system configuration. (Sorry I do not 
know how to format code properly for the mailing list archives to print nicely)


(allaus) xxx@yyy build % make -j1
[ 10%] Building CXX object 
lib/CMakeFiles/gnuradio-iri_arva_clean.dir/parallel_adapted_iir_cvf_impl.cc.o
In file included from 
/Users/jsola/dev/allaus/allaus-gnuradio/gr-iri_arva_clean/lib/parallel_adapted_iir_cvf_impl.cc:8:
In file included from 
/Users/jsola/mambaforge/envs/allaus/include/gnuradio/io_signature.h:16:
/Users/xxx/mambaforge/envs/allaus/include/spdlog/fmt/fmt.h:27:14: fatal error: 
'spdlog/fmt/bundled/core.h' file not found
#include <spdlog/fmt/bundled/core.h>
 ^~~
1 error generated.
make[2]: *** 
[lib/CMakeFiles/gnuradio-iri_arva_clean.dir/parallel_adapted_iir_cvf_impl.cc.o] 
Error 1
make[1]: *** [lib/CMakeFiles/gnuradio-iri_arva_clean.dir/all] Error 2
make: *** [all] Error 2


I am working with Airspy modules. The whole setup was installed in my machine 
via conda/mamba, in a completely dedicated conda environment. This is all I 
have:

1. Create a conda environment
mamba create -n allaus
mamba activate allaus
mamba env config vars set LD_LIBRARY_PATH=~/miniforge3/envs/allaus // use 
DYLD_LIBRARY_PATH if on MacOS
mamba activate allaus

2. Install all required packages
mamba install -c conda-forge cmake pybind11 libusb boost libairspyhf airspyhf 
soapysdr-module-airspyhf gnuradio

3. build
cd allaus-gnuradio
cd gr-iri_arva_clean
mkdir build
cd build
cmake --install-prefix=~/miniforge3/envs/allaus ..
make -j5

after which the reported error occurs.

With these exact steps I could compile my OOT modules some months ago. These 
modules were created following the gnuradio instructions here 
<https://wiki.gnuradio.org/index.php?title=Creating_C++_OOT_with_gr-modtool#Compiling_and_Installing_the_Block>.

I tried to trace back the error to spdlog, with no luck. 

Any ideas ?

Thank you!

Joan

Bug#1059400: kubetail: Broken on Debian Bookworm ("syntax error near unexpected token") due to Bash 5.2 incompatibility

2023-12-24 Thread Joan Bruguera Micó
Package: kubetail
Version: 1.6.5-2
Severity: important
Tags: upstream

Dear Maintainer,

Unfortunately, attempting to use kubetail fails on Debian Bookworm.
In particular, any trivial use reports a "syntax error", as follows:

```
$ kubetail nginx
Will tail 2 logs...
nginx-deployment-7c79c4bf97-jdmkg
nginx-deployment-7c79c4bf97-p7bxr
/usr/bin/kubetail: eval: line 326: syntax error near unexpected token `kubectl'
/usr/bin/kubetail: eval: line 326: `kubectl  logs 
nginx-deployment-7c79c4bf97-jdmkg nginx -f=true --since=10s --tail=-1| 
while read -r; do echo "[nginx-deployment-7c79c4bf97-jdmkg] $REPLY " | tail -n 
+1; done  kubectl  logs nginx-deployment-7c79c4bf97-p7bxr nginx -f=true 
--since=10s --tail=-1| while read -r; do echo 
"[nginx-deployment-7c79c4bf97-p7bxr] $REPLY " | tail -n +1; done'
```

This error has been reported and fixed upstream, with the root cause
being a breaking change in Bash 5.2, the version used in Bookworm.
Upstream issue: https://github.com/johanhaleby/kubetail/issues/133

A fix for this issue has been merged in Kubetail 1.6.17:
https://github.com/johanhaleby/kubetail/pull/134

I'd be grateful if the version in Debian Bookworm could be updated to
include the fix above, as otherwise the package cannot be used unless
one resorts to one of the workarounds published in the upstream issue.

Regards.

-- System Information:
Debian Release: 12.4
  APT prefers stable-updates
  APT policy: (500, 'stable-updates'), (500, 'stable-security'), (500, 'stable')
Architecture: amd64 (x86_64)

Kernel: Linux 6.7.0-rc6-1001-mainline-git-00303-g3f82f1c3a036 (SMP w/16 CPU 
threads; PREEMPT)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE, 
TAINT_UNSIGNED_MODULE
Locale: LANG=C, LC_CTYPE=C.UTF-8 (charmap=UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /usr/bin/dash
Init: unable to detect

-- no debconf information



Re: [LincolnTalk] Town Meeting Moments

2023-12-20 Thread Joan Kimball
And in Mandarin Chinese, Dick Bolt responded to An Wang. It was a
memorable time in our lives.

Joan
>
> On Wed, Dec 20, 2023, 3:49 PM Adam M Hogue  wrote:
>
>> Yes and there was the famous meeting where few people showed up and voted
>> down state funding for the school and cost us 60 plus million dollars
>> because the vocal minority showed up.  So we can thank our high tax bills
>> on this horrible process.
>>
>> *Adam M Hogue*
>> *Cell: **(978) 828-6184 <(978)%20828-6184>*
>>
>> On Dec 19, 2023, at 11:19 PM, Sara Mattes  wrote:
>>
>> Ah-thank you for filling in blanks and adding more nuance.
>> I hope others read this and add their memories too.
>>
>>
>> --
>> Sara Mattes
>>
>>
>>
>>
>> On Dec 19, 2023, at 11:06 PM, iearler...@verizon.net wrote:
>>
>> I was at that meeting. And after listening to Dr. Wang and others, the
>> petition was amended to accept the Peace Pole as a gift from the people of
>> Japan, rather than the government.
>>
>>
>> On Tuesday, December 19, 2023 at 09:02:10 PM EST, Sara Mattes <
>> samat...@gmail.com> wrote:
>>
>>
>> During some of those extended debates, I sometimes learned a lot.
>>
>> There was a famous meeting in the mid 1980s where the town was asked to
>> accept a “Peace Pole” form Japan.
>> It was a Citizens Petition that no one expected to be very controversial.
>> Who would/could be against a Peace Pole?
>>
>> It was my ignorance not to realize that many who had come from China or
>> had descended from those who had, had very strong feelings about Japan.
>> Dr. An Wang was one of them.
>> He stood before us to explain his powerful emotions at the thought, how it
>> triggered memories of the brutal treatment he and his family had endured,
>> and that other Chinese had endured at the hands of the Japanese during WWII.
>> We had a history lesson, but it did not end there.
>> Others got up and spoke to the need to heal: if the Japanese could reach
>> out to us, after what they had endured too, surely we could all reach out
>> to each other.
>> The discussion went on for some time…back and forth.
>> By the end of the meeting, Dr. Wang expressed his appreciation of our
>> willingness to listen and that now he felt it was the right thing to do-to
>> accept the Peace Pole.
>>
>> You will find that pole planted in the SW corner of the town pocket park
>> across from the mall that the Garden Club lovingly cares for.
>>
>> At this recent Town Meeting, most were held to a strict 2 minutes, not
>> going on for 10.
>> In fact, one former town official had his mike cut as he hit the 2.5
>> minute mark!
>>
>> I, for one, am willing to listen to my fellow citizens, all 15 or more.
>> Who knows, I might learn something.
>>
>> --
>> Sara Mattes
>>
>>
>>
>>
>> On Dec 19, 2023, at 8:33 PM, RAandBOB  wrote:
>>
>> No question annual town meeting could use some improvements. But with
>> regard to the line at the mike, imagine each of those 15 people wanting to
>> talk for 10 minutes. I have been to many a town meeting where that did
>> happen, and believe me, it’s almost beyond enduring.
>>
>> Ruth Ann
>> (She, her, hers)
>>
>> On Dec 19, 2023, at 7:49 PM, Terri via Lincoln 
>> wrote:
>>
>> 
>> Does anyone know what the COST of the Clickers ARE or the breakdown of
>> how many we need?
>>
>> The December 2nd Town Meeting was 100%  proof of how inefficient our Town
>> voting process is.  If you have questions about what I am referring to...
>> please watch the video of  the Dec 2nd Meeting especially if you left early
>> or missed it.
>> *It's an eye sore.*... not to mention *disheartening.*. to watch  >
>> 15 fellow residents standing in line  at the Mic waiting patiently to be
>> "allowed" to speak.. and then abruptly  told to sit down.  Sad indeed.
>>
>>
>> Theresa Kafina
>>
>>
>>
>>
>>
>>
>> On Tuesday, December 19, 2023 at 10:38:06 AM EST, ٍSarah Postlethwait <
>> sa...@bayhas.com> wrote:
>>
>>
>> Like all things in Lincoln- if you want it, you’ll have to fight for it.
>>
>> The moderator is against change in general, and it seems that the
>> “clickers” are unlikely to be purchased until after March, if at all. There
>> was debate about town meeting being a time to “stand and make your voice
>> heard” and the clickers make it anonymous.
>>
>> S

Re: Cloudstack Agent 4.18.1 metrics issue

2023-12-13 Thread Joan g
Thanks Wei, Unsuspending helped

Cheers
Jon :)

On Wed, Dec 13, 2023 at 4:07 PM Wei ZHOU  wrote:

> yes.
>
> the domain state is not supported in libvirt-java
> I just created a PR
> https://gitlab.com/libvirt/libvirt-java/-/merge_requests/40
>
>
> -Wei
>
> On Wed, 13 Dec 2023 at 11:36, Joan g  wrote:
>
> > yes. restarting libvirtd does not help.
> >
> > "virsh list" command output. Is the "pmsuspended" causing  issue ?
> >
> > root@kvm-1:~# virsh list
> >  IdName   State
> > ---
> >  11i-2-281-VMrunning
> >  25i-2-228-VMrunning
> >  27i-2-230-VMrunning
> >  52i-2-483-VMrunning
> >  121   i-2-585-VMpmsuspended
> >  122   i-2-584-VMrunning
> >  123   i-2-590-VMrunning
> >  139   i-2-635-VMrunning
> >  145   i-2-479-VMrunning
> >  164   i-8-703-VM running
> >
> >
> > Jon
> >
> > On Wed, Dec 13, 2023 at 3:48 PM Wei ZHOU  wrote:
> >
> > > Can you restart libvirtd ? If it does not work, can you share the
> result
> > of
> > > "virsh list" command ?
> > >
> > >
> > >
> > > -Wei
> > >
> > > On Wed, 13 Dec 2023 at 09:48, Joan g  wrote:
> > >
> > > > Hi Wei.
> > > >
> > > > OS : Ubuntu 20.04.6 LTS
> > > >
> > > > root@kvm-1:~# virsh version
> > > > Compiled against library: libvirt 6.0.0
> > > > Using library: libvirt 6.0.0
> > > > Using API: QEMU 6.0.0
> > > > Running hypervisor: QEMU 4.2.1
> > > >
> > > > Regards,
> > > > Jon
> > > >
> > > > On Wed, Dec 13, 2023 at 2:10 PM Wei ZHOU 
> > wrote:
> > > >
> > > > > What's the OS and libvirt/qemu version ?
> > > > >
> > > > > -Wei
> > > > >
> > > > > On Wed, 13 Dec 2023 at 09:30, Joan g  wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > > Recently below is getting logged in my cloudstack agent and the
> > > manager
> > > > > is
> > > > > > not able to get any VM metrics. Can Someone help on below
> > > > > >
> > > > > > 2023-12-13 03:23:37,318 WARN  [cloud.agent.Agent]
> > > > > > (agentRequest-Handler-1:null) (logid:58ef1261) Caught:
> > > > > > java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds
> for
> > > > > length
> > > > > > 7
> > > > > > at org.libvirt.DomainInfo.(Unknown Source)
> > > > > > at org.libvirt.Domain.getInfo(Unknown Source)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.getVmStat(LibvirtComputingResource.java:4176)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:53)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:37)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1848)
> > > > > > at com.cloud.agent.Agent.processRequest(Agent.java:662)
> > > > > > at
> > > > > > com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1082)
> > > > > > at com.cloud.utils.nio.Task.call(Task.java:83)
> > > > > > at com.cloud.utils.nio.Task.call(Task.java:29)
> > > > > > at
> > > > > >
> java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> > > > > > at
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> > > > > > at java.base/java.lang.Thread.run(Thread.java:829)
> > > > > >
> > > > > > Regards,
> > > > > > Jon
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Cloudstack Agent 4.18.1 metrics issue

2023-12-13 Thread Joan g
yes. restarting libvirtd does not help.

"virsh list" command output. Is the "pmsuspended" state causing the issue?

root@kvm-1:~# virsh list
 IdName   State
---
 11i-2-281-VMrunning
 25i-2-228-VMrunning
 27i-2-230-VMrunning
 52i-2-483-VMrunning
 121   i-2-585-VMpmsuspended
 122   i-2-584-VMrunning
 123   i-2-590-VMrunning
 139   i-2-635-VMrunning
 145   i-2-479-VMrunning
 164   i-8-703-VM running


Jon

On Wed, Dec 13, 2023 at 3:48 PM Wei ZHOU  wrote:

> Can you restart libvirtd ? If it does not work, can you share the result of
> "virsh list" command ?
>
>
>
> -Wei
>
> On Wed, 13 Dec 2023 at 09:48, Joan g  wrote:
>
> > Hi Wei.
> >
> > OS : Ubuntu 20.04.6 LTS
> >
> > root@kvm-1:~# virsh version
> > Compiled against library: libvirt 6.0.0
> > Using library: libvirt 6.0.0
> > Using API: QEMU 6.0.0
> > Running hypervisor: QEMU 4.2.1
> >
> > Regards,
> > Jon
> >
> > On Wed, Dec 13, 2023 at 2:10 PM Wei ZHOU  wrote:
> >
> > > What's the OS and libvirt/qemu version ?
> > >
> > > -Wei
> > >
> > > On Wed, 13 Dec 2023 at 09:30, Joan g  wrote:
> > >
> > > > Hi,
> > > >
> > > > Recently below is getting logged in my cloudstack agent and the
> manager
> > > is
> > > > not able to get any VM metrics. Can Someone help on below
> > > >
> > > > 2023-12-13 03:23:37,318 WARN  [cloud.agent.Agent]
> > > > (agentRequest-Handler-1:null) (logid:58ef1261) Caught:
> > > > java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for
> > > length
> > > > 7
> > > > at org.libvirt.DomainInfo.(Unknown Source)
> > > > at org.libvirt.Domain.getInfo(Unknown Source)
> > > > at
> > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.getVmStat(LibvirtComputingResource.java:4176)
> > > > at
> > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:53)
> > > > at
> > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:37)
> > > > at
> > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> > > > at
> > > >
> > > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1848)
> > > > at com.cloud.agent.Agent.processRequest(Agent.java:662)
> > > > at
> > > > com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1082)
> > > > at com.cloud.utils.nio.Task.call(Task.java:83)
> > > > at com.cloud.utils.nio.Task.call(Task.java:29)
> > > > at
> > > > java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> > > > at
> > > >
> > > >
> > >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> > > > at
> > > >
> > > >
> > >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> > > > at java.base/java.lang.Thread.run(Thread.java:829)
> > > >
> > > > Regards,
> > > > Jon
> > > >
> > >
> >
>


Re: Cloudstack Agent 4.18.1 metrics issue

2023-12-13 Thread Joan g
Hi Wei.

OS : Ubuntu 20.04.6 LTS

root@kvm-1:~# virsh version
Compiled against library: libvirt 6.0.0
Using library: libvirt 6.0.0
Using API: QEMU 6.0.0
Running hypervisor: QEMU 4.2.1

Regards,
Jon

On Wed, Dec 13, 2023 at 2:10 PM Wei ZHOU  wrote:

> What's the OS and libvirt/qemu version ?
>
> -Wei
>
> On Wed, 13 Dec 2023 at 09:30, Joan g  wrote:
>
> > Hi,
> >
> > Recently below is getting logged in my cloudstack agent and the manager
> is
> > not able to get any VM metrics. Can Someone help on below
> >
> > 2023-12-13 03:23:37,318 WARN  [cloud.agent.Agent]
> > (agentRequest-Handler-1:null) (logid:58ef1261) Caught:
> > java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for
> length
> > 7
> > at org.libvirt.DomainInfo.(Unknown Source)
> > at org.libvirt.Domain.getInfo(Unknown Source)
> > at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.getVmStat(LibvirtComputingResource.java:4176)
> > at
> >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:53)
> > at
> >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:37)
> > at
> >
> >
> com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
> > at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1848)
> > at com.cloud.agent.Agent.processRequest(Agent.java:662)
> > at
> > com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1082)
> > at com.cloud.utils.nio.Task.call(Task.java:83)
> > at com.cloud.utils.nio.Task.call(Task.java:29)
> > at
> > java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> > at
> >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> > at
> >
> >
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> > at java.base/java.lang.Thread.run(Thread.java:829)
> >
> > Regards,
> > Jon
> >
>


Cloudstack Agent 4.18.1 metrics issue

2023-12-13 Thread Joan g
Hi,

Recently the below is getting logged by my cloudstack agent, and the manager is
not able to get any VM metrics. Can someone help with the below?

2023-12-13 03:23:37,318 WARN  [cloud.agent.Agent]
(agentRequest-Handler-1:null) (logid:58ef1261) Caught:
java.lang.ArrayIndexOutOfBoundsException: Index 7 out of bounds for length 7
at org.libvirt.DomainInfo.(Unknown Source)
at org.libvirt.Domain.getInfo(Unknown Source)
at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.getVmStat(LibvirtComputingResource.java:4176)
at
com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:53)
at
com.cloud.hypervisor.kvm.resource.wrapper.LibvirtGetVmStatsCommandWrapper.execute(LibvirtGetVmStatsCommandWrapper.java:37)
at
com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1848)
at com.cloud.agent.Agent.processRequest(Agent.java:662)
at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1082)
at com.cloud.utils.nio.Task.call(Task.java:83)
at com.cloud.utils.nio.Task.call(Task.java:29)
at
java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)

Regards,
Jon
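The out-of-bounds lookup in the trace above can be shown in miniature. Wei's reply in this thread identifies the cause: older libvirt-java bindings know only seven domain states, while newer libvirt reports "pmsuspended" as numeric state 7, so an unchecked array lookup throws. The class and method below are an illustrative sketch (not the actual libvirt-java code); the enum names follow libvirt's virDomainState constants.

```java
public class DomainStateDemo {
    // The seven domain states known to older libvirt-java bindings.
    // Newer libvirt added PMSUSPENDED as state 7, which an unchecked
    // values()[state] lookup indexes out of bounds, as in the trace above.
    enum DomainState { NOSTATE, RUNNING, BLOCKED, PAUSED, SHUTDOWN, SHUTOFF, CRASHED }

    // Defensive lookup: fall back to a sentinel instead of throwing
    // ArrayIndexOutOfBoundsException on unknown state codes.
    static String stateName(int code) {
        DomainState[] states = DomainState.values();
        return (code >= 0 && code < states.length)
                ? states[code].name()
                : "UNKNOWN(" + code + ")";
    }

    public static void main(String[] args) {
        System.out.println(stateName(1)); // RUNNING
        System.out.println(stateName(7)); // UNKNOWN(7) -- pmsuspended on newer libvirt
    }
}
```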


Re: VM/Disk Metrics Prometheus Export

2023-12-07 Thread Joan g
Not sure this config is actually valid. Has someone implemented this?

Jon
Jon

On Thu, 7 Dec, 2023, 16:28 Ruben Bosch,  wrote:

> I believe these are available to be sent to graphite/influxdb when
> configuring "stats.output.uri" and other "stats" config under global
> settings.
>
>
> https://github.com/apache/cloudstack/blob/main/server/src/main/java/com/cloud/server/StatsCollector.java
> <
> https://cwiki.apache.org/confluence/display/CLOUDSTACK/StatsCollector+output+to+Graphite
> >
>
> Met vriendelijke groet / Kind regards,
>
> Ruben Bosch
> CLDIN
>
> > On 7 Dec 2023, at 11:31, Joan g  wrote:
> >
> > Hi Team,
> >
> > I could see that we have prometheus exporter plugin available with
> > cloudstack.
> >
> > Do we have any plans to export VM,VR and disk metrices ?
> >
> > Regards,
> > Jon
>
>
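The pointer above about "stats" global settings can be sketched as a config fragment (setting names are from the linked StatsCollector Graphite page; the host and values are illustrative, and availability varies by CloudStack version, so verify against your install):

```properties
# Global settings sketch: push collected VM stats to a Graphite backend
stats.output.uri = graphite://graphite.example.com:2003
# Collection interval in milliseconds
vm.stats.interval = 60000
```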


VM/Disk Metrics Prometheus Export

2023-12-07 Thread Joan g
Hi Team,

I could see that we have prometheus exporter plugin available with
cloudstack.

Do we have any plans to export VM,VR and disk metrices ?

Regards,
Jon


[PATCH] lwip: Allocate the loopback netif by default

2023-12-02 Thread Joan Lledó
From: Joan Lledó 

The translator received a null `netif_list` during initialization, which
caused a few bugs.

When started without parameters, the translator didn't add any new
interface to `netif_list`, and that broke any subsequent fsysopts over
the translator, as the stack was being initialized again instead of
being reconfigured.

DHCP was broken because the translator is usually installed without
parameters, which are supposed to be added by the DHCP client through
fsysopts.

The absence of an allocated `netif_list` also prevented configuring a
loopback interface.

After these changes, starting the translator always allocates one
interface and configures it as loopback.
---
 lwip/lwip-util.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/lwip/lwip-util.c b/lwip/lwip-util.c
index fc4cb137..77d2c233 100644
--- a/lwip/lwip-util.c
+++ b/lwip/lwip-util.c
@@ -149,13 +149,13 @@ init_ifs (void *arg)
   ip6_addr_t *address6;
   int i;
 
-  if (netif_list != 0)
-{
-  if (netif_list->next == 0)
-   init_loopback ();
-  else
-   remove_ifs ();
-}
+  if (netif_list == 0)
+netif_list = calloc (1, sizeof (struct netif));
+
+  if (netif_list->next == 0)
+init_loopback ();
+  else
+remove_ifs ();
 
   /*
* Go through the list backwards. For LwIP
-- 
2.40.1




lwip: Allocate the loopback netif by default

2023-12-02 Thread Joan Lledó


Hi,

This patch fixes a few bugs. The translator was assuming one interface was 
already allocated in `netif_list` when calling `init_ifs` during startup, and 
used it to configure the loopback interface [1]. That was possibly true in the 
past but after upgrading liblwip I found `netif_list` is always null at the 
first call to `init_ifs`. That breaks fsysopts, the loopback interface and DHCP.

fsysopts is broken only when the translator is started without parameters. When 
that happens, `netif_list` remains null after `init_ifs` finished. Because of 
that, a call to fsysopts tries to initialize the stack again instead of 
reconfiguring it [2].

That's linked to the second problem, the lack of loopback interface. 
`parse_opt` at [2] assumes that a correctly initialized stack will at least 
have one configured interface, the loopback one. And `init_ifs` only configures 
it when there's a single netif allocated at `netif_list`; that condition is not
met if `netif_list` arrives null.

Finally, DHCP fails on lwip if the translator is installed without parameters, 
like pfinet is:

$ showtrans /servers/socket/2
/hurd/pfinet -6 /servers/socket/26

The DHCP client calls fsysopts on the translator [3]. So when installed without 
parameters, this call to fsysopts tries to initialize the stack again and 
crashes.

This simple patch fixes the problems.

---
[1] https://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/lwip/lwip-util.c#n155
[2] https://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/lwip/options.c#n266
[3] 
https://salsa.debian.org/debian/isc-dhcp/-/blob/master/debian/dhclient-script.hurd#L184




Bug#1057047: tomcat10-common: Tomcat 10 helper script doesn't look for temurin based jdk installs

2023-11-28 Thread Joan
Package: tomcat10-common
Version: 10.1.15-1
Severity: normal
X-Debbugs-Cc: aseq...@gmail.com

Dear Maintainer,

   * What led up to the situation?
I am trying to use Debian's Tomcat 10 with Java 21. Since Java 21 is not
present in Debian, I used the build from
https://adoptium.net/installation/linux/, which provides a repository.
When starting Tomcat I see this error message in the logs:
  " [crit] No JDK or JRE found - Please set the JAVA_HOME variable or install 
the default-jdk package"
I traced the message to the helper script
/usr/libexec/tomcat10/tomcat-locate-java.sh; adding the Temurin naming
scheme there made the helper find the proper Java.

From:
for jvmdir in /usr/lib/jvm/java-${java_version}-openjdk-* \
  /usr/lib/jvm/jdk-${java_version}-oracle-* \
  /usr/lib/jvm/jre-${java_version}-oracle-* \
  /usr/lib/jvm/java-${java_version}-oracle \
  /usr/lib/jvm/oracle-java${java_version}-jdk-* \
  /usr/lib/jvm/oracle-java${java_version}-jre-*
do

To:
for jvmdir in /usr/lib/jvm/java-${java_version}-openjdk-* \
  /usr/lib/jvm/jdk-${java_version}-oracle-* \
  /usr/lib/jvm/jre-${java_version}-oracle-* \
  /usr/lib/jvm/java-${java_version}-oracle \
  /usr/lib/jvm/oracle-java${java_version}-jdk-* \
  /usr/lib/jvm/oracle-java${java_version}-jre-* \
  /usr/lib/jvm/temurin-${java_version}-jre-* \
  /usr/lib/jvm/temurin-${java_version}-jdk-*
do


Since the Temurin path is currently quite a popular way to install recent
JDK/JRE versions without having to rely on Oracle, supporting it in Debian
would be nice and should have no collateral issues.


-- System Information:
Debian Release: bookworm/sid
  APT prefers jammy-updates
  APT policy: (500, 'jammy-updates'), (500, 'jammy-security'), (500, 'jammy')
Architecture: amd64 (x86_64)
Foreign Architectures: i386

Kernel: Linux 5.15.0-89-generic (SMP w/8 CPU threads)
Kernel taint flags: TAINT_OOT_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), 
LANGUAGE=ca:en_US
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled

__
This is the maintainer address of Debian's Java team. Please use
debian-j...@lists.debian.org for discussions and questions.



Re: [Heb-NACO] To see - לראות

2023-11-25 Thread Joan Biella via Heb-naco
You’re right you’re right!—Joan

On Fri, Nov 24, 2023 at 11:27 PM Galron, Joseph via Heb-naco <
heb-naco@lists.osu.edu> wrote:

> Colleagues,
>
> I believe (I hope I am not wrong) that לראות (to see)  needs to be
> romanized as Li-re’ot and not Lir’ot. Am I wrong?
>
>
>
> Yossi
>
>
> ___
> Heb-naco mailing list
> Heb-naco@lists.osu.edu
> https://lists.osu.edu/mailman/listinfo/heb-naco
>
___
Heb-naco mailing list
Heb-naco@lists.osu.edu
https://lists.osu.edu/mailman/listinfo/heb-naco


Re: dhcpcd: FTBFS on Hurd

2023-11-25 Thread Joan Lledó

Hi Martin-Éric,

You can write to me for help whenever you need it.

Regards

On 24/11/23 6:50, Martin-Éric Racine wrote:

Greetings,

As dhcpcd is slated to replace dhclient as the default DHCP client in
Debian, I've been trying to fix the build on Hurd, which is the only
architecture that has repeatedly FTBFS. Most of this has merely
required fixing missing includes so far.

Help is welcome, especially from people who have access to a live Hurd
host and who would be able to test the binaries, and also to help me
finalize the port.

Cheers!
Martin-Éric





Re: Retrieve results in PostgreSQL stored procedure allowing query parallelism

2023-11-21 Thread Joan Pujol
Thanks, David.

If I try to do something like:
EXECUTE SELECT INTO ARRAY_AGG(t.*) INTO result_records
Would internally use cursors too and have the same restrictions?

Cheers,

On Tue, 21 Nov 2023 at 19:22, David G. Johnston
 wrote:
>
> On Tue, Nov 21, 2023, 11:10 Joan Pujol  wrote:
>>
>> I want to execute an SQL query and process its results inside a stored
>> procedure without preventing query parallelism. Since I don't want to
>> prevent query parallelism, cursors can't be used, and I would like to
>> avoid creating a temporal table.
>>
>> Is this possible? If so, what is the best way to execute the query,
>> retrieve all results in memory, and process them inside the stored
>> procedure?
>
>
> You must use create table as if you want a result that is both accessible to 
> subsequent statements and uses parallelism to be produced.  There is no 
> saving results into memory - you either save them explicitly or iterate over 
> them and the later prevents parallelism as you've noted.
>
> David J.



-- 
Joan Jesús Pujol Espinar
http://www.joanpujol.cat




Retrieve results in PostgreSQL stored procedure allowing query parallelism

2023-11-21 Thread Joan Pujol
I want to execute an SQL query and process its results inside a stored
procedure without preventing query parallelism. Since I don't want to
prevent query parallelism, cursors can't be used, and I would like to
avoid creating a temporal table.

Is this possible? If so, what is the best way to execute the query,
retrieve all results in memory, and process them inside the stored
procedure?

-- 
Joan Pujol




Re: [LincolnTalk] Radical change in Lincoln

2023-11-20 Thread Joan Kimball
It has always been hard for people with young children. They used to have
babysitting. Do they still?

  It is a big chunk of time.  But it's not that often in  comparison to the
hours and hours that volunteer town officials put in.

Guess it can be seen as the price of democracy?

Joan (former LWV president)

On Mon, Nov 20, 2023, 6:28 PM Rob Haslinger  wrote:

> Being new to town I’m not terribly familiar with the town meeting process.
> I will note however that for those of us with time consuming
> responsibilities (in our case small children) it may be challenging to
> attend a lengthy meeting despite caring deeply about the issues and doing
> our best to stay informed.  Just a comment, I may be missing something
> about how this works.
>
> Best regards to all
> Rob Haslinger
> S Great Road
>
> On Mon, Nov 20, 2023 at 5:53 PM Margo Fisher-Martin <
> margo.fisher.mar...@gmail.com> wrote:
>
>> Hi All,
>>
>> I also agree that this should be a ballot item. If we can vote for a new
>> school by ballot, why can’t we use the same process to vote for other
>> extremely important town changes - ones that will impact our everyday
>> living here and our taxes?
>>
>> Respectfully,
>> Margo Fisher-Martin
>>
>>
>> On Mon, Nov 20, 2023 at 5:22 PM ٍSarah Postlethwait 
>> wrote:
>>
>>> A town wide secret ballot would be a great idea to make sure a wider
>>> audience gets their voice represented.
>>>
>>> I also encourage anyone not familiar with the proposed rezoning options
>>> to visit this website to gain a better understanding of the magnitude of
>>> the proposed changes and Lincoln’s uniquely impacted position.
>>>
>>> LincolnHCA.org
>>>
>>>
>>> Sarah Postlethwait
>>>
>>>
>>>
>>> On Mon, Nov 20, 2023 at 4:17 PM Tom Kennedy 
>>> wrote:
>>>
>>>> Part of the discussion at last weeks planning board meeting is
>>>> noteworthy, but has not been mentioned here.
>>>> There was substantial discussion, maybe even consensus, on a none of
>>>> the above option. Even a planning board member and some of the “working
>>>> group“ were in favor.
>>>> Yet when the vote came up for a warrant article, that was not mentioned.
>>>> Another part of the discussion was the fact that town meeting
>>>> determination is “skewed“ by attendance and timing. I have observed this to
>>>> be absolutely true.
>>>> It is my belief that something which will so radically transform our
>>>> community should be voted on in some form by secret ballot. It is also my
>>>> belief that that will never happen.
>>>> Tom Kennedy
>>>> --
>>>> The LincolnTalk mailing list.
>>>> To post, send mail to Lincoln@lincolntalk.org.
>>>> Browse the archives at
>>>> https://pairlist9.pair.net/mailman/private/lincoln/.
>>>> Change your subscription settings at
>>>> https://pairlist9.pair.net/mailman/listinfo/lincoln.
>>>>
>>>> --
>>> The LincolnTalk mailing list.
>>> To post, send mail to Lincoln@lincolntalk.org.
>>> Browse the archives at
>>> https://pairlist9.pair.net/mailman/private/lincoln/.
>>> Change your subscription settings at
>>> https://pairlist9.pair.net/mailman/listinfo/lincoln.
>>>
>>> --
>> The LincolnTalk mailing list.
>> To post, send mail to Lincoln@lincolntalk.org.
>> Browse the archives at
>> https://pairlist9.pair.net/mailman/private/lincoln/.
>> Change your subscription settings at
>> https://pairlist9.pair.net/mailman/listinfo/lincoln.
>>
>> --
> The LincolnTalk mailing list.
> To post, send mail to Lincoln@lincolntalk.org.
> Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/
> .
> Change your subscription settings at
> https://pairlist9.pair.net/mailman/listinfo/lincoln.
>
>
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



Re: 2 errors on PDF Splitting

2023-11-17 Thread Joan Fisbein
Hi Tilman,

I've tested both cases and everything works as expected.
Thank you very much.
Is there a plan to release a 3.0.1 version soon?

Thanks!

On Fri, 17 Nov 2023 at 05:07, Tilman Hausherr  wrote:

> Hi Joan,
>
> There's now a snapshot release with the recent change
>
> https://repository.apache.org/content/groups/snapshots/org/apache/pdfbox/pdfbox-app/3.0.1-SNAPSHOT/
> so please try if it works with that one. It worked for me.
>
> Tilman
>
> On 15.11.2023 10:29, Joan Fisbein wrote:
> > I have 2 errors when trying to split some PDFs.
> > Tested against PDFBox 3.0.1-SNAPSHOT from Github.
> >
> > *Error 1:*
> > Exception in thread "main" java.lang.StackOverflowError
> > at java.base/java.util.HashMap.tableSizeFor(HashMap.java:378)
> > at java.base/java.util.HashMap.(HashMap.java:455)
> > at java.base/java.util.LinkedHashMap.(LinkedHashMap.java:439)
> > at java.base/java.util.HashSet.(HashSet.java:171)
> > at java.base/java.util.LinkedHashSet.(LinkedHashSet.java:167)
> > at org.apache.pdfbox.util.SmallMap.entrySet(SmallMap.java:384)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1453)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1486)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1486)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> >
> > You can get the PDF that generates this error here:
> >
> https://independence.mgmt.clarity.ai/report/public/file/325ddbe0-9a05-4ec2-af93-d5be4dda8625?alternate=ORIGINAL=attachment
> >
> > ---
> >
> > *Error 2*:
> > (looks similar to Error 1, but line numbers are slightly different, so I
> > assumed it's a different error)
> >
> > Exception in thread "main" java.lang.StackOverflowError
> > at java.base/java.util.ArrayList.indexOf(ArrayList.java:286)
> > at java.base/java.util.ArrayList.contains(ArrayList.java:275)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:777)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
> > at
> >
> org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
> > at
> org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
> >
> > You can get the PDF that generates this error here:
> >
> https://independence.mgmt.clarity.ai/report/public/file/d7ba9150-17d9-40e7-a737-b888a834d7d8?alternate=ORIGINAL=attachment
> >
> > If you prefer, I can open the Jira tickets myself, but I don't know what
> to
> > put in every issue field. (components, labels, etc)
> >
> > Thanks,
> > Joan
> >
>
>
> -
> To unsubscribe, e-mail: users-unsubscr...@pdfbox.apache.org
> For additional commands, e-mail: users-h...@pdfbox.apache.org
>
>

-- 

Joan Fisbein | Engineering Manager
joan.fisb...@clarity.ai
www.clarity.ai <https://clarity.ai/>
<https://clarity.ai/in-the-news/>


Re: "Crashed" vm hypervisor

2023-11-16 Thread Joan g
Similar issues are observed in our CloudStack v4.17.2 KVM setup with NFS. If
we power off the hypervisor, the host status shows disconnected, but all the
guest VMs' statuses still show as running; in reality, the VMs are not
available anywhere.

-jon


On Thu, Nov 16, 2023 at 6:47 PM Wei ZHOU  wrote:

> Do you use NFS ?
>
> -Wei
>
> On Thu, 16 Nov 2023 at 14:03, Jimmy Huybrechts 
> wrote:
>
> > The reason I put it between “” was that I turned off one of the
> > hypervisors to test what it would do with my vm’s and in this case system
> > vm’s, what I thought would happen was it would say that my vm’s are in
> a
> > state of not running / powered off / error and that it would mention the
> > host being offline.
> >
> > What actually happened is that it seems to be completely oblivious of the
> > vm’s it had in the dashboard, it still mentions them as state running (I
> > know they are dead as a I can’t ping them anymore).
> >
> > The only mention I can see that it has an Alert on the host and that it
> > says agent disconnected with the system vm’s.
> >
> > The big problem here in my mind is that I can’t do anything with the
> > system vm’s now, I tried destroying one but since it tries to shut it
> down
> > first as it still thinks it’s running, it runs into an error because of
> > course then it figures out the hypervisor is broken and it can’t. And we
> > run into a circle.
> >
> > What should a person do in such a situation?
> > --
> > Jimmy
> >
>


Re: [LincolnTalk] Fwd: HCA info session for Mothers Out Front Lincoln

2023-11-15 Thread Joan Kimball
Wonderful idea to host a zoom infornation meeting on HCA options. Staci and
Mothers Out Front. Thank you so much.

Joan

On Wed, Nov 15, 2023, 2:12 PM Staci Montori  wrote:

> Hi Friends,
> Jennifer is a yes. She prefers Monday or Tuesday 11/27 or 11/28 at 12:00.
> What are your thoughts on these days? Is one better than another?
> Thanks,
> Staci
>
>
> -- Forwarded message -
> From: Glass, Jennifer 
> Date: Wed, Nov 15, 2023 at 12:21 PM
> Subject: Re: HCA info session for Mothers Out Front Lincoln
> To: Staci Montori , Vaughn, Paula <
> vaug...@lincolntown.org>
>
>
> Hi Staci,
> Thank you for the opportunity! I could make any of those days work, with a
> preference for Monday or Tuesday.
> Jennifer
>
> Get Outlook for iOS <https://aka.ms/o0ukef>
> --
> *From:* Staci Montori 
> *Sent:* Wednesday, November 15, 2023 9:56:13 AM
> *To:* Vaughn, Paula ; Glass, Jennifer <
> jglasssel...@lincolntown.org>
> *Subject:* HCA info session for Mothers Out Front Lincoln
>
> Hi Paula and Jennifer,
>
> Mothers Out Front Lincoln is hoping to host an information session on the
> HCA. We would like it if either of you (or someone from the committee or
> Uitle) could help our members (and others interested) understand each of
> the HCA choices being brought to SOTT and their possible climate and
> environmental impacts.
>
>
> Mothers Out Front Brookline did something similar and it helped their
> members understand the choices and process.
>
> We were thinking about a short info session over zoom at lunchtime the
> week of 11/27. Preferably Monday, Tuesday, Wednesday or Thursday.
>
> Please let me know your thoughts and if yes, what time could work.
>
> Warm regards,
> Staci Montori
> Mothers Out Front Community Organizer
>
>
> --
> The LincolnTalk mailing list.
> To post, send mail to Lincoln@lincolntalk.org.
> Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/
> .
> Change your subscription settings at
> https://pairlist9.pair.net/mailman/listinfo/lincoln.
>
>
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



2 errors on PDF Splitting

2023-11-15 Thread Joan Fisbein
I have 2 errors when trying to split some PDFs.
Tested against PDFBox 3.0.1-SNAPSHOT from Github.

*Error 1:*
Exception in thread "main" java.lang.StackOverflowError
at java.base/java.util.HashMap.tableSizeFor(HashMap.java:378)
at java.base/java.util.HashMap.(HashMap.java:455)
at java.base/java.util.LinkedHashMap.(LinkedHashMap.java:439)
at java.base/java.util.HashSet.(HashSet.java:171)
at java.base/java.util.LinkedHashSet.(LinkedHashSet.java:167)
at org.apache.pdfbox.util.SmallMap.entrySet(SmallMap.java:384)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1453)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1486)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1486)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)

You can get the PDF that generates this error here:
https://independence.mgmt.clarity.ai/report/public/file/325ddbe0-9a05-4ec2-af93-d5be4dda8625?alternate=ORIGINAL=attachment

---

*Error 2*:
(looks similar to Error 1, but line numbers are slightly different, so I
assumed it's a different error)

Exception in thread "main" java.lang.StackOverflowError
at java.base/java.util.ArrayList.indexOf(ArrayList.java:286)
at java.base/java.util.ArrayList.contains(ArrayList.java:275)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:777)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:788)

You can get the PDF that generates this error here:
https://independence.mgmt.clarity.ai/report/public/file/d7ba9150-17d9-40e7-a737-b888a834d7d8?alternate=ORIGINAL=attachment

If you prefer, I can open the Jira tickets myself, but I don't know what to
put in every issue field. (components, labels, etc)

Thanks,
   Joan


[LincolnTalk] HVA

2023-11-13 Thread Joan Kimball
I feel fortunate to live here and one of  the reasons is Lincoln's ability
to rise  in positive ways to meet challenges.

Moving to Lincoln in the early 70s, we were glad that Lincoln's response to
housing needs and 40B was to build Lincoln Woods. (We recently went on a
tour of Lincoln Woods, and the sense of community is impressive.) And the
Town also had a thoughtful plan to protect open space that would result in
protecting wildlife corridors. Two important factors for me.

Today Massachusetts once again  has a serious housing crisis, and we are
once again required to  meet the challenge.

I understand people are concerned. And that there is concern by both
individuals and by individuals working together in a group.

 Reading Lincoln Talk, I first became aware that a group of five was
investigating the HCA and the actions of Lincoln's working group. I read in
Lincoln Talk that the group now has 55 members.  And yesterday I read there
is a similar group in Newton and probably in other towns.

We need to remember that there are other views, not coordinated, and not
necessarily expressed on Lincoln Talk or at the Forum.

While zoning does give those of us in "the lifeboat" control, it also has
resulted in exclusion.  Many studies show this.  I recommend that people
watch 2 webinars presented by the Boston Foundation.(See below for links.)

I applaud the work of the Town's working group and their expressed goals
including Lincoln's character, awareness of  climate change, and providing
housing near our transportation area.  (I know that Lincoln will do what it
can to raise the limit of 10% affordable that HCA has imposed.)

May we work together and rise up once again to do our part in a creative
way to help solve the serious housing crisis that affects individuals and
families as well as the Massachusetts economy.

Respectfully,   Joan Kimball

*




Greater Boston Housing Report Card 2023

Greater Boston faces a crippling housing shortage, the effects of which are
most destabilizing for moderate- and low-income families and families of
color in the region. The Greater Boston Housing Report Card highlights how
the region has been falling behind on housing production, and how housing
cost burdens have been increasing, making rental housing and homeownership
increasingly unaffordable.
Tuesday, November 14th, 2023, 9:00 a.m. - 10:30 a.m.
Register for this in-person event
<https://www.tbf.org/events/2023/november/greater-boston-housing-report-card-2023>

*
Exclusionary by Design:  Zoning as a Tool of Race, Class and Family
Exclusion

As part of a new Racial Wealth Equity Research Center initiative, Boston
Indicators commissioned Amy Dain, among the region’s leading zoning
experts, to research and write Exclusionary by Design: An Investigation of
Zoning’s Use as a Tool of Race, Class, and Family Exclusion in Boston’s
Suburbs, 1920 to Today. Based on extensive review of local planning
documents, state reports, and press coverage over the past 100 years, the
report finds widespread use of zoning as a tool of social exclusion against
residents of color, especially Black residents; lower-income and
working-class residents; families with school-aged children; religious
minorities; immigrants; and, in some cases, any newcomers/outsiders at all.
Wednesday, November 8, 2023, 10:00 a.m. - 11:00 a.m.
Click here for video and recap
<https://www.tbf.org/news-and-insights/videos/2023/november/exclusionary-by-design-20231108>
-- 
The LincolnTalk mailing list.
To post, send mail to Lincoln@lincolntalk.org.
Browse the archives at https://pairlist9.pair.net/mailman/private/lincoln/.
Change your subscription settings at 
https://pairlist9.pair.net/mailman/listinfo/lincoln.



Re: [LincolnTalk] Plumber recommendation for water heater

2023-11-10 Thread Joan Kimball
Agree. John Silva is terrific!!!

Joan

On Fri, Nov 10, 2023, 5:45 PM Scott Clary  wrote:

> John Silva. Perhaps the most intelligent plumber you'll meet, very nice,
> reasonably priced and great work and a Lincoln resident.
>
> 781-647-3236
>
> Kind Regards,
>
> Scott Clary
> 617-968-5769
>
> Sent from a mobile device - please excuse typos and errors
>
> On Fri, Nov 10, 2023, 5:08 PM Ellen Waldron 
> wrote:
>
>> Kirkland and Shaw did my work too. Recommend!
>>
>> ellen
>>
>> Sent from my iPhone
>>
>>
>> On Nov 10, 2023, at 11:29 AM, Sara Mattes  wrote:
>>
>> I second that recommendation
>>
>> --
>> Sara Mattes
>>
>>
>>
>>
>> On Nov 10, 2023, at 8:35 AM, June L Matthews  wrote:
>>
>> I use Kirkland and Shaw, 781-272-2670, and have been very pleased with
>> their service.
>>
>> June Matthews
>>
>> *From:* Lincoln  *On Behalf Of *Sam
>> inakbari
>> *Sent:* Thursday, November 9, 2023 9:53 PM
>> *To:* lincoln@lincolntalk.org
>> *Subject:* [LincolnTalk] Plumber recommendation for water heater
>>
>> Hello
>>
>> I am looking for a plumber that can change the water heater tank. We will
>> purchase the new tank and need someone to remove the old one and install
>> the new one. Do you have any plumber recommendations?
>>
>> Thank you
>> Sam



Re: [LincolnTalk] Lincoln Town Dry cleaner

2023-11-10 Thread Joan Kimball
We also strongly recommend Lincoln Town Cleaner and Mr. Chang.  He does our
dry cleaning, zipper repair and hemming.  We feel fortunate he is in
Lincoln.

Joan and John Kimball

On Fri, Nov 10, 2023, 5:29 PM Bernadette and Mark 
wrote:

> Mr Chang is a wonderful person and honest as the day is long!  I recently
> had him replace a zipper in a pair of jeans and he was apologizing for the
> cost.  It is what it is.  Since his move down the alley his business is
> NOT as robust as it was when he was on the corner!  HIGHLY RECOMMEND!
> Best,
> Bernadette
>
> On Thu, Nov 9, 2023 at 10:15 AM June L Matthews  wrote:
>
>> I second the recommendations.  Mr. Chang – I think that is his name but I
>> may be wrong – has cleaned or laundered many “specialty” items for me, from
>> down comforters to Navajo rugs.  And I’ve also used him for zipper repair.
>> And as you say, if he doesn’t like the result of a cleaning, he’ll send the
>> item back for another try.  Garments that I thought were hopelessly stained
>> have come back spotless.  When he moved from the corner (where Clark
>> Gallery is now) down the alley to his present location, I was worried that
>> he might lose business.  I don’t know how he is doing, but I certainly take
>> all of my cleaning there.
>>
>>
>>
>> June
>>
>>
>>
>> *From:* Lincoln  *On Behalf Of *Deb
>> Wallace
>> *Sent:* Thursday, November 9, 2023 8:25 AM
>> *To:* LincolnTalk ; delis...@aol.com
>> *Subject:* [LincolnTalk] Dry cleaner
>>
>>
>>
>> Lynn, I also use Lincoln Town Cleaner tucked away behind the Clark
>> gallery. The owner is very knowledgeable about fabric. He will tell you if
>> an item can be machine washed instead of dry cleaned and provide specific
>>  instructions. He also does repairs including zippers. And he’s honest.  He
>> once cleaned something for me and didn’t like the result and didn’t charge
>> me.
>>
>>
>>
>> Deb
>>
>>
>>
>>
>
> --
> Bernadette and Mark
>



Re: Error splitting PDF

2023-11-10 Thread Joan Fisbein
Just submitted my request for a Jira account :-)

On Fri, 10 Nov 2023 at 05:06, Tilman Hausherr  wrote:

> On 09.11.2023 10:16, Joan Fisbein wrote:
> > Hi Tilman,
> >  I tried to register again at
> > https://selfserve.apache.org/jira-account.html and now I get an error
> > message about "The username you selected is already in use".
> >  Can you, maybe, accept my previous request?
>
> Hello Joan,
>
> Yes but this can be done only within 24 hours and I missed that :-( Can
> you try to register again now? I assume that the username should now be
> available.
>
> Tilman
>
>
> >
> > Thanks!
> > Joan
> >
> > On Thu, 9 Nov 2023 at 05:44, Tilman Hausherr 
> wrote:
> >
> >> Hello Joan,
> >>
> >> Sorry for the rejection, this was a close call, the description didn't
> >> mention what happened (a stack overflow). Feel free to register again so
> >> you can follow the issue I created
> >> https://issues.apache.org/jira/browse/PDFBOX-5712
> >>
> >> Tilman
> >>
> >> On 08.11.2023 21:36, Joan Fisbein wrote:
> >>> Hi all,
> >>>   I'm using PDFBox 3.0.0 and I'm getting this error when trying to
> >> split
> >>> this PDF.
> >>>
> >>>   I tested with code from trunk and the error is still there.
> >>>   Tried to get a Jira account to open an issue but was denied.
> >>>
> >>>   Here you can download the problematic PDF:
> >>>
> >>
> https://independence.mgmt.clarity.ai/report/public/file/2d1b4dd4-1ab3-4aaf-bb71-7f1759822f3a?disposition=inline=ORIGINAL
> >>>
> >>> Thanks!!

Re: Error splitting PDF

2023-11-09 Thread Joan Fisbein
Hi Tilman,
I tried to register again at
https://selfserve.apache.org/jira-account.html and now I get an error
message about "The username you selected is already in use".
Can you, maybe, accept my previous request?

Thanks!
   Joan

On Thu, 9 Nov 2023 at 05:44, Tilman Hausherr  wrote:

> Hello Joan,
>
> Sorry for the rejection, this was a close call, the description didn't
> mention what happened (a stack overflow). Feel free to register again so
> you can follow the issue I created
> https://issues.apache.org/jira/browse/PDFBOX-5712
>
> Tilman
>
> On 08.11.2023 21:36, Joan Fisbein wrote:
> > Hi all,
> >  I'm using PDFBox 3.0.0 and I'm getting this error when trying to
> split
> > this PDF.
> >
> >  I tested with code from trunk and the error is still there.
> >  Tried to get a Jira account to open an issue but was denied.
> >
> >  Here you can download the problematic PDF:
> >
> https://independence.mgmt.clarity.ai/report/public/file/2d1b4dd4-1ab3-4aaf-bb71-7f1759822f3a?disposition=inline=ORIGINAL
> >
> >
> > Thanks!!

Error splitting PDF

2023-11-08 Thread Joan Fisbein
Hi all,
I'm using PDFBox 3.0.0 and I'm getting this error when trying to split
this PDF.

I tested with code from trunk and the error is still there.
Tried to get a Jira account to open an issue but was denied.

Here you can download the problematic PDF:
https://independence.mgmt.clarity.ai/report/public/file/2d1b4dd4-1ab3-4aaf-bb71-7f1759822f3a?disposition=inline=ORIGINAL


Thanks!!

The stacktrace exception:
ai.clarity.bus.subscribe.exception.ConsumerException:
java.lang.reflect.InvocationTargetException
at
ai.clarity.messagebus.spring.config.SubscriberConfiguration$MethodEventHandler.accept(SubscriberConfiguration.java:238)
at
ai.clarity.messagebus.spring.config.SubscriberConfiguration$MethodEventHandler.accept(SubscriberConfiguration.java:206)
at
ai.clarity.bus.subscribe.eventbridge.SQSMessageHandler.lambda$run$0(SQSMessageHandler.java:98)
at java.base/java.util.ArrayList.forEach(ArrayList.java:1511)
at
ai.clarity.bus.subscribe.eventbridge.SQSMessageHandler.run(SQSMessageHandler.java:98)
at
java.base/java.util.concurrent.CompletableFuture$AsyncRun.run(CompletableFuture.java:1804)
at
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.lang.reflect.InvocationTargetException: null
at jdk.internal.reflect.GeneratedMethodAccessor451.invoke(Unknown Source)
at
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:568)
at
ai.clarity.messagebus.spring.config.SubscriberConfiguration$MethodEventHandler.accept(SubscriberConfiguration.java:236)
... 8 common frames omitted
Caused by: java.lang.StackOverflowError: null
at
java.base/java.util.LinkedHashMap.afterNodeInsertion(LinkedHashMap.java:300)
at java.base/java.util.HashMap.putVal(HashMap.java:662)
at java.base/java.util.HashMap.put(HashMap.java:610)
at java.base/java.util.HashSet.add(HashSet.java:221)
at org.apache.pdfbox.util.SmallMap.entrySet(SmallMap.java:387)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1453)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1481)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1481)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1481)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1470)
at org.apache.pdfbox.cos.COSArray.getIndirectObjectKeys(COSArray.java:780)
at
org.apache.pdfbox.cos.COSDictionary.getIndirectObjectKeys(COSDictionary.java:1475)
[removed more lines like above]

   PS: Can look similar to
https://issues.apache.org/jira/browse/PDFBOX-5707 but
it's not the same.
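
[Editor's note: for readers hitting the same symptom, the StackOverflowError
above comes from unguarded recursion over a self-referencing object graph (a
PDF's COS objects may legally form cycles). The sketch below is PDFBox-free
and illustrative only — the Node class and method names are hypothetical, not
PDFBox API — showing the failure mode and the standard fix, a visited set that
lets each object be traversed at most once.]

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class CycleSafeTraversal {

    static class Node {
        final List<Node> children = new ArrayList<>();
    }

    /** Naive recursion: never terminates on a cyclic graph, so the stack overflows. */
    static int countNaive(Node n) {
        int total = 1;
        for (Node c : n.children) total += countNaive(c); // no cycle guard
        return total;
    }

    /** Guarded recursion: a visited set breaks cycles, bounding the depth. */
    static int countReachable(Node n) {
        return countReachable(n, new HashSet<>());
    }

    private static int countReachable(Node n, Set<Node> visited) {
        if (!visited.add(n)) return 0; // already seen: stop here
        int total = 1;
        for (Node c : n.children) total += countReachable(c, visited);
        return total;
    }

    /** Builds a -> b -> c -> a, the kind of self-reference a PDF can contain. */
    static Node buildCyclicGraph() {
        Node a = new Node(), b = new Node(), c = new Node();
        a.children.add(b);
        b.children.add(c);
        c.children.add(a);
        return a;
    }

    public static void main(String[] args) {
        Node root = buildCyclicGraph();
        System.out.println("reachable nodes: " + countReachable(root));
        try {
            countNaive(root);
        } catch (StackOverflowError e) {
            System.out.println("naive traversal overflowed, as in the report above");
        }
    }
}
```

The guarded version relies on default (identity) equals/hashCode for Node;
traversal code over real COS objects would want the same reference-identity
semantics when recording visited objects.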


Re: [LincolnTalk] Why Lincoln should overlay HCA zoning over existing multi-family districts

2023-11-07 Thread Joan Kimball
Tricia, thank you so much.  You have captured how I have been feeling and
thinking.

Lincoln Talk gives us an opportunity to share opinions and differences. And
that's great.

But there are a few who pounce as if they are prosecuting attorneys and
judges all in one, sometimes personally, sometimes nastily.

Town officials and volunteers like you and me have, through time, worked hard
and created the Lincoln we have today.

Our current officials are doing the same. It takes vision, hard work,
meetings till midnight, and constancy, issue by issue. They don't have the
luxury of pouncing.

The RLF has brought so much to the town in helping to shape it with
creativity and brilliance.  Preserving land, creating a shopping center,
helping with housing, creating the Lincoln we love.

We moved here in 1974, from Weston, valuing Lincoln, the land, the ethos,
the creativity that established open space and Lincoln Woods, the
preservation of farm lands and the community, working together.  There have
always been differences of opinion, and we have managed to find solutions.

So can we, please, move forward--all of us who share deep feelings about
Lincoln--showing respect for each other.



Joan


On Mon, Nov 6, 2023, 8:49 PM Tricia Thornton-Wells <
triciathorntonwe...@gmail.com> wrote:

> It’s been very hard to read message after message of people accusing
> others of trying to ruin this town.  I’m really very sad and tired of
> hearing it all.
>
> 1. At the last town meeting, people told the RLF their renderings were too
> generic and asked them to put together renderings that were more specific
> to what they envisioned being developed.  So, trying to be responsive to
> these requests, they are putting together renderings of what is currently
> conceived. Of course these are going to be hypothetical!  That doesn’t mean
> they are acting in bad faith, putting forth a lie. I personally do not
> believe they are operating in bad faith. Do they have a vested interest in
> making sure *something* moves forward?  Yes, of course. Does that mean
> they are willing to go along with something they think will be bad for the
> town? No. Fundamentally, I believe people who volunteer their time and
> money to an organization like RLF love this town and want good things for
> it.
>
> 2-3. The upcoming vote will determine whether a majority of the town (most
> of whom are not on this increasingly toxic echo chamber) believes the
> Lincoln Station area, owned by RLF, should be allowed to pursue development
> with more freedom to negotiate than they have now—something that would be
> profitable, because of increased density, and that could also be a great
> benefit to the town. I personally believe the RLF has good intent and will
> work with a developer to build such a space. I also believe that keeping
> the Mall space as-is is a recipe for commercial/economic failure. I also
> believe we can be a (small) part of the solution to increasing housing
> stock in the Boston metro area.
>
> Why is everyone so afraid and so convinced that *any* and all change will
> be bad? We all do the best we can under the circumstances, which are *not*
> the same circumstances as 20 or 40 years ago.  Please, let’s stop resisting
> every single thing and have a reasonable conversation about how to move
> forward based on today, here, in Lincoln in 2023. We all love Lincoln, or
> we wouldn’t be here.
>
> Sincerely,
> Tricia Thornton-Wells
> 112 Trapelo Rd
>
>
>
> On Nov 6, 2023, at 6:29 PM, Peter Buchthal  wrote:
>
> 
> At the previous planning/hcawg meeting of October 24th, we learned that
> RLF is planning to hire a consultant to draw potential renderings of a new
> Lincoln Mall and present them to the town at some point.   It is really
> hard for town residents to evaluate the potential impact of the proposed
> changes to  the Lincoln Mall zoning without a detailed discussion.  I
> understand the planning board is considering further zoning amendments for
> the new overlay districts to better protect the town's interests.  I
> foresee several problems with this strategy.
>
> 1)  Any drawings or renderings will be purely hypothetical and not binding
> on the RLF or any future owner of the Mall as there is no requirement that
> they be actually submitted to the town for a building permit.
> 2)  Assuming a HCA Overlay district is passed at the March Town meeting, I
> see very few obstacles to building whatever the developer chooses to build
> as the town will have NO ability to influence a future developer to do
> anything unless they need a variance for something.
> 3)  Hypothetically, one day after the HCA Overlay district passes the
> March Town Meeting, CIVICO could submit drawings to the Town of Lincoln
> Building Depart

[LincolnTalk] Last call for Hanscom Jet Expansion meeting November 4 (tomorrow!) at 10:00 at Bemis Hall

2023-11-02 Thread Joan Kimball
Dear Lincoln Talk,
We hope you can join the Lincoln Democratic Town Committee at this open
meeting to hear Alex Chatfield from the Coalition to Stop Private Jet
Expansion at Hanscom (and Everywhere) discuss the proposal and efforts to
prevent this expansion that has serious implications for Climate Change!

*When:*  Saturday, November 4 at 10:00 at Bemis Hall

*What: * Alex Chatfield discusses the proposal and work to stop it because
of climate change implications

*Everyone invited!*

*Information below!*

Joan Kimball and Travis Roland, co-chairs
Lincoln Democratic Town Committee

Lincoln Dems




*EVENTS*



--




Presentation
*Open Meeting on Expansion of Private Jets at Hanscom Field and
Everywhere—A Climate Change Challenge
<https://lincolnmadems.us3.list-manage.com/track/click?u=4403464b595e5ff296ff1ab0f=141e513259=50034429ec>*




What:

Alex Chatfield and the Coalition to Stop Private Jet Expansion at Hanscom
and Everywhere will share the latest information:

*·* *Facts behind the proposal to expand private jets*

*· Implications for Climate Change & why we should care*

*· Progress to date*

*· Next steps and what we can do*

There will be ample time for questions and answers. (And a short video of
the rally.)
When:

*Saturday, November 4 at 10:00 AM* (coffee and sign in at 9:45)
Where:

*Bemis Hall*
What is the Coalition?

a group of state and local organizations that have joined together to make
a stand against private jet expansion that will significantly increase the
carbon footprint (see below)* as municipalities, the state and others are
working hard to drastically reduce their carbon footprint (a Climate
imperative).
Who are the coalition members?

Over 50 state and local groups including 350 MA; Mothers Out Front, Save
Our Heritage, Sierra Club MA, 3rd Act MA; UU Mass MetroWest;  Concord
Indivisible, Save our Heritage, Walden Woods, Thoreau Society, Lincoln
Democratic Town Committee, Waltham Democratic City Committee, League of
Women Voters (Bedford, Concord Carlisle, Lexington) Greater Boston
Physicians for Social Responsibility and local church groups (including the
First Parish  Bedford, Bedford Green Team at St. Paul’s Episcopal Church;
First Parish   Concord, and St Anne’s Lincoln Climate Justice Ministry)

* If private jet travel continues to grow at the same rate as in recent
years, those jets would produce 940 megatons of greenhouse gasses over the
next three years in the United States — equivalent to the emissions
produced by 65 million cars during the same period, according to a study in
the Journal of Transportation Research Interdisciplinary Perspectives. (As
quoted in the Boston Globe)



[Libreoffice-commits] core.git: i18npool/source

2023-11-02 Thread Joan Montané (via logerrit)
 i18npool/source/localedata/data/ca_ES.xml |8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

New commits:
commit 44ca7832dea2c4e6243ed9fdbc828c25c2466bbd
Author: Joan Montané 
AuthorDate: Fri Aug 18 12:25:30 2023 +0200
Commit: Eike Rathke 
CommitDate: Thu Nov 2 11:43:36 2023 +0100

Localize  to Catalan

Change-Id: I088047a94f2bd7a7405cf42e9c0eca73cdb11c6b
Reviewed-on: https://gerrit.libreoffice.org/c/core/+/155776
Tested-by: Jenkins
Reviewed-by: Eike Rathke 

diff --git a/i18npool/source/localedata/data/ca_ES.xml 
b/i18npool/source/localedata/data/ca_ES.xml
index 881d38b3ca5e..ea817eb4c056 100644
--- a/i18npool/source/localedata/data/ca_ES.xml
+++ b/i18npool/source/localedata/data/ca_ES.xml
@@ -227,7 +227,13 @@
 
   
   
-  
+  
+A-Z
+0
+1
+ i seg.
+ i seg.
+  
   
 
   


[LincolnTalk] Concerned about Hanscom expansion? Come to this forum Saturdayv11-4 at 10:00 to learn more!

2023-10-29 Thread Joan Kimball
To Lincoln Talk,
The Lincoln Democratic Town Committee is sponsoring a meeting on the Hanscom
proposal for expansion of private jets. Please join us on Saturday,
November 4 at 10 at Bemis to hear Alex Chatfield from the Coalition
opposing expansion at Hanscom and everywhere--because of climate change.

Please join us!

  Information below
Joan Kimball and Travis Roland, Co-Chairs
Lincoln Dems

*EVENTS*


--

Presentation
*Open Meeting on Expansion of Private Jets at Hanscom Field and
Everywhere—A Climate Change Challenge
<https://lincolnmadems.us3.list-manage.com/track/click?u=4403464b595e5ff296ff1ab0f=141e513259=31a8fc7a62>*

  What:

Alex Chatfield and the Coalition to Stop Private Jet Expansion at Hanscom
and Everywhere will share the latest information:

*·* *Facts behind the proposal to expand private jets*

*· Implications for Climate Change & why we should care*

*· Progress to date*

*· Next steps and what we can do*

There will be ample time for questions and answers. (And a short video of
the rally.)
When:

*Saturday, November 4 at 10:00 AM* (coffee and sign in at 9:45)
Where:

*Bemis Hall*
What is the Coalition?

a group of state and local organizations that have joined together to make
a stand against private jet expansion that will significantly increase the
carbon footprint (see below)* as municipalities, the state and others are
working hard to drastically reduce their carbon footprint (a Climate
imperative).
Who are the coalition members?

Over 50 state and local groups including 350 MA; Mothers Out Front, Save
Our Heritage, Sierra Club MA, 3rd Act MA; UU Mass MetroWest;  Concord
Indivisible, Save our Heritage, Walden Woods, Thoreau Society, Lincoln
Democratic Town Committee, Waltham Democratic City Committee, League of
Women Voters (Bedford, Concord Carlisle, Lexington) Greater Boston
Physicians for Social Responsibility and local church groups (including the
First Parish  Bedford, Bedford Green Team at St. Paul’s Episcopal Church;
First Parish   Concord, and St Anne’s Lincoln Climate Justice Ministry)



* If private jet travel continues to grow at the same rate as in recent
years, those jets would produce 940 megatons of greenhouse gasses over the
next three years in the United States — equivalent to the emissions
produced by 65 million cars during the same period, according to a study in
the Journal of Transportation Research Interdisciplinary Perspectives. (As
quoted in the Boston Globe)



Re: [Evergreen-general] Patron Registration screen

2023-10-23 Thread Joan Kranich via Evergreen-general
Hi Milissa,

C/W MARS uses Quipu and we are happy with the performance and support.  You
can see our registration form at
https://catalog.cwmars.org/eg/opac/ecard/form.

Joan

On Mon, Oct 16, 2023 at 2:24 PM Millissa Macomber via Evergreen-general <
evergreen-general@list.evergreen-ils.org> wrote:

> Looking for examples of ways to integrate the self-registration screen on
> our website. Please send me any of your examples. (Integrations with Quipu,
> Smarty or any other vendors is also appreciated.)
>
>
> *Millissa Macomber, ILS Manager*
> Central Skagit Library District
> 110 W. State St.
> Sedro-Woolley, WA 98284
> 360-755-3985
> www.centralskagitlibrary.org
> milli...@centralskagitlibrary.org
> ___
> Evergreen-general mailing list
> Evergreen-general@list.evergreen-ils.org
> http://list.evergreen-ils.org/cgi-bin/mailman/listinfo/evergreen-general
>


-- 

Joan Kranich | Library Applications Manager

CW MARS

jkran...@cwmars.org

508-755-3323 x321 or x1

http://www.cwmars.org

Pronouns <https://www.mypronouns.org/what-and-why>: she, her, hers


Re: [LincolnTalk] HCA & Codman Road

2023-10-16 Thread Joan Kimball
Thank you, David, for your clear message.  I agree with each town doing its
part to help address the housing crisis we are facing. And besides, it's the
law.

  Senator Barrett at one point said, we have the jobs and we have workers,
but not adequate housing for them.

 I certainly benefit from other towns' services and businesses.   We are
not living in a vacuum.

  I treasure our town's open space, and as a former Conservation Commission
member when we wrote the Wetland Bylaw, I treasure the services for nature
and for us that our protected wetlands provide. These lands will not be
built upon.  And there will be site reviews.

Joan




On Mon, Oct 16, 2023 at 10:32 AM David Onigman 
wrote:

> I have been hesitant to engage in the housing discussion on LincolnTalk,
> but after reading a few recent comments about the motives for some of the
> Codman Road residents and their advocacy in favor of the Housing Choice Act
> and our road being included in it, I am inspired to weigh in.
>
> I live on Codman Road and was one of the residents that advocated in favor
> of my area of South Lincoln to be included in the proposals submitted to
> the Commonwealth to be in compliance with the Housing Choice Act.
>
> I consider myself a housing advocate and generally speaking am in favor of
> the legislation. There is a housing crisis in this country, and in
> Massachusetts, and every town can do their part to contribute a small bit
> to increased inventory to support this issue.
>
> I also consider myself an advocate of public transportation and am a
> frequent user of the commuter rail. My family is able to currently be a one
> car family largely in part to my proximity to the train into Boston.
>
> I am in support of all plans that include these subdistricts to be as
> close to the Commuter Rail as possible, as I believe that to be in the
> spirit of this legislation, and also what is best for our town planning.
>
> I love Lincoln, I think Lincoln is an amazing place to live and raise
> children.
>
> Lincoln is over 40% conservation land and nothing is ever going to change
> that.
>
> I believe that the effects of the HCA to loosen a bit of the zoning laws
> in certain subdistricts to not be by-right single-family housing is a good
> thing.
>
> I believe towns like Lincoln that are looking to support a small
> commercial center and maintain services like a grocery store need to modify
> a bit of the by-right zoning to ensure that things like having a grocery
> store are sustainable.
>
> Let me clarify that my beliefs are not driven by any personal financial
> aspirations linked to my property. For those seeking assurance, my lot,
> surrounded by wetlands, isn't viable for further development. Our family
> home, built in 1951, has always stood here, and we have no intentions of
> leaving.
>
> So I am just here to say - yes, in my backyard, I support the HCA, I
> support Codman road being included as one of the subdistricts.
>
> Every town can do a small part to support more housing inventory and every
> town can do a small part to allow more housing near public transportation.
>
> I’m not looking to engage in any LincolnTalk back and forth on my thoughts
> on this, but if anyone is looking to discuss these topics further offline,
> please feel free to write me an email and we can grab a cup of coffee.



Re: [LincolnTalk] Housing Choice/Codman Corner Road

2023-10-13 Thread Joan Kimball
Staci, thank you for your thoughtful and inspiring explanation. Joan

On Fri, Oct 13, 2023, 8:13 PM Staci Montori  wrote:

> Hi Susanna,
>
> My understanding is that the first SLIPC working group looked at rezoning
> an 'out of the way' area, the Green Ridge/"Flying Nun" condo area. There
> was a huge amount of residential resistance against this proposal.
> Residents and neighbors were concerned that many of the people renting at
> more affordable rates would be forced to move out of Lincoln if the
> property owners wanted to sell to a developer. The current working group
> found this to be a reasonable concern and therefore didn't include this
> area in the recent rezoning plans.
>
> The Codman Road neighborhood (where I live) is all privately owned, so
> none of us will be forced to sell or move. That makes it a fairer and
> more sensitive rezoning proposal.
>
> We absolutely love living close to the MBTA, the town station, the school,
> the fields and the farms. We utilize all of these often by foot, not by
> car. The main reason we chose to live on Codman Road 23 years ago was for
> the walkability - to the grocery store, the bank, the cafe, and
> to Something Special. And for the bike paths from our home to the schools,
> soccer fields and farms, along with the proximity to all the green space
> and trails surrounding us. I would love for more families and individuals
> (perhaps one of my three children someday) to have the opportunity to live
> at the town station or on parts of my road or Lewis street if parcels are
> redeveloped.
>
> I am very optimistic that any possible negative aesthetic changes and
> traffic patterns to the town station and possibly my neighborhood, will be
> offset by the benefits of providing a home and community to many more
> people. They will all benefit from the amazing things that Lincoln has to
> offer such as extensive green space, organic food and farms, a fantastic
> public school and a commuter rail into Boston. I also think a more vibrant,
> dense town center will attract more successful businesses, making it so we
> all don't have to drive our cars into other communities (causing traffic
> for them) for a variety of things.
>
> Finally, I will say I find the comments from some on LT, that our
> neighborhood is possibly looking to make millions on this change in zoning,
> to be misinformed. We and several of our neighbors want to be part of the
> solution for the housing crisis, lack of diversity and the climate crisis.
> Building a more housing-dense, vibrant, green and accessible town center
> will help with these.
>
> We SO appreciate the HCA committee for being very creative, thoughtful and
> responsive to us and others in town, and for creating a plan that both
> meets the requirements and the spirit of the HCA.
>
> Warm regards,
> Staci Montori & the Montori-Bordiuk Family
> Codman Road
>
>
>
>
>
>
>
>
>
>
>
>
>>
>> --
>
>



Re: Cloudstack VM HA

2023-10-11 Thread Joan g
Hi Nux,

My deployment uses KVM on CentOS 7 with NFS as primary storage. Even
after enabling HA, the HA state shows as "Ineligible" on all 3 KVM hosts.
Did I miss something?

Reg,
Jon

On Tue, 10 Oct, 2023, 19:06 Nux,  wrote:

> Hello,
>
> You need a stable NFS primary storage for the heartbeat file.
> You can keep it in a disabled state after testing - so VMs do not get
> created there - but it needs to be present.
> Watch out: if the NFS storage becomes unstable or unreachable via the
> network (switch fault, etc.), the hypervisors will force-reboot themselves.
>
>
> On 2023-10-10 11:35, Bryan Tiang wrote:
> > Hi All,
> >
> > We are setting up Cloudstack + Linbit SDS (via plugin). Hypervisor is
> > Ubuntu.
> >
> > We are trying to test the VM HA by powering down a physical node at
> > random. However, the VMs doesn’t seem to be failing over to the other
> > nodes.
> >
> > VM HA is enabled already, is there something we are missing?
> >
> > Regards,
> > Bryan
>
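Roughly, the NFS heartbeat mechanism described above can be sketched as follows. This is a self-contained simulation, not CloudStack's actual implementation: `HB_DIR` and the file name stand in for the heartbeat directory the agent keeps on the NFS primary storage.

```shell
# Simulated heartbeat: each host periodically writes a timestamp file on
# shared storage; a file that goes stale marks the host as failed, and its
# HA-enabled VMs would be restarted on another host.
HB_DIR=$(mktemp -d)               # stand-in for the NFS-mounted heartbeat dir
date +%s > "$HB_DIR/hb-host1"     # host1's agent refreshes its heartbeat

now=$(date +%s)
last=$(cat "$HB_DIR/hb-host1")
age=$((now - last))
if [ "$age" -lt 60 ]; then
  echo "host1 alive"              # prints "host1 alive" here: file is fresh
else
  echo "host1 presumed dead"      # management server would trigger VM HA
fi
```

This is also why an unreachable NFS mount is dangerous: a host that cannot refresh its own heartbeat file must assume it has been fenced and reboots itself.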


Re: Cloud init settings for Config Drive on L2 networks

2023-10-06 Thread Joan g
Thanks, it helps a lot.

On Thu, 5 Oct, 2023, 16:43 Jorge Luiz Correa,
 wrote:

> Just sharing some scripts used here. I hope they can help you.
>
> Create file cloud.cfg_jammy
>
> Change the following lines:
> cloud_init_modules:
> .
> .
>  - [ssh, always]
>
> cloud_config_modules:
> .
> .
>  - [set-passwords, always]
>
> Download the cloud-set-guest-password-configdrive.sh script.
>
> Create custom-networking_v2.cfg:
>
> network:
>   version: 2
>   ethernets:
> ens3:
>   dhcp4: true
>
> apt install libguestfs-tools
> wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img
>
> virt-customize --run-command 'rm /etc/cloud/cloud.cfg' -a jammy-server-cloudimg-amd64.img
> virt-customize --upload cloud.cfg_jammy:/etc/cloud/cloud.cfg -a jammy-server-cloudimg-amd64.img
> virt-customize --mkdir /var/lib/cloud/scripts/per-boot -a jammy-server-cloudimg-amd64.img
> virt-customize --mkdir /var/lib/cloud/scripts/per-instance -a jammy-server-cloudimg-amd64.img
> virt-customize --upload cloud-set-guest-password-configdrive.sh:/var/lib/cloud/scripts/per-boot/cloud-set-guest-password-configdrive.sh -a jammy-server-cloudimg-amd64.img
> virt-customize --upload cloud-set-guest-password-configdrive.sh:/var/lib/cloud/scripts/per-instance/cloud-set-guest-password-configdrive.sh -a jammy-server-cloudimg-amd64.img
> virt-customize --upload cnptia-per-instance-script.sh:/var/lib/cloud/scripts/per-instance/cnptia-per-instance-script.sh -a jammy-server-cloudimg-amd64.img
> virt-customize --upload custom-networking_v2.cfg:/etc/cloud/cloud.cfg.d/custom-networking_v2.cfg -a jammy-server-cloudimg-amd64.img
>
> One important thing noted here: if you intend to use a DHCP server in this
> L2 network without statically configured hosts, all VMs will be launched from
> the same template and /etc/machine-id will be the same on each. The DHCP
> client derives its client ID from this value, so the DHCP server thinks all
> the VMs are the same host and offers them the same IP. Chaos!
>
> I've read some documents and posts saying the image distributor (maybe
> Canonical, distributing the qcow2 image) is the party indicated to fix the
> problem, by adding some configuration to reset the machine ID. Indeed, if you
> truncate /etc/machine-id and /var/lib/dbus/machine-id (you cannot remove the
> files), they will be regenerated on first boot.
>
> Here, as the template is already uploaded and distributed to the Zone, I
> wrote an Ansible playbook that fixes this problem. But I think you could also
> run virt-customize and truncate them.
>
> Maybe:
> virt-customize --run-command 'truncate -s0 /etc/machine-id
> /var/lib/dbus/machine-id' -a jammy-server-cloudimg-amd64.img
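To make the failure mode concrete, here is a minimal simulation of the machine-id collision and the truncate fix; the `vm1`/`vm2` directories are hypothetical stand-ins for two cloned guest filesystems.

```shell
# Two clones built from one template share the same /etc/machine-id.
mkdir -p vm1/etc vm2/etc
echo "0123456789abcdef0123456789abcdef" | tee vm1/etc/machine-id vm2/etc/machine-id >/dev/null

# Identical machine-ids lead DHCP clients to derive identical client IDs,
# so the server offers both "hosts" the same lease.
[ "$(cat vm1/etc/machine-id)" = "$(cat vm2/etc/machine-id)" ] && echo "clones collide"

# Fix: truncate (do not delete) the file; systemd regenerates it on first boot.
truncate -s0 vm2/etc/machine-id
[ -s vm2/etc/machine-id ] || echo "vm2 machine-id reset"
```

After the truncate, each clone boots with an empty machine-id file and generates a fresh, unique one, so each derives a distinct DHCP client ID.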
>
> On Thu, Oct 5, 2023 at 05:57, Joan g  wrote:
>
> > Thanks wei...
> >
> > On Thu, 5 Oct, 2023, 13:20 Wei ZHOU,  wrote:
> >
> > > You need to add a script in the template to get password from
> configdrive
> > > and reset user password. For example
> > >
> > >
> >
> https://github.com/apache/cloudstack/blob/main/setup/bindir/cloud-set-guest-sshkey-password-userdata-configdrive.in
> > >
> > >
> > >
> > > -Wei
> > >
> > > On Thu, 5 Oct 2023 at 09:38, Joan g  wrote:
> > >
> > > > Hello Community,
> > > >
> > > > Can someone guide me on the configuration that should be added to
> > > > cloud-init settings for creating password-enabled templates using
> > > > configdrive in Ubuntu 20/22?
> > > >
> > > > We need to deploy password- and sshkey-enabled templates on Ubuntu
> > > > that will be using L2 networks.
> > > >
> > > > Thanks joan
> > > >
> > >
> >
>
> --
> __
> Confidentiality note
>
> This message from
> Empresa  Brasileira de Pesquisa  Agropecuaria (Embrapa), a government
> company  established under  Brazilian law (5.851/72), is directed
> exclusively to  its addressee  and may contain confidential data,
> protected under  professional secrecy  rules. Its unauthorized  use is
> illegal and  may subject the transgressor to the law's penalties. If you
> are not the addressee, please send it back, elucidating the failure.
>


Re: [LincolnTalk] Fwd: Many thanks...

2023-10-05 Thread Joan Kimball
And continuing on Sara's email, thanks also by name to the Coalition to
stop Expansion of Private Jets at Hanscom and Everywhere, which has over 50
groups as members and distributed the bright yellow signs.

Joan Kimball

On Thu, Oct 5, 2023, 9:19 AM Sara Mattes  wrote:

>
>
> > …to those who have made this potential disaster front page news.
> > Thanks to our elected leadership led by Sen. Mike Barrett, the brilliant
> work of Save our Heritage and its president, Neil Rasmussen and all the
> citizen groups and individuals who show up and speak out on behalf of us
> all.
> >
> >
> >
> >
> >
> >
> > ———
> > Sara Mattes
> >
> >
> >
> >
> >
> >
> > --
> > Sara Mattes
> >
> >
> >
> >
>



Re: Cloud init settings for Config Drive on L2 networks

2023-10-05 Thread Joan g
Thanks wei...

On Thu, 5 Oct, 2023, 13:20 Wei ZHOU,  wrote:

> You need to add a script in the template to get password from configdrive
> and reset user password. For example
>
> https://github.com/apache/cloudstack/blob/main/setup/bindir/cloud-set-guest-sshkey-password-userdata-configdrive.in
>
>
>
> -Wei
>
> On Thu, 5 Oct 2023 at 09:38, Joan g  wrote:
>
> > Hello Community,
> >
> > Can someone guide me on the configuration that should be added to
> > cloud-init settings for creating password-enabled templates using
> > configdrive in Ubuntu 20/22?
> >
> > We need to deploy password- and sshkey-enabled templates on Ubuntu that
> > will be using L2 networks.
> >
> > Thanks joan
> >
>


Cloud init settings for Config Drive on L2 networks

2023-10-05 Thread Joan g
Hello Community,

Can someone guide me on the configuration that should be added to
cloud-init settings for creating password-enabled templates using
configdrive in Ubuntu 20/22?

We need to deploy password- and sshkey-enabled templates on Ubuntu that
will be using L2 networks.

Thanks joan

