Re: [darktable-dev] Code reformatting...

2023-02-01 Thread Mark Feit
I don't contribute code to darktable often, but I do follow this list 
closely and feel the need to comment on this.  Take it or leave it as 
you wish.


On 2/1/23 7:04 AM, Moritz Moeller wrote:
This has nothing to do with how many characters fit on a line on any 
display and everything to do with legibility.
Just because it's code doesn't mean the rules discovered by typographers 
and used for typesetting text for centuries don't apply. ;)

45-90 chars/line is the suggested default for body text.


The darktable source code isn't prose and isn't consumed like it.  I can 
all but guarantee that formatting it according to _all_ of the rules for 
typesetting text and forcing people to work with it would result in a 
lot of screaming.




See https://practicaltypography.com/line-length.html


From that page:

"If you plan to use indenting to distinguish sections or hierarchies 
within your document, take this into account when setting up the initial 
line length. You want to start with long enough lines so that the 
indented parts also fall within the target range. Using fewer levels of 
indentation, and smaller indents, will help."


The implication there is that, despite the dictum at the top of the 
section, the characteristics of the material being set should be taken 
into consideration.



There are modern languages which enforce a unified code formatting 
style.  Rust, e.g., has rustfmt, which almost all projects run 
automatically ...


Modernity doesn't really make a difference here.  Rust didn't invent 
bringing a formatter along with the language and it certainly didn't 
introduce forced formatting for commits.  I've been on projects where 
that was done to the assembly code.


darktable is written in C.  C is saddled with having exactly one 
namespace, which means globally-declared identifiers have to be unique.  
Uniqueness turns into length when the identifiers need to be 
human-readable and sensibly-organized.  Identifiers that run north of 20 
characters aren't unusual in this code base, nor are places where they 
end up in long lines.  A modest expansion of the line width to make 
those lines work better might not be a bad thing.
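
For illustration, a made-up but representative prototype (the names are 
invented, in the style of identifiers this code base actually uses):

    /* Hypothetical prototype: with one global namespace, every
       exported symbol carries its module path in its name, so even a
       simple declaration crowds an 80-column line. */
    void dt_develop_blend_params_synchronize(dt_iop_module_t *module,
                                             const dt_develop_blend_params_t *params);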



That said, in three years of using Rust and seeing Rust code in the 
wild, I've never seen 100c/l ever reached because there are other 
rules in rustfmt that commonly prevent this.


Most C does just fine constrained to 80.  Most parts of darktable I've 
browsed through probably would, too.  But...  In 35 years of writing C 
and seeing C code in the wild, I've run into code bases where formatters 
constrained to short lines made them awkward and difficult to read.  
Constraining darktable's C code base to whatever length some other code 
base in some other language uses as standard is dogma-driven design and 
it's not helpful.


The pragmatic approach would be to find something that works for *this* 
code base and *these* developers.  Propose a few different line widths, 
format the entire code base for each and put copies up someplace.  
Anyone with an interest can spot check whatever parts of the code they 
find interesting and provide feedback on readability and how usable 
those widths are with the development tools.  Give extra weight to the 
opinions of the people who spend the most time around the source.  Then 
settle on a standard, apply it to the *entire* code base and stick with 
it.  If there are different developers in five years who have different 
opinions, they can go through this exercise again.


--Mark






Re: [darktable-dev] where to discuss big changes

2019-04-17 Thread Mark Feit

On 4/16/19 1:51 PM, rawfiner wrote:


Using github by commenting directly on PRs


I'm in favor of this with some modifications, with the mailing list as a 
second choice.  IRC is too ephemeral without logging, and conversations 
can go on for days and end up intermixed with other topics, making it 
hard to pick through.


PRs shouldn't exist until there's code to be pulled.  Sometimes there's 
discussion about a project before anyone writes a single line.  These 
projects should be started as issues that we can organize using tags.



pros:
- we can comment directly near the code, and we can comment code details
- people can see the message when they want, and reply when they want, 
wherever they are on the planet

- we can have conversations organised by topics (PRs)


In addition to all of that, the commits can refer to the issues (e.g., 
"Overhaul of malloc() and free()  #1234") so anyone looking at the code 
has a near-direct path to the entire history of how it got there.


If a discussion takes place elsewhere (IRC or the mailing list), the 
conversation should be copied out and added to the issue or, if it 
persists somewhere linkable, linked from the issue.



cons:
- devs will have to check new PRs regularly to give their opinion, and 
a big-change PR may "hide" in between small ones. However, we could 
easily have a tag "big-change" to request devs to pay attention to 
particular PRs, or use the PRs names to indicate such big changes


This isn't any different than looking at the issues that get written for 
bugs, which makes a good case for using issues (or Redmine or anything 
similar).


- big changes should be discussed before making them. Yet, I think 
this drawback can be compensated by making PRs really early, which is 
already done by several of us (see PRs with [WIP] in the title)


Work-in-progress PRs lower the signal-to-noise ratio for those 
considering PRs.  If someone is working on a project, they should be 
doing it in their own GitHub space and requesting a pull when it's ready 
to be integrated.


As for the concerns about GitHub going away:  Everything the project has 
on GitHub can be backed up.  I do this twice a day for my own work and 
anything I depend on (darktable included): 
https://github.com/markfeit/github-backup.



Another way to have this discussion would be to come up with an 
agreeable workflow and then find the tools that fit it.


--Mark





Re: Fwd: [darktable-dev] Pushing ISO (ISO invariance)

2019-03-02 Thread Mark Feit

On 3/2/19 5:37 AM, Bruce Williams wrote:
Yeah, but the numb nuts said "darktable could only give +3 stops of 
exposure".
Clearly, they did not know about the ability to instigate a second 
instance of a module.



As cameras continue to see better in the dark, I wonder if it would be 
worth considering bumping the upper limit on the slider to +4.  With my 
D750, +2.75 is no longer the source of amazement it once was.  :-)


--Mark





Re: [darktable-dev] Dynamic Memory Allocation Overhaul

2019-02-05 Thread Mark Feit

On 2/5/19 10:20 AM, parafin wrote:

One can argue that crashing might be helpful for debugging - backtrace
is produced and it's possible to deduce the reason DT exited. E.g. if
some allocation size is computed too high (say, due to integer
underflow), malloc can fail, and if we just exit cleanly we will lose all
context of the failure.  Also it might be surprising for the user unless DT
prints some error message (which BTW won't be seen in most cases
because DT is usually started by users without opening a terminal
window).


My definition of cleanly includes leaving a hint on the way out. 
dt_exit() takes printf()-style arguments, so it will have a reason in 
hand if the caller provides one, which the sanity-checked allocators 
do.  (See 
https://github.com/markfeit/darktable/blob/9c0787a3875708a94c8f7ae5cdcf33309c837606/src/common/utility.c#L60 
and the functions directly below it.)


What the program does with that information on the way out is up for 
discussion.  What I have there now prints a message to stderr and exits 
1 which, as you point out, isn't going to be seen any more than the 
prior segfaults if the program wasn't launched in a terminal.  Writing 
the message to a file in the config directory would be a reasonably-safe 
option if stderr isn't a tty.  Opening a dialog box to display the 
message before exiting would be another, except in the OOM case.  Even 
then, de-initializing as much of dt as possible might free up enough RAM 
to make that work, but there are other things that could stymie it.  
Calling abort() would produce an abnormal end that would end up dumping 
core on Linux or as a crash report on OS X.  I have no idea what Windows 
would do.
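
As a rough sketch of the stderr-or-file idea (the function body and the 
log-file name here are placeholders, not the code in the branch):

    #include <stdarg.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch: report a fatal error, then leave.  If stderr isn't a
       tty (e.g., dt was launched from a desktop icon), append the
       message to a log file so it isn't lost. */
    void dt_exit(const char *format, ...)
    {
      FILE *out = stderr;
      FILE *log = NULL;
      va_list args;

      if(!isatty(fileno(stderr)))
      {
        /* "fatal.log" stands in for a file in the config directory. */
        log = fopen("fatal.log", "a");
        if(log != NULL) out = log;
      }

      va_start(args, format);
      vfprintf(out, format, args);
      va_end(args);
      fputc('\n', out);
      if(log != NULL) fclose(log);

      /* Closing the database and removing the lock file would happen
         here before the exit. */
      exit(EXIT_FAILURE);
    }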


This leaves dt better off in terms of code quality and no worse off in 
terms of UX when the program has to die.  That's a net positive.  The UX 
problem is bigger than this.


--Mark





Re: [darktable-dev] Dynamic Memory Allocation Overhaul

2019-02-05 Thread Mark Feit

On 2/5/19 3:10 AM, Stefan Klinger wrote:

IMHO it would not make sense to try to be overly smart here.  A system
with failing `malloc` is on the brink of disaster, and writing
fail-safe code under these conditions is extremely difficult.  For one,
the recovery routines must not try to allocate memory.


Not looking for fail-safe so much as fail-nicely:  don't SIGSEGV by 
trying to use the NULL from a failed malloc(), just close the database, 
remove the lock file and head for the exit().  I've had dt crash hard 
enough times that I'm not worried about state.  The most I can recall 
losing is what I was doing on one image.


What I added exits through a function called dt_fail(), which provides a 
good single point of exit.  What happens there can be a subject for 
later discussion.


--Mark





Re: [darktable-dev] Dynamic Memory Allocation Overhaul

2019-02-04 Thread Mark Feit

On 2/4/19 2:22 AM, Andreas Schneider wrote:
If you want to change allocations anyway, you should really take a 
look at

talloc [1]. talloc is a hierarchical, reference counted memory pool system
with destructors. If you free the top memory context all children will be
freed too. It is just fun working with it and memory leaks don't really happen
anymore. If you forget to free some memory it will be gone once the parent is
freed. Also you can print the memory allocation tree for inspection.



My observation while doing this has been that most allocations are 
single blocks, which I don't think makes talloc's benefits worth the 
additional overhead we'd get in trade.  That said, there are a handful 
of spots where that _might_ be useful, although some of that could be 
mitigated with carefully-written setup/teardown functions for the 
structures.
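
For anyone unfamiliar with it, the hierarchy Andreas describes looks 
roughly like this (a minimal sketch assuming libtalloc; not a proposal 
to adopt it):

    #include <talloc.h>

    int main(void)
    {
      TALLOC_CTX *ctx = talloc_new(NULL);        /* top-level context */
      char *name = talloc_strdup(ctx, "image");  /* child of ctx */
      int *counts = talloc_array(ctx, int, 64);  /* also a child of ctx */
      (void)name;
      (void)counts;
      talloc_free(ctx);  /* one call frees the whole tree */
      return 0;
    }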


Right now I'm trying to make the static analysis report shorter by 
adding some safety and not making major changes to the way the existing 
code works.


--Mark





[darktable-dev] Dynamic Memory Allocation Overhaul

2019-02-03 Thread Mark Feit
(This has been split off from the static code analysis thread since it's 
a different topic.)


I've completed the first cut of an overhaul of the use of dynamic memory 
allocation in the source tree with an eye toward safety and an eventual 
place to do a clean shutdown (closing the database, removing locks, etc.).



What was done:

The functions malloc(), calloc(), realloc(), strdup() and strndup() have been made 
fail-safe through counterparts named dt_*() which exit cleanly through a 
common function.  The new versions are in the utilities.h header as 
short static inline functions that, on failure, call an error function 
in utilities.c.  This arrangement seems subjectively faster than a 
straight-up function call and doesn't pepper copies of the same static 
strings all over the object files.  If any functions have been missed, 
please point them out.
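
In outline, the arrangement looks like this (a simplified sketch, not 
the exact code in the branch):

    /* utilities.h -- the hot path stays inline, and the error string
       lives in one object file instead of being duplicated. */
    #include <stdlib.h>

    /* Defined once in utilities.c; reports the error, does whatever
       cleanup is possible and exits.  Never returns. */
    void dt_fail(const char *format, ...);

    static inline void *dt_malloc(size_t size)
    {
      void *result = malloc(size);
      if(result == NULL)
        dt_fail("Failed to allocate %zu bytes", size);
      return result;
    }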


Calls to free() have been changed to dt_free(), which is currently a 
pass-through to free().


dt_alloc_align() and dt_free_align() have been given the same treatment 
and renamed to dt_malloc_aligned() and dt_free_aligned() for name 
consistency with the other functions.



What was not done:

Calls to these functions' glib equivalents are still intact.  What I've 
been able to get from the documentation so far is that there's nothing 
special about them other than exiting cleanly when allocation fails, but 
I want to make sure there are no other ramifications before proceeding.


I may see about adding something to catch calls to malloc() et al and 
cause a compilation error or at least a hard runtime failure.


No effort has been made to do any kind of clean shutdown.  I need some 
advice from the learned on how to approach that.


There are a number of standalone programs that allocate memory but don't 
include any of the other common dt headers.  Those will be dealt with 
separately to avoid having to link anything additional.


These changes have not been run through PVS-Studio to see how much 
smaller the report is (and to catch any mistakes in my own work).



If anyone wants to give it a try, the current code is in the 
"malloc-overhaul" branch of my fork at 
https://github.com/markfeit/darktable.git.  So far all seems good, but 
I'm going to dogfood it for a bit longer before submitting a pull request.


--Mark





Re: [darktable-dev] static code analysis

2019-01-28 Thread Mark Feit

On 1/28/19 3:15 AM, johannes hanika wrote:

re: malloc() and 0: linux overcommits, i.e. it will likely never
return 0 even if your memory is all full. this means checking for 0 is
completely useless in this context.


To be blunt, that reads like a rationalization for writing bad software.

Returning NULL when malloc() fails has been normal behavior since day 
one; the standard that codified it celebrates its 30th anniversary this 
year.  Unless there's a compelling reason to do otherwise, code should 
be written to the language standard, not the default behavior of the 
operating system supervising it.  The status quo does nothing to help 
the ports for Windows, which doesn't overcommit, and OS X, which might 
not (I can't find a solid reference one way or the other).  And, of 
course, it will break just the same on Linux systems that aren't set to 
the default.


To continue Stefan's theme, a clean exit leaves a fighting chance that 
dt's business will be successfully closed out; a SIGSEGV guarantees that 
it won't.  If wider use of dt is a goal of the project, making users 
pick through configuration directories after a crash when it could have 
been avoided won't help spur adoption.


I think wrappers for allocation have just become this week's project.

--Mark





Re: [darktable-dev] static code analysis

2019-01-27 Thread Mark Feit

On 1/27/19 6:18 AM, Heiko Bauke wrote:


Currently, there is an offer for open source developers to get a free 
license for the PVS-Studio Analyzer tool.  I got one and applied the 
tool to the darktable master branch.

...
I was not yet able to study the results in detail.  There might be a 
lot of false positives or just minor issues.  But I expect to find 
more serious things as well. 


I did a survey of the report index, dove into a few dozen of the errors 
and found it to be a high-quality report with little in the way of false 
positives.


A large percentage of the warnings are related to assumptions that 
pointers will not be NULL and a large percentage of those are directly 
or indirectly related to unchecked returns from malloc() and calloc().  
That could be made to go away by writing wrapper functions that check 
what's returned and halt the program nicely in the rare event of a 
failure.  Knowing the failure happened there would send the developers 
on fewer wild-goose chases to find the cause of a SIGSEGV further down.  
If performance is a 
concern (not that malloc() is a real screamer to begin with), the 
solution can be split in a way that makes it attractive for the 
optimizer to inline the check.
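
Concretely, that split might look like this (a sketch with invented 
names; the attributes are GCC/Clang extensions):

    #include <stdlib.h>

    /* Failure path: out of line, marked cold so the optimizer keeps
       it off the hot path.  Defined once; never returns. */
    __attribute__((cold, noreturn)) void dt_allocation_failed(size_t size);

    /* Success path: small enough that inlining it is nearly free. */
    static inline void *dt_checked_malloc(size_t size)
    {
      void *p = malloc(size);
      if(__builtin_expect(p == NULL, 0))
        dt_allocation_failed(size);
      return p;
    }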


Those aside, the others I looked at seem legitimate and are worth 
fixing.  None of it will require major work.


For example, 
https://rabauke.github.io/darktable_analyze/sources/collection.c_4.html#ln144 
looks very fishy to me.


That's definitely code I'd kick back during a review with a 
recommendation that it be pulled into a small function, because the same 
logic is used repeatedly and the variable assignment inside the 
condition is ugly.  You might get a pull request for that shortly.  ;-)



If I can offer a few additional comments:

Before committing the project to PVS-Studio, it would be worth 
evaluating some of the alternatives, especially those that are 
open-source.  I think it's great that PVS offers a free license for so 
many situations, but there is the risk of having to go through and 
re-flag all of the spots in the code where warnings were suppressed 
should they change the license terms or go out of business and not 
release the sources.


Once the code is to a point where the analyzer has nothing to squawk 
about, static analysis needs to be repeated regularly. This could be 
done as a simple cron job that notifies the developers when something 
crops up or as a check-run webhook to prevent code that doesn't pass 
from being committed to GitHub.  I have a system with cycles and space 
to play that role if needed and can make whatever configuration and 
scripts I develop part of the dt sources so others can run it.


Doing the cleanups for this is a great opportunity for someone like me 
who wants to give something back and doesn't have the time to take on a 
major project.  It isn't glamorous work, but I'd be happy to do it.


--Mark





Re: [darktable-dev] Feature request

2018-12-12 Thread Mark Feit

On 12/12/18 4:28 PM, Terry Duell wrote:


This is pretty much the solution that Patrick proposed.


Unless I missed something (not ruling that out; it's been known to 
happen), the first reply I saw was Pascal Obry's, which was GUI-centric.


The impression I got from your reply was that you were looking to do 
this through the CLI.  That piques my interest because my workflow is 
very batch-centric and all of the final generation of images is 
done by shell scripts.  The only time I use the GUI for exporting is 
when I need to dump one image or something special like exporting a 
calibration card to build a camera color profile.



I did a test with one file where I set enabled to 0 for clipping and 
it didn't have any effect, and the prospect of attempting to use sed or 
similar to edit the XMP to remove all clipping instructions was beyond 
my abilities.
I'll have to look at that again, as there may have been another 
clipping command later in that xmp.


I've created a tarball with my experiment that you can download from 
http://www.feitography.com/hole/nocrop.tar.gz. Unpack it, run 'make 
diff' to see the difference between the cropped and uncropped versions 
of the XMP, then run 'make' to produce JPEGs of both versions of the 
image with the CLI.



If you attempt this please let me know how you get on and pass on your 
scripts if you are prepared to do so.


I'll see if I can knock it out in the next few days and will post the 
results here.  Shouldn't take long.  I need to learn how to use 
XMLStarlet, but that should be quick since I'm after two pretty simple 
operations.



--Mark





Re: [darktable-dev] Feature request

2018-12-12 Thread Mark Feit

On 12/11/18 10:07 PM, Terry Duell wrote:


The requirement for uncropped images seriously complicates the task if 
using darktable-cli.
The request is to have a 'crop=0' switch (or similar) for 
darktable-cli...if that is possible, and reasonable.


That would start a trend of having to propagate a lot of switches out to 
darktable-cli, which wouldn't end well.  Everything needed to bend 
darktable to your will is already in the sidecar (XMP) file that goes 
with each image.  What's missing is an alternate sidecar with the 
cropping turned off that could be handed to darktable-cli.


I did the following manually for a single image and it produced the 
right thing:


1.  Make a copy of the image's sidecar file (cp image.nef.xmp 
image.nef.xmp-nocrop).


2.  Edit the copy (vi image.nef.xmp-nocrop), locate all of the rdf:li 
items where darktable:operation is "clipping" and change 
darktable:enabled to "0".  This process can be automated with xsltproc 
or XMLStarlet.  The latter is available on all three of the platforms 
where darktable is supported.  (You could probably get away with some 
sneaky sed-based tricks, but I don't recommend that because the 
arrangement of the XML shouldn't be considered stable.  Use the right 
tool for the job.)


3.  Run darktable-cli against the image and edited XMP to produce an 
uncropped version of the image (darktable-cli image.nef 
image.nef.xmp-nocrop image-uncropped.jpg).


4.  Harvest the uncropped image for distribution and remove it and the 
edited sidecar if you don't want to keep them around.



Turning it into a shell script that can operate against any set of files 
you choose should be an easy exercise.  (It's an interesting enough 
problem that I might do it myself.)


HTH.

--Mark






Re: [darktable-dev] Darkroom UI refactoring

2018-10-09 Thread Mark Feit

On 10/9/18 3:02 AM, Aurélien Pierre wrote:


But even if we keep the actual disposition, don't you think it's weird 
that :


  * in/out color profiles are stored in the color tabs, whereas they
are "basic" in the sense that they are needed for technical
reasons and are always on,

I don't think that's weird.  The first place I'd look to change the 
color profile is in the same tab with all of the other 
color-behavior-changing tools.  The fact that an input profile always 
has to be processed is an implementation detail.  I don't think users 
who are unacquainted with what's going on under the hood are going to 
care about that.



  * signal-processing modules are mixed with creative ones



The same applies here.  I get the distinction you're making, but every 
module is still a signal processor and is there in pursuit of a creative 
goal.



The main problem I have with the current disposition is low-level 
stuff comes last in the UI.

I grouse about that a bit sometimes as well, but my workflow has turned 
out that 75% of what I process gets the same half-dozen low-level things 
applied from a preset and then everything I actually need to adjust is 
up top.  The things I toggle and/or adjust frequently are favorites, 
everything that's already in the pipe is under the group showing what's 
on, and I go grab the rarities from their tabs as I need them.


Maybe the way to solve that problem would be to have a switch to invert 
the displayed order so the early-in-the-pipe stuff comes first.  
There'd need to be some visual cue that it's being displayed that way, 
but I'm not sure I'd want to devote the screen real estate to it.


--Mark



Re: [darktable-dev] Which colour target is recommended for a proper colour matrix

2018-09-05 Thread Mark Feit

On 9/4/18 12:42 PM, Andreas Schneider wrote:

If you want to create a camera profile with a target, I suggest you read:

https://pixls.us/articles/profiling-a-camera-with-darktable-chart/

For what it's worth, I have a convenient system for building and 
installing color profiles into your dt config directory that does half 
the work: https://github.com/markfeit/darktable-input-color-profiles


--Mark




Re: [darktable-dev] Lens Correction

2018-04-23 Thread Mark Feit

On 04/23/2018 03:26 AM, sturmflut wrote:

Could it maybe be possible that the lensfun database got corrupted on
some systems?


It's not just some systems and it's not a database problem.  I've had 
this problem for a long time and, figuring it was pilot error, never 
got around to digging in to find the cause.


The DT I run (2.4.1 on Linux; I will be upgrading later this spring when 
I get a lull in shooting) is built from the sources against the lensfun 
installed on the system, which I update (binaries and data) before 
building.  That leaves no chance of a version mismatch.


DT correctly identifies the body and lens in the image information box.  
The Lens Correction tool correctly recognizes my bodies (SLR and 
point-and-shoot) but fails to recognize any of the lenses I use regularly:


Nikkor AF 18-35mm f/3.5-4.5D IF-ED
Nikkor AF-S 28-300mm f/3.5-5.6G ED VR
Nikkor AF 35-70mm f/2.8D
Nikkor AF 50mm f/1.4D
Nikkor AF-S 70-200mm f/2.8G VR IF-ED

I've checked the XML files that make up the Lensfun database, and all 
are in there.  I also have and use a 28-105 which is not in the database 
and, as expected, isn't found and behaves the same way.


One other thing I've noticed is that the list of choices available when 
trying to select a lens manually is a subset of what's in the database.  
The brands with a compatible mount show up (so I get Nikon, Tamron and 
Samyang but not Canon or Olympus), but the list of lenses is incomplete 
despite being in the XML.  Whether this is a problem in Lensfun or DT 
remains to be seen, but I'd be hard-pressed to believe that a Lensfun 
failure to recognize common lenses like the 70-200 or the 50 f/1.4 would 
have been left to fester for very long.


An image that illustrates the problem on my system can be downloaded 
from https://s3.wasabisys.com/darktable/lensfun.nef.


--Mark




Re: [darktable-dev] darktable accessing lensfun issues

2018-04-18 Thread Mark Feit

On 4/18/18 12:29 PM, Patrick Shanahan wrote:

did you try lensfun-update-data


For what it's worth, I have the same problem and have verified that the 
lensfun data is current.


--Mark




Re: [darktable-dev] Show maximum possible export size in light table?

2018-01-12 Thread Mark Feit

On 1/12/18 3:29 AM, Heiko Bauke wrote:


At the moment, one can collect images by specific criteria.  Some of 
these criteria can be expressed by numerical values, e.g., focal 
length or ISO.  It would be nice if one could collect not only images 
taken at a specific ISO, let's say, but also taken at an ISO 
larger/greater than a specific given value.


In a next step one could add the possibility to collect images of a 
width/height larger/smaller than a specific value.  The possibility to 
collect images of a particular size only would not be powerful enough 
for most use cases.  There are too many possible sizes. 


A really, really good solution to this would be to dump all available 
image metadata into a full-text indexer and allow the user to filter 
using queries.  Done right, it gives users immense flexibility and saves 
having to write and maintain application-specific filtering infrastructure.


I've used Lucene to do similar things on several projects over the last 
15 years, some indexing tens of millions of records, with a great deal 
of success.  It's fast, reasonably lightweight, solid, makes a very 
small index relative to the corpus if you're not using it as a document 
store and has a query language that can't be beat. Lucene supports 
fields, which would allow search on specific attributes in addition to 
free-form, Google-style queries.  For example, a query to pull images 
taken with a Nikon at ISOs up to 1000 taken during 2017, have five 
stars, contain the word "green" (maybe as a result of having the green 
label applied) and not containing the word "wedding" anywhere would look 
like this:


    maker:nikon iso:[0 TO 1000] createdate:[20170101 TO 20171231] 
stars:5 green -wedding


The down side is that Lucene is written in Java, and I'm pretty sure the 
last thing anyone wants to do is try to integrate it directly with DT.  
There are ports to other languages (including C++, which could be 
wrapped in C) that are binary-compatible with the Java version's indexes 
and, of course, other indexers that could be considered.  It would also 
be possible to integrate with an external program that takes insertions, 
updates, deletions and queries from DT and returns results via a pipe or 
socket.  The presence or absence of the program could be used as a 
switch for whether or not DT enables its indexing features.


I don't have the spare cycles to take on the complete project, but I 
would be more than happy to provide guidance on how to organize the 
index, evaluate indexing software and do enough at-the-edges integration 
work to make the indexing work easy to incorporate into DT.


--Mark




Re: [darktable-dev] Darktable + Cloud-computing

2017-11-02 Thread Mark Feit

Aurélien PIERRE wrote:


So… what do you think of having the heavy filters processed in 
Darktable through the servers of Amazon or anybody else instead of 
having to break the bank for a new (almost) disposable computer ? 
Possible or science-fiction ? How many of you don't have a 1MB/s or 
faster internet connection ? How difficult would it be to code ?



Possible?  Sure.  Practical?  Not so much.  Interesting thought, though.

For starters, you'd have to send the image there and back.  A 
losslessly-compressed, 14-bit NEF from my Nikon D750 runs 29 MB (bytes) 
and, to make the math easy, let's say you have a 29 Mb/s (megabits) 
connection to the Internet.  Sending that image in each direction will 
take eight seconds out and eight seconds back, so even if your slow 
computer takes, say, 15 seconds to run a filter, you're already behind 
the curve on data transfer alone.  The actual quantity of the data DT 
would need to transfer would be a lot larger since it's not dealing with 
the image in a compressed format internally.  On top of that, you don't 
just buy CPU from Amazon. They also charge you for using their pipes to 
get data in and out (more for out, because they want you to also pay 
them to store your data on their other services).  Faster compute is 
available but comes at a premium.


Where Amazon shines for this sort of thing is in large parallel jobs 
where you can spin up a bunch of machines that live long enough to do 
the work and then shut them off.  For DT to be practical, you'd have to 
have a system at the ready to do the work or instantiate one each time 
you start DT and tear it down when you exit.


You can get a lot of compute on your desk for not much money if you shop 
carefully, and being able to leverage the GPU(s) in your graphics card 
helps, too.


--Mark




Re: [darktable-dev] PSA: Does anyone still use gcc-4.8/4.9 ?

2017-03-28 Thread Mark Feit

On 03/28/2017 04:57 AM, Roman Lebedev wrote:

I want to bump gcc requirement up to GCC-5 soon,

Thus, subj.
If you do, please speak up now.
PS: using outdated distro version is not an excuse.


I'm not even a bit player in this but...

This will put off anybody using EL7 or its derivatives because there are 
ABI and library changes and no easy way to get GCC 5 installed using the 
"usual" repositories (i.e., Base and EPEL).  Fedora has already made the 
jump, but those changes aren't going to be in the stable distro picture 
until EL8 happens.  Whether that makes EL7 outdated would, of course, be 
up for discussion.


If EL8 drags out beyond production DT having the GCC 5 requirement, EL 
users running the latest available won't have a way to use it other than 
a VM with a distro that supports it.  I've done that a couple of times 
out of necessity, but I/O is sluggish enough that I'd prefer not to.


--Mark




Re: [darktable-dev] Re: PSA: ATTENTION NIKON OWNERS !!! LAST CALL

2016-10-14 Thread Mark Feit

On 10/13/16 1:51 PM, Roman Lebedev wrote:


Okay, some samples were provided, a few cameras got off the list.
But not all.

Here is the current up-to-date list of the cameras, samples
from which are still wanted:

...
NIKON CORPORATION NIKON D1

How short is the timeline on this?  I have a D1 in storage that I can 
use to generate the files you need but might not be able to get at it 
for a week or so.


--Mark




Re: [darktable-dev] Harsh reaction

2016-10-04 Thread Mark Feit

On 10/04/2016 02:54 AM, Frederic Crozat wrote:


Just ignore him. This kind of behavior is not acceptable (I'm not a 
darktable developer, I have zero interest in Windows or macos ports, 
I'm only helping for openSUSE packaging when needed).


You are fixing real bugs in the codebase, which happen by chance to 
not be visible on Linux. They should still be fixed.




I'm a heavy user and occasional developer and couldn't agree more with 
both points.


Recommending darktable and having to add the caveat that Windows users 
need to run it in a VM is becoming old.  There's no reason in this day 
and age that this program shouldn't run on Windows.  If Jan is going to 
be the one who gets it all the way there, great.  If he's not, that's 
fine, too, because sometimes it takes a lot of false starts before 
something happens.  I evaluated darktable at several points in its life 
before starting to use it full-time; had I given up after a few 
attempts, I'd still be using AfterShot (nee Bibble) and hating 
everything about it except Perfectly Clear.


In two weeks I'm giving a talk on photography to a local Girl Scout 
troop and I still have no idea what software to suggest for the Windows 
users who haven't earned their Linux-in-a-VM merit badges.


--Mark
