- builtins.object
  - BBox
  - BBoxList
    - SubCompositeBBoxList
  - CompositeImage
    - SubCompositeImage
  - SequentialExecutorFuture
  - SubCompositeList
- concurrent.futures._base.Executor(builtins.object)
  - SequentialExecutor
- constitch.constraints.ConstraintSet(builtins.object)
  - CompositeConstraintSet
    - SubCompositeConstraintSet
class BBox(builtins.object)
BBox(position=None, size=None, point1=None, point2=None)
Represents the bounding box of an image in a CompositeImage. Contains two
representations of the box: as a position and a size, and as two points, point1 and point2,
which define the bounding box. Both representations can be read from and assigned to
through BBox.position, BBox.size, BBox.point1, and BBox.point2, and a BBox can be
constructed from either.
Methods defined here:
- __init__(self, position=None, size=None, point1=None, point2=None)
- Initialize self. See help(type(self)) for accurate signature.
- __repr__(self)
- Return repr(self).
- __str__(self)
- Return str(self).
- area(self)
Returns the area of the box, equivalent to self.size.prod().
However if the size of the rectangle is negative in either or both
dimensions the result will be negative, indicating an invalid rectangle
- as2d(self)
- Creates a copy of this box that only has two dimensions,
dropping extra values.
- collides(self, otherbox)
- Whether this box collides with the other box. This is defined as either
overlapping or sharing an edge
- contains(self, otherbox)
- Whether this box fully contains the other box, meaning every pixel in the other
box is also contained in this box
- copy(self)
- intersection(self, other)
- Returns the overlapping area between this box and the BBox other passed in
This may return boxes with negative size, which would indicate there is no overlap
- overlaps(self, otherbox)
Whether this box overlaps with the other box. Sharing an edge does not count
as overlapping; to count as overlapping there must be at least
one pixel that is contained in both boxes
Readonly properties defined here:
- center
- The center pixel of the image, rounded to the nearest pixel. Equivalent
to np.round(self.position + self.size / 2)
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
- point1
- The first position that defines the bounding box of the image.
When retrieved is identical to self.position.
Can be assigned to, however this is not the same as assigning to self.position.
When assigning self.point2 is maintained, meaning self.size will be changed.
The new size is calculated as self.position + self.size - value
Because of this difference in assignment behavior it cannot be updated
by indexing, ie box.point1[0] = 5 will fail.
- point2
- The second position that defines the bounding box of the image.
Calculated as self.position + self.size.
This can be assigned to, which will change the size of the image, keeping
self.point1 and self.position the same. The new size is calculated as
value - self._position
However it cannot be updated by indexing, ie box.point2[0] = 5 will fail.
- position
- The position of the image bounding box, measured from the
origin of the image, ie the (0, 0) pixel in the top left of
the image.
Setting the position will move the box to the new position, maintaining
the previous size. Additionally the position can be indexed and modified
in place, such as box.position[0] = 5. However take care not to make a copy
when indexing, as changes made to a copy will not take effect.
- size
- The size of the image bounding box, the (width, height) of the image
This can be assigned to, modifying the box size. As with BBox.position,
it can be indexed and assigned to as well.
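A short sketch of how these properties interact, following the descriptions above
(the constitch.BBox import path and constructor call are assumptions based on the signature):
box = constitch.BBox(position=(10, 20), size=(100, 100))
box.point1            # same as box.position: (10, 20)
box.point2            # position + size: (110, 120)
box.point2 = (60, 70) # keeps point1, so size becomes (50, 50)
box.position[0] = 5   # position supports in-place indexing, point1/point2 do not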
class BBoxList(builtins.object)
BBoxList(positions=None, sizes=None)
A list of image bounding boxes, contained in a CompositeImage.
Supports normal list operations such as indexing, len, and index, as well
as bulk operations on all boxes with the numpy array properties
BBoxList.positions, BBoxList.sizes, and similar.
Methods defined here:
- __contains__(self, box)
- __getitem__(self, index)
- __init__(self, positions=None, sizes=None)
- Initialize self. See help(type(self)) for accurate signature.
- __iter__(self)
- __len__(self)
- __repr__(self)
- Return repr(self).
- __str__(self)
- Return str(self).
- append(self, box)
- Add a new BBox to this list. Typically users should not need to use
this, instead add images through the CompositeImage.add_images and similar
methods
- copy(self)
- index(self, box)
- resize(self, n_dims)
- Changes the number of dimensions all boxes in the list have. It is enforced
that all boxes in the list have the same number of dimensions. When adding a dimension
it is filled with zeros
- setpositions(self, positions)
- Applies new positions to all boxes
Args:
positions (sequence of positions, dict of positions, callable):
Specifies a change in positions for boxes, depending on the type:
If a numpy array, the new positions are set as self.positions, maintaining sizes of boxes.
If a dict of positions, each entry will be set as the position of the box at the key.
If a callable, it is invoked for each box. If it returns a new position it is applied to the box
Readonly properties defined here:
- centers
- The center pixel of all image boxes, rounded to the nearest pixel.
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
- points1
- The lower point of all bounding boxes in the list as a 2d array. See BBox.point1.
- points2
- The higher point of all bounding boxes in the list as a 2d array. See BBox.point2.
Setting this will update all sizes, same as assigning to BBox.point2
- positions
- The positions of all boxes in the list, as a 2d numpy array. See BBox.position
for specifics about the positions of the boxes.
Setting this will update all positions, and slices can be modified in place;
for example self.positions[:,0] += 100 increases all x positions by 100
(see the example after this property list).
- sizes
- The sizes of all boxes in the list as a 2d array. See BBox.size.
Setting this will update all sizes, same as assigning to BBox.size
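A brief sketch of bulk updates on a box list, following the property descriptions above
(accessing the list as composite.boxes is based on the SubCompositeImage docs below):
boxes = composite.boxes
boxes.positions[:,0] += 100     # shift every image 100 pixels along x
boxes.setpositions({0: (0, 0)}) # move image 0 back to the origin
boxes.centers                   # rounded center pixel of every box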
class CompositeConstraintSet(constitch.constraints.ConstraintSet)
CompositeConstraintSet(composite, pair_func, random_pair_func)
- Method resolution order:
- CompositeConstraintSet
- constitch.constraints.ConstraintSet
- builtins.object
Methods defined here:
- __call__(self, *args, **kwargs)
- Call self as a function.
- __init__(self, composite, pair_func, random_pair_func)
- Initialize self. See help(type(self)) for accurate signature.
- add(self, obj)
- Add constraints to this set
If a constraint between the same images is already present, the constraint
with the lower error is kept and the other is removed.
In the case of equal errors the new constraint is kept
Args:
other (Constraint or sequence of Constraints): Constraint or Constraints to add
Raises:
ValueError: The constraint(s) to be added are from a different CompositeImage
- filter(self, *args, random=False, **kwargs)
- Returns a new ConstraintSet with only constraints that are
matching the specified filter.
Either a ConstraintFilter instance can be passed in or an object that
can be converted into a filter, ie a dictionary or a set of keyword arguments.
See ConstraintFilter for the full documentation on creating a filter.
Args:
obj: The filter or object to be converted into a filter. Can be many types:
If it is a ConstraintFilter or a callable it is applied as the filter
If it is a numpy bool array, it is used to filter constraints, matching
the order of self.constraints
If it is a set or list of pairs of indices, only constraints for those pairs are kept
limit (int): The maximum number of constraints returned
random (bool): If true the constraints are shuffled
Normally used in conjunction with the limit argument to select a random
sample of the constraints
sorted_by (function or str): key to sort constraints on
Either a function that can be passed as a key to sorted() or a string
that is an attribute of a constraint. Used to sort the constraints, normally
used with the limit argument
kwargs: Any keyword arguments are passed to a new ConstraintFilter() constructor
and applied as a filter
Returns:
A new ConstraintSet with the filtered Constraints
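A sketch of typical filter calls, using the arguments described above ('score' is one of
the constraint attributes listed in ConstraintSet.ATTRS; the cutoff values are illustrative):
good = constraints.filter(min_score=0.5)
sample = constraints.filter(random=True, limit=100)
best = constraints.filter(sorted_by='score', limit=100)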
Methods inherited from constitch.constraints.ConstraintSet:
- __contains__(self, obj)
- Tests if a Constraint or a pair is contained
- __getattr__(self, name)
- Some attributes of Constraint can be accessed from a ConstraintSet, returned as a numpy
array of the values for all constraints, in the order of self.keys()
- __getitem__(self, pair)
- __iter__(self)
- Iterates through all constraints in this set
- __len__(self)
- calculate(self, aligner=None, executor=None)
- Calculates new constraints using an alignment algorithm
For every constraint the provided aligner is invoked to calculate
a new constraint. See constitch.alignment for more information on
alignment.
Args:
aligner (constitch.Aligner): default self.composite.aligner
The aligner that is used to calculate the new constraints
executor (concurrent.futures.Executor): default self.composite.executor
A thread or process pool instance to parallelize the computation,
as some aligners can be quite slow
- debug(self, *args, **kwargs)
- A shorthand for self.composite.debug
- find(self, obj=None, **kwargs)
- Returns the first constraint to match a filter
Uses the same interface as self.filter but returns the first
constraint that matches the filter
- fit_model(self, model=None, outliers=False, random_state=12345)
- Fits a linear model to the constraints in this set
This learns the motion of the microscope stage, which can be used to fill in
constraints in areas where there are not enough features to align.
The model is trained on the relation between the offset in image positions,
that is box2.position - box1.position, and the offset specified in dx and dy.
Args:
model (sklearn base model): default constitch.SimpleOffsetModel()
The linear model to train, it should be a sklearn model class, meaning
it has a fit and predict method. The fit method is called
with X as a 4 column matrix containing the x and y positions of image1 and image2
for all constraints, and y as a 2 column matrix with dx and dy for all constraints
outliers (bool): Whether to use an outlier resistant model
If set to True the provided model is wrapped in sklearn.linear_model.RANSACRegressor,
and the inlier and outlier classifications are added onto the returned
result as result.inliers and result.outliers. These are new ConstraintSets
containing only the inliers and outliers that the model classified
random_state: the random state passed to RANSACRegressor when outliers=True
Returns:
An Aligner class that can be used to calculate new constraints, using
the linear model fit here.
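A sketch of fitting the stage model with outlier rejection, as described above (overlapping
is assumed to be the implicit ConstraintSet from composite.constraints(touching=True)):
stage_model = constraints.fit_model(outliers=True)
constraints = stage_model.inliers                      # keep only inlier constraints
model_constraints = overlapping.calculate(stage_model) # use the fitted model as an Aligner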
- items(self)
- keys(self)
- merge(self, other, *others)
Returns a new ConstraintSet with combined constraints from this and other sets
As with add(), if constraints between the same image pair are present in both sets
then the constraint with the lowest error is kept, with ties defaulting to constraints in the
last passed in set
Args:
other (Constraint or sequence of Constraints): Constraint or Constraints to
merge with new set
Raises:
ValueError: The constraints to be merged are from a different CompositeImage
- neighborhood_difference(self, constraint)
- A metric that measures how well this constraint matches the image
positions, taking into account neighboring constraints.
- neighboring(self, constraint, depth=1)
- Returns a new ConstraintSet containing only constraints that
are connected to an initial constraint or image
The starting location is specified by passing either a constraint, an
image index, or a sequence of either. Constraints are added by BFS to
the requested depth. Any constraints provided as a starting location
are not included in the resulting set
- progress(self, iter, **kwargs)
- A shorthand for self.composite.progress
- remove(self, other)
- Remove constraints from this set
Args:
other (Constraint, (int, int) or sequence of either):
The constraints or pairs to be removed
Raises:
KeyError: The specified constraint does not exist
- solve(self, solver='mae', **kwargs)
- Solve the constraints to get a global position for each image
Args:
solver (constitch.Solver or str): default constitch.LinearSolver()
The solver method that is used to combine the overconstrained
system of constraints and optimize for the best global positions.
Options include 'mse' for standard least squares solving, 'mae'
for solving while minimizing mean absolute error, 'huber' for
minimizing the huber loss, or any subclass of constitch.Solver.
More info can be found in constitch.solving
**kwargs: Arguments passed to the constructor of the solver.
Any arguments specified here are passed to the constructor
of the solver, for example if solver='huber' epsilon=5 could
be included to change the default epsilon parameter for huber loss.
Cannot be specified if solver is an already instantiated constitch.Solver
instance.
Returns:
The solver instance, with an attribute positions containing a dict
mapping image indices to their global positions
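A sketch of solving with a non-default solver, following the argument description above:
solution = constraints.solve(solver='huber', epsilon=5)
composite.setpositions(solution)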
- values(self)
Readonly properties inherited from constitch.constraints.ConstraintSet:
- composite
The composite that contains all the images of the constraints in this
set. If constraints from a different composite are added, an error will be
raised. If this instance contains no constraints this will return None
Data descriptors inherited from constitch.constraints.ConstraintSet:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
Data and other attributes inherited from constitch.constraints.ConstraintSet:
- ATTRS = ['dx', 'dy', 'score', 'error', 'overlap', 'overlap_x', 'overlap_y', 'overlap_ratio', 'overlap_ratio_x', 'overlap_ratio_y', 'size', 'difference']
class CompositeImage(builtins.object)
CompositeImage(images=None, positions=None, boxes=None, scale='pixel', channel_axis=None, grid_size=None, tile_shape=None, overlap=0.1, aligner=None, precalculate=False, debug=True, progress=False, executor=None)
This class encapsulates the whole stitching process, the smallest example of stitching is
shown below:
composite = constitch.CompositeImage()
composite.add_images(images, initial_positions)
overlapping = composite.constraints(touching=True)
constraints = overlapping.calculate()
constraints = constraints.filter(min_score=0.5)
solution = constraints.solve()
composite.setpositions(solution)
full_image = composite.stitch()
This class is meant to be adaptable to many different stitching use cases, and each step
can be customized and configured. The general steps for stitching a group of images are as follows:
Creating the composite
To begin we have to instantiate the CompositeImage class.
The full method signature can be found at
__init__() but some important parameters are described below:
The executor is what the composite uses to perform intensive computation
tasks, namely calculating the alignment of all the images. If provided
it should be a concurrent.futures.Executor object, for example
concurrent.futures.ThreadPoolExecutor. Importantly, concurrent.futures.ProcessPoolExecutor
does not work very well as the images need to be passed to the executor
and in the case of ProcessPoolExecutor this means they need to be pickled
and unpickled to get to the other process. ThreadPoolExecutor doesn't need
this as the threads can share memory, but it doesn't take full advantage of
multithreading as the python GIL prevents python code from running in parallel.
Luckily most of the intensive computation happens in numpy functions which don't
hold the GIL, so ThreadPoolExecutor is usually the best choice.
The arguments debug and progress define how logging should happen with the composite.
If debug is True, logging messages summarizing the results of different operations
will be printed out to stderr. Setting it to False will disable these messages.
If progress is True, a progress bar will be printed out during long running steps.
The default progress bar is a simple ascii bar that works whether the output is
a tty or a file, but instead of setting it to True you can pass in a replacement,
such as tqdm.
An example of setting up the composite would be something similar to this:
import constitch
import concurrent.futures
import tqdm
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
composite = constitch.CompositeImage(executor=executor, debug=True, progress=tqdm.tqdm)
Adding the images
Once the composite is set up we can add the images, and this is done through the add_images()
method. There are a couple ways of adding images, depending on how much information you have on the images:
First of all, you can just add the images with no positions, meaning they will all default to being at 0,0.
This will work out, as when you calculate constraints between images it will calculate constraints for all
possible image pairs and filter out constraints that have no overlap. However the number of constraints that have
to be calculated grows quadratically with the number of images, so if you have positional information on your
images it is best to pass that in to help with the alignment. If you would like to use this method but are running
into computational limits, the section on pruning constraints below can be helpful.
composite.add_images(images)
If your images are taken on a grid you can pass in their positions as grid positions, by setting the scale
parameter to 'tile'. For example:
positions=[(0,0), (0,1), (1,0), (1,1)]
composite.add_images(images, positions=positions, scale='tile')
Now when constraints are calculated only nearby images will be checked, speeding up computation greatly.
If your images are not on a grid, or you have the exact positions they were taken at, you can also specify
positions in pixels instead of grid positions. To do this simply set the scale parameter to 'pixel'
and the positions passed in will be interpreted as pixel coordinates. When specifying pixel
positions another parameter that is available is the uncertainty of the provided positions. If you knew
to the pixel where each image is you probably wouldn't need this library, but there is still a
wide range of possibilities, from being precise to within a couple pixels to only providing general locations.
The error on positions can be provided to add_images with the keyword argument positional_error or by directly
setting CompositeImage.positional_error. This will be a pixel value that acts as error bars, meaning the
image positions are plus or minus that value.
When specifying positions, you can also specify more than two dimensions. The first two are the x and y
dimensions of the images, but a z dimension can be added if you are doing 3 dimensional stitching or in our case
if you are doing fisseq and want to make sure all the cycles line up perfectly. In the case of fisseq,
you can add the cycle index as the z coordinate for the image.
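As a sketch, rough pixel positions accurate to about 50 pixels could be added along with a
cycle index as a third coordinate (the values and the handling of the third axis are illustrative):
positions = [(0, 0, 0), (2000, 0, 0), (0, 0, 1), (2000, 0, 1)]
composite.add_images(images, positions=positions, positional_error=50)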
Calculating constraints
Once images have been added we need to calculate the constraints between overlapping images.
Creating, calculating, filtering, and solving constraints is an integral part of the stitching
algorithm, and a subpackage is dedicated to them, constitch.constraints. This module contains the
Constraint class, as well as the ConstraintSet and ConstraintFilter classes. Here we will go over the
typical usage of them but for more information consult constitch.constraints.
The core idea of constraints is that each constraint stores a positional offset between two images, "constraining"
them to have that difference in positions. Constraints are useful for stitching because most alignment algorithms
only work with a pair of images, meaning they take as input and provide as output constraints.
We can always create new constraints using the attribute CompositeImage.constraints, which generates
constraints using the image positions specified in the composite. To retrieve constraints between all touching
images we can do:
overlapping = composite.constraints(touching=True)
The constraints here are considered "implicit" constraints as they were created from the provided image positions,
which are not precise. The variable overlapping is a ConstraintSet instance, which acts like a dictionary, holding
a set of constraints. Specific constraints can be retrieved by indexing into the set with the indices of the two images
the constraint is between, eg overlapping[1,2] will return the constraint between image 1 and image 2 in the composite.
As mentioned before these constraints are not precise, and the next step in stitching is passing
them to an alignment algorithm to refine them. This is done with the ConstraintSet.calculate()
method, like so:
constraints = overlapping.calculate()
This creates a new ConstraintSet constraints, which holds all our new constraints generated by the Aligner class
passed to calculate. By default constitch.FFTAligner() is used, which runs the phase correlation algorithm to
find the offset with the maximum correlation.
Filtering constraints
With the constraints calculated we can filter out any erroneous constraints using the constraint scores:
constraints = constraints.filter(min_score=0.5)
This will only keep constraints with a score >= 0.5, which should eliminate almost all of the constraints
that are not accurate. To further filter the constraints we can fit a linear model to the constraints that remain:
stage_model = constraints.fit_model(outliers=True)
constraints = stage_model.inliers
When fitting a linear model to the constraints, we can use an outlier resistant model by specifying outliers=True,
which uses RANSAC to classify some constraints as outliers. We can additionally use the stage model to estimate
constraints that were filtered out before:
model_constraints = overlapping.calculate(stage_model)
As you can see here, the stage_model is an Aligner class that can be used to calculate new constraints,
with the same interface as the FFTAligner. We can merge these two constraint sets together to replace any
constraints that were filtered out with the modeled constraints:
constraints = constraints.merge(model_constraints)
Solving constraints
The final step is to solve the constraints that have been calculated, which we can do with the ConstraintSet.solve()
method. This converts each constraint into two linear equations, and solves the system of equations to find the
image positions that minimize alignment error. We can then apply these positions to the composite with CompositeImage.setpositions():
solution = constraints.solve()
composite.setpositions(solution)
Creating the final image
To get the final merged composite image we can call the CompositeImage.stitch() function, which combines all individual images
based on the current image positions:
final_image = composite.stitch()
The way this function merges the images together can be configured by passing a Merger instance;
by default it will use MeanMerger. More information on mergers can be found in the docs of the constitch.Merger class
and the constitch.merging module.
For each of these steps there are many different parameters to configure the behaviour; make sure to check out the documentation
for each method to see the details on how they work.
Methods defined here:
- __init__(self, images=None, positions=None, boxes=None, scale='pixel', channel_axis=None, grid_size=None, tile_shape=None, overlap=0.1, aligner=None, precalculate=False, debug=True, progress=False, executor=None)
- Initialize self. See help(type(self)) for accurate signature.
- add_image(self, image, position=None, box=None, scale='pixel', imagescale=1)
- add_images(self, images, positions=None, boxes=None, scale='pixel', channel_axis=None, imagescale=1)
- Adds images to the composite
Args:
images (np.ndarray shape (N, W, H) or list of N np.ndarrays shape (W, H) or list of strings):
The images that will be stitched together. Can pass a list of
paths that will be opened by imageio.v3.imread when needed.
Passing paths will require less memory as images are not stored,
but will increase computation time.
positions (np.ndarray shape (N, D)):
Specifies the estimated positions of each image. These approximate values are
used to decide which images are overlapping. The values are interpreted
using the scale argument; by default they are pixel values.
boxes (sequence of BBox):
An alternative to specifying the positions, the full bounding boxes of every image can also
be passed in. The units of the boxes are interpreted the same as image positions,
with the scale argument deciding their relation to the scale of pixels.
scale ('pixel', 'tile', float, or sequence):
The scale argument is used to interpret the position values given.
'pixel' means the values are pixel values, equivalent to putting 1.
'tile' means the values are indices in a tile grid, eg a unit of 1 is
the width of an image.
a float value means the position values are in units where one unit is
the given number of pixels.
If a sequence is given, each element can be any of the previous values,
which are applied to each axis.
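A sketch of the scale argument (stage_positions and other_images are hypothetical, and
6.5 pixels per unit is an illustrative value):
composite.add_images(images, positions=[(0,0), (0,1), (1,0), (1,1)], scale='tile')
composite.add_images(other_images, positions=stage_positions, scale=6.5)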
- add_split_image(self, image, grid_size=None, tile_shape=None, overlap=0.1, channel_axis=None)
- Adds an image split into a number of tiles. This can be used to divide up
a large image into smaller pieces for efficient processing. The resulting
images are guaranteed to all be the same size.
A common pattern would be:
composite.add_split_image(image, 10)
for i in range(len(composite.images)):
composite.images[i] = process(composite.images[i])
result = composite.stitch_images()
image: ndarray
the image that will be split into tiles
grid_size: int or (int, int)
the number of tiles to split the image into. Either this or tile_shape
should be specified.
tile_shape: (int, int)
The shape of the resulting tiles, if grid_size isn't specified the maximum
number of tiles that fit in the image are extracted. Whether specified or not,
the size of all tiles created is guaranteed to be uniform.
overlap: float, int or (float or int, float or int)
The amount of overlap between neighboring tiles. Zero will result in no overlap,
a floating point number represents a percentage of the size of the tile, and an
integer number represents a flat pixel overlap. The overlap is treated as a lower bound,
as it is not always possible to get the exact overlap requested due to rounding issues,
and in some cases more overlap will exist between some tiles
- align_disconnected_regions(self, num_test_points=0.05, expand_range=5)
- Looks at the current constraints in this composite and sees if there are any images or
groups of images that are fully disconnected from the rest of the images. If any are found,
an attempt is made to join them back together by calculating select constraints between the
two groups
- calc_score_threshold(self, num_samples=None, random_state=12345)
- Estimates a threshold for selecting constraints with good overlap.
Done by calculating random constraints and using a gaussian mixture model
to distinguish random constraints from real constraints
Args:
num_samples (float): optional
The number of fake constraints to be generated, defaults to 0.25*len(images).
In general the more samples the better the estimate, at the expense of speed
random_state (int): Used as a seed to get reproducible results
Returns (float):
threshold for score where all scores lower are likely to be bad constraints
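A sketch of using the estimated threshold to filter calculated constraints:
threshold = composite.calc_score_threshold()
constraints = constraints.filter(min_score=threshold)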
- constraint_error(self, i, j, constraint)
- copy(self, **kwargs)
- Creates a full copy of this composite. The only thing shared between this composite
and the new copy is the raw image data.
- html_summary(self, path, score_func=None)
- layer(self, index)
- Returns a SubComposite with only images that are on the specified layer, that is
all images where box.position[2] == index.
Layers can be created when calling merge() with new_layer=True
or manually by specifying a third dimension when adding images
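For example, if images were added with a cycle index as their third coordinate (see the
class docstring), the first cycle could be selected with:
cycle0 = composite.layer(0)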
- merge(self, other_composite, *other_constraint_sets, new_layer=False, align_coords=False)
- Adds all images and constraints from another montage into this one.
other_composite: CompositeImage
Another composite instance that will be added to this one. All images from
it are added to this instance. All image positions are added, maintaining
the scale_factors of both composites.
Returns: list of indices
returns the list of indices of the images added from the other composite.
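A sketch of merging another composite onto a new layer (new_layer is taken from the
method signature above):
new_indices = composite.merge(other_composite, new_layer=True)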
- pair_func(self)
- plot_scores(self, path, constraints=None, score_func=None, axis_size=12, constraint_multiplier=1)
- print_mem_usage(self)
- random_pair_func(self)
- resized_image(self, index, scale_x, scale_y)
- Returns the image at self.images[index] but upscaled or downscaled
by scale_x and scale_y.
Args:
index (int): index of image
scale_x (int, float, fractions.Fraction): The scale multiplier across the x axis
scale_y (int, float, fractions.Fraction): The scale multiplier across the y axis
- score_heatmap(self, path, score_func=None)
- set_aligner(self, aligner, rescore_constraints=False)
- set_executor(self, executor)
- set_logging(self, debug=True, progress=False)
- set_scale(self, scale_factor)
- Sets the scale factor of the composite. Normally this doesn't need to be changed,
however if you are trying to stitch together images taken at different magnifications you
may need to modify the scale factor.
scale_factor: float or int
Scale of images in this composite, as a multiplier. Eg a scale_factor of
10 will result in each pixel in images corresponding to 10 pixels in the
output of functions like `CompositeImage.stitch_images()` or when merging composites together.
- setimages(self, images)
- Updates the images of this composite.
images (sequence or dict): The new images to be set
If a sequence, it must have the same length as self.images. Each
image is updated with the corresponding new image in the sequence.
If a dict, it must map from indices within the range [0, len(self.images))
to new images, that are set at said indices.
All images must be numpy arrays of shape either (W, H) or (W, H, C).
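A sketch of updating a subset of images with a dict (processed_image is a hypothetical
array with the same width and height as the original):
composite.setimages({0: processed_image})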
- setpositions(self, positions)
- Applies new positions to images in this composite. positions is either a dict
mapping image indices to new positions, a sequence of new positions, or a constitch.Solver
class that has had solve() run on it, usually from ConstraintSet.solve
- stitch(self, merger='mean', indices=None, real_images=None, out=None, bg_value=None, return_bg_mask=False, mins=None, maxes=None, keep_zero=False, use_executor=True, prevent_resize=False, **kwargs)
- Combines images in the composite into a single image
merger: str or merging.Merger instance
The merger used to combine overlapping regions of images. If a string it is mapped to a Merger
class as follows:
"mean": merging.MeanMerger,
"efficient_mean": merging.EfficientMeanMerger,
"last": merging.LastMerger,
"nearest": merging.NearestMerger,
"efficient_nearest": merging.EfficientNearestMerger,
indices: sequence of int
Indices of images in the composite to be stitched together
real_images: sequence of np.ndarray
An alternative image list to be used in the stitching, instead of
the stored images. Must be same length and each image must have the
first two dimensions the same size as self.images
bg_value: scalar or array
Value to fill empty areas of the image.
return_bg_mask: bool
If True a boolean mask of the background, pixels with no images
in them, is returned.
keep_zero: bool
Whether or not to keep the origin in the result. If true this could
result in extra blank space, which might be necessary when lining up
multiple images. Similar to mins=[0,0], except that if image positions
are negative they won't be cropped out
use_executor: bool, default True
Whether or not to use self.executor when adding tiles, which can allow
for multithreading. If true, multiple non overlapping tiles are added at once,
to speed up stitching. Multithreading may be disabled by the Merger class
being used, see merging.Merger for more info
prevent_resize: bool, default False
If true an error is raised when an image would be resized
Returns: np.ndarray
image stitched together
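A sketch of stitching with a different merger and a background fill value, using the
options listed above:
final_image = composite.stitch(merger='efficient_mean', bg_value=0)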
- subcomposite(self, indices, **kwargs)
- Returns a new composite with a subset of the images and constraints in this one.
The images and positions are shared, so modifying them on the new composite will
change them on the original.
indices: sequence of ints, sequence of bools, function
A way to select the images to be included in the new composite. Can be
a sequence of indices, a sequence of boolean values the same length as the images,
or a function used to select images.
kwargs: arguments passed to the constructor of the subcomposite
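For example (the index values are illustrative):
sub = composite.subcomposite([0, 1, 2])  # shares images and positions with the original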
- to_obj(self)
Class methods defined here:
- from_obj(obj, **kwargs) from builtins.type
Readonly properties defined here:
- positions
Data descriptors defined here:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class SequentialExecutor(concurrent.futures._base.Executor)
- Method resolution order:
- SequentialExecutor
- concurrent.futures._base.Executor
- builtins.object
Methods defined here:
- submit(self, func, *args, **kwargs)
- Submits a callable to be executed with the given arguments.
Schedules the callable to be executed as fn(*args, **kwargs) and returns
a Future instance representing the execution of the callable.
Returns:
A Future representing the given call.
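A sketch of using this executor to run stitching work sequentially in the calling thread,
for example when debugging (whether SequentialExecutor is exported from the top-level
constitch package is an assumption):
executor = constitch.SequentialExecutor()
composite = constitch.CompositeImage(executor=executor)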
Methods inherited from concurrent.futures._base.Executor:
- __enter__(self)
- __exit__(self, exc_type, exc_val, exc_tb)
- map(self, fn, *iterables, timeout=None, chunksize=1)
- Returns an iterator equivalent to map(fn, iter).
Args:
fn: A callable that will take as many arguments as there are
passed iterables.
timeout: The maximum number of seconds to wait. If None, then there
is no limit on the wait time.
chunksize: The size of the chunks the iterable will be broken into
before being passed to a child process. This argument is only
used by ProcessPoolExecutor; it is ignored by
ThreadPoolExecutor.
Returns:
An iterator equivalent to: map(func, *iterables) but the calls may
be evaluated out-of-order.
Raises:
TimeoutError: If the entire result iterator could not be generated
before the given timeout.
Exception: If fn(*args) raises for any values.
- shutdown(self, wait=True, *, cancel_futures=False)
- Clean-up the resources associated with the Executor.
It is safe to call this method several times. Otherwise, no other
methods can be called after this one.
Args:
wait: If True then shutdown will not return until all running
futures have finished executing and the resources used by the
executor have been reclaimed.
cancel_futures: If True then shutdown will cancel all pending
futures. Futures that are completed or running will not be
cancelled.
Data descriptors inherited from concurrent.futures._base.Executor:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class SubCompositeBBoxList(BBoxList)
SubCompositeBBoxList(boxes, mapping)
- Method resolution order:
- SubCompositeBBoxList
- BBoxList
- builtins.object
Methods defined here:
- __init__(self, boxes, mapping)
- Initialize self. See help(type(self)) for accurate signature.
- append(self, box)
- Add a new BBox to this list. Typically users should not need to use
this, instead add images through the CompositeImage.add_images and similar
methods
- copy(self)
Readonly properties defined here:
- points2
- The higher point of all bounding boxes in the list as a 2d array. See BBox.point2.
Setting this will update all sizes, same as assigning to BBox.point2
Data descriptors defined here:
- points1
- The lower point of all bounding boxes in the list as a 2d array. See BBox.point1.
- positions
- The positions of all boxes in the list, as a 2d numpy array. See BBox.position
for specifics about the positions of the boxes.
Setting this will update all positions, and slices can be modified in place;
for example self.positions[:,0] += 100 increases all x positions by 100.
- sizes
- The sizes of all boxes in the list as a 2d array. See BBox.size.
Setting this will update all sizes, same as assigning to BBox.size
Methods inherited from BBoxList:
- __contains__(self, box)
- __getitem__(self, index)
- __iter__(self)
- __len__(self)
- __repr__(self)
- Return repr(self).
- __str__(self)
- Return str(self).
- index(self, box)
- resize(self, n_dims)
- Changes the number of dimensions all boxes in the list have. It is enforced
that all boxes in the list have the same number of dimensions. When adding a dimension
it is filled with zeros
- setpositions(self, positions)
- Applies new positions to all boxes
Args:
positions (sequence of positions, dict of positions, callable):
Specifies a change in positions for boxes, depending on the type:
If a numpy array, the new positions are set as self.positions, maintaining sizes of boxes.
If a dict of positions, each entry will be set as the position of the box at the key.
If a callable, it is invoked for each box. If it returns a new position it is applied to the box
Readonly properties inherited from BBoxList:
- centers
- The center pixel of all image boxes, rounded to the nearest pixel.
Data descriptors inherited from BBoxList:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
class SubCompositeConstraintSet(CompositeConstraintSet)
SubCompositeConstraintSet(composite, pair_func, random_pair_func, mapping)
- Method resolution order:
- SubCompositeConstraintSet
- CompositeConstraintSet
- constitch.constraints.ConstraintSet
- builtins.object
Methods defined here:
- __getitem__(self, pair)
- __init__(self, composite, pair_func, random_pair_func, mapping)
- Initialize self. See help(type(self)) for accurate signature.
Methods inherited from CompositeConstraintSet:
- __call__(self, *args, **kwargs)
- Call self as a function.
- add(self, obj)
- Add constraints to this set
If a constraint between the same images is already present, the constraint
with the lower error is kept and the other is removed.
In the case of equal errors the new constraint is kept
Args:
other (Constraint or sequence of Constraints): Constraint or Constraints to add
Raises:
ValueError: The constraint(s) to be added are from a different CompositeImage
- filter(self, *args, random=False, **kwargs)
- Returns a new ConstraintSet with only constraints that are
matching the specified filter.
Either a ConstraintFilter instance can be passed in or an object that
can be converted into a filter, ie a dictionary or a set of keyword arguments.
See ConstraintFilter for the full documentation on creating a filter.
Args:
obj: The filter or object to be converted into a filter. Can be many types:
If it is a ConstraintFilter or a callable it is applied as the filter
If it is a numpy bool array, it is used to filter constraints, matching
the order of self.constraints
If it is a set or list of pairs of indices, only constraints for those pairs are kept
limit (int): The maximum number of constraints returned
random (bool): If true the constraints are shuffled
Normally used in conjunction with the limit argument to select a random
sample of the constraints
sorted_by (function or str): key to sort constraints on
Either a function that can be passed as a key to sorted() or a string
that is an attribute of a constraint. Used to sort the constraints, normally
used with the limit argument
kwargs: Any keyword arguments are passed to a new ConstraintFilter() constructor
and applied as a filter
Returns:
A new ConstraintSet with the filtered Constraints
Methods inherited from constitch.constraints.ConstraintSet:
- __contains__(self, obj)
- Tests if a Constraint or a pair is contained
- __getattr__(self, name)
- Some attributes of Constraint can be accessed from a ConstraintSet, returned as a numpy
array of the values for all constraints, in the order of self.keys()
- __iter__(self)
- Iterates through all constraints in this set
- __len__(self)
- calculate(self, aligner=None, executor=None)
- Calculates new constraints using an alignment algorithm
For every constraint the provided aligner is invoked to calculate
a new constraint. See constitch.alignment for more information on
alignment.
Args:
aligner (constitch.Aligner): default self.composite.aligner
The aligner that is used to calculate the new constraints
executor (concurrent.futures.Executor): default self.composite.executor
A thread or process pool instance to parallelize the computation,
as some aligners can be quite slow
- debug(self, *args, **kwargs)
- A shorthand for self.composite.debug
- find(self, obj=None, **kwargs)
- Returns the first constraint to match a filter
Uses the same interface as self.filter but returns the first
constraint that matches the filter
- fit_model(self, model=None, outliers=False, random_state=12345)
- Fits a linear model to the constraints in this set
This learns the motion of the microscope stage, which can be used to fill in
constraints in areas where there are not enough features to align.
The model is trained on the relation between the offset in image positions,
that is box2.position - box1.position, and the offset specified in dx and dy.
Args:
model (sklearn base model): default constitch.SimpleOffsetModel()
The linear model to train, it should be a sklearn model class, meaning
it has a fit and predict method. The fit method is called
with X as a 4 column matrix containing the x and y positions of image1 and image2
for all constraints, and y as a 2 column matrix with dx and dy for all constraints
outliers (bool): Whether to use an outlier resistant model
If set to True the provided model is wrapped in sklearn.linear_model.RANSACRegressor,
and the inlier and outlier classifications are added onto the returned
result as result.inliers and result.outliers. These are new ConstraintSets
containing only the inliers and outliers that the model classified
random_state: the random state passed to RANSACRegressor when outliers=True
Returns:
An Aligner class that can be used to calculate new constraints, using
the linear model fit here.
- items(self)
- keys(self)
- merge(self, other, *others)
Returns a new ConstraintSet with combined constraints from this and other sets
As with add(), if constraints between the same image pair are present in both sets
then the constraint with the lowest error is kept, with ties defaulting to constraints in the
last passed in set
Args:
other (Constraint or sequence of Constraints): Constraint or Constraints to
merge with new set
Raises:
ValueError: The constraints to be merged are from a different CompositeImage
- neighborhood_difference(self, constraint)
- A metric that measures how well this constraint matches the image
positions, taking into account neighboring constraints.
- neighboring(self, constraint, depth=1)
- Returns a new ConstraintSet containing only constraints that
are connected to an initial constraint or image
The starting location is specified by passing either a constraint, an
image index, or a sequence of either. Constraints are added by BFS to
the requested depth. Any constraints provided as a starting location
are not included in the resulting set
- progress(self, iter, **kwargs)
- A shorthand for self.composite.progress
- remove(self, other)
- Remove constraints from this set
Args:
other (Constraint, (int, int) or sequence of either):
The constraints or pairs to be removed
Raises:
KeyError: The specified constraint does not exist
- solve(self, solver='mae', **kwargs)
- Solve the constraints to get a global position for each image
Args:
solver (constitch.Solver or str): default constitch.LinearSolver()
The solver method that is used to combine the overconstrained
system of constraints and optimize for the best global positions.
Options include 'mse' for standard least squares solving, 'mae'
for solving while minimizing mean absolute error, 'huber' for
minimizing the huber loss, or any subclass of constitch.Solver.
More info can be found in constitch.solving
**kwargs: Arguments passed to the constructor of the solver.
Any arguments specified here are passed to the constructor
of the solver, for example if solver='huber' epsilon=5 could
be included to change the default epsilon parameter for huber loss.
Cannot be specified if solver is an already instantiated constitch.Solver
instance.
Returns:
The solver instance, with an attribute positions containing a dict
mapping image indices to their global positions
- values(self)
Readonly properties inherited from constitch.constraints.ConstraintSet:
- composite
The composite that contains all the images of the constraints in this
set. If constraints from a different composite are added, an error will be
raised. If this instance contains no constraints this will return None
Data descriptors inherited from constitch.constraints.ConstraintSet:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)
Data and other attributes inherited from constitch.constraints.ConstraintSet:
- ATTRS = ['dx', 'dy', 'score', 'error', 'overlap', 'overlap_x', 'overlap_y', 'overlap_ratio', 'overlap_ratio_x', 'overlap_ratio_y', 'size', 'difference']
class SubCompositeImage(CompositeImage)
SubCompositeImage(composite, mapping, layer=None, debug=True, progress=False, executor=None, aligner=None)
A CompositeImage made from a subset of the images in another CompositeImage instance.
Typically returned from CompositeImage.layer() or CompositeImage.subcomposite().
SubCompositeImage instances share data with the CompositeImage they were created from. If
SubCompositeImage.boxes or SubCompositeImage.images are modified, the changes will
be reflected in the other composite, and vice versa.
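A sketch of the shared-data behaviour described above (the layer index is illustrative):
sub = composite.layer(0)
sub.boxes.positions[:,0] += 10  # also moves those images in the parent composite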
- Method resolution order:
- SubCompositeImage
- CompositeImage
- builtins.object
Methods defined here:
- __init__(self, composite, mapping, layer=None, debug=True, progress=False, executor=None, aligner=None)
- Initialize self. See help(type(self)) for accurate signature.
- contains(self, index)
- convert(self, constraints)
- copy(self, **kwargs)
- Creates a full copy of this composite. The only thing shared between this composite
and the new copy is the raw image data.
- layer(self, index)
- Returns a SubComposite with only images that are on the specified layer, that is
all images where box.position[2] == index.
Layers can be created when calling merge() with new_layer=True
or manually by specifying a third dimension when adding images
- merge(self, other_composite)
- Adds all images and constraints from another montage into this one.
other_composite: CompositeImage
Another composite instance that will be added to this one. All images from
it are added to this instance. All image positions are added, maintaining
the scale_factors of both composites.
Returns: list of indices
returns the list of indices of the images added from the other composite.
- pair_func(self)
- random_pair_func(self)
- subcomposite(self, indices)
- Returns a new composite with a subset of the images and constraints in this one.
The images and positions are shared, so modifying them on the new composite will
change them on the original.
indices: sequence of ints, sequence of bools, function
A way to select the images to be included in the new composite. Can be
a sequence of indices, a sequence of boolean values the same length as the images,
or a function used to select images.
kwargs: arguments passed to the constructor of the subcomposite
Readonly properties defined here:
- debug
- positional_error
- progress
- scale
Data descriptors defined here:
- multichannel
Methods inherited from CompositeImage:
- add_image(self, image, position=None, box=None, scale='pixel', imagescale=1)
- add_images(self, images, positions=None, boxes=None, scale='pixel', channel_axis=None, imagescale=1)
- Adds images to the composite
Args:
images (np.ndarray shape (N, W, H) or list of N np.ndarrays shape (W, H) or list of strings):
The images that will be stitched together. Can pass a list of
paths that will be opened by imageio.v3.imread when needed.
Passing paths will require less memory as images are not stored,
but will increase computation time.
positions (np.ndarray shape (N, D)):
Specifies the estimated positions of each image. These approximate values are
used to decide which images are overlapping. The values are interpreted
using the scale argument; by default they are pixel values.
boxes (sequence of BBox):
An alternative to specifying the positions, the full bounding boxes of every image can also
be passed in. The units of the boxes are interpreted the same as image positions,
with the scale argument deciding their relation to the scale of pixels.
scale ('pixel', 'tile', float, or sequence):
The scale argument is used to interpret the position values given.
'pixel' means the values are pixel values, equivalent to putting 1.
'tile' means the values are indices in a tile grid, eg a unit of 1 is
the width of an image.
a float value means the position values are in units where one unit is
the given number of pixels.
If a sequence is given, each element can be any of the previous values,
which are applied to each axis.
- add_split_image(self, image, grid_size=None, tile_shape=None, overlap=0.1, channel_axis=None)
- Adds an image split into a number of tiles. This can be used to divide up
a large image into smaller pieces for efficient processing. The resulting
images are guaranteed to all be the same size.
A common pattern would be:
composite.add_split_image(image, 10)
for i in range(len(composite.images)):
composite.images[i] = process(composite.images[i])
result = composite.stitch_images()
image: ndarray
the image that will be split into tiles
grid_size: int or (int, int)
the number of tiles to split the image into. Either this or tile_shape
should be specified.
tile_shape: (int, int)
The shape of the resulting tiles, if grid_size isn't specified the maximum
number of tiles that fit in the image are extracted. Whether specified or not,
the size of all tiles created is guaranteed to be uniform.
overlap: float, int or (float or int, float or int)
The amount of overlap between neighboring tiles. Zero will result in no overlap,
a floating point number represents a percentage of the size of the tile, and an
integer number represents a flat pixel overlap. The overlap is treated as a lower bound,
as it is not always possible to get the exact overlap requested due to rounding issues,
and in some cases more overlap will exist between some tiles
- align_disconnected_regions(self, num_test_points=0.05, expand_range=5)
- Looks at the current constraints in this composite and sees if there are any images or
groups of images that are fully disconnected from the rest of the images. If any are found,
an attempt is made to join them back together by calculating select constraints between the
two groups
- calc_score_threshold(self, num_samples=None, random_state=12345)
- Estimates a threshold for selecting constraints with good overlap.
Done by calculating random constraints and using a gaussian mixture model
to distinguish random constraints from real constraints
Args:
num_samples (float): optional
The number of fake constraints to be generated, defaults to 0.25*len(images).
In general the more samples the better the estimate, at the expense of speed
random_state (int): Used as a seed to get reproducible results
Returns (float):
threshold for score where all scores lower are likely to be bad constraints
- constraint_error(self, i, j, constraint)
- html_summary(self, path, score_func=None)
- plot_scores(self, path, constraints=None, score_func=None, axis_size=12, constraint_multiplier=1)
- print_mem_usage(self)
- resized_image(self, index, scale_x, scale_y)
- Returns the image at self.images[index] but upscaled or downscaled
by scale_x and scale_y.
Args:
index (int): index of image
scale_x (int, float, fractions.Fraction): The scale multiplier across the x axis
scale_y (int, float, fractions.Fraction): The scale multiplier across the y axis
- score_heatmap(self, path, score_func=None)
- set_aligner(self, aligner, rescore_constraints=False)
- set_executor(self, executor)
- set_logging(self, debug=True, progress=False)
- set_scale(self, scale_factor)
- Sets the scale factor of the composite. Normally this doesn't need to be changed,
however if you are trying to stitch together images taken at different magnifications you
may need to modify the scale factor.
scale_factor: float or int
Scale of images in this composite, as a multiplier. Eg a scale_factor of
10 will result in each pixel in images corresponding to 10 pixels in the
output of functions like `CompositeImage.stitch_images()` or when merging composites together.
- setimages(self, images)
- Updates the images of this composite.
images (sequence or dict): The new images to be set
If a sequence, it must have the same length as self.images. Each
image is updated with the corresponding new image in the sequence.
If a dict, it must map from indices within the range [0, len(self.images))
to new images, that are set at said indices.
All images must be numpy arrays of shape either (W, H) or (W, H, C).
- setpositions(self, positions)
- Applies new positions to images in this composite. positions is either a dict
mapping image indices to new positions, a sequence of new positions, or a constitch.Solver
class that has had solve() run on it, usually from ConstraintSet.solve
- stitch(self, merger='mean', indices=None, real_images=None, out=None, bg_value=None, return_bg_mask=False, mins=None, maxes=None, keep_zero=False, use_executor=True, prevent_resize=False, **kwargs)
- Combines images in the composite into a single image
merger: str or merging.Merger instance
The merger used to combine overlapping regions of images. If a string it is mapped to a Merger
class as follows:
"mean": merging.MeanMerger,
"efficient_mean": merging.EfficientMeanMerger,
"last": merging.LastMerger,
"nearest": merging.NearestMerger,
"efficient_nearest": merging.EfficientNearestMerger,
indices: sequence of int
Indices of images in the composite to be stitched together
real_images: sequence of np.ndarray
An alternative image list to be used in the stitching, instead of
the stored images. Must be same length and each image must have the
first two dimensions the same size as self.images
bg_value: scalar or array
Value to fill empty areas of the image.
return_bg_mask: bool
If True a boolean mask of the background, pixels with no images
in them, is returned.
keep_zero: bool
Whether or not to keep the origin in the result. If true this could
result in extra blank space, which might be necessary when lining up
multiple images. Similar to mins=[0,0], except that if image positions
are negative they won't be cropped out
use_executor: bool, default True
Whether or not to use self.executor when adding tiles, which can allow
for multithreading. If true, multiple non overlapping tiles are added at once,
to speed up stitching. Multithreading may be disabled by the Merger class
being used, see merging.Merger for more info
prevent_resize: bool, default False
If true an error is raised when an image would be resized
Returns: np.ndarray
image stitched together
- to_obj(self)
Class methods inherited from CompositeImage:
- from_obj(obj, **kwargs) from builtins.type
Readonly properties inherited from CompositeImage:
- positions
Data descriptors inherited from CompositeImage:
- __dict__
- dictionary for instance variables (if defined)
- __weakref__
- list of weak references to the object (if defined)