API Documentation
version 2.1.0
Ink Module
Overview
The ink module provides the foundation for transforming archaeological pencil drawings into publication-ready inked versions. Each function serves a specific purpose in the workflow, carefully preserving archaeological details while ensuring professional output quality.
Process Single Image
def process_single_image(
input_image_path_or_pil: Union[str, Image.Image],
prompt: str,
model_path: str,
output_dir: str = 'output',
use_fp16: bool = False,
output_name: Optional[str] = None,
contrast_scale: float = 1,
return_pil: bool = False,
patch_size: int = 512,
overlap: int = 64,
upscale: float = 1,
) -> Union[str, Image.Image]
This function handles the conversion of individual drawings, providing fine control over the processing parameters. Think of it as a digital artisan, carefully converting each drawing while maintaining archaeological accuracy.
Core Parameters
input_image_path_or_pil - Your drawing to process: either a file path or a PIL Image
prompt - Instructions for the model, typically "make it ready for publication"
model_path - Location of your trained model file
Optional Controls
contrast_scale · default: 1.0 - Fine-tunes the intensity of lines and shading. Values between 1.25 and 1.5 often work best for archaeological materials.
patch_size · default: 512 - Processing segment size. Like dividing a large drawing into manageable sections.
overlap · default: 64 - Controls smooth transitions between processed sections.
upscale · default: 1 - Upscaling or downscaling factor used during processing; the final output keeps the original image size.
Examples
Basic processing:
result = process_single_image(
"vessel_123.jpg",
prompt="make it ready for publication",
model_path="model_601.pkl"
)
Process Folder
def process_folder(
input_folder: str,
model_path: str,
prompt: str = "make it ready for publication",
output_dir: str = 'output',
use_fp16: bool = False,
contrast_scale: float = 1,
patch_size: int = 512,
overlap: int = 64,
file_extensions: tuple = ('.jpg', '.jpeg', '.png'),
upscale: float = 1,
) -> dict
Batch processes a directory of archaeological drawings, maintaining consistency across the entire collection. The function tracks progress and generates detailed logs and comparisons.
Core Parameters
input_folder - Directory containing your archaeological drawings
model_path - Location of your trained model file
Optional Controls
contrast_scale · default: 1.0 - Global contrast adjustment for the entire batch
file_extensions · default: ('.jpg', '.jpeg', '.png') - Supported file types to process
upscale · default: 1 - Upscaling or downscaling factor used during processing; the final output keeps the original image size.
Returns
Returns a dictionary containing:
- successful: Number of successfully processed images
- failed: Number of failed conversions
- failed_files: List of problematic files
- average_time: Mean processing time per image
- log_file: Path to the detailed processing log
- comparison_dir: Path to before/after comparisons
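The summary dictionary makes it easy to flag files for a retry pass. A hedged sketch using the documented keys; the values below are purely illustrative:

```python
# Illustrative summary using the documented keys (values are made up)
results = {
    "successful": 18,
    "failed": 2,
    "failed_files": ["sherd_07.jpg", "plan_12.png"],
    "average_time": 3.4,
    "log_file": "output/processing_log.txt",
    "comparison_dir": "output/comparisons",
}

# Queue the problematic drawings for a second pass, e.g. with a
# different contrast_scale
retry_queue = results["failed_files"] if results["failed"] else []
print(f"{results['successful']} ok, {results['failed']} failed")
```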
Examples
Process all drawings in a directory:
results = process_folder(
"excavation_2024_drawings/",
model_path="model_601.pkl",
contrast_scale=1.25
)
Run Diagnostics
def run_diagnostics(
input_folder: str,
model_path: str,
prompt: str = "make it ready for publication",
patch_size: int = 512,
overlap: int = 64,
num_sample_images: int = 5,
contrast_values: list = [0.5, 0.75, 1, 1.5, 2, 3],
output_dir: str = 'diagnostics'
) -> None
Performs preliminary analysis on your dataset to optimize processing parameters. Creates visualizations of patch divisions and contrast effects to help fine-tune settings.
Core Parameters
input_folder - Directory with sample drawings to analyze
model_path - Location of your trained model file
Optional Parameters
num_sample_images · default: 5 - Number of drawings to analyze (max 5)
contrast_values · default: [0.5, 0.75, 1, 1.5, 2, 3] - Contrast levels to test
Generated Outputs
- Patch visualization diagrams
- Contrast effect comparisons
- Image summary statistics
- Processing recommendations
Examples
Run analysis on a new dataset:
run_diagnostics(
"new_site_drawings/",
model_path="model_601.pkl",
contrast_values=[0.75, 1, 1.25, 1.5]
)
Calculate Patches
def calculate_patches(
width: int,
height: int,
patch_size: int = 512,
overlap: int = 64
) -> tuple[int, int, int]
Internal utility that determines optimal patch division for processing large drawings. Ensures efficient memory usage while maintaining detail preservation.
Parameters
width, height - Image dimensions in pixels
patch_size · default: 512 - Size of processing segments
overlap · default: 64 - Overlap between segments
Returns
Returns a tuple containing:
- total_patches: Total number of segments
- patches_per_row: Number of patches horizontally
- num_rows: Number of patches vertically
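The patch grid follows from simple stride arithmetic. A sketch assuming ceiling division over the stride (patch_size - overlap); the actual implementation may round slightly differently:

```python
import math

def calculate_patches_sketch(width, height, patch_size=512, overlap=64):
    # Consecutive patches share `overlap` pixels, so each patch
    # advances by the stride rather than the full patch size
    stride = patch_size - overlap
    patches_per_row = max(1, math.ceil((width - overlap) / stride))
    num_rows = max(1, math.ceil((height - overlap) / stride))
    return patches_per_row * num_rows, patches_per_row, num_rows

total, per_row, rows = calculate_patches_sketch(2048, 1536)
print(total, per_row, rows)  # 20 5 4
```

With the defaults, a 2048 x 1536 drawing divides into a 5 x 4 grid of 20 overlapping segments.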
Examples
Calculate processing segments:
total, per_row, rows = calculate_patches(2048, 1536)
print(f"Processing in {total} segments")
Postprocessing Module
Overview
The postprocessing module provides tools for refining and enhancing the converted archaeological drawings, focusing on output quality and archaeological detail preservation.
Binarize Image
def binarize_image(
image: Union[PIL.Image, np.ndarray],
threshold: int = 127
) -> PIL.Image
Converts grayscale drawings to binary format, ideal for final publication preparation.
Parameters
image - Input image (PIL Image or numpy array)
threshold - Intensity threshold (0-255)
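The per-pixel rule is a plain threshold test. A pure-Python sketch of that rule (the documented function operates on PIL Images or numpy arrays; that values above the threshold map to white is an assumption):

```python
def binarize_pixels(pixels, threshold=127):
    # Grayscale values above the threshold become white, the rest black
    return [255 if p > threshold else 0 for p in pixels]

print(binarize_pixels([0, 100, 127, 128, 255]))  # [0, 0, 0, 255, 255]
```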
Examples
binary = binarize_image(Image.open("processed_vessel.png"), threshold=150)
Remove White Background
def remove_white_background(
image: PIL.Image,
threshold: int = 250
) -> PIL.Image
Creates transparent backgrounds for archaeological drawings, useful for figure composition.
Parameters
image - Input drawing
threshold - Value above which pixels are considered white
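One plausible reading of the threshold: a pixel whose color channels all exceed it becomes fully transparent. A pure-Python sketch on RGBA tuples (this whiteness test is an assumption; the real function may judge whiteness differently):

```python
def clear_white(rgba_pixels, threshold=250):
    out = []
    for r, g, b, a in rgba_pixels:
        if min(r, g, b) > threshold:
            out.append((r, g, b, 0))   # near-white -> fully transparent
        else:
            out.append((r, g, b, a))   # keep drawing pixels as they are
    return out
```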
Examples
transparent = remove_white_background(Image.open("vessel.png"), threshold=245)
Process Image Binarize
def process_image_binarize(
image_path: str,
binarize_threshold: int = 127,
white_threshold: int = 250,
save_path: Optional[str] = None
) -> PIL.Image
Combined function for binarization and background removal.
Parameters
image_path - Path to input image
binarize_threshold - Threshold for black/white conversion
white_threshold - Threshold for transparency
save_path - Optional output path
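The two thresholds chain naturally: binarize first, then make the resulting white pixels transparent. A pure-Python sketch of the combined per-pixel logic (a hypothetical helper, not the library function; the luma weights for grayscale conversion are an assumption):

```python
def binarize_and_clear(rgba, binarize_threshold=127, white_threshold=250):
    out = []
    for r, g, b, a in rgba:
        gray = round(0.299 * r + 0.587 * g + 0.114 * b)  # luma approximation
        v = 255 if gray > binarize_threshold else 0      # black/white step
        alpha = 0 if v > white_threshold else 255        # white -> transparent
        out.append((v, v, v, alpha))
    return out
```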
Binarize Folder Images
def binarize_folder_images(
input_folder: str,
binarize_threshold: int = 127,
white_threshold: int = 250
) -> None
Batch processes a folder of drawings, applying binarization and background removal.
Parameters
input_folder - Directory containing drawings
binarize_threshold - Threshold for binarization
white_threshold - Threshold for transparency
Enhance Stippling
def enhance_stippling(
img: PIL.Image,
min_size: int = 80,
connectivity: int = 2
) -> Tuple[PIL.Image, PIL.Image]
Isolates and enhances stippling patterns in archaeological drawings.
Parameters
img - Input drawing
min_size - Minimum object size to preserve
connectivity - Connection parameter for pattern detection
Returns
Returns a tuple containing:
- processed_image: Drawing with enhanced stippling
- stippling_pattern: Isolated stippling mask
Modify Stippling
def modify_stippling(
processed_img: PIL.Image,
stippling_pattern: PIL.Image,
operation: str = 'dilate',
intensity: float = 0.5,
opacity: float = 1.0
) -> PIL.Image
Adjusts stippling patterns through morphological operations and intensity modulation.
Parameters
processed_img - Base image without stippling
stippling_pattern - Isolated stippling pattern
operation - Type of modification ('dilate', 'fade', or 'both')
intensity - Morphological modification strength (0.0-1.0)
opacity - Stippling opacity factor (0.0-1.0)
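One plausible interpretation of the 'fade' operation: blend each stipple dot toward the white background by the opacity factor. This exact formula is an assumption, not the library's documented behavior:

```python
def fade_dot(value, opacity):
    # value: grayscale intensity of a stipple pixel (0 = solid black)
    # opacity 1.0 keeps the dot as-is; 0.0 fades it fully into white paper
    return round(255 - (255 - value) * opacity)

print(fade_dot(0, 1.0), fade_dot(0, 0.5), fade_dot(0, 0.0))  # 0 128 255
```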
Examples
enhanced = modify_stippling(
base_img,
dots_pattern,
operation='both',
intensity=0.7,
opacity=0.8
)
Control Stippling
def control_stippling(
input_folder: str,
min_size: int = 50,
connectivity: int = 2,
operation: str = 'fade',
intensity: float = 0.5,
opacity: float = 0.5
) -> None
Batch processes stippling patterns in a folder of archaeological drawings.
Parameters
input_folder - Directory containing drawings
min_size - Minimum object size to preserve
connectivity - Pattern detection parameter
operation - Modification type ('dilate', 'fade', or 'both')
intensity - Modification strength
opacity - Pattern opacity
Examples
control_stippling(
"vessel_drawings/",
min_size=60,
operation='both',
intensity=0.6
)
Preprocessing Module
Overview
The preprocessing module provides tools for analyzing and adjusting archaeological drawings before conversion. It ensures optimal input quality through statistical analysis and targeted adjustments.
Dataset Analyzer
class DatasetAnalyzer:
def __init__(self):
self.metrics = {}
self.distributions = {}
A comprehensive tool for analyzing collections of archaeological drawings, establishing statistical baselines for quality control.
Key Methods
analyze_image
def analyze_image(self, image: Union[str, Image.Image]) -> dict
Extracts key metrics from a single drawing.
Returns
- mean: Average brightness
- std: Standard deviation
- contrast_ratio: Dynamic range measure
- median: Middle intensity value
- dynamic_range: Total intensity range
- entropy: Image information content
- iqr: Inter-quartile range
- non_empty_ratio: Drawing density measure
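Most of these metrics reduce to simple statistics over the grayscale values. A stdlib sketch of a subset (the exact definitions, e.g. the near-white cutoff for non_empty_ratio, are assumptions):

```python
import statistics

def image_metrics_sketch(pixels):
    # pixels: flat list of grayscale values in 0-255
    return {
        "mean": statistics.fmean(pixels),
        "std": statistics.pstdev(pixels),
        "median": statistics.median(pixels),
        "dynamic_range": max(pixels) - min(pixels),
        # assumed cutoff: anything darker than near-white counts as drawing
        "non_empty_ratio": sum(p < 250 for p in pixels) / len(pixels),
    }

m = image_metrics_sketch([0, 64, 128, 255])
print(m["mean"], m["dynamic_range"])  # 111.75 255
```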
analyze_dataset
def analyze_dataset(
self,
dataset_path: str,
file_pattern: tuple = ('.png', '.jpg', '.jpeg')
) -> dict
Builds statistical distributions from a collection of drawings.
visualize_distributions_kde
def visualize_distributions_kde(
self,
metrics_to_plot: Optional[List[str]] = None,
save: bool = False
)
Creates KDE plots of metric distributions with statistical annotations.
save_analysis
def save_analysis(self, path: str) -> None
Saves the current analysis results to a file for later use. This is particularly useful when establishing reference metrics for a specific archaeological context or drawing style.
Parameters
path - File path to save the analysis results
Examples
analyzer = DatasetAnalyzer()
stats = analyzer.analyze_dataset("reference_drawings/")
analyzer.save_analysis("reference_metrics.npy")
load_analysis
@classmethod
def load_analysis(cls, path: str) -> 'DatasetAnalyzer'
Class method that loads previously saved analysis results. This allows reuse of established reference metrics without reanalyzing the dataset.
Parameters
path - Path to the previously saved analysis file
Returns
Returns a new DatasetAnalyzer instance with loaded analysis results
Examples
# Load previously computed statistics
analyzer = DatasetAnalyzer.load_analysis("reference_metrics.npy")
# Use loaded stats for quality checks
check = check_image_quality("new_drawing.jpg", analyzer.distributions)
These methods enable efficient reuse of analysis results across multiple processing sessions, particularly valuable when working with established archaeological documentation standards or specific site collections.
Process Folder Metrics
def process_folder_metrics(
input_folder: str,
model_stats: dict,
file_extensions: tuple = ('.jpg', '.jpeg', '.png')
) -> None
Batch processes a folder of drawings to align their metrics with reference statistics.
Parameters
input_folder - Directory containing drawings to process
model_stats - Reference statistics from DatasetAnalyzer
file_extensions - Supported file types
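The file_extensions filter can be mirrored with a small stdlib helper; a sketch of the presumed matching rule (this helper is hypothetical, not part of the module):

```python
def filter_drawings(filenames, exts=(".jpg", ".jpeg", ".png")):
    # Keep only names whose extension matches, case-insensitively,
    # sorted for a reproducible processing order
    return sorted(n for n in filenames
                  if any(n.lower().endswith(e) for e in exts))

print(filter_drawings(["b.JPG", "a.png", "notes.txt"]))  # ['a.png', 'b.JPG']
```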
Apply Recommended Adjustments
def apply_recommended_adjustments(
image: Union[str, Image.Image],
model_stats: dict,
verbose: bool = True
) -> Image.Image
Automatically adjusts a drawing based on statistical analysis.
Parameters
image - Drawing to adjust
model_stats - Reference statistics
verbose - Print adjustment details
Adjustments Applied
- Contrast normalization
- Brightness alignment
- Standard deviation correction
- Dynamic range optimization
Examples
adjusted = apply_recommended_adjustments(
"drawing.jpg",
reference_stats,
verbose=True
)
Check Image Quality
def check_image_quality(
image: Union[str, Image.Image],
model_stats: dict
) -> dict
Evaluates a drawing against reference metrics to identify needed adjustments.
Returns
Returns a dictionary containing:
- metrics: Current image measurements
- recommendations: List of suggested adjustments
- is_compatible: Boolean; False means adjustments are needed before processing
Examples
check = check_image_quality("new_drawing.jpg", reference_stats)
if not check['is_compatible']:
print("Adjustments needed:", check['recommendations'])
Visualize Metrics Change
def visualize_metrics_change(
original_metrics: dict,
adjusted_metrics: dict,
model_stats: dict,
metrics_to_plot: Optional[List[str]] = None,
save: bool = False
) -> None
Creates detailed visualizations comparing original and adjusted metrics against reference distributions.
Parameters
original_metrics - Metrics before adjustment
adjusted_metrics - Metrics after adjustment
model_stats - Reference statistics
metrics_to_plot - Specific metrics to visualize
save - Save the plot to file
Examples
visualize_metrics_change(
original_metrics,
adjusted_metrics,
reference_stats,
metrics_to_plot=['contrast_ratio', 'mean', 'std']
)