Axel Etzold wrote:
> I have an optimization problem. I need to
> minimize || A(x-y) || while maximizing ||x-y||, where
> ||.|| is a norm, x,y are vectors with integer non-negative entries
> and A is a (at that point fixed) matrix with float/double entries.
> x is also fixed, so I am looking for vectors y "far from x" that are mapped
> closely to where x is mapped under the linear mapping induced by the
> matrix A.
>
> How can I declare a matrix with float or double precision entries in Gecoder to formulate the problem ?
>

Gecode/R does not handle float values in constraints. What you could do
is pick a precision of e.g. 4 decimals and multiply the fixed matrix
by 10^4, so that you only get integer entries. The value of x-y that
minimizes ||A(x-y)|| will also minimize ||(10^4*A)(x-y)||.
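The scaling step can be sketched in plain Ruby (the matrix entries below are made up for illustration, not taken from the question):

```ruby
# Fix a precision of 4 decimals and scale the float matrix to integers.
PRECISION = 10**4

# An illustrative float matrix (not the actual A from the question).
a_float = [[-0.5137, 2.0042],
           [0.3001, -1.2345]]

a_int = a_float.map do |row|
  row.map { |entry| (entry * PRECISION).round }
end

p a_int  # => [[-5137, 20042], [3001, -12345]]
```

Rounding (rather than truncating) keeps the integer matrix as close as possible to 10^4 times the original.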

The problem itself could be formulated using variables for each
component in x and y. The matrix multiplication would then be converted
into a system of linear equations to obtain variables representing the
result of (10^4*A)(x-y). Assuming a Euclidean norm, the norm
(represented by an integer variable) could then be computed using the
squared constraint (recently introduced in Gecode/R 0.8.2) and then
minimized. It is advantageous to minimize the square of the norm
directly, to avoid the square root.

A toy example: minimize ||Ax|| with respect to x, subject to 0 <= x <=
17, where A = [-0.5 2; 0.3 -1] (one decimal of precision, so the code
below uses 10*A = [-5 20; 3 -10]).

require 'rubygems'
require 'gecoder'

class NormOpt < Gecode::Model
  attr :x
  attr :squared_norm

  def initialize
    # This is ugly. A more convenient way of getting these domains
    # would be nice.
    non_negative_domain = 0..Gecode::Raw::IntLimits::MAX
    complete_domain =
      Gecode::Raw::IntLimits::MIN..Gecode::Raw::IntLimits::MAX

    # Variables
    @x = int_var_array(2, 0..17)
    # z is the result of the matrix multiplication (with A scaled by 10).
    z = int_var_array(2, complete_domain)
    z_squared = int_var_array(2, non_negative_domain)
    @squared_norm = int_var(non_negative_domain)

    # Constraints
    # To make things non-trivial.
    @x[0].must > 0

    # The matrix multiplication transformed into linear equations.
    z[0].must == @x[0]*-5 + @x[1]*20
    z[1].must == @x[0]*3 + @x[1]*-10

    # Constrain the squared norm.
    z.each_with_index do |z_element, i|
      z_element.squared.must == z_squared[i]
    end
    @squared_norm.must == z_squared.inject do |sum, element|
      sum + element
    end

    # A better heuristic can probably be selected.
    branch_on @x
  end
end

model = NormOpt.new
if model.minimize!(:squared_norm).nil?
  raise 'Failed to find a solution.'
end
puts "x: #{model.x.values.inspect}"

__END__

which outputs the solution x = [4, 1].
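As a sanity check, the reported solution can be verified with plain Ruby arithmetic on the original (unscaled) matrix:

```ruby
# Evaluate A*x for the reported solution x = [4, 1].
a = [[-0.5, 2.0],
     [0.3, -1.0]]
x = [4, 1]

z = a.map { |row| row[0] * x[0] + row[1] * x[1] }
squared_norm = z.map { |v| v * v }.inject { |sum, v| sum + v }

p z             # A*x, close to [0.0, 0.2]
p squared_norm  # close to 0.04, i.e. 4 after scaling A by 10 and squaring
```

The value 0.04 matches the solver's squared norm of 4, since scaling A by 10 scales the squared norm by 100.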

Also note that you will need some sort of trade-off between minimizing
the distance of the mapped vectors and maximizing the distance between
the vectors themselves, i.e. the two goals have to be combined into a
single objective function.
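One possible combined objective is to minimize ||A(x-y)||^2 - LAMBDA*||x-y||^2 for some weight LAMBDA. The sketch below evaluates this by brute force in plain Ruby; the weight, the grid for y, and the helper name are illustrative assumptions, not part of the original question:

```ruby
# A possible single objective: minimize ||A(x-y)||^2 - LAMBDA*||x-y||^2.
# LAMBDA and the brute-force search grid are illustrative choices only.
LAMBDA = 0.1

a = [[-0.5, 2.0],
     [0.3, -1.0]]
x = [4, 1]

# Squared Euclidean norm of A*d for a difference vector d = x - y.
def mapped_distance_squared(a, d)
  z = a.map { |row| row[0] * d[0] + row[1] * d[1] }
  z.map { |v| v * v }.inject { |sum, v| sum + v }
end

best = nil
(0..10).each do |y0|
  (0..10).each do |y1|
    d = [x[0] - y0, x[1] - y1]
    objective = mapped_distance_squared(a, d) -
                LAMBDA * (d[0]**2 + d[1]**2)
    best = [objective, [y0, y1]] if best.nil? || objective < best[0]
  end
end

p best  # [best objective value, best y]
```

With this weight the search favours y far from x along directions that A maps close to zero, which is exactly the trade-off described above. In a real model the same combined expression would be minimized by the solver instead of by enumeration.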

Additionally, assuming that you have a single objective function, this
appears to be a quadratic programming[1] problem (but I could be
mistaken). Hence a quadratic programming solver should give you better
performance if necessary.