
Optimization on fractals and stability

M. Barnsley, U. Freiberg and D. La Torre


March 6, 2011
Department of Mathematics, Australian National University, Australia. Email: mbarnsley@aol.com
Department of Mathematics, University of Siegen, Germany. Email: freiberg@mathematik.uni-siegen.de
Department of Economics, Business and Statistics, University of Milan, Italy. Email: davide.latorre@unimi.it
Abstract
We study the stability of variational problems involving sequences
of sets and measures generated by iterated function systems and approaching a fractal object in the Hausdorff and Monge-Kantorovich metric,
respectively. We give some results concerning the convergence of the
corresponding minimizers.

Keywords: Iterated function system, Fractal, V-variable fractal, Variational problem, Self-similar set, Self-similar measure, Hausdorff distance, Monge-Kantorovich distance.

1 Introduction: derivatives on fractals and variational problems

Many physical phenomena can be modeled by variational problems. More precisely, one tries to find minimizers u of a certain functional J which in general depends on the function u and on the gradient (or higher partial derivatives) of u. Hence, classical models and techniques rely on the assumption that a gradient is defined, or equivalently that the underlying domain is smooth.
However, nonsmooth structures appear frequently in physical phenomena; examples can be found in the formation and annihilation of substructures in condensed matter, or in their self-organization, creating irregular and rough objects. For some applications it might be sufficient to approximate the fractal sets by sequences of smooth sets. Problems arise when one focuses attention on this irregularity, as for example in the description of percolation or diffusion through porous media, or of transmission problems across highly conductive layers modeled by fractals. In the latter case, one has to define an analysis on fractals. This has been done by several approaches, which split in a natural way into two main branches (and, unfortunately, the notions of derivative on a fractal obtained by the two branches disagree). The first (extrinsic) approach is to regard a fractal as a subset of a higher dimensional Euclidean space; the ambient analysis then induces an analysis on the fractal. The main difficulty in doing so is the definition of the corresponding function spaces, which arise as traces of classical Sobolev or Besov spaces obtained by applying sophisticated modifications of Whitney's extension theorem. For these function space approaches, we refer the interested reader to [15] or [25].
On the other hand, we have intrinsic approaches, where the analysis is constructed from the fractal itself. Here, an energy form (and hence, via the Gauss-Green formula, a Laplacian) is constructed on so-called post-critically finite self-similar fractals as a limit of approximating energies which are defined by suitable difference schemes on a sequence of prefractals. Note that such fractals are in particular self-similar and finitely ramified; the construction of the energy deeply relies on this property. For this intrinsic approach see [10] and the references therein. A short introduction and overview can be found in [4]. Note that the latter approach has a probabilistic counterpart, founded on the following observation: in R^n, the Laplacian is the infinitesimal generator of standard Brownian motion, which can be obtained as the limit of random walks. On the other hand, the construction of a random walk (on a fractal) does not require a differentiable structure. Hence, one defines Brownian motion on suitable fractals as the limit of a sequence of suitably normalized random walks and calls the infinitesimal generator of the limit process the Laplacian on this fractal (see [12]). The arising Laplacian agrees with the Laplacian obtained in [10].
The intrinsic approach usually leads to the definition of a Dirichlet form (E_K, D(E_K)) which is the fractal analogue of the standard Dirichlet form on a smooth domain Ω ⊆ R^n. Hence, E_K corresponds to ∫_Ω |∇u|^2 dx, while the domain D(E_K) plays the role of the usual Sobolev space W^{1,2}(Ω). A main difference between a smooth domain Ω and a fractal K (like, for example, the Sierpinski gasket) is that in general E_K[u] is not absolutely continuous with respect to the volume measure μ_K (see [8]). Thus, this approach does not provide the notion of a gradient. However, a Laplacian on K is defined in a natural way. This is a typical feature of fractals: first order derivatives are harder to define than second order derivatives. Anyway, the domain of the Dirichlet form D(E_K) is continuously embedded in C^{0,α}(K), the space of Hölder continuous functions with exponent α = ln(5/3)/(2 ln 2) (in case K is the Sierpinski gasket). Moreover, we want to emphasize that in the construction of the energy form one obtains some discrete gradients of the form

\[
\nabla_n u\,\nabla_n v\,(p) \;=\; \frac{1}{2}\sum_{q\,\sim_n\, p}\frac{u(p)-u(q)}{|p-q|^{\gamma}}\;\frac{v(p)-v(q)}{|p-q|^{\gamma}},
\qquad u, v \in D(E_K), \tag{1}
\]

where γ = ln 5 / ln 2. Here q ∼_n p means that the sum is taken over certain points q close to p (see [5] for details). As explained in [23], γ is the unique positive number which yields a nontrivial limit of the sequence of approximating energy forms in the construction. Note that γ is not only determined by the Hausdorff dimension of the fractal K but also by the ramification properties of the underlying prefractal networks. This means that, from the viewpoint of the energy, the effective distance on the fractal is no longer given by the Euclidean metric but by a certain power of it, that is, by a quasi-metric.
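For orientation, in the case of the Sierpinski gasket the approximating energies mentioned above take the following well-known form (a sketch along the lines of [10] and [23]; the symbols E_n and V_n are introduced here only for illustration):

\[
E_n[u] \;=\; \Big(\frac{5}{3}\Big)^{n} \sum_{p\,\sim_n\, q,\; p,q\in V_n} \big(u(p)-u(q)\big)^2,
\qquad E_K[u] \;=\; \lim_{n\to\infty} E_n[u],
\]

where V_n is the vertex set of the n-th prefractal graph and p ∼_n q means that p and q are neighbors in that graph; the factor (5/3)^n is the renormalization that makes the limit nontrivial.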
The latter observation leads to the idea that a fractional derivative might be the right approach to defining a gradient; this has actually been carried out in [26]. The disadvantage of fractional derivatives is that they are not linear operators (leading to the notion of a nonlinear gradient in [26]). Moreover, there are recent developments that obtain a gradient on a fractal by homeomorphic deformations (a so-called harmonic transformation) which turn it into an (almost everywhere) smooth set (see [1], [24]). In these approaches, roughly speaking, the gradient is introduced as the pullback or trace of the classical one. Finally, we want to mention a measure geometric approach where the derivative of a function defined on a fractal subset of the real line is taken with respect to a (fractal) measure (see [4]).
In this paper, we do not present another way of defining (partial) derivatives on fractal sets; instead, we investigate the stability of an optimization problem involving the classical gradient. In fact, as already highlighted above, the notions of derivatives on fractals disagree. On the other hand, since we are interested in stability analysis, we need a definition of gradient which does not depend on the fractal and on the associated measure. Our choice is to consider the classical notion of gradient and not a gradient defined on the fractal as introduced in [24]. Here stability means stability with respect to approximation of the fractal set by integer dimensional smoother sets. More precisely, we are interested in the following optimization problem

\[
J(u) \;:=\; \int_{K} F(x, u(x), \nabla u(x))\, d\mu_K(x) \;\to\; \min_{u\in\mathcal F}, \tag{2}
\]

where 𝓕 is a given space of functions, F : R^n × R × R^n → R is a given Lipschitz function, K is a fractal and μ_K is a measure supported on K. The spirit of the paper is that the integral on the right hand side of (2) is evaluated with respect to different fractal measures. We establish a stability result for this problem under perturbations of the set K (with respect to the Hausdorff metric) and the measure μ_K (with respect to the Monge-Kantorovich metric). Note that our problem (2) includes the classical energy minimization problem

\[
\int_{K} |\nabla u(x)|^2\, d\mu_K(x) \;\to\; \min_{u\in\mathcal F}. \tag{3}
\]

Well known examples of such fractals K in real applications are the (middle third) Cantor set and the Sierpinski gasket (see Figure 1). The latter will serve as our standard example when we explain how a self-similar fractal arises as the attractor of an iterated function system (IFS) and how one can obtain reasonable approximating sets from the IFS.

Figure 1: (a) the Cantor set; (b) the Sierpinski gasket.
The paper is organized as follows: in Section 2 we recall the definitions of the Hausdorff and Monge-Kantorovich metrics; in Section 3 we provide our main results on the stability of solutions of the variational problem (2) under perturbations of the set K and the measure μ_K. Section 4 is devoted to applications of the above results to iterated function systems and V-variable fractals. Finally, in Section 5 we discuss directions of further developments.

2 Hausdorff and Monge-Kantorovich metrics

In the following, Ω will denote an open subset of R^n, X ⊆ Ω a compact subset, and d the Euclidean metric on R^n. Let H(X) denote the space of all nonempty compact subsets of X and d_H(A, B) the Hausdorff metric between A and B, that is

\[
d_H(A,B) \;=\; \max\Big\{\max_{x\in A} d_0(x,B),\; \max_{x\in B} d_0(x,A)\Big\}, \tag{4}
\]

where d_0(x, A) is the usual distance between the point x and the set A, i.e.,

\[
d_0(x,A) \;=\; \min_{y\in A} d(x,y). \tag{5}
\]

We set h(A, B) := max_{x∈A} d_0(x, B), hence d_H(A, B) = h(A, B) ∨ h(B, A). Note that in general h(A, B) ≠ h(B, A).
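For finite point sets, definitions (4) and (5) can be evaluated directly. The following small Python sketch (ours, added for illustration; it is not part of the original text) computes d_H for finite subsets of R^d:

import numpy as np

def hausdorff_distance(A, B):
    # A, B: arrays of shape (m, d) and (k, d), viewed as finite subsets of R^d.
    # D[i, j] = d(A[i], B[j]); minimizing over a row/column gives d_0(x, B) and d_0(y, A).
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    h_AB = D.min(axis=1).max()   # h(A, B) = max over x in A of d_0(x, B)
    h_BA = D.min(axis=0).max()   # h(B, A) = max over y in B of d_0(y, A)
    return max(h_AB, h_BA)       # d_H(A, B) = h(A, B) v h(B, A)

# Example: h(A, B) and h(B, A) differ, but d_H is symmetric.
A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
print(hausdorff_distance(A, B))   # 2.0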
For certain considerations the following alternative definition of the Hausdorff metric may be more appealing. For Y ⊆ X and ε ≥ 0, the set

\[
Y_\varepsilon \;:=\; \{x\in X : d_0(x,Y)\le \varepsilon\} \;=\; \{x\in X : d(x,y)\le \varepsilon \text{ for some } y\in Y\}
\]

is called the closed ε-neighborhood of the set Y. It can also be written as the Minkowski addition Y + B(0, ε), i.e. as the dilation of Y by a closed ball with radius ε. Then for any A, B ∈ H(X) it holds that (see [1])

\[
d_H(A,B) \le \varepsilon \iff A\subseteq B_\varepsilon \text{ and } B\subseteq A_\varepsilon.
\]

Thus we have

\[
d_H(A,B) \;=\; \inf\{\varepsilon\ge 0 : A\subseteq B_\varepsilon \text{ and } B\subseteq A_\varepsilon\}.
\]
It is well known that the space (H(X), d_H) is a complete metric space if (X, d) is complete (see [9]). For the rest of the paper we assume that (X, d) is complete.
Now let M(X) denote the space of Borel probability measures on the Borel σ-algebra B(X) and d_M the Monge-Kantorovich metric on this space, that is

\[
d_M(\mu,\nu) \;:=\; \sup_{f\in \mathrm{Lip}_1(X,\mathbb{R})} \left[\int_X f(x)\,d\mu - \int_X f(x)\,d\nu\right], \tag{6}
\]

where

\[
\mathrm{Lip}_1(X,\mathbb{R}) \;=\; \{f : X\to\mathbb{R} \;:\; |f(x_1)-f(x_2)|\le \|x_1-x_2\| \ \ \forall\, x_1,x_2\in X\}. \tag{7}
\]

It is well known that the space (M(X), d_M) is a complete metric space provided (X, d) is complete (see [9]).
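As a small illustration (ours, not from the paper): for measures on the real line, the Monge-Kantorovich distance (6) coincides with the L^1 distance between the corresponding distribution functions, which gives a direct way to evaluate d_M for discrete measures. A Python sketch:

import numpy as np

def mk_distance_1d(xs, ps, ys, qs):
    # d_M between the discrete measures sum_k ps[k] delta_{xs[k]} and sum_k qs[k] delta_{ys[k]},
    # via the one-dimensional identity d_M(mu, nu) = integral of |F_mu(t) - F_nu(t)| dt.
    grid = np.sort(np.concatenate([xs, ys]))
    def cdf(atoms, weights, t):
        return np.sum(weights[None, :] * (atoms[None, :] <= t[:, None]), axis=1)
    left, widths = grid[:-1], np.diff(grid)     # both CDFs are constant on [grid[i], grid[i+1])
    return np.sum(np.abs(cdf(xs, ps, left) - cdf(ys, qs, left)) * widths)

# Example: a point mass at 0 versus a point mass at 1 has distance 1.
print(mk_distance_1d(np.array([0.0]), np.array([1.0]),
                     np.array([1.0]), np.array([1.0])))   # 1.0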

3 Main results

In this section we present our main results. As already announced, we are interested in analyzing the stability of the optimization problem (2) under perturbations of the set K and the measure μ_K generated through an IFS (see Section 4). For our purposes, let 𝓕 be the space of all C^{1,1} functions on X. We recall that a function u : Ω → R is of class C^{1,1}, or briefly a C^{1,1} function, at x_0 if u is differentiable and ∇u is locally Lipschitz at x_0, i.e. there exists a constant K > 0 such that

\[
\|\nabla u(x) - \nabla u(y)\| \;\le\; K\,\|x-y\| \tag{8}
\]

for all x and y in a neighborhood U of x_0. Let X ⊆ Ω. We say that u is C^{1,1} on X if u is C^{1,1} at x_0 for all x_0 ∈ X. For a survey of the class of C^{k,1} functions see [7]. Define

\[
\|u\|_{\mathcal F} \;:=\; \sup_{x\in X}|u(x)| + \sup_{x\in X}\|\nabla u(x)\| + \sup_{x,y\in X,\,x\ne y}\frac{\|\nabla u(x)-\nabla u(y)\|}{\|x-y\|}. \tag{9}
\]

Obviously, ‖u‖_𝓕 is well defined for all u ∈ 𝓕. We have the following property.
Proposition 1. (𝓕, ‖·‖_𝓕) is a Banach space.

Proof. It is trivial to prove that 𝓕 is a linear space over R and that ‖·‖_𝓕 is a norm. We now prove that this space is complete with respect to the metric d_𝓕(u, v) = ‖u − v‖_𝓕. Let (u_n) be a Cauchy sequence of C^{1,1} functions. Obviously, then (u_n) and (∇u_n) have to be Cauchy sequences in (C^0(X), d_∞), and this implies that there exist u, g ∈ C^0(X) such that u_n → u and ∇u_n → g in the d_∞ metric. It is easy to prove that ∇u = g. Now we have to prove that ∇u is Lipschitz. Since (u_n) is a Cauchy sequence with respect to ‖·‖_𝓕, for fixed ε > 0 there exists some n_0 = n_0(ε) > 0 such that d_𝓕(u_n, u_{n_0}) ≤ ε for all n ≥ n_0. So ‖u_n‖_𝓕 ≤ ‖u_{n_0}‖_𝓕 + ε for all n ≥ n_0. Observing that

\[
\|\nabla u_n(x) - \nabla u_n(y)\| \;\le\; C_n\,\|x-y\| \;\le\; \|u_n\|_{\mathcal F}\,\|x-y\| \tag{10}
\]

for some constants C_n, we get

\[
\|\nabla u(x) - \nabla u(y)\| \;=\; \lim_{n\to+\infty}\|\nabla u_n(x) - \nabla u_n(y)\| \;\le\; \big(\|u_{n_0}\|_{\mathcal F} + \varepsilon\big)\,\|x-y\|, \tag{11}
\]

which proves the assertion.


In the following, we will write u_n → u if ‖u − u_n‖_𝓕 → 0 as n → +∞.
Let (μ_n) be a sequence of measures converging to μ in the Monge-Kantorovich metric and let K_n := supp μ_n. This implies the convergence K_n → K in the Hausdorff metric ([9]), where K is the support of μ, that is, K = supp μ. Consider the following family of optimization problems

\[
J_n(u) \;:=\; \int_{K_n} F(x, u(x), \nabla u(x))\, d\mu_n(x) \;\to\; \min_{u\in\mathcal F}. \tag{12}
\]

We have the following results. It is trivial to prove that whenever the function (t, y) ↦ F(x, t, y) is convex for all fixed x ∈ X, then the functionals J_n : 𝓕 → R are convex for all n.

Proposition 2. If ‖u − u_n‖_𝓕 → 0 as n → +∞, then |J(u_n) − J(u)| → 0 as n → +∞.

Proof. Computing, we get

\[
|J(u_n) - J(u)| \;\le\; \int_K \big|F(x, u_n(x), \nabla u_n(x)) - F(x, u(x), \nabla u(x))\big|\, d\mu(x) \;\le\; C_F\,\|u_n - u\|_{\mathcal F} \tag{13}
\]

for some constant C_F.


Now we state our main result. Note that assumption (14) is fulfilled whenever F is C^1, and this includes the classical energy minimization problem (3).

Theorem 1. Suppose that

\[
|F(x_1,t_1,y_1) - F(x_2,t_2,y_2)| \;\le\; C_F\,\big(\|x_1-x_2\| + |t_1-t_2| + \|y_1-y_2\|\big) \tag{14}
\]

for some constant C_F and for all x_1, y_1, x_2, y_2 ∈ R^n, t_1, t_2 ∈ R. Let (u_n) be a sequence of global minimizers for (J_n) over 𝓕 and suppose that ‖u_n − ū‖_𝓕 → 0 for some ū ∈ 𝓕. Then ū is a global minimizer for J over 𝓕.
Proof. First of all we fix B > 0 and observe that

\[
\begin{aligned}
\sup_{u\in\mathcal F,\,\|u\|_{\mathcal F}\le B}|J(u) - J_n(u)|
&= \sup_{u\in\mathcal F,\,\|u\|_{\mathcal F}\le B}\left|\int_{K_n} F(x,u(x),\nabla u(x))\,d\mu_n(x) - \int_{K} F(x,u(x),\nabla u(x))\,d\mu(x)\right|\\
&= \sup_{u\in\mathcal F,\,\|u\|_{\mathcal F}\le B}\left|\int_{X} F(x,u(x),\nabla u(x))\,d\mu_n(x) - \int_{X} F(x,u(x),\nabla u(x))\,d\mu(x)\right|\\
&\le C_F(B+1)\,\sup_{\varphi\in \mathrm{Lip}_1(X)}\left|\int_X \varphi(z)\,d\mu_n(z) - \int_X \varphi(z)\,d\mu(z)\right| \;\to\; 0,
\end{aligned}\tag{15}
\]

since if we define φ(x) = F(x, u(x), ∇u(x)) / (C_F(B + 1)), we have

\[
|\varphi(x)-\varphi(y)| \;=\; \frac{\big|F(x,u(x),\nabla u(x)) - F(y,u(y),\nabla u(y))\big|}{C_F(B+1)}
\;\le\; \frac{C_F\big(\|x-y\| + |u(x)-u(y)| + \|\nabla u(x)-\nabla u(y)\|\big)}{C_F(B+1)}
\;\le\; \|x-y\|,
\]

so that φ ∈ Lip_1(X). Choose B such that ‖ū‖_𝓕 ≤ B. We obtain that

\[
\lim_{n\to+\infty} J_n(u_n) \;=\; J(\bar u), \tag{16}
\]

since

\[
|J_n(u_n) - J(\bar u)| \;\le\; |J_n(u_n) - J(u_n)| + |J(u_n) - J(\bar u)|
\;\le\; \sup_{u\in\mathcal F,\,\|u\|_{\mathcal F}\le B}|J(u) - J_n(u)| + |J(u_n) - J(\bar u)| \;\to\; 0.
\]

Then for all u satisfying ‖u‖_𝓕 ≤ B we get

\[
J(\bar u) \;=\; \lim_{n\to+\infty} J_n(u_n) \;\le\; \lim_{n\to+\infty} J_n(u) \;=\; J(u), \tag{17}
\]

that is, ū is a global minimizer for J.
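The following small numerical experiment (our own construction, added only as an illustration of the setting of Theorem 1) uses measures μ_n that are iterates of the Markov operator introduced in Section 4, converging to the middle-third Cantor measure; the integrand F(x, t, y) = (t − x)^2 + y^2 satisfies (14) on bounded sets, and, purely for simplicity, the minimization is restricted to the affine family u(x) = c_0 + c_1 x instead of the full space 𝓕. The computed minimizers stabilize as n grows. In Python:

import numpy as np

# Cantor-measure IFSP on [0, 1] (illustrative choice), with grad u = c1 for u(x) = c0 + c1*x.
maps, probs = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0], [0.5, 0.5]

def measure_mu_n(n):
    # Atoms and weights of mu_n = M^n delta_0 (Markov operator of Section 4).
    atoms, weights = np.array([0.0]), np.array([1.0])
    for _ in range(n):
        atoms = np.concatenate([w(atoms) for w in maps])
        weights = np.concatenate([p * weights for p in probs])
    return atoms, weights

def minimizer_of_Jn(n):
    x, wgt = measure_mu_n(n)
    s = np.sqrt(wgt)
    # Least-squares form of J_n(c0, c1) = sum_k wgt_k * (c0 + c1*x_k - x_k)^2 + c1^2.
    A = np.vstack([np.column_stack([s, s * x]), [0.0, 1.0]])
    b = np.concatenate([s * x, [0.0]])
    return np.linalg.lstsq(A, b, rcond=None)[0]

for n in (1, 4, 8, 12):
    print(n, minimizer_of_Jn(n))   # the minimizing coefficients stabilize as n grows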
We now extend the above results to vector valued functionals. In other words, we are now interested in

\[
J(u) \;:=\; \int_K F(x, u(x), \nabla u(x))\, d\mu(x) \;\to\; \min_{u\in\mathcal F}, \tag{18}
\]

where F : R^n × R × R^n → R^l is a vector function. We suppose that R^l is ordered by a pointed closed convex cone C, that is, a ≤_C b if and only if b − a ∈ C. We now recall the definition of a weak minimum point for J.

Definition 3.1. ū ∈ 𝓕 is called a local weak minimum point if there exists a neighborhood N ⊆ 𝓕 of ū such that there is no u ∈ N ∩ 𝓕 satisfying J(ū) − J(u) ∈ int C. ū ∈ 𝓕 is called a global weak minimum point if there is no u ∈ 𝓕 satisfying J(ū) − J(u) ∈ int C.
Let now (μ_n) be a sequence of measures converging to μ in the Monge-Kantorovich metric, let K_n := supp μ_n, and consider the sequence of problems

\[
J_n(u) \;:=\; \int_{K_n} F(x, u(x), \nabla u(x))\, d\mu_n(x) \;\to\; \min_{u\in\mathcal F}. \tag{19}
\]

We now have the following result; Proposition 3 can be proved in the same manner as Proposition 2.

Proposition 3. If ‖u − u_n‖_𝓕 → 0 as n → +∞, then ‖J(u_n) − J(u)‖ → 0 as n → +∞.
Theorem 2. Suppose that

\[
|F_j(x_1,t_1,y_1) - F_j(x_2,t_2,y_2)| \;\le\; C_j\,\big(\|x_1-x_2\| + |t_1-t_2| + \|y_1-y_2\|\big)
\]

for certain constants C_j, j = 1, . . . , l, and for all x_1, y_1, x_2, y_2 ∈ R^n, t_1, t_2 ∈ R. Let (u_n) be a sequence of global minimizers for (J_n) over 𝓕 and suppose that ‖u_n − ū‖_𝓕 → 0 for some ū ∈ 𝓕. Then ū is a global minimizer for J over 𝓕.

Proof. We fix B > 0 and observe that

\[
\sup_{u\in\mathcal F,\,\|u\|_{\mathcal F}\le B}\|J(u) - J_n(u)\|
\;\le\; \sum_{j=1}^{l} C_j(B+1)\,\sup_{\varphi\in \mathrm{Lip}_1(X)}\left|\int_X \varphi(z)\,d\mu_n(z) - \int_X \varphi(z)\,d\mu(z)\right| \;\to\; 0, \tag{20}
\]

since if we define φ(x) = F_j(x, u(x), ∇u(x)) / (C_j(B + 1)) we have

\[
|\varphi(x)-\varphi(y)| \;=\; \frac{\big|F_j(x,u(x),\nabla u(x)) - F_j(y,u(y),\nabla u(y))\big|}{C_j(B+1)}
\;\le\; \frac{C_j\big(\|x-y\| + |u(x)-u(y)| + \|\nabla u(x)-\nabla u(y)\|\big)}{C_j(B+1)}
\;\le\; \|x-y\|, \tag{21}
\]

so that φ ∈ Lip_1(X). Choose B such that ‖ū‖_𝓕 ≤ B and, as before, we get that

\[
\lim_{n\to+\infty} J_n(u_n) \;=\; J(\bar u). \tag{22}
\]

This implies that

\[
J_n(u_n) \;\in\; J(\bar u) + B(0, \|J_n(u_n) - J(\bar u)\|), \tag{23}
\]

where + denotes the Minkowski sum between sets and B(0, r) = {v ∈ R^l : ‖v‖ ≤ r}. Since the u_n are weak global minimizers for J_n, we get, for all u,

\[
J_n(u) \;\in\; J_n(u_n) + (\operatorname{int} C)^c, \tag{24}
\]

and this implies

\[
J(\bar u) \;\in\; J_n(u) - (\operatorname{int} C)^c + B(0, \|J_n(u_n) - J(\bar u)\|). \tag{25}
\]

Recalling that

\[
J_n(u) \;\in\; J(u) + B(0, \|J_n(u) - J(u)\|), \tag{26}
\]

we get

\[
J(\bar u) \;\in\; J(u) + B(0, \|J_n(u) - J(u)\|) + B(0, \|J_n(u_n) - J(\bar u)\|) - (\operatorname{int} C)^c, \tag{27}
\]

that is,

\[
J(u) \;\in\; J(\bar u) + \varepsilon_n B(0, 1) + (\operatorname{int} C)^c, \tag{28}
\]

with ε_n := ‖J_n(u) − J(u)‖ + ‖J_n(u_n) − J(ū)‖ → 0. Taking the limit for n → +∞ and recalling that (int C)^c is closed, we get J(u) ∈ J(ū) + (int C)^c, that is the thesis.

4 Applications to Iterated Function Systems and V-variable fractals

4.1 Iterated function systems

Iterated function systems allow one to formalize the notion of self-similarity or scale invariance of a mathematical object. Hutchinson [9] and Barnsley and Demko [2] showed how systems of contractive maps with associated probabilities, acting in a parallel manner either deterministically or probabilistically, can be used to construct self-similar sets and measures. In the IFS literature, these are called IFS with probabilities (IFSP) and are based on the action of a contractive Markov operator on the complete metric space of all Borel probability measures endowed with the Monge-Kantorovich metric. Applications of these methods can be found in image compression, approximation theory, signal analysis, denoising, and density estimation ([6, 11, 13, 14, 16, 17, 18, 19, 20, 21, 22]). In what follows, let w = {w_1, . . . , w_N} be a family of injective contraction maps w_i : X → X, to be referred to as an N-map IFS. Let c_i ∈ (0, 1) denote the contraction factor of w_i and define c := max_{1≤i≤N} c_i. Note that c ∈ (0, 1). Associated with the IFS maps w_1, . . . , w_N there is a set-valued mapping ŵ : H(X) → H(X), the action of which is defined to be

\[
\hat w(S) \;:=\; \bigcup_{i=1}^{N} w_i(S), \qquad S \in H(X), \tag{29}
\]

where w_i(S) := {w_i(x) : x ∈ S} is the image of S under w_i, i = 1, 2, . . . , N. If, in addition, the contractive mappings w_i are assumed to be similitudes, i.e. if we assume that there exist numbers c_i ∈ (0, 1) such that

\[
|w_i(x) - w_i(y)| \;=\; c_i\,|x - y|, \qquad x, y \in X, \quad i = 1, \dots, N,
\]

the invariant set of (29) is called self-similar.


Theorem 3 ([9]). ŵ is a contractive mapping on (H(X), d_H):

\[
d_H(\hat w(A), \hat w(B)) \;\le\; c\, d_H(A, B), \qquad A, B \in H(X). \tag{30}
\]

As (H(X), d_H) is a complete metric space, we obtain from Banach's fixed point theorem the following corollary.

Corollary 1. There exists a unique set A ∈ H(X), the so-called attractor of the IFS w, such that ŵ(A) = A. Moreover, for any S ∈ H(X), d_H(ŵ^n(S), A) → 0 as n → ∞.
The latter property provides a construction method for approaching a fractal with respect to the Hausdorff metric. The equation A = ŵ(A) obviously implies that A is self-tiling, i.e. A is a union of (distorted) copies of itself. In Figure 2 the first four approximation steps for the Sierpinski gasket (cf. Figure 1.b) are shown, where the starting set S is chosen to be the outer (filled) triangle.

Figure 2: A sequence of approximating sets for the Sierpinski gasket K.


Note that K is self-similar with respect to the IFS {w_1, w_2, w_3} acting on R^2, where w_i(x) = (x − P_i)/2 + P_i, i = 1, 2, 3, and the points P_1, P_2, P_3 are the vertices of the outer triangle.
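As a concrete illustration (a sketch added here, not part of the original text), the action of ŵ in (29) for this gasket IFS can be iterated on finite point sets; by Corollary 1 the iterates approach K in the Hausdorff metric. In Python, assuming the outer triangle is placed at (0,0), (1,0), (1/2, √3/2):

import numpy as np

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2.0]])   # vertices P_1, P_2, P_3

def w(i, S):
    # The similitude w_i(x) = (x - P_i)/2 + P_i applied to every point of S.
    return (S - P[i]) / 2.0 + P[i]

def hutchinson(S):
    # The set-valued map (29): union of the three contracted copies of S.
    return np.vstack([w(i, S) for i in range(3)])

S = P.copy()                 # starting set: the three outer vertices
for n in range(6):
    S = hutchinson(S)        # S now samples the prefractal obtained after 6 iterations
print(S.shape)               # (2187, 2); plotting these points sketches Figure 2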
Let p = (p_1, p_2, . . . , p_N), 0 < p_i < 1, 1 ≤ i ≤ N, be a partition of unity associated with the IFS maps w_i, so that Σ_{i=1}^N p_i = 1. Associated with this IFS with probabilities (IFSP) (w, p) is the so-called Markov operator M : M(X) → M(X), the action of which is

\[
(M\mu)(S) \;:=\; \sum_{i=1}^{N} p_i\,\mu\big(w_i^{-1}(S)\big), \qquad S \in \mathcal B(X), \tag{31}
\]

where w_i^{-1}(S) = {y ∈ X : w_i(y) ∈ S}.
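For measures with finitely many atoms the Markov operator (31) acts explicitly: every atom is pushed through each map w_i and its mass is weighted by p_i. A short Python sketch (ours; the gasket IFS with uniform probabilities is only an illustrative choice):

import numpy as np

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2.0]])
maps = [lambda x, Pi=Pi: (x - Pi) / 2.0 + Pi for Pi in P]   # gasket maps, as above
probs = [1.0 / 3.0] * 3

def markov_step(atoms, weights):
    # (M mu)(S) = sum_i p_i mu(w_i^{-1}(S)): push every atom through every w_i,
    # multiplying its mass by p_i.
    return (np.vstack([w(atoms) for w in maps]),
            np.concatenate([p * weights for p in probs]))

# Iterating M on a point mass produces mu_n = M^n mu_0, converging to the invariant
# (self-similar) measure in the Monge-Kantorovich metric (Corollary 2 below).
atoms, weights = np.array([[0.0, 0.0]]), np.array([1.0])
for _ in range(6):
    atoms, weights = markov_step(atoms, weights)
print(atoms.shape, round(weights.sum(), 12))   # (729, 2) 1.0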


Theorem 4 ([9]). M is a contraction mapping on (M(X), d_M):

\[
d_M(M\mu, M\nu) \;\le\; c\, d_M(\mu, \nu), \qquad \mu, \nu \in \mathcal M(X). \tag{32}
\]

Corollary 2. There exists a unique measure μ̄ ∈ M(X), the so-called invariant measure of the IFSP (w, p), such that μ̄ = Mμ̄. Moreover, for any μ ∈ M(X), d_M(M^n μ, μ̄) → 0 as n → ∞.
Note that for any μ̄-integrable function u : X → R it holds that

\[
\int_X u(x)\, d\bar\mu(x) \;=\; \sum_{i=1}^{N} p_i \int_X u(w_i(x))\, d\bar\mu(x). \tag{33}
\]

Let C^0(X) denote the Banach space of continuous functions on X endowed with the uniform metric d_∞. Associated with the IFSP (w, p), define the following operator T : C^0(X) → C^0(X):

\[
T u \;:=\; \sum_{i=1}^{N} p_i\,(u \circ w_i), \qquad u \in C^0(X). \tag{34}
\]
Now, for a given μ ∈ M(X), define the linear functional F_μ : C^0(X) → R,

\[
F_\mu(u) \;:=\; \langle u, \mu\rangle \;=\; \int_X u(x)\, d\mu(x). \tag{35}
\]

Then ⟨T f, μ⟩ = ⟨f, M μ⟩, i.e. T is the adjoint operator of M. The operator T is a contraction on the complete metric space (C^0(X), d_∞) with contractivity factor p̄ = max_{i=1,...,N} p_i < 1. So we get

\[
\int_X u(x)\, d\bar\mu(x) \;=\; \lim_{n\to+\infty} \int_X (T^n u)(x)\, d\lambda(x), \tag{36}
\]

where λ is the Lebesgue measure on X and μ_n := M^n λ → μ̄ in the Monge-Kantorovich distance.


Now, in the case of iterated function systems, Theorem 1 reads as follows.

Corollary 3. Choose an arbitrary Borel probability measure μ_0 in M(X). Suppose that

\[
|F(x_1,t_1,y_1) - F(x_2,t_2,y_2)| \;\le\; C_F\,\big(\|x_1-x_2\| + |t_1-t_2| + \|y_1-y_2\|\big) \tag{37}
\]

for some constant C_F and for all x_1, y_1, x_2, y_2 ∈ R^n, t_1, t_2 ∈ R, and consider the following sequence of optimization problems

\[
J_n(u) \;=\; \int_{K_n} F(x, u(x), \nabla u(x))\, d\mu_n(x) \;\to\; \min_{u\in\mathcal F}, \tag{38}
\]

where μ_{n+1} := M μ_n = Σ_{i=1}^N p_i μ_n ∘ w_i^{-1} and K_n = supp μ_n. Denote by μ_K the unique fixed point of M. Let {u_n}_n be a sequence of global minimizers for the problems J_n over 𝓕 and suppose that ‖u_n − ū‖_𝓕 → 0 for some ū ∈ 𝓕. Then ū is a global minimizer for the functional

\[
J(u) \;=\; \int_K F(x, u(x), \nabla u(x))\, d\mu_K(x) \tag{39}
\]

over 𝓕.

It is worth highlighting the fact that the functional J in (2) can be computed, at least in an approximate way, as

\[
J(u) \;=\; \int_K F(x, u(x), \nabla u(x))\, d\mu_K(x) \;=\; \lim_{n\to+\infty} \int_X F_n(x)\, d\lambda(x), \tag{40}
\]

where F_0(x) = F(x, u(x), ∇u(x)), F_n(x) := (T^n F_0)(x), and λ is the Lebesgue measure.
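A brief Python sketch of this approximation (ours; the middle-third Cantor IFSP on [0, 1] with p = (1/2, 1/2), the test function u(x) = x^2 and the choice of the energy functional (3) are illustrative assumptions):

import numpy as np

maps, probs = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0], [0.5, 0.5]   # Cantor IFSP

def T(g):
    # The adjoint operator (34): (T g)(x) = sum_i p_i g(w_i(x)).
    return lambda x: sum(p * g(w(x)) for p, w in zip(probs, maps))

# F_0(x) = F(x, u(x), u'(x)) = |u'(x)|^2 for the energy functional (3) with u(x) = x^2.
F0 = lambda x: (2.0 * x) ** 2

Fn = F0
for _ in range(10):
    Fn = T(Fn)                               # F_n = T^n F_0 as in (40)

x = np.linspace(0.0, 1.0, 501)
print(np.trapz([Fn(t) for t in x], x))       # approx. 1.5, the value of (3) for this u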

4.2 V-variable fractals

The concept of V-variable fractals (developed by Barnsley, Hutchinson and Stenflo, see [3]) allows one to describe new families of random fractals which are intermediate between the notions of deterministic and of random fractals, including random recursive as well as homogeneous random fractals. More precisely, we are given a (not necessarily finite) family of IFSs with probabilities, and we apply the related set-valued mappings and measure-valued Markov operators in a random manner. Hereby, the parameter V describes the degree of variability of the realizations. Roughly speaking, this means that at each construction step we have at most V different fundamental shapes.
The coding of the (random) V-variable machinery can be done by V-variable trees. A V-variable tree is a labeled tree having at each height at most V different subtrees (see [3] for details). Denote by Ω the probability space of V-variable trees. Then, for each ω ∈ Ω, we denote by ω|n the tree ω cut at height n, n ≥ 1. Now choose an arbitrary Borel probability measure μ_0 in M(X). For fixed ω ∈ Ω we obtain a sequence of Borel probability measures (μ_{ω|n})_n, where μ_{ω|n} is obtained by applying Markov operators to μ_0 according to the tree ω up to height n. For every fixed ω ∈ Ω, the sequence (μ_{ω|n})_n has a unique limit (which is a random measure) μ^(ω) with respect to the Monge-Kantorovich metric, and this limit is independent of the starting measure μ_0.
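For orientation (this sketch and the placement of the six ratio-1/3 similitudes below are our own illustrative assumptions, cf. Figure 3), the simplest case V = 1 corresponds to homogeneous random fractals: at every construction level a single IFS is chosen at random and applied to all pieces simultaneously. The genuinely V-variable construction with V > 1 requires the tree coding of [3]. In Python:

import numpy as np

P = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2.0]])
F_ifs = [lambda x, Pi=Pi: (x - Pi) / 2.0 + Pi for Pi in P]          # classical gasket IFS (F)
Q = np.vstack([P, (P + np.roll(P, -1, axis=0)) / 2.0])              # vertices and edge midpoints (assumed)
G_ifs = [lambda x, Qi=Qi: (x - Qi) / 3.0 + Qi for Qi in Q]          # six ratio-1/3 similitudes (G)

def homogeneous_random_prefractal(n, rng):
    # V = 1: at each level one IFS is chosen at random and its set map (29)
    # is applied to the whole current prefractal.
    S = P.copy()
    for _ in range(n):
        ifs = F_ifs if rng.random() < 0.5 else G_ifs
        S = np.vstack([w(S) for w in ifs])
    return S

S = homogeneous_random_prefractal(5, np.random.default_rng(0))
print(len(S))   # the number of points depends on the random sequence of IFS choices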
Now, in the case of V-variable fractals, Theorem 1 can be applied ω-wise and leads to the following.
Corollary 4. Choose an arbitrary Borel probability measure μ_0 in M(X). Suppose that

\[
|F(x_1,t_1,y_1) - F(x_2,t_2,y_2)| \;\le\; C_F\,\big(\|x_1-x_2\| + |t_1-t_2| + \|y_1-y_2\|\big) \tag{41}
\]

for some constant C_F and for all x_1, y_1, x_2, y_2 ∈ R^n, t_1, t_2 ∈ R. For each ω ∈ Ω consider the following sequence of parameterized optimization problems

\[
J_n^{(\omega)}(u) \;=\; \int_{K_{\omega|n}} F(x, u(x), \nabla u(x))\, d\mu_{\omega|n}(x) \;\to\; \min_{u\in\mathcal F}, \tag{42}
\]

where μ_{ω|n} is defined as above and K_{ω|n} = supp μ_{ω|n}. Denote by μ^(ω) the Monge-Kantorovich limit of the sequence (μ_{ω|n}) and K^(ω) = supp μ^(ω). Let {u_n^(ω)}_n be a sequence of global minimizers for the problems J_n^(ω) over 𝓕 and suppose that ‖u_n^(ω) − ū^(ω)‖_𝓕 → 0 for some ū^(ω) ∈ 𝓕. Then ū^(ω) is a global minimizer for the functional

\[
J^{(\omega)}(u) \;=\; \int_{K^{(\omega)}} F(x, u(x), \nabla u(x))\, d\mu^{(\omega)}(x) \tag{43}
\]

over 𝓕.

Figure 3: The construction of a V-variable fractal. The family of IFSs consists of two IFSs only, namely the classical Sierpinski gasket (F) and the modified Sierpinski gasket (G). The latter one is the unique attractor with respect to 6 similitudes with contraction ratio 1/3. In this picture, V equals 5.


5 Conclusions and further developments

We have studied a stability property for a sequence of variational problems under perturbations of a set K and a measure μ with respect to the Hausdorff and Monge-Kantorovich metric, respectively. Of course this represents only a first solution to this problem, and many other approaches and generalizations could be considered. Possible further developments of this paper could involve an extension of the notion of gradient; here we have chosen to use the classical definition, but many others could be considered, as outlined in the introduction, see also [5].

Acknowledgements
This work has been carried out during research periods of Davide La Torre
and Uta Freiberg at the Department of Mathematics of the Australian National University, Canberra, Australia. We thank our hosts Prof. Michael
Barnsley and Prof. John Hutchinson for this opportunity.

References
[1] M. F. Barnsley, Fractals Everywhere, Academic Press, New York, 1989.
[2] M. F. Barnsley and S. Demko, Iterated function systems and the global construction of fractals, Proc. Roy. Soc. London Ser. A, 399 (1985), 243-275.
[3] M. F. Barnsley, J. Hutchinson and O. Stenflo, V-variable fractals: fractals with partial self similarity, Adv. Math., 218, no. 6 (2008), 2051-2088.
[4] U. R. Freiberg, Analytic properties of measure geometric Krein-Feller operators on the real line, Math. Nachr., 260 (2003), 34-47.
[5] U. R. Freiberg, Analysis on fractal objects, Meccanica, 40 (2005), 419-436.
[6] U. R. Freiberg, D. La Torre, F. Mendivil, Iterated function systems and stability of variational problems on self-similar objects, Nonlinear Anal. Real World Appl., 12, no. 2 (2011), 1123-1129.
[7] I. Ginchev, D. La Torre, M. Rocca, C^{k,1} functions, characterization, Taylor's formula and optimization: a survey, Real Anal. Exchange, 35, no. 2 (2009/10), 515-534.
[8] M. Hino, On singularity of energy measures on self-similar sets, Probab. Theory Related Fields, 132, no. 2 (2005), 265-290.
[9] J. Hutchinson, Fractals and self-similarity, Indiana Univ. Math. J., 30 (1981), 713-747.
[10] J. Kigami, Analysis on Fractals, Cambridge Univ. Press, 2001.
[11] H. Kunze, D. La Torre, E. Vrscay, Contractive multifunctions, fixed point inclusions and iterated multifunction systems, J. Math. Anal. Appl., 330 (2007), 159-173.
[12] S. Kusuoka, Diffusion processes on nested fractals, Lecture Notes in Math., 1567, Springer, 1993.
[13] S. M. Iacus, D. La Torre, A comparative simulation study on the IFS distribution function estimator, Nonlinear Anal. Real World Appl., 6, no. 5 (2005), 858-873.
[14] S. M. Iacus, D. La Torre, Approximating distribution functions by iterated function systems, J. Appl. Math. Decis. Sci., 1 (2005), 33-46.
[15] A. Jonsson, H. Wallin, Function spaces on subsets of R^n, Math. Rep. Ser., 2, no. 1 (1984), xiv+221.
[16] D. La Torre, F. Mendivil, E. Vrscay, IFS on multifunctions, in: Math Everywhere: Deterministic and Stochastic Modelling in Biomedicine, Economics and Industry (G. Aletti et al., eds.), Springer (2006), 125-134.
[17] D. La Torre, F. Mendivil, Iterated function systems on multifunctions and inverse problems, J. Math. Anal. Appl., 340, no. 2 (2008), 1469-1479.
[18] D. La Torre, E. R. Vrscay, A generalized fractal transform for measure-valued images, Nonlinear Anal., 71, no. 12 (2009), e1598-e1607.
[19] D. La Torre, E. R. Vrscay, M. Ebrahimi and M. Barnsley, Measure-valued images, associated fractal transforms, and the affine self-similarity of images, SIAM J. Imaging Sci., 2, no. 2 (2009), 470-507.
[20] D. La Torre, F. Mendivil, Union-additive multimeasures and self-similarity, Commun. Math. Anal., 7, no. 2 (2009), 51-61.
[21] F. Mendivil, E. R. Vrscay, Fractal vector measures and vector calculus on planar fractal domains, Chaos Solitons Fractals, 14, no. 8 (2002), 1239-1254.
[22] F. Mendivil, E. R. Vrscay, Self-affine vector measures and vector calculus on fractals, in: Fractals in Multimedia (Minneapolis, MN, 2001), IMA Vol. Math. Appl., 132, Springer, New York (2002), 137-155.
[23] U. Mosco, Energy functionals on certain fractal structures, J. Convex Anal., 9 (2002), 581-600.
[24] A. Teplyaev, Gradients on fractals, J. Funct. Anal., 174 (2000), 128-154.
[25] H. Triebel, Fractals and Spectra Related to Fourier Analysis and Function Spaces, Monographs in Mathematics, 91, Birkhäuser, Basel, 1997.
[26] M. Zähle, Harmonic calculus on fractals - a measure geometric approach II, Trans. Amer. Math. Soc., 357, no. 9 (2005), 3407-3423.
