
August 2011, Vol. 30, No.

Special Section:

Multiple attenuation


What about the source ghost?


GeoStreamer GS™
Broadest bandwidth. Increased confidence. Reduced risk.

GeoStreamer + GeoSource
GeoSource™: the ghost-free source, with enhanced and robust source design. GeoStreamer with GeoSource: the totally ghost-free acquisition solution.

Oslo Tel: +47 67 526400 Fax: +47 67 526464

London Tel: +44 1932 376000 Fax:+44 1932 376100

Houston Tel: +1 281 509 8000 Fax:+1 281 509 8500

Singapore Tel: +65 6735 6411 Fax:+65 6735 6413

A Clearer Image
www.pgs.com/GeoStreamerGS

Table of Contents

838 ....... Interferometry: Source-receiver interferometry for seismic wavefield construction and ground-roll removal, C. Duguid, D. Halliday, and A. Curtis
844 ....... Fractures: Fracture network engineering for hydraulic fracturing, W. Pettitt, M. Pierce, B. Damjanac, J. Hazzard, L. Lorig, C. Fairhurst, I. Gil, M. Sanchez, N. Nagel, J. Reyes-Montes, and R. Young
854 ....... Meter Reader: Delineation of concealed basement depression from aeromagnetic studies of Mahanadi Basin, East Coast of India, B. Sarma and V. Chakravarthi
858 ....... Interpreter's Corner: Thinner than expected? Don't blame the seismic, J. Coggins
933 ....... SEG 2011: SEG Annual Meeting returns to San Antonio, D. Clark
946 ....... ISEF: Empowering Tomorrow's Innovators in Los Angeles: The 2011 ISEF, R. Nolen-Hoeksema
948 ....... ISEF: Acoustic imaging using optimized beamforming techniques, A. Feldman

Special section: Multiple attenuation
862 ....... Introduction to this special section: Multiple attenuation, B. Goodway
864 ....... Multiple attenuation: Recent advances and the road ahead (2011), A. Weglein, S. Hsu, P. Terenghi, X. Li, and R. Stolt
876 ....... Exemplifying the specific properties of the inverse scattering series internal-multiple attenuation method that reside behind its capability for complex onshore and marine multiples, P. Terenghi, S. Hsu, A. Weglein, and X. Li
884 ....... Elimination of land internal multiples based on the inverse scattering series, Y. Luo, P. Kelamis, Q. Fu, S. Huo, G. Sindi, S. Hsu, and A. Weglein
890 ....... Resolution on multiples: Interpreters' perceptions, decision making, and multiple attenuation, L. Hunt, S. Reynolds, M. Hadley, S. Hadley, Y. Zheng, and M. Perz
906 ....... Applications of interbed multiple attenuation, M. Griffiths, J. Hembd, and H. Prigent
914 ....... Case studies in 3D interbed multiple attenuation, D. Brookes
920 ....... Enhanced demultiple by 3D SRME using dual-sensor measurements, R. van Borselen, R. Hegge, T. Martin, S. Barnes, and P. Aaron
928 ....... True azimuth 3D SRME in the Norwegian Sea, P. Smith, B. Szydlik, and T. Traylen

Departments
830 ...... Editorial Calendar
832 ...... President's Page
834 ...... From the Other Side
836 ...... Foundation News
954 ...... Memorials
958 ...... Reviews
960 ...... Calendar
962 ...... Announcements
964 ...... Personals
964 ...... Membership
966 ...... Postal Report
967 ...... Advertising Index
968 ...... Interpreter Sam

Cover design by Kathy Gamble. Graphics on cover were provided by Al Rendon/SACVB.

Petrel is a mark of Schlumberger. © 2011 Schlumberger. All rights reserved. 11-IS-0135

Deliver confident prospect selections

Petrel 2011
E&P Software Platform

Exploration scalability
www.slb.com/petrel2011

Capture prospect uncertainty from the start; assess seal capacity and charge timing as you interpret seismic, make maps, and calculate volumes, all in one application. Deliver confident decisions with Petrel.


The Leading Edge

August 2011

2010-2011 SEG Executive Committee

President

The Leading Edge Editorial Board

Chairman

Klaas Koster
Apache Corporation 2000 W Sam Houston Pkwy S. Suite 2000 Houston, TX 77042-3622 Ph: 281-302-2568 klaas.koster@apachecorp.com

Reinaldo J. Michelena
iReservoir.com, Inc. 1490 W. Canal Court, Suite 2000 Littleton, CO 80120, USA Ph: 1-303-713-1112 michelena@ireservoir.com

President-elect

Bob A. Hardage
Bureau of Economic Geology University Station, Box X Austin, TX 78713, USA Ph: 1-512-471-0300 Fax: 1-512-471-0140 bob.hardage@beg.utexas.edu

First vice president

Gregory S. Baker
University of Tennessee 1412 Circle Drive Knoxville, TN 37996, USA Ph: 1-865-974-6003 Fax: 1-865-974-2368 gbaker@tennessee.edu

THE LEADING EDGE (Print ISSN 1070-485X; Online ISSN 1938-3789) is published monthly by the Society of Exploration Geophysicists, 8801 S. Yale Ave., Tulsa, Oklahoma 74137 USA; phone 1-918-497-5500. Periodicals postage paid at Tulsa and additional mailing offices.
POSTMASTER: Send changes of address to THE LEADING EDGE, Box 702740, Tulsa, OK 74170-2740 USA.

Mike Graul
Texseis, Inc. 10810 Katy Freeway, Suite 201 Houston, TX 77043, USA Ph: 1-713-465-3181 Fax: 1-713-462-8618 mgraul@texseis.com

Second vice president

William Goodway
Apache Canada 9 Avenue Southwest Calgary, AB T2P 3V4 Canada Ph: (403) 303-5958 bill.goodway@apachecorp.com

Susan Webb
University of the Witwatersrand School of Geosciences Wits, 2050 South Africa Ph: +27 11 717 6606 Fax: +27 11 717 6579 susan.webb@wits.ac.za

Vice president

Alan Jackson
Shell International E&P 3737 Bellaire Blvd. Houston, TX 77001, USA Ph: 1-713-245-8389 alan.jackson@shell.com

Alfred L. Liaw
Anadarko Petroleum Corporation 1201 Lake Robbins Drive The Woodlands, TX 77380, USA Ph: 1-832-636-1225 Fax: 1-832-636-9625 alfred.liaw@anadarko.com

Shuki Ronen
CGGVeritas shuki.ronen@gmail.com

Secretary-treasurer

Tad Smith
Apache Corporation 2000 Post Oak Blvd. Suite 100 Houston, TX 77056, USA Ph: 1-713-296-6251 tad.smith@apachecorp.com

John Eastwood
Imperial Oil Resources 237 4th Ave. SW P.O. Box 2480, Station M Calgary, AB T2P 3M9, Canada Ph: 1-403-237-2777 Fax: 1-403-237-4447 john.eastwood@exxonmobil.com

Editor

Special editor

Vladimir Grechka
Shell 200 North Dairy Ashford Houston, TX 77079, USA Ph: 1-281-544-3196 Fax: 1-281-544-2995 vladimir.grechka@shell.com

Christopher Liner
University of Houston 312 Science & Res. Bldg. 1 Houston, TX 77204, USA Ph: 1-713-743-9119 Fax: 1-713-748-7906 chris.liner@gmail.com

The SEG editor is an ex-officio member of THE LEADING EDGE Editorial Board. THE LEADING EDGE digital edition is available at: http://www.seg.org/resources/publications/tle-digital-edition.

STEVEN DAVIS, SEG executive director; TED BAKAMJIAN, director, publications; DEAN CLARK, editor; JENNY KUCERA, associate editor; SPRING HARRIS, assistant editor; KATHY GAMBLE, graphic design manager; ROBERT L. MILLER, graphic production designer; MERRILY SANZALONE, senior publications coordinator. Advertising information and rates: MEL BUCKNER, phone 1-918-497-5524. Editorial information: phone 1-918-497-5535; fax 1-918-497-5557; e-mail dclark@seg.org. Subscription information: e-mail membership@seg.org.

Print subscriptions for members of the Society in good standing are included in membership dues paid at the World Bank III and IV rate. Dues for Active and Associate members for 2011 vary depending on the three-tiered dues structure based on World Bank classification of the member's country of citizenship or primary work residence. Dues are US$90 (World Bank IV countries), $48 (World Bank III countries), and $12 (World Bank I and II countries). Dues for all Student members regardless of country of citizenship or primary residence are $21 and include online access to journals. Students may receive TLE in print by paying an additional $36. Print and online single-site subscriptions for academic institutions, public libraries, and nonmembers are as follows: $155, Domestic (United States and its possessions); $190, Surface Freight (Canada, Mexico, Central and South America, Caribbean); and $200, Mandatory Air Freight (Europe, Asia, Middle East, Africa, and Oceania). For corporations and government agencies, print and online single-site subscriptions are: $840, Domestic (United States and its possessions); $875, Surface Freight (Canada, Mexico, Central and South America, Caribbean); and $885, Mandatory Air Freight (Europe, Asia, Middle East, Africa, and Oceania). Print-only subscriptions for corporations and government agencies are: $340, Domestic (United States and its possessions); $375, Surface Freight (Canada, Mexico, Central and South America, Caribbean); and $385, Mandatory Air Freight (Europe, Asia, Middle East, Africa, and Oceania). Rates are subject to change without notice. Subscriptions to the SEG Digital Library include subscriptions to TLE. Subscribers to GeoScienceWorld are entitled to a $30 discount off print-only subscriptions to TLE. See www.seg.org/publications/subscriptions for ordering information and details. Single-copy price is $16 for members and $32 for nonmembers. Postage rates are available from the SEG business office.
Advertising rates will be furnished upon request. No advertisement will be accepted for products or services that cannot be demonstrated to be based on accepted principles of the physical sciences. Statements of fact and opinion are made on the responsibility of the authors and advertisers alone and do not imply an opinion on the part of the officers or members of SEG. Unsolicited manuscripts and materials will not be returned unless accompanied by a self-addressed, stamped envelope. Copyright 2011 by the Society of Exploration Geophysicists. Material may not be reproduced without written permission. Printed in USA.


Editorial Calendar

Issue ....... Special Section theme ....... Due date ....... Guest editors

2011
Sep ....... Time-lapse measurements ....... past due ....... Colin MacBeth, colin.macbeth@pet.hw.ac.uk; Reinaldo Michelena*, michelena@ireservoir.com
Oct ....... Land acquisition, vibroseis ....... past due ....... Chris Liner*, chris.liner@gmail.com; Bob Rosenbladt, bob.rosenbladt@shell.com; Shuki Ronen*, shuki.ronen@gmail.com
Nov ....... Canada, Arctic technology ....... past due ....... William Goodway*, bill.goodway@apachecorp.com; Jeff Deere, jeff@keyseismic.com; Tad Smith*, Tad.Smith@apachecorp.com
Dec ....... Physics of rocks ....... 15 Aug 2011 ....... Colin Sayers, csayers@slb.com; Tad Smith*, Tad.Smith@apachecorp.com; Chris Liner*, chris.liner@gmail.com

2012
Jan ....... Near-surface measurements in exploration geophysics ....... 15 Sep 2011 ....... Greg Baker*, gbaker@tennessee.edu; Panos Kelamis, panos.kelamis@aramco.com
Feb ....... Carbonate research in China ....... 15 Oct 2011 ....... Enru Liu, enru.liu@exxonmobil.com; Sam Sun, samzdsun@yahoo.com; Arthur Cheng, arthur.cheng@bakeratlas.com
Mar ....... Mining geophysics ....... 15 Nov 2011 ....... William Goodway*, bill.goodway@apachecorp.com
Apr ....... Marine and seabed technology ....... 15 Dec 2011 ....... Shuki Ronen*, shuki.ronen@gmail.com
May ....... Seismic inversion for reservoir properties ....... 15 Jan 2012 ....... William Goodway*, bill.goodway@apachecorp.com; Reinaldo Michelena*, michelena@ireservoir.com; Tad Smith*, Tad.Smith@apachecorp.com
Jun ....... 30th Anniversary issue TLE ....... 15 Feb 2012 ....... Dean Clark, dclark@seg.org
Jul ....... Mediterranean region ....... 15 Mar 2012 ....... Gabor Tari, Gabor.Tari@omv.com; Chris Liner*, chris.liner@gmail.com; Shuki Ronen*, shuki.ronen@gmail.com

(* Current TLE Board member)
The Norwegian University of Science and Technology (NTNU) in Trondheim represents academic eminence in technology and the natural sciences as well as in other academic disciplines ranging from the social sciences, the arts, medicine, and architecture to fine art. Cross-disciplinary cooperation results in innovative breakthroughs and creative solutions with far-reaching social and economic impact.

Notice to authors
TLE publishes articles on all areas of applied geophysics and disciplines which impact it. To submit a paper for possible publication in a specific issue, please e-mail an inquiry to the appropriate guest editor for that issue. Authors are encouraged to submit their papers at any time, regardless of whether they fit the schedule. To submit an article on an unscheduled topic, contact Dean Clark, TLE editor, dclark@seg.org or 1-918-497-5535.

Electronic submission of articles
Electronic submissions should include the manuscript file, figures and other graphics, a PDF of the manuscript and figures, and the author's contact information. These files can be uploaded to an FTP site (the preferred method) or burned to a CD and mailed to the appropriate editor. Once accepted for TLE, the files will be opened and edited on a Mac or a PC using various software applications. To simplify conversion, figures should be submitted in TIFF, PDF, or EPS (.tif, .pdf, or .eps) file formats, with a resolution of at least 300 dpi (dots per inch). High-resolution images can be placed in Word or PowerPoint if placed large on the page; these will be converted to PDF format. Once the paper is accepted, please also mail the appropriate editor a printed color copy of the manuscript with any figures, tables, and equations to be included. For assistance with electronic submission, contact Tonia Payne, tpayne@seg.org or 1-918-497-5575. More details are online at www.seg.org/publications/tle/tle_file_prep.shtml.
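The file-format and resolution requirements above are easy to pre-check before uploading. The helper below is a hypothetical sketch (it is not part of SEG's submission system, and the function and variable names are invented for illustration): it verifies that a figure's extension is one of the accepted formats and that its pixel width is sufficient to print at 300 dpi for an intended print width.

```python
# Hypothetical pre-submission check for TLE figures (illustrative only).
# Assumes: accepted formats are TIFF/PDF/EPS and minimum resolution is
# 300 dpi, as stated in the submission instructions.
from pathlib import Path

ACCEPTED_EXTENSIONS = {".tif", ".tiff", ".pdf", ".eps"}
MIN_DPI = 300

def min_pixel_width(print_width_inches, dpi=MIN_DPI):
    """Minimum pixel width needed to print at the given width and resolution."""
    return int(print_width_inches * dpi)

def check_figure(filename, pixel_width, print_width_inches):
    """Return a list of problems; an empty list means the figure passes."""
    problems = []
    if Path(filename).suffix.lower() not in ACCEPTED_EXTENSIONS:
        problems.append(f"{filename}: format not accepted (use TIFF, PDF, or EPS)")
    needed = min_pixel_width(print_width_inches)
    if pixel_width < needed:
        problems.append(f"{filename}: only {pixel_width} px wide; need >= {needed} px")
    return problems

# Example: for a figure intended to print about 3.5 inches wide,
# 300 dpi implies at least 1050 px across.
print(check_figure("figure1.tif", 900, 3.5))   # 900 px is below the threshold
print(check_figure("figure1.tif", 1200, 3.5))  # passes the sketch's checks
```

The 3.5-inch print width in the example is an assumption about a single-column figure, not a stated TLE requirement; substitute the actual intended size.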
Notice to lead authors
Lead authors of articles published in TLE who are not members of SEG should apply for a one-year free membership and subscription to TLE by contacting Member Services, fax 1-918-497-5557 or membership@seg.org. Lead or corresponding authors also are required to sign a copyright transfer agreement, which gives TLE permission to publish the work and details the magazine's and the author's rights. TLE staff will send a form to be signed and sent back after the article is accepted for publication. The form can be downloaded at www.seg.org/publications/tle/copyright.shtml.

Research professorships
Research professorship in Seismic Interpretation at Norwegian University of Science and Technology (NTNU), Trondheim, Norway (IVT-38/11)
Research professorship in Enhanced Oil Recovery at Norwegian University of Science and Technology (NTNU), Trondheim, Norway (IVT-39/11)

For more information please contact Professor Martin Landrø, martin.landro@ntnu.no, or Professor Jon Kleppe, jon.kleppe@ntnu.no. Applications are to be sent to the Norwegian University of Science and Technology, Faculty of Engineering Science and Technology, 7491 Trondheim, Norway. The file number for the positions, IVT-38/11 or IVT-39/11, is to be clearly stated on the application. Application deadline: 20 August 2011.
Jobbnorge.no

Please see the full advertisements on www.jobbnorge.no, or visit NTNU's homepage http://nettopp.ntnu.no/?kat=N_JOBB


PASSIONATE ABOUT SEISMIC


All Polarcus vessels are equipped with the latest high-end seismic acquisition, navigation, and positioning technologies. Taken together, the vessels and the data acquisition systems provide complete flexibility for Polarcus to meet the entire range of possible seismic survey objectives using marine towed streamer techniques. Find out more, visit www.polarcus.com. Scan the QR-code with your smart phone and read more about our GREEN agenda.


Society of Exploration Geophysicists

President's Page: An expectation of more dedicated service


An issue that has dominated much time and energy of several sectors of our Society for the past three years has been the need to implement a new system of Bylaws. This has required repeated discussion and analysis by the Constitution and Bylaws Committee, SEG's senior management team at the Tulsa business office, several Executive Committees, and numerous other individuals. Following the rejection of proposed Bylaws by the Council in 2010, a revised version of Bylaws was published for Council and member consideration in the July 2011 TLE. The Bylaws will be debated at the Council meeting in San Antonio and submitted for an approve-or-reject Council vote. If approved, the Bylaws go to the Active membership for an approve-or-reject vote. Before the Council vote, any arguments for revising the proposed Bylaws will be heard. Any revisions requested to the published Bylaws will be approved or rejected by Council vote before a final vote occurs.

A central theme is that the new Bylaws require more dedicated service by those elected to governance positions. Increasing the level of dedication is accomplished in several ways: getting more people involved in governance, lengthening the terms of governance officers, and expanding the number of official governance meetings.

Increasing the number of people involved in governance
For decades, Executive Committees have consisted of only seven people. In recent years, their titles have included President, President-elect, First Vice President, Second Vice President, Vice President, Secretary-Treasurer, and Editor. The proposed Bylaws define the governing body of the Society as a 14-member Board of Directors (which can expand to 17 members if needed). Thus, the proposed Bylaws double the number of people in SEG's basic governing body.

Chair of the Council
A new officer position will be created, the Chair of the Council. This officer, elected by the Council, will serve a three-year term on the Board of Directors to give the Council a better voice in ongoing SEG governance and business issues.

Increasing the lengths of officer terms
Board of Directors. Currently, only two members of an Executive Committee serve two-year terms, the Editor and the President-Elect. The remaining five members serve one year. This short-term service has hindered progress on initiatives that require multiple years to implement. The proposed Bylaws eliminate all one-year terms and require multiple-year terms for all members of the Board of Directors. Eleven members of the 14-member Board will serve three-year terms. The others will serve two years. Officers with two-year terms will be the Editor, Treasurer, and Second Vice President (who becomes First Vice President in her/his second year). Thus, the Bylaws require longer service of all Board members. People elected to some Board positions during the first

two years of new governance will serve shorter terms so that a staggered service sequence is established in which approximately one-third of the Board steps down each year.

Council. The proposed Bylaws require Council members to serve three years. This will go a long way toward establishing a more effective Council. For years, Council work has been handicapped by annual roll-off of many members who serve only one year. It has been difficult to impossible for the business office, and the Council itself, to keep track of this continually changing membership. The proposed Bylaws should create stability in Council membership. During the first two years of new governance, some Council members will be elected to terms of either one year or two years to create a staggered schedule that will result in approximately one-third of the Council stepping down each year. The Bylaws allow Council members to serve consecutive terms.

Increasing the number of governance meetings
Executive Committees have always met in formal sessions at least four times per year and have also engaged in one or more conference-call meetings each year. The new Bylaws require the Board of Directors to meet at least four times, so there is no change regarding how many days SEG's governing body will meet formally each year. However, the new Bylaws implement important changes that affect how and when the Council meets. The current Bylaws constrain the Council to meet only one time per year, at the Annual Meeting, and require that members meet face to face. The proposed Bylaws remove both constraints. The Council will continue to meet at the Annual Meeting but can meet at any other time if called by the Board, the Chair of the Council, or a majority of Council members. Perhaps most importantly, Council members can participate in meetings by remote telecommunication if a face-to-face meeting is not practical. Only 60% of Council members have participated in the annual face-to-face Council meeting since the mid-1990s, so the freedom to use telecommunication technology for remote Council participation should allow almost 100% of Council membership to be engaged. Also, rather than Council members participating in only one meeting per year, it is quite possible they may meet two or more times in some years depending on what challenges confront SEG.

Conclusion
If these proposed Bylaws are approved by the Council and then by the Active membership, SEG will need more people to step forward to participate in Society governance, either on the Council or on the Board of Directors. Those elected to either governance group must be prepared to serve multiple years in office. As stated, SEG governance will now require more dedicated service.

BOB A. HARDAGE
President-elect

Omega
SEISMIC PROCESSING SYSTEM

Enhance Productivity and Improve Project Turnaround


The Omega* seismic processing system is now available for license to clients for in-house processing. The fully scalable system offers:

- easy and intuitive management of complex processing and imaging workflows
- interactive processing tools
- seamless integration with geophysical interpretation systems
- access to the latest developments and software updates from our global research and support teams.

*Mark of Schlumberger. © 2011 Schlumberger. 11-se-0084

Used in the field and in our dedicated processing facilities, the Omega system delivers unmatched geophysical functionality and user efficiency.

www.westerngeco.com/omega

Industry-leading processing system available for license


We listen to your challenges. We understand your needs. We deliver value.


From the other side


A column by Lee Lawyer with stories about geophysics and geophysicists

On or about 21 June 1921, geophysical shots were heard around the world! It was an experiment conducted near Oklahoma City to test the feasibility of acquiring reflection seismic data. Fifty years later, a monument was erected near the site of that experiment to commemorate that early work. W. B. (Robby) Robinson, SEG President at that time, presided at the dedication ceremony. On 20 June 2011, almost exactly 40 years later and 90 years after the original experiment, the Geophysical Society of Oklahoma City (GSOC) staged a rededication of that same monument to the efforts of those early pioneers in exploration geophysics.

GSOC was the tenth Section (30 September 1952) to be chartered by SEG. Sections didn't start with the beginning of SEG. Provisions for their formation were placed in the newly written constitution in 1947. Along with the formation of Sections came the SEG Council, which is the ruling body of the SEG. The Geophysical Society of Tulsa was the first Section (2 February 1948), perhaps because Tulsa was the location of the SEG business office, or not. The second Section (2 February 1948) was the Geophysical Society of Houston. I had trouble discovering No. 3. I sort of assumed it would be Dallas. Wrong! Dallas was No. 4 (7 August 1948). In that case it had to be the Permian Basin Geophysical Society in Midland. Wrong! It was No. 7 (30 January 1950). What is going on here? I looked up the Southeastern Geophysical Society in New Orleans. Surely they were No. 3. Wrong! It was No. 13 (1 April 1954). Do you give up? The third Section to be chartered by SEG was ... wait for it ... the Pacific Coast Geophysical Society in Los Angeles and Bakersfield (12 April 1948). So much for intuition. The first non-U.S. Section was the Canadian SEG. It was chartered on 24 January 1952 as No. 9.

All of this is beside the point. The monument GSOC placed back in 1971 is granite from southern Oklahoma. I tried to call it Tishomingo granite, but it looks sort of brownish in color. I believe that Tishomingo granite is very pink because of the large crystals of orthoclase feldspar. I need to go there with a hand lens to check the rock's origin.

[Photo: W. B. Robinson (left) and John C. Karcher at the 1971 dedication of the monument commemorating the first seismic reflection experiment.]

It is a little unclear who championed the original placement of the monument. Robby Robinson was a member of GSOC, was General Chairman of the SEG Annual Meeting in Oklahoma City in 1967, and was elected SEG President for 1970-1971. He probably served as a GSOC officer, but I don't have those records. Another candidate is Dick Schneider. He tells me that his office is just a few blocks from the monument. He has kept an eye on things and seen to its maintenance for many years. I accused him of laying a wreath on the monument every June in honor of the Unknown Geophysicist. But that doesn't apply here, because we know who the geophysicists were and what they did to merit the monument. I asked him if there had been any geological graffiti sprayed on it, but he said there was none.

I tried to keep this column sort of light, but in Oklahoma City they take this memorial seriously, as is illustrated by the gathering on June 20. I am told that about 100 attended. Dawson Geophysical conducted a seismic survey in the vicinity using vibroseis. Tough to reenact the actual work, because they used dynamite back in 1921! I am told that the vibroseis data were acquired and processed the same day! Hats off to the Oklahoma City Geophysical Society for restoring the monument to a pristine condition and for sponsoring this ceremony. Good show.

I decided to check the GSOC on the Internet and came across an interesting item. Earlier in this article I said that the shots near Oklahoma City were to test the feasibility of acquiring reflection seismic data. On the GSOC site, they say that it was to investigate some oil seeps in the nearby Belle Isle creek! I have never heard that. The Oklahoma City Field was discovered in late 1928 at 6400 ft (a deep well for that time period). The field encroached on the city and there was a lot of conflict between various operators. Oklahoma City passed an ordinance allowing only one well per city block! A year or so later the governor of Oklahoma declared martial law around the wells to prevent flagrant law violations. This pattern of city ordinances and martial law to enforce them continued for several years. A spectacular event occurred in 1930 when the Mary Sudik No. 1 blew out during completion and scattered crude oil across the countryside. I know you have seen pictures of it.

[Photo: The cleaned-up monument as it looks in 2011.]

I checked out the results of the new seis! We (the geophysical world) can't claim the discovery of Oklahoma City Field is a result of a seismic survey. We could do it today, but it was discovered before we could get around to it.

To contact the Other Side, call or write L. C. (Lee) Lawyer, Box 441449, Houston, TX 77244-1449 (e-mail LLAWYER@prodigy.net).

Field Proven Navigation Systems

SeaPro Nav

TriggerFish

With more vessels equipped with SeaPro Nav and TriggerFish, Sercel's powerful integrated portfolio is quickly becoming the new generation of navigation systems.

Nantes, France sales.nantes@sercel.com Houston, USA sales.houston@sercel.com

ANYWHERE. ANYTIME. EVERY TIME.

www.sercel.com


Foundation News
Where are they now? Updates from three Geoscientists Without Borders projects
Geoscientists Without Borders is transforming lives around the world by providing humanitarian application of geophysical solutions to global problems. Currently, 11 projects have received grants and have teams in the field, not only making an impact in those countries but also impacting the reputation of geophysics around the world. Here is a closer look at the work being performed in three current projects.

Greece. Imagine piecing together the puzzles of history, searching for clues to unlock little-known secrets of an ancient past. For the students and partner geophysicists on the EuroscienceGreece project, this is a reality. The goal of this project, which began in February 2011, is to detect the tomb of Roxanne, the second wife of Alexander the Great, and find new evidence about this fascinating figure of the past.

So far, the excavations have revealed remains of buildings and sanctuaries in the public and residential quarters of the ancient city of Amphipolis, northern Greece. Extensive parts of the defensive walls are preserved to considerable height and are in good condition. Using seismic methods, IP/resistivity, ground-penetrating radar, and electromagnetic and magnetic technology, project personnel hope to find various items in the tomb that will provide insight to a little-documented period of history; protect items from vandalism and black-market sales; and provide a source of economic stability for the local population through the establishment of a museum. The project will demonstrate that geophysical mapping of ancient tumuli in Greece and Cyprus is an effort in preserving the cultural information of a population.

[Photo: Project members in the field in Greece.]

Romania. Would you know if heavy metals were in your water or your soils? Thousands of years of mining and minerals processing in the Zlatna mining region in central Romania have caused severe environmental damage with important consequences for human and environmental health. With the help of Geoscientists Without Borders, students from the University of Bucharest (Romania) are making strides to raise awareness and help educate the local population about the problem. The project uses geophysical and geochemical methods to analyze the contamination present at the soil level, underground terrain, hydrological, and hydrogeological systems in the area polluted by years of ore processing and waste handling.

Seismic, geoelectric, GPR, and magnetic data were recorded in July-August 2010. Geochemical sampling for soil and water was realized throughout two acquisition campaigns conducted December 2010 through March 2011; geochemical analyses were completed at the Prospectiuni SA Laboratories; geophysical and geochemical data have been processed and integrated in interpretation; and near-surface geological sections and pollution maps are currently being finalized.

This project has truly proved beneficial, in terms of careers and experiences, for the students involved from the Faculty of Geology and Geophysics, University of Bucharest, along with students from three other universities (Babes-Bolyai University, Cluj-Napoca; Mines University, Petrosani, Romania; and Belgrade University, Serbia). These students have been involved in all stages of the project. They learned how to design a geophysical survey; perform good geophysical data acquisition; process and interpret the data; collect geochemical samples; use advanced geochemical laboratories for geochemical analysis; and complete pollution maps. As a result, two scientific papers were completed, using the available data, and presented at the 2011 EAGE Annual Convention in Vienna.

[Photo: Students from the University of Bucharest perform geochemical sampling in Romania.]

Jamaica. All eyes were on Haiti when it was struck by a devastating 7.0-magnitude earthquake in 2010. Realizing that other coastal countries could suffer if hit with a similar earthquake, a timely Geoscientists Without Borders project began in Kingston, Jamaica, with the goal of locating active faults and assessing the probability of future, large tsunami-producing earthquakes occurring near Kingston, the capital and economic center of the country.

In early 2011, students and professors spent two productive weeks collecting chirp seismic data across Kingston Harbor. More than 150 km of high-resolution chirp data, collected across the harbor in the past year, reveal evidence
836 The Leading Edge August 2011

of active faulting. The trends of several of these faults match the strike angles estimated from statistical analysis of microearthquakes in the vicinity of Kingston, indicating that these faults are likely active. The chirp data also clearly reveal widespread evidence of liquefaction and slope failure across the harbor, with one large slide in particular in the northeast corner of the harbor. This slide coincides with the location of a large tsunami that occurred during the 1907 earthquake, and the researchers believe the slide represents the source of this tsunami. The study indicates that active thrusts and transtensional faults exist beneath Kingston Harbor. Sediment coring next year will offer insight into the timing of deformation along these faults.

The Kingston community has rallied behind this initiative. A prime example of community empowerment stemmed from a breakfast meeting and ensuing discussion. Interest in the project spread and resulted in the local businessmen and women of Kingston raising funds for new seismic monitors to be placed in areas of interest based on early project results. Not only does this endeavor add to Jamaica's earthquake-monitoring network in crucial areas, but the community initiative has added additional value to the GWB project as well.
Our primary motivation for assessing geohazards in Kingston, Jamaica, is to make a valuable, hopefully lifesaving impact on the people of Jamaica before the next big earthquake occurs. Too often, the scientific community studies geohazards after a disaster happens, when there is relatively little that geoscientists can contribute, because the damage has already been done. Working together with the people of Kingston, SEG's Geoscientists Without Borders program is helping the citizens of Jamaica avoid repetition of the type of disaster that Haiti experienced in 2010.

MATT HORNBACH
University of Texas at Austin

Are you interested in more? For more information on these compelling projects, including the additional Geoscientists Without Borders projects, please visit www.seg.org/gwb/currentprojects. Geoscientists Without Borders was established by the SEG Foundation in 2008 with a US $1 million leadership investment from Schlumberger. Additional commitments have since been received from Santos, Global Geophysical, CGGVeritas, and Geophysical Pursuit. For more information on Geoscientists Without Borders, please visit www.seg.org/gwb.

NATALIE BLYTHE
SEG Foundation Development Officer, Communications
Editor's note: Geoscientists Without Borders is a registered trademark of the SEG Foundation.


Interferometry

Source-receiver interferometry for seismic wavefield construction and ground-roll removal


CRAIG DUGUID, University of Edinburgh, presently Tullow Oil DAVID HALLIDAY, Schlumberger Cambridge Research ANDREW CURTIS, University of Edinburgh

Seismic interferometry describes the construction of unmeasured wavefield responses (or Green's functions) between two or more points by applying cross-correlation, deconvolution, or convolution to seismic data recordings. The practical implications are that, applying inter-receiver interferometry, a virtual (imaginary) source of energy can be created at the location of a real receiver by using energy recorded from surrounding sources. Similarly, by using inter-source interferometry, a virtual receiver can be created at the location of a real source by using energy recorded at surrounding receivers (Curtis et al., 2009). These two methods can be combined to create the new technique of source-receiver interferometry (Curtis, 2009; Curtis and Halliday, 2010; Halliday and Curtis, 2010), which synthesizes real-source to real-receiver Green's function estimates using only energy recorded at a surrounding boundary of receivers and from an additional surrounding boundary of sources. The boundary sources in each case can be active (such as an explosive or vibrating source) or passive (such as noise from anthropogenic activity or ocean waves). This paper describes the first real-data application of source-receiver interferometry and demonstrates its potential to enhance existing methods of interferometric ground-roll suppression.

Exact expressions for the construction of inter-receiver Green's functions by cross-correlation are presented in Wapenaar and Fokkema (2006) and van Manen et al. (2006). For practical reasons, we use a number of approximations to the exact theory of seismic interferometry. To summarize, the equations utilize a high-frequency approximation, assume that the surrounding sources used in the interferometric construction exist on a closed boundary with large radius separated at intervals no greater than that required by the Nyquist spatial sampling criterion, and assume that the medium on and outside the boundary of sources is homogeneous.
Under these assumptions, and when dealing with sources with unknown power spectra, the following frequency-domain expression allows an estimate of the causal (time-forward) and acausal (time-reversed) Green's functions between two receiver locations to be obtained. This is achieved by cross-correlating particle-velocity measurements at two receivers and integrating the result over each source position on a surface S surrounding both receivers (after Wapenaar and Fokkema, 2006):

    G(r2, r1, ω) + G*(r2, r1, ω) ≈ C(ω) ∫_S v(r2, r, ω) v*(r1, r, ω) dS.    (1)

Here, G(r2, r1, ω) is the Green's function representing the wavefield (particle velocity) at r2 due to a monopolar point source at r1; v(r2, r, ω) is the observed particle velocity at r2 due to a source at r on S; * denotes complex conjugation; and C(ω) is an (unknown) frequency-dependent scaling factor. The multiplication of one quantity with the complex conjugate of the other on the right-hand side of Equation 1 is equivalent to cross-correlation in the time domain. Complex conjugation in the frequency domain results in time reversal, and hence G* on the left of Equation 1 is the acausal (time-reversed) Green's function. This means that the result of applying the operations on the right is to produce two Green's functions (or seismograms), both starting from zero time, but one extends toward positive times while the other extends toward negative times.

A similar expression exists for inter-receiver interferometry by convolution:

    G*(r2, r1, ω) ≈ C(ω) ∫_S v(r2, r, ω) v(r1, r, ω) dS.    (2)

The application of this expression requires that the source boundary S surrounds only one of the receivers at r1 and r2, not both as required for correlation in Equation 1. In the case of convolution in Equation 2, only the acausal Green's function is recovered.

In an extension of the theory for inter-receiver interferometry, exact expressions can be formulated for Green's function retrieval by source-receiver interferometry (Curtis and Halliday, 2010). These exact expressions can be simplified under the same assumptions as for Equations 1 and 2 above to (Curtis and Halliday, 2010):

    G(r2, r1, ω) ≈ C(ω) ∫_S2 v(r′, r1, ω) [ ∫_S1 v(r2, r, ω) v*(r′, r, ω) dS ] dS′.    (3)

An example geometry for the source position r1, the receiver position r2, and the boundaries S1 (containing sources r) and S2 (containing receivers r′) is illustrated in Figure 1. This equation allows the wavefield between a real source at r1 and a real receiver at r2 to be constructed using interferometry, without directly measuring this wavefield. A number of potential applications of source-receiver interferometry to noise removal, quality evaluation of the results of interferometry, and seismic imaging are discussed in Curtis and Halliday (2010), Halliday and Curtis (2010), and Poliannikov (2011).

In this paper, we apply inter-receiver interferometry by both cross-correlation and convolution, as well as source-receiver interferometry, to a shallow land-seismic data set. The results illustrate the potential of source-receiver interferometry in surface-wave (ground-roll) suppression, allowing the method to be added to the developing suite of interferometric techniques for the removal of ground roll.

Source-receiver interferometry data example
In order to illustrate the application of inter-receiver and
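The operational content of Equation 1 is simple enough to sketch in a few lines of array code: for every boundary source, multiply the spectrum recorded at one receiver by the complex conjugate of the spectrum at the other, then stack over sources. The following Python sketch does this for an idealized 1D constant-velocity medium with zero-phase Ricker wavelets; all geometry, velocity, and wavelet parameters are invented for illustration and are not taken from the survey described in this paper.

```python
import numpy as np

def ricker(t, f0):
    """Zero-phase Ricker wavelet with peak frequency f0."""
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

nt, dt = 2048, 0.001                       # samples and sample interval (s)
t = np.arange(nt) * dt
c = 1000.0                                 # assumed propagation velocity (m/s)
xA, xB = 0.0, 200.0                        # two receivers; we want G between them
sources = -np.linspace(500.0, 1500.0, 21)  # boundary sources, all on one side
                                           # (the stationary-phase region for this pair)

stack = np.zeros(nt)
for xs in sources:
    vA = ricker(t - (xA - xs) / c, 25.0)   # recording at receiver A from source xs
    vB = ricker(t - (xB - xs) / c, 25.0)   # recording at receiver B from source xs
    VA, VB = np.fft.rfft(vA), np.fft.rfft(vB)
    # Integrand of Equation 1: cross-correlation in the frequency domain
    stack += np.fft.irfft(VB * np.conj(VA), nt)

# The causal part of the stack peaks at the inter-receiver traveltime
lag = np.argmax(stack) * dt
print(lag, (xB - xA) / c)                  # both should be 0.2 s
```

Swapping the correlation product `VB * np.conj(VA)` for a plain product `VB * VA` gives the convolutional form of Equation 2.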
838 The Leading Edge August 2011

I n t e r f e r o m e t r y

Figure 1. One canonical geometry for source-receiver interferometry between a real source at r1 and a real receiver at r2. Stars represent sources and triangles represent receivers.

Figure 2. Survey acquisition geometry. Data recordings exist from a receiver at position r1 (black in-filled circle), receivers on a closed boundary S2 (green stippled line), and on a 24-geophone linear array (R1-R24, black/white triangles), from sources on a running-track-shaped boundary S1 (red line), a circular boundary S3 (blue line), and from a single source at position r1 (black in-filled circle). In the data examples presented here, only sources and receivers marked by in-filled stars and triangles on boundaries S1, S2, and S3 have been used in the interferometric constructions. These denoted sources and receivers lie within approximate stationary-phase regions for the considered frequency range.

source-receiver interferometry, use is made of data acquired at the Schlumberger Cambridge Research Centre. The survey geometry for the experiment is illustrated in Figure 2. Vertical-component geophones recording particle velocity were located at position r1, at 4-m intervals on a convolutional boundary S2, and at 2-m intervals on a line of 24 geophones oriented approximately east-west and centered at point C (notice that north is approximately to the left in Figure 2). Active shot records from an accelerated weight-drop source were made from a source positioned at r1, from a set of source positions separated by 4-m intervals on the correlational source boundary S1, and from source positions separated by 4-m intervals on the convolutional boundary S3. All results plotted herein are either true measured responses or interferometric constructions for receivers on the 24-geophone line and for either a real or a virtual source at location r1.

Before constructing wavefield responses by interferometry, a number of preprocessing steps were applied to all recorded data. In the first instance, a frequency-dependent transfer function was used to correct the recorded seismic data for discrepancies in the frequency response of the specific geophones used at each location. The procedure converted the responses of the 14-Hz geophones on S2 and of the 4.5-Hz geophones on the 24-geophone linear array to the response of the 10-Hz geophone at position r1. Two steps were taken in order to reduce the effect of unwanted low-frequency ambient noise, principally generated by vehicles driving on surrounding roads throughout the data acquisition period. Five individual shot records were acquired at each shot location. Following a visual inspection of each shot record, only clean records were stacked to obtain single-shot records, thus maximizing the signal-to-noise ratio of records from each shot location. Furthermore, a high-pass filter with a corner frequency of 10 Hz was applied to the entire data set to further suppress this low-frequency noise. The Nyquist frequency for this data set is 125 Hz; thus all results have a bandwidth of 10-125 Hz.

Figure 3 depicts examples of both the preprocessed (real-source) waveforms and interferometrically constructed (virtual-source) waveforms for a receiver (C) at the center of the linear array (taken as the average response of the two geophones either side of this central location, denoted by white triangles in Figure 2).
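The two noise-suppression steps just described, stacking of repeated shots and 10-Hz high-pass filtering, can be sketched as follows. This is an illustrative numpy implementation on synthetic data: the 0.004-s sample interval is chosen only to reproduce the 125-Hz Nyquist frequency quoted above, and a crude brick-wall filter stands in for whatever filter taper was actually used in the survey processing.

```python
import numpy as np

nt, dt = 1000, 0.004                      # gives a 125-Hz Nyquist, as in the text
t = np.arange(nt) * dt
rng = np.random.default_rng(0)

signal = np.sin(2 * np.pi * 40.0 * t)     # stand-in for the seismic signal
# Five repeated shots: signal + random noise + coherent 5-Hz "traffic" noise
shots = [signal + 0.5 * rng.standard_normal(nt) + np.sin(2 * np.pi * 5.0 * t)
         for _ in range(5)]

stacked = np.mean(shots, axis=0)          # stacking suppresses the random noise

# High-pass filter with a 10-Hz corner: zero all spectral components below 10 Hz
spec = np.fft.rfft(stacked)
freqs = np.fft.rfftfreq(nt, dt)
spec[freqs < 10.0] = 0.0
filtered = np.fft.irfft(spec, nt)

# The coherent 5-Hz noise is removed; the 40-Hz signal survives
amp5 = abs(np.fft.rfft(filtered)[np.argmin(abs(freqs - 5.0))])
amp40 = abs(np.fft.rfft(filtered)[np.argmin(abs(freqs - 40.0))])
```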
The black dashed trace in each panel of Figure 3 is the particle velocity measured at point C from an active source at position r1 (this direct recording is never used in the interferometric Green's function constructions). Higher-frequency (approximately 50-80 Hz) direct and guided body-wave arrivals are evident between 0.05 and 0.2 s arrival time, whereas a strong surface-wave envelope, consisting of both fundamental- and higher-mode surface waves, is observed at arrival times between 0.6 and 0.9 s. One further significant arrival is observed slightly after 0.4 s arrival time, which is identified as a refracted shear-wave arrival. Weak scattered-wave or residual ambient-noise energy is present at arrival times greater than 1 s.

In addition to this directly recorded single shot, three interferometric constructions are presented in Figure 3. The solid red curve in Figure 3a represents the response at point C from a virtual source at position r1 constructed by applying Equation 1 to data recorded by the receiver at position r1 and the receivers at C from sources depicted by red stars on S1 in Figure 2. These sources lie in the approximate stationary-phase region which Snieder (2004) showed was the main contributing part of the source boundary for constructing directly propagating waves in the causal interferometric estimate between a virtual source at r1 and the receivers at C by cross-correlation. Only the causal (time-forward) part of the result has been plotted.
Figure 3. Colored traces are seismic interferometry results plotted for a receiver at point C in Figure 2, taken as an average of the traces recorded at receivers 12 and 13 on the linear array (white triangles), using a virtual source at position r1. The black dashed trace in each case is the particle velocity measured at point C from an active source at position r1. (a) The causal inter-receiver cross-correlation result (solid red) constructed using data recorded from active sources on boundary S1, denoted by red in-filled stars in Figure 2. (b) The inter-receiver convolution result (solid blue) constructed using data recorded from active sources on boundary S3, denoted by blue in-filled stars in Figure 2. (c) The source-receiver result (solid green) constructed using both data recorded from active sources on boundary S1, denoted by red in-filled stars, and data recorded at receivers on boundary S2, denoted by green triangles in Figure 2. In all cases, a high-pass frequency filter with a corner frequency of 10 Hz has been applied before plotting (active-source data) and before interferometry.

The solid blue curve in Figure 3b represents the response at point C from a virtual source at position r1 constructed by applying Equation 2 to data recorded at the receiver at position r1 and the receivers at C from sources depicted by blue in-filled stars on S3. These sources lie in the approximate stationary-phase region for constructing directly propagating waves in the interferometric estimate between a virtual source at r1 and the receivers at C by convolution.

Source-receiver interferometry can be applied to the geometry presented in Figure 2 via Equation 3. In practice, application of Equation 3 first involves calculating Green's function estimates between each receiver on the convolutional boundary S2 and each receiver on the linear array. This is achieved by calculating the response at each receiver on the linear array due to a virtual source created at each receiver location on the convolutional boundary S2, using sources on the correlational boundary S1 via, for example, Equation 1. We then have an estimate of the Green's function between each receiver on S2 and each receiver on the linear array. To complete the source-receiver interferometry process in Equation 3 for one linear-array receiver, we first convolve the response at each given receiver on S2 from the source at r1 with the newly constructed response between this receiver on S2 and the linear-array receiver, then integrate the results over all receivers (virtual sources) on the boundary S2. The result of applying Equation 3, using only data recorded from sources depicted by the red in-filled stars on S1 and data recorded at the green in-filled triangles on S2, is shown by the solid green curve in Figure 3c.

Figure 3 shows that each of the three interferometric methods has recovered the directly measured wavefield response (black dashed line) to differing levels of accuracy.
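The two-step recipe just described, correlation to create virtual sources on S2 followed by convolution and stacking over S2, can be sketched for an idealized 1D geometry. Everything below (positions, velocity, wavelet) is invented for illustration; the single S2 point is placed at the stationary-phase location between the real source and the receiver, mirroring the restriction to stationary-phase portions of the boundaries used in the field example.

```python
import numpy as np

def ricker(t, f0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

nt, dt, c, f0 = 2048, 0.001, 1000.0, 25.0
t = np.arange(nt) * dt
x_src, x_rcv = 0.0, 200.0      # real source r1 and real receiver r2
x_s2 = 30.0                    # one S2 boundary receiver between r1 and r2 (stationary point)
x_s1 = -500.0                  # one S1 boundary source in the stationary-phase region

def trace(x_from, x_to):
    """Recording at x_to from an impulsive source at x_from (1D, constant velocity)."""
    return ricker(t - abs(x_to - x_from) / c, f0)

# Step 1 (Equation 1): correlate recordings from the S1 source to build a
# virtual-source response between the S2 receiver and the real receiver.
V_r2 = np.fft.rfft(trace(x_s1, x_rcv))
V_s2 = np.fft.rfft(trace(x_s1, x_s2))
G_virtual = V_r2 * np.conj(V_s2)           # correlation peaks at (x_rcv - x_s2)/c

# Step 2 (Equation 3): convolve with the real recording at the S2 receiver
# from the real source r1, i.e., multiply the spectra.
V_real = np.fft.rfft(trace(x_src, x_s2))
estimate = np.fft.irfft(G_virtual * V_real, nt)

t_est = np.argmax(estimate) * dt
print(t_est, abs(x_rcv - x_src) / c)       # both should be 0.2 s
```

In a full 2D implementation, steps 1 and 2 are repeated for every S2 receiver and every S1 source, and the results are summed over both boundaries.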
In each case, the surface waves (ground roll) are the most identifiable events recovered, with the relative amplitude errors of the source-receiver and inter-receiver correlational results contrasting with noticeable phase errors in the wavefield recovered by inter-receiver convolutional interferometry. The source-receiver result also appears to be the only one to reconstruct the event with an arrival time of just over 0.4 s; significant nonphysical arrivals (artifacts constructed in the interferometry result which do not correspond to physically propagating waves) present in the inter-receiver cross-correlation result would mask this arrival if it exists there at all. These nonphysical arrivals are constructed when the approximations assumed in the derivation of Equations 1, 2, and 3 are violated. Particularly likely in this example are nonphysical arrivals present in the final result due to using a source boundary which does not extend in depth to include sources within the subsurface of the Earth, as required by exact formulations of the theory (see Halliday and Curtis, 2008). In practice, using only the highlighted stationary-phase source and receiver positions on the boundaries inhibits construction of further nonphysical arrivals compared to using the full 2D boundaries shown in Figure 2. The nonphysical arrivals evident in the inter-receiver cross-correlation result (Figure 3a) are a likely cause of the distinct ringing pattern in the source-receiver result (Figure 3c). This follows from the fact that the source-receiver expression (Equation 3) also uses a series of wavefields constructed by inter-receiver cross-correlation interferometry, integrated over virtual sources on S2.

Application to removal of ground roll
One additional difference between the directly measured wavefield response (black dashed line in Figure 3) and each of the interferometrically constructed wavefields is that the high-frequency body waves evident in the directly measured wavefield (principally present between 0.05 and 0.2 s arrival time, and noticeably present up to 0.5 s arrival time) have not been substantially recovered by any of the interferometric methods.
Figure 4. Illustration of adaptive subtraction of source-receiver surface-wave estimates applied to data recorded on the 24-geophone linear array. (a) Data recorded from an active source at position r1 in Figure 2. (b) The source-receiver estimate with a virtual source at position r1. (c) Surface-wave estimate resulting from adaptively matching the data in (b) to the data in (a). (d) Plot (a) minus plot (c). Receiver number increases toward the east. In all cases, a high-pass frequency filter with a corner frequency of 10 Hz has been applied before plotting (a) and before both interferometry and further data processing of (b), (c), and (d).

This fact is again linked to the concept of stationary-phase analysis (Snieder, 2004), which can explain why surface waves are expected to dominate the results of interferometry experiments which utilize sources only on the surface of the medium, i.e., in this case using sources at the Earth's surface only (Halliday et al., 2007; Halliday and Curtis, 2008; Forghani and Snieder, 2010). For this reason, interferometry has been proposed as a potentially useful technique for estimating and subsequently attenuating surface waves from seismic data. In the context of noise attenuation for land-based exploration seismology, this method is commonly referred to as interferometric ground-roll removal (Curtis et al., 2006; Dong et al., 2006; Curtis and Halliday, 2010; Halliday et al., 2010; van Wijk et al., 2010). The method takes advantage of the observed dominance of surface waves (ground roll) in waveforms constructed via seismic interferometry in order to estimate, and then adaptively subtract, surface waves from conventionally recorded seismic shot gathers.

Using inter-receiver interferometry, this method requires collocated sources and receivers in a seismic experiment, whereby the receiver used in constructing the interferometric (surface-wave) estimate must be close enough to a real source position in order to produce a valid surface-wave estimate for data recorded from that real source. Using source-receiver interferometry, collocated sources and receivers are not required because the interferometric construction creates a virtual source directly from the real source itself, allowing a surface-wave estimate to be produced via interferometry for a source at the exact location of that used to acquire the real

shot record. A drawback of using source-receiver interferometry in the application of interferometric ground-roll removal is the requirement of having two appropriate boundaries (see Figure 1), compared to the one boundary required for inter-receiver interferometry.

Figure 4 illustrates the adaptive subtraction of surface waves, estimated using source-receiver interferometry, from data recorded on the linear array from a real source at position r1 in Figure 2. Figure 4a depicts data (preprocessed as described above) as recorded by each receiver on the 24-receiver linear array from a real source at r1. The wavefield responses constructed by source-receiver interferometry are presented in Figure 4b. Note again the lack of high-frequency body waves in the source-receiver estimate. There is also some evidence of nonphysical energy in the source-receiver estimate upon comparison of Figures 4a and 4b. The source-receiver interferometry result in Figure 4b is used as an estimate of the surface waves between position r1 and each linear-array receiver. This estimate has then been adaptively subtracted from the data shown in Figure 4a using the method of Halliday et al. The adaptively filtered surface-wave estimate (Figure 4c) principally contains directly
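Adaptive subtraction of this kind is typically posed as a least-squares matching problem: find a short filter that, convolved with the interferometric ground-roll estimate, best matches the recorded data, then subtract the filtered estimate. The sketch below is a generic single-trace Wiener-style matching filter on synthetic data, not the specific adaptive algorithm of Halliday et al.; all wavelets, delays, and amplitudes are invented for illustration.

```python
import numpy as np

def ricker(t, f0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

nt, dt = 400, 0.004
t = np.arange(nt) * dt

body = 0.3 * ricker(t - 0.2, 40.0)          # early high-frequency body wave (to preserve)
gr_est = ricker(t - 1.0, 8.0)               # interferometric ground-roll estimate
recorded = body + 0.7 * np.roll(gr_est, 3)  # data: body wave + shifted, scaled ground roll

# Build a matrix whose columns are time shifts of the estimate; solving the
# least-squares problem recorded ~ A @ f yields an 11-tap matching filter f.
A = np.column_stack([np.roll(gr_est, k) for k in range(-5, 6)])
f, *_ = np.linalg.lstsq(A, recorded, rcond=None)

matched = A @ f                 # adaptively filtered ground-roll estimate (cf. Figure 4c)
result = recorded - matched     # ground roll removed, body wave retained (cf. Figure 4d)
```

Because the body wave does not resemble any short-filtered version of the ground-roll estimate, the subtraction removes the ground roll while leaving the body wave largely untouched, which is the behavior described for Figure 4d.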
propagating surface waves. The final result of the adaptive subtraction process (Figure 4d) contains little evidence of the lowest-frequency (approximately 10-20 Hz) surface waves arriving between 0.75 and 0.9 s. The slightly higher-frequency (approximately 20-30 Hz) arrivals around approximately 0.7 s are not well recovered in the source-receiver interferometry result (Figure 4b), and thus persist in the final result (Figure 4d).

Conclusions
This paper describes the first real-data application of the new, source-receiver form of interferometry. The example demonstrates that source-receiver interferometry enables the removal of directly propagating surface waves using similar algorithms to previous studies that used inter-receiver interferometry (Curtis et al., 2006; Dong et al., 2006; Halliday et al., 2007; Curtis and Halliday, 2010; Halliday et al., 2010; van Wijk et al., 2010). The advantage of source-receiver interferometry is that receivers and sources do not need to be collocated; the disadvantage is the requirement for an extra boundary of sources or receivers compared to the inter-receiver or inter-source methods. The positive comparison of the performance of source-receiver interferometry versus the other two methods when constructing Green's function estimates from real data provides further evidence of the future potential of source-receiver interferometry.
References
Bakulin, A., and R. Calvert, 2004, Virtual source: new method for imaging and 4D below complex overburden: 74th Annual International Meeting, SEG, Expanded Abstracts, 2477-2480.
Curtis, A., 2009, Source-receiver seismic interferometry: 79th Annual International Meeting, SEG, Expanded Abstracts, 28, 3655-3659.
Curtis, A., P. Gerstoft, H. Sato, R. Snieder, and K. Wapenaar, 2006, Seismic interferometry: Turning noise into signal: The Leading Edge, 25, no. 9, 1082-1092, doi:10.1190/1.2349814.
Curtis, A., and D. Halliday, 2010, Source-receiver wavefield interferometry: Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 81, no. 4, 046601, doi:10.1103/PhysRevE.81.046601.
Curtis, A., H. Nicolson, D. Halliday, J. Trampert, and B. Baptie, 2009, Virtual seismometers in the subsurface of the Earth from seismic interferometry: Nature Geoscience, 2, no. 10, 700-704, doi:10.1038/ngeo615.
Dong, S., R. He, and G. Schuster, 2006, Interferometric prediction and least-squares subtraction of surface waves: 76th Annual International Meeting, SEG, Expanded Abstracts, 2783-2786.
Forghani, F., and R. Snieder, 2010, Underestimation of body waves and feasibility of surface-wave reconstruction by seismic interferometry: The Leading Edge, 29, no. 7, 790-794, doi:10.1190/1.3462779.
Halliday, D., and A. Curtis, 2008, Seismic interferometry, surface waves and source distribution: Geophysical Journal International, 175, no. 3, 1067-1087, doi:10.1111/j.1365-246X.2008.03918.x.
Halliday, D., and A. Curtis, 2010, An interferometric theory of source-receiver scattering and imaging: Geophysics, 75, no. 6, SA95-SA103, doi:10.1190/1.3486453.
Halliday, D. F., A. Curtis, J. O. A. Robertsson, and D.-J. van Manen, 2007, Interferometric surface-wave isolation and removal: Geophysics, 72, no. 5, A69-A73, doi:10.1190/1.2761967.
Halliday, D. F., A. Curtis, P. Vermeer, C. Strobbia, A. Glushchenko, D.-J. van Manen, and J. O. A. Robertsson, 2010, Interferometric ground-roll removal: Attenuation of scattered surface waves in single-sensor data: Geophysics, 75, no. 2, SA15-SA25, doi:10.1190/1.3360948.
Poliannikov, O., 2011, Retrieving reflections by source-receiver wavefield interferometry: Geophysics, 76, no. 1, SA1-SA8.
Snieder, R., 2004, Extracting the Green's function from the correlation of coda waves: A derivation based on stationary phase: Physical Review E: Statistical, Nonlinear, and Soft Matter Physics, 69, no. 4 Pt 2, 046610, doi:10.1103/PhysRevE.69.046610.
van Manen, D.-J., A. Curtis, and J. O. A. Robertsson, 2006, Interferometric modeling of wave propagation in inhomogeneous elastic media using time reversal and reciprocity: Geophysics, 71, no. 4, SI47-SI60, doi:10.1190/1.2213218.
van Wijk, K., D. Mikesell, T. Blum, M. Haney, and A. Calvert, 2010, Surface-wave isolation with the interferometric Green tensor: 80th Annual International Meeting, SEG, Expanded Abstracts, 29, 3996-4000.
Wapenaar, K., and J. Fokkema, 2006, Green's function representations for seismic interferometry: Geophysics, 71, no. 4, SI33-SI46, doi:10.1190/1.2213955.

Acknowledgments: Schlumberger Cambridge Research and the Natural Environment Research Council are thanked for their generous support of this research. Many thanks go to Ed Kragh, Everhard Muyzert, Gavin Menzel-Jones, Jim Smith, and Colin Kay for their assistance in data acquisition. Craig Duguid was at the University of Edinburgh when this work was carried out.

Corresponding author: Craig.duguid@tullowoil.com



Fractures

Fracture network engineering for hydraulic fracturing


WILL PETTITT, MATT PIERCE, BRANKO DAMJANAC, JIM HAZZARD, LOREN LORIG, and CHARLES FAIRHURST, Itasca Consulting Group IVAN GIL, MARISELA SANCHEZ, and NEAL NAGEL, Itasca Houston JUAN REYES-MONTES and R. PAUL YOUNG, Applied Seismology Consultants

Fracture network engineering (FNE) involves the design, analysis, modeling, and monitoring of in-field activities aimed at enhancing or minimizing rock-mass disturbance. FNE relies specifically on advanced techniques to model fractured rock masses and correlate microseismic (MS) field observations with simulated microseismicity generated from these models. Hydrofracture stimulation is an example where FNE is playing a role, with hydraulic treatments now being widely used to optimize production volumes and extraction rates in petroleum reservoirs, enhanced geothermal systems, and preconditioning operations in caving mines. MS monitoring is now becoming a standard tool for evaluating the geometry and evolution of the fracture network induced during a given treatment, principally by source-locating MS hypocenters and visualizing these with respect to the treatment volume and infrastructure. The integrated use of synthetic rock mass (SRM) modeling of the hydrofracturing with enhanced microseismic analysis (EMA) within FNE provides a feedback loop in which SRM is enhanced and constrained by the information provided by the MS data. This improves interpretation via direct observation of the micromechanics within the distinct-element models used. Recent developments in both SRM and EMA technologies are described using case studies of the techniques applied to hydrofracture stimulations. We identify and discuss some future developmental challenges these technologies face, including their further integration and validation so as to provide more efficient and robust application of the FNE approach.

Introduction
Society's growing demand for energy and minerals has motivated engineering industries to maximize the productivity of these resources and investigate the exploitation of new resources in increasingly challenging environments.
Reservoir stimulations in, for example, tight-gas shales and engineered geothermal systems have become normal practice in the exploitation and development of these energy resources. The technologies use hydraulic fracturing to engineer the reservoir rock properties by inducing new fractures and enhancing the permeability of the existing fracture network. The aim of a hydraulic treatment is thus the creation of pathways between different volumes within the reservoir or the enhancement of the conductivity of pre-existing joints. Fluid injection in rock reservoirs is also used for the permanent storage of CO2 in order to abate the emissions of greenhouse gases from large single-point sources and mitigate the potential environmental impact of the predicted growth in these emissions. Passive MS monitoring is now a well-established method for imaging the effect of a downhole injection and the changes imposed on the fracture network (e.g., Young and Maxwell, 1992; Vandamme et al., 1993; Maxwell et al., 1998; Young and Baker, 2001). It potentially provides real-time feedback on the effectiveness of a hydrofracture stimulation, either through the mapping of the fracture progress using MS event location (e.g., Drew et

al., 2005; Quirein et al., 2007) or by using waveform amplitudes to provide information on the mechanics of the fracturing (e.g., Maxwell et al., 2010; Pettitt et al., 2010). The challenge of developing an effective network of fractures is part of a general issue that arises in many sectors of rock engineering: namely, how to characterize and predict the mechanical behavior of a rock mass through an engineering project. The analytical complexity of these problems, and the inability to test the rock mass behavior directly on a large scale, have led to the development of a variety of empirical rules that are used widely in practical rock engineering design. As projects become more ambitious and extend beyond prior experience, such rules become increasingly unreliable. Therefore, attention is turning to numerical models, aided during development and validation by in-situ observation, to establish the large-scale response of rock in practical situations. Pine and Cundall (1985) illustrate an attempt to model the effect on fracture extension of injecting water at high pressure into fractured granite at a depth of more

Figure 1. Fracture network engineering applied to hydraulic fracturing of a petroleum reservoir. Synthetic seismicity generated within a synthetic rock mass model is compared with observed MS signals for potentially real-time control of fracture network development.

F r a c t u r e s

than 1.5 km below the Rosemanowes quarry Hot Dry Rock site in Cornwall, UK. The model used was a modified version of the explicit finite-difference code of Cundall (1980). At the site, microseismic activity was monitored and analyzed in order to track fracture development and propagation during injection (Batchelor et al., 1983). Implementations of Cundall's explicit finite-difference modeling procedure are fully dynamic and allow seismic and microseismic effects generated, for example, by rock fracturing, to be explicitly predicted and extracted from the models (most recently, Cundall and Damjanac, 2009; Pierce et al., 2009; Damjanac et al., 2010). This scheme allows model predictions to be directly compared with field observations and allows the reasons for differences between the two to be examined and then used to upgrade the numerical model of the rock mass and its response to engineering. FNE thus involves the integrated use of two main technologies (Figure 1):
1) A synthetic rock mass (SRM) numerical model. SRM mod-

els combine bonded-particle modeling with discrete fracture networks (describing the pre-existing network of joints, faults, or other fractures) to represent the rock mass at a representative scale. SRM samples are subjected to the same mechanical or fluid disturbance expected in the field and produce synthetic seismicity that can be compared directly with MS data collected in the field.

2) Enhanced microseismic analysis (EMA). EMA is used to map a disturbed or enlarging fracture network within a monitored rock volume using MS data acquisition, signal processing, and interpretation. The feedback provides first-order information to engineers on fracture development in situ, and provides network statistics such as fracture orientations, connectivity, and failure mechanisms.

The combined SRM-EMA procedure has been used successfully in several practical applications, including design and control of very large surface and underground mining (massive caving) operations, evaluation of empirical rules for the effect of size on the strength of rock masses, monitoring of hydraulic fracturing in petroleum production, and observation of microseismic activity around underground openings (e.g., Young and Pettitt, 2000; Hazzard and Young, 2004; Al-Busaidi et al., 2005; Reyes-Montes et al., 2007; Reyes-Montes et al., 2010). The validation of the predictive models resulting from the SRM technique makes it possible to develop robust guidelines for engineering fracture networks based on in-situ conditions, rock mass properties, and operational controls. Figure 1 illustrates the concept of FNE applied to hydraulic fracturing in a petroleum reservoir, although a similar process can be applied across a range of applications in energy production, civil engineering, and mining. An SRM model of the reservoir region to be stimulated is developed, and an initial stimulation plan designed, assuming key variables in the design and geomechanical environment.
Multiple SRM computations can be made, each representing a different stochastic realization of input properties, in order to test the sensitivity of the model and also predict the variability and uncertainty in system response.
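The role of multiple stochastic realizations can be illustrated with a minimal sketch. This is not the authors' SRM software; the function name, the distributions (uniform centers, power-law lengths, von Mises strike scatter), and every parameter value below are assumptions chosen purely for illustration:

```python
import numpy as np

def dfn_realization(n_fractures, region=100.0, l_min=2.0, alpha=2.5,
                    mean_strike_deg=30.0, kappa=10.0, seed=None):
    """Draw one stochastic 2D discrete-fracture-network realization.

    Centers: uniform in a square region (m); lengths: power law with
    exponent alpha (inverse-transform sampling); strikes: von Mises
    scatter about a mean joint-set orientation. All values illustrative.
    """
    rng = np.random.default_rng(seed)
    centers = rng.uniform(0.0, region, size=(n_fractures, 2))
    u = rng.uniform(size=n_fractures)
    lengths = l_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))
    strikes = rng.vonmises(np.deg2rad(mean_strike_deg), kappa, n_fractures)
    return centers, lengths, strikes

# An ensemble of realizations drawn from identical statistics: the total
# fracture length varies between realizations, which is the kind of
# variability and uncertainty an ensemble of SRM runs would quantify.
totals = [dfn_realization(200, seed=s)[1].sum() for s in range(10)]
print(min(totals), max(totals))
```

Each realization honors the same calibrated statistics but differs in detail, so running the downstream model on many such networks brackets the uncertainty in system response.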

Figure 2. SRM model for hydraulic fracturing, being an assemblage of a bonded-particle model of the intact rock and an explicitly defined fracture network. The matrix enables new fracture growth and failure. A joint model enables slip and extension on the pre-existing fractures. The fluid-flow network provides hydraulic pressures in the matrix and fracture system.

Figure 3. A three-dimensional DFN was constructed from field observations in the study area (left). A vertical two-dimensional slice has been taken from the DFN (right) for analysis of fluid injection. (From Damjanac et al., 2010)

Resulting joint slip and new fracture developments within the models are used to produce simulated microseismicity (source locations and mechanisms), which is then processed into fracture network statistics using identical techniques as utilized on observed seismicity. The agreement between the simulated and observed data is able to identify the primary causal effects of the fracture network pattern produced in the field (e.g., Reyes-Montes et al., 2007) and helps interpret the mechanics of failure resulting in the detected microseismicity by direct investigation of the source processes within the distinct element micromechanics (e.g., Hazzard and Young, 2004).

The synthetic rock mass (SRM) model

The synthetic rock mass (SRM) model has been developed to study the strength and deformation behavior of jointed rock in three dimensions (Pierce et al., 2007). The procedure (Figure 2)

is essentially as follows: (1) construct a bonded-particle model (BPM) of the intact rock; (2) establish a discrete fracture network (DFN) that most closely represents the field information (the DFN is derived from field measurements and observations of fractures in the rock mass); and (3) superimpose the DFN onto the intact rock model. This composite structure is assumed to represent the rock mass.

Bonded-particle model (BPM). Potyondy and Cundall (2004) showed that the mechanical behavior of a rock can be represented conceptually as an assembly of grains bonded at their contacts. They proposed the use of a BPM to represent the behavior of real rock. In such a model, the rock is discretized as an assembly of distinct elements held together by intergranular bonds; these can be disks (2D), spheres (3D), or angular particles (e.g., Damjanac and Fairhurst, 2010). Both the microscale geometrical (e.g., size distribution, shape) and mechanical (e.g., stiffness, strength) properties of the particles and the bonds can be selected to match the overall mechanical behavior of a given rock (microscale refers to the scale of the particles, whereas macroscale refers to the scale of the model itself).

Discrete fracture network (DFN). In order to account for the joint fabric within SRM samples, a DFN is produced and calibrated to available measures of joint density, size, and orientation. The DFN thus provides a statistical representation of the system of joints in the rock mass. Joints can be nonpersistent and noncollinear, with bridges of intact rock between large-scale discontinuities. Joint mechanical properties are typically determined from some combination of field measurements, laboratory measurements, and empirical relations.

Smooth joint model. A smooth-joint contact model allows the user to specify macroscopic joints, with a given dimension and orientation, embedded within the assembly, and these can then experience shearing in the manner of a smooth frictional surface.
These are superior to the bumpy joints that result from simpler debonding procedures and ease the computational constraints that previously limited analyses to a relatively small number of joints in 2D. Very large-scale problems can be addressed using the SRM approach. The current limitations, associated primarily with computer memory size and speed, are discussed further in Pierce et al. (2009) and are being addressed using advances in computing technologies and by upgrading the existing software codes to make better use of parallelization on the latest PC CPUs. Damjanac et al. (2010) present an example SRM application to a study of hydraulic fracturing in a naturally fractured rock mass. Although applied in this case to preconditioning the rock to improve its caveability, essentially the same procedure can be followed for stimulating a petroleum or geothermal reservoir, and it has been used in recent new developments for hydraulic fracturing of tight-gas shales in three dimensions. A complete three-dimensional DFN, previously constructed for the studied area, was available. Given the two-dimensional nature of the model in this case, and in order to study the response of the fractured rock mass to fluid injection, a vertical cross section 100 m long and 50 m high was taken from the DFN (Figure 3). The direction of the cross section was chosen to be parallel to the expected azimuth of the

Figure 4. Fracture propagation from an injection well in a simulated naturally fractured rock mass (vertical section through an SRM model). The injection progress at two separate times is shown. Circles indicate fluid pressure into the matrix and hydraulic fracture growth. Lines indicate slip on the pre-existing network of joints.

Figure 5. Schematic diagram showing general relations between fluid and fracture behavior and various parameters during simulated hydraulic fracture propagation from a borehole in distinct element modeling. Fluid is represented by pink ellipses and cracks are represented by blue lines. (From Hazzard et al., 2002)

maximum principal horizontal stress. (Hydraulic fractures tend to propagate perpendicular to the minimum stress.) Figure 4 shows the stimulation path developed from an example injection point within the pre-existing fracture network. Hydraulic fracture development is not only restricted to new fracture growth within the rock matrix, but slip also occurs on pre-existing fractures as the stresses change in the vicinity of the injection point. This disturbance is not only observed on connected joints, but also remotely due to stress transfer from the injection. New fractures are observed to bridge intervening volumes of intact rock between existing joints, altogether making hydraulic fracture development a complicated system. Although, as the authors acknowledge, a much more comprehensive program of tests is needed to develop a good understanding of the influence of site and testing variables, the results demonstrate that the SRM model responds fundamentally well to changes in the


operational parameters in the injection process. Hazzard et al. (2002) carried out an extensive series of modeling tests using the distinct element method to investigate the effect of several practical variables in hydraulic fracturing that may influence fracture propagation (Figure 5). These investigators also examined the MS signals generated during the simulated hydraulic fracture initiation and propagation process, to assess the feasibility of using this information in the field to identify the mechanism(s) of propagation (i.e., tensile fracture or shear fracture). The pattern on the lower right indicates that, for the case where the difference between the maximum, σ1, and minimum, σ3, compressive stresses is small, fractures may initiate in several orientations; as the difference increases, the fracture becomes more definitely oriented, propagating perpendicular to σ3. Fracturing, which was oriented more randomly in the case where σ1 and σ3 were approximately equal, becomes restricted to the fracture path and ahead of the tips of the main crack. Increasing fluid viscosity, higher injection rates, etc. (i.e., following the vectors in Figure 5) tend to limit fluid penetration to the vicinity of the main fracture.

Enhanced microseismic analysis (EMA)

Developments in MS monitoring technologies have strengthened the ability to verify the simulations and improve overall understanding of rock mass behavior (e.g., Young and Pettitt, 2000; Reyes-Montes et al., 2007). SRM models use a dynamic time-marching scheme, meaning that MS effects generated (for example, by rock fracturing in the modeling studies) can be predicted by the model and thus correlated with field data.
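One simple way to correlate a simulated event cloud with an observed one, in the spirit of processing both with identical techniques, is to compare their principal orientations derived from the hypocenter covariance. The function names and the synthetic clouds below are invented for this sketch; the network statistics actually used in SRM-EMA studies are considerably richer:

```python
import numpy as np

def principal_orientation(xyz):
    """Dominant trend of an event cloud: the unit eigenvector of the
    hypocenter covariance matrix with the largest eigenvalue."""
    xyz = np.asarray(xyz, dtype=float)
    cov = np.cov(xyz, rowvar=False)
    w, v = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return v[:, -1]

def angular_misfit_deg(v1, v2):
    """Acute angle between two trend vectors, in degrees (sign-free)."""
    c = abs(np.dot(v1, v2)) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

# Synthetic example: two elongated clouds with strikes 30 degrees apart,
# standing in for an "observed" and a "simulated" MS data set.
rng = np.random.default_rng(0)
along = rng.normal(0.0, 50.0, 500)            # scatter along strike (m)
off = rng.normal(0.0, 5.0, (500, 2))          # scatter off strike / depth

def cloud(strike):
    d = np.array([np.cos(strike), np.sin(strike), 0.0])
    n1 = np.array([-np.sin(strike), np.cos(strike), 0.0])
    n2 = np.array([0.0, 0.0, 1.0])
    return np.outer(along, d) + np.outer(off[:, 0], n1) + np.outer(off[:, 1], n2)

obs, sim = cloud(np.deg2rad(40.0)), cloud(np.deg2rad(70.0))
m = angular_misfit_deg(principal_orientation(obs), principal_orientation(sim))
print(round(m, 1))   # close to 30.0: the imposed strike difference
```

A small misfit between the two trends would support the model; a large one flags that the DFN or stress assumptions in the model need revisiting.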
EMA is a suite of technological developments aiming to yield more efficient processing of observed field data in terms of the number and resolution of location results (by increasing the sensitivity of the processing while reducing uncertainties), and to provide a meaningful interpretation of the microseismicity in terms of fracture network connectivity and structure so that the data can be effectively correlated with the numerical simulations (Pettitt et al., 2010). The location of MS events induced from the opening or reactivation of fractures within the reservoir during the hydraulic treatment provides first-order information on the position and extent of the fracture network. More efficient location processing is provided by applying algorithms that allow MS events with lower signal-to-noise ratios to be processed, utilizing events with smaller numbers of P-wave arrivals or single-phase events where generally only the S-waves are recorded. Analysis of fracture network connectivity and structure uses interpretation of the location data combined with algorithms that utilize amplitude information and source parameters processed from the waveforms. Pettitt et al. (2009) investigate this additional amplitude information through analysis of continuous microseismic records. The case studies presented utilized the continuous streams of MS amplitude data recorded during the hydraulic treatments of oil-bearing reservoirs and illustrated that both P- and S-waves can only be identified for a small fraction of the seismic record, with most released seismic energy appearing as single-phase triggers in the record (Figure 6). These triggers are primarily the more energetic S-waves depending on the specific source, path, and re-

Figure 6. Continuous microseismic record from a hydrofracture stimulation that provides challenging data for MS location processing. (top to bottom) Time record, sonogram, treatment and MS location statistics, and microseismic stream energy. (From Pettitt et al., 2009)
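The classical traveltime inversion discussed in this section can be reduced to a toy constant-velocity grid search: pick the trial source whose predicted traveltimes best match the observed arrivals, with the unknown origin time eliminated by demeaning. The geometry, velocity, and names below are assumptions for illustration only; real processing uses calibrated or layered velocity models:

```python
import itertools
import numpy as np

def locate_grid(receivers, t_obs, vp, grid_pts):
    """Grid-search hypocenter: minimize residuals between demeaned arrival
    times and demeaned predicted traveltimes, removing the origin time."""
    t_obs = np.asarray(t_obs, dtype=float)
    best, best_cost = None, np.inf
    for src in grid_pts:
        tt = np.linalg.norm(receivers - src, axis=1) / vp
        r = (t_obs - t_obs.mean()) - (tt - tt.mean())
        cost = np.dot(r, r)
        if cost < best_cost:
            best, best_cost = src, cost
    return np.asarray(best), best_cost

# Synthetic check: six geophones in two monitoring wells, constant vp.
vp = 4000.0                                   # m/s, assumed
receivers = np.array([[0, 0, 1000], [0, 0, 1200], [0, 0, 1400],
                      [500, 300, 1000], [500, 300, 1200], [500, 300, 1400]],
                     dtype=float)
true_src = np.array([300.0, 200.0, 1250.0])
t0 = 0.37                                     # unknown origin time (s)
t_obs = t0 + np.linalg.norm(receivers - true_src, axis=1) / vp
axes = [np.arange(0.0, 501.0, 50.0)] * 2 + [np.arange(1000.0, 1501.0, 50.0)]
grid = [np.array(p) for p in itertools.product(*axes)]
est, _ = locate_grid(receivers, t_obs, vp, grid)
print(est)   # recovers the true grid point (300, 200, 1250) in this noise-free case
```

Even this caricature shows why the velocity model dominates the error budget: a wrong vp biases every predicted traveltime, and hence the located cloud, which is the motivation for the relative-location methods described next.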

ceiver effects at the site. The frequency and energy characteristics contained in the continuous record can be interpreted in terms of fracturing scale and accumulated seismic energy release. It can also provide a means for diagnosing the quality of a particular data set and then optimizing the processing of discrete MS events. A location method robust against the potential loss, or low quality, of phase arrivals could therefore increase significantly the capability of MS monitoring to image the fracture network. The accuracy in the location of MS events, on the other hand, depends on a number of factors, including the accuracy in phase identification and picking, the uncertainty in geophone positioning and orientation, and especially the velocity model used in the location algorithm for the inversion of traveltimes. Full three-dimensional velocity models of the volume comprising the monitoring and treatment wells are rarely available. Classical location algorithms typically invert traveltimes using velocity models approximated from vertical sonic logs from neighboring wells or empirical calibrations using surveyed active sources (e.g., perf shots). Stepwise relative location (Reyes-Montes et al., 2009a) allows a more accurate source location within the observed MS cluster (and thus a more accurate analysis of the fracture network itself), as the technique depends only on the seismic velocity in the im-


Figure 7. Classification of events according to CIf. MS sources are represented as spheres containing the fracture. (From Reyes-Montes et al., 2009a)
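The sphere idealization shown in Figure 7 can be caricatured in a few lines: treat each MS event as a sphere of its calculated source radius and count, for each event, the neighbors whose spheres it touches, so isolated events score zero as in the CIf = 0 filter. This is a loose sketch of the concept only, with invented names and a simplified overlap criterion, not the published CIf definition of Reyes-Montes et al. (2009b):

```python
import numpy as np

def cluster_index(locations, radii, margin=1.0):
    """Toy cluster index: for each MS event (sphere of its source radius),
    count neighbors whose spheres come within `margin` times the summed
    radii. Isolated events score 0, mimicking the CIf = 0 filter."""
    x = np.asarray(locations, dtype=float)
    r = np.asarray(radii, dtype=float)
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    touching = d <= margin * (r[:, None] + r[None, :])
    np.fill_diagonal(touching, False)      # an event does not count itself
    return touching.sum(axis=1)

# Two interacting events plus one distant, isolated event (units: m).
locs = [[0, 0, 0], [8, 0, 0], [100, 0, 0]]
radii = [5.0, 5.0, 5.0]
ci = cluster_index(locs, radii)
print(ci)                  # [1 1 0]: the distant event would be filtered out
connected = [i for i, c in enumerate(ci) if c > 0]
```

Filtering on a positive index keeps only the potentially coalescing cracks, which is the step that highlights connected fluid pathways in the case study below.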

mediate volume of the cluster rather than the velocity structure through the geological formation to the receivers. The approach uses a lattice of well-located master events along a developing hydrofracture to relatively locate target events that have lower signal-to-noise ratios and may consist of only S-wave arrivals. Interpretation of the fracture network connectivity and structure can be further

analyzed by applying statistical techniques to the MS cloud and to source parameters calculated for the events (Reyes-Montes et al., 2009b). The cluster index function (CIf), presented by Reyes-Montes et al. (2009b) for hydrofracture stimulations, provides a means to identify the seismic activity corresponding to the development of connected fracturing that creates paths for fluid transmission. The location of induced MS events is combined with their source dimension, derived from the frequency content, to interpret the degree of interaction between the induced fractures (Figure 7). The CIf allows isolated events to be defined and filtered out, highlighting the volumes where coalesced cracks are created. It has been developed for use with MS monitoring to delineate the volume of rock in which significant damage is accumulating and to assess the state of the rock mass where instability might occur (Falmagne, 2002). The CIf is based on the concepts of critical crack spacing and local crack density presented in Lockner et al. (1992) and Reches and Lockner (1994). It combines source location and effective event size, idealizing MS events as representing spheres, with radii equal to their calculated source radii, which contain the source crack. This idealization of the seismic source means that there is an uncertainty in the actual separation of crack tips because it does not take into consideration the direction of the cracks. However, CIf is a powerful method for identifying potentially interacting or coalescing cracks. Figure 8 shows MS locations reprocessed from data present-



ed by Sharma et al. (2004), where fault communication between formations is observed. A total of 1408 events were relocated from the original data using algorithms described by Pettitt and Young (2007). The events define two major MS clusters located around the injection well at two depth levels. Filtering out those events with CIf = 0, corresponding to isolated events, allows one to visually identify the 685 MS events corresponding to potentially connected fractures. This highlights four major fractures induced during the treatment, providing preferential paths for fluids. The use of the cluster index improves the visual interpretation of the effectiveness of the hydraulic fracturing, highlighting the paths for fluid flow induced by the stimulation. Using further statistical techniques, such as the three-point method, on the located data set obtains the dominant distribution of fracture orientations, separations, and extensions within a network (e.g., Reyes-Montes et al., 2007). The information on pumped proppant timing, type, and volume from the treatment, combined with the fracture network dimensions analyzed from the MS data, can be used to provide an estimate for the range of apertures achieved during the injection (Pettitt et al., 2010).

Figure 8. (a) Located MS events during a single-stage well stimulation exhibiting fault communication, color-scaled to the value of the cluster index. (b) MS events with CIf > 0, indicating a degree of interaction. Grid spacing is 25 m. (From Reyes-Montes et al., 2009a)

Discussion

Although the FNE approach has been applied successfully in rock-engineering problems, and continues to be so applied by the authors, further effort is required to develop the concept into an efficient and validated approach that could be applied to the range of engineering problems within its scope. This effort is planned on both the SRM and EMA sides of the method, and in quantifying and interpreting the correlation between the two. This work requires validation under controlled conditions, preferably through both laboratory and in-situ tests. In that respect the approach has a key advantage, as its principal methodologies are able to scale from laboratory experiments, through underground rock mechanics investigations performed for radioactive waste storage facilities, through to large-scale engineering projects in mining, petroleum, geothermal, and carbon sequestration. Developments and validations of the technologies on both sides will also yield greater understanding of the chemical, thermal, hydraulic, and mechanical processes occurring in engineered rock masses and the coupling occurring between these processes. SRM applications have thus far been applied to representative rock mass samples of up to 100 m in three dimensions (where boundary conditions are applied so that the rock mass volume represents a portion of the full engineered volume in larger-scale projects). New techniques in distinct element modeling (Lorig et al., 2010) are being developed and tested to enable greater computational efficiency and thus application to larger volumes on the scale of hydrofracture dimensions in reservoir stimulation. SRM would further benefit from development of more advanced bond models that are capable of better representing the UCS:tensile strength ratio and friction angle of intact rock. The means for handling joint intersections within SRM models is also worthy of further investigation and refinement. At present, smooth joints may only be applied to a single joint orientation at any contact. As a result, a joint hierarchy must be established by the user; the dominant joint will control the assignment of orientation at the shared contact, while the less dominant joint will have an asperity introduced to its surface. While this may be a reasonable physical representation (because an offset often exists at intersecting joints in nature), the sensitivity of system response to the assumed (or measured) hierarchy and to the character (size, strength) of the introduced asperity should be studied. The behavior of an SRM model is strongly impacted by the geometry of the DFN that is used to represent the in-situ discontinuity network. While it is generally possible to constrain joint orientations and joint density through drilling and scan-line mapping, quantification of joint size and shape remains a challenge. Until in-situ characterization methods improve, it will be necessary to develop a range of DFNs (reflecting the uncertainty in joint size distribution) and corresponding SRMs for any given problem. For EMA, some of the challenges are to continue broadening the dynamic range provided by the monitoring, providing more advanced amplitude studies for mechanisms and fracture


network structure, and to continue the lowering of uncertainties provided in source location, principally from uncertain velocity models in complex reservoir formations or around underground caved mines and large open pits. In doing so, one key objective is to increase sensitivity so that lower-energy events can be processed effectively, for instance, so that more tensile events believed to be associated with the initial fracturing mechanisms of hydraulic fracturing can be collected routinely. For the SRM and EMA technologies to be applied effectively within the time constraints of engineering projects, process developments are required to enable greater interaction, correlation, and visualization between simulated and observed data, and realistic quantification of uncertainties through the methods and correlation.

Conclusions

Fracture network engineering (FNE) is the general term to describe the design, analysis, modeling, and monitoring of infield activities aimed at enhancing or minimizing rock mass disturbance via, for example, fluid injection, blasting, or excavation. The nature and level of disturbance can vary widely, from shearing of pre-existing fractures (e.g., to enhance permeability) to complete disintegration (e.g., to permit ore extraction). FNE builds upon the two main component systems described above, namely, a synthetic rock mass (SRM) model of the field site, and enhanced microseismic analysis (EMA) to provide monitoring of infield activities. Application of the SRM to real problems is associated inextricably with, and dependent on, the field observations of the fracture network. This allows model predictions to be compared with field observations and allows the reasons for differences between the two to be examined, and then used to upgrade the numerical model of the rock mass and its response to stimulation.
Although the FNE approach has been applied successfully in rock-engineering problems, considerable laboratory investigations, field validations, and continued software and algorithm developments are envisaged to apply this strategy effectively, and potentially in real time, so that meaningful decisions on project design can be made and revised in practice. Through this approach, we hope that FNE can deliver on its potential for providing higher productivity from underground energy supply in petroleum and geothermal reservoirs, and expanded environmental benefits in underground storage of hazardous materials.
References

Al-Busaidi, A., J. F. Hazzard, and R. P. Young, 2005, Distinct element modeling of hydraulically fractured Lac du Bonnet granite: Journal of Geophysical Research, 110, B6, B06302, doi:10.1029/2004JB003297.
Batchelor, A. S., R. B. Baria, and K. Hearn, 1983, Monitoring the effects of hydraulic stimulation by microseismic event location: A case study: SPE paper 12109.
Cundall, P. A., 1980, UDEC, a generalized distinct element program for modelling jointed rock: European Research Office, U.S. Army, ref DAJA37-79-C-0548, 69.
Cundall, P. A. and B. Damjanac, 2009, A comprehensive 3D model for rock slopes based on micromechanics: Proceedings of Slope Stability 2009.
Damjanac, B., I. Gil, M. Pierce, M. Sanchez, A. Van As, and J. McLennan, 2010, A new approach to hydraulic fracturing modeling in naturally fractured reservoirs: Proceedings of 44th U.S. and 5th U.S.-Canada Rock Mechanics Symposium, paper ARMA 10400, p. 18.
Damjanac, B. and C. Fairhurst, 2010, Evidence for a long-term strength threshold in crystalline rock: Rock Mechanics and Rock Engineering, doi:10.1007/s00603-010-0090-9.
Drew, J., D. Leslie, P. Armstrong, and G. Michaud, 2005, Automated microseismic event detection and location by continuous spatial mapping: SPE paper 95513.
Falmagne, V., 2002, Quantification of rock mass degradation using microseismic monitoring and applications for mine design: Ph.D. thesis, Queen's University.
Hazzard, J. F., R. P. Young, and S. J. Oates, 2002, Numerical modeling of seismicity induced by fluid injection in a fractured reservoir: Mining and Tunnel Innovation and Opportunity, Proceedings of the 5th North American Rock Mechanics Symposium, University of Toronto Press, 1023–1030.
Hazzard, J. F. and R. P. Young, 2004, Dynamic modeling of induced seismicity: International Journal of Rock Mechanics and Mining Sciences, 41, no. 8, 1365–1376, doi:10.1016/j.ijrmms.2004.09.005.
Lockner, D. A., D. E. Moore, and Z. Reches, 1992, Microcrack interaction leading to shear fracture, in J. R. Tiller and W. R. Wawersik, eds., Proceedings of the 33rd U.S. Symposium on Rock Mechanics: A. A. Balkema, 807–816.
Lorig, L. J., P. A. Cundall, B. Damjanac, and S. Emam, 2010, A three-dimensional model for rock slopes based on micromechanics: Proceedings of 44th U.S. and 5th U.S.-Canada Rock Mechanics Symposium.
Maxwell, S. C., R. P. Young, R. Bossu, A. Jupe, and J. Dangerfield, 1998, Microseismic logging of the Ekofisk reservoir: SPE paper 47276.
Maxwell, S. C., J. Rutledge, R. Jones, and M. Fehler, 2010, Petroleum reservoir characterization using downhole microseismic monitoring: Geophysics, 75, no. 5, 75A129–75A137, doi:10.1190/1.3477966.
Pettitt, W. S. and R. P. Young, 2007, InSite seismic processor user manual v2.14: Applied Seismology Consultants.
Pettitt, W. S., J. M. Reyes-Montes, B. Hemmings, E. Hughes, and R. P. Young, 2009, Using continuous microseismic records for hydrofracture diagnostics and mechanics: 79th Annual International Meeting, SEG, Expanded Abstracts, 1542–1546.
Pettitt, W. S., J. M. Reyes-Montes, J. Andrews, and R. P. Young, 2010, Enhanced imaging of hydraulic fracturing through induced seismicity: Proceedings of 44th U.S. and 5th U.S.-Canada Rock Mechanics Symposium.
Pierce, M., P. Cundall, D. Potyondy, and D. Mas Ivars, 2007, A synthetic rock mass model for jointed rock, in Rock Mechanics: Meeting Society's Challenges and Demands, v. 1: Fundamentals, New Technologies & New Ideas: Taylor & Francis Group.
Pierce, M., D. Mas Ivars, and B. Sainsbury, 2009, Use of synthetic rock masses (SRM) to investigate jointed rock mass strength and deformation behavior: Proceedings of the International Conference on Rock Joints and Jointed Rock Masses, paper 1091.
Pine, R. J. and P. Cundall, 1985, Applications of the fluid-rock interaction program (FRIP) to the modeling of hot dry rock geothermal energy systems: Proceedings of the International Symposium on Fundamentals of Rock Joints, 293–302.
Potyondy, D. O. and P. A. Cundall, 2004, A bonded-particle model for rock: International Journal of Rock Mechanics and Mining Sciences, 41, no. 8, 1329–1364, doi:10.1016/j.ijrmms.2004.09.011.
Quirein, J. A., C. Kessler, J. M. Trela, S. Zannoni, B. Cornish, R. J. Brewer, D. Gordy, W. S. Pettitt, C. B. Walker, J. Laney, and R. P. Young, 2007, Microseismic monitoring of a re-stimulation treatment to a Permian basin San Andres dolomite horizontal well: SPE paper 110333.
Reches, Z. and D. A. Lockner, 1994, Nucleation and growth of faults

F r a c t u r e s

in brittle rocks: Journal of Geophysical Research, 99, no. B9, 18159 18173, doi:10.1029/94JB00115. Reyes-Montes, J. M., W. S. Pettitt, and R. P. Young, 2007, Validation of a synthetic rock mass model using excavation induced microseismicity: Rock Mechanics: Meeting Societys Challenges and Demands (1st Canada-U.S. Rock Mechanics Symposium, Vancouver, May 2007), Taylor & Francis Group, v. 1: Fundamentals, New Technologies & New Ideas, 365369. Reyes-Montes, J. M., W. S. Pettitt, J. R. Haycox, B. Hemmings, J. R. Andrews, and R. P. Young, 2009a, Application of relative location techniques to induced microseismicity from hydraulic fracturing: Proceedings of SPE Annual Technical Conference and Exhibition. Reyes-Montes, J. M., W. S. Pettitt, J. R. Haycox, B. Hemmings, and R. P. Young, 2009b, Microseismic analysis for the quantification of crack interaction during hydraulic stimulation: 79th Annual International Meeting, SEG, Expanded Abstracts, 16521656. Reyes-Montes, J. M., B. Sainsbury, W. S. Pettitt, M. Pierce, and R. P. Young, 2010, Microseismic tools for the analysis of the interaction between open pit and underground developments, in Y. Potvin, ed., Caving 2010, Proceedings of the Second International Symposium on Block and Sublevel Caving: Australian Centre for Geomechanics, 119132. Sharma, M. M., P. B. Gadde, R. Sullivan, R. Sigal, Fielder, D. Copeland, L. Griffin, and L. Weijers, 2004, Slick water and hybrid fracs in the Bossier: Some lessons learnt: SPE paper 89876. Vandamme, L., S. Talebi, and R. P. Young, 1993, Monitoring of a hydraulic fracture in a south Saskatchewan oil field: Journal of Canadian Petroleum Technology, 33, no. 1, 2734. Young, R. P. and S. Maxwell, 1992, Seismic characterization of a highly

stressed rock mass using tomographic imaging and induced seismicity: Journal of Geophysical Research, 97, B9, 1236112373, doi:10.1029/92JB00678. Young, R. P. and W. S. Pettitt, 2000, Investigating the stability of engineered structures using acoustic validation of numerical models: Trends in Rock Mechanics: ASCE Geotechnical Special Publication, 102, 115. Young, R. P. and C. Baker, 2001, Microseismic investigation of rock fracture and its application in rock and petroleum engineering: International Society of Rock Mechanics News Journal, 7, 1927.

Corresponding author: wpettitt@itascacg.com

August 2011

The Leading Edge

853

THE METER READER

Coordinated by Robert Pawlowski

Delineation of concealed basement depression from aeromagnetic studies of Mahanadi Basin, East Coast of India
B.S.P. Sarma, National Geophysical Research Institute V. Chakravarthi, National Geophysical Research Institute, presently Center for Earth & Space Sciences, University of Hyderabad

A detailed study of aeromagnetic anomalies across the Mahanadi Basin on the east coast of India has brought out a concealed basement depression, trending ENE-WSW, in the Rajnagar-Paradeep region. In this article, the aeromagnetic data pertaining to the Rajnagar-Paradeep region are analyzed quantitatively for their basement structure. The inferred basement depression, bounded on the north by a low-angle normal fault and on the south by a steep fault, characterizes a typical half-graben structure. Furthermore, the southern boundary fault of the inferred depression correlates with an ENE-WSW-striking, northward-dipping fault in the offshore region. The fact that the marine gravity anomaly also shows discrete lows over this region substantiates the aeromagnetic interpretation.

Introduction

In hydrocarbon exploration, aeromagnetic maps can provide information on the disposition and orientation of sedimentary basins, even when they are concealed under fluvial deposits. A priori knowledge of the region from aeromagnetic data reduces the risk and cost involved in seismic surveys and aids in properly locating seismic lines. The sedimentary basin fill, generally presumed to be nonmagnetic, acts as a magnetically transparent medium, and the underlying basement, consisting of magnetic minerals, produces magnetic anomalies which can be analyzed quantitatively for the basement configuration of sedimentary basins. The interpretation of magnetic anomalies is generally complex because the anomalies are bipolar in nature; the shape of the anomaly depends on both the magnetic latitude and the strike of the anomalous body. In addition, the presence of remanent magnetism, often unknown, further complicates the interpretation. Unlike gravity anomalies, magnetic anomalies are strongly influenced by the strike of the target being sought in addition to the latitude of its occurrence. The geomagnetic vector, which is horizontal at the magnetic equator, varies gradually and becomes vertical at the magnetic poles. The field strength is roughly halved at the equator compared to its strength at the magnetic poles. A geological body with higher induced magnetization than the host rock produces a high anomaly at the poles, a low at the equator, and a high-low at the middle latitudes. Interestingly, a body with lower induced magnetization than the host rock manifests with a low at the poles, a high at the equator, and a high-low at the middle latitudes. At equatorial regions, a body striking east-west produces a recognizable anomaly, whereas one striking north-south goes unrecognized except at the ends.
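The pole-versus-equator polarity just described can be checked numerically with a minimal point-dipole sketch. This is illustrative code, not software used in this study; the body depth, dipole moment, and 26-station layout are hypothetical. Each cell of a magnetized body contributes a dipole field, and the total-field anomaly is the projection of the summed anomalous field onto the ambient-field direction.

```python
import math

MU0_OVER_4PI = 1e-7  # mu0 / (4 pi) in SI units

def total_field_anomaly(obs_x, cells, inclination_deg, cell_moment):
    """Total-field anomaly (nT) along a profile at z = 0 over a body
    discretized into point dipoles magnetized by induction along the
    ambient field. x is horizontal along the profile, z positive down.

    obs_x       : observation x-coordinates (m)
    cells       : (x, y, z) dipole positions (m)
    cell_moment : dipole moment per cell (A*m^2)
    """
    inc = math.radians(inclination_deg)
    fx, fy, fz = math.cos(inc), 0.0, math.sin(inc)  # ambient-field direction
    anomaly = []
    for x0 in obs_x:
        bx = by = bz = 0.0
        for cx, cy, cz in cells:
            rx, ry, rz = x0 - cx, -cy, -cz          # vector cell -> station
            r = math.sqrt(rx * rx + ry * ry + rz * rz)
            m_dot_rhat = cell_moment * (fx * rx + fy * ry + fz * rz) / r
            k = MU0_OVER_4PI / r ** 3
            bx += k * (3.0 * m_dot_rhat * rx / r - cell_moment * fx)
            by += k * (3.0 * m_dot_rhat * ry / r - cell_moment * fy)
            bz += k * (3.0 * m_dot_rhat * rz / r - cell_moment * fz)
        # Total-field anomaly ~ anomalous field projected onto the
        # ambient direction, converted from tesla to nT.
        anomaly.append((bx * fx + by * fy + bz * fz) * 1e9)
    return anomaly

body = [(24000.0, 0.0, 1000.0)]         # one magnetized cell, 1 km deep
obs = [i * 2000.0 for i in range(26)]   # 26 stations over 0-50 km
at_pole = total_field_anomaly(obs, body, 90.0, 1.0e9)
at_equator = total_field_anomaly(obs, body, 0.0, 1.0e9)
# at_pole peaks positive over the body; at_equator dips negative there.
```

Running the two calls reproduces the rule in the text: the same body yields a high at the poles (inclination 90°) and a low at the equator (inclination 0°).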
Assuming that the magnetization is caused purely by induction, a body striking at an angle θ west of magnetic north at the equator will be magnetized by the component H sin θ in a horizontal direction perpendicular to the strike. Here, H is the horizontal component of the total magnetic field vector along magnetic north. Since the vertical component is zero, the effective magnetization is horizontal for all values of θ and, hence, the magnetic anomalies for different values of θ differ only in size. Furthermore, there is no distinction between the total field and its horizontal component at the equator and, hence, total-field anomalies will be identical to horizontal magnetic field anomalies (Rao and Murthy, 1978).

Figures 1a and 1c show total magnetic field anomalies produced by a graben structure (down-dropped block) trending east-west and a horst structure (upthrown block), whose geometries are shown in Figures 1b and 1d, respectively. In each case, the dip of magnetization is presumed to be 25° and the intensity of magnetization 150 × 10⁻⁵ gammas. In either case, the anomalies are calculated along a traverse oriented south-north across the strike of the respective structure on the plane z = 0, at 26 equispaced observations in the interval 0–50 km. The sediments within the graben (Figure 1b) and around the horst structure (Figure 1d) are presumed to be nonmagnetic, with the basement rocks, magnetized by induction, responsible for generating the anomalies. Notice from Figure 1a that a high with a smoothly varying trend is observed over the graben, with sharp anomalies on either side over the shoulders. On the other hand, a low is observed over the horst (Figure 1c), with a smoothly varying high over the low-angle faulted margin

Figure 1. (a) Total magnetic field anomaly computed over a synthetic graben structure (b), and (c) total magnetic field anomaly computed over a synthetic horst structure (d).


toward the north and a sharp anomaly toward the south. Detailed study of the aeromagnetic map of the Mahanadi Basin, which lies in the low magnetic latitudes, brought out a concealed sub-basin in the Rajnagar-Paradeep region of the basin.

Mahanadi Basin

This is one of several sedimentary basins that developed along the eastern continental margin of India as a result of rifting and breakup of Gondwana, commencing in the Permian and continuing into the Early Cretaceous (Sastri et al., 1974; Sastri et al., 1981; Bharali et al., 1991; Biswas et al., 1993; Fuloria, 1993). A thick section of Holocene alluvium covers many sedimentary basins to the east (Figure 2). The outcropping basement to the west consists of the Eastern Ghats granulite terrain in the south and the banded iron formation (greenstone) granite terrain in the north. The North Orissa Boundary Fault (NOBF), which separates these terrains, runs several hundred kilometers from Central India to the east coast along the Mahanadi River valley. There are about seven sub-basins in the Mahanadi Gondwana Basin in the eastern part of Orissa (Hota et al., 2006). These are typical pull-apart basins, formed along a lineament trending E-W and characterized by segments of a master strike-slip fault with a left-stepping arrangement (Chakravarthi, 2009). The disposition of these Gondwana basins indicates the existence of a master basin in the geological past, with the present basins being its eroded remnants (Hota et al., 2006). The Mahanadi Basin (Figure 3) extends both on land and offshore, with the former covering about 18,000 km² and the latter about 12,500 km². Earlier studies reveal the presence, beneath the Holocene cover, of a few basement depressions, ridges, and lineaments (Figure 2), which have controlled the sedimentation up to the Mesozoic (Fuloria, 1993). The basement depressions and ridges formed near the Cuttack, Paradeep, and Puri regions are prominent structural features.
The Rajnagar ridge, which separates the Cuttack and Paradeep depressions, extends into the offshore region (Sastri et al., 1974). The basin architecture is basically of horst-graben type, with differential subsidence of basement along fault planes. Based on geological and geophysical studies coupled with borehole data, Dash (2000) identified a graben structure to the north of the study area, with basement going down to a depth of 5 km in the shelf region. In the Early Cretaceous, widespread volcanic activity took place in the Mahanadi Basin (Jagannathan et al., 1983; Biswas, 1996), and a few offshore boreholes show that the degree of volcanism increases from northwest to southeast (Fuloria, 1993). The regional structural trends in the Mahanadi Basin are more or less subparallel to the well-developed northeast-southwest trend of the east coast of India. Substantial lateral movements have taken place along the strike-slip faults that trend NW-SE (Figure 3). The Mahanadi Basin has hydrocarbon and coal potential (Jagannathan et al., 1983; Fuloria, 1993; Dash, 2000; Bastia, 2006; http://www.dghindia.org/18.aspx). Significant hydrocarbon shows are reported from exploratory wells drilled in the basin. Offshore sniffer surveys revealed a number of

Figure 2. Geology and tectonic features of a part of the Mahanadi Basin, India (modified after Sastri et al., 1974; Fuloria, 1993; Hota et al., 2006).

geochemical anomalies, and good hydrocarbon resources are prognosticated. Plenty of good sandstone reservoir rocks are present onshore. Evidence from the neighboring basins, particularly the Krishna-Godavari Basin to the south, suggests that the Mahanadi Basin belongs to a petroleum province.

Aeromagnetic map

The National Geophysical Research Institute has carried out detailed aeromagnetic surveys over the land and offshore regions of the Mahanadi Basin (Rao et al., 1982). The survey was flown at an altitude of 600 m with a flight-line azimuth of N30°W and a flight-line spacing of 2 km. The geomagnetic field intensity, inclination, and declination in this region are 44,000 nT, 26°N, and 1°W, respectively. Jagannathan et al. (1983) attributed the prominent offshore magnetic anomaly that trends NE-SW to a major tectonic zone that coincides with the shelf edge. Based on harmonic analysis, Mishra (1984) suggested the presence of permanently magnetized rock in the offshore region. Bharali et al. (1991) brought out a correlation between the aeromagnetic anomalies and fracture trends in the Mahanadi Delta. Nayak and Rao (2002) indicated the presence of a high-density intrabasement layer offshore. Nayak et al. (2006) identified prominent aeromagnetic trends and correlated them with the arcuate shape of the Mahanadi Delta. Subrahmanyam et al. (2008) used bathymetric, magnetic, and gravity data to infer a series of offshore depressions and ridges, parallel to the coast, south of Paradeep. However, these studies have not mentioned the anomaly discussed here. The observed aeromagnetic field of the study area is shown in Figure 4. The northern part of the aeromagnetic map, south of the Chandbali region, shows a strong arcuate gradient, which is part of the intense anomalies. These anomalies suggest shallow basic rocks in the basement (Bharali et al., 1991).
The southernmost part of the map shows an elongated tight anomaly gradient, which may be indicative of the volcanic nature of the Eastern continental margin (Sarma, 2008). The


Figure 3. Map showing onshore and offshore parts of the Mahanadi Basin, India.

Figure 5. (a) Observed and computed magnetic anomalies along the profile A-A and (b) inferred basement structure of the Mahanadi Basin. The susceptibility values (in micro-CGS units) assigned to the numbered geological units are: 0.0 for unit 1, 800 for units 2 and 4, 1600 for unit 3, and 8400 for unit 5.

Figure 4. Aeromagnetic map of the study area.

central part of the aeromagnetic map shows a conspicuous anomalous zone in the Rajnagar-Paradeep region, which runs ENE-WSW from onshore to offshore. This zone shows a markedly elongated high in the middle. In addition, this high shows a swerving near 87°E longitude, where the coastline also shows an abrupt change from NE-SW to N-S (Figure 4). The disposition of the magnetic anomaly axes on the aeromagnetic map suggests that the effective magnetization is essentially north-south. In low magnetic latitudes, geologic features that trend east-west show prominent anomalies due to magnetic induction. Therefore, the induction anomaly over a sedimentary basin trending east-west, which manifests as a high (Figure 1), reveals the existence of a concealed basement depression (marked in Figure 4) in the Rajnagar-Paradeep region.

Modeling

A 60-km profile (A-A in Figure 4) across the inferred depression in Rajnagar-Paradeep was interpreted for its basement configuration. The observed total magnetic field, corrected for both the normal field (using the IGRF Epoch 1980 model) and the regional magnetic anomaly, is shown in Figure 5a. The magnetic anomaly, with a relief of 315 nT, shows a broad high in the central part with lows on either side. An initial model based on earlier geological studies (Sastri et al., 1981; Fuloria, 1993; Dash, 2000) in this region was assumed and its theoretical magnetic response computed using the 2D forward modeling software SAKI (Webring, 1985). The susceptibility values of basement rock units usually show a wide range of overlapping values (Telford et al., 1976; Subrahmanyam and Verma, 1981). The various units of the model and their susceptibility values are shown in Figure 5b. The model geometry and the susceptibility values were adjusted until the computed magnetic field mimics the observed one. In this case, unit 1 represents the water and alluvium columns, which are nonmagnetic, and units 2, 3, and 4 represent the basement. Unit 3 may consist of charnockites (Fuloria, 1993) of higher susceptibility than the adjacent units 2 and 4, which may represent granitoids. Unit 5 represents volcanic flows; drilling indicated such flows in the adjacent areas (Fuloria, 1993; Dash, 2000). The modeling has brought out a first-order basement configuration of the inferred depression, with a maximum sediment thickness of 4 km. Further, note that the southern limb of the inferred depression correlates with the northward-dipping offshore fault (Figure 2) reported by Sastri et al. (1981). The marine gravity anomaly map of the region (Jagannathan et al., 1983) also shows discrete gravity lows (shown as contours with hachures in Figure 6) over the inferred depression, which further supports the aeromagnetic interpretation.
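The adjust-until-match step described above was carried out interactively with SAKI. As a toy caricature of the idea (hypothetical values, not the SAKI algorithm), note that for purely induced magnetization the anomaly of a fixed model geometry scales linearly with susceptibility, so even a one-parameter grid search illustrates the misfit-minimization loop:

```python
import math

def rms(a, b):
    """Root-mean-square misfit between two equal-length profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def fit_susceptibility(observed, forward, k_grid):
    """Return the susceptibility in k_grid minimizing the RMS misfit
    between the observed profile and forward(k), plus that misfit."""
    best = min(k_grid, key=lambda k: rms(observed, forward(k)))
    return best, rms(observed, forward(best))

# Hypothetical unit-shape anomaly for a fixed model geometry; induced
# magnetization scales it linearly with susceptibility k.
shape = [0.0, 0.2, 0.6, 1.0, 0.6, 0.2, 0.0]
forward = lambda k: [k * s for s in shape]
observed = [1600.0 * s for s in shape]   # "observed" built with k = 1600
k_best, misfit = fit_susceptibility(observed, forward, range(0, 3200, 100))
print(k_best)   # 1600
```

In practice, geometry is adjusted along with susceptibility, and the trade-off between the two is what makes magnetic interpretation non-unique, as the overlapping susceptibility ranges cited above imply.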
The inferred structure, bounded by a low-angle normal fault toward the north and by a steep fault toward the south, characterizes a half-graben structure. The width of the depression is about 10–15 km and its length is more than 60 km. The depression may extend further



northeast beyond the boundary of the study area, where it might have been offset by the strike-slip faults trending northwest-southeast.

Conclusions

A study of the aeromagnetic map of the Mahanadi Basin has inferred a concealed depression in the Rajnagar-Paradeep region. A first-order picture of the basement structure of the depression was obtained through modeling total magnetic field anomalies along a selected profile. The northern limb of the inferred depression indicates a low-angle normal fault and the southern limb a steep fault, which together characterize a half-graben structure. The southern boundary fault of the inferred depression correlates with the offshore northward-dipping fault inferred in previous studies. Furthermore, the marine gravity anomaly map of the region also shows discrete lows over the inferred depression.

Figure 6. Marine gravity anomaly map of the Mahanadi offshore region, India (after Jagannathan et al., 1983).

References

Bastia, R., 2006, An overview of Indian sedimentary basins with special focus on emerging east coast deepwater frontiers: The Leading Edge, 25, no. 7, 818–829, doi:10.1190/1.2221359.

Bharali, B., S. Rath, and R. Sarma, 1991, A brief review of Mahanadi delta and the deltaic sediments in Mahanadi basin: Memoir of the Geological Society of India, 22.

Biswas, S. K., A. L. Bhasin, and Jokhan Ram, 1993, Classification of Indian sedimentary basins in the framework of plate tectonics, in S. K. Biswas, Alok Dave, P. Garg, Jagadish Pande, A. Maithani, and N. J. Thomas, eds., Proceedings of the second seminar on Petroliferous basins of India, 1–46.

Biswas, S. K., 1996, Mesozoic volcanism in the East coast basins of India: Indian Journal of Geology, 68, 237–254.

Chakravarthi, V., 2009, Gravity anomalies of pull-apart basins having finite strike length with depth-dependent density: A ridge regression inversion: Near Surface Geophysics, 7, 217–226.

Dash, D., 2000, Hydrocarbon potential of Mahanadi coastal basin, in N. K. Mahalik, ed., Mahanadi Delta: Geology, Resources and Biodiversity: AITAA, India Chapter, 125–132.

Fuloria, R. C., 1993, Geology and hydrocarbon prospects of Mahanadi Basin, India, in S. K. Biswas, Alok Dave, P. Garg, Jagadish Pande, A. Maithani, and N. J. Thomas, eds., Proceedings of the second seminar on Petroliferous basins of India, 355–369.

Hota, R. N., W. Maejima, and B. Mishra, 2006, Similarity of palaeocurrent pattern of Lower Gondwana formations of the Talchir and the Ong-river basins of Orissa, India: An indication of dismemberment of a major basin: Gondwana Research, 10, no. 3-4, 363–369, doi:10.1016/j.gr.2006.02.009.

Jagannathan, C. R., C. Ratnam, N. C. Baishva, and U. Dasgupta, 1983, Geology of the offshore Mahanadi basin: Petroleum Asia Journal, 4, 101–104.

Mishra, D. C., 1984, Magnetic anomalies: India and Antarctica: Earth and Planetary Science Letters, 71, no. 1, 173–180, doi:10.1016/0012-821X(84)90063-3.

Nayak, G. K. and C. R. Rao, 2002, Structural configuration of Mahanadi offshore basin, India: An aeromagnetic study: Marine Geophysical Researches, 23, no. 5-6, 471–479, doi:10.1023/B:MARI.0000018244.65222.9a.

Nayak, G. K., C. R. Rao, and H. V. Rambabu, 2006, Aeromagnetic evidence for the arcuate shape of the Mahanadi delta, India: Earth, Planets, and Space, 58, 1093–1098.

Rao, B. S. R. and I. V. R. Murthy, 1978, Gravity and magnetic methods of prospecting: Arnold-Heinemann.

Rao, V. B., D. A. Rao, P. V. Sankarnarayan, and C. Ratnam, 1982, Aeromagnetic survey over parts of Mahanadi basin and the adjoining offshore region, Orissa, India: Geophysical Research Bulletin, 40, 219–226.

Sarma, B. S. P., 2008, Magnetic evidence for volcanism at Eastern continental margin of India: Juxtaposition with Elan Bank (Southern Indian Ocean): Surveys in Geophysics, 29, 51–61.

Sastri, V. V., A. T. R. Raju, R. N. Sinha, and B. S. Venkatachala, 1974, Evolution of Mesozoic sedimentary basins on the east coast of India: The Australian Petroleum Exploration Association Journal, 14, 29–41.

Sastri, V. V., B. Venkatachala, and V. Narayanan, 1981, The evolution of the East coast of India: Palaeogeography, Palaeoclimatology, Palaeoecology, 36, no. 1-2, 23–54, doi:10.1016/0031-0182(81)90047-X.

Subrahmanyam, C. and R. K. Verma, 1981, Densities and magnetic susceptibilities of Precambrian rocks of different metamorphic grade (Southern Indian Shield): Journal of Geophysics, 49, 101–107.

Subrahmanyam, V., A. S. Subrahmanyam, G. P. S. Murthy, and K. S. R. Murthy, 2008, Morphology and tectonics of Mahanadi basin, northeastern continental margin of India from geophysical studies: Marine Geology, 253, no. 1-2, 63–72, doi:10.1016/j.margeo.2008.04.007.

Telford, W. M., L. P. Geldart, R. E. Sheriff, and D. A. Keys, 1976, Applied geophysics: Cambridge University Press.

Webring, M., 1985, SAKI: A Fortran program for generalized inversion of gravity and magnetic profiles: USGS Open-File Report 85-112.

Acknowledgments: The authors thank Robert Pawlowski for his excellent review and many useful suggestions to improve the manuscript. The first author is grateful to the Council of Scientific and Industrial Research, Government of India, for granting an Emeritus Scientist Scheme. The director, National Geophysical Research Institute, is thanked for encouragement to publish this work.

Corresponding author: sarmabsp@yahoo.co.in

INTERPRETER'S CORNER

Coordinated by ALAN JACKSON

Thinner than expected? Don't blame the seismic


JEROME L. COGGINS, Shell International E&P

A friend of mine was invited to be among a small group of aspiring leaders who were to give a presentation to management. While the other participants each highlighted their most significant success, he chose to explain how his department had come to lose a major client. You can guess which presentation provided the most valuable information to the company leadership. Although we have more to learn from our failures than our successes, the latter get shared much more often than the former. The oil and gas business is not immune to this phenomenon. Particularly troubling to working-class interpreters like myself is the overprediction of pay thickness. It is still a much too common occurrence. Nobody usually gets bent out of shape over it. After all, these were exploration wells. We did not know the rock properties very well, or even the hydrocarbon phase, on these amplitude-supported targets. So it goes ... Well, not exactly. Often thin pays should be expected, but are not flagged as a key risk predrill. In one case, a predrill review raised this issue, resulting in a significant reduction in the expectation volumes of a gas well. It seems the method to recognize the risk of thin pay is not as well understood or as commonly applied as it should be. The purpose of this article is to demonstrate the simple process for screening predrill targets for thin (subresolution) reservoirs and incorporating the proper uncertainty into volumetrics/modeling. I will use as an example the previously mentioned case where we were able to dodge a bullet predrill. The suitably disguised Prospect X is shown in Figure 1. Bounded on the north by a trapping fault, the amplitude map strongly conforms to a structural contour. Taking this as evidence of a hydrocarbon-water contact, the structure map, seismic thickness measurements, and trend curves were used to calculate recoverable volumes. For convenience, we will focus on the gas case volumes, given as 200 BCF.
(This is an arbitrary reference volume for the estimates and final answer that follow.)

Figure 1. Prospect X structure colored by amplitude.

Figure 2. The only star resolvable from Earth.

Before delving more deeply into the Prospect X volumes, let's review what we know about seismic resolution. First, let's start with a simple description of resolution versus detection. For our purposes, a seismic loop is resolved when its amplitude and thickness can be independently and accurately measured. A seismic loop is detected when it can be distinguished from the background noise. Very thin, high-contrast loops may be easily detectable but not resolvable seismically. The example in Figure 2 of the only star resolved from Earth (Sol) serves as a dramatic reminder that detection below resolution is the consequence of band-limited data. In more concrete terms, resolution can be defined as shown in Figure 3. Methods exist to indirectly estimate thickness well below this tuning limit, but they require well calibration or other a priori information. (It is important to recognize that such estimates do not represent an increase in the resolution of the data. Employing the astronomy analogy
again, using the spectrum of a star to classify it as Sol-like does not mean that it is now resolved.) Most readers will be familiar with this description of tuning behavior for a two-spike seismic model, shown in Figure 3. Although the curve in red may be less common, it is the response resulting from plotting measured seismic thickness against amplitude. In practice, seismic thickness measurements and seismic amplitudes terminate at a sharp wall slightly thinner than the tuning thickness.

Now, let's get back to Prospect X. The first step in examining the resolution of seismic data is to estimate the tuning thickness (Figure 4). Various methods exist for this estimate; the distance between the peak and trough of the zero-phase wavelet is shown here. The tuning thickness, approximately 14 ms, applies to a perfect two-spike model of blocky sand. Because we don't know any better, this will have to do.

Figure 3. Definition of resolution (after Kallweit and Wood, 1982).

Figure 4. Estimating tuning thickness.

Figure 5. Comparison of tuning thickness and time thickness.

Figure 6. Two possible scenarios.

A comparison of the tuning thickness to the time thickness measurements on the amplitude area of the prospect is shown in Figure 5. This should set off the warning siren in the mind of any seismic interpreter! This signature of a subtuning reservoir thickness is most often interpreted as the peak area of the tuning curve. Actually, this is the most optimistic case, as shown in Figure 6 (top). We can see that a much thinner reservoir also plots in the same way (Figure 6, bottom). Although relative amplitude is reduced, the plot is still centered

at tuning thickness. Our bias toward the optimistic solution may come from our familiarity with the tuning-curve plot for true thickness. To drive the point home, Figure 7 shows a simple histogram of the measured time thickness of the amplitude area. Again, that pesky tuning thickness is right there near the peak of a narrow distribution, with an average value close to our tuning approximation. Now our original volume estimate is in some serious jeopardy. Recognition of the risk of a thin reservoir is the first and most important step. The obvious question, though, is how to quantify the uncertainty. Given our assumption of blocky sand, tuning thickness represents the maximum time thickness of our reservoir. Our base case just became the high (P10) case. For a mean (P50) case, the decay of the amplitude from the tuning peak to the true value corresponds to a time thickness decrease of about a factor of two. The low (P90) case is arbitrarily chosen to represent a halving of the P50 time thickness. This approach is shown graphically in Figure 8. For any specific case, forward modeling may serve to better define this range.

Armed with an approach to uncertainty, we can update the volume prediction on Prospect X. Trend curves bracket the gas-case velocities in the range of 2000–2500 m/s. Using the apparent mean thickness, this translates into a gross thickness of 14–18 m. Applying the described uncertainty for a thin reservoir, the mean thickness becomes 7–9 m, with a low of 4–5 m. The apparent mean thickness is now the high case. Our new predrill volumes are 50–100–200 BCF (L-M-H) recoverable. By my tally, this is a 50% reduction in volume: not good news, but important news to get predrill rather than postmortem.

Now, at this stage we might hope to sell the prospect to a less-savvy exploration outfit. However, it was indeed drilled and evaluated. The results are given in Figure 9. The well, drilled of course on a strong amplitude response, found 10 m of sand and 7 m of gas pay on water. The sand was high net-to-gross. The true time thickness at the well was 7 ms, considerably less than the 14 ms of measured seismic time. Using the area-under-the-curve method (Connolly, 2007), the average net pay was calculated to be 5.3 m, a result on the low side of the modified predrill estimate. Recalculated recoverable volumes had a mean value of about 65 BCF, quite a departure from the original 200 BCF. News that the estimated predrill volumes had fallen by a factor of two, however unwelcome, resulted in a business decision of much higher quality.

Of course, the best approach is to make that initial volume estimate properly and avoid being the bearer of bad news. Use this example to screen your seismically defined targets for subtuning thickness risk. I would expect that there are more than a few wells out there on the drilling schedule that will find reservoir thinner than expected. Make sure your well is not one of them. Don't blame the seismic!

Figure 7. Histogram of time thickness.

Figure 8. An approach to assigning an uncertainty range to thin pay.

Figure 9. The answer in logs: thin gas pay.

References

Connolly, P., 2007, A simple, robust algorithm for seismic net pay estimation: The Leading Edge, 26, no. 10, 1278–1282, doi:10.1190/1.2794386.

Kallweit, R. S. and L. C. Wood, 1982, The limits of resolution of zero-phase wavelets: Geophysics, 47, no. 7, 1035–1046, doi:10.1190/1.1441367.
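The wedge behavior at the heart of this article is easy to reproduce: convolve a zero-phase Ricker wavelet with two opposite-polarity spikes and measure the apparent (peak-to-trough) time thickness. The 35-Hz wavelet and 1-ms sampling below are hypothetical stand-ins, not the prospect's actual wavelet; the point is that apparent thickness flattens near a floor set by the wavelet for thin beds and tracks true thickness only above tuning.

```python
import numpy as np

def ricker(f, dt, n):
    """Zero-phase Ricker wavelet with peak frequency f (Hz)."""
    t = (np.arange(n) - n // 2) * dt
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def apparent_thickness(true_dt_ms, f=35.0, dt=0.001):
    """Apparent (peak-to-trough) time thickness in ms for a blocky bed:
    two spikes of opposite polarity separated by true_dt_ms, convolved
    with a Ricker wavelet."""
    n = 512
    refl = np.zeros(n)
    i0 = n // 2
    refl[i0] = 1.0                                           # top of bed
    refl[i0 + int(round(true_dt_ms / 1000.0 / dt))] = -1.0   # base of bed
    trace = np.convolve(refl, ricker(f, dt, 301), mode="same")
    return abs(int(np.argmin(trace)) - int(np.argmax(trace))) * dt * 1000.0

for thick in (2, 4, 8, 12, 16, 24, 32):
    print(thick, apparent_thickness(thick))  # thin beds all read near tuning

# Time-to-depth arithmetic used for the volumetrics: thickness = v * t / 2,
# so 14 ms of two-way time spans 14-17.5 m at 2000-2500 m/s.
for v in (2000.0, 2500.0):
    print(v, v * 0.014 / 2.0)
```

With these hypothetical parameters, the thinnest beds all return an apparent thickness near the wavelet-controlled floor rather than their true thickness, mimicking the sharp wall and narrow histogram described in the article.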

Corresponding author: jlc@cogginsgeosciences.com


SPECIAL SECTION: Multiple attenuation

Introduction to this special section: Multiple attenuation


BILL GOODWAY, Calgary, Canada

Multiples? What multiples? This was the reply to my questioning of the seismic-to-synthetic log mis-tie shown in Figure 1. My inquiry followed a request for technical assistance in an AVO inversion project by a young entry-level geophysicist interpreting the thin tram-track reservoirs typical of the Western Canadian Sedimentary Basin (WCSB). The project had been initiated with this log tie to establish the spectral characteristics of a wavelet needed for inversion and, despite being at the end of the AVO-compliant processing flow, I suggested that we take a step back to deal with the obvious multiple contamination clearly visible within the zone of interest. I was met with blank stares, as there had been no discussion about multiples. It even crossed my mind that maybe multiples had not been a part of my young colleague's education and training. If this was indeed the case, then it was not her oversight, as I had seen no new technology or even convention presentations trying to address this persistently insidious land internal-multiple problem since the last time I was fully engaged in the mid 1990s. Sensing a general lack of awareness, I decided to Google "multiples" or "multipath" to see if these significant obstacles were solely seismic phenomena. I was not surprised to find that the problem has affected us all at some point in our lives without our knowledge, and with more direct impact through our ubiquitous use of and reliance on wireless communication, radar, and GPS positioning. The following paragraph is excerpted from http://en.wikipedia.org/wiki/Multipath_propagation:
In wireless telecommunications, multipath is the propagation phenomenon that results in radio signals reaching the receiving antenna by two or more paths. Causes of multipath include atmospheric ducting, ionospheric reflection and refraction, and reflection from water bodies and terrestrial objects such as mountains and buildings. In facsimile and television transmission, multipath causes jitter and ghosting, seen as a faded duplicate image to the right of the main image. In radar processing, multipath causes ghost targets to appear, deceiving the radar receiver. These ghosts are particularly bothersome since they move and behave like the normal targets (which they echo), and so the receiver has difficulty in isolating the correct target echo. In a Global Positioning System receiver, the multipath effect can cause a stationary receiver's output to indicate as if it were randomly jumping about or creeping. When the unit is moving, the jumping or creeping is hidden, but it still degrades the displayed accuracy.

Interestingly, this Wikipedia link concentrated on the multipath phenomenon with hardly any description of how the problem might be overcome. I suspect that the requirements in the wireless world of real-time processing, for what we would term 1D source-to-receiver sampling, preclude multiple attenuation. This is unlike seismic processing, where we have time to bring the full force of high-powered computing to bear on the problem, specifically with the new algorithms described in the articles that make up this special section of TLE.

Given my observation of the level of despondency about even identifying internal multiples in land seismic data sets, I am pleased to report that, for this special section, we received five out of eight articles that describe methods to effectively attenuate internal multiples. Unfortunately, only one of the five deals with the problem I continue to face in the WCSB, and this is the paper written by Hunt et al. However, unlike me, this former colleague has shown remarkable persistence by continuing to work on the problem after having almost single-handedly initiated a solution that is still the industry standard in exploring for Nisku reefs in the WCSB.

The theme that connects all papers (except Hunt et al.'s) in this special section follows from the singular technological breakthrough, surface-related multiple elimination (SRME), led primarily by academia and routinely applied to successfully remove other types of marine multiple interference. As this section shows, however, the more encompassing inverse scattering series (ISS) basis for SRME has now been extended to internal multiple elimination (IME) for both marine and land data. Five papers show successful applications of the method. This is a significant step forward, especially as land data, unlike marine data, generally do not exhibit a strong free-surface effect and, consequently, have not benefited from SRME until now.

Figure 1. A Western Canadian seismic line and log synthetic with a strong internal multiple in the stack (red box) that does not tie.

The first article is an excellent tutorial, "Multiple attenuation: Recent advances and the road ahead (2011)," in which Weglein and colleagues bring his 1999 perspective up to date with an overview of recent progress, advances, and open issues for both offshore and onshore multiple removal. Following a description of the considerable challenges we face when dealing with multiples, Weglein et al. list the stringent requirements to completely eliminate them. The article describes the development and combining of various methods to model or predict and invert or subtract multiples, with an evocatively extended goal: "The plan is to strengthen the prediction, and reduce the burden, dependence and mischief of the subtraction." An accompanying article ("Exemplifying the specific properties of the inverse scattering series internal-multiple method that reside behind its capability for complex onshore and marine multiples" by Terenghi et al.) introduces ISS through equations that lead to examples of ISS used to eliminate internal multiples. The third article ("Elimination of land internal multiples based on inverse scattering series" by Luo et al.) builds on the ISS approach for land internal multiple elimination. The article follows from a presentation at last year's SEG Annual Meeting that described the first successful application of this new approach to what is described as the daunting challenge of land internal multiples, as opposed to the simpler, better-behaved marine free-surface generator that has been solved by the related SRME method.
The fourth article ("Resolution on multiples: Interpreters' perceptions, decision making, and multiple attenuation") is from my tenacious former colleague Lee Hunt and his coauthors, whom I describe in the preamble above. Hunt et al. investigate the serious impact of fast internal multiples on the Nisku and Blueridge formations in Western Canada. They show that reducing the impact of this type of multiple requires careful consideration of the problem, with a clear understanding, expectation, and assessment of the level of suppression achieved by tau-p space methods. They also assert the philosophical notion that we must marry this understanding with determination (resolution).

The fifth article ("Applications of interbed multiple attenuation") by Griffiths et al. compares 3D data-driven methods and model-driven methods from Jakubowicz (1998) and Pica and Delmas (2008), respectively. A striking imaging improvement is achieved through the reduction of interbed multiple-induced migration artifacts within the prolific pre-salt reservoirs of the Santos Basin, Brazil.

The sixth article ("Case studies in 3D interbed multiple attenuation" by Brookes) continues with the internal multiple theme by demonstrating a novel 3D extension of SRME that is now viable through recent increases in computing power. Brookes shows some successful results from Gulf of Mexico marine data and Egyptian Western Desert land data.

The last two articles in this special section deal with enhancements of 3D SRME in marine data. "Enhanced demultiple by 3D SRME using dual-sensor measurements" by van Borselen et al. utilizes dual-sensor measurements to compute 3D surface-related multiples more accurately with respect to phase and amplitude. The recorded pressure and particle-velocity wavefields enable a decomposition of the recorded data into downgoing and upgoing components to account for the discrepancy between the source and streamer depths in SRME. In addition, the angle-dependent relationship between the pressure field and the particle-velocity field is handled correctly. As a result, the adaptive subtraction of the surface-related multiples is more constrained and robust, leading to enhanced multiple removal and better primary preservation.

The final paper ("True-azimuth 3D SRME in the Norwegian Sea" by Smith et al.) shows that true-azimuth 3D SRME algorithms, previously demonstrated in regions with deep and complex water-bottom topography, are also applicable in areas such as the Norwegian Sea, which have shallower, low-relief water bottoms.
Lastly, I acknowledge my co-editor Jeff Deere for having gone beyond the call of duty in pulling this special section together as his term on the TLE Editorial Board had ended when we started the process. Without him, I would not have been able to generate what I believe is an excellent set of articles on a subject that has received limited and diminishing publication and investigation.
References
Jakubowicz, H., 1998, Wave equation prediction and removal of interbed multiples: 68th Annual International Meeting, SEG, Expanded Abstracts, 1527-1530, doi: 10.1190/1.1820204.

Pica, A., and L. Delmas, 2008, Wave equation based internal multiple modeling in 3D: 78th Annual International Meeting, SEG, Expanded Abstracts, 2476-2480, doi: 10.1190/1.3063858.

Corresponding author: bill.goodway@apachecorp.com

The Leading Edge, August 2011, 863

SPECIAL SECTION: Multiple attenuation

Multiple attenuation: Recent advances and the road ahead (2011)


ARTHUR B. WEGLEIN, SHIH-YING HSU, PAOLO TERENGHI, and XU LI, University of Houston ROBERT H. STOLT, ConocoPhillips

Multiple removal is a longstanding problem in exploration seismology. Although methods for removing multiples have advanced and have become more effective, the concomitant industry trend toward more complex exploration areas and difficult plays has often outpaced advances in multiple-attenuation technology. The topic of multiples, and developing ever more effective methods for their removal, remains high in terms of industry interest, priority, and research investment. The question as to whether today, in 2011, multiples or multiple removal is winning is a way of describing what we are about to discuss. This paper focuses on recent advances and progress, the strengths and limitations of current capability, and a prioritized list of open issues that need to be addressed.

In seismic exploration it is useful to catalog events as primary or multiple based on whether the wave arriving at the receiver has experienced one or more upward reflection(s), respectively (Figure 1). Multiples are further subdivided and labeled according to the location of the downward reflection between two upward reflections. If the multiple has at least one downward reflection at the free surface, it is called a free-surface multiple, and if all of its downward reflections occur below the free surface, it is called an internal multiple. These definitions and cataloging of events into primary and multiple are operative and called upon only after the reference or background wavefield and the source and receiver ghosts have all been removed (Figure 2).

Both primaries and multiples contain information about the subsurface; however, (1) unraveling the information within a multiply reflected event is a daunting task, and (2) back-propagating a wavefield containing both primaries and multiples for imaging and inversion is usually beyond our ability to provide an accurate enough discontinuous overburden (required for migration and inversion). Hence, primaries are typically considered as signal, and multiples are considered a form of coherent noise to be removed prior to extracting subsurface information from primaries.

"Multiple attenuation: an overview of recent advances and the road ahead" (Weglein, 1999) provides a 1999 perspective of multiple attenuation and places wave-theory advances at that time in the context of earlier pioneering contributions. We suggest Multiple Attenuation (published by SEG in 2005) and the special section on multiple attenuation (TLE, 1999) as background to comprehend and to set the stage for this update and overview of recent progress, advances, and open issues as of 2011.

Offshore and onshore multiple removal: Responding to the challenges

In offshore exploration, the industry trend to explore in deep water, with even a flat horizontal water bottom and a 1D subsurface, immediately caused many traditional and useful signal processing/statistical-based multiple-removal methods to bump up against their assumptions, break down, and fail. In addition, marine exploration plays beneath complex multi-D laterally varying media and beneath and/or at corrugated, diffractive, rapidly varying boundaries (for example, subsalt, sub-basalt, and subkarsted sediments and fault shadow zones) cause a breakdown of many other multiple-removal methods. For example, decon, stacking, f-k, Radon transform, and wavefield modeling and subtraction of multiples are among methods that run into problems with the violation of any one or a combination of the following assumptions: (1) primaries are random and multiples are periodic, (2) knowledge of the velocity of primaries and assuming the Earth has no lateral variation in properties, with assumptions about 1D moveout, (3) velocity discrimination between primaries and multiples, (4) interpreter intervention capable of picking and discriminating primary or multiple events, and (5) determining the generators of the experiences of the multiples, and then modeling and subtracting them.

The confluence of (1) high drilling costs in deepwater plays, (2) specific deepwater and shallow subsea hazards and technical challenges, (3) the need to develop fields with fewer wells, and (4) the record of drilling dry holes drives the need for greater capability for removing marine free-surface and internal multiples, as well as improving methods of imaging. Moving onshore, the estimation and removal of land internal multiples can make the toughest marine-multiple problem pale in comparison. The presence of proximal and

Figure 1. Marine primaries and multiples: 1, 2 and 3 are examples of primaries, free-surface multiples, and internal multiples, respectively.


interfering primaries and internal multiples of different orders can occur in marine situations, but their frequent occurrence for land internal multiples raises the bar on both the amplitude and phase fidelity of prediction and the priority and pressing need of developing an alternative to energy-minimizing-based adaptive subtraction techniques. For example, in Kelamis et al. (2006), Fu et al. (2010), and Luo et al. (in this special section), the basic cause of the land multiple-removal challenge in Saudi Arabia is identified as a series of complex, thin layers encountered in the near surface. In general, strong reflectors at any depth can be identified as significant sources of internal multiples, especially where geologic bodies with different seismic properties are in contact. Typical examples are alternating sequences of sedimentary rocks and basaltic layers or coal seams, which can give rise to short-period internal multiples.

Multiples are a problem and a challenge due to violations of the assumptions and prerequisites behind the methods used to remove them. There are two approaches to address those challenges: (1) remove the assumption violation (by satisfying the assumption), or (2) remove the assumption. That is, either develop a response and/or new methods that remove the violation, and arrange to satisfy the assumption, or develop fundamentally new methods that avoid the limiting or inhibiting assumption. There are cases and issues for which one or the other of these attitudes is called for and indicated. An example of seeking to satisfy a requisite is when a data acquisition is called for by a multiple-removal technique, and we seek methods of data collection and interpolation/extrapolation to remove the violation by satisfying the requirement.
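As a small illustration of the first assumption in the list above (primaries are random, multiples are periodic), the classic deconvolution-style view treats a water-bottom reverberation as a strictly periodic spike train that a short prediction operator can collapse. The sketch below is ours and purely illustrative: the reflection coefficient (0.5) and period (4 samples) are invented numbers, not from any data set discussed in this section.

```python
# Periodicity assumption behind decon-style demultiple: a water-bottom
# reverberation with two-way time T (in samples) and reflection
# coefficient Rw turns a spike into the train 1, -Rw, Rw^2, ... at lags
# 0, T, 2T, ...  Convolving with the two-term operator (1 at lag 0,
# Rw at lag T) collapses the periodic train back to the primary spike.

Rw, T, n = 0.5, 4, 6                      # illustrative values
trace = [0.0] * (T * n)
for k in range(n):
    trace[k * T] = (-Rw) ** k             # reverberation train

# apply the dereverberation operator (1, Rw) at lag T
out = [trace[i] + (Rw * trace[i - T] if i >= T else 0.0)
       for i in range(len(trace))]

assert out[0] == 1.0                          # primary preserved
assert all(abs(v) < 1e-12 for v in out[1:])   # periodic multiples gone
```

The moment the multiples stop being strictly periodic (a dipping or laterally varying water bottom, or the deepwater/1D-violating settings described above), an operator of this kind no longer cancels them, which is exactly the breakdown of assumption (1).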
However, if a multiple-removal method is, for example, innately 1D in nature, then an interest in removing multiples in a multi-D Earth would call for developing a new method that did not assume a 1D Earth; i.e., it calls for developing a new multi-D method that altogether avoids the 1D assumption. The former, remove-the-assumption-violation approach would entail, e.g., arranging for a 3D corrugated-boundary subsalt play to somehow satisfy 1D layered-Earth assumptions, velocity analysis and moveout patterns, or the modeling and subtraction of multiples, where seeking to satisfy those types of assumptions is not possible. The latter realization drove the search for new methods that avoid those increasingly difficult or impossible-to-satisfy criteria and prerequisites.

The list of sought-after characteristics for multiple attenuation

In response to those challenges, these new methods would therefore be required to satisfy the following criteria: (1) be fully multi-D; (2) make no assumptions about subsurface properties; (3) have no need for interpretive intervention; (4) be able to accommodate the broadest set of multiples of all orders; (5) extend to prime and composite events as introduced in Weglein and Dragoset (2005), where the definitions and meaning of primaries and multiples themselves can be extended from their original 1D Earth definitions and concepts; (6) be equally effective at all offsets, retaining effectiveness in prestack and poststack applications; and (7) last

Figure 2. The marine configuration and reference Green's function.

but not least, surgically remove multiples by predicting both their amplitude and phase, and thus not harm primaries even if they are proximal and overlapping. The efficacy of, and choice among, multiple-removal methods in response to the challenges posed in a world of complex multiple generators, in 1D Earth settings and/or in heterogeneous, rapidly laterally varying media and boundaries, would ultimately be evaluated, judged, and selected by how well they satisfy all of these criteria.

The evolution and merging of methods that originally sought to either separate or wavefield-predict multiples

In Weglein (1999), multiple-removal methods were classified as (1) separation and (2) wavefield prediction, and we refer the reader to Tables 1 and 2 in that reference for a summary of methods within each category. Methods within the separation category sought a characteristic to separate primaries from multiples, whereas wavefield prediction was a way to wavefield-predict and then subtract multiples. Separation methods were defined by characteristics that distinguish primaries from multiples, with, e.g., primaries considered as random and multiples as periodic, or assumptions about how primaries and multiples would separate in different transform domains. These methods earned their keep, but were ultimately hampered by their assumptions about the statistical nature of primary reflections, 1D Earth assumptions, and the assumed velocity determination for primaries.

Wavefield-prediction methods began with modeling and subtracting the entire history of the multiples that were targeted for removal (e.g., Morley and Claerbout, 1983; Wiggins, 1988; Weglein and Dragoset, Chapter 4). They moved away from 1D assumptions in principle, but were mainly confined to water-column reverberations, where they had demonstrated value, but had little hope or success in modeling and subtracting multiples with more complicated and sub-water-bottom experiences in their history.

The next step in wavefield prediction sought not to model the entire history of the multiple one wanted to remove, but rather to find a wave-theory prediction to identify, isolate, and separate the physical location and property that the multiple had experienced, and other events had not, and then to transform through a map of data with and without the experience as a way to separate events into


those that have and those that have not had that experience. That thinking became the cornerstone of the free-surface and interface method pioneered and developed by Berkhout of the DELPHI Consortium at Delft University. That DELPHI program for removing all marine multiples required a sequence of relationships between data with and without isolated and well-defined reflections, starting with downward reflections at the air-water free surface, and then, through a sequence of amplitude-preserving migrations, to image and transform away all internal multiples that had their shallowest downward reflection at each successively deeper reflector/interface, starting at the water bottom. Hence, it is called the free-surface and interface method. That program provided significant added value, especially with isolated free-surface multiples, or at times for internal multiples generated at a simple and not too complex water bottom. There was considerable reliance on adaptive subtraction to fix omissions in the theory, limitations in data collection, and prerequisites like deghosting and wavelet removal.

The DELPHI approach is a wavefield-prediction method that does not require modeling the entire history and experience of the multiple, as earlier wavefield-prediction methods required, but requires modeling in detail only the wavefield-prediction properties that separate the events experiencing a shallowest downward reflection at the free surface, and then repeating that program at the next interface or boundary in a sequence of deeper interfaces. Events are thus separated by whether they have or have not had a downward reflection at those reflecting boundaries. Hence, wavefield prediction and separation merged, with the separation requiring detail of all subsurface properties down to and including a given interface in order to remove all multiples having a shallowest reflection at that interface.

However, that comprehensive program ran into conceptual and practical issues, with the former including (1) how to transform away, via, e.g., Green's theorem, a relationship between data experiencing and not experiencing a corrugated and diffractive boundary, and (2) the stringent requirements of determining the properties above, down to, and at the interface. The latter issues made these interface internal multiple-removal methods difficult to apply in practice as targets became deeper and the overburden and interfaces became rapidly varying and difficult to adequately identify.

The inverse scattering series (ISS) methods for removing free-surface and internal multiples can be viewed as representing the next step in the evolution of separation and wavefield-prediction concepts and methodology. The ISS methods are in some sense a direct response to the limitations of the DELPHI free-surface and interface approach, with (1) a more complete free-surface removal, in terms of amplitude and phase at all offsets, and (2) an internal multiple-removal method that does not require any subsurface information whatsoever. There are wavefield-prediction and separation ingredients in the ISS free-surface and internal multiple-removal methods. For free-surface multiple removal, the free-surface properties are assumed to be known, and a subseries of the inverse scattering series separates deghosted data with free-surface multiples from deghosted data without

Figure 3. Data without a free surface (top) and with a free surface (bottom).

free-surface multiples. The ISS free-surface multiple separation is realized by the actual location and physical properties that free-surface multiples have experienced at the free surface, distinguishing them from data/events that have not shared that free-surface experience. For internal multiples, the inverse scattering series takes on another attitude. The forward series allows the construction of primaries and internal multiples through a description entirely in terms of water speed, and the reverse, the seismic processing or inverse scattering series, in turn allows for the removal of internal multiples, and the depth imaging and inversion of primaries, directly in terms of water speed. For internal multiple removal there is no downward continuation into the Earth, and no interface identification and removal. The separation between primaries and internal multiples in the forward (data creation) scattering series and the inverse (data processing) scattering series is carried out by understanding how primaries and internal multiples differ in their forward construction, in terms of a water-speed picture/construction, and then how to separate the removal of internal multiples from the imaging and inversion of primaries, also directly and only in terms of data and water speed. In contrast to the DELPHI internal multiple interface method, the ISS internal multiple-removal method never requires, determines, or estimates the actual subsurface medium properties and interfaces the internal multiple experiences.
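The "no subsurface information" character of the ISS internal-multiple prediction comes from a purely kinematic selection rule that, in 1D normal incidence, can be sketched very simply: every triple of events in the water-speed data whose middle event is shallower (earlier) than the two outer events predicts an internal multiple. The code below is our illustrative sketch of that lower-higher-lower bookkeeping only, with invented event times and amplitudes; it ignores transmission effects and is not the full ISS algorithm.

```python
# 1D sketch of the lower-higher-lower selection rule behind the
# first-order ISS internal-multiple prediction: each triple of data
# events (a, b, c) with t_a > t_b and t_c > t_b predicts a multiple at
# time t_a + t_c - t_b with amplitude A_a * A_b * A_c.
# Event times/amplitudes below are hypothetical two-primary input data.

def predict_internal_multiples(events, eps=1e-9):
    """events: list of (time, amplitude) spikes in the water-speed data."""
    predictions = []
    for ta, Aa in events:
        for tb, Ab in events:
            for tc, Ac in events:
                if ta > tb + eps and tc > tb + eps:   # lower-higher-lower
                    predictions.append((ta + tc - tb, Aa * Ab * Ac))
    return predictions

primaries = [(0.4, 0.2), (1.0, 0.3)]      # (two-way time, amplitude)
pred = predict_internal_multiples(primaries)

# Only one triple qualifies here: a = c = the deeper event, b = the
# shallower one, predicting the first-order internal multiple between
# the two reflectors at 2(1.0) - 0.4 = 1.6 s.
assert len(pred) == 1
assert abs(pred[0][0] - 1.6) < 1e-9
assert abs(pred[0][1] - 0.018) < 1e-9
```

Note that nothing in the rule asks where the reflectors are or what their properties might be; the data select themselves, which is the point made in the paragraph above.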
The inverse scattering series multiple-removal methods are flexible, allowing (1) the separation to be in terms of distinguishing whether or not the event has a certain well-located and well-defined experience in its history, where the actual medium properties are available and reliable, as occurs with the free surface and in the ISS free-surface multiple-removal algorithm, and (2) no knowledge, or need to determine anything, about the actual separating experience, for ISS internal multiple removal. The ISS separation of the imaging and inversion of primaries from the removal of internal multiples thus avoids all of the conceptual and practical limitations of the DELPHI free-surface and interface approach, and ultimately accounts for its current position as the stand-alone method for addressing the most difficult



and daunting marine and land internal multiple challenges. The two classic multiple-removal categories, separation and wavefield prediction, have evolved and merged into the maximally flexible, accommodating, and effective inverse scattering series multiple-removal methods: prediction and separation of events either with or without needing, knowing, or determining the location and physical properties of the experience (e.g., a free surface or a subsurface reflector, respectively) that separates events into two categories: events that have, and events that have not, experienced in their history a shallowest downward reflection at a specific reflector, and without the need for any subsurface information, event picking, or interpreter intervention. The ISS allows all internal multiples to be predicted and separated from all reflectors, at all depths, at once, without knowing, needing, or determining anything about those reflectors. The inverse scattering series multiple-removal methods have incorporated the strengths of earlier separation and wavefield-prediction concepts and thinking, while avoiding the practical limitations, drawbacks, and weaknesses of earlier and competing approaches.

Before discussing, classifying, and comparing methods for removing multiples, it will be useful to introduce and briefly discuss two important background topics that will enhance and facilitate understanding of the sometimes counterintuitive ideas we will be describing and attempting to convey.

Modeling and inversion are two entirely different enterprises

In this paper, we adopt an inclusive definition of inversion that includes any method that determines subsurface properties from measured surface data, or any intermediate task (e.g., multiple removal or depth imaging) toward that goal. Inversion methods can be direct or indirect, and these approaches are not in any practical or theoretical sense the same or equivalent. Modeling run backward, model matching, iterative linear inverse model matching, any form of indirect inversion, or solving a direct forward problem in an inverse sense is not equivalent to direct inversion. Nor is any intermediate seismic processing objective, within a direct inversion algorithm, equivalent to solving for that same goal in some model-matching or indirect manner. That statement is true independent of (1) the capability and speed of your computer, (2) the nature of the objective function, and (3) the local or global search engine. The only exception to that rule is when the direct inverse task is linear (e.g., when the goal is depth imaging and you know the velocity field, the direct inverse for depth migration is linear, and then modeling run backward is direct depth imaging). If the direct inverse is nonlinear in either the entire data set or a single event, then modeling run backward is not the equivalent of a direct inverse solution. There is widespread confusion on this fundamental and central point within math, physics, and geophysics inversion circles, with significant and harmful conceptual and practical real-world consequences. See Weglein et al. (2009) for full detail and examples. It is worth noting at this point that the inverse scattering series is the
868 The Leading Edge August 2011

only direct inverse for a multidimensional acoustic, elastic, or inelastic heterogeneous Earth.

Prediction and subtraction: The plan to strengthen the prediction, and reduce the burden, dependence and mischief of the subtraction

Multiple removal is often described as a two-step procedure: prediction and subtraction. The subtraction step is meant to compensate for any algorithmic compromises, or real-world conditions, outside the physical framework behind the prediction. In multiple-removal applications, the subtraction step frequently takes the form of energy-minimizing adaptive subtraction. The idea is that a section of data (or some temporally local portion of data) without multiples has less energy than the data with multiples. One often hears that the problem with multiple attenuation is not the prediction but the subtraction. In fact, the real problem is excessive reliance on adaptive subtraction to solve too many problems, with an energy-minimizing criterion that can be invalid or fail with proximal or overlapping events. The breakdown of the energy-minimizing adaptive subtraction criterion can occur precisely where the underlying physics behind, e.g., high-end inverse scattering series multiple prediction (which it is intended to serve) has its greatest strength, and the subtraction will then undermine rather than enhance the prediction.

The essence of ISS: An important prototype example

We will demonstrate some of these ideas (using a 1D plane-wave normal-incidence case) for the inverse scattering free-surface multiple elimination method. There are other ways to derive the free-surface multiple-removal algorithm (e.g., Ware and Aki, 1968; Fokkema and van den Berg, 1990), but the ISS is unique in its message that all processing goals (e.g., internal multiple removal, depth imaging, nonlinear direct target identification, and Q-compensation without Q) can each be achieved in the same manner in which the ISS removes free-surface multiples, i.e., directly and without subsurface information. Hence, the analysis below carries consequences well beyond the immediate goal of the ISS removing free-surface multiples.

Figure 3 describes a situation in which a unit-amplitude downgoing wave leaves a source in the water column. The upper figure assumes that there is no free surface. R(ω) denotes the single temporal frequency of the upgoing recorded field. The lower figure corresponds to the same situation with the addition of the free surface. Rf(ω) is the single temporal frequency of the upgoing portion of the recorded data. R(ω) contains all primaries and internal multiples. Rf(ω), on the other hand, is the upgoing portion of the total measured wavefield and consists of primaries, internal multiples, and free-surface multiples. The downgoing source wavefield and the upgoing receiver wavefield would be realized in practice by source and receiver deghosting. Source and receiver deghosting is a critically important step to assure the subsequent amplitude and phase fidelity of the ISS free-surface multiple-removal methods, whose derivation follows below.

Figure 4. The forward problem. Constructing free-surface multiples [i.e., from R(ω) to Rf(ω)].

Forward construction of data with free-surface multiples, Rf(ω), in terms of data without free-surface multiples, R(ω)

The downgoing source wavefield of unit amplitude first impinges on the Earth, and R(ω) emerges (consisting of all primaries and internal multiples). R(ω) hits the free surface, and −R(ω) is the resulting downgoing wave (because the reflection coefficient is −1 for the pressure field at the free surface). This downgoing field, −R(ω), in turn enters the Earth as a wavelet, and −R²(ω) emerges, and this repeats in the manner shown in Figure 4. The total upgoing wavefield in the presence of a free surface, Rf(ω), is expressed in terms of the total upgoing wavefield in the absence of the free surface, R(ω):

Rf(ω) = R(ω) − R²(ω) + R³(ω) − ...     (1)

Rf(ω) = R(ω)/(1 + R(ω))     (2)

Several points are worth noting about this result. The inverse series for removing free-surface multiples, corresponding to the forward series (Equation 1) that constructs free-surface multiples, is found by rearranging Equation 2 into R = Rf/(1 − Rf) and then expressing R as the infinite series

R(ω) = Rf(ω) + Rf²(ω) + Rf³(ω) + ...     (3)

This expression is, indeed, the 1D normal-incidence version of the inverse scattering free-surface multiple-attenuation algorithm (Carvalho, 1992; Weglein et al., 1997). Notice that neither the forward (construction) series for Rf in terms of R nor the removal (elimination) series for R in terms of Rf depends on knowing anything about the medium below the receivers.
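Because Equations 1-3 are scalar series in R(ω) and Rf(ω) at a single frequency, their forward/inverse relationship is easy to check numerically. The sketch below is ours and purely illustrative: the sample value R = 0.4 and the function names are assumptions, not part of the article.

```python
# Numerical sketch of the 1D normal-incidence free-surface series.
# The forward series (Equation 1) constructs data with free-surface
# multiples, Rf, from data without them, R; the inverse series
# (Equation 3) removes them again. R = 0.4 is an illustrative
# single-frequency sample value.

def add_free_surface(R, n_terms=60):
    """Equation 1: Rf = R - R^2 + R^3 - ... (sums to R/(1 + R), Eq. 2)."""
    return sum((-1) ** (k - 1) * R ** k for k in range(1, n_terms + 1))

def remove_free_surface(Rf, n_terms=60):
    """Equation 3: R = Rf + Rf^2 + Rf^3 + ... (sums to Rf/(1 - Rf))."""
    return sum(Rf ** k for k in range(1, n_terms + 1))

R = 0.4                                   # |R| < 1, so both series converge
Rf = add_free_surface(R)                  # data with free-surface multiples
assert abs(Rf - R / (1 + R)) < 1e-12      # agrees with closed form, Eq. 2
assert abs(remove_free_surface(Rf) - R) < 1e-12   # multiples removed
```

Note that neither routine uses any information about the medium below the receivers; only the recorded quantity enters, which is the point of the paragraph above.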
The ISS free-surface removal series derivation and algorithm (Equation 3) does not care about the Earth model type and is completely unchanged if the Earth is considered to be acoustic, elastic, or anelastic. That property is called model-type independence (see Weglein et al., 2003). The derivation of these series (Equations 1 and 3) was based on the difference in the physical circumstances that gives rise to the events we are trying to isolate and separate: free-surface multiples and the (−1) reflection coefficient at the free surface (the physical circumstance). Both the construction and elimination process assume a wavelet deconvolution in the forward problem. The wavelet, S(ω), plays a role in the forward problem:

Rf′(ω) = S(ω)[R(ω) − R^2(ω) + R^3(ω) − …],

and in the inverse:

S(ω)R(ω) = Rf′(ω) + Rf′^2(ω)/S(ω) + Rf′^3(ω)/S^2(ω) + … ,

where the quantity Rf′ is S(ω) times Rf in Equations 1 and 2. Hence, for free-surface multiple removal, there is a critical need for the wavelet, because the effectiveness of the series has a nonlinear dependence on 1/S(ω).

Free-surface demultiple algorithm: Instructive analytic examples
We present an analytic 1D normal-incidence example (Figure 5) to illustrate the inner workings of the ISS free-surface multiple-removal algorithm. The reflection data in the time domain are expressed as

r(t) = R1 δ(t − t1) + R2 δ(t − t2),

where R1 and R2 are the amplitudes of the two primaries in this two-reflector example. In the frequency domain,

R(ω) = R1 e^{iωt1} + R2 e^{iωt2},

and, from Equation 1,

Rf(ω) = R(ω) − R^2(ω) + R^3(ω) − … , so that Rf^2(ω) = R^2(ω) − 2R^3(ω) + … .

Hence Rf(ω) + Rf^2(ω) = R(ω) − R^3(ω) + … precisely eliminates all free-surface multiples that have experienced one downward reflection at the free surface. The absence of low frequency (and, in fact, of any other frequency) plays absolutely no role in this prediction. This is a nonlinear direct inverse that removes free-surface multiples. There is no imaginable way that one frequency of data could be used to model and subtract one frequency of free-surface multiples; a single frequency of data cannot even locate the water bottom. This is an example of how a direct nonlinear inverse does not correspond to a forward problem run backward. Furthermore, model matching and subtracting multiples are inconceivable without knowing or caring about the Earth model type for the modeling step. This illustrates how model matching, iteratively or otherwise, modeling run backward, and all forms of indirect inversion are not equivalent to a direct inverse solution.

Recovering an invisible primary
Consider a free-surface example (Figure 6) with the following data, corresponding to two primaries and a free-surface multiple:

D(t) = R1 δ(t − t1) + R2 δ(t − t2) − R1^2 δ(t − 2t1).   (4)

Now assume for our example that t2 = 2t1 and R2 = R1^2.
August 2011 The Leading Edge 869
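The two-reflector example, including the invisible-primary case, can be reproduced with a short numerical sketch (NumPy; the spike times and amplitudes are invented, with the second primary chosen to cancel the first-order free-surface multiple). Convolution of spike trains plays the role of products of R(ω):

```python
import numpy as np

# Spike-train version of the two-reflector free-surface example. Times and
# amplitudes are illustrative; t2 = 2*t1 and R2 = R1^2 make the second
# primary cancel the first-order free-surface multiple (invisible primary).

n = 128
t1, R1 = 10, 0.4
t2, R2 = 20, 0.4 * 0.4

R = np.zeros(n)
R[t1], R[t2] = R1, R2            # primaries only (no free surface)

def conv(a, b):
    """Time-domain convolution, truncated to the trace length."""
    return np.convolve(a, b)[:n]

# Forward series (Equation 1, truncated): add free-surface multiples
Rf = R - conv(R, R) + conv(R, conv(R, R))
print(abs(Rf[t2]))               # ~0: the second primary is hidden in the data

# Removal series (Equation 3, truncated): Rf + Rf^2 + Rf^3
Rrec = Rf + conv(Rf, Rf) + conv(Rf, conv(Rf, Rf))
print(Rrec[t1], Rrec[t2])        # both primaries, the hidden one recovered
```

Removing the multiple raises the energy of the trace at t2 from zero to R2, which is why an energy-minimization subtraction criterion works against the correct answer in this case.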


Then from Equation 4,

D(t) = R1 δ(t − t1).

The second primary and the free-surface multiple cancel, and applying Equation 3 through second order gives

Rf(ω) + Rf^2(ω) = R1 e^{iωt1} + R1^2 e^{2iωt1},

resulting in the two primaries, that is, recovering the primary not seen in the original data. The ISS free-surface multiple-removal algorithm, with deghosted and wavelet-deconvolved data, can predict and subtract the hidden multiple and recover the hidden primary. If the obliquity-factor, deghosting, and wavelet ingredients are compromised in the prediction, the amplitude and phase will be incorrect and the invisible primary will not be recovered. Furthermore, when the multiple is removed in the invisible-reflector example, the energy goes up, not down, so the adaptive-subtraction energy-minimization criterion fails and cannot fix the problem caused by missing obliquity factors, wavelet removal, and deghosting. The lesson: don't compromise on prediction strengths and assume the (adaptive) subtraction will atone for any shortcomings. The ISS free-surface multiple prediction has no trouble recovering the hidden primary. Zhang (2007) demonstrates with a prestack example that, with deghosted data, the ISS free-surface algorithm precisely predicts the free-surface multiple without the need for adaptive subtraction. For these same examples, and in general, the feedback-loop free-surface multiple-attenuation algorithm, with its lack of an obliquity factor and its retention of the source-side ghost, will not accurately predict the amplitude and phase of free-surface multiples.

ISS internal multiple-attenuation algorithm
The ISS internal-multiple-attenuation algorithm in 2D starts with input data D(kg, ks, ω) that are deghosted, wavelet deconvolved, and with free-surface multiples removed. The parameters kg, ks, and ω represent the Fourier conjugates to receiver, source, and time, respectively. The ISS internal-multiple-attenuation algorithm for first-order internal multiple prediction in a 2D Earth is (Araújo, 1994; Weglein et al., 1997)

b3^{IM}(kg, ks, ω) = (1/(2π)^2) ∫∫ dk1 dk2 e^{−iq1(zg − zs)} e^{iq2(zg − zs)}
  × ∫_{−∞}^{∞} dz1 e^{i(qg + q1)z1} b1(kg, k1, z1)
  × ∫_{−∞}^{z1 − ε} dz2 e^{−i(q1 + q2)z2} b1(k1, k2, z2)
  × ∫_{z2 + ε}^{∞} dz3 e^{i(q2 + qs)z3} b1(k2, ks, z3).   (5)

In Equation 5, the quantity b1(kg, ks, z) corresponds to an uncollapsed migration (Weglein et al., 1997) of effective incident plane-wave data. The vertical wavenumbers for receiver and source, qg and qs, are given by qi = sgn(ω)√(ω^2/c0^2 − ki^2) for i = g, s; c0 is the constant reference velocity; zs and zg are source and receiver depths; zi (i = 1, …, 3) represents pseudodepth; and ε is a small positive parameter that enforces the deeper-shallower-deeper relationship among the three subevents. b3^{IM}(kg, ks, ω) is a portion of a term in the ISS that performs prediction of all first-order internal multiples at all depths at once. For a 1D Earth and a normal-incidence plane wave, Equation 5 reduces to

b3^{IM}(k) = ∫_{−∞}^{∞} dz1 e^{ikz1} b1(z1) ∫_{−∞}^{z1 − ε} dz2 e^{−ikz2} b1(z2) ∫_{z2 + ε}^{∞} dz3 e^{ikz3} b1(z3),   (6)

where k = 2ω/c0. For the example shown in Figure 6 with two primaries, we transform the data into pseudodepth:

b1(z) = R1 δ(z − z1) + R2′ δ(z − z2),

where z1 = c0 t1/2 and z2 = c0 t2/2, and R2′ is the amplitude of the second primary in the data (R2 reduced by two-way transmission through the first interface). The integral in Equation 6 produces

b3^{IM}(k) = R1 R2′^2 e^{ik(2z2 − z1)},

and in the time domain:

R1 R2′^2 δ(t − (2t2 − t1)).

The actual internal multiple in the data arrives at time 2t2 − t1 with magnitude R1 R2′^2/(T01 T10), where T01 and T10 are the transmission coefficients across the shallower interface. Hence, Equations 5 and 6 predict the precise time and, to within the transmission factor T01 T10, the amplitude of the internal multiple (i.e., it's an attenuator). There is a closed-form subseries of the ISS that eliminates that multiple (Ramírez and Weglein, 2005).

Examples of 2D ISS free-surface and internal multiple removal with marine data
Figure 7 shows an example of the inverse scattering series internal-multiple-attenuation algorithm applied to a 2D synthetic data set. The data were computed using an Earth model characterized by rapid lateral variations (Figure 7a). In Figure 7, from left to right, the three panels show the input data, the predicted internal multiples, and the result of inverse scattering internal multiple attenuation, respectively. Figures 8a and 8b illustrate the free-surface and internal multiple-attenuation algorithms applied to a data set from the Gulf of Mexico over a complex salt body. Seismic imaging beneath salt is a challenging problem due to the complexity of the resultant wavefield. In Figure 8a, the left panel is a




Figure 5. An analytic 1D normal incidence example to illustrate the inner workings of the ISS free-surface multiple-removal algorithm.

Figure 6. A one-dimensional model with two interfaces.

stacked section of the input data and the right panel shows the result of the inverse scattering free-surface multiple-removal algorithm. Figure 8b illustrates the internal-multiple-attenuation method applied to the same Gulf of Mexico data set. An internal multiple that has reverberated between the top of the salt body and the water bottom (and interferes with the base-salt primary) is well attenuated by this method.

ISS internal multiple application for land
Fu et al. (2010), along with Terenghi et al. and Luo et al. (in this special section), describe the motivation, evaluation, and comparison of different approaches to removing internal multiples on complex synthetic and onshore data. Fu et al. concluded that "Their (ISS internal multiple algorithm) performance was demonstrated with complex synthetic and challenging land field data sets with encouraging results, where other internal multiple suppression methods were unable to demonstrate similar effectiveness." While the ISS internal multiple attenuator was unmatched in capability in comparison with the other internal multiple methods tested, an examination of the results shows that there are open issues yet to be addressed. A more complete understanding of the action of the ISS first-order internal multiple attenuator (Equation 5) when the input consists of all the events in the recorded data, and the anticipated

Figure 7. (a) A 2D synthetic model characterized by gently curved reflectors intersected by a fault. (b) The left panel shows a common-offset display from the synthetic data set created using the model. The middle panel shows the predicted internal multiples, and the right panel is the result after subtracting the predicted multiples from the input data set. (From Matson et al., 1999, and Weglein et al., 2003)

need for further inclusion of ISS internal multiple-removal capability in our algorithm are our response to those issues, and are currently underway. The Delft group, led by Berkhout, took note of and acknowledged the ISS internal multiple approach several years ago, and then formulated several new and innovative DELPHI approaches that drew upon certain (but not all) aspects and properties of the ISS internal multiple algorithm. The differences between the latter DELPHI approaches and the ISS internal multiple method remain significant and substantive today. The comparisons to ISS internal multiple attenuation referred to in Fu et al. included the DELPHI approaches to internal multiple removal. The details behind the Fu et al. tests and results are described, explicated, and further analyzed in Terenghi et al.

Discussion
We have described a wish list of qualities that the ideal response to multiple-removal challenges would satisfy, and


have shown that only the ISS multiple-removal methods are candidates toward reaching that high standard. All methods have strengths and shortcomings, and as we recognize the shortcomings of the current ISS attenuator, we also recognize that removing them resides within the ISS and that the upgrade will never require subsurface information, picking events, or any interpretive intervention or layer stripping. What all ISS methods require is a reasonable source signature and deghosting, and we are developing onshore Green's theorem methods for that purpose (see Zhang and Weglein, 2005; Zhang and Weglein, 2006; and Mayhan et al., 2011).

Adaptive energy-minimizing criteria are often employed in an attempt to bridge the conditions and limitations of the real world and the physics behind what our algorithms are assuming. When first introduced by Verschuur et al. (1992) and Carvalho and Weglein (1994), the need was clear and good benefit was derived, especially with isolated primaries and free-surface multiples of first order. But, as with all assumptions, today's reasonable and necessary assumption will invariably be tomorrow's impediment to progress and increased effectiveness. And that's the case with adaptive subtraction today, especially with land and complex marine internal multiples.

We have advocated a three-pronged response to land and complex marine internal multiples: (1) seeking further capability for amplitude fidelity for all orders of internal multiples, including converted-wave internal multiples; (2) satisfying prerequisites for the source signature and radiation pattern; and (3) looking for a new bridge to replace the energy-minimization adaptive criteria, a bridge consistent with the underlying physics rather than running at cross purposes with the greatest strength of the ISS prediction.

For marine multiple removal, a key impediment for shallower-water exploration is the inability to extrapolate to near-source precritical-angle traces when the nearest receiver is in the postcritical region. That can shut down free-surface multiple removal and can impede interpretation and drilling decisions. All methods for extrapolation, including f-k, Radon, interferometry (i.e., Green's theorem), and migrate-demigrate data reconstruction, fail to provide that post- to precritical curve-jumping capability. One possibility with some ray of hope and optimism is to invert the postcritical data with model matching (Sen et al., 2001). That global search procedure and test, although positive and encouraging, was already pushing compute and algorithm capability with an initial 1D elastic test and application. Further attention and progress on this open issue is warranted and could pay significant dividends.

Our plan is to progress each of these issues as a strategy to extend the current encouraging results and allow ISS multiple removal to reach its potential: to surgically remove all multiples without damaging primaries under simple or complex, daunting land and marine circumstances.

Figure 8. (a) The left panel is a stack of a field data set from the Gulf of Mexico. The right panel is the result of ISS free-surface multiple removal. (b) The ISS internal multiple-attenuation method applied to the same data set after free-surface multiple removal. Data courtesy of WesternGeco. (From Matson et al., 1999, and Weglein et al., 2003)

Summary
The strategy that we advocate is a tool-box approach, where the appropriate multiple-removal method is chosen based on the given data set and the processing goal. The relative use of the different methods within the tool box has shifted over time as exploration portfolios have focused on more remote, complex, and difficult marine and land plays. That industry trend and need drives our orientation and continued interest in multiple removal. Its objectives are: (1) fidelity of both amplitude and phase prediction to allow surgical removal of all multiples without damaging primaries; (2) including all relevant multiples in the algorithms; (3) using appropriate orders of multiple-removal terms from ISS multiple-removal subseries in the prediction; (4) strengthening the prediction and reducing the burden on the adaptive subtraction; and (5) developing a replacement to the energy-minimization criteria that


will align with rather than impede the method it is meant to serve. The ISS methods for removing free-surface and internal multiples are an essential and uniquely qualified ingredient in this strategy. When other priorities (like cost) might reasonably override the interest in (1) amplitude and phase fidelity and (2) inclusion of all internal multiples, and/or when the generators of the relevant internal multiples can be reliably identified, then the DELPHI methods can be the appropriate and indicated choice. The potential cost of drilling dry holes always has to be taken into account. The industry move to 3D acquisition and processing was not put forth to save money on acquisition and processing; it saved money by drilling fewer expensive dry holes. One exploratory well in the deepwater Gulf of Mexico can cost US $200 million, and we can significantly increase data-acquisition investment and processing expenditure by the cost saving of avoiding dry holes and improving the exploration drilling success rate. Distinguishing between a multiple and a gas sand is a drill/no-drill decision. In summary, multiple-removal prediction methods have progressed, and there is much to celebrate. The capability and potential that reside within the ISS for attenuating multiples have already shown differential added value. However, the trend to more complex and challenging marine and onshore plays demands inclusiveness of all troublesome multiples in the removal, along with: (1) stronger and more competent prediction, with amplitude and phase fidelity at all offsets, and (2) the development of fundamentally new concepts and criteria for subtraction that align with rather than undermine the strengths of high-end prediction. There will always be a need for a subtraction step that attempts to deal with issues beyond the framework of the prediction, and there will always be those types of beyond-the-framework issues. We need a more sophisticated and capable subtraction criterion.
The adaptive-subtraction concept has been enormously useful, with a strong record of contribution, but it is now too blunt an instrument for the more complicated and complex challenges. In the interim, the strategy is to build the strength of the prediction and to reduce the burden on the adaptive subtraction. The ISS is also the source of an effective response to outstanding open issues on amplitude and all orders of internal multiples, which have moved from the back burner to center stage. The key to that strategy builds predictive strength from a direct-inverse machinery and wave-theory-deterministic Green's theorem prerequisite satisfaction, while seeking near-term reduction of the burden on the energy-minimization adaptive subtraction, and ultimately to replace the latter with an entirely consistent, comprehensive, and more effective prediction and subtraction of multiples. The ISS multiple prediction, and the Green's theorem prerequisite satisfaction for the data wavelet and deghosting, are aligned and consistent. A subtraction on that same footing would provide an overall comprehensive and consistent methodology and a step improvement in multiple-removal capability. In this paper, we want to communicate our support and encouragement for that necessary future development and delivery.

The progress and success represented by advances in multiple-attenuation methods have given hope to areas that were previously off-limits and no-go zones. That, in turn, has allowed our industry to imagine that yet more difficult exploration areas and targets could be accessible. In summary, that is the encouraging and positive response to the question "multiples or multiple removal: who is winning?"
References
Araújo, F., 1994, Linear and nonlinear methods derived from scattering theory: backscattered tomography and internal multiple attenuation: Ph.D. thesis, Universidade Federal da Bahia.
Araújo, F., A. Weglein, P. Carvalho, and R. Stolt, 1994, Inverse scattering series for multiple attenuation: An example with surface and internal multiples: 64th Annual International Meeting, SEG, Expanded Abstracts, 1039–1041.
Carvalho, P. and A. Weglein, 1994, Wavelet estimation for surface multiple attenuation using a simulated annealing algorithm: 64th Annual International Meeting, SEG, Expanded Abstracts, 1481–1484.
Carvalho, P., 1992, Free-surface multiple reflection elimination method based on nonlinear inversion of seismic data: Ph.D. thesis, Universidade Federal da Bahia.
Coates, R. and A. Weglein, 1996, Internal multiple attenuation using inverse scattering: Results from prestack 1D and 2D acoustic and elastic synthetics: 66th Annual International Meeting, SEG, Expanded Abstracts, 1522–1525.
Fokkema, J. and P. van den Berg, 1990, Removal of surface-related wave phenomena: The marine case: 60th Annual International Meeting, SEG, Expanded Abstracts, 1689–1692.
Fu, Q., Y. Luo, P. Kelamis, S. Huo, G. Sindi, S. Hsu, and A. Weglein, 2010, The inverse scattering series approach toward the elimination of land internal multiples: 80th Annual International Meeting, SEG, Expanded Abstracts, 3456–3461.
Kelamis, P., W. Zhu, K. Rufaii, and Y. Luo, 2006, Land multiple attenuation: The future is bright: 76th Annual International Meeting, SEG, Expanded Abstracts, 2699–2703.
Matson, K., D. Corrigan, A. B. Weglein, C. Y. Young, and P. Carvalho, 1999, Inverse scattering internal multiple attenuation: Results from complex synthetic and field data examples: 69th Annual International Meeting, SEG, Expanded Abstracts, 1060–1063.
Mayhan, J., P. Terenghi, A. Weglein, and N. Chemingui, 2011, Green's theorem derived methods for deghosting seismic data when the pressure P and its normal derivative are measured: 81st Annual International Meeting, SEG, Expanded Abstracts, 2722–2726.
Morley, L. and J. Claerbout, 1983, Predictive deconvolution in shot-receiver space: Geophysics, 48, no. 5, 515–531, doi:10.1190/1.1441483.
Ramírez, A. and A. Weglein, 2005, An inverse scattering internal multiple elimination method: Beyond attenuation, a new algorithm and initial tests: 75th Annual International Meeting, SEG, Expanded Abstracts, 2115–2118.
Sen, M., A. B. Weglein, and P. Stoffa, 2001, Prediction of precritical seismograms from postcritical traces, University of Texas Institute for Geophysics Research Project Report: M-OSRP report year 2009.
Shaw, S. A. and H. Zhang, 2003, Inverse scattering series and seismic exploration: Inverse Problems, R27–R83.
Terenghi, P., S. Hsu, A. B. Weglein, and X. Li, 2011, Exemplifying the specific properties/characteristics of the ISS internal multiple method that reside behind its capability for complex on-shore and marine multiples: in this issue.
Verschuur, D., A. J. Berkhout, and C. P. A. Wapenaar, 1992, Adaptive surface-related multiple elimination: Geophysics, 57, no. 9, 1166–1177, doi:10.1190/1.1443330.
Ware, J. A. and K. Aki, 1968, Continuous and discrete inverse scattering problems in a stratified elastic medium. I. Plane waves at normal incidence: The Journal of the Acoustical Society of America, 45, no. 4, 911–921, doi:10.1121/1.1911568.
Weglein, A., F. Araújo, P. Carvalho, R. Stolt, K. Matson, R. Coates, D. Corrigan, D. Foster, S. Shaw, and H. Zhang, 2003, Inverse scattering series and seismic exploration: Inverse Problems, 19, no. 6, R27–R83, doi:10.1088/0266-5611/19/6/R01.
Weglein, A. and W. Dragoset, eds., 2005, Multiple attenuation: SEG Geophysics Reprint Series.
Weglein, A., 1999, Multiple attenuation: an overview of recent advances and the road ahead: The Leading Edge, 18, no. 1, 40–44, doi:10.1190/1.1438150.
Weglein, A., F. Araújo Gasparotto, P. Carvalho, and R. Stolt, 1997, An inverse-scattering series method for attenuating multiples in seismic reflection data: Geophysics, 62, no. 6, 1975–1989, doi:10.1190/1.1444298.
Weglein, A., H. Zhang, A. Ramírez, F. Liu, and J. Lira, 2009, Clarifying the underlying and fundamental meaning of the approximate linear inversion of seismic data: Geophysics, 74, no. 6, WCD1–WCD13, doi:10.1190/1.3256286.
Wiggins, J., 1988, Attenuation of complex water-bottom multiples by wave-equation-based prediction and subtraction: Geophysics, 53, no. 12, 1527–1539, doi:10.1190/1.1442434.
Zhang, J., 2007, Wave theory based data preparation for inverse scattering multiple removal, depth imaging and parameter estimation: analysis and numerical tests of Green's theorem: Ph.D. thesis, University of Houston.

Zhang, J. and A. Weglein, 2005, Extinction theorem deghosting method using towed streamer pressure data: Analysis of the receiver array effect on deghosting and subsequent free-surface multiple removal: 75th Annual International Meeting, SEG, Expanded Abstracts, 2095–2098.
Zhang, J. and A. Weglein, 2006, Application of extinction theorem deghosting method on ocean bottom data: 76th Annual International Meeting, SEG, Expanded Abstracts, 2674–2678.

Acknowledgments: The authors express their deepest appreciation and gratitude to Dolores Proubasta, James D. Mayhan, and Hong Liang for their excellent technical suggestions and advice, which have greatly benefited this paper. All authors wish to thank all M-OSRP sponsors, NSF (Award DMS-0327778), and DOE (BES Award DE-FG02-05ER15697) for their encouragement and support. We thank Bill Dragoset and WesternGeco for providing the data shown in example 2 and for permission to publish the results. Corresponding author: aweglein@uh.edu


SPECIAL SECTION: Multiple attenuation

Exemplifying the specific properties of the inverse scattering series internal-multiple attenuation method that reside behind its capability for complex onshore and marine multiples
PAOLO TERENGHI, SHIH-YING HSU, ARTHUR B. WEGLEIN, and XU LI, University of Houston

The world of petroleum exploration constantly demands higher efficacy at every link in the data-processing chain, from preconditioning to imaging and inversion. Within that chain, the removal of internal multiples constitutes a particularly resilient problem, whose resolution has only partially benefited from the advent of the data-driven technologies that have transformed the practice of free-surface multiple elimination. Historically, internal multiples have received less attention than free-surface multiples. In offshore surveys, for example, their relative importance is often outweighed by dominant water-column multiples. However, as the demand for accuracy increases, driven by improvements in imaging capability and removal of free-surface multiples, the interest in removing troublesome internal multiples rises in priority. Internal multiples can cause uncertainty in the interpretation process and can obscure both onshore and offshore exploration targets. In all scenarios, the key to addressing the internal multiple problem consists of responding to a combination of several challenges, as exemplified in Weglein et al. in this special section of TLE. We seek a method that can accommodate an Earth with strong lateral variations and the wavefield phenomena it creates (such as multipathing and diffracted internal multiples). That capability is likely to be critical in offshore areas with a highly rugose (diffractive) water bottom and for internal multiples generated within salt bodies. Other desirable characteristics are the ability to (1) operate independently of a priori information and (2) accommodate the broadest set of multiples without the user being required to identify the portion of the Earth responsible for the multiples' subevents. In a variety of situations, internal multiples are generated within alternating sequences of rocks and sediments with contrasting seismic properties.

In certain geologic settings, those sequences can extend for several hundred or thousand meters, and choosing one or more significant multiple generators represents a challenge by itself, one which can only be confidently addressed using information from nearby well logs. A further interest is in producing a simultaneous prediction of all internal multiples with equal accuracy at all offsets, because an accurate match between predicted and actual multiples (amplitude, phase, number of events) alleviates the burden on adaptive subtraction. Currently available methods for internal multiple attenuation/removal can be divided into two groups. The first group of methods requires the user to identify the primaries as internal-multiple subevents or the portion of the Earth responsible for the internal multiples' downward reflection. Typically, the

interpretation consists of picking the traveltime of the event corresponding to a chosen downward reflector, often referred to as the internal multiple generator. The interpretation can be used directly to isolate the chosen generator from other events corresponding to deeper reflectors (pioneered by Keydar et al., 1997, and promulgated by Jakubowicz, 1998), or used to downward continue the wavefield (through common-focus-point operators) toward the generator (feedback methods, boundary approach) or toward a chosen reference level, i.e., the layer approach (Berkhout and Verschuur, 2005; Verschuur and Berkhout, 2005). The reference level is chosen to separate the regions of the Earth that contain downward reflectors from those that contain upward reflectors in the construction of internal multiples. The second group of internal multiple attenuation/removal methods does not require generator identification; the internal multiples are constructed by combining three events that satisfy an automated constraint. In the method based on the inverse scattering series (ISS), the constraint is a deeper-shallower-deeper relationship in pseudodepth or vertical traveltime (Araújo, 1994; Weglein et al., 1997; Weglein et al., 2003; Nita and Weglein, 2007). Ten Kroode (2002) proposed an asymptotic derivation of the results in Weglein et al. (1997), where the constraint is a longer-shorter-longer relationship between total traveltimes under the assumption of traveltime monotonicity (deeper events yield longer traveltimes). The automated constraint enables the algorithms in the second group to predict internal multiples for all possible generators in one step, and they can be considered truly independent of subsurface information. Through a set of examples, the analysis in this paper provides insights into the inner workings of the ISS algorithm, and an explanation for the success recently reported in a field-data application on data from Saudi Arabia (Fu et al., 2010; and Luo et al. in this special section). The first example illustrates the case of internal multiples generated at a highly curved interface and demonstrates the advantage of the ISS internal multiple algorithm's use of vertical traveltime in contrast to total-traveltime-based algorithms.

Figure 1. (a) Velocity model; (b) zero-offset section of the input data; (c) zero-offset section of the water-speed f-k migration, the first-order term in the ISS internal-multiple algorithm.
In the second numerical example, we illustrate the ability of the ISS internal multiple-attenuation algorithm to address all existing internal multiples generated by all downward reflectors in a single step. That ability is in contrast to the methods of the first group above, which are unable to match that removal efficacy.

Internal multiple attenuation using the inverse scattering series
The removal of internal multiples can be regarded as a particular task within the general inversion machinery of the ISS (Weglein et al., 2003). Within that framework, it is possible to identify a subset of ISS terms that suppress internal multiples, starting from an input wavefield with all free-surface effects (source- and receiver-side ghosts and free-surface multiples) removed (Araújo et al., 1994; Weglein et al., 1997). The ISS and all tasks within it (e.g., the free-surface and internal multiple-attenuation algorithms) are entirely data-driven tools, which require neither information about the medium through which the multiples propagate, nor moveout discrimination between primaries and multiples, nor interpretive intervention. The ISS internal multiple-attenuation algorithm predicts internal multiples for all horizons at once without needing or using information about the reflectors involved in generating them. Its leading-order term

Figure 2. An internal multiple (solid blue) satisfying monotonicity in vertical time but not in total traveltime. If wave speed c1 is much greater than c0, the (dashed blue) and (dashed green) primaries arrive at the surface earlier than the (dashed red) primary. The multiple is removed by the ISS method, but not by methods based on total traveltime monotonicity.

Figure 3. (a) Earth model and (b) event labeling. Densities are chosen to yield a vertical-incidence reflection coefficient of 0.8 at all layer boundaries.
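The situation in Figure 2 can be reduced to a toy check of the two automated constraints (the (pseudodepth, total traveltime) pairs below are invented to mimic a deep reflector beneath a fast layer):

```python
# Toy contrast between the ISS constraint (deeper-shallower-deeper in
# pseudodepth/vertical time) and a total-traveltime constraint
# (longer-shorter-longer, as in ten Kroode's asymptotic derivation).

def dsd_pseudodepth(z1, z2, z3):
    """ISS selection: outer subevents deeper than the middle one."""
    return z1 > z2 and z3 > z2

def lsl_traveltime(t1, t2, t3):
    """Total-traveltime selection: outer subevents later than the middle one."""
    return t1 > t2 and t3 > t2

# (pseudodepth z, total traveltime t); the deep event lies beneath a fast
# layer (c1 >> c0), so it is deeper in vertical time yet arrives earlier.
deep = (0.9, 0.6)
shallow = (0.5, 0.7)

print(dsd_pseudodepth(deep[0], shallow[0], deep[0]))  # True: multiple selected
print(lsl_traveltime(deep[1], shallow[1], deep[1]))   # False: multiple missed
```

The internal multiple built from these three subevents is predicted by the pseudodepth test but rejected by the total-traveltime test, which is the failure mode sketched in Figure 2.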

predicts the correct traveltimes and approximate amplitudes of all the internal multiples in the data. Ramírez and Weglein (2005) extended the theory from attenuation toward elimination by including more terms in the elimination subseries, thereby improving the amplitude prediction. Although the ISS free-surface and internal-multiple algorithms were initially designed for a marine towed-streamer experiment (with an acoustic reference medium of water), Coates and Weglein (1996) showed that all free-surface and internal multiples with converted waves in their history are also predicted. The latter free-surface and internal multiple cases use a reference medium in which S-waves don't even exist. This is not model matching, indirect inversion, or modeling run backward! Matson (1997) extended ISS multiple removal to ocean-bottom and land data. Matson et al. (1999) and Weglein et al. (2003) were the first to apply the ISS free-surface and internal multiple algorithms to marine towed-streamer field data, while Fu et al. (2010) contains the first ISS internal multiple application on land data.

Properties of the first-order term in the ISS internal multiple-attenuation algorithm: uncollapsed f-k migration
The algorithm starts with source- and receiver-side deghosted data absent of free-surface effects. Using only the reference velocity, an uncollapsed migration (Stolt, 1978; Stolt and Weglein, 1985) maps the input data from time to pseudodepth (i.e., depth identified by imaging using a reference velocity). The concept of pseudodepth is similar to that of traveltime at vertical incidence. The pseudodepth mapping is achieved in the frequency domain, where the temporal frequency (ω) observed in the surface recordings can be related to the vertical wavenumber (kz = qg + qs) of a constant-velocity image through the relationship

for i=( g,s), where, c0 is the chosen reference velocity ki the horizontal wavenumber and the subscripts g and s characterize the Fourier domain variables on the receiver and source side, respectively (Clayton and Stolt, 1981). Within constant velocity migration assumptions, the f-k Stolt migration correctly images the reected waveeld generated by interfaces of any arbitrary shape, including diractions and multipathing. One example of such phenomena is the bow-tie pattern generated by reections over a suciently curved boundary. These eects are common in seismic exploration data and can occur in a variety of geologic features, including salt domes, faults, layer terminations, pinch-outs, fractured and/or irregular volcanic layers and for a rough sea bottom. As we mentioned, several internal multiple-removal algorithms require picking events and traveltimes. In some of those methods (Keydar et al., 1997), the picked traveltimes are directly used to mute the waveeld at earlier or later times with respect to the generator, and internal multiples are predicted using auto- and cross-correlation operations between traces from the resulting elds. In others (e.g., the feedback methods), the traveltimes are used to determine approximate redatuming operators. However, all these approaches are based on the implicit assumption that a one-to-one relationship exists between seismic events (their traveltime) and the Earth features that create them (such as layer boundaries). In the presence of diractions and/or multipathing, a one-to-one relationship does not exist, as, e.g., a single curved interface can produce several seismic arrivals in a single seismic trace. Picking events, traveltimes, and generators may not be viable even in a normal-incidence experiment in a 1D Earth, since destructively interfering primary and multiple events are possible and often prevalent in land eld data (Kelamis et al., 2006; Fu et al., 2010). 
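To make the time-to-pseudodepth mapping concrete, here is a small numeric sketch (our illustration, not the authors' code; the 1500 m/s water reference velocity and 30-Hz frequency are assumed values) of the vertical-wavenumber relationship used by the uncollapsed f-k migration:

```python
import numpy as np

def vertical_wavenumber(omega, k, c0):
    """q_i = sgn(omega) * sqrt(omega^2/c0^2 - k_i^2), i = g, s.
    The evanescent regime (|k| > |omega|/c0) is clipped to zero here."""
    arg = (omega / c0) ** 2 - k ** 2
    return np.sign(omega) * np.sqrt(np.maximum(arg, 0.0))

c0 = 1500.0                  # assumed water reference velocity (m/s)
omega = 2.0 * np.pi * 30.0   # a 30-Hz temporal frequency (rad/s)

# At vertical incidence (kg = ks = 0), kz = qg + qs = 2*omega/c0, so an
# event with two-way vertical traveltime t images at pseudodepth z = c0*t/2.
kz = vertical_wavenumber(omega, 0.0, c0) + vertical_wavenumber(omega, 0.0, c0)
print(np.isclose(kz, 2.0 * omega / c0))   # True
```

At nonzero horizontal wavenumber the same relation bends the mapping away from simple time stretching, which is how the uncollapsed migration honors diffractions and multipathing.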
We present an example based on a simple three-layer Earth model in which the shallowest interface is sine-shaped. The model in Figure 1a produces the data in Figure 1b, where all seismic events except the second primary at 2.2 s originate at the shallow reflector. Clearly, it is difficult to pick a unique traveltime to represent the curved reflector, as many events are generated that interfere with each other and even with the second primary. The ISS method provides a natural solution by taking as input the uncollapsed prestack water-speed migration (Figure 1c), in which the spatial (pseudodepth) relationship between seismic arrivals matches the spatial relationship of the reflectors in the actual Earth (Nita and Weglein, 2007). The sketch in Figure 2 describes another example of an internal multiple that would not be predicted if total traveltimes were the basis of the method. The multiple can be traced back to an Earth feature where the relationship between total traveltimes and vertical traveltimes (pseudodepth) is inverted because of a high-velocity layer at depth. Vertical traveltimes have a closer relationship to actual depth than total traveltimes and hence represent a more effective basis for removing internal multiples (see, e.g., Hsu, 2011). This vertical traveltime is the tool used in the ISS internal multiple-attenuation algorithm.

Properties of the leading (third)-order term in the ISS internal multiple-attenuation algorithm

Let z1, z2, and z3 be the pseudodepths of three generic points in the data produced by the first-order term in the internal-multiple series. The leading-order internal multiple prediction is composed of three events that satisfy a deeper-shallower-deeper condition in pseudodepth. As those points span the entire data volume, the leading-order attenuation algorithm (which is third order in the imaged data) allows any combination such that z1 is greater than z2 and z3 is greater than z2 to contribute to the prediction (see equation on next page), where b1(kg, ks, z) corresponds to effective incident plane-wave data in the pseudodepth domain. In contrast with methods based on the convolution and correlation of wavefields, where the definition of the generator is static, the ISS algorithm's deeper-shallower-deeper constraint does not refer to any particular interface or event in the data. On the contrary, it applies to all of their water-speed images, allowing the simultaneous prediction of all first-order internal multiples from any depth, without interpretation or traveltime picking of the data and without knowledge of the medium. In our second example, we demonstrate the properties of the ISS internal multiple-prediction algorithm using a set of acoustic finite-difference data. The model shown in Figure 3a consists of three interfaces, the first of which features a trench approximately 1.5 km long and 100 m deep. In Figure 3b, the travel paths of some internal multiples are drawn schematically using upgoing and downgoing arrows to represent wave propagation. In a zero-offset section of the data (Figure 4a), a first train of closely spaced internal multiples (characterized by the pattern 2[12]n) can be shown to originate from the energy reflected between the two shallow reflectors (1) and (2). A deeper reflector (3) causes the entire train to begin again at around 1.4 s (the 3[12]n train) and once more at 2.1 s (the 313[12]n and 323[12]n trains). In general, even in a simple three-interface Earth model, the number of reverberations



Figure 4. Zero-offset sections: (a) input data, (b) predicted multiples, and (c) labeling of events.

recorded at the surface is extremely large as a result of the various ways three reflectors can combine to form internal multiples. The ISS internal multiple algorithm predicts all of them at once, without any interpretation of the data, as shown in Figure 4b and Figure 4c.

Discussion

We observe that the layer-related approach (Berkhout and Verschuur, 2005; Verschuur and Berkhout, 2005) would not achieve the same result. Figure 5a shows the four types of first-order internal multiples generated within a three-reflector Earth. If the reference level (the lower boundary of the layer) that separates an internal multiple's upward and downward reflections were chosen between the first and second reflectors, the layer-related method would predict the three types of first-order internal multiples shown in Figure 5b. If that strategy were applied to the example shown in Figure 4, all multiples of type 3[23]n would be absent from the prediction. Figure 5c shows a different prediction, produced by selecting the reference level between the second and third reflectors. Similarly, in the example in Figure 4, if the downward-reflecting level were chosen between events (2) and (3), the 2[12]n event type would not be predicted. Notice that once the reference level is chosen, the events above this level can act only as downward reflectors; similarly, the events below this level can contribute only as upward reflectors. In Figure 5a, however, the second reflector contributes both as an upward reflector (for the two internal multiples in the middle) and as a downward reflector (for the rightmost internal multiple). Therefore, for any choice of

downward-reflecting layer, there is at least one type of first-order internal multiple which cannot be predicted.

Conclusions

The inverse scattering series provides an approach to internal multiple attenuation with the potential to address the challenges of modern seismic exploration on land and in complex marine settings. That capability has recently been further delineated and demonstrated by complex synthetic and land field data tests (Fu et al., 2010; Luo et al.). Through the analysis and the examples in this paper, we illustrate its inner workings and emphasize the key concepts behind its capability to provide (1) a comprehensive and accurate prediction of all internal multiples, (2) in a purely data-driven manner, and (3) in an Earth with strong lateral variations.
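The counting argument in the Discussion can be checked with a short enumeration (our sketch, not part of the original paper): for three reflectors there are four first-order internal-multiple types, and any single choice of downward-reflecting level misses at least one of them.

```python
from itertools import combinations_with_replacement

def multiple_types(n):
    """First-order internal-multiple types for n reflectors (1 = shallowest):
    a downward reflection at reflector b sandwiched between upward
    reflections at reflectors a and c, both deeper than b."""
    types = set()
    for b in range(1, n + 1):
        for a, c in combinations_with_replacement(range(b + 1, n + 1), 2):
            types.add((a, b, c))
    return types

def layer_method_types(n, level):
    """Types predictable once a fixed reference level is chosen: reflectors
    at or above 'level' act only as downward reflectors, those below it
    only as upward reflectors (the layer-related feedback approach)."""
    return {(a, b, c) for (a, b, c) in multiple_types(n)
            if b <= level and a > level and c > level}

all_types = multiple_types(3)
print(len(all_types))                                # 4
print(sorted(all_types - layer_method_types(3, 1)))  # [(3, 2, 3)]
print(sorted(all_types - layer_method_types(3, 2)))  # [(2, 1, 2), (2, 1, 3)]
```

Every choice of level leaves a nonempty "missed" set, which is the point of Figure 5: the ISS prediction, having no fixed level, covers all four types simultaneously.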
References
Araújo, F., 1994, Linear and non-linear methods derived from scattering theory: backscattered tomography and internal multiple attenuation: Ph.D. thesis, Universidade Federal da Bahia.
Araújo, F., A. Weglein, P. Carvalho, and R. Stolt, 1994, Inverse scattering series for multiple attenuation: An example with surface and internal multiples: 64th Annual International Meeting, SEG, Expanded Abstracts, 1039–1041.
Berkhout, A. J. and D. J. Verschuur, 2005, Removal of internal multiples with the common-focus-point (CFP) approach: Part 1: Explanation of the theory: Geophysics, 70, no. 3, V45–V60, doi:10.1190/1.1925753.
Coates, R. and A. Weglein, 1996, Internal multiple attenuation using inverse scattering: Results from prestack 1 & 2D acoustic and elastic synthetics: 66th Annual International Meeting, SEG, Expanded Abstracts, 1522–1525.


Nita, B. G. and A. Weglein, 2007, Inverse-scattering internal multiple-attenuation algorithm: An analysis of the pseudodepth and time-monotonicity requirements: 77th Annual International Meeting, SEG, Expanded Abstracts, 2461–2465.
Ramírez, A. C. and A. Weglein, 2005, An inverse scattering internal multiple elimination method: Beyond attenuation, a new algorithm and initial tests: 75th Annual International Meeting, SEG, Expanded Abstracts, 2115–2118.
Stolt, R., 1978, Migration by Fourier transform: Geophysics, 43, no. 1, 23–48, doi:10.1190/1.1440826.
Stolt, R. H. and A. B. Weglein, 1985, Migration and inversion of seismic data: Geophysics, 50, 2458–2464, doi:10.1190/1.1441877.
ten Kroode, F., 2002, Prediction of internal multiples: Wave Motion, 35, no. 4, 315–338, doi:10.1016/S0165-2125(01)00109-3.
Verschuur, D. J. and A. J. Berkhout, 2005, Removal of internal multiples with the common-focus-point (CFP) approach: Part 2: Application strategies and data examples: Geophysics, 70, no. 3, V61–V72, doi:10.1190/1.1925754.
Weglein, A., F. Araújo, P. Carvalho, and R. Stolt, 1997, An inverse-scattering series method for attenuating multiples in seismic reflection data: Geophysics, 62, no. 6, 1975–1989, doi:10.1190/1.1444298.
Weglein, A., F. Araújo, P. Carvalho, R. Stolt, K. Matson, R. Coates, D. Corrigan, D. Foster, S. Shaw, and H. Zhang, 2003, Inverse scattering series and seismic exploration: Inverse Problems, 19, no. 6, R27–R83, doi:10.1088/0266-5611/19/6/R01.

Acknowledgments: The authors thank Bill Dragoset of WesternGeco (Houston) for providing the data shown in the second example, and for permission to publish the results. We thank all M-OSRP sponsors for constant support and encouragement. Corresponding author: pterenghi2@uh.edu
Figure 5. (a) Four types of first-order internal multiples are generated by three reflectors; (b) and (c) the first-order internal multiples predicted by the feedback layer method using different definitions of the downward generator layer (red dashed lines).
Clayton, R. W. and R. H. Stolt, 1981, A Born-WKBJ inversion method for acoustic reflection data: Geophysics, 46, 1559–1567, doi:10.1190/1.1441162.
Fu, Q., Y. Luo, P. Kelamis, S.-D. Huo, G. Sindi, S.-Y. Hsu, and A. Weglein, 2010, The inverse scattering series approach towards the elimination of land internal multiples: 80th Annual International Meeting, SEG, Expanded Abstracts, 3456–3461.
Hsu, S.-Y., 2011, Efficacy determination and efficiency advances for the inverse scattering series internal multiple attenuation: Complex synthetic and land field data testing, and the impact of reference velocity sensitivity: Ph.D. thesis, University of Houston.
Jakubowicz, H., 1998, Wave equation prediction and removal of interbed multiples: 68th Annual International Meeting, SEG, Expanded Abstracts, 1527–1530.
Keydar, S., E. Landa, B. Gurevich, and B. Gelchinsky, 1997, Multiple prediction using wavefront characteristics of primary reflections: EAGE Extended Abstracts.
Matson, K. H., 1997, An inverse-scattering series method for attenuating elastic multiples from multicomponent land and ocean bottom seismic data: Ph.D. thesis, University of British Columbia.
Matson, K., D. Corrigan, A. Weglein, C. Y. Young, and P. Carvalho, 1999, Inverse scattering internal multiple attenuation: Results from complex synthetic and field data examples: 69th Annual International Meeting, SEG, Expanded Abstracts, 1060–1063.


SPECIAL SECTION: Multiple attenuation

Elimination of land internal multiples based on the inverse scattering series


YI LUO, PANOS G. KELAMIS, QIANG FU, SHOUDONG HUO, and GHADA SINDI, Saudi Aramco
SHIH-YING HSU and ARTHUR B. WEGLEIN, University of Houston

Despite the explosion of new, innovative technologies in the area of multiple identification and subsequent attenuation, their applicability is mostly limited to marine environments, especially in deep water. In land seismic data sets, however, the application of such multiple-elimination methodologies is not always straightforward, and in many cases poor results are obtained. The unique characteristics of land seismic data (i.e., noise, statics, and coupling) are major obstacles in multiple estimation and subsequent elimination. The well-defined surface multiples present in marine data are rarely identifiable in land data. Particularly in desert terrains with a complex near surface and low-relief structures, surface multiples hardly exist. In most cases, we are dealing with so-called near-surface-related multiples. These are primarily internal multiples generated within the complex near surface. In this paper, we employ theoretical concepts from the inverse scattering series (ISS) formulation and develop computer algorithms for land internal multiple elimination. The key characteristic of the ISS-based methods is that they do not require any information about the subsurface; i.e., they are fully data-driven. Internal multiples from all possible generators are computed and adaptively subtracted from the input data. These methodologies can be applied prestack and poststack, and their performance is demonstrated using realistic synthetic and field data sets from the Arabian Peninsula. These are the first published field-data examples of the application of ISS-based internal multiple-attenuation technology to the daunting challenge of land internal multiples.

Introduction

Radon-based methods are commonly employed for multiple reduction in land seismic data processing. However, in land data, the lack of velocity discrimination between primaries and multiples leads to unacceptable results. Thus, wave-equation-based schemes have to be introduced.
The research articles of Verschuur et al. (1992), Berkhout (1997), Weglein et al. (1997), Carvalho and Weglein (1994), Dragoset and Jericevic (1998), Jakubowicz (1998), Berkhout (1999), and Verschuur and Berkhout (2001), to mention a few, offer theoretical insights into wave-equation surface and internal multiple elimination, along with several applications to synthetic and marine data sets. Kelamis et al. (2002) used concepts from the common-focus-point (CFP) technology and developed algorithms for internal multiple elimination applicable on land. Luo et al. (2007) and Kelamis et al. (2008) have also presented successful applications of land internal multiple suppression. They employed the layer/boundary approaches introduced by Verschuur and Berkhout (2001). In these schemes, the user has to define phantom layers/boundaries which correspond to

the main internal multiple generators. Thus, some advance knowledge of the main multiple generators is required. On land, as shown by Kelamis et al. (2006), the majority of internal multiples are generated by a series of complex, thin layers encountered in the near surface. Hence, the applicability of the CFP-based layer/boundary approach is not always straightforward, because it requires the definition of many phantom layers. In contrast, the ISS theory does not require the introduction of phantom layers/boundaries. Instead, it computes all possible internal multiples produced by all potential multiple generators. Therefore, fully automated internal multiple-elimination algorithms can be developed in the prestack and poststack domains.

Basic principles of ISS technology

The ISS-based formulation for internal multiple attenuation (Araújo et al., 1994; Weglein et al., 1997) is a data-driven algorithm. It does not require any information about the reflectors that generate the internal multiples or the medium through which the multiples propagate and, in principle, it does not require moveout differences or interpretive intervention. The algorithm predicts internal multiples for all horizons at once.

Figure 1. ISS internal multiple prediction formulation.

This ISS internal multiple-attenuation scheme is basically the first term in a subseries of the ISS that predicts the exact time and amplitude of all internal multiples without subsurface information. The ISS attenuation algorithm predicts the correct traveltimes and approximate amplitudes of all the internal multiples in the data, including converted-wave internal multiples (Coates and Weglein, 1996). Carvalho et al. (1992) pioneered the free-surface ISS method and applied it to field data. Matson et al. (1999) were the first to apply the ISS internal multiple algorithm to marine towed-streamer field data, and Ramírez and Weglein (2005) extended the theory from attenuation toward elimination by including more terms in the subseries, thereby improving the amplitude prediction.
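To illustrate how the first-order prediction term operates, here is a hypothetical spike-domain sketch at normal incidence (our simplification, not the authors' production code): every deeper-shallower-deeper triple of subevents in pseudodepth contributes a predicted multiple at z1 − z2 + z3, i.e., at traveltime t1 + t3 − t2, with the product of the three amplitudes (approximate amplitudes — attenuation, not elimination).

```python
def predict_first_order_multiples(events, eps=0.0):
    """Combine spike subevents (pseudodepth, amplitude) over every
    deeper-shallower-deeper triple: z1 > z2 + eps and z3 > z2 + eps.
    Each triple predicts a multiple at pseudodepth z1 - z2 + z3 with
    amplitude a1 * a2 * a3."""
    predictions = []
    for z1, a1 in events:
        for z2, a2 in events:
            if z1 <= z2 + eps:
                continue
            for z3, a3 in events:
                if z3 <= z2 + eps:
                    continue
                predictions.append((z1 - z2 + z3, a1 * a2 * a3))
    return predictions

# Two primaries at water-speed pseudodepths 100 and 300 (assumed values):
# the only valid triple is (300, 100, 300), predicting the first internal
# multiple at pseudodepth 500.
print(predict_first_order_multiples([(100.0, 0.5), (300.0, 0.4)]))
```

Because the condition is applied to all points of the imaged data rather than to a picked generator, every first-order internal multiple from every depth is predicted in one pass.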


Figure 3. (left) 1D synthetic with primaries and internal multiples modeled from the field sonic log (right). (center left) True primaries. (center right) ISS-estimated primaries.

Matson (1997) and Weglein et al. (1997) extended the ISS methods for removing free-surface and internal multiples to ocean-bottom and land data. The ISS internal multiple-attenuation algorithm in 2D starts with the input data, D(kg, ks, ω), which are deghosted and have all free-surface multiples eliminated. The parameters kg, ks, and ω represent the Fourier conjugates to receiver, source, and time, respectively. The ISS algorithm for first-order internal multiple prediction in a 2D Earth is given by Araújo (1994) and Weglein et al. (1997). Figure 1 depicts the mathematical formulation along with a pictorial construction of a first-order multiple. The quantity b1(kg, ks, z) corresponds to an uncollapsed migration (Weglein et al., 1997) of effective incident plane-wave data, given by 2iqs D(kg, ks, ω). The vertical wavenumbers for receiver and source, qg and qs, are given by

qi = sgn(ω) sqrt(ω²/c0² − ki²), for i = (g, s),

where c0 is the constant reference velocity; zs and zg are the source and receiver depths; and zi (i = 1, 2, 3) represents pseudodepth. Note that the obliquity factor, 2iqs, transforms the incident wave into a plane wave in the Fourier domain (Weglein et al., 2003). The first-order internal multiple prediction is composed of three events that satisfy z2 < z1 and z2 < z3. The traveltime of the internal multiple is the sum of the traveltimes of the two deeper events minus the traveltime of the shallower one. The parameter ε is introduced in the equation of Figure 1 to enforce z2 < z1 and z2 < z3 in the integrals. For bandlimited data, ε is related to the width of the wavelet. The output of the equation, b3IM, is divided by the obliquity factor and transformed back to the space-time domain. When the estimated internal multiples are subtracted from the original input data, all first-order internal multiples are suppressed and higher-order internal multiples are altered.

Synthetic and field data

Figure 2 shows a synthetic CMP gather obtained from an 18-layer velocity model. The data contain only primary reflections and internal multiples (Figure 2a). The results of our 1.5D ISS-based algorithm are shown in Figure 2b and compared with the true-primaries-only gather (Figure 2c). Note that almost all internal multiples are attenuated considerably. There is some degradation of the primaries, which is due to the adaptive least-squares subtraction. The results of Figure 2 are obtained without any user intervention (i.e., fully automatically) and are encouraging. More complete prestack tests are currently underway in both the shot and CMP domains.

Figure 2. (a) Synthetic CMP gather from an 18-layer velocity model. (b) ISS-estimated primaries. (c) True primaries.

Next, the application of ISS-based internal multiple attenuation is shown on poststack data. One of our goals is to study


Figure 4. Stacked section from a land data set from Saudi Arabia. The presence of internal multiples is obvious.

Figure 5. Same data as in Figure 4 after ISS internal multiple elimination.

if ISS can successfully predict internal multiples generated by thin layers. Figure 3 depicts the ISS performance on a realistic zero-offset synthetic data set. The model is composed of a large number of layers, each 1 ft thick, and is obtained from the field sonic log shown at the extreme right. The data (primaries and internal multiples) are modeled using the acoustic wave equation. The 1D ISS internal multiple-elimination result is shown on the right, and the primaries-only traces are depicted in the middle panel. The performance of the 1D ISS-based algorithm is very good. Despite the poststack application, note the complete internal multiple elimination obtained in the zone of interest between 1.0 and 1.4 s. At the same time, the main primary events are preserved with minimal degradation. Figure 4 shows a stacked section of land seismic data from Saudi Arabia. The presence of internal multiples is evident in this data set. Moreover, note the spatial variability of these multiples, which follows the complex near surface. It is a clear indication that they are all generated within the complex, thin layers of the near surface. Figure 5 exhibits the data after 1D



Figure 6. Difference between Figure 4 and Figure 5 (i.e., the internal multiples).

ISS internal multiple elimination, and Figure 6 shows the difference (i.e., the estimated internal multiples). The results are encouraging. Note the overall reduction of internal multiples. In particular, in the zone of interest between 1.4 and 2.0 s, the ISS internal multiple elimination has resulted in an improved definition of the primaries and thus increased the interpretability of the data. It is also interesting to examine the difference section, where the estimated internal multiples are shown (Figure 6). The spatial variability of the internal multiples is quite obvious, along with the dull, character-free ringing appearance that represents no real geology.

Conclusions

We have developed and employed algorithms from inverse scattering series theory for the estimation of internal multiples. They can be applied prestack (1.5D) in the CMP domain and to zero-offset (1D) data. Their performance was demonstrated with complex synthetic and challenging land field data sets, with encouraging results; other internal multiple-suppression methods were unable to demonstrate similar effectiveness. This paper presents the first series of onshore field data tests of the ISS-based internal multiple-attenuation technology. ISS technology requires no velocity information for the subsurface or any advance knowledge of the multiple generators. The main idea is to remove multiples without damaging primaries. In practice, a method like ISS can be used for high-end prediction, and then some form of adaptive subtraction is called upon to address issues omitted in the prediction. The improved multiple prediction offered by ISS is crucial in land seismic data, where close interference between primaries and internal multiples occurs. The examples of this paper point to the pressing need to improve the

prediction and reduce the reliance on adaptive steps, because the latter can fail precisely when events interfere. We will continue our research efforts toward more accurate and complete prediction algorithms in order to produce effective, practical, and automated internal multiple-attenuation methodologies applicable to land seismic data.
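As noted above, prediction is followed by adaptive subtraction. A minimal single-trace sketch of a least-squares adaptive subtraction (our illustration; production systems use windowed, multichannel variants, and the trace values here are assumed toy data) estimates a short matching filter and subtracts the matched prediction:

```python
import numpy as np

def adaptive_subtract(data, model, flen=11):
    """Least-squares adaptive subtraction (single trace): estimate a short
    matching filter f so that (f convolved with model) best fits data in a
    least-squares sense, then subtract the matched model."""
    n = len(data)
    M = np.zeros((n, flen))
    for j in range(flen):          # column j = model delayed by j samples
        M[j:, j] = model[:n - j]
    f, *_ = np.linalg.lstsq(M, data, rcond=None)
    return data - M @ f

# Toy example: the predicted multiple has the right time but wrong amplitude.
prim = np.zeros(200)
prim[50] = 1.0                     # primary (to be preserved)
d = prim.copy()
d[120] += 0.6                      # data = primary + internal multiple
m = np.zeros(200)
m[120] = 1.0                       # prediction with amplitude off
out = adaptive_subtract(d, m)      # multiple removed, primary untouched
```

The failure mode discussed in the Conclusions is visible in this framework: when a primary and a multiple overlap inside the matching window, minimizing the residual energy can shrink the primary too, which is why a more accurate up-front prediction reduces the burden on this step.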
References
Araújo, F. V., 1994, Linear and nonlinear methods derived from scattering theory: backscattered tomography and internal multiple attenuation: Ph.D. thesis, Universidade Federal da Bahia (in Portuguese).
Araújo, F. V., A. B. Weglein, P. M. Carvalho, and R. H. Stolt, 1994, Inverse scattering series for multiple attenuation: An example with surface and internal multiples: 64th Annual International Meeting, SEG, Expanded Abstracts, 1039–1041.
Berkhout, A. J., 1997, Pushing the limits of seismic imaging, Part I: Prestack migration in terms of double dynamic focusing: Geophysics, 62, no. 3, 937–953, doi:10.1190/1.1444201.
Berkhout, A. J., 1999, Multiple removal based on the feedback model: The Leading Edge, 18, no. 1, 127–131, doi:10.1190/1.1438140.
Carvalho, P. M., A. B. Weglein, and R. H. Stolt, 1992, Nonlinear inverse scattering for multiple suppression: Application to real data. Part I: 62nd Annual International Meeting, SEG, Expanded Abstracts, 1093–1095.
Carvalho, P. M. and A. B. Weglein, 1994, Wavelet estimation for surface multiple attenuation using a simulated annealing algorithm: 64th Annual International Meeting, SEG, Expanded Abstracts, 1481–1484.
Coates, R. T. and A. B. Weglein, 1996, Internal multiple attenuation using inverse scattering: Results from prestack 1 and 2D acoustic and elastic synthetics: 66th Annual International Meeting, SEG, Expanded Abstracts, 1522–1525.
Dragoset, W. H. and Z. Jericevic, 1998, Some remarks on surface multiple attenuation: Geophysics, 63, no. 2, 772–789, doi:10.1190/1.1444377.


Jakubowicz, H., 1998, Wave equation prediction and removal of interbed multiples: 68th Annual International Meeting, SEG, Expanded Abstracts, 1527–1530.
Kelamis, P. G., E. Verschuur, K. E. Erickson, R. L. Clark, and R. M. Burnstad, 2002, Data-driven internal multiple-attenuation applications and issues on land data: 72nd Annual International Meeting, SEG, Expanded Abstracts, 2035–2038.
Kelamis, P. G., W. Zhu, K. O. Rufaii, and Y. Luo, 2006, Land multiple attenuation: The future is bright: 76th Annual International Meeting, SEG, Expanded Abstracts, 2699–2703.
Kelamis, P. G., Y. Luo, W. Zhu, and K. O. Al-Rufaii, 2008, Two pragmatic approaches for attenuation of land multiples: 70th EAGE Conference & Exhibition.
Luo, Y., W. Zhu, and P. G. Kelamis, 2007, Internal multiple reduction in inverse-data domain: 77th Annual International Meeting, SEG, Expanded Abstracts, 2485–2489.
Matson, K., 1997, An inverse scattering series method for attenuating elastic multiples from multi-component land and ocean bottom seismic data: Ph.D. thesis, The University of British Columbia.
Matson, K., D. Corrigan, A. B. Weglein, C. Y. Young, and P. Carvalho, 1999, Inverse scattering internal multiple attenuation: Results from complex synthetic and field data examples: 69th Annual International Meeting, SEG, Expanded Abstracts, 1060–1063.
Ramírez, A. C. and A. B. Weglein, 2005, An inverse scattering internal multiple-elimination method: Beyond attenuation, a new algorithm and initial tests: 75th Annual International Meeting, SEG, Expanded Abstracts, 2115–2118.
Verschuur, D. J., A. J. Berkhout, and C. P. A. Wapenaar, 1992, Adaptive surface-related multiple elimination: Geophysics, 57, no. 9, 1166–1177, doi:10.1190/1.1443330.
Verschuur, D. J. and A. J. Berkhout, 2001, CFP-based internal multiple removal, the layer-related case: 71st Annual International Meeting, SEG, Expanded Abstracts, 1997–2000.
Weglein, A. B., F. A. Gasparotto, P. M. Carvalho, and R. H. Stolt, 1997, An inverse scattering series method for attenuating multiples in seismic reflection data: Geophysics, 62, no. 6, 1975–1989, doi:10.1190/1.1444298.
Weglein, A. B., F. Araújo, P. Carvalho, R. Stolt, K. Matson, R. Coates, D. Corrigan, D. Foster, S. Shaw, and H. Zhang, 2003, Topical review: Inverse scattering series and seismic exploration: Inverse Problems, 19, no. 6, R27–R83, doi:10.1088/0266-5611/19/6/R01.

Acknowledgments: We thank the Saudi Arabian Oil Company (Saudi Aramco) for support and permission to present this paper. We also thank Kevin Erickson for providing the synthetic data sets and Roy Burnstad for many discussions related to the processing of field data. Arthur B. Weglein and Shih-Ying Hsu thank Saudi Aramco for Shih-Ying's internship with Geophysics Technology of EXPEC Advanced Research Center. They also thank all M-OSRP sponsors for their support. Corresponding author: panos.kelamis@aramco.com
August 2011 The Leading Edge 889

SPECIAL SECTION: Multiple attenuation

Resolution on multiples: Interpreters' perceptions, decision making, and multiple attenuation


LEE HUNT, SCOTT REYNOLDS, MARK HADLEY, and SCOTT HADLEY, Fairborne Energy, Ltd.
YE ZHENG, Divestco Inc.
MIKE PERZ, Arcis Corporation

We investigated our ability to remove a specific short-period multiple from the Nisku and Blueridge formations in West Central Alberta, Canada. This problem is commercial in nature, and has persisted because it was believed that the multiple had too little moveout to be removed, rendering interpretation of the thin Blueridge zone impossible. Associated with this issue was the belief that the modern high-resolution Radon transforms do not materially affect the stack response of real data in this area despite their excellent performance on synthetics and on other data in the literature. Serious technical work seldom affords a discussion of beliefs, but this work is concerned with the decision-making of the interpreter. We show that in order to address a specific, real, short-period multiple problem, the interpreter was required to challenge previously held technical assumptions. This required the interpreter to consider the nature of the multiple itself, the nature and limitations of the multiple suppression technology used, and to objectively measure the level of success in suppressing the multiple. The first steps in this process were to confirm the existence of the short-period multiple, and to identify the probable multiple generators as well as the approximate minimum differential moveout of the multiple. This analysis suggested that the differential moveout was as little as 12 ms at the far offset of 4000 m. This knowledge motivated us to consider a practical strategy aimed at achieving a Radon transform with the optimal resolution and behavior. Our strategy involved minimizing both inaccuracies and spread in the Radon transform caused by smearing of geology, lateral velocity changes, noise, poor sampling in the land 3D, and a low-bandwidth wavelet. The noise and poor sampling were major concerns that we felt could be dealt with by employing a 5D interpolation prior to multiple attenuation rather than using common-offset stacking or borrowing procedures. The idea was to avoid potential structural smearing that might arise from less sophisticated methods of handling the noisy, poorly sampled gathers. The 5D interpolation was included in a high-resolution and AVO-compliant processing flow. This new processing flow allowed us to improve the performance of the Radon transform and apply a very aggressive filter in tau-p space. We measured the quality of the stack results by correlation with a synthetic seismogram and compared the correlations to the results produced by other preprocessing flows, including ones without interpolation and with different tau-p mutes. We found that our aggressive multiple attenuation was beneficial to the stack results, and that the interpolation led to better stacks and better behaved tau-p spaces. The correlation results with the various processing flows illustrate that the stack response did improve; it improved the most with an aggressive multiple attenuation, and with the interpolation flow. The correlations also show that velocity analysis and wavelet resolution are important. This work thus demonstrates the importance of the entire processing flow in achieving optimal multiple suppression and quality well ties. We show that the interpolation and high-resolution Radon transform flow was superior because of its effect on the signal-to-noise ratio of the data input to the transform as well as its minimization of geologic smearing. The concept of the interpreter's technical assumptions (or beliefs) is undeniably tied to the value of this work.

Figure 1. The Nisku and Blueridge formations as represented by a deep well that ties near the center of the survey. (a) The stratigraphy with and without Blueridge porosity. (b) A simple normal-incidence seismic model. Changes in amplitude associated with the Blueridge porosity are visible but minor.

The aggressive tau-p mute could only be chosen because we identified the multiple generators and estimated the expected differential moveout of the multiple. Without this prior knowledge, it is extremely unlikely that any interpreter would choose to apply such an aggressive tau-p mute. We will discuss the effect of these efforts on the decision making of the interpreter further, including the importance of the quality as well as the resolution of the tau-p space.
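The kind of differential-moveout estimate described above rests on rms velocities and hyperbolic traveltimes. The sketch below is an illustrative toy, not the authors' modeling code; the function names are ours, and the layer velocities and event times in the usage example are hypothetical stand-ins, not values from the article.

```python
import numpy as np

def vrms_from_intervals(v_int, dt_int):
    """Dix-style rms velocity: V_rms(t)^2 = sum(v_i^2 dt_i) / sum(dt_i),
    accumulated layer by layer in two-way time."""
    v_int = np.asarray(v_int, dtype=float)
    dt_int = np.asarray(dt_int, dtype=float)
    t = np.cumsum(dt_int)
    return np.sqrt(np.cumsum(v_int ** 2 * dt_int) / t), t

def hyperbolic_time(t0, vrms, x):
    """Two-way traveltime along a hyperbolic moveout curve."""
    return np.sqrt(t0 ** 2 + (x / vrms) ** 2)

def differential_moveout(t0_prim, v_prim, t0_mult, v_mult, x):
    """Moveout of a multiple relative to a primary at offset x (seconds):
    the extra traveltime of the multiple beyond its intercept, minus the
    extra traveltime of the primary beyond its own intercept."""
    return ((hyperbolic_time(t0_mult, v_mult, x) - t0_mult)
            - (hyperbolic_time(t0_prim, v_prim, x) - t0_prim))
```

With hypothetical values, say a primary and a peg-leg multiple both near t0 = 1.95 s but with rms velocities of 4200 m/s and 4100 m/s, `differential_moveout(1.95, 4200.0, 1.95, 4100.0, 4000.0)` evaluates to roughly 10 ms at 4000 m of offset, the same order as the 12 ms quoted in the text.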

The Nisku and Blueridge formations
The Nisku Formation in the Deep Basin area commonly consists of thick reefal carbonate that grows on the Bigoray/Lobstick platform. The reefs can have a thickness up to 75 m and porosities over 10%. The equivalent off-reef material consists of tight, fine-grained, open marine carbonates. The Blueridge carbonate overlays the Nisku, but is not pervasive. The Blueridge can produce at economic rates over significant areas, and is an attractive target in the area. Blueridge reservoir locally develops in dolomitized grainstone shoals, which may or may not be related to the underlying Nisku reefal development. The Blueridge reservoir commonly has a thickness of less than 8 m and can have porosities approaching 10% locally. The Blueridge reservoir is challenging to image because it is thin and affected by the more dominant Nisku reef response. Figure 1 depicts the Nisku and Blueridge stratigraphy using a deep well that ties near the center of the 3D survey. The Blueridge porosity is completely removed on the right half of the model, which is equivalent to off-shoal, tight material. The amplitudes at the Blueridge level are low, and the variations due to the change in porosity are minor but visible.

Figure 2. Original (legacy) seismic line through a Nisku reef and Blueridge porosity. The yellow arrow depicts the well tie, synthetic, and Blueridge zone. The thin red horizon line depicts the Wabamun top. The amplitudes at the Blueridge level (red) are much higher than expected (blue and green) from the synthetic tie.

As Figure 1 indicates, delineation of the Blueridge reservoir is expected to be challenging due to the small amplitude variations observed from modeled changes in reservoir quality. Figure 2 shows the original (legacy) processed seismic line from a 3D survey. This line goes through a well that encountered Blueridge porosity and a thick Nisku reef. The amplitudes at the Blueridge level are clearly much too high, and are thought to be an indication of multiple contamination. The prevailing opinion regarding the Blueridge zone was that it was not possible to map it or explore for it as a primary target. The subtlety of its expected seismic response from modeling (as in Figure 1) combined with the poor tie to well control (as in Figure 2) were considered too difficult to overcome. Short-period multiple contamination was blamed for the poor tie. Earlier attempts at multiple attenuation with the Radon transform were reported as failures. In fact, the advice was that this problem could not be solved with the Radon transform. We felt that the old assessment was prejudicial, and should be examined critically. The economic value of the resource warranted a new look.

Multiples and the high-resolution Radon transform
The earliest and longest-standing approach to attacking multiples has been through exploiting their velocity characteristics. Dix (1955) suggested that multiples could be identified because they would have absurd velocities. These velocities would be slower than the trend of primaries and thus yield small or even negative interval velocities. This assessment was a necessary consequence of the assumption that interval velocities generally increase with increasing depth. Mayne's (1962) development of the common-midpoint (CMP) method enabled the creation of localized velocity spectra (Taner and Koehler, 1969) in which primaries and multiples could be observed. These spectra employed stacking or similarity measures along hyperbolic trajectories as defined by Dix, whose work could also be used to estimate the velocity of each interval, and his equations related to velocity are still used today.

Figure 3. A multiple with 20 ms of differential moveout relative to the primary event at 2500 m was generated. Each event has the same amplitude, and a 35-Hz Ricker wavelet was used. The intercept time, tau, of the multiple is varied in this figure. (a) The intercept of the multiple is 10 ms above the primary. (b) Both events start at the same time. (c) The intercept of the multiple is 10 ms below the primary. The tau-p spaces for (a), (b), and (c) are given in (d), (e), and (f), respectively. Despite using a sparse Radon algorithm, the events do not separate completely in the tau-p space of (d).

The Radon transform is a popular and intuitive method for the attenuation of multiples because it exploits the difference in rms velocity, or moveout, of the multiples relative to the primaries. It does this by representing the data much like a velocity spectrum. The Radon transform typically uses parabolas or hyperbolas as its basis function when used for multiple attenuation. In either case, the CMP time-offset data are represented by data ordered in intercept time, tau (τ), and the curve parameter, p. If hyperbolic basis functions are used, p represents slowness, and tau-p space is essentially a velocity stack (Thorson and Claerbout, 1985). Early implementations by Thorson and Claerbout and by Hampson (1986) both solved the Radon transform based on the underdetermined system of equations Lm = d (where d is the data, m is the data model weights in the Radon domain, and L is the basis-function operator). Both approaches combat nonuniqueness through the use of constraints, with the former method using sparsity constraints and the latter using nonsparse constraints. Sacchi and Ulrych (1995) emphasized that this solution was nonunique and poorly resolved due to limitations in all surface seismic experiments, and they proposed computationally efficient sparsity constraints to mitigate these problems. These sparsity constraints have a physical interpretation that is related to the data aperture: the near- and far-offset limitations represent missing data or truncations of the CMP gathers. These artifacts are minimized by the sparsity. In effect, the sparsity can be thought of as extending the aperture infinitely and thus reducing the bow-tie operator artifacts in tau-p space (Sacchi and Ulrych). Cary (1998), Ng and Perz (2004), and others developed this idea commercially and implemented it for both parabolic and hyperbolic applications that could be solved in either the time or frequency domain (Sacchi, 2009). The use of sparsity constraints is now common in high-resolution Radon transforms, and these modern transforms are widely accepted as superior to previous moveout-based methods, especially when supplemented with interactive graphical tools for time- and space-variant Radon mute definition.
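As a concrete toy illustration of the system Lm = d and its sparsity-constrained solution, the sketch below builds a small, dense parabolic Radon operator and solves for m with damped least squares plus iteratively reweighted sparsity weights (IRLS), one common way to impose sparsity. This is a sketch under our own assumptions, not the implementation used by the authors or by any commercial code, and the function names are ours.

```python
import numpy as np

def parabolic_radon_operator(t, x, q, x_ref):
    """Build L such that d = L m, with m ordered q-major (tau fastest).

    Each Radon basis function is t(x) = tau + q * (x / x_ref)**2, where q
    is the residual moveout (seconds) at the reference offset x_ref."""
    nt = len(t)
    dt = t[1] - t[0]
    L = np.zeros((nt * len(x), nt * len(q)))
    for ix, xi in enumerate(x):
        shifts = q * (xi / x_ref) ** 2          # moveout (s) at this offset
        for iq, s in enumerate(shifts):
            k = int(round(s / dt))              # nearest-sample time shift
            for it in range(nt):
                jt = it + k                     # data sample fed by (tau, q)
                if 0 <= jt < nt:
                    L[ix * nt + jt, iq * nt + it] = 1.0
    return L

def sparse_radon(d, L, n_iter=10, mu=0.1, eps=1e-3):
    """Approximately minimize |d - L m|^2 + mu * sum|m_i| by iteratively
    reweighted least squares: large coefficients are damped lightly, small
    ones heavily, which drives the solution toward a sparse tau-q panel."""
    m = np.linalg.lstsq(L, d, rcond=None)[0]    # unconstrained starting model
    for _ in range(n_iter):
        w = 1.0 / (np.abs(m) + eps)
        m = np.linalg.solve(L.T @ L + mu * np.diag(w), L.T @ d)
    return m
```

For instance, a noise-free gather containing a flat primary and a multiple with 20 ms of residual moveout focuses into two isolated spikes in the recovered m.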

Limitations or risks of the Radon transform
Despite the fact that we use a sparse Radon transform, smearing persists to some degree in the tau-p spaces of real seismic data. This means that we cannot always know whether or not we can safely remove a multiple with very small differential moveout. The problem involves two things: the ability to identify the multiple in tau-p space, and the ability to safely remove it. Both identification and safe removal require that the primary and multiple be fully separated in tau-p space, which is inextricably related to how focused the events are in that space. Real data differ from simple constant-amplitude synthetic events in numerous ways which affect resolution in Radon space. To understand this disappointing fact, we must consider first that, however sparse the transform is supposed to be, it must still reconstruct the original CMP gather, and second how easily the data can be represented by the basis functions. This has several implications:

1) Amplitude variations with offset in the data will cause smearing and lack of resolution in tau-p space. The problem was discussed by Thorson and Claerbout, Kabir and Marfurt (1999), and Verschuur (2007). Ng and Perz's model work showed that a sufficiently large AVO response on a single event could actually appear to be two separate and smeared events. This implies that it is possible to destroy AVO characteristics in the data by applying a Radon-domain mute between two events that are mistakenly interpreted as a primary and a multiple (when in fact, they represent one event which has AVO). This is a serious risk if very aggressive multiple attenuation is contemplated.

2) Separation of closely spaced events, with similar moveout in tau-p space, requires that the character of the data across the input CMP is actually indicative of two events. It has two clear implications: first, that the way in which the primaries and multiples interfere can be important, and second, that the size of the wavelet in the data can also be important. Although typical synthetics demonstrating the resolution of the sparse Radon transform illustrate the separation of each event in tau-p space quite well, the effect of different interference patterns for the same differential moveout has not been thoroughly discussed. Figure 3 illustrates a primary and multiple with the same amplitudes. The differential moveout of the multiple with respect to the primary is 20 ms at 2500 m offset. The intercept time, tau, of the multiple is varied and the offsets were taken from a typical CMP from the 3D to simulate land 3D irregularity. A 35-Hz Ricker wavelet was used for each element of this figure. In Figure 3a, the multiple starts 10 ms above the primary. In Figure 3b, both events start at the same time and, in Figure 3c, the intercept of the multiple is 10 ms below the primary. Only a very narrow range of Radon space is shown in the corresponding tau-p spaces of Figures 3d, 3e, and 3f, which makes this an unusually zoomed-in analysis. The primary and multiple are not resolved equally between these three cases, despite the fact that the differential moveout is the same.

Figure 4. The effects of wavelet resolution on the resolution of the Radon transform. The offset bins are perfectly regular, with 50-m spacing. (a), (b), and (c) depict a flat primary and a multiple. In each case, the multiple starts 10 ms above the primary, and has a differential moveout of 20 ms at 2500 m. We used Ricker wavelets with dominant frequencies of 15 Hz, 35 Hz, and 60 Hz in (a), (b), and (c), respectively. (d), (e), and (f) represent the tau-p spaces for (a), (b), and (c), respectively. The low-resolution gather of (a) yields an inaccurate and unresolved tau-p space in (d). The energy is misallocated in a different way with the wavelet used in the gather of (b) and the corresponding tau-p space shown in (e). The problem is only completely resolved with the highest-frequency wavelet of (c) and the tau-p space of (f).

Figure 5. The velocity structure of the area is represented by the sonic log of the deep well discussed earlier. The velocity profile increases materially at the Wabamun level. The Blueridge and Nisku lie below a thick, high-velocity section with no strong reflections. For a multiple to have an intercept near the Blueridge, it must either bounce numerous times or it must peg-leg off the Wabamun event. The yellow arrow indicates the coal section within which the reverberations will be generated before peg-legging off the Wabamun.
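Synthetic experiments like those of Figures 3 and 4 are easy to reproduce. The sketch below builds a constant-amplitude CMP gather from a list of (intercept, moveout, amplitude) events and convolves each trace with a zero-phase Ricker wavelet; the function names, the parabolic moveout convention, and the parameter choices are ours, not the authors'.

```python
import numpy as np

def ricker(f_dom, dt, length=0.128):
    """Zero-phase Ricker wavelet with dominant frequency f_dom (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f_dom * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def make_gather(offsets, dt, nt, events, f_dom, x_ref=2500.0):
    """Constant-amplitude synthetic CMP gather of shape (nt, noffsets).

    events: list of (t0, moveout, amp), where moveout is the differential
    moveout (seconds) of the event at the reference offset x_ref."""
    gather = np.zeros((nt, len(offsets)))
    for t0, mv, amp in events:
        for ix, x in enumerate(offsets):
            it = int(round((t0 + mv * (x / x_ref) ** 2) / dt))
            if 0 <= it < nt:
                gather[it, ix] += amp
    w = ricker(f_dom, dt)
    half = len(w) // 2          # index of the wavelet peak (zero phase)
    for ix in range(len(offsets)):
        gather[:, ix] = np.convolve(gather[:, ix], w, mode="full")[half:half + nt]
    return gather
```

For example, a configuration like Figure 3a's would be `make_gather(np.arange(0, 2501, 50.0), 0.002, 750, [(1.0, 0.0, 1.0), (0.99, 0.020, 1.0)], 35.0)`: a flat primary at 1.0 s with an equal-amplitude multiple starting 10 ms above it and carrying 20 ms of moveout at 2500 m.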

The primary and multiple events are well resolved in the tau-p spaces of Figures 3e and 3f but are not resolved at all in Figure 3d. The greater the interference in time and space, the more difficult it is for the sparse Radon transform to separate events correctly. This observation means we should be concerned not only with differential moveout of events, but also their relative temporal positioning. Let us examine the effect of changes in wavelet size on the ability of the sparse Radon transform to separate multiples and primaries properly. The simple model of Figure 4 illustrates this effect. A primary and multiple are depicted in Figures 4a, 4b, and 4c. In each case, the multiple starts 10 ms above the primary and has a differential moveout of 20 ms at 2500 m. The primary and the multiple have equal amplitude and the offset bins are perfectly regular. The only differences in these three images are that the wavelet of the data changes from a 15-Hz Ricker in Figure 4a to a 35-Hz Ricker in Figure 4b, to a 60-Hz Ricker in Figure 4c. Figures 4d, 4e, and 4f represent the tau-p spaces for Figures 4a, 4b, and 4c, respectively. The tau-p space of Figure 4d, corresponding to the low-resolution gather of Figure 4a, clearly does not resolve the primary or the multiple; most of the energy is on the zero-moveout curvature. The mid-resolution gather


Figure 7. Shuey (1985) estimate of AVO at the Blueridge level. (a) Shuey model gather for the Blueridge. It exhibits small AVO effects. The data are muted at 30°. (b) The forward Radon space of this model. The tau-p image is well resolved, indicating that the small amount of AVO at our zone of interest does not cause serious spread in Radon space.
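The model behind Figure 7 is based on Shuey's approximation to the angle-dependent reflection coefficient. A minimal sketch of the common two-term form, R(θ) ≈ R0 + G sin²θ, follows; the article does not state exactly which form it used, so treat this as an assumption, and the intercept/gradient values in the usage example are hypothetical.

```python
import numpy as np

def shuey_two_term(r0, g, theta_deg):
    """Two-term Shuey (1985) approximation to the P-wave reflection
    coefficient: R(theta) ~ R0 + G * sin^2(theta). The two-term form is
    commonly considered adequate out to roughly 30 degrees, the maximum
    angle retained in the article's model."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    return r0 + g * np.sin(theta) ** 2
```

With a hypothetical intercept R0 = 0.05 and gradient G = -0.10, `shuey_two_term(0.05, -0.10, [0, 15, 30])` returns reflectivities that decay gently with angle, the kind of small AVO effect visible in Figure 7a.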

Figure 6. A portion of the normal-incidence model of the zone of interest from Figure 1 with the calculated intercept times of the multiples indicated by the horizontal colored lines. The Blueridge and Nisku are indicated by labels only. The red arrows also point out the intercept times of these hypothetical multiples. The differential moveout and differences in rms velocity are indicated beside two of the uppermost events. The yellow arrow indicates that interfering multiples with later arrivals have higher differential moveouts. This indicates that the smallest differential moveout we have to be concerned with is 12 ms.

fares little better: Figure 4e shows the energy is misallocated in quantity and position. Only the highest-resolution gather of Figure 4c and its tau-p space of Figure 4f does a perfect job of resolving both events, and correctly representing the moveout of each event. These resolution issues will vary depending on the intercept (tau) of each event, and the exact way in which the events interfere in time and offset as depicted in Figure 3. By comparing Figure 4b and Figure 3, we also see that the worst-case scenario for resolving multiples and primaries involves a multiple that superimposes symmetrically over the primary, which is the case for Figure 4. Given these issues, we must be wary that the transform may not always uniquely and correctly separate events in tau-p space, despite its sparseness. Our best practical action is to increase the resolution of the wavelet as much as possible. The greater the wavelet resolution, the greater our ability to accurately separate events.

3) The events must actually be hyperbolic (or parabolic, if the basis functions are parabolic) in nature. Dips or lateral variations in the velocity field could potentially create departures from the basis function that would in turn limit resolution (Dix; Sherwood, 1972; and Verschuur, 2006).

4) Sherwood noted that CMP gathers were noisy, leading him to suggest the use of several CMP gathers in velocity analysis. Neighboring CMPs can be interleaved to form

one larger supergather and optionally, all supergather data can be stacked into predefined, regular offset bins to form a common-offset gather, or COFF. The use of too many CMP gathers over too large an area in either scheme may reduce noise, but could limit Radon-domain resolution by virtue of problem 3 above. Any change in structure, wavelet, dip, or velocity over the gathered area will cause some smearing.

5) Missing offsets, particularly near offsets, cause smearing in tau-p space. The familiar bow-tie effect is caused by the near- and far-offset truncation, but additional events in tau-p space can be created by offsets missing anywhere in between. Marfurt et al. (1996) illustrated this in detail; however, the sparse Radon transform has been shown to be effective at combating this problem (Sacchi and Ulrych). Despite this, there are still minor advantages to be gained from using regular offsets with the sparse Radon transform. We can see that events in the tau-p space of Figure 3d may be slightly less resolved than in Figure 4e. These two sets of figures have identical wavelets and identical events, but Figure 3 has irregular offsets, versus the perfectly regular offsets of Figure 4. On this extremely zoomed-in observation of tau-p space, the evidence suggests regularity still retains some desirability. Therefore, in land 3D, concerns for missing offsets have likely contributed to the legacy habit of forming supergathers or COFFs prior to application of the Radon transform.

Strategy to achieve the highest-quality Radon transform on land 3D data
In order to have the highest-resolution Radon transform and most effective multiple attenuation possible, we devised a strategy to mitigate the limitations described above. We felt it was important that this strategy be practical so that any interpreter could use a similar approach. Therefore, only simple modeling was used rather than specialized tools. This strategy involved five elements:


1) Understanding the multiple of interest so we know where to expect its location in tau-p space. This is somewhat driven by our concerns of mistakenly filtering out an AVO response.
2) Checking if a significant AVO response should be expected at the zone of interest, and estimating if that AVO response would cause significant spreading in tau-p space.
3) Ensuring that the velocity picking on the 3D survey was consistent with the known velocity structure of the area as well as the horizons themselves.
4) Reducing the need to supergather or stack CMPs prior to applying the Radon transform.
5) Increasing the temporal resolution and signal-to-noise ratio of the data as much as possible.

Understanding the multiple
There are many multiples in the data, but they are not all of equal concern to us. In order to understand the specific problem at hand, we investigated the velocity structure of the area. Figure 5 illustrates the velocity profile and reflection coefficients for the deep well. The sonic velocity increases with depth, profoundly so at the Wabamun level. The Blueridge and Nisku lie below the Wabamun and below a thick section with no strong reflections. This velocity structure makes several things apparent. The high velocity of the Wabamun section means that most multiples will have significantly different rms velocities and differential moveouts than the Blueridge and Nisku reflections. The later multiple arrivals will generally have larger differences. We are only concerned with the small differential-moveout multiples, so this observation removes most multiples from our consideration. Multiples with more than one extra reverberation are mostly removed from consideration because either they will have smaller amplitudes, or the material they travel through is necessarily so slow that they have large differential moveout. Surface-related multiples would also have too much differential moveout to be considered. By following this logic, we concluded that the multiples of greatest concern were likely peg-leg multiples that reverberated once in the coal section, and then peg-leg off the Wabamun.

In order to estimate the differential moveout of the relevant multiples, we created simple models of the proposed peg-leg pathways and calculated their rms velocities and intercept times with the model data and Dix's equations. We used the P-wave velocity data from the sonic log and modeled traveltimes and rms velocities for possible multiples involving the coals and the base of the Mannville section, all peg-legging off the Wabamun marker. Figure 6 illustrates the results of this modeling and the associated calculations on a portion of the normal-incidence model from Figure 1. The intercept times of the calculated multiples are indicated by horizontal colored lines. The differential moveout and difference in rms velocity between the primary and multiple events is indicated at two of the early intercept times. The multiples that arrive later will have larger differences in rms velocity and differential moveout. These calculations indicate that the most difficult multiple to remove will have a differential moveout of about 12 ms at 4000 m. This would require a very tight mute in tau-p space, and is concerning.

Figure 8. Gather and tau-p space (respectively) from (a) the sparse Radon transform with super-binning, (b) sparse Radon transform with 5D interpolation, and (c) sparse Radon transform of a 3 × 3 COFF stacked gather. The Blueridge level is identified with a yellow arrow. The harsh mute is in yellow. The worst-case multiple (predicted to have about 12 ms of differential moveout) is only resolved in the interpolated gather of (b), where it is circled.

Can we preserve the AVO in the data?
The potential effect of AVO in the data was most easily addressed through simple forward modeling. We created a simple Shuey (1985) AVO model using the deep well tie. This model used offsets to 30°, which was equivalent to both the



offset and angles found in the actual 3D data. This model is illustrated in Figure 7. Figure 7a shows the gather itself, which has some minor AVO characteristics in the Blueridge. We performed the high-resolution Radon transform on this gather and illustrate the tau-p space in Figure 7b. The velocity-space image does not suffer significant smear due to the AVO. This simple model indicates that the primary should be well resolved in tau-p space, and that we may be able to safely perform aggressive filtering close to the moveout of the primary.

Multi-CMP gathering versus multidimensional interpolation
Land 3D seismic data are typically noisy and poorly sampled. As mentioned earlier, supergather CMPs and offset binning can improve the sampling and signal-to-noise characteristics of gathers used in the Radon transform. Despite the fact that the sparse Radon transform is less affected by sampling, these methods are still employed, presumably to reduce noise in the gathers. We propose that 5D interpolation (Liu and Sacchi, 2004; Sacchi and Liu, 2005; Trad, 2007) may be a better way to regularize the data prior to the Radon transform since it has been shown (Hunt et al., 2010) to introduce less geological smear than superbinning. Interpolation should also tend to reduce overall noise level, since the number of traces increases, and many noise types do not interpolate as well as signal. Combining interpolation with the sparse Radon transform should produce tau-p spaces with the greatest resolution in the transform domain.

Velocities and resolution: The processing flow
We employed an aggressive AVO-compliant processing flow that was aimed to achieve stable, high-resolution data. Our processing flow included these key steps:
- Horizon-based velocity analysis. The horizons were picked by the interpreter so that NMO corrections could be consistently picked. The well-log-derived interval velocities were also used as a guide. These two controls ensured that the velocities were picked in a geologically consistent manner.
- Cascaded surface-consistent deconvolution and surface-consistent prestack f-x noise attenuation (Wang, 1996).
- 5D interpolation (Trad, 2007), applied to reduce the need to borrow traces in multiple modeling and to reduce noise in the gathers.
- Aggressive and mild multiple attenuation, employed according to our expectations of the moveout of the multiple.
- Spectral balancing, applied to the stack data.

Figure 9. Reprocessing results: (a) optimal new velocities, (b) reprocessing including optimized velocities, noise attenuation, and spectral balance, but no interpolation, (c) the full reprocessing but no interpolation, and sparse Radon transform with the mild mute, and (d) the full reprocessing as well as 5D interpolation, and sparse Radon transform with the aggressive mute. These four comparison stacks are also identified in Table 1. These stacks are all improvements over the legacy stack of Figure 2, and (d) has the best overall match.

Method
The data were stacked at each of the key processing steps, resulting in ten different seismic volumes for comparison (Table 1). All results are improvements over the legacy processing result of Figure 2. The comparison also isolates the method of regularization by comparing supergathering against interpolation. The tau-p mutes are picked in two ways: a mild mute picked on the tau-p space of superbinned gathers, and a more aggressive mute picked on the tau-p space calculated from interpolated gathers. We also captured images of the tau-p spaces of some of these different approaches for comparison. We used simple correlation with our well tie to quantitatively evaluate the quality of the results. The well



Starting point | Regularization | Radon transform | Tau-p mute | CC (big window) | CC (small window) | Key difference | Improvement
Original (legacy) processing | None | None | None | 0.654 | 0.614 | | 0%
Optimal (new) velocities * | None | None | None | 0.723 | 0.635 | Velocities | 3%
Final noise-attenuated gathers, no spectral balance | Superbinning | None | None | 0.774 | 0.729 | Velocities + best noise attenuation | 19%
Final noise-attenuated gathers, no spectral balance | Interpolation | None | None | 0.789 | 0.729 | | 19%
Final noise-attenuated gathers, has spectral balance * | None | None | None | 0.811 | 0.769 | Add spectral balance | 25%
Final noise-attenuated gathers, has spectral balance | Interpolation | None | None | 0.816 | 0.788 | | 28%
Final noise-attenuated gathers, has spectral balance * | Superbinning | Sparse Radon transform | Mild mute | 0.844 | 0.848 | Radon transform | 38%
Final noise-attenuated gathers, has spectral balance | Superbinning | Sparse Radon transform | Aggressive mute | 0.869 | 0.876 | | 43%
Final noise-attenuated gathers, has spectral balance | Interpolation | Sparse Radon transform | Mild mute | 0.847 | 0.859 | 5D interpolation + Radon | 40%
Final noise-attenuated gathers * | Interpolation | Sparse Radon transform | Aggressive mute | 0.883 | 0.924 | | 50%

Table 1. Summary of the experiment with cross-correlation values with the well tie. The CC columns give the correlation coecient of the larger and smaller windows. The interpolation plus multiple-attenuation ow yielded the best correlation coecients. The four example stacks shown in Figure 9 are bolded for clarity. The improvement given is the relative improvement versus the reference original stack of Figure 2.
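The windowed, perturbed correlation measure behind the CC columns of Table 1 can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `windowed_cc`, the 2-ms sample interval, and the ±5-sample shift scan are assumptions standing in for the "minor static shift plus correlation jiggles" described in the text.

```python
import numpy as np

def windowed_cc(stack_trace, synthetic, t0_ms, t1_ms, dt_ms=2.0, max_shift=5):
    """Correlation coefficient between a stack trace and a well-tie synthetic
    over a time window, scanned over small static shifts ("jiggles") so the
    best alignment is reported and the comparison is as fair as possible."""
    i0, i1 = int(t0_ms / dt_ms), int(t1_ms / dt_ms)
    best = -1.0
    for shift in range(-max_shift, max_shift + 1):
        a = stack_trace[i0:i1]
        b = np.roll(synthetic, shift)[i0:i1]
        best = max(best, np.corrcoef(a, b)[0, 1])
    return best

# Toy usage: the "synthetic" is a slightly shifted, noisy copy of the trace,
# so the shift scan should recover a high correlation despite misalignment.
rng = np.random.default_rng(0)
trace = rng.standard_normal(1024)
synth = np.roll(trace, 2) + 0.1 * rng.standard_normal(1024)
cc = windowed_cc(trace, synth, t0_ms=1860, t1_ms=1992)  # the "big" window
```

A real well tie would of course correlate against a reflectivity-convolved synthetic rather than noise, but the window-and-scan bookkeeping is the same.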

log was correlated with each data volume identically using a small window around the Nisku and Blueridge (1900–1992 ms), and a larger correlation window (1860–1992 ms). Each correlation was also perturbed with ten minor static-shift-plus-correlation jiggles to ensure that the correlation coefficients were determined as fairly as possible. This method allows us to evaluate our results by observing the stacks, the gathers and tau-p spaces, and by comparing the correlation coefficients with the synthetic.

Results
Figure 8 shows production gathers and forward Radon transforms (tau-p spaces) at one CMP location for the supergathered sparse Radon transform and the sparse Radon transform with 5D interpolation. We also included a gather created by a 3 × 3 COFF stacking of all data centered on the CMP. The 5D interpolated example of Figure 8b is more resolved at the Blueridge level, and illustrates a multiple with 12 ms of moveout, which we expected. The tau-p mute, shown in yellow on the right, consists of a mild mute at 22 ms of moveout at all times, and a harsh mute which cuts inside the 12-ms multiple in the zone of interest, but varies surgically in time. This aggressive tau-p mute could only be designed in the 5D interpolated tau-p space. The multiple identified at 12 ms in Figure 8b has limited energy, which suggests it should not dominate the stack, and therefore this multiple is unlikely to be the only reason for the poor data tie of Figure 2. We also noted the existence of this multiple with these general characteristics in numerous other gather comparisons elsewhere across the survey. Figure 8 illustrates two differences in the data gathers: first, the 5D interpolated gather and the 3 × 3 COFF stacked gather have a higher signal-to-noise ratio than the supergather and, second, the data characteristics are sensitive to these processing choices. The 5D interpolated gather appears to have retained the geologic information in the data while reducing the noise, whereas the 3 × 3 stacked gather is smeared in tau-p space, possibly due to velocity distortions from the gathering process. These differences are consistent with our expectations. The 5D interpolated gathers attain a higher signal-to-noise ratio because noise is reduced through the 5D interpolation, and stacking the higher-fold interpolated gathers reduces random noise. The parsimonious approach to supergathering of Figure 8a is also a disadvantage; it has not obviously smeared geologic data, but it is too noisy to yield well-resolved, interpretable tau-p spaces. This example illustrates the superior appearance of the tau-p space of the 5D interpolated data. Multiple attenuation was performed using these mutes. Figure 9 shows the comparison of our legacy stack (Figure 2)


and selected stacks, including the 5D interpolated plus sparse Radon transform multiple attenuation using the harsh mute. The stack response at the Blueridge level changes significantly in each case when compared to the legacy product of Figure 2, with the aggressively multiple-attenuated version of Figure 9d matching the model result of Figure 1 most closely. Contrary to the statement that stacks do not change, the stack changes significantly in every case relative to the well tie. Moreover, although the multiple-attenuated result was the best, our reprocessing for resolution and optimal velocities was responsible for a significant amount of the improvement. This is illustrated by comparing Figure 9b to Figure 2. Correlation coefficients were calculated with the (center) well tie over the zone of interest. These correlation values are summarized in Table 1. The interpolation plus harsh multiple attenuation had the best correlation regardless of whether the larger or smaller correlation window was used. Each successive step in the processing sequence yielded a better correlation with the well. This supports our assertion that the poor correlation of the legacy processing stack was not caused by the multiple alone: better resolution, random noise handling, and velocity determination were also important.

Conclusions
Aggressive reprocessing with careful velocity analysis, AVO-compliant noise attenuation, and temporal resolution enhancement improved the data at the Nisku and Blueridge level. The aggressive multiple attenuation also had an additional clear and measurable effect on the stack response. In short, the sparse Radon transform did affect the stack, contrary to the bleak legacy opinion of the problem. The combination of 5D interpolation and the sparse Radon transform produced the most stable, resolved tau-p space in the observed CMP gathers. Some improvement may have come because the interpolation produced gathers with the near offsets populated.
These interpolated gathers were also cleaner, and had higher signal to noise partly due to an increase in fold. This increase in data quality may be a strong contributing factor in the improved appearance of the tau-p space of the interpolated data. These improvements allowed us to consider and apply a more aggressive mute in tau-p. The mute was also consistent with the differential moveout we expected from forward modeling, which provides some confidence that the results are valid. COFF stacking to create similar increases in signal to noise was not as successful in producing well-resolved tau-p spaces. These observations suggest the interpolation is helping in two ways: a reduction in noise as well as a minimization of smearing in velocity space due to lateral variations in the geology. The biggest advantage of the 5D interpolation plus sparse Radon transform approach is that the method enables a clear interpretation and selection of mutes in tau-p space. The effect of these well-behaved tau-p spaces on the decision-making capability of the interpreter is difficult to quantify but important to discuss. This was a commercial interpretation project, with business questions regarding the Blueridge and Nisku formations associated with it. Prior to this work, the opinion was prejudiced against even attempting multiple attenuation because of the belief that the moveout of the multiples was too small to remove. We have shown that an understanding of the multiples themselves and a well-resolved tau-p space are better scientific tools with which to evaluate whether or not multiple attenuation should be attempted than legacy assumptions. We also showed that the short-period multiple problem was not necessarily the dominant issue in the data; increasing the temporal resolution of the stack and applying geologically consistent velocities were major contributors in improving the data. This work did not employ esoteric or specialized modeling software. This test required only standard software, simple modeling, and the will to apply the products of previous research found in the literature. Coming back to beliefs: does the discussion of belief or will belong here? Perhaps these concepts do not, and yet our technical decisions rest at least partly on these cousins of assumption. The notion of resolution carries several distinct meanings, and was shown to be important throughout this work. The wavelet needs to be highly resolved prior to interpolation. The Radon transform must have high resolution in the transform domain in order to enable effective interpretation and potential mitigation. The interpreter must be resolved to do the work necessary to understand the multiple problem, and resolved to attempt to suppress the multiple even if the process is aggressive enough to be risky. We have to be resolved to move beyond the excellent literature that illustrates improvements on synthetics, and move to real exploration problems and the unique issues that come with them. We must also be resolved to evaluate our results quantitatively so we can measure what improvements we have made and attempt to understand why we have achieved them.
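As a toy illustration of how a tau-p (slant-stack) transform separates events by moveout so that a mute can isolate a multiple, the sketch below brute-force stacks a small synthetic gather along trial slownesses. This is a bare linear Radon transform, not the sparse high-resolution transform used in this work; all names and parameter values are invented for the example.

```python
import numpy as np

def slant_stack(gather, offsets, dt, p_values):
    """Brute-force forward tau-p (linear Radon): for each trial slowness p,
    shift each trace by p * offset and sum across traces."""
    n_tr, n_t = gather.shape
    taup = np.zeros((len(p_values), n_t))
    for ip, p in enumerate(p_values):
        for tr in range(n_tr):
            shift = int(np.rint(p * offsets[tr] / dt))  # moveout in samples
            taup[ip] += np.roll(gather[tr], -shift)
    return taup

# Toy gather: a flat "primary" plus an event with 2 ms/trace of linear moveout.
dt = 0.002                              # 2-ms sampling
offsets = np.arange(10) * 100.0         # 10 traces, 100-m spacing
gather = np.zeros((10, 500))
gather[:, 200] = 1.0                    # primary: no moveout
for tr in range(10):
    gather[tr, 300 + tr] = 1.0          # multiple-like dipping event
p_values = np.linspace(0.0, 4e-5, 21)   # trial slownesses (s/m)
taup = slant_stack(gather, offsets, dt, p_values)

# The primary focuses at p = 0; the dipping event focuses at
# p = 2 ms / 100 m = 2e-5 s/m (index 10 of p_values). A tau-p mute would
# zero one of these regions before inverse transforming.
assert np.argmax(taup[:, 200]) == 0
assert np.argmax(taup[:, 300]) == 10
```

When two events differ by only a few milliseconds of moveout, as in the 12-ms multiple discussed above, their focus points in this p axis sit close together, which is why a well-resolved transform is needed before an aggressive mute can be drawn between them.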

Acknowledgments: We thank Mauricio Sacchi of the University of Alberta for his advice on this project, Darren Betker and Earl Heather of Divestco for their efforts in processing, and Fairborne Energy Ltd. for allowing us to show this information. Corresponding author: LHunt@fairborne-energy.com


SPECIAL SECTION: Multiple attenuation

Applications of interbed multiple attenuation


MALCOLM GRIFFITHS, JESHURUN HEMBD, and HERVÉ PRIGENT, CGGVeritas

Surface-related multiple attenuation schemes have proven useful in attenuating a significant proportion of the high-amplitude multiple content in seismic data. Although generally sufficient for exploration purposes, there is growing recognition that more complicated interbed multiples exist within the data and that they may interfere with the interpretation of reservoirs and other areas of interest. In this paper, we examine a data-driven method and a model-driven method for attenuating internal multiples, showing both synthetic and real-world examples of their application.

Introduction
Interbed multiple contamination has been observed previously during reservoir studies from the North Sea, Middle East, and Asia Pacific regions, where shallow carbonates or volcanics generate strong reverberations. Frustrating to all parties, this noise has, for a long time, been categorized as "too difficult" to address because it resists traditional multiple attenuation methods: the multiples and primaries have similar moveout, reducing the effectiveness of Radon-based methods; the multiples and primaries have similar frequencies and amplitudes, reducing the effectiveness of predictive filtering; and the multiples and primaries have similar dips, reducing the effectiveness of f-k filters, tau-p demultiple, and migration methods implementing dip-discrimination techniques during the imaging (for example, controlled-beam migration). The discovery of the Tupi/Lula Field, offshore Brazil, in 2006, and later discoveries of Jupiter, Sugarloaf, Iara, Azulão, and Iracema have confirmed the potential for significant presalt oil accumulations in the Santos Basin. One of the seismic imaging challenges in the basin, however, is the existence of high-amplitude interbed multiple contamination across the reservoir. An example can be seen in Figure 1b, which shows a migrated image. Figure 1 shows near-offset data from a line in the Santos Basin. Note that the first surface-related multiple appears well below the presalt target. Strong interbed multiples, however, contaminate the presalt primaries. The interbed multiples are generated by a series of strong reflectors above the target. The water bottom, top of Albian layer, top of salt, and layered evaporites can all contribute toward generating strong interbed multiples. The strongest of these multiples appear below syncline structures in the top of salt and layered evaporites, due to a focusing effect that traps multiple energy in the minibasin (Pica and Delmas, 2008).
The migrated results in Figure 1b show the interbed multiples cross-cutting the reservoir. These interfere with the interpretation of the base of salt event, the reservoir, and the presalt faulting. Following the advent of data-driven surface-related multiple attenuation by Berkhout et al. (Berkhout and de Graaff, 1982; Verschuur, 1991; Berkhout and Verschuur, 1998), many authors have extended the data-driven concept to include interbed multiple attenuation. Many methods based on the work of Jakubowicz (1998) have appeared in the literature, for example Ikelle

(2004). These all require a two-trace convolution followed by a single-trace correlation or some combination thereof. Equivalent model-driven methods also exist based on the same concept, notably from Pica and Delmas (2008). The concept has also been converted to the inverse data space by Luo et al. (2007). The implicit limitation of this method is that it predicts only internal multiples associated with a single horizon, typically defined inside a single layer, predicting internal multiples from the surrounding reflectors that will cross this horizon along their path. A second extension to the data-driven internal multiple attenuation methods, again pioneered by Berkhout et al. (Berkhout and Verschuur, 1997 and 1999; Berkhout, 1999), utilizes common focal point (CFP) transforms to partially redatum (receiver side) and fully redatum the input seismic to the level of the multiple-generating horizon. The method is similar to Jakubowicz's method in that it requires a two-trace convolution followed by a single-trace correlation and predicts only the multiples associated with a single horizon or layer, but it also requires two redatuming steps and the associated CFP operators. Kelamis et al. (2002) and Alai et al. (2006) demonstrated this method on land data, highlighting the difficulty associated with the estimation of the CFP operators. Yet another method of internal multiple attenuation has been pioneered by Weglein et al. (1997) using the inverse-scattering series. This method is again fully data-driven but has the added benefit of being capable of predicting all interbed multiples in a single iteration (Weglein et al., 2008). Depending upon the implementation complexity, it is possible to predict either an approximation to the internal multiples (leading term) or, through an infinite series, the exact internal multiple contribution (Ramirez, 2008). This method has been previously demonstrated on marine data by Otnes et al. (2004) and on land data by Fu et al. (2010).
With the exception of the inverse-scattering infinite series, all these methods rely upon some form of adaptive subtraction in order to attenuate the internal multiple. This has a plethora of problems associated with it, foremost of which is potential primary damage. None of these issues are covered in this paper; see Abma et al. (2002) for a detailed analysis of adaptive subtraction techniques. This paper describes the application of two of the above methods for attenuating internal multiples: a 3D data-driven method derived from Jakubowicz, and the 3D model-driven method from Pica. The focus is on the interbed multiple contamination at the reservoir level for various exploration interests within the Santos Basin, Brazil. The results, however, are equally applicable to reservoir-level contamination observed in the North Sea, Middle East, and Asia Pacific.

Methodology
Many factors influence the effectiveness of the various internal multiple attenuation algorithms; geological complexity (multiple periodicity) and acquisition design (offset coverage and


Figure 1. Interbed multiples in data from the Santos Basin. Note the strong interbed multiples obscuring the presalt target in the near-offset data (left). The first surface-related multiple appears at the bottom of the figure, below the target. The Kirchhoff-migrated image (right) shows migration swings crossing the base of the target and interfering with fault interpretation at the target. (From Hembd et al., 2011)

Figure 4. Previously stored wavefield will illuminate the reflectivity section. (From Pica and Delmas, 2008)

Figure 2. Prediction of multiples reflecting downward from horizon 1.

Figure 5. Downward extrapolation of the wavefield resulting from the secondary sources created at stage 2. (From Pica and Delmas, 2008)

Figure 3. Backward extrapolation of the muted input data. (From Pica and Delmas, 2008)

Figure 6. Final upward extrapolation. (From Pica and Delmas, 2008)



Figure 7. 2D synthetic data of the Santos Basin used for the data-driven interbed multiple attenuation: (left) input, (center) subtraction result, and (right) difference. (From Hembd et al., 2011)

Figure 8. RTM comparison of the Santos Basin synthetic data: (left) before and (right) after data-driven interbed multiple attenuation. (From Hembd et al., 2011)

spatial sampling) are two key factors. It is then not surprising that the application of any algorithm should first consider the input limitations. The 3D data-driven method, following Jakubowicz, is illustrated in Figure 2. An arbitrary horizon is chosen at a point deeper than the known multiple-generating horizon. The internal multiples predicted are then limited using various muting methods but are always those having the first upward reflection below the user-defined horizon, a downward reflection above the user-defined horizon, and a final upward reflection below the user-defined horizon. The prediction is given mathematically in Equation 1:

P213 = P2 P1* P3,   (1)

where P1 refers to the wavefields from the surface to the multiple-generating horizons, P2 refers to the source-side wavefield reflecting from below the multiple-generating horizon, and P3 refers to the receiver-side wavefield reflecting from below the multiple-generating horizon. P1* refers to the complex conjugate of P1. Equation 1 neglects both the source term and the surface reflectivity, which can both be compensated for using a least-squares matching filter during the subtraction stage. It also includes both primary and higher-order multiple contributions to the interbed multiple prediction. The task of localized wavelet matching falls upon the subtraction. Without knowledge of the downward reflection point, however, it becomes necessary to sum Equation 1 over a user-defined aperture. The method places a high reliance upon the input acquisition density. For accuracy, it requires shot and receiver stations at all possible surface locations as well as measurements



Figure 9. Kirchhoff comparison of Santos Basin 3D data: (left) before and (right) after data-driven interbed multiple attenuation. (From Hembd et al., 2011)

of the wavefield at zero offset. Not surprisingly, these are the same limitations found in the surface multiple implementation and, as such, the method is found to work reasonably well in deeper water, where the near offsets can be easily extrapolated, and in wide-azimuth acquisitions, where the azimuthal sampling is closer to ideal. The 3D model-driven method also follows the logic of Jakubowicz. In it, the input data are backward-propagated through a predefined model to the user-defined arbitrary horizon (Figure 3). The horizon is defined in the same way as in the data-driven method. Prior to the back-propagation, the input data must first be muted to remove all reflections above the target horizon. Once at the target horizon, the wavefield is upward extrapolated through the model to illuminate the shallow events, thus identifying the downward reflection points in Figure 4. For each possible reflection point, the wavefield is downward propagated through the model a second time until it reaches the reflecting horizons (Figure 5), where it is finally forward propagated to the surface (Figure 6). The entire procedure mimics the data-driven case but has two distinct advantages: it doesn't require a dense shot sampling on the surface, and it doesn't require records at zero offset. It does, however, require a reflectivity and velocity model.

Application
3D data-driven interbed multiple prediction was performed on 2D synthetic data and on 3D real data from the Santos Basin. With 3 seconds of water and shallow multiple generators, these data were well suited to the data-driven interbed algorithm. The synthetic data (Figure 7) were generated using an acoustic wave-equation modeling package, a velocity field extracted from the 3D real data processing, and an arbitrary density model. The

interbed multiples were predicted using the water bottom (at approximately 3 s) and the top of salt horizon (at approximately 4 s) as the multiple generators. Figure 8 shows the synthetic results after migration, highlighting the impressive attenuation of the strongly focused interbed multiples beneath the minibasins. The real data results (Figure 9) were generated using a single iteration of interbed multiple prediction targeting the water-bottom multiples. Analysis of the postmigration interbed attenuation results shows a considerable reduction in the subsalt migration noise (attributed to interbed multiples). However, some residual multiple-related events are visible on the real data cross-cutting the base of salt. Although the subtraction is by no means perfect, these residuals are most likely related to an imperfect or incomplete prediction. Interbed multiples generated from the top of salt horizon and from the layered evaporite horizons remain unaccounted for. 3D model-driven prediction was also performed on real data from this region to target the intrasalt multiples. The reflectivity model was obtained through prestack time migration of a few near offsets, and the arbitrary reference horizon was chosen 100 ms below the top salt (hence, considered the principal downward reflector). All reflectors down to 1 km below top salt were considered as potential upward generators. A minimum gap separating upward and downward reflectors was also used. This corresponds to the shortest leg of the modeled multiple and is needed to avoid modeling primary signal. After modeling, a conservative 3D adaptation and subtraction was performed in the offset domain. The results closely resemble those of Figure 9, showing promising attenuation of the events cross-cutting the base of salt and slightly improved continuity on some targeted presalt events. These specific results, however, cannot be shown here due to proprietary issues.
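As a 1D, zero-offset toy of the Jakubowicz-style prediction of Equation 1 (convolve the two deep-reflection wavefields, correlate with the shallow one), the sketch below predicts an interbed multiple at time t2 + t3 - t1. It ignores the aperture summation, amplitudes, and geometry that the real method handles; all sample indices and times are invented for the example.

```python
import numpy as np

# Represent each wavefield as a single spike trace: P1 is the shallow
# (multiple-generating) primary, P2 and P3 are the source- and receiver-side
# deep primaries. In the frequency domain, convolution is multiplication and
# correlation is multiplication by the complex conjugate, so the prediction
# M = P2 * conj(P1) * P3 places an event at time t2 + t3 - t1.
n, dt = 1024, 0.004
i1, i2, i3 = 200, 300, 300          # sample indices of the three primaries

def spike(i):
    s = np.zeros(n)
    s[i] = 1.0
    return s

P1, P2, P3 = (np.fft.rfft(spike(i)) for i in (i1, i2, i3))
M = np.fft.irfft(P2 * np.conj(P1) * P3, n)
i_pred = int(np.argmax(M))
# predicted interbed multiple arrives at i2 + i3 - i1 = 400 samples = 1.6 s
assert i_pred == i2 + i3 - i1
```

The time-shift arithmetic is the essence of the method: subtracting the shallow traveltime removes the shared shallow leg, leaving exactly the extra path of the interbed bounce, which is then matched and subtracted from the data.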



Conclusion
In this paper, we have demonstrated the impact of interbed multiple attenuation on synthetic and real data from the Santos Basin. It has been shown that clearer images of the reservoir zone can be obtained by attenuating the influence of shallower, internal multiple-generating horizons in an informed and deliberate fashion. The choice between data-driven and model-driven prediction is shown to be one of convenience, with both methods performing admirably.
References
Abma, R., N. Kabir, K. H. Matson, S. A. Shaw, B. McLain, and S. Michell, 2002, Comparisons of adaptive subtraction techniques for multiple attenuation: 72nd Annual International Meeting, SEG, Expanded Abstracts, 2186–2189, doi:10.1190/1.1817141.
Alai, R. and E. Verschuur, 2006, Case study of surface-related and internal multiple elimination on land data: 76th Annual International Meeting, SEG, Expanded Abstracts, 2727–2731, doi:10.1190/1.2370090.
Berkhout, A. J., 1999, Multiple removal based on the feedback model: The Leading Edge, 18, no. 1, 127–131, doi:10.1190/1.1438140.
Berkhout, A. J. and M. P. de Graaff, 1982, The inverse scattering problem in terms of multiple elimination and seismic migration: 52nd Annual International Meeting, SEG, Expanded Abstracts, 113–114, doi:10.1190/1.1826839.
Berkhout, A. J. and D. J. Verschuur, 1997, Estimation of multiple scattering by iterative inversion, part 1: Theoretical considerations: Geophysics, 62, no. 5, 1586–1595, doi:10.1190/1.1444261.
Berkhout, A. J. and D. J. Verschuur, 1998, Wave theory based multiple removal, an overview: 68th Annual International Meeting, SEG, Expanded Abstracts, 1503–1506, doi:10.1190/1.1820197.
Berkhout, A. J. and D. J. Verschuur, 1999, Removal of internal multiples: 69th Annual International Meeting, SEG, Expanded Abstracts, 1334–1337, doi:10.1190/1.1820758.
Berkhout, A. J. and D. J. Verschuur, 2005, Removal of internal multiples with the common-focus-point (CFP) approach: Part 1, Explanation of the theory: Geophysics, 70, no. 3, V45–V60, doi:10.1190/1.1925753.
Berkhout, A. J. and D. J. Verschuur, 2005, Removal of internal multiples with the common-focus-point (CFP) approach: Part 2, Application strategies and data examples: Geophysics, 70, no. 3, V61–V72, doi:10.1190/1.1925754.
Fu, Q., Y. Luo, P. G. Kelamis, S. Huo, G. Sindi, S. Hsu, and A. B. Weglein, 2010, The inverse scattering series approach toward the elimination of land internal multiples: 80th Annual International Meeting, SEG, Expanded Abstracts, 3456–3461, doi:10.1190/1.3513567.
Hembd, J., M. Griffiths, C. Ting, and N. Chazalnoel, 2011, Application of 3D interbed multiple attenuation in the Santos Basin, Brazil: 73rd EAGE Conference and Exhibition.
Ikelle, L. T., 2004, A construct of internal multiples from surface data only: 74th Annual International Meeting, SEG, Expanded Abstracts, 2164–2167, doi:10.1190/1.1839688.
Jakubowicz, H., 1998, Wave equation prediction and removal of interbed multiples: 68th Annual International Meeting, SEG, Expanded Abstracts, 1527–1530, doi:10.1190/1.1820204.
Kelamis, P. G., E. Verschuur, K. E. Erickson, R. L. Clark, and R. M. Burnstad, 2002, Data-driven internal multiple attenuation: Applications and issues on land data: 72nd Annual International Meeting, SEG, Expanded Abstracts, 2035–2038.
Luo, Y., W. Zhu, and P. G. Kelamis, 2007, Internal multiple reduction in inverse-data domain: 77th Annual International Meeting, SEG, Expanded Abstracts, 2485–2489, doi:10.1190/1.2792983.
Otnes, E., K. Hokstad, and R. Sollie, 2004, Attenuation of internal multiples for multicomponent and towed streamer data: 74th Annual International Meeting, SEG, Expanded Abstracts, 1297–1300, doi:10.1190/1.1851105.
Pica, A. and L. Delmas, 2008, Wave equation based internal multiple modeling in 3D: 78th Annual International Meeting, SEG, Expanded Abstracts, 2476–2480, doi:10.1190/1.3063858.
Ramirez, A. C. and A. B. Weglein, 2008, Inverse scattering internal multiple elimination: Leading-order and higher-order closed forms: 78th Annual International Meeting, SEG, Expanded Abstracts, 2471–2475, doi:10.1190/1.3063857.
Verschuur, D. J., 1991, Surface-related multiple elimination: an inversion approach: PhD thesis, Delft University of Technology.
Weglein, A. B., A. C. Ramirez, K. A. Innanen, F. Liu, J. E. Lira, and S. Jiang, 2008, The underlying unity of distinct processing algorithms for: (1) the removal of free surface and internal multiples, (2) Q compensation (without Q), (3) depth imaging, and (4) nonlinear AVO, that derive from the inverse scattering series: 78th Annual International Meeting, SEG, Expanded Abstracts, 2481–2486, doi:10.1190/1.3063859.

Acknowledgments: The authors thank CGGVeritas for permission to publish this work, and Antonio Pica, Nicolas Chazalnoel, and Chu-Ong Ting for their helpful contributions. Corresponding author: malcolm.griffiths@cggveritas.com

912

The Leading Edge

August 2011


SPECIAL SECTION: Multiple attenuation

Case studies in 3D interbed multiple attenuation


DAVID BROOKES, ION Geophysical Corporation

Attenuating water-column multiples with 3D SRME has become common practice, but interbed multiples are not attenuated with this technique since these multiples do not reflect from the free surface. Interbed multiples are often reflected between closely spaced events, giving them an effective velocity very similar to primary events. Because there is little moveout discrimination, interbed multiples do not respond to Radon multiple attenuation or other moveout-based techniques. Extending the concept of SRME to predict interbed multiples is not a new idea, but the 3D implementation was not widely applied until recently. The increase in available computing power has made 3D IME (interbed multiple elimination) a viable option. This paper will discuss a synthetic data set, a Gulf of Mexico marine data set, and an Egyptian Western Desert land data set.

Attenuating interbed multiples is an important issue in exploration, especially in the Middle East, where this type of multiple is particularly problematic because the antimultiple tools based on velocity differences do not work well in this geologic setting. In the salt provinces, attenuating interbed multiples is of particular importance as their presence greatly inhibits interpretation. In addition, removing this class of multiple is very helpful in understanding dirty salt.

Interbed multiples
Problematic interbed multiples are frequently associated with an event with a strong reflectivity coefficient. Without this strong multiple generator, interbed multiples may be present, but are likely to have insignificant amplitude. Water-column multiples are reflected off the water surface, which has a reflection coefficient near -1, and the ocean bottom, which is typically one of the strongest events in the section. Conversely, interbed multiples have three reflections in lower-contrast media. The energy of the interbed event is therefore attenuated as it is related to the product of the three reflection coefficients.
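The product rule above is easy to verify numerically; the following sketch (the function name is ours, not from the article) simply multiplies three assumed reflection coefficients:

```python
# Quick numerical check of the argument above: an interbed multiple's
# amplitude is scaled by the product of its three reflection coefficients,
# so even bounces off strong interfaces leave it weak relative to primaries.
def interbed_relative_amplitude(r1, r2, r3):
    """Relative amplitude of a three-bounce interbed multiple."""
    return r1 * r2 * r3

# Three bounces averaging r = 0.2 (a strong generator such as top of salt
# or a carbonate) preserve only about 1% of the incident amplitude:
print(round(interbed_relative_amplitude(0.2, 0.2, 0.2), 3))  # 0.008
```

A weaker average coefficient of 0.1 leaves only 0.1% of the energy, which is why a strong generator is usually required for interbed multiples to matter.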
A multiple reflection of a mere 1% of incoming energy requires an average reflection coefficient of the three reflections of no less than 0.2. Because of this, most significant interbed multiples have their two bottom bounces on a common, anomalously bright event. That event ends up acting as a mirror for the events above. Common examples are top of salt, carbonates, and volcanic or coal layers. The generator is an important input to the IME removal process, as it is needed to help build the prediction. Identifying such an event therefore is required for affordable implementation of IME.

Interbed multiple elimination
IME is an extension of the SRME algorithm. In SRME, a multiple prediction is created by convolving traces sharing endpoints on the surface. We can think of a seismic trace from surface point A to surface point B (T_AB) as a sum of primary events (P_AB), surface-related multiples (SM_AB) (when working with marine data), and interbed multiples (IM_AB):

T_AB = P_AB + SM_AB + IM_AB.    (1)

In 3D SRME algorithms, the surface multiples are approximated by a sum over possible surface reflection locations (X) of possible multiples created by convolving the traces sharing the endpoints:

SM_AB ≈ Σ_X T_AX * T_XB,    (2)

where * denotes temporal convolution. Figure 1a illustrates an SRME multiple candidate raypath. Multiple models obtained in this way are approximations of the true multiple path in several ways: the spectrum is squared, the aperture is limited, and, since multiple-iteration SRME is not typically used, the relative amplitudes of double-bounce multiples, triple-bounce multiples, etc. are incorrect. These approximations are typically accurate enough that adaptive subtraction is sufficient to remove them. Adaptive subtraction is used to subtract the multiple predictions from the original data to get a primary data set:

P_AB ≈ T_AB - SM_AB.    (3)

Of course, this primary data set still contains interbed multiples which, following SRME, can be removed by IME. IME extends SRME by integrating over two surface locations (C and D) instead of just one. Two traces (from the SRME results) which do not share a surface location are convolved, giving a trace with raypaths like the one illustrated in Figure 1b. This convolution produces a trace which has no physical meaning with respect to seismic acquisition. The nonphysical trace is then correlated with the third trace, which has source and receiver at points C and D:

IM_AB ≈ Σ_C Σ_D (P_AC * P_DB) ⊗ P_CD,    (4)

where ⊗ denotes crosscorrelation. The correlation removes the nonphysical raypaths, creating an interbed multiple model trace with a reflection point on a subsurface event (Figures 1c and 1d). Again, IME generates approximations to the actual interbed multiple, requiring the use of adaptive subtraction to give a primary data set with attenuated interbed multiples:

P'_AB ≈ P_AB - IM_AB.    (5)

There is a complication in the technique, however, because the IME prediction can contain primary events if not properly implemented. Because most interbed multiples have relatively short periods, the surface points (C and D) are often close to the shot or receiver of the original trace. As these surface points get close, the P_CD trace may become sufficiently similar to one of the other traces to simply cancel it:


If P_DB ≈ P_CD, then (P_AC * P_DB) ⊗ P_CD ≈ P_AC (at least kinematically).    (6)

To prevent this, a generator horizon must be provided. By limiting traces P_AC and P_DB to events below the generator, and limiting trace P_CD to events above the generator, only events with multiple raypaths intersecting the generator horizon will be present in the prediction. An example mute horizon is shown in Figure 1c. This method has been used in the industry for some time in 1D and 2D versions, and has been described in the literature by Verschuur and Berkhout (2000), Weglein et al. (1997), and Van Borselen (2002), among others. The computational effort needed

Figure 1. (a) SRME multiple prediction. (b) Traces AB and CD are convolved. (c) Trace CB is correlated, removing the extra path. (d) The result is a potential interbed prediction for trace AD.
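The convolve-then-correlate kinematics of Figure 1 can be sketched in one dimension. This toy example is our own (spike traces standing in for P_AC, P_DB, and P_CD), not the production 3D algorithm, which sums such terms over all surface-point pairs C and D:

```python
import numpy as np

# Convolution adds the traveltimes of the two traces that do not share a
# surface location; crosscorrelation with the third trace subtracts the
# traveltime of the nonphysical C-D leg.
def ime_predict(p_ac, p_db, p_cd):
    nonphysical = np.convolve(p_ac, p_db)            # raypaths A-C plus D-B
    return np.correlate(nonphysical, p_cd, "full")   # remove the C-D leg

# Spike traces with events at samples 30, 40 and 20: the predicted interbed
# multiple appears at a lag of 30 + 40 - 20 = 50 samples.
n = 64
p_ac, p_db, p_cd = np.zeros(n), np.zeros(n), np.zeros(n)
p_ac[30], p_db[40], p_cd[20] = 1.0, 1.0, 1.0
pred = ime_predict(p_ac, p_db, p_cd)
lag = int(np.argmax(pred)) - (n - 1)                 # -> 50
```

The same arithmetic shows the cancellation hazard of equation (6): if the spike in p_cd sat at sample 40 (matching p_db), the predicted lag would collapse to 30, the traveltime of the primary P_AC alone.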

to run 3D IME is quite significant. Thus, even though well known, the algorithm has not enjoyed widespread deployment. Where the computational effort for 3D SRME is proportional to the area of the aperture, the computational effort for 3D IME is proportional to the square of the aperture area. In spite of this, with careful preparation and parameterization, the computational cost for 3D IME can be as little as twice that of 3D SRME. This reduction in computational effort has been achieved while providing the same level of attenuation of interbed multiples as is now common with surface multiples using 3D SRME.

Figure 2. (a) Pluto model, the input to IME. (b) Pluto model with interbed multiples attenuated. (c) Removed interbed multiple.

Pluto test
The Pluto synthetic model has long been used as a validation test for multiple attenuation. It is a reasonable simulation of geology in the Gulf of Mexico. The model contains quite a bit of multiple energy, including some significant interbed multiples. The main generator for interbed multiples is the base of salt. The main multiple mechanism is an intrasalt multiple reflecting off the base of salt, reflecting off the top of salt, and then a second reflection on the base. The high reflectivity of salt boundaries and the relatively smooth base and top in the Pluto model generate high-amplitude multiples. While intrasalt multiples are not particularly common in field data, this multiple mechanism is not uncommon in that the generator (base of salt) is acting as a mirror for events above it (top of

Figure 3. (a) Egyptian Western Desert data with well synthetic overlay, (b) with interbed multiples attenuated, and (c) removed interbed multiples.

salt). Figure 2a illustrates the input data. Figure 2b shows the data with interbed multiples attenuated, and Figure 2c shows the multiple energy removed. Several complex intrasalt multiples are well modeled and removed.

Western Desert
The Western Desert in Egypt has an interbed problem similar to many onshore areas. The shallow sediments, typically consisting of sand and shale layers, are easy to image. Around 1600 ms into the data is a high-reflectivity layer of carbonates. The high reflectivity of this layer reflects much of the seismic energy, leaving the events below to be illuminated by only a fraction of the source wavefield. The high-reflectivity event also acts as a mirror for the events above. The resulting section is doubly difficult to interpret because the primary reflections are weak, the interbed multiples are strong, and there is not much dip difference between the two. The similarity of the dip of primary and multiple events can raise the question of

whether the correct events are removed. Figure 3a shows an example of Egyptian desert data. Figure 3b shows the data with multiples attenuated, and Figure 3c shows the removed multiple energy. With little dip discrimination between primaries and multiples, well-log synthetic data are useful in verifying that the multiples are being removed while the primaries are being preserved. The blue and red overlay on Figures 3a and 3b shows a primary-only synthetic trace derived from a well log. The events removed by IME do not match events on this synthetic, while those that remain show good agreement with the well.

Gulf of Mexico
The Gulf of Mexico has large areas of tabular salt. The high reflectivity of the salt boundaries is conducive to interbed multiple generation. Figure 4a shows an example of tabular salt and its interbed multiples. The generator in this case is the top of salt, which acts as a mirror to the sediments above the salt. These interbed multiples interfere with the interpretation of the base of salt as well as mimicking subsalt reflectors. Removing salt-generated multiples also clarifies the presence or absence of entrenched sediment in the salt bodies and the presence of dirty salt. Figure 4b shows the data with attenuated internal multiples, and Figure 4c shows the multiple energy removed.

Figure 4. (a) Gulf of Mexico data showing interbed multiples off the top of salt, (b) with interbed multiples attenuated, and (c) removed interbed energy.

Conclusion
3D interbed multiple removal is being successfully applied to diverse sets of data in both land and marine environments. The resulting multiple attenuation is in line with what has become standard with 3D SRME and is achieved with only a slight increase in time and computational effort.

References
Jakubowicz, H., 1998, Wave equation prediction and removal of interbed multiples: 68th Annual International Meeting, SEG, Expanded Abstracts, 1527–1530.
Reshef, M., S. Ahad, and E. Landa, 2006, 3D prediction of surface-related and interbed multiples: Geophysics, 71, no. 1, V1–V6, doi:10.1190/1.2159062.
Van Borselen, R., 2002, Data-driven interbed multiple removal: Strategies and examples: 72nd Annual International Meeting, SEG, Expanded Abstracts, 2106–2109.
Verschuur, D. J. and A. J. Berkhout, 2000, CFP-approach to internal multiple removal: Field data examples: 70th Annual International Meeting, SEG, Expanded Abstracts, 778–781.
Weglein, A. B., F. A. Gasparotto, P. M. Carvalho, and R. H. Stolt, 1997, An inverse-scattering series method for attenuating multiples in seismic reflection data: Geophysics, 62, no. 6, 1975–1989, doi:10.1190/1.1444298.

Corresponding author: David.Brookes@iongeo.com

SPECIAL SECTION: Multiple attenuation

Enhanced demultiple by 3D SRME using dual-sensor measurements


ROALD VAN BORSELEN, ROB HEGGE, TONY MARTIN, SIMON BARNES, and PETER AARON, PGS

In recent years, dual-sensor recording has been introduced to marine seismic acquisition with reported benefits such as the increased bandwidth of the acquired seismic signal, improved signal-to-noise ratios due to deep-tow streamers, and operational efficiency due to an increased weather window. This paper focuses on another advantage of dual-sensor streamer measurements, namely the opportunity they provide to better predict and subtract surface-related multiples from marine seismic data compared to using conventional data acquisition.

3D SRME has become the industry standard in complex multiple removal as it provides a theoretically exact solution to 3D multiple prediction, where only the recorded data themselves are used to predict all the complexities of the multiples. The method requires no a priori information about the subsurface geology, in contrast to methods that either use a velocity model to discriminate between events that have different propagation velocities, or methods that make use of a reflection model of the subsurface to compute multiples by propagating the recorded data through the model. Because 3D SRME relies only on the measured data to predict all surface-related multiples, the success of the method depends heavily on how the data are recorded in the field. For optimal results, the data should fit the requirements imposed by the numerical solution to exact 3D multiple prediction as much as possible, as any deficiencies introduced during data acquisition may lead to errors in the predicted multiples. Therefore, it is of paramount importance to ensure that data are acquired without compromises.

With marine seismic data acquired using the dual-sensor technology, both the pressure wavefield and the vertical component of the particle velocity wavefield are recorded. In this paper, we will demonstrate that dual-sensor measurements lead to a more accurate set of predicted surface-related multiples with respect to phase and amplitudes. Consequently, the adaptive subtraction of the multiples can be carried out in a more constrained and robust manner, leading to better multiple elimination and better primary preservation.
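The adaptive subtraction step referred to here, and throughout this special section, is at its core a least-squares matching filter. The single-channel sketch below is our own simplification (production systems work in overlapping space-time windows, usually with regularization); it only shows the idea:

```python
import numpy as np

# Estimate a short filter f minimizing ||data - model * f||^2, where model
# is the predicted multiple trace, then subtract the shaped model.
# Assumes data and model are equal-length 1D arrays.
def adaptive_subtract(data, model, nfilt=11):
    # Columns of A hold the multiple model delayed by 0 .. nfilt-1 samples.
    cols = [np.concatenate([np.zeros(k), model])[:data.size] for k in range(nfilt)]
    A = np.stack(cols, axis=1)
    f, *_ = np.linalg.lstsq(A, data, rcond=None)
    return data - A @ f
```

When the predicted multiples differ from the true multiples only by a short wavelet, the residual is close to the primaries; phase or amplitude errors in the prediction force the filter to do more work, which is exactly the risk the dual-sensor approach aims to reduce.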

Dual-sensor streamer acquisition
A pressure sensor in a towed streamer records two wavefields: the upgoing wavefield that has been reflected by the subsurface and its ghost, which is the downgoing field that has been reflected by the free surface. The ghost has the opposite polarity from the upgoing wavefield, causing peaks and notches in the amplitude spectrum of the recorded data, due to the interference of the upgoing and downgoing wavefields. As a result, the temporal resolution of the data is reduced. In dual-sensor acquisition, this problem is overcome through the utilization of streamers where hydrophones and velocity sensors are collocated at the same depth. Because the velocity sensors are directional, the downgoing velocity wavefield, being phase-reversed by reflection at the free surface, is measured as having the same polarity as the upgoing velocity wavefield. As a result, the receiver ghost notches for the pressure and particle-velocity sensors are exactly interleaved in the frequency domain. When signals from the two sensors are properly combined, the ghost reflection cancels and the bandwidth of the recorded data is significantly increased. Furthermore, because dual-sensor streamers can be towed deeper, less swell noise is recorded, thereby improving the signal-to-noise characteristics of the data compared to data recorded with conventional streamers. The stronger reflection signal at low frequencies and the weaker swell noise result in greater penetration, better reflection event continuity, and better interpretability (Pharez et al., 2008; Tenghamn et al., 2009). Dual-sensor measurements also facilitate the application of more advanced processing techniques. Next, we will discuss the opportunities they provide in advanced 3D surface-related multiple elimination.

Dual-sensor-enhanced SRME
SRME finds its mathematical origin in Rayleigh's reciprocity theorem, which provides a relationship between the pressure and velocity wavefields in two states: the so-called actual state, which describes seismic data acquisition as carried out in the field, and the desired state, which differs from the actual state only by the free surface being removed. Use of this reciprocity theorem allows an integral equation to be derived that describes the multiple-free pressure wavefield in terms of the measured wavefields during data acquisition. This equation takes the following form:

P^MF(x1^R, x2^R, 0 | x1^S, x2^S, 0) = P^Up(x1^R, x2^R, 0 | x1^S, x2^S, 0)
    - 2 ∫ P^MF(x1^R, x2^R, 0 | x1', x2', 0) V3(x1', x2', 0 | x1^S, x2^S, 0) dA(x1', x2')

Although a detailed description of this expression is beyond the scope of this article, it is important to note that the multiple-free wavefield P^MF is described in terms of the upgoing pressure wavefield P^Up and the vertical particle velocity wavefield V3. In other words, SRME needs both of these quantities in order to compute the free-surface multiples accurately (Fokkema and van den Berg, 1993). SRME can also be visualized in a more physical sense, as shown in Figure 1. The upgoing wavefield measured at streamer depth can be decomposed into contributions from a primary and a secondary source. The primary source, represented by S+, is typically an impulsive source, such as an airgun array. The free surface, R, acts as secondary source and reflects the upgoing wavefield back into the Earth as a downgoing wavefield. The Earth response, X0, scales all wavefield propagation and reflection effects in the subsurface. As such,


Figure 1. The feedback loop that constitutes wavefield propagation in the presence of a free surface.

Figure 2. An arbitrary free-surface multiple (on the left) can be constructed by summing (accomplished through a temporal convolution of traces) the raypaths of the two events that are present in the recorded wavefield.

Figure 3. (a) Recorded wavefield at the streamers (ignoring the source ghost). (b) Multiple present in the recorded wavefield. (c) Multiple predicted erroneously because the two raypath constituents to be summed cannot be properly connected due to different source and receiver depths. (d) Upgoing multiple predicted correctly from summing the decomposed raypath constituents, whereby the downgoing wavefield was back-propagated as if it had been recorded at the source depth. (e) Downgoing wavefield and (f) upgoing wavefield obtained through wavefield decomposition.
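The interleaving of pressure and particle-velocity ghost notches described earlier can be checked with a short calculation. This sketch assumes a flat sea surface with reflection coefficient -1, collocated sensors, and vertical incidence; the velocity and depth values are illustrative, not taken from a particular survey:

```python
import numpy as np

c, z = 1500.0, 15.0                # water velocity (m/s), sensor depth (m)
f = np.linspace(0.0, 100.0, 2001)  # frequency axis (Hz)
tau = 2.0 * z / c                  # two-way time between sensor and free surface

g_p = 1.0 - np.exp(-2j * np.pi * f * tau)  # pressure ghost: notches at n*c/(2z)
g_v = 1.0 + np.exp(-2j * np.pi * f * tau)  # velocity ghost: notches at (n+1/2)*c/(2z)

# The first pressure notch sits at c/(2z) = 50 Hz, exactly where the
# velocity response peaks; averaging the two responses has no notch at all.
combined = 0.5 * (g_p + g_v)       # identically 1 at every frequency
```

This is also why the field examples in this article can reconstruct shallower tow depths: with both sensors available, the notch positions no longer limit the usable bandwidth.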

the seismic experiment can be described by a feedback system (Berkhout and Verschuur, 1997). Taking this concept, it can be seen that an arbitrary free-surface multiple can be constructed by convolving two events that are present in the recorded wavefield. Figure 2 illustrates this approach, where a multiple on the left side can be constructed by adding the two raypaths on the right side, one generated by the primary source, and the other by the secondary

source, the free surface. In the multidimensional equivalent of this process, multidimensional common-shot gathers are convolved with multidimensional common-receiver gathers.

In conventional marine acquisition, only the pressure wavefield is recorded. Consequently, in the application of SRME, it is assumed that the upgoing pressure wavefield can be substituted by the (measured) total pressure wavefield, which is defined by the sum of the upgoing and downgoing wavefield constituents. Furthermore, it is assumed that the required vertical particle velocity field can be substituted by the total pressure wavefield, thereby ignoring the angle-dependent relationship, also known as the obliquity factor, between the pressure field and the particle velocity field. In the application of 3D SRME using conventional streamer data, it is also assumed that any differences between the acquisition source and receiver depths can be ignored or corrected for with small static shifts. This assumption is illustrated in Figure 3. Recall that multiples are predicted through the convolution of a trace from a common-shot gather, recorded at the receiver depth, with a trace from a common-receiver gather that has its source at the depth of the source arrays. In case the source and receiver depths are not equal, the resulting trace is missing the effect of the wavefield propagation between the source depth and the receiver depth. This is visualized in Figure 3c by the improper connection between the two traces that are to be convolved to predict the corresponding multiple. It is understood that the impact of ignoring this component of wavefield propagation increases when the depth differences between the sources and receivers become larger, leading to larger time and amplitude errors during multiple prediction. From the above, it can be concluded that by using conventional streamer data only, amplitude and phase errors are introduced during 3D multiple prediction. Obviously, the adaptive subtraction of the multiples has to account for these errors, thereby compromising the effectiveness of 3D SRME in both multiple removal and primary preservation.

In dual-sensor acquisition, measurements of both the pressure wavefield and the vertical particle velocity wavefield are available. Subsequently, the assumptions made in conventional SRME are no longer needed, as there is direct access to the vertical velocity wavefield and the angle-dependent obliquity factor is accounted for implicitly. Using the dual-sensor recordings, the pressure and vertical particle velocity wavefields can be decomposed into their respective upgoing and downgoing wavefield constituents. This makes it possible to modify the SRME equation into a formulation that expresses the multiple-free pressure wavefield purely in terms of upgoing and downgoing wavefields. In this modified SRME equation, the integration path can be along any surface in the water layer. In more physical terms, the connection that is made by convolving the two traces can now be made at any depth. Consequently, any differences between the source and receiver depths can be handled correctly because the upgoing and downgoing wavefields can be extrapolated to the depths required for the temporal addition of the raypaths. This connection is illustrated in Figure 3d. By


Figure 4. Stacked results of (a) the total (scattered) pressure wavefield at a tow depth of 7 m, (b) the upgoing pressure wavefield after wavefield decomposition of the dual-sensor data, (c) the difference after multiple subtraction for conventional SRME, (d) the difference after multiple subtraction for dual-sensor-enhanced SRME using both the upgoing pressure wavefield and the downgoing vertical component of the particle velocity wavefield, (e) the total (scattered) pressure wavefield after conventional SRME, and (f) the upgoing pressure wavefield after dual-sensor-enhanced SRME.

extrapolating the downgoing wavefield as if it was recorded by sensors located at the source depth, the convolution of the traces establishes a correct connection between the two traces that are to be convolved to predict the corresponding multiple. An additional feature of the modified SRME equation is that assumptions about the reflectivity of the free surface, required in conventional SRME, are no longer needed in dual-sensor-enhanced SRME (see also Frijlink et al., 2011, and Söllner et al., 2007). We conclude that, with dual-sensor measurements, the

phase (both linear and rotational) and amplitudes of the predicted multiples using 3D SRME are not compromised, leading to more accurate demultiple results. In the next section, these advantages are demonstrated using two field data examples.

Field data examples
In the first example, we examine data that were acquired offshore West Africa in the Nigerian Transform Margin. The 3D high-resolution survey was acquired by a vessel equipped with 12 dual-sensor streamers (75 m apart) with a length of


Figure 6. The ratio of the normalized rms values between energy removed using dual-sensor-enhanced SRME and conventional SRME. Note that the section is dominated by values greater than 1 (blue), indicating that dual-sensor-enhanced SRME leads to more multiple energy being removed than conventional SRME.

Figure 5. Normalized rms values of energy removed using (a) conventional SRME and (b) dual-sensor-enhanced SRME.

6 km and dual sources with shots every 25 m. The streamer depth of 15 m results in the first receiver notch at around 50 Hz. This means that any proper comparison between the two SRME approaches would be severely restricted in frequency range. To facilitate the analysis, the total pressure wavefield with upgoing and downgoing wavefield constituents was reconstructed as if it had been recorded at 7 m depth, thereby extending the bandwidth for the analysis to around 100 Hz. Such a reconstruction is possible only because the recorded pressure wavefield and the vertical component of the particle velocity wavefield are available. All comparisons made hereafter refer to the simulated 7 m tow depth.

The stacked sections of the input before SRME are shown in Figure 4a for the total pressure wavefield and in Figure 4b for the upgoing pressure wavefield. Note the improved event continuity and increased bandwidth in the latter. Figures 4c and 4d show the differences between the input and the result obtained after subtraction of the multiples predicted using conventional SRME and dual-sensor-enhanced SRME, respectively. Similarly, Figures 4e and 4f show the respective results after subtraction. With regard to the latter, note the improved signal-to-noise, the clear reduction in diffracted multiples on the left, and enhancements to deeper primaries. However, from these displays, it is still difficult to determine directly whether more multiple energy has been removed using the dual-sensor data, as the conventional SRME result contains the receiver ghost, whereas the dual-sensor-enhanced result shows the upgoing wavefield after multiple removal and, in both cases, the result is influenced by the adaptive subtraction process. This is similar to the situation in 4D processing, where the effect of a processing step has to be evaluated for two different input data sets. Therefore, the level of removed multiple energy is determined and compared between the two SRME methods. Hence, first the performance of each SRME is quantified by computing the root mean square (rms) values of the difference normalized by the rms values of the input, where rms values are calculated in overlapping t-x windows over the stacked section. This is shown in Figures 5a and 5b, respectively, for conventional and dual-sensor-enhanced SRME. The former shows less subtraction (in red) along the top of the model with the well-defined multiple events. To further quantify this comparison of the performance of two different SRME results, the ratio between their normalized rms values was also computed, which is defined as the energy removed by dual-sensor-enhanced SRME divided by the energy removed by conventional SRME. A value larger than 1 indicates that dual-sensor-enhanced SRME removed more multiples than conventional SRME. Figure 6 shows this ratio. Note that the section is dominated by values greater than 1 (in blue), confirming that dual-sensor-enhanced SRME outperformed conventional SRME.

In the second example, we will consider data that were acquired offshore Greenland. The streamers were towed at 15


Figure 7. Stacked results of (a) the total (scattered) pressure wavefield at a tow depth of 8 m, (b) the upgoing pressure wavefield after wavefield decomposition of the dual-sensor data, (c) the total (scattered) pressure wavefield after conventional SRME, and (d) the upgoing pressure wavefield after dual-sensor-enhanced SRME using both the upgoing pressure wavefield and the downgoing vertical component of the particle velocity wavefield.

Figure 8. Amplitude spectra over a selected time-space window. Blue represents the spectrum using dual-sensor-enhanced multiple prediction. Red represents the spectrum for the conventional SRME application.

Figure 9. Display of the ratio of the normalized rms values between energy removed using dual-sensor-enhanced SRME and conventional SRME. Note that the section is dominated by values greater than 1 (blue), indicating that the dual-sensor-enhanced SRME led to removal of more multiple energy than conventional SRME.
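The windowed-rms QC underlying Figures 5, 6, and 9 can be sketched as follows. Function and parameter names are our own, and non-overlapping windows are used for brevity, whereas the article uses overlapping ones:

```python
import numpy as np

# rms of the removed energy (input minus output), normalized by the rms of
# the input, evaluated in t-x windows over a stacked section.
def normalized_rms(data_in, data_out, wt=100, wx=20):
    diff = data_in - data_out
    out = np.zeros((data_in.shape[0] // wt, data_in.shape[1] // wx))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = np.s_[i * wt:(i + 1) * wt, j * wx:(j + 1) * wx]
            out[i, j] = np.sqrt(np.mean(diff[w] ** 2)) / \
                        (np.sqrt(np.mean(data_in[w] ** 2)) + 1e-12)
    return out

# Ratio map as in Figures 6 and 9: values > 1 mark windows where the
# dual-sensor-enhanced prediction removed more multiple energy, e.g.
# ratio = normalized_rms(stack, dual_srme) / normalized_rms(stack, conv_srme)
```

Comparing removed energy rather than the outputs themselves sidesteps the problem that the two results differ in ghost content as well as in demultiple performance.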

m depth, which means that the first receiver notch will occur at around 50 Hz. In this case, the total pressure wavefield with upgoing and downgoing wavefield constituents was reconstructed as if it had been recorded at 8 m depth, thereby extending the bandwidth for the analysis up to around 90 Hz. All comparisons made hereafter refer to the simulated 8

m tow depth. Figures 7a and 7b show the stacked sections before SRME, for the total pressure wavefield (upgoing and downgoing wavefields) and the upgoing pressure wavefield, respectively. Figures 7c and 7d show the results obtained after subtraction of the multiples predicted using conventional SRME and dual-sensor-enhanced SRME, respectively. Note the improved signal-to-noise using the latter. Figure 8 shows the amplitude spectra obtained after the application of conventional SRME (in red) and dual-sensor-enhanced SRME (in blue). It can be seen that dual-sensor-enhanced SRME leads to a much flatter amplitude spectrum compared to conventional SRME, due to correct phase and amplitude corrections being applied using dual-sensor-enhanced multiple prediction. Similar to the previous example, the ratio of removed energy by dual-sensor SRME over that by conventional SRME is calculated and shown in Figure 9. Note that the section is dominated by values greater than 1 (in blue), confirming that dual-sensor-enhanced SRME outperformed conventional SRME.

Conclusions
In dual-sensor-enhanced SRME, both the pressure wavefield and the vertical component of the particle velocity wavefield are used for multiple prediction. As a result, phase (both linear and rotational) and amplitudes of the multiples are not compromised, leading to optimal multiple predictions. Adaptive subtraction of the predicted multiples is more robust and constrained when high-fidelity multiple predictions are used, leading to more multiples being removed compared to conventional SRME.

References

Berkhout, A. J. and D. J. Verschuur, 1997, Estimation of multiple scattering by iterative inversion, part 1: Theoretical considerations: Geophysics, 62, no. 5, 1586–1595, doi:10.1190/1.1444261.
Fokkema, J. T. and P. M. van den Berg, 1993, Seismic applications of acoustic reciprocity: Elsevier.
Frijlink, M., R. van Borselen, and W. Söllner, 2011, The free surface assumption for marine data-driven demultiple methods: Geophysical Prospecting, 59, no. 2, 269–278, doi:10.1111/j.1365-2478.2010.00914.x.
Pharez, S., N. Hendrick, and R. Tenghamn, 2008, First look at seismic data from a towed dual-sensor streamer: The Leading Edge, 27, no. 7, 904–907, doi:10.1190/1.2954031.
Söllner, W., E. Brox, M. Widmaier, and S. Vaage, 2007, Surface-related multiple suppression in dual-sensor towed-streamer data: 77th Annual International Meeting, SEG, Expanded Abstracts, 2540–2544, doi:10.1190/1.2792994.
Tenghamn, R. and P. E. Dhelie, 2009, GeoStreamer: increasing the signal-to-noise ratio using a dual-sensor towed streamer: First Break, 27, no. 10, 45–51, doi:10.3997/1365-2397.2009017.

Acknowledgments: Husky Energy and PGS management are thanked for permission to publish this work. Martijn Frijlink, Walter Söllner, and John Brittan are thanked for the fruitful discussions. Corresponding author: Roald.van.Borselen@pgs.com

926 The Leading Edge August 2011

SPECIAL SECTION: Multiple attenuation

True-azimuth 3D SRME in the Norwegian Sea


PATRICK SMITH, BARTOSZ SZYDLIK, and TEUFELIN TRAYLEN, WesternGeco

True-azimuth 3D SRME techniques have proved their worth in deepwater areas that have complex water-bottom topography. However, because they take account of the true geometry of the recorded data, they also outperform 2D and nominal-azimuth 3D SRME algorithms in areas of shallower water. In regions with strong water-layer multiples, such as the Norwegian Sea, the improved multiple predictions given by true-azimuth 3D SRME can result in substantially improved data quality. Standard 3D SRME techniques accurately predict the kinematics of the multiples, but not their amplitudes. This is a disadvantage in the Norwegian Sea, where low-order peg-leg multiples and higher-order simple reverberations interfere at typical reservoir depths. Iterative 3D SRME techniques can, in principle, overcome these issues, and test results on real 3D seismic data sets are promising.

Introduction
The capabilities of true-azimuth generalized 3D SRME have been demonstrated in several publications, for example Moore and Bisley (2005). These examples, however, have been largely restricted to deepwater areas with complex water-bottom topography. We show here, using data from the Norwegian Sea, that the algorithm is also applicable to shallower-water areas with relatively simple water bottoms. The Norwegian Sea (Figure 1) is an important petroleum province off the west coast of Norway that contains oil fields such as Norne, Heidrun, Tyrihans, and Draugen. The water depth is around 300–400 m. Similar water depths extend farther to the south, where fields such as Gullfaks and Snorre are located, and north into the Barents Sea. The seafloor in these areas is hard, giving rise to strong water-layer multiples that can almost completely obscure the reservoir reflectors (Figure 2). The introduction of 2D SRME (Verschuur et al., 1992) in the mid-1990s greatly improved the industry's ability to attenuate these multiples compared with the other algorithms available at the time.
However, the 2D algorithm gives minor prediction errors in the presence of gentle seabed topography and the iceberg scouring that is commonplace in this area, and also with the wide-tow streamer acquisition configurations that are used nowadays. The water-layer multiples in this area are so strong that even minor prediction errors result in substantial residual multiple energy after demultiple.

True-azimuth 3D SRME
The WesternGeco true-azimuth 3D SRME algorithm, called 3D general surface multiple prediction (GSMP), was described by Dragoset et al. (2008). Figure 3 shows schematically the fundamental insight on which SRME is based. The raypath associated with a given surface multiple on a seismic trace can be split into two at the downward reflection point (DRP). If these two traces have been recorded, they can be convolved to predict the multiple. In practice, we don't know

which multiples are present, or the locations of the downward reflection points for a given trace. The solution is to convolve the trace pairs associated with all possible downward reflections and perform a Kirchhoff summation of the results. The energy associated with the surface multiples will sum constructively to create a prediction of the multiples for the trace under consideration.

For 3D SRME, we define a spatial aperture that contains the source and receiver locations of the seismic trace being considered, and this rectangular region is divided into a regular grid of possible downward reflection points. For each grid node, we convolve the appropriate trace pairs, and sum the convolutions to create the multiple predictions. But what if the necessary traces were not recorded? 3D GSMP overcomes this in a simple fashion, as shown schematically in Figure 4. Here we are predicting the multiples for a trace with a source at S and a receiver at R. The blue squares denote the grid that defines the downward reflection points for which convolutions will be computed. To perform the convolution for the downward reflection point marked by X, we need to identify traces R-X and X-S. These traces were most likely not recorded, so the algorithm searches through the entire 3D prestack data set to identify the pair of traces whose midpoint location, offset, and azimuth are most similar to those of the desired traces. The offsets of these traces are adjusted by differential moveout, and the traces are convolved. This procedure is repeated for all downward reflection points in the aperture, and the resulting convolutions are summed to create the multiple model for this trace. The process is repeated for every trace in the 3D prestack data set. The multiple predictions are then adaptively subtracted from the input data. 3D GSMP predicts multiples at the true shot-receiver azimuth, which has a substantial impact on the accuracy of the multiple estimate.
It is simple to use, as it does not require explicit regularization and extrapolation beforehand. This simplicity comes at the cost of substantial CPU usage, but this is surmountable with modern computer systems.
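Stripped of the true-azimuth nearest-trace search and differential-moveout correction, the core convolve-and-sum step described above can be sketched as follows. This is a toy 1-D-geometry illustration with hypothetical names, not the GSMP implementation:

```python
import numpy as np

def predict_trace_multiples(data, src, rec, drp_grid, nt):
    """Toy SRME-style prediction for one trace: for every candidate
    downward-reflection point X in the aperture, convolve the
    source-side leg (src -> X) with the receiver-side leg (X -> rec)
    and stack the results (the Kirchhoff-style summation).

    data : dict mapping (source_pos, receiver_pos) -> 1-D trace array
    """
    pred = np.zeros(nt)
    for x in drp_grid:
        leg_a = data.get((src, x))     # trace for leg S -> X
        leg_b = data.get((x, rec))     # trace for leg X -> R
        if leg_a is None or leg_b is None:
            continue  # GSMP instead searches for the most similar recorded trace
        pred += np.convolve(leg_a, leg_b)[:nt]   # convolve and sum
    return pred
```

Surface-multiple energy stacks constructively over the DRP grid, while contributions from incorrect DRPs tend to cancel in the summation.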

Figure 1. The bathymetry of the Nordic Seas, with the Norwegian Sea highlighted. (Courtesy of the University of Oslo)


Figure 2. A stacked inline created from Norwegian Sea data with no demultiple processing applied. The reservoir is at about 2.2 s in the center of the image. (Courtesy of Statoil)

Figure 3. Schematic diagram showing how a raypath with a downward reflection point at the surface can be split into a pair of raypaths that may have been separately recorded.

Figure 5. Comparison of different demultiple processes on undershoot data from the Norwegian Sea. (a) No demultiple. (b) Tau-p deconvolution. (c) 3D GSMP. (Courtesy of Statoil)

Figure 4. Schematic diagram showing how GSMP identifies the appropriate traces to convolve in order to create the multiple prediction.

Data examples from the Norwegian Sea
Undershoot data. Two-vessel acquisition is commonly used to undershoot production platforms and other obstructions. 2D SRME is ill-suited to these data, as the large crossline distance between source vessel and receiver vessel hampers the extrapolation to zero offset that 2D SRME requires, and causes large variations in source-receiver azimuth that violate the simple inline geometry assumptions of 2D SRME. The

value of undershoot data in this area has historically been limited by the relatively poor multiple attenuation achievable with the available demultiple methods. Figure 5 shows inline stacks of undershoot data from a field in the Norwegian Sea. Tau-p deconvolution is commonly used to attenuate multiples on this type of data, but we see here that the benefits are minor. 3D GSMP, on the other hand, comprehends the geometry of the input data and gives multiple attenuation of similar quality to that achieved for prime-line data.

Wide-tow acquisition. Modern 3D seismic surveys are often acquired using wide-tow configurations that maximize efficiency. However, the large azimuth variations associated with the outer streamers can be incompatible with the assumptions inherent in 2D SRME. Figure 6 shows a pair of 2D stacks computed from inner-streamer data and outer-streamer data from a wide-tow acquisition in the Norwegian Sea. The first water-layer multiple, with associated diffracted multiples, is marked with an arrow. 2D SRME (Figure 7) removes the planar and diffracted multiples quite effectively from the inner-streamer line, but poorly attenuates the diffracted multiples on the outer-streamer line. 3D GSMP (Figure 8) performs consistently well


Figure 6. 2D inline stacks from the inner (left) and outer (right) streamers of a wide-tow acquisition, with no multiple attenuation. The first water-layer multiple is highlighted. (Courtesy of WesternGeco Multiclient) Figure 10. An inline stack with 2D SRME. (Courtesy of ENI Norge)

Figure 7. 2D inline stacks from the inner (left) and outer (right) streamers of a wide-tow acquisition, with 2D SRME. (Courtesy of WesternGeco Multiclient)

Figure 11. An inline stack with 3D GSMP. (Courtesy of ENI Norge)

Figure 8. 2D inline stacks from the inner (left) and outer (right) streamers of a wide-tow acquisition, with 3D GSMP. (Courtesy of WesternGeco Multiclient)

same inline after application of 2D SRME. The 2D algorithm has removed the multiples where the water bottom is fairly planar, but has left substantial residual energy beneath the water-bottom irregularities (for example, in the area marked on the left of the figure). 3D GSMP (Figure 11) gives better attenuation of this energy.

Prediction of multiple amplitudes
The description of 3D SRME given above glossed over an important limitation of the algorithm. It correctly predicts the kinematics of all surface multiples, but does not accurately predict the amplitude of multiples greater than first order. Figure 12 shows this schematically. The third-order multiple shown in the upper panel can be predicted by three different combinations of traces. Each of these contributes to the Kirchhoff summation, so that the amplitude of the third-order multiple is overpredicted by a factor of three. This is often not an issue in deepwater areas, as the different orders of multiples tend to be widely separated, and adaptive subtraction can correct the amplitude errors. In the Norwegian Sea, however, the reservoirs are typically below the strong Base Cretaceous Unconformity (BCU) and are often obscured by the first-order peg-leg multiple off the BCU. However, this first-order peg-leg multiple tends to be overlain by the third- or fourth-order water-layer reverberations. Adaptive subtraction cannot simultaneously correct erroneous amplitudes predicted by

Figure 9. An inline stack with no demultiple. (Courtesy of ENI Norge)

on both inner- and outer-streamer data sets. Water-bottom topography. Figure 9 shows an inline stack, with no demultiple applied, from an area with rather minor variations in water-bottom topography. Figure 10 shows the


Iterative SRME (Berkhout and Verschuur, 1997) can be a solution to this problem, as shown schematically in Figure 13. Here, a multiple-free trace is used on one side of the convolution to ensure that the multiple amplitudes are correctly predicted. We can use an initial estimate of the multiple-free data to compute a more accurate multiple model, and iterate to achieve the desired result. Figure 14 shows the effect of iterating 3D GSMP in this manner. The iterative result is less contaminated by both first-order peg-leg multiples and higher-order water-layer reverberations, as marked by the arrows. Summary Water-layer multiples are a significant data-quality issue in the Norwegian Sea. True-azimuth 3D SRME has become the algorithm of choice for removing these multiples, being better suited than 2D SRME to the seismic data acquired in this area. Multipass approaches can achieve even better multiple attenuation, and are starting to be used in production.
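The iterative principle of Figure 13 can be demonstrated on a 1-D free-surface reverberation series. The example below is a toy sketch with assumed values (water time, reflection coefficient), not the production algorithm; it converges to the multiple-free primary:

```python
import numpy as np

T, r, nt = 20, 0.5, 200      # two-way water time (samples) and water-bottom
d = np.zeros(nt)             # reflection coefficient (assumed toy values)
for n in range(1, nt // T):
    d[n * T] = (-1) ** (n - 1) * r ** n   # data: primary + free-surface multiples

p = d.copy()                 # initial estimate of the multiple-free data
for _ in range(8):
    # Berkhout-Verschuur-style update P_{k+1} = D + P_k * D, where '*' is
    # convolution and the -1 free-surface reflection is folded into the sign
    p = d + np.convolve(p, d)[:nt]

# p converges to the primary alone: a single spike of amplitude r at time T
```

Using the current primary estimate on one side of the convolution is exactly what removes the amplitude overprediction of higher-order multiples, since each multiple is then predicted only once.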
Figure 12. Schematic diagram that shows why SRME algorithms overpredict the amplitudes of higher-order multiples.

References

Figure 13. Schematic diagram showing the principle of iterative SRME.

Berkhout, A. J. and D. J. Verschuur, 1997, Estimation of multiple scattering by iterative inversion, part 1: Theoretical considerations: Geophysics, 62, no. 5, 1586–1595, doi:10.1190/1.1444261.
Dragoset, B., I. Moore, M. Yu, and W. Zhao, 2008, 3D general surface multiple prediction: An algorithm for all surveys: 78th Annual International Meeting, SEG, Expanded Abstracts, 2425–2430, doi:10.1190/1.3059366.
Moore, I. and R. Bisley, 2005, 3D surface-related multiple prediction (SMP): A case history: The Leading Edge, 24, no. 3, 270–284, doi:10.1190/1.1895311.
Verschuur, D. J., A. J. Berkhout, and C. P. A. Wapenaar, 1992, Adaptive surface-related multiple elimination: Geophysics, 57, no. 9, 1166–1177, doi:10.1190/1.1443330.

Acknowledgments: Our thanks to the WesternGeco Stavanger seismic data-processing teams who produced the results shown here, and to our clients (cited in the figure captions) who allowed us to show the data examples. Corresponding author: NMcclenaghan@slb.com

Figure 14. Comparison of 3D GSMP (top) and iterative 3D GSMP (bottom). (Courtesy of Statoil)

SRME for the overlapping multiples of different orders. Processing flows in this area frequently use cascaded applications of different demultiple algorithms to achieve the necessary multiple attenuation, even when true-azimuth 3D SRME is included in the workflow.
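The factor-of-three overprediction sketched in Figure 12 can be checked numerically on a toy 1-D reverberation series (assumed values; the -1 free-surface operator is included in the prediction):

```python
import numpy as np

T, r, nt = 10, 0.4, 80       # water time (samples) and water-bottom
d = np.zeros(nt)             # reflection coefficient (assumed toy values)
for n in range(1, nt // T):
    d[n * T] = (-1) ** (n - 1) * r ** n   # data: primary + water-layer multiples

m = -np.convolve(d, d)[:nt]  # SRME-style prediction with the -1 surface operator

# The third-order multiple at time 4T splits into three leg pairs
# (1+3, 2+2, 3+1 water-layer bounces), so it is predicted three times:
print(m[4 * T] / d[4 * T])   # ratio of predicted to true amplitude, approx. 3
```

The second-order multiple at 3T is likewise predicted twice (1+2 and 2+1), which is why a single adaptive scalar cannot fix all orders at once when they overlap in time.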

Honors & Awards Ceremony

New this year! The Honors & Awards Ceremony will take place on Tuesday evening at 7:30 p.m. at the Grand Hyatt. You are invited to attend the ceremony to honor some of the top geoscientists in the field.

Image courtesy of Gary Cralle/SACVB

Student Activities

The Annual Meeting in San Antonio will host multiple student programs. The Student Career Panel, SEG Challenge Bowl Finals, and the Student Networking Event will all take place on Monday, 19 September. All events will take place at the Grand Hyatt. Don't miss out!

Riverwalk Hotels

Don't forget to book your hotel room in San Antonio! There are 18 official SEG hotels with discounted rates for people attending the SEG meeting. Visit www.seg.org/amhousing to book your room.

Golf Tournament
La Cantera Golf Club, Saturday, 17 September

More than 600 oral & poster presentations
Over 8,000 industry professionals
More than 300 companies exhibiting
Cutting-edge sessions & workshops
Networking events

Icebreaker: Sunday, 18 September
Exhibition: Sunday–Wednesday, 18–21 September
Technical Program: Monday–Friday, 19–23 September
SEG Forum: Monday, 19 September
Challenge Bowl: Monday, 19 September
Applied Science Education Program: Wednesday, 21 September
Wednesday Night Celebration: Wednesday, 21 September

Society of Exploration Geophysicists International Exposition and 81st Annual Meeting
Henry B. Gonzalez Convention Center
18–23 September 2011
San Antonio, TX, USA

SEG Annual Meeting returns to San Antonio


Dean Clark Editor, The Leading Edge

The highlight of the applied geophysicist's calendar, known in the vernacular as the SEG convention and officially as the Society of Exploration Geophysicists International Exposition and 81st Annual Meeting, will be held at the Henry B. Gonzalez Convention Center in San Antonio, Texas, on 18–23 September 2011. For some delegates, the meeting will actually last longer because of related events. On Friday 16 September, Julien Meunier will present the one-day Distinguished Instructor Short Course, annually one of SEG's most popular offerings. This year's topic is Seismic Acquisition from Yesterday to Tomorrow. The first of 15 continuing education courses and the first of the many student-oriented events will begin on Saturday 17 September.

Advance registration, which brings a significant discount, is now open and may be done via the Web at www.seg.org/am, by fax at 1-918-497-5565, or by regular mail. Advance registration for the meeting will close on 8 August and for housing on 12 August. SEG has secured sleeping rooms in 18 hotels, and rooms may be booked through the Web site. The deadline for booking housing through SEG, and for obtaining the SEG group rate, is 12 August. The registration fee covers admission to all technical program sessions, admission to the exhibit floor (including the popular Icebreaker on Sunday evening), the Honors and Awards Ceremony on Tuesday evening, and the always-topical SEG Forum.

SEG's 2011 Annual Meeting might be historic because of the proposed Bylaws, which would change SEG's governance structure and which will be debated and voted on at the Council meeting on Sunday 18 September. Changing SEG governance has been the focus of several recent Executive Committees, and a proposal to revise the Bylaws was narrowly defeated at the 2010 Council meeting. That proposal has been adapted to cover objections

by those sections which led the opposition last year. This set of revised Bylaws was published in the July issue of TLE and is available online at seg.org. The July issue of TLE also contained an article by SEG President-elect Bob Hardage which highlighted the major changes in the proposed Bylaws and the reasons that led recent Executive Committees to determine they were necessary. The President's Page in the July and August issues of TLE (by First Vice President Mike Graul and Hardage, respectively) gives additional background on the Bylaws issue and the extensive efforts over the past year to revise the 2010 proposal. If approved by the Council, the proposed Bylaws will then be submitted to the Active membership for the decisive vote on whether they are accepted or rejected.

Other than the potentially historic vote by the Council, the 2011 SEG convention gives every indication of following closely along the lines of its immediate predecessors, meaning a Technical Program of amazing breadth and scope and an overflowing exposition floor. This year's Technical Program attracted more than 1150 submissions, the second-highest total in the history of the SEG Annual Meeting. Well over 850 survived the rigorous review process (which involved the Technical Program Committee and approximately 650 volunteer reviewers) required for approval to be presented at the meeting. Approximately 620 oral presentations will be made in 13 parallel sessions running from Monday afternoon until Thursday morning. The 240 poster presentations will be on display through Wednesday. As usual, the Technical Program will be complemented by many other events such as 17 postconvention workshops (which begin at noon Thursday and run through Friday), technical luncheons, and topical luncheons. One major change this year is that the International Showcase and Global Theater have been merged with the Technical Program, which will feature two special global sessions.
The focus region this year will be North America

Daily convention schedule


Friday, 16 September
Distinguished Instructor Short Course: 8 a.m.–5 p.m.

Saturday, 17 September
Continuing Education Courses: 8 a.m.–5 p.m.
SEG Preconvention Golf Tournament: 8 a.m.

Sunday, 18 September
Continuing Education Courses: 8 a.m.–5 p.m.
SEG Council Meeting & Presidential Address: 1–3 p.m.
Icebreaker/Exposition Preview: 6–8 p.m.

Monday, 19 September
SEG Forum: 9–11:30 a.m.
Exposition: 9 a.m.–6 p.m.
Student Career Panel: 1–3 p.m.
Technical Sessions: 1:30–5 p.m.
SEG Challenge Bowl: 3–6 p.m.
Student Networking Event: 6–8 p.m.

Tuesday, 20 September
Technical Sessions: 8:30 a.m.–5 p.m.
Exposition: 9 a.m.–6 p.m.
Gravity & Magnetics Luncheon: 12–1:30 p.m.
GAC Latin America/ULG Luncheon Meeting: 11:30 a.m.–1:30 p.m.
GAC Europe/FSU Luncheon Meeting: 11:30 a.m.–1:30 p.m.
International Reception: 4:30–6 p.m.
Near-Surface Geophysics Meeting and Reception: 7 p.m.
Honors and Awards Ceremony: 7:30 p.m.

Wednesday, 21 September
Technical Sessions: 8:30 a.m.–5 p.m.
Women's Networking Breakfast: 9–10:30 a.m.
Exposition: 9 a.m.–4 p.m.
Applied Science Education Program: 10 a.m.–12 p.m.
Development & Production Luncheon: 11:30 a.m.–1:30 p.m.
GAC Asia/Pacific Luncheon Meeting: 11:30 a.m.–1:30 p.m.
GAC Middle East/Africa Luncheon: 11:30 a.m.–1:30 p.m.
SEG Viva! Old Mexico Meets Texas: 6–9 p.m.

Thursday, 22 September
Technical Sessions: 8:30 a.m.–12 p.m.
Workshops: 1:30–5 p.m.

Friday, 23 September
Workshops: 8:30 a.m.–5 p.m.

including Canada, Mexico, and the Caribbean, and the sessions will explore opportunities and challenges in these areas. Another major change, vis-à-vis the Technical Program, is that the Expanded Abstracts will be published on a USB card, available for US $15 to those who order via advance registration. The familiar Expanded Abstracts DVD will not be produced. Those purchasing the USB card along with registration will be issued a voucher with their registration material. The voucher may be exchanged in the Book Mart during exposition hours. The USB card may be purchased in the Book Mart, for US $20, by those who did not purchase through advance registration. In addition, approximately 150 of the presentations on the Technical Program will be recorded and available (on DVD-ROM) for purchase from the Professional Development Department.

The always-provocative SEG Forum will be held on Monday morning. This year's theme is Exploration Frontiers: Geography, Technology, and Business Models. The panel will include several major figures in the energy industry from around the world: Tim Dodson, executive vice president for Statoil's Exploration Business Area; Susan Cunningham, senior vice president of Noble Energy (with responsibility for worldwide exploration) and the current chairman of the

Offshore Technology Conference; David Lawrence, Shell's executive vice president, Exploration and Commercial; and Carl Trowell, president of WesternGeco. The moderator will be Hank Hamilton, chairman of the board of TGS-NOPEC Geophysical Company.

The exposition floor will again be, as it has been for several decades, the world's premier showplace for state-of-the-art geophysical technology and instrumentation. Well over 300 companies will be displaying their latest products and services on the floor, and many of the exhibitors will be relatively new companies. New geophysical companies have been popping up with great regularity in recent years due to the boom of geophysical technology in many directions and the simultaneous need to develop new equipment and methods to handle the truly enormous amounts of data now available to geophysical interpreters. The exhibit floor will first open during Sunday evening's Icebreaker/Exposition Preview which, as usual, will be among the most popular events of the meeting's social agenda, the place where old acquaintances are renewed and


new friends first met. The Icebreaker, which will run from 6 to 8 p.m., will feature a buffet of complimentary hors d'oeuvres, cash bars, and musical entertainment. Admission to the Icebreaker is included in all full-delegate and Spouse Program registrations. Separate tickets are available at a cost of US $60. In recent years, the Icebreaker has followed the Honors and Awards Ceremony. However, that will not be the case this year, as the ceremony has been moved to Tuesday evening.

The extensive social program will begin with the annual Golf Tournament, a Four-Man Texas-Style Scramble, which will tee off on Saturday morning at the Palmer Course at La Cantera Golf Club. Transportation, continental breakfast, lunch, contests, and door prizes will be provided. The cost is US $185 until 8 August. After 8 August, registration cannot be made online and the cost increases to US $210 per golfer. Registration is on a first-come, first-served basis.

The major social event this year will be Viva! Old Mexico Meets Texas at Sunset Station from 6 to 9 p.m. on Wednesday 21 September. Tickets are US $20 for registered delegates and US $60 for guests. Tickets may be purchased via advance registration or at the registration area in the convention center. Transportation will be provided. Note that tickets must be purchased by noon on Tuesday 20 September.

Students will have many special events during the conference along with the exposition and technical program. There will be a Career Placement Area to visit with recruiters and a Student Career Panel to explore opportunities in the field of geophysics. A Student Networking Event will provide a relaxed atmosphere to meet fellow students. This year's Applied Science Education Program, which is aimed at high school students and teachers, will feature two sisters from San Antonio, the Suarez siblings, in whose honor a newly established dinosaur species has been named!
And for those needing to recover from a week of information overload, the meeting will offer a diverting field trip to study the geology and history of San Antonio and its missions on Thursday 22 September. The cost is US $65 per person, and attendance is limited to 46 people.

A final note: although SEG's 2011 Annual Meeting is still some weeks in the future, it is not too early to start planning for the 2012 convention. San Antonio is a popular venue for SEG members, but the site of the 2012 meeting, Las Vegas, is equally, if not more, so. It was at the previous SEG convention in Las Vegas, in 2008, that the record for submissions to the Technical Program was established. Considering such factors as the health of the geophysical industry during this time of high oil prices, the steady advance of geophysical technology on almost every front, and the varied attractions of Las Vegas itself, there is every reason to believe that the 2012 SEG Annual Meeting will be just as popular.

Applied Science Education Program


Since 2002, the SEG Annual Meeting, in conjunction with associated sponsors, has hosted an outreach program, known officially as the Applied Science Education Program (ASEP), to introduce geophysics to local high school students and teachers. Topics have ranged from deep-sea exploration to space missions to extreme geophysics. The 2011 event, at 10 a.m. on Wednesday, 21 September, will feature a one-hour presentation by sisters, San Antonio natives Celina and Marina Suarez, who recently achieved a signal distinction: a previously unknown dinosaur which they found during their master's research has been named for them (Geminiraptor suarezarum). Their address, Walking with Dinosaurs and Other Critters through the Cretaceous Greenhouse World and into the Future, will show how paleontologists and geochemists can use ancient creatures to reconstruct the past, and the importance of Earth's past to understanding the future. Admission is free to SEG delegates.

The Suarez sisters earned their bachelor's degrees from Trinity University in San Antonio, master's degrees in geology from Temple University in Philadelphia, and PhDs in geology from the University of Kansas. Marina Suarez is now an assistant professor at the University of Texas at San Antonio, and Celina Suarez is a National Science Foundation Postdoctoral Fellow at Boise State University. More than 350 high school teachers and students are expected to participate in this unique educational opportunity. Many of these students will also participate in the extended portion of the ASEP, which includes a tour of the SEG Exposition and a question-and-answer session with the Suarez sisters. Those interested in participating in the outreach portion of the ASEP can volunteer to be exhibit-floor tour guides for the teachers and their students (this can be done on the Annual Meeting Web site under the volunteer link). Exhibitors can also volunteer to host students for a short time at their booth during the extended program.
More information about ASEP can be obtained from Cassidy Crain (ccrain@seg.org or 1-918-497-5525).


Continuing Education Courses


Fifteen Continuing Education Courses will be offered on Saturday and Sunday, 17–18 September 2011, in conjunction with the SEG Annual Meeting. All courses are two days unless otherwise noted, and will be held at the Henry B. Gonzalez Convention Center on the River Level (Rooms 001A–001B, 002A–003B, and 006A–008B). Check-in is 7–8 a.m. Each course will start at 8 a.m. and end at 5 p.m. Registration fees are US $895 for SEG members and US $995 for nonmembers. The fee for Student members is US $300. The deadline for advance registration is 17 August 2011. A late fee of US $35 will be incurred after this date. Registration will be accepted until 9 September. After that date, only on-site registration will be available. On-site registration, 7–8 a.m. on 17–18 September, is first-come, first-served, and space in a particular course is not guaranteed.
The 2011 courses and instructors are:
Planning and Operating a Land 3D Seismic Survey, by Andreas Cordsen and Peter Eick
Understanding Seismic Anisotropy in Exploration and Exploitation: Hands On, by Leon Thomsen
Applications of Geophysical Inversion and Imaging, by Brian Russell and Larry Lines
A Practical Understanding of Pre- and Post-Stack Migration, by John Bancroft
Microseismic Monitoring in Oil or Gas Reservoirs, by Leo Eisner
Full Waveform Inversion, by Mrinal Sen
AVO: Seismic Lithology, by Mike Graul and Fred Hilterman
Seismic Stratigraphy and Seismic Geomorphology (AAPG/SEG Short Course), by Henry Posamentier
Seismic Data Interpretation in the Exploration Domain, by Tim E. Smith
Marine Electromagnetic Methods for Hydrocarbon Exploration, by Steve Constable and Kerry Key
Multi-Component Seismic, Principles and Applications, by Robert Garotta
Seismic Fluid Detection, Reservoir Delineation, and Recovery Monitoring: The Rock Physics Basis, by Gary Mavko
Pore Pressure Prediction in Practice, by Martin Traugott
3D Seismic Attributes for Prospect Identification and Reservoir Characterization, by Kurt Marfurt
Writing for Earth Scientists, by Matt Hall
Sponsors: INOVA Geophysical, ION, Nexen, Inc., and Total SA

Wednesday Night Event: Viva! Old Mexico Meets Texas

Take a journey through time. This event combines old Mexico with new Texas. Be careful as you wander around; you might meet up with some gunslingers and outlaws, or you might be delighted to find some beautiful señoritas dancing or a band of mariachis. If you have ever wanted to learn to salsa or line dance, this is your event! Instructors will be at the event to teach you. The main stage act will keep everyone dancing into the night. Enjoy the fun, food, and friends at this closing event.

This event will take place at Sunset Station on Wednesday, 21 September, from 6–9 p.m. Tickets are US $20 for registered delegates and US $60 for guests. The ticket includes one free drink. Tickets may be purchased at the convention center SEG registration area. Buy your ticket by Tuesday, 20 September, 12 p.m. Transportation will be provided from the convention center and hotels on the shuttle route.
Images courtesy of Sunset Station

August 2011

The Leading Edge

937

New titles at Book Mart


The 2011 Annual Meeting Book Mart will feature several new titles and some new conveniences offered to attendees.

Multicomponent Seismic Technology, by Bob A. Hardage, Michael V. DeAngelo, Paul E. Murray, and Diana Sava, emphasizes practical applications with chapters dedicated to data-acquisition procedures, data-processing strategies, techniques for depth-registering P and S data, rock physics principles, joint interpretations of P and S data, and numerous case histories that demonstrate the value of multicomponent data for evaluating onshore and offshore prospects. All forms of multicomponent seismic data are considered: three-component, four-component, and nine-component. The book will be of interest to researchers in multicomponent seismic technology and to explorationists who have to locate and exploit energy resources. The book will be appreciated by those who shun mathematical theory because it explains principles and concepts with real data rather than with mathematical equations.

Seismology of Azimuthally Anisotropic Media and Seismic Fracture Characterization presents a systematic analysis of seismic signatures for azimuthally anisotropic media and describes anisotropic inversion/processing methods for wide-azimuth reflection data and VSP surveys. Ilya Tsvankin and Vladimir Grechka focus on kinematic parameter-estimation techniques operating with P-waves as well as with the combination of PP and PS data. The part devoted to prestack amplitudes includes azimuthal AVO analysis and a concise treatment of attenuation coefficients, which are highly sensitive to the presence of anisotropy. Discussion of fracture characterization is based on modern effective media theories and illustrates both the potential and limitations of seismic methods. Field-data examples highlight the improvements achieved by accounting for anisotropy in seismic processing, imaging, and fracture detection.
Seismic Acquisition from Yesterday to Tomorrow, the companion book for the 2011 SEG/EAGE Distinguished Instructor Short Course, offers a reflection on the evolution of seismic acquisition. Julien Meunier starts with a short historical overview, followed by discussions of signal and noise. The core of the book is the relationship between acquisition parameters and seismic image quality. It will provide geoscientists and all those interested in seismic images with the still unconventional view of seismic data acquisition as the first component of seismic imaging.

Advances in Near-Surface Seismology and Ground-Penetrating Radar, edited by Richard D. Miller, John H. Bradford, and Klaus Holliger, is a collection of papers written by renowned and respected authors from around the world. Miller said that technologies used in the application of near-surface seismology and ground-penetrating radar (GPR) have seen significant advances over the last several years. Both methods have benefited from new processing tools, increased computer speeds, and an expanded variety of applications. This four-section book captures the most significant and cutting edge of the active areas of research, unveiling truly pertinent studies that address fundamental applied problems. The book is copublished with AGU and EEGS.

Michael Riedel, Eleanor C. Willoughby, and Satinder Chopra edited Geophysical Characterization of Gas Hydrates to bring together papers focusing on geophysical studies of this unconventional form of energy source. Chopra said that three reasons can be given for expanded studies of natural gas hydrates. (1) Gas from hydrate may be a new clean energy source. (2) Natural gas hydrate may play a role in climate change. (3) Gas hydrate is a hazard in conventional hydrocarbon exploration from shallow gas release and from sea-floor instability, especially in areas where hydrate is stable.

Also look for the seventh edition of Alistair Brown's Interpretation of Three-Dimensional Seismic Data, copublished with AAPG. Numerical Modeling of Seismic Wave Propagation: Gridded Two-way Wave-equation Methods, edited by Johan O. A. Robertsson, Joakim O. Blanch, Kurt Nihei, and Jeroen Tromp, is in production and scheduled to be in print for the meeting.

The 2011 Technical Program Expanded Abstracts will be published on a USB card and available for US $15 for those who order and pay through registration. The Expanded Abstracts DVD will not be produced. If you purchase the USB card through registration, a voucher will be included in your registration material. Vouchers may be exchanged in the Book Mart. The USB card may be purchased in the Book Mart for US $20 for those who did not purchase through registration.
Not every book in SEG's catalog is available at the Annual Meeting. This year, the Book Mart will offer a help desk for those looking for particular titles that might not be at the Annual Meeting Book Mart. Attendees also may peruse the books at the Book Mart and then order and pay with a credit card at the help desk to have titles shipped to their homes or businesses. A satellite Book Mart featuring new publications and best-sellers will be open in the Registration area on Saturday and Sunday afternoons (17–18 September) and in the Technical Program area on Thursday (22 September) for the convenience of meeting attendees. Vouchers for Technical Program USB cards may be exchanged at these locations as well as at the Book Mart on the exhibit floor.

2011 North America Honorary Lecturer

Practical Seismic Petrophysics: The Effective Use of Log Data for Seismic Analysis
Tad Smith, Apache Corporation, Houston, Texas, USA
This talk will focus on the important role of seismic petrophysics in the quest to extract additional information from subtle seismic responses. Topics covered will include various aspects of log editing, petrophysical interpretation (including integration of other data sources: core, fluids, pressures, etc.), and some common pitfalls associated with the workhorses of rock physics (invasion corrections, shear velocity estimation, and elements of fluid substitution). It is important to recognize that log data should not simply be recomputed to fit prior expectations as defined by a rock physics model. Instead, rock physics models should be used as templates that allow the interpreter to better understand the underlying physics of observed log responses and how they are governed by local petrophysical properties. Case studies will be used to reinforce critical concepts.
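One of the rock-physics workhorses the abstract mentions, fluid substitution, is conventionally done with Gassmann's equation. As an illustrative sketch (not from the talk itself; the quartz, dry-frame, and fluid moduli below are assumed placeholder values), the calculation is only a few lines of Python:

```python
def gassmann_ksat(k_dry, k_min, k_fl, phi):
    """Saturated-rock bulk modulus from Gassmann's equation (moduli in GPa)."""
    b = 1.0 - k_dry / k_min  # Biot coefficient
    return k_dry + b ** 2 / (phi / k_fl + (1.0 - phi) / k_min - k_dry / k_min ** 2)

# Hypothetical sandstone: quartz grains (36 GPa), dry frame 8 GPa, 25% porosity.
k_sat_brine = gassmann_ksat(k_dry=8.0, k_min=36.0, k_fl=2.8, phi=0.25)
# A much more compressible fluid (e.g., gas near 0.1 GPa) stiffens the rock far less:
k_sat_gas = gassmann_ksat(k_dry=8.0, k_min=36.0, k_fl=0.1, phi=0.25)
# Note the shear modulus is unchanged by the pore fluid in Gassmann's model.
```

The brine case stiffens the frame from 8 GPa to roughly 13.8 GPa, while the gas case barely moves it — the contrast that makes fluid substitution useful for interpreting subtle seismic responses.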
DATE          LOCATION                      SECTION
29-Aug ....... Boise, ID, USA .............. Boise State University Student Section
31-Aug ....... Laramie, WY, USA ............ University of Wyoming Geophysical Society
8-Sep ........ Socorro, NM, USA ............ New Mexico Tech SEG Student Chapter
13-Sep ....... Houston, TX, USA ............ Geophysical Society of Houston
14-Sep ....... Houston, TX, USA ............ Geophysical Society of Houston
15-Sep ....... Norman, OK, USA ............. University of Oklahoma Geophysical Society
5-Oct ........ St. Louis, MO, USA .......... St. Louis University Geophysical Society
6-Oct ........ Columbia, SC, USA ........... University of South Carolina Geophysical Society
11-Oct ....... Wichita, KS, USA ............ Geophysical Society of Kansas
12-Oct ....... Hays, KS, USA ............... Fort Hays State University Geophysical Society
24-Oct ....... Bakersfield, CA, USA ........ Pacific Coast Section SEG
25-Oct ....... Stanford, CA, USA ........... Stanford University Geophysical Society
26-Oct ....... San Ramon, CA, USA .......... Bay Area Geophysical Society
27-Oct ....... Los Angeles, CA, USA ........ University of Southern California Geophysical Society
28-Oct ....... Tucson, AZ, USA ............. University of Arizona Geophysical Society
1-Nov ........ Pittsburgh, PA, USA ......... Geophysical Society of Pittsburgh
3-Nov ........ Newark, NJ, USA ............. Rutgers Geophysical Society
4-Nov ........ Quebec, QC, Canada .......... University of Quebec INRS-ETE
7-Nov ........ Edmonton, AB, Canada ........ University of Alberta Geophysical Undergraduate Society
15-Nov ....... San Antonio, TX, USA ........ San Antonio Geophysical Society
16-Nov ....... Midland, TX, USA ............ Permian Basin Geophysical Society
17-Nov ....... Plano, TX, USA .............. Dallas Geophysical Society
8-Dec ........ Anchorage, AK, USA .......... Geophysical Society of Alaska
12-Dec ....... Calgary, AB, Canada ......... Canadian SEG
13-Dec ....... Saskatoon, SK, Canada ....... University of Saskatchewan Geoph. Soc.

For more information, the full itinerary, or to view previous HL presentations visit: www.seg.org/hl

Attend the SEG Virtual Class


Introduction to Gravity and Magnetics for Explorationists
Thursday, 25 August, 11:00 a.m.–12:30 p.m. CDT. Instructor: Michal Ruder
Attend this class from the convenience of your home or work computer. Listen to the instructor, view the presentation, and discuss in real time. All you need is a computer with internet connection and audio.

More information at http://seg.org/eLearning. $40 Members (USD), $50 Nonmembers (USD)

For more information e-mail eLearning@seg.org or call +1-918-497-5526

Register soon. Space is limited.

This virtual class is designed for explorationists who would like to learn how gravity and magnetics can provide insight into both regional, frontier plays and also prospect-scale structure and stratigraphy. It will be presented as a web-based, 90-minute class with online Q&A. We will include some basic theory, extensive discussion of case histories on both local and regional scales, and interactive modeling demonstrations.

2011 Honors and Awards recipients

Recommended by the Honors and Awards Committee and approved by the Executive Committee

Maurice Ewing Medal: Amos M. Nur
Honorary Membership: Peter M. Duncan, Öz Yilmaz, Roel Snieder
Cecil Green Enterprise Award: Chen-Bin Su, Wes Bauske, Zhiming Li
Reginald Fessenden Award: Norman Daniel Whitmore, Jr.
J. Clarence Karcher Award: Mostafa Naghizadeh, Ramesh (Neelsh) Neelamani, Guojian Shan
Life Membership: Shivaji Dasgupta, Joseph Michael Reilly

Best Paper in Geophysics in 2010: "Oscillating oil drops, resonant frequencies, and low-frequency passive seismology," Michael K. Broadhead

Honorable Mention, Best Paper in Geophysics:
"Coupled equations for reverse time migration in transversely isotropic media," Paul Fowler, Xiang Du, and Robin Fletcher
"A perspective on 3D surface-related multiple elimination," William Dragoset, Eric Verschuur, Ian Moore, and Richard Bisley
"Interpolation with Fourier-radial adaptive thresholding," William Curry
"On data-independent multicomponent interpolators and the use of priors for optimal reconstruction and 3D up/down separation of pressure wavefields," Kemal Özdemir, Ali Özbek, Dirk-Jan van Manen, and Massimiliano Vassallo

Best Paper in The Leading Edge in 2010: "Multiazimuth versus wide-azimuth acquisition designs for sub-Messinian imaging: A finite-difference modeling study in West Nile Delta, Egypt," Mian Muhammad Nural Kabir, Kyoung-Jin Lee, Walter Rietveld, Brian J. Barley, James Keggin, and Graham Johnson

Best Oral Paper at 2010 Annual Meeting: "Controlled-source elastic interferometric imaging by multidimensional deconvolution with downhole receivers below a complex overburden," Joost van der Neut, Kurang Jvalant Mehta, Jan Thorbecke, and Kees Wapenaar

Honorable Mention, Best Oral Paper at 2010 Annual Meeting: "New opportunities from 4D seismic and lithology prediction at Ringhorne Field, Norwegian North Sea," David Johnston, Upendra Tiwari, Michael Helgerud, and Bernard Laugier

Best Poster Paper at 2010 Annual Meeting: "Seismic wave extrapolation using lowrank symbol approximation," Sergey Fomel, Lexing Ying, and Xiaolei Song

Honorable Mention, Best Poster Paper at 2010 Annual Meeting: "Issues regarding the use of time-lapse seismic surveys to monitor CO2 sequestration," Grace Cairns, Helmut Jakubowicz, Lidia Lonergan, and Ann Muggeridge

Best Student Oral Paper at 2010 Annual Meeting: "Role of contact heterogeneities on macroscopic elastic properties of granular media," Ratnanabha Sain

Award of Merit, Best Student Oral Paper at 2010 Annual Meeting: "Stochastic simulation of fault networks from 2D seismic lines," Nicolas Cherpeau

Best Student Poster Paper at 2010 Annual Meeting: "Comprehensive petro-elastic modeling aimed at quantitative seismic reservoir characterization and monitoring," Alireza Shahin

Award of Merit, Best Student Poster Paper at 2010 Annual Meeting: "Separation of blended data by iterative estimation and subtraction of interference noise," Panos Doulgeris

The 2011 Honors and Awards Program will be held at 7:30 p.m. Tuesday, 20 September, in the Texas Ballroom, Level 4, of the Grand Hyatt San Antonio.

Workshops
Seventeen postconvention workshops will be offered on Thursday and Friday after the close of the Technical Sessions. The Thursday workshops will begin at 1:30 p.m. and the Friday workshops at 8:30 a.m. Admission is US $95 for SEG members and US $190 for nonmembers. Student registration is US $25. Registration forms for the workshops will be available in the Registration Area in the Henry B. Gonzalez Convention Center.
Thursday, 22 September

Advances in Geothermal Systems Exploration, Characterization, and Challenges Ahead. Organizers: Erika Gasperikova and Jacques Leveille (through the support of the SEG Research Committee)
Basin Modeling. Organizers: Tim Grow, Rao Yalamanchili, Betty Johnson, and Jerry Hensel
Compressive Sensing and Computation. Organizers: Dave Wilkinson, Matthias Imhof, Felix Herrmann, and Bob Clapp
From Image to Insight: How Can We Leverage Pore-Scale Observations Through Rock Physics Models? Organizers: Stephan Gelinsky and Ali Mese
Full Waveform Inversion: Beyond the Phase of Direct Acoustic Arrivals. Organizers: Scott Morton, Jacques Leveille, Jim Gaiser, Paul Williamson, and Robert Bloor
Geophysical Data Interpretation for Unconventional Reservoirs. Organizers: Yongyi Li, Bill Goodway, Helen Farrell, Stephanie Nowak, and Ron Masters
Geophysics and Engineering: Convergence of Disciplines. Organizers: Jacques Leveille, Erika Gasperikova, Masoud Nikravesh, Ali Mese, and Paul McKay
Seismic Acquisition New Technologies: What for Who? Organizers: Michel Verliac, Stewart Levin, and Sergio Chávez-Pérez
Use of Seismic Technology in Petroleum Resources Estimation and Classification. Organizers: Henk Jaap Kloosterman, Pat Connolly, Raphic van der Weiden, Bruce Shang, Ellen Xu, Fred Aminzadeh, Pierre-Louis Pichon, and Martin Karrebach
Carbonate Research in China: Technologies Meeting Tough Challenges. Organizers: Sam Sun, Ron Masters, Jianfa Han, and Ping Yang

Friday, 23 September

Case Studies and Methods in Computational Seismic Chronostratigraphy. Organizers: Matthias Imhof, Tracy Stark, Huyen Bui, Ron Masters, and Michael Pelissier
Fundamentals of Shale Reservoir Characterization: Why Shales Are Different From Other Formations. Organizers: Dan Ebrom, Ali Mese, Azra Tutuncu, Jacques Leveille, Stephan Gelinsky, Cengiz Esmersoy, and Stephanie Nowak
Geophysics Applied to Geohazards and Public Safety. Organizer: Rick Miller
GPU Computing for Upstream Applications and HPC User Tools. Organizer: The Society of HPC Professionals (SHPCP)
The Highs and Lows of Broadband Seismic: From Acquisition through Inversion. Organizers: Adriana C. Ramírez, Jim Gaiser, Tom Dickens, Laurent Sirgue, and Mirko Van der Baan
Opportunities and Challenges in Unconventional Resources: Best of the Best from the 2011 D & P Forum. Organizers: Mark Houston, Enru Liu, Sam Sun, and Marty Terrell
Seismic Imaging in Thrust-Belts with Rugged Topography: Challenges, Problems, and Solutions. Organizers: Constantin Gerea, Ken Larner, Jean-Marc Mougenot, Dennis Yanchak, and Pedro A. Munoz


Technical Oral Sessions at-a-glance


Convention Center
Lila Cockrell Theatre Room 210 A Room 212 A Room 213 A Room 214 A Room 214 B Room 214 C Room 214 D Room 216 B Room 217 A Room 217 B Room 217 C Room 217 D Room 218

MONDAY AM
SEG FORUM

MONDAY PM

TUESDAY AM

TUESDAY PM

WEDNESDAY AM

WEDNESDAY PM

THURSDAY AM

GM 1: Processing and Inversion MG: Case Histories and Methods SPMUL 1: Surface Related Multiple Attenuation SPMI 1: Reverse Time Migration Angle Gathers SVE 1: Migration Velocity Analysis TL 1: Case Studies

EM 1: Modeling and Inversion PSC 1: Methods and Case Studies VSP 1: Processing and Imaging SPMI 2: RTM Theory and Computation SVE 2: Anisotropy TL 2: New Advances

GM 2: Applications and Field EM 2: Marine Studies PSC 2: Monitoring and Uncertainty SPMUL 2: Multiple Attenuation II NS 1: Environmental and Geotechnical Applications SPNA 1: Land and Marine Multichannel Filtering

EM 3: Land and Airborne

EM 4: Reservoir Characterization NS 2: Surface Waves VSP 2: Borehole Seismic Processing SPMI 6: Novel Methods SM 4: Reflections and Boundary Conditions SPIR: Multi-dimensional Seismic Regularization and Interpolation

SS 3: Hydrogeophysics SPNA 2: Matrix Decomposition and Interpolation Methods SPMI 5: Practical aspects in RTM SM 3: Finite Elements TL 3: Processing

SPMI 3: Illumination from Wide Azimuth and Multiples SPMI 4: Case Histories SVE 3: Tomography SM 1: Finite Differences SVE 4: Near Surface and Complex Structure SM 2: Case Studies SI 4: Time-lapse and CO2 Sequestration Applications

SI 1: FWI Applications

SI 2: FWI Theory I RP 2: Laboratory and Computational Methods RC 2: Lithology II INT 1: Strategies and Techniques I ACQ 2: Current Issues and Future Directions

SI 3: FWI Theory II RP 3: Attenuation, Dispersion, and Fluids RC 3: Fractures INT 2: Attributes

SI 5: FWI Computation and Applications SI 6: Miscellaneous Applications

RP 1: Numerical Modeling

RP 4: Anisotropy, Fractures, and Stress AVO 1: New Methodologies AVO 2: New Methods and Case Studies RC 4: Methods and Interpretation I INT 3: Strategies and Techniques II RC 5: Methods and Interpretation II INT 4: Regional Studies MS 2: Case Histories INT 5: Field Studies

RC 1: Lithology I MS 1: Techniques ACQ 1: Simultaneous Source Applications and Techniques

ACQ 3: Case Histories: New Acquisition Technologies ANI 1: Wave Propagation SS 2: Environmental Challenges in Unconventional Resources ST 1: Interferometry

ANI 2: Migration and Velocity Model Building ANI 3: Azimuthal Amplitude and Velocity Variations PSC 3: Mechanisms and Event Characterization ST 2: Wavefield Approximation and Diffraction Separation PSC 4: Interferometry in Passive Seismic SS 4: Classification Applied to Munitions Response

SS 1: Recent Advances and the Road Ahead SGS 1: Caribbean Petroleum Systems SGS 2: North America BG 1: Simulation of EM and Sonic Borehole Measurements HA: Humanitarian Applications of Geoscience BG 2: Interpretation of Single-well and Cross-well Measurements

Poster Sessions at-a-glance


MONDAY PM
SPNA P: Prestack Hybrid and Curvelet Transform SPMI P1: Theory

Abbreviation/Topic:

TUESDAY AM
RC P1: Attribute Applications I SPMI P2: Novel Methods SM P1: Wave Modeling and Ray Tracing BG P: Single-well and Cross-well Measurements SI P2: FWI Theory I

TUESDAY PM
RC P2: Attribute Applications II

WEDNESDAY AM
RC P3: Techniques

WEDNESDAY PM
RC P4: Diverse Studies

TL P: Seismic

SPMI P3: Applications and Implementation ACQ P: Survey Design and Illumination Modeling PSC P2: Techniques and Processing SI P4: Stochastic Inversion RP P1: Measurements and Applications

SVE P: Miscellaneous Approaches SM P2: Acoustic and Elastic Waves ANI P: Anisotropic Moveouts and Traveltimes SI P5: FWI Theory II

INT P1: Attributes and Techniques PSC P1: Events Locating and First Break Picking

INT P2: Case Studies

NS P: General Contributions SI P3: Miscellaneous Applications EM P2: Theory and Applications II

SI P1: Case Studies

EM P1: Theory and Applications I

GM P: Methods and Applications

RP P2: Measurements and Modeling

Audio and/or videotaping of any portion of the Technical Program or Workshops held in conjunction with SEG meetings is prohibited without prior consent of the SEG Executive Committee.

ACQ....... Acquisition and Survey Design ANI ........ Anisotropy AVO ....... AVO BG ......... Borehole Geophysics EM ........ EM Exploration GM ........ Gravity and Magnetics HA ......... Humanitarian Applications INT ........ Interpretation MS ........ Multicomponent Seismic MG ........ Mining and Geothermal NS ......... Near Surface PSC ....... Passive Seismic RC ......... Reservoir Characterization RP ......... Rock Physics SGS....... Special Global Session SI .......... Seismic Inversion SM ........ Seismic Modeling SPMI ..... Seismic Processing: Migration SPMUL.. Seismic Processing: Multiples SPNA .... Seismic Processing: Noise Attenuation SPIR...... Seismic Processing: Interpolation and Regularization SS ......... Special Session ST ......... Seismic Theory SVE ....... Seismic Velocity Estimation TL.......... Time Lapse VSP ....... VSP


International Workshop on Gravity, Electrical and Magnetic Methods and Their Applications

GEM Beijing 2011

October 10–13, 2011

Beijing, China

Deadline for Abstract Submission: May 31, 2011


For more information visit

http://geophysics.mines.edu/cgem/gem2011.html

SEG Forum Exploration Frontiers: Geography, Technology, and Business Models


Monday, 19 September, 9 a.m., Henry B. Gonzalez Convention Center, Lila Cockrell Theatre. With oil prices hovering around US $100 per barrel, exploration for new hydrocarbon reserves is gaining momentum, but oil companies face significant challenges in finding exploration opportunities that have the potential to meaningfully grow reserves. Exploration has always been tempered by the element of risk, especially in true frontier plays. The Forum will examine our industry's current thinking about the new frontiers in exploration. What geographic areas are most promising for new reserves? What new or developing technologies are most critical for managing the risks of frontier exploration? What new business models with partners, suppliers, and host governments are required to enable investment in frontier exploration? Don't miss this opportunity to listen and interact with the panel.
Tim Dodson, executive vice president of exploration, Statoil, is a UK citizen with 30 years of industry experience, 25 of them with Statoil. He has an MSc in geology from the University of Keele in the UK. Dodson started his career in the oil and gas industry in 1980 with an oil and gas service company and worked for five years in South America and the Middle East. He joined Statoil's Exploration and Production Norway unit in 1985 and has held various management positions within exploration, production, technology, and HR, including vice president Exploration Southern North Sea and vice president Technology Arena, Exploration. From 2004 to 2008, Dodson was senior vice president for Exploration in Norway. In 2008, he was appointed senior vice president for Global Exploration in Statoil's business area for international operations. In January 2011, Dodson became executive vice president for the Exploration business area in Statoil and a member of Statoil's Corporate Executive Committee.

Susan M. Cunningham, senior vice president, exploration, Noble Energy, has more than 25 years of industry experience. She joined Noble Energy as a senior vice president in April 2001 and is responsible for exploration worldwide. Previously, she was Texaco's vice president of worldwide exploration from April 2000 to March 2001. Employed by Statoil from 1997 to 1999, she advanced from exploration manager for deepwater Gulf of Mexico to vice president. Cunningham began her career in 1980 in Calgary as a geologist at Amoco Canada. From 1981 to 1994, she served in various positions including general manager, Denmark, where she was country manager based in Copenhagen. She is currently chairman of the Offshore Technology Conference and was elected to the Board of Cliffs Natural Resources in 2005. Cunningham earned a BSc in geology and physical geography from McMaster University in Ontario, Canada.

David Lawrence, executive vice president, exploration and commercial, Shell.
Lawrence's responsibilities include discovering new oil and natural gas, new business development, natural gas monetization, and wind energy. He started out in research for Shell and has worked across the business, including lead positions in Producer Finance, Corporate Strategy, Investor Relations, and Global Exploration. Prior to receiving his PhD from Yale University in 1984, he was a teacher, explored for uranium, and mapped for the U.S. Geological Survey.

Carl Trowell, president of WesternGeco. Before assuming his current role in May 2009, Trowell held a variety of management positions for Schlumberger in Europe, Asia, and the UK, including region manager for Schlumberger's operations in the North Sea. Most recently, he served as vice president Strategic Marketing. He began his career as a petroleum engineer with Shell in 1995 and came to Schlumberger in 1998, where he was new ventures manager for Geco-Prakla, a heritage WesternGeco company. Trowell also has served in marketing and business development positions in IPM. He has a PhD in earth sciences from the University of Cambridge, a BSc in geology from Imperial College of Science, Technology and Medicine, and an MBA from London Open University.

Hank Hamilton, organizer and moderator, chairman of TGS. Hamilton began his career as a geophysicist with Shell Offshore in 1981 before moving to Schlumberger's GECO in 1987, where he ultimately held the position of vice president and general manager for all seismic product lines in North and South America. Hamilton joined TGS as its CEO in 1995 and remained in the position following the 1998 merger with Nopec International that created the initial public listing for TGS. He continued as CEO of TGS through June 2009, when he was elected to his current role of chairman.
He served on the Board of Directors of the International Association of Geophysical Contractors (IAGC) from 1993 to 2011 and currently serves on the boards of the SEG Foundation and Odfjell Offshore, Ltd., in addition to TGS.

Tim Dodson

David Lawrence

Carl Trowell

Susan M. Cunningham

Hank Hamilton


Grand Hyatt San Antonio, Texas Ballroom, Level 4 Tuesday, 20 September, 7:30 p.m.

Join Chairman Terry Young as he hosts the 2011 Honors & Awards Ceremony, recognizing and honoring talented individuals and organizations that have advanced our science and benefited our Society.

Put on your dancing shoes and get ready for a night of music when the SEG Foundation presents its

2011 Presidential Jam: Rockin' on the River

From 9:30–11:30 p.m., Tuesday, 20 September, join your friends, family, and colleagues for a fun-filled night as SEG's past presidents and other illustrious geophysicists bring their musical stylings to the Texas Ballroom stage of the Grand Hyatt. Admission is free; donations will be accepted!
For more information, visit the Foundation booth in the SEG Pavilion!


I S E F

Empowering Tomorrow's Innovators in Los Angeles: The 2011 ISEF


RICHARD NOLEN-HOEKSEMA, e4sciencesEarthworks LLC

The theme for the 2011 International Science and Engineering Fair (ISEF) was Empowering Tomorrow's Innovators. ISEF occurs every May and is the premier competition for science and engineering projects developed by high school students from around the world. To participate at ISEF, students must compete successfully in a local or high school science fair and then at a regional or state ISEF-affiliated science fair. This year, more than 1500 high school students (future entrepreneurs, innovators, and scientists) competed. They were the survivors of 443 affiliate fairs in 65 countries, regions, and territories.

SEG has supported ISEF since 1965 and gives awards to outstanding projects that relate directly or indirectly to geophysics and the Earth sciences. This amounts annually to US$6000 in cash, plaques, certificates, and subscriptions to TLE. The 62nd Intel ISEF was held in Los Angeles, USA, on 8–13 May 2011. The SEG judges were Chuck Meeder, Steven Constable, YoungHee Kim, and me.

The first-place Distinguished Achievement Award went to Andrew Feldman, a junior at Manalapan High School (Englishtown, New Jersey). He won $2500 and a trip to the 2011 SEG Annual Meeting for Acoustic Imaging Using Optimized Beamforming Techniques. The motivation for his project was sound source localization for wildlife or vehicle tracking, gunshot detection and location, or vehicular testing. He assembled a near-real-time acoustic beamforming system for about $100 (not including the laptop computer). The system consisted of a square, multiplexed, 16-element microphone array designed for imaging far-field sound sources in the frequency range of 4.5–9.0 kHz. He also wrote software or obtained open-source software for data acquisition, filtering, antialiasing, adaptive delay-and-sum beamforming, and deconvolution. He tested his array and software using artificial narrowband sound sources and varying their frequency, range, and azimuth and elevation angles from the microphone array. His system was able to estimate the frequency and arrival direction from single sources very closely. The system's performance for multiple sources was less precise and accurate. The judging team thought Andrew's poster and presentation were outstanding. He answered questions extremely well, had intimate knowledge of his subject matter, and had plans for improving his project.

SEG first-place winner Andrew Feldman. (Photo by Chuck Meeder)

Andrew also received a $1000 First Award from the Acoustical Society of America, a $4000 Tuition Award and a trip to the London International Youth Forum from the Office of Naval Research, and a $1500 Second Award from Intel for Electrical and Mechanical Engineering.

We awarded two second-place Awards of Merit worth $1000. Clara Fannjiang, a junior at Davis Senior High School (Davis, California), won for Better Images, Fewer Samples: Optimizing Sample Distribution for Compressed Sensing in Radio Interferometry. She applied compressed sensing to radio interferometry. Compressed sensing is a sparse data sampling technique that takes advantage of redundancy in data sets (for example, in images, light pixels are likely to be next to light pixels and dark pixels next to dark pixels). Compressed sensing algorithms require incoherent sampling (i.e., high incoherence between samples). Clara studied how different sampling strategies affected sampling incoherence to optimize the application of compressed sensing to radio interferometry. She chose orthogonal matching pursuit as her compressed sensing algorithm and tested it by reconstructing an image of Galaxy 3C75 from different sampling schemes of the image; the sampling schemes simulated radio sensor distributions. She tested five sampling schemes: uniformly random, normally random, and three forms of the Y-shaped Very Large Array (VLA) geometry (VLA, VLA randomized along arms, and VLA perpendicularly perturbed). Clara found that uniformly random sampling allowed her to reconstruct the original image using compressed sensing with a high degree of accuracy. Compressed sensing is applicable to random sampling with the VLA geometry. Clara wrote her software using MATLAB. The judging team thought Clara's poster and presentation were excellent. She answered questions flawlessly and clearly knew her material. She was able to make it accessible to those unfamiliar with it.

Clara also received a $3000 First Award from the Air Force Research Laboratory, a certificate of honorable mention from the American Statistical Association, and a $1500 Second Award from Intel for Physics and Astronomy.

Marni Wasserman, a senior at Commack High School (Commack, New York), won for Investigating Climate Change: A Comparative Analysis of Colonial and Modern Weather Data. Marni studied the use of colonial-era weather data to investigate climate trends in the northeast United States. Most weather information from before circa 1850 comes from proxy sources such as ice cores, tree rings, and
946 The Leading Edge August 2011

I S E F

coral reefs. The standardized-scale thermometer has existed since the early 1700s, and people have made weather observations ever since. Phineas Pemberton kept a continuous record of temperature and weather conditions near Philadelphia, Pennsylvania from 1746 to 1776. These records are on file at the American Philosophical Society. Marni digitized Pemberton's observations from 1759 and 1767–1770 and compared them to weather data from NOAA for Philadelphia from 1878–1883 and 2005–2009. She checked the reliability of Pemberton's thermometer by comparing winter temperature readings with reports of snow or rain. For the Philadelphia area, Marni found the overall temperature has increased since 1759. There are more days over 80, 85, and 90°F in the 21st century than in the 18th century. For example, there were zero days above 90°F during 1759 and 1767–1770 and 45 days above 90°F during 2005–2009. Marni also received a $1000 Second Award from the American Meteorological Society. She will attend Johns Hopkins University in the fall.

We awarded three third-place Awards of Merit worth $500. Marian Bechtel, a sophomore at Hempfield High School (Landisville, Pennsylvania), won for A Stand-Off Seismo-Acoustic Method for Humanitarian Demining. This is the second year that Marian has received an SEG award. Her 2010 project was Developing a Process for Seismo-Acoustic Imaging Applied to Humanitarian Demining. This year Marian continued her project to find an effective way to locate both metal and nonmetal unexploded landmines. She is testing the theory that the vibration of landmines, which are partially hollow, is detectably different from the vibration of other objects (clutter) in a minefield. Last year, Marian projected a tone from a subwoofer at her test bed and used a geophone to measure vibrations and to detect resonances. This year, she tested landmine detection using noise cancellation of two horizontally spaced (roughly 10 inches apart) stand-off microphones.
Her source is a concrete vibrator located in one corner of the test bed. Marian designed and constructed the stand-off, dual-microphone detector, which looks much like a metal detector. Her tests showed that her device yielded a measurable and audible null when centered over one of her buried metal and plastic mock landmines. The results were statistically different for landmines and objects that are not landmines. Marian also received a certificate of honorable mention from the International Council on Systems Engineering (INCOSE); three $1000 U.S. Savings Bonds, a certificate of achievement, and a gold medallion from the United States Army; a $250 Second Award from the American Intellectual Property Law Association; and a $1000 Third Award from Intel for Electrical and Mechanical Engineering.

Alexander Kendrick, a senior at Los Alamos High School (Los Alamos, New Mexico), won for his project Electromagnetic Detection of Aquifers. This is the third year that Alexander has received an SEG award. His 2009 and 2010 projects were Underground Radio II and Underground Imaging, respectively. This year he designed, built, and

field-tested a more sensitive electromagnetic gradiometer. He tested his new gradiometer and carried out a nine-month time-lapse study over a known underground stream. The new gradiometer showed increased sensitivity. The time-lapse study showed a distinct correlation between the size of the electromagnetic phase shift and the discharge (volume rate) of water flowing in the underground stream. In addition, Alex was able to use his gradiometer to track the stream from its outflow to its last known mapped position in the cave. Alexander received a certificate of honorable mention from the International Council on Systems Engineering (INCOSE) and a $3000 First Award from Intel for Electrical and Mechanical Engineering. He will attend Harvey Mudd College.

Amy Robinson, a senior at Keystone School (San Antonio, Texas), won for Astronomical Image Processing: Eliminating Random Atmospheric Noise and Enhancing Low Resolution Images, Year III. Amy was interested in the effects of atmospheric noise on astronomical imaging. In one experiment, she simulated the effects of atmospheric noise by placing an image of the M81 galaxy under a glass dish of water, which she stirred to cause blurring. She took repeated photos of the M81 galaxy through the stirred water. She used image subtraction to find the degree of blurring in each image. She stacked and averaged the blurred images to try to recover the original image. In a second experiment, Amy introduced blurriness by averaging nine shifted images of the M81 galaxy; she shifted the same image a few pixels in all directions at 45° increments and the average produced a blurred image. She then averaged a number of these blurred images to try to recover the original image. In both experiments, Amy used image subtraction and statistics to quantify and demonstrate image improvement with processing. Amy also received a $3000 First Award from the Air Force Research Laboratory. Amy will attend Wellesley College.
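The stack-and-average step in Amy's first experiment relies on zero-mean random noise cancelling across independently corrupted frames. The toy sketch below uses synthetic numbers (not her data) purely to illustrate the effect:

```python
import random

# Toy illustration of frame stacking: averaging many independently
# noisy copies of a signal suppresses zero-mean random noise.
# The "image" and noise level are made-up values for demonstration.
random.seed(1)
signal = [0.0, 1.0, 4.0, 1.0, 0.0]            # a made-up 1-D "image"
frames = [[s + random.gauss(0, 0.5) for s in signal] for _ in range(200)]
stacked = [sum(col) / len(frames) for col in zip(*frames)]

# Residual error of the stacked image relative to the clean signal
err = max(abs(a - b) for a, b in zip(stacked, signal))
print(err < 0.2)   # noise is reduced well below the single-frame level
```

Averaging N frames reduces the standard deviation of zero-mean noise by roughly the square root of N, which is why stacking many short exposures can approach the unblurred original.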
This year we awarded four Honorable Mention Recognition Awards: Alexander Finney, a sophomore at Covenant Christian Academy (Huntsville, Alabama), for Precision Location of Acoustic Sources; Jenna Huling, a junior at Ada High School (Ada, Oklahoma), for Enhanced Adsorption of Arsenic on Aquifer Solids: Impact of Oxidative Treatment of Aquifer Solids; the team of Calvin Ling and Mark Sands, Liberal Arts and Science Academy High School in Austin, Texas, for Detecting Oil Spills Using Synthetic Aperture Radar; and the team of Andrey Halim and Reyner Jong, Santa Laurensai High School, Tangerang Selatan, Banten, Indonesia, for Bamboo-based Composites for Earthquake-Resistant Building Materials.

The 2012 Intel ISEF will be in Pittsburgh, Pennsylvania, 13–18 May 2012. For more ISEF information and a list of winners, please go to http://www.societyforscience.org/intelisef2012/.
Corresponding author: richard.nolen-hoeksema@e4sciences.com


Acoustic imaging using optimized beamforming techniques


ANDREW FELDMAN, Manalapan High School, New Jersey, USA

As computers become more powerful, researchers are extending the range of applications for acoustic imaging. Coherently sampling an array of sonic transducers creates a spatial filter that can map incoming sound waves based on their direction of arrival (DOA). A staple of acoustic array design is delay-and-sum (DAS) beamforming, whereby the microphone signals are delayed to bring signals from a desired direction into phase. Summing these delayed signals selectively amplifies waves coming from the desired direction, and repeating this steering process for a range of directions constructs an image of source intensity as a function of angle (Figure 1). DAS and other beamforming techniques were developed for radar and have been applied to seismic imaging, industrial testing, and quality control. Current acoustic array implementations tend to be highly specialized, requiring research and development efforts to expand the technology for specific uses. Lowering the cost of acoustic array technology could open it up to a broader range of applications.

The goal of this project was to develop a beamforming system capable of detecting the two-dimensional angular positions of multiple sound sources in real time at a cost of about US$100 (not including the cost of a laptop). A key design issue is the trade-off between the sampling rate of each microphone channel and the cost of the hardware for performing the associated analog-to-digital conversion. Multiplexing a single analog-to-digital converter (ADC) between many microphones could be more cost-effective for some applications than having an independent ADC for each microphone. To this end, the system developed here couples a 16-microphone acoustic array to a microcontroller with a single ADC multiplexed between 16 inputs. To take advantage of the cost benefit of the microcontroller's multiplexed ADC, it was necessary to deal with the resulting aliasing, caused by ADC sampling at too low a rate.
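The steer-and-sum idea described above can be sketched in a few lines. The simulation below uses the article's published design values (1.9 cm spacing, a 7.7 kHz narrowband tone) but is otherwise hypothetical, not the author's implementation; it steers a single 4-element row of the array across candidate angles and picks the angle of maximum summed power:

```python
import cmath
import math

# Published design values from the article; the code itself is illustrative.
c = 343.0    # speed of sound in air (m/s)
f = 7700.0   # narrowband source frequency (Hz)
d = 0.019    # microphone spacing (m)

def phases(angle_deg):
    # Per-element phase of a plane wave across one 4-element row of the array
    return [2 * math.pi * f * k * d * math.sin(math.radians(angle_deg)) / c
            for k in range(4)]

# Simulated single-frequency snapshot from a source at +15 degrees
true_angle = 15
snapshot = [cmath.exp(1j * p) for p in phases(true_angle)]

def das_power(angle_deg):
    # Steer: counter-rotate each channel, then sum; the channels add
    # constructively only when the steering angle matches the source DOA.
    return abs(sum(s * cmath.exp(-1j * p)
                   for s, p in zip(snapshot, phases(angle_deg)))) ** 2

angles = range(-90, 91)
est = max(angles, key=das_power)
print(est)  # peak of the DAS image sits at the true angle, 15
```

Scanning the full angular grid this way produces the intensity-versus-angle image of Figure 1; the peak width is set by the array's diffraction-limited beam pattern.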
This was addressed using band-pass sampling techniques, whereby a priori knowledge of the frequency band of interest allows the system to recover the unaliased signal (Proakis and Manolakis, 2007). Due to diffraction limits, the output image from a DAS beamformer is convolved with the array's beam pattern. For the frequencies used, this beam pattern was apparent. To get a clear image that could provide more exact DOA estimates, a CLEAN deconvolution was implemented. The CLEAN algorithm (Högbom, 1974) was developed to deconvolve aperture-synthesis images from radio telescopes. In CLEAN, the DOA of the highest-amplitude source is estimated and stored, a beam pattern scaled and shifted to the source is subtracted from the image, and the process is then repeated for the next highest-amplitude source, iteratively, until a threshold based on the number or amplitude of sources is reached. The sources are reinserted into the image as scaled and shifted Gaussians. A CLEAN deconvolution was chosen as the final stage of processing for this system because it is straightforward to implement and provides direct DOA estimates.
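The CLEAN loop described above (find the peak, subtract a scaled and shifted beam, repeat) can be sketched in one dimension. The triangular beam pattern and source positions below are made-up test inputs, not data from the project:

```python
# A minimal 1-D CLEAN sketch (simplified from the 2-D angle images the
# system actually processes). Beam and sources are synthetic examples.
def clean(dirty, beam, n_sources):
    """Iteratively find the image peak, subtract a shifted/scaled beam, repeat."""
    dirty = list(dirty)                 # work on a copy of the dirty image
    center = beam.index(max(beam))
    found = []
    for _ in range(n_sources):
        peak = max(range(len(dirty)), key=lambda i: dirty[i])
        amp = dirty[peak]
        found.append((peak, amp))       # store the DOA estimate and amplitude
        for i in range(len(dirty)):     # subtract the beam centered on the peak
            j = i - peak + center
            if 0 <= j < len(beam):
                dirty[i] -= amp * beam[j] / beam[center]
    return found

beam = [0.25, 0.5, 1.0, 0.5, 0.25]      # synthetic beam pattern
image = [0.0] * 20                      # "dirty" image: two sources * beam
for pos, amp in [(5, 2.0), (12, 1.0)]:
    for i in range(20):
        j = i - pos + 2
        if 0 <= j < 5:
            image[i] += amp * beam[j]

print(clean(image, beam, 2))            # -> [(5, 2.0), (12, 1.0)]
```

Because the synthetic sources here are well separated relative to the beam width, the loop recovers their positions and amplitudes exactly; overlapping beam patterns, as the article's two-source tests show, are what make real deconvolution harder.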

Acoustic design
The heart of the acoustic imaging system is the physical array: 16 electret microphones mounted in foam board and equally spaced in a 4 × 4 square. The electret microphones have a flat frequency response up to 10 kHz, are simple to incorporate into circuitry, and are easy to mount. A photographic image of the array face was used to calibrate the positions of the microphones to ~1 mm precision; this greatly improved the accuracy of DOA measurements for sources at extreme angles.

Central to the development process was the selection of design parameters that were matched to the array's desired frequency range. The Nyquist-Shannon sampling theorem says that, at the lower limit, a discrete system's sampling frequency must be twice the signal bandwidth to prevent aliasing (Shannon, 1949). If the sampling frequency is any lower, aliasing causes the frequency bands above one-half of the sampling frequency to fold down into the frequency band below one-half of the sampling frequency and become indistinguishable (Figure 2). Spatial aliasing introduces ghost images at the edge of the image, effectively limiting the system's angular range. The spatial sampling requirement is met if the spacing between adjacent microphones is less than or equal to one-half the

Figure 1. Delay-and-sum (DAS) beamforming amplifies plane-wave signals from a single direction. The delay stage brings signals from the desired direction into phase. The summation stage combines the microphone channels so that the in-phase signals from the desired direction are amplified by constructive interference.



Figure 3. System architecture. The multiplexed analog-to-digital converter samples the amplified and filtered signals from the 16 microphones. The microcontroller stores the signals and transfers them to a laptop, which processes the data to form an image.

Figure 2. The effect of aliasing on signals with frequencies above the Nyquist frequency (one-half of the sampling frequency, f_sample). The high frequencies fold down into the frequency band below the Nyquist frequency and become indistinguishable from unaliased signals.
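The fold-down effect of Figure 2, combined with the article's numbers (a 77 kHz ADC multiplexed 16 ways), can be checked with a little arithmetic. The helper functions below are illustrative, not the author's code, and assume the Nyquist zone occupied by the tone is known a priori, as band-pass sampling requires:

```python
# Arithmetic behind Figure 2 and the article's multiplexed-ADC numbers.
# alias() and unfold() are hypothetical helpers for illustration only.
fs = 77000.0 / 16          # per-microphone sampling rate with 16-way multiplexing
nyquist = fs / 2           # the ~2.4 kHz temporal aliasing threshold

def alias(f):
    """Apparent frequency of a real tone at f when sampled at rate fs."""
    r = f % fs
    return r if r <= nyquist else fs - r

def unfold(fa, zone):
    """Invert alias(), given the Nyquist zone the tone is known to occupy."""
    return (zone // 2) * fs + fa if zone % 2 == 0 else ((zone + 1) // 2) * fs - fa

f = 7700.0                          # the narrowband test tone used in the article
fa = alias(f)                       # folds down below the Nyquist frequency
zone = int(2 * f // fs)             # must be known a priori for recovery
print(fa, unfold(fa, zone))         # 1925.0 7700.0
```

The 7.7 kHz tone appears at 1925 Hz in the raw samples; only the a priori knowledge that the signal lies in a known narrow band makes the unfolding unambiguous, which is exactly the band-limiting condition the article's filters enforce.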

sound's wavelength. Considering this, the system's desired frequency range was chosen to be 4.5–9.0 kHz. Very little ambient noise exists at 9.0 kHz, and a spacing of 1.9 cm between microphones satisfies the sampling requirements for 9.0 kHz. The resulting square aperture is 5.7 cm on a side, which due to diffraction limiting establishes the lower limit of roughly 4.5 kHz on the usable frequency range. A drill press was used to create a 4 × 4 array of holes in foam board, and a spacing of 1.9 cm left just enough space so that the foam did not collapse as more holes were drilled.

Temporal aliasing presented more of a challenge. The ADC's sampling rate of 77 kHz is more than adequate for sampling a single microphone. With the 16-way multiplexing, however, the sampling rate at a given microphone is reduced to 4.8 kHz, corresponding to a temporal aliasing threshold of 2.4 kHz. This would limit the system to work in the 0–2.4 kHz frequency range. It was realized that, given a priori knowledge of the frequency band occupied by the signal, band-pass sampling techniques could overcome the limits set by aliasing. Band-pass sampling normally involves band-limiting an input channel so that the Nyquist-Shannon bandwidth criterion is met. This system treats a narrowband signal that is known to exist within a given frequency bandwidth of 2.4 kHz as being band-limited, because lower-frequency noise is filtered out and higher-frequency noise was found to be insignificant. During signal processing, the aliased frequencies in the sampled data are unfolded into the known frequency band and the unaliased signal is extracted. This provided a marked improvement, allowing the array to operate over its entire 4.5–9.0 kHz range.

System architecture
Figure 3 shows the system architecture. The signals from the 16 microphones pass through 16 independent two-stage amplifiers before they are sampled by the microcontroller. When it was found that much of the ambient noise, especially speech, exists at low frequencies, RC high-pass filters were inserted between the stages of each amplifier to attenuate noise and enforce band-limiting. After completing system construction, the gain of each input channel was calibrated by placing a sound source several meters away and recording the sound amplitude at each microphone. The differences in relative amplitude between the microphones were saved so that future measurements could be normalized. The array, microcontroller, and microphone circuitry are mounted on a wooden base to make a single peripheral; this peripheral connects to a laptop via USB (Figure 4).

Figure 4. The physical setup.

The imaging software performs a 256-point fast Fourier transform (FFT) on all 16 microphones and unfolds the aliased frequencies. The software makes phase corrections to compensate for the 13 μs time delay between ADC samples. It then places the complex frequency components of interest into a matrix and implements DAS beamforming as the multiplication of this matrix by a series of matrices containing complex delay factors that steer the array toward specific horizontal and vertical angles with 2° resolution. The CLEAN algorithm deconvolves the array's beam pattern from the delay-and-sum beamforming output, returning a list of DOA estimates for the five highest-amplitude sources in the image. It was found that, if source coordinates were expressed in terms of a horizontal angle and a vertical angle with respect to the line normal to the array face, it was convenient to implement DAS beamforming in two dimensions by breaking the two-dimensional angular range into horizontal one-dimensional slices on which beamforming could be performed. As a result, the DOA estimates are expressed in terms of vertical and horizontal angles, rather than in polar coordinates. The DOA measurements are displayed on the screen alongside the CLEAN image and the DAS output image. The cost target of about $100 was achieved. The bulk of the cost is in the microprocessor board. All software was made for the project or was open source and free.

Figure 5. Measured location of a source at 15° over the 200 Hz–20 kHz range. This test verifies that the system can be used for source frequencies in the 4.5–9.0 kHz range, as per the design objectives. Measured angles at frequencies below this range are erratic due to diffraction limiting and, above this range, spatial aliasing sometimes causes large deviations.

Testing and results
The DAS beamforming algorithm requires the sound sources to be in the far field so that they produce plane waves. All sources used in testing were therefore 1 m from the array. At this distance, the array's Fresnel number is << 1 and the source is essentially in the far field. During testing, the room

was free of audible noise and there was always a line-of-sight path from the array to the sources. Therefore, it was assumed that the DOAs of the highest-amplitude signals detected by the CLEAN deconvolution indicated the locations of the sound sources, and that the remainder of the five sources detected by CLEAN corresponded to reflections or artifacts.

As a preliminary, the system's performance was assessed as a function of frequency. A speaker was placed at a horizontal angle of 15° to the array normal. A series of software-generated sine-wave tones ranging in frequency from 200 Hz to 20 kHz was put through the speaker, and the acoustic array was used to measure the sound's DOA at each frequency (Figure 5). The array performs optimally in the 4.5–9.0 kHz range. Directivity is lost at lower frequencies. Measurements at higher frequencies exhibit greater precision but are occasionally skewed by ghost images resulting from spatial aliasing. This gave a clear indication that the array was following the rules set by its design parameters.

The system's first task was to measure the location of a single sound source, a speaker playing a software-generated 7.7-kHz sound. This frequency lies well within the operable range indicated by the previous test. Testing was conducted in a quiet basement. The speaker was fastened to an extendable platform standing on the floor. Shifting and extending the speaker platform would change the DOA of the sound so that, with an attached protractor, it was possible to set values for the horizontal and vertical angles of the source relative to the array normal. The system's DOA measurement could then be compared to the known angular location of the speaker. To test the system's ability to make horizontal-angle measurements, the speaker was placed at 0° vertically, and measurements were made while its horizontal angle was varied from −90° to +90° in increments of 10°. Vertical-angle testing was made difficult by the ceiling and the floor, which limited acceptable source angles to between −34° and +35°. The vertical-angle measurements were made at 5° intervals with the horizontal angle at 0°. An additional ten sets of ten repeated measurements were made at specific horizontal and vertical angles to get a better idea of the system's precision.

Figure 6 shows the results of single-source testing. Perfect results would produce a 45° line on a graph of measured angle versus predicted angle. For horizontal-angle measurements, the actual regression slope is 1.00 and the intercept is 0.63. Thus, the regression line is within 1° of the 45° line. The measurement precision was 2°. For vertical-angle measurements, the regression slope is 1.09 and the intercept is 2.23. Therefore, the deviation of the regression line from the 45° line is between 0° and 5°. The measurement precision was 4°. The noticeably larger error in the vertical measurements is attributed to reflections off the low ceiling. Overall, these results are highly satisfactory.

Figure 6. Results for single-source testing in horizontal and vertical directions, comparing least-squares regression lines, along with a 45° line that would represent exact agreement.

Figure 7. Results for two-source testing over horizontal angles, comparing least-squares regression lines for both sources, along with 45° lines that would represent exact agreement. Below the calculated Rayleigh limit, the sources were unresolved as predicted.

Based on the Rayleigh criterion, a well-known fundamental resolution limit related to the size of a discrete aperture, it was expected that the CLEAN deconvolution should be able to resolve sources with a separation of 45°. This is independent

Figure 8. Breakdown of image-formation time in milliseconds and as fractions of total time. The data transmission and the imaging together consumed 93% of the total. The imaging time would be shorter if CLEAN searched for fewer than the five sources used here.

of the precision of the single-source DOA measurements. In a second test, the system was able to measure the locations of multiple sources down to the Rayleigh resolution limit. Adding a second speaker playing the 7.7-kHz tone and mounted on a separate platform meant two sound sources were in front of the array. Horizontal-angle measurements were made while the horizontal angular separation between the two speakers was decreased from 180° to 6°. Figure 7 shows that the regression lines for the two-source horizontal-angle measurements deviate from their respective 45° lines by a maximum of 10°, and that the measurements have a precision of 4°. Below 40° (near the predicted 45° Rayleigh limit), the beam patterns from the two sources begin to merge so that they are indistinguishable from a single source. Thus, it appears that CLEAN performed down to the Rayleigh limit. Interestingly, there seems to be an almost sinusoidal variation of the two-source measurements around the best-fit lines. Because this did not occur with single sources, it is highly likely that this is an artifact introduced into the CLEAN deconvolution by the overlapping of multiple source beam patterns. This illustrates the challenge of choosing the best deconvolution algorithm for a device such as an acoustic array that is strongly diffraction-limited.

Figure 8 gives a breakdown of the computation time necessary to produce an image. The time needed to produce an acoustic image and locate five sources is 0.77 seconds. This lag is noticeable but represents near-real-time performance. Notably, the CLEAN deconvolution took up 56% of the image-formation time. When appropriate, one way to improve this would be to search for fewer than five sources. In addition, the microcontroller's USB interface does not use the high-speed USB protocol. Consequently, the serial data transfer consumed 37% of the image-formation time. A future implementation using the higher-speed protocol would reduce the image-formation time significantly.

Conclusion
The goal of achieving two-dimensional imaging on a multiplexed acoustic array was accomplished, and the images were formed in near real time. Performance was excellent for single sources and was realized with some systematic errors for two sources. Digital antialiasing was effective, significantly broadening the system's range of usable frequencies. The cost target of about $100 was achieved. The results validate the multiplexed input design, demonstrating that it has the potential to be a feasible and cost-effective option for future technologies. Acoustic imaging can be used to probe internal structures and surface geometries in a wide range of material environments where optical techniques are not applicable. The multiplexed input design could be a cost-effective solution for sonar, seismic imaging, semiconductor testing, or other applications where the system listens for a narrowband signal. Future development efforts could focus on processing broadband signals and working in media other than air.
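As a closing sanity check, the ~45° Rayleigh limit quoted in the testing section is consistent with the array's published dimensions. The plain λ/D small-angle estimate is assumed below; the article does not state which exact form of the criterion was used:

```python
import math

# Sanity check of the ~45-degree Rayleigh resolution limit, using the
# simple lambda/D estimate for the square aperture (assumed form of the
# criterion; values are the article's published design parameters).
c = 343.0                 # speed of sound in air (m/s)
f = 7700.0                # test-tone frequency (Hz)
D = 0.057                 # aperture side length (m): 3 gaps x 1.9 cm spacing
theta = math.degrees(c / f / D)   # angular resolution estimate in degrees
print(round(theta))       # ~45, matching the limit observed in testing
```

At 7.7 kHz the wavelength (~4.5 cm) is comparable to the 5.7 cm aperture, which is why the beam is so broad and why CLEAN, rather than a larger array, was used to sharpen the DOA estimates.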
References
Högbom, J. A., 1974, Aperture synthesis with a nonregular distribution of interferometer baselines: Astronomy and Astrophysics, 15, 417–426.
Proakis, J. P. and D. G. Manolakis, 2007, Digital signal processing: Pearson Prentice Hall.
Shannon, C. E., 1949, Communication in the presence of noise: Proceedings of the Institute of Radio Engineers, 37, no. 1, 10–21, doi: 10.1109/JRPROC.1949.232969.

Acknowledgments: I thank my mentor and my father for their guidance with this project. Corresponding author: afsciguy21@aol.com



Memorials

Gene Herrin, the Shuler-Foscue Chair of Geological Sciences at Southern Methodist University, died on 20 November 2010, a day after his 81st birthday and two days after teaching his final seismology class. His 60-year career in geophysics research and teaching is marked with awards, benchmark research contributions, and leadership (often outside of visibility) in the university and geophysics community.

Gene Herrin, 1930–2011

Herrin's professional association with the Department of Earth Sciences (formerly Geology) at Southern Methodist University (SMU) began as an instructor in 1955 and then progressed to assistant professor in 1958, associate professor in 1961, professor in 1964, and Shuler-Foscue Professor of Geological Sciences in 1973. He received a BS degree in physics from SMU in 1951, an MS in geology from SMU in 1953, and a PhD in geology and geophysics from Harvard University in 1958.

After returning to SMU, Gene became interested in earthquake seismology. On 14 December 1953, the worldwide standard network seismic station DAL opened at SMU and consisted of three Benioff short-period seismometers and three Sprengnether long-period seismometers. Gene regularly analyzed the recordings, beginning his life-long interest in seismic observations. This work led to his paper in the Bulletin of the Seismological Society of America in 1957, The reliability of North American seismological stations.

In 1960, Gene became a consultant to the Geotechnical Corporation of Garland, Texas (Geotech), later Teledyne Industries, as it was expanding to support the U.S. Government's research in the detection and identification of nuclear tests. Geotech became regarded as an originator of new ideas for research and development, in addition to an instrument manufacturer, as a result of Gene's contributions. Based on his dissertation work on the Solitario uplift, Gene believed the Big Bend in West Texas would be a site for a quiet seismic station. Geotech and SMU occupied a site near Lajitas, Texas, with the mobile Long Range Seismic Monitoring (LRSM) system to record the Gnome nuclear explosion. His early work was indicative of Gene's continuing interest in regional seismic observations and began research that would lead to a new generation of regional seismic arrays.

Herrin was able to respond to the apparent mislocation of the Gnome shot, as he had been working on improved earthquake location methods with support from the Air Force Office of Scientific Research (AFOSR). Earthquake location methods, including depth estimates, were of critical interest, as nuclear explosions would be restricted to shallow depths. This research resulted in two definitive papers in 1962: the first in the Bulletin of the Seismological Society of America (Regional variations in Pn velocity and their effect on the location of epicenters) and the second in the Journal of the Graduate Research Center (Machine computation of earthquake hypocenters). Subsequent publications quantified the distribution of regional phase velocities across the continental US and linked these observations to variations in upper mantle structure. Further work documented the effects of these three-dimensional structures on earthquake location estimates and source bias in particular.

Gene gave lengthy testimony before the U.S. Congress's Joint Committee on Atomic Energy in 1963 on the Identification and Location of Nuclear Weapons Tests. The testimony focused on the accuracy of locating the epicenters of nuclear tests for the purpose of on-site inspection. His detailed presentation and answers to specific questions were followed by congratulations from Chairman Pastore, who said, "I think you have made a brilliant presentation."

Gene was a member of the Dallas Geophysical Society, the Seismological Society of America, and the American Association of University Professors. He was also a fellow of the Geological Society of America and fellow and past president of the seismology section of the American Geophysical Union. He was a member and chairman of a number of panels and committees in the fields of arms control and disarmament, a consultant to several government agencies, an expert witness before the Joint Committee on Atomic Energy, and an author of 62 publications.

HERB ROBERTSON (with editorial comments by BRIAN STUMP)



J. P. (Pat) Lindsey, who died on 18 June 2011 at age 85, was one of the unsung giants of our profession. Many SEG members did not know Pat nor had occasion to interact with him. Those of us who did know him and did interact with him were consistently impressed with his acumen, his ability to make simple explanations of complicated aspects of seismic data acquisition and data processing, his skills in writing and teaching, and his unending wit and humor.

J. P. (Pat) Lindsey, 1926–2011

Joe Pat Lindsey was born on 7 June 1926 in Wichita Falls, Texas. He attended Midwestern University (in Wichita Falls), Oklahoma State University, and Texas A&M, and served in the United States Navy before he began his geophysical career in 1953. In addition to his contributions to geophysics, Pat also conceived a method, known as the Lindsey-Fox algorithm, which has successfully factored polynomials with thousands of terms (including one with four million).

Pat was my first supervisor when I joined the Geophysics Research Branch of Phillips Petroleum. I was fresh out of Oklahoma State University with a PhD in physics. Although I knew a bit about acoustics, mathematics, and computer coding, I was woefully ignorant of geology and seismic technology. I could not have had a better tutor to teach me the basics of our profession than Pat Lindsey.

Pat had a master's degree in electrical engineering and was particularly interested in signal theory, as that science was practiced in electrical engineering at the time we began to work together in the mid-1960s. These were the years when seismic data acquisition and data processing were transitioning from the analog world to the digital world. Pat was perfectly positioned to be one of the key people at Phillips who had the credentials to implement digital technology. My colleagues and I were fortunate to hold onto his coattails and to absorb some of the principles he preached and taught. From day one, he emphasized the principle that you must know the wavelet to understand what geologic information is embedded in seismic data.

Any of Pat's friends could cite one of his funny quips. I will pass along only one. At the end of my first year at Phillips, I had to complete a job performance form. This was a new experience for me, so I went to Pat to seek his counsel about what was expected and how I should proceed. He made the following statement that was both humorous and deeply philosophical: "Don't lie when you describe what you did, but remember you are competing with liars." I have kept that admonition in my head ever since that early-job counseling. His comments always come back to me every time I fill out my annual job performance forms. I also have found Pat's philosophy applies to almost everything in life. Later, Pat told me he heard this statement from an IRS tax examiner who was advising him about a tax return.

Pat left Phillips in the early 1970s and began a distinguished career with GeoQuest. There he developed cutting-edge data processing and modeling technologies and taught many short courses across the industry on various topics of geophysics. His most noted courses were probably those where he presented his views about how to estimate the basic illuminating wavelet embedded in seismic data and how to use that wavelet to optimize geologic inversion and data interpretation.

Pat was an SEG member for 48 years and, although he deliberately stayed under the radar much of the time, his reputation steadily grew to the point that he was selected to be a Distinguished Lecturer in 1987 and, shortly afterward, to serve on the TLE Editorial Board. Pat was elected SEG Editor for 1991–93. He asked me to be his Assistant Editor, and I did not hesitate in accepting. In those years, there was only one Assistant Editor, so Pat and I worked closely as we had at Phillips.

Pat made a bold move during his editorship when he changed the color and cover of Geophysics. For decades, Geophysics had been published with a bright (and sometimes dull) yellow cover. Pat changed the color to blue and to the format we have today. So even for those who do not know Pat, his influence in our profession lives on through the style and format of the cover of Geophysics that circulates around the world.

In SEG's 75th anniversary year (2005), all past SEG Editors were asked to write a short note describing how the journal was produced and managed during their tenure. Not many Editors provided their thoughts, but Pat did respond. Gerard Schuster was Editor in 2005 and I remember his comments after he read Pat's contribution: "Who is this guy? This is great writing." I told Gerard I appreciated his perception because I considered Pat one of the best writers I have known. I still refer to his course notes.

Thanks, Pat, for your many contributions to our profession. You have left your mark.

BOB A. HARDAGE
Thanks, Pat, for your many contributions to our profesday one, he emphasized the principle that you must know the wavelet to understand what geologic information is em- sion. You have left your mark. bedded in seismic data. BOB A. HARDAGE Any of Pats friends could cite one of his funny quips. I will pass along only one. At the end of my rst year at Phillips, I had to complete a job performance form. This was a new experience for me, so I went to Pat to seek his counsel about what was expected and how I should proceed. He made the following statement that was both humorous and deeply philosophical: Dont lie when you describe what you did, but remember you are competing with liars. I have kept that admonition in my head ever since that early-job counseling. His comments always come back to me every time I ll out my annual job performance forms. I also have found Pats philosophy applies to almost everything in life. Later, Pat told me he heard this statement from an IRS tax examiner who was advising him about a tax return.

August 2011



EAGE/SEG Research Workshop 2011

Towards a Full Integration from Geosciences to Reservoir Simulation


1-2 September 2011 - Trieste, Italy
The multi-disciplinary integration of geosciences has made a lot of progress in the last decade, but this progress is often limited to the early stages of the production chain and mostly to two techniques. Rarely does the integration include three or more data types, and seldom can we quantify the achieved benefits in terms of production increase. This workshop aims to define the state of the art in this integration process, highlighting current weaknesses and discussing what we need to develop further.

Topics:
- Engineering
- Geosciences
- Hydrology
- CO2 sequestration

Who should attend? All professionals whose primary discipline may be geology, geophysics, or reservoir engineering, but who have a keen interest in integrating these domains.

www.eage.org

Register now!

Share Your Knowledge at OTC 2012


Submit a paper proposal for consideration.
Call for Papers Deadline: 22 August 2011

30 APRIL–3 MAY 2012


HOUSTON, TEXAS, USA | WWW.OTCNET.ORG/2012

The Offshore Technology Conference is the world's foremost event for the development of offshore resources in the fields of drilling, exploration, production, and environmental protection.

Faculty of Engineering Department of Earth Science and Engineering

Lectureship in Petroleum Geophysics


Salary Range: £42,500 - £47,450 p.a.

Imperial College London is now inviting applications for a Lecturer in Petroleum Geophysics to be held in the Department of Earth Science and Engineering. The Department is one of the world's leading institutions in both the teaching and research of earth sciences and related engineering. As part of our postgraduate teaching programme, we offer three integrated petroleum-related MSc courses that enjoy an excellent reputation for producing world-class geoscientists, geophysicists, and engineers.

Successful candidates will have an outstanding track record from either industry or academia and, in addition to carrying out their own research programme, will teach and provide administration of the Petroleum Geophysics MSc course. A background in one or more of the following areas is preferred:
- Seismic data acquisition
- Seismic data processing
- Quantitative seismic interpretation
- Rock physics
Experience in other aspects of petroleum geophysics will also be considered.

Successful candidates will be expected, in addition to performing research, to contribute to the teaching and administration of the Petroleum Geophysics MSc course. They will also be responsible for organising and running geophysical field trips, as well as taking on an academic leadership role in the activities of all of the petroleum-related MSc programmes.

Please direct any enquiries to Professor Helmut Jakubowicz via e-mail: helmut@imperial.ac.uk

Our preferred method of application is online via our website: www.imperial.ac.uk/employment (please select Job Search, then enter the job title or vacancy reference number into Keywords; vacancy ref: EN20110072). Please complete and upload an application form as directed. Further particulars for the post are also available on the website by searching with the same vacancy reference number.
Alternatively, if you are unable to apply online or have any queries about the application process, please contact Darakshan Khan via e-mail: d.khan@imperial.ac.uk Closing date: 31st August 2011. Committed to equality and valuing diversity. We are also an Athena Silver SWAN Award winner and a Stonewall Diversity Champion.


Reviews
Developing a Talent for Science, by Ritsert C. Jansen, ISBN 9780521149617, Cambridge, 184 pages, £16.99.

This insightful book offers guidance toward an enhanced, productive academic or professional scientific career. To achieve this, Developing a Talent for Science places a strong emphasis on the mutual benefits gained both in learning from others and in helping others maximize their individual skills. The book contains a wealth of practical advice, often supported by real-life anecdotes and thought-provoking questions. Its self-reflective exercises demand a high level of self-analysis and will likely uncover opportunities to address weaknesses in the reader's own professional environment and behavior. If you are open to change, then Developing a Talent for Science can be a true life-changing guide to a richer and more satisfying professional evolution.

The author heads the Groningen Bioinformatics Centre and is amply supported by an impressive resume, including his membership in the Health Council of the Netherlands. His book incorporates elements from his experiences teaching workshops and classes on talent and academic skills development. The target audience is the scientific community, ranging from the novice level (student) to the more established level (full professor or research director in a large corporation). In addition, much of the material is equally applicable to many occupational fields.

The book's five chapters exhibit his highly disciplined and structured approach. The concisely titled (five words or less) chapters are grouped in short subchapters with simple single-word titles. Subchapters contain exercises and anecdotes, often with catchy titles ("It's not simple to achieve simplicity," "Resume resumed," "Grant-parents"), creating a captivating, refreshing reading experience.

The first chapter describes how a scientist or a scientist-to-be could improve natural talents by honing essential personal skills such as perseverance and communication, while being driven by the fundamental passion for science ("science is fun and you get paid to play"). Using techniques from the book, the individual learns both to access and maintain the strong creative flow during hard times and how to protect the passionate interest by prioritizing ("saying YES to someone else can be saying NO to yourself"). Jansen suggests that viewing problems as challenges can create feelings of euphoria when they are solved, or teach useful lessons when they are not met. Being a good scientist requires excellent communication skills achieved only through dedicated practice. Are you ready to give a two-minute elevator speech about your research during a shared elevator ride with your boss?

The second chapter focuses on synergy and how others can make the individual a better scientist. The subchapter on reading is full of practical suggestions on how to find and filter relevant scientific information in the digital age. For more personal interactions, Jansen suggests listening with an open mind and always being aware of the interpreter inside each individual. For example, he recommends taking a more active role in a conference setting by asking relevant questions. In this setting, timidity is less successful than asking questions that may appear stupid at the time, but which eventually lead to better progress. Communication by definition cannot be unidirectional; thus it is critical to openly invite review of one's work. Clear guidelines for respectfully sharing information can create mutually beneficial and enriching collaboration instead of competition. The second chapter concludes with an unexpected recommendation to consider changing jobs as an effective means of improving one's career.

The next topic in the book is the acceleration of development of an individual's talent by creating the right conditions in a first-class educational or professional setting which is structured to attract new talent. The discussion on the five phases of a team's life span is also enlightening. Actively identifying a team's life cycle can help understand the tensions and options available to the project. The subchapters on how to reward and support scientific staff are a must-read for mentors, team leaders, and bosses ("from fire to inspire").

Chapter 4 discusses how to put the material in the previous chapters into practice. Although the course of the book lays out the underlying strategies, the author emphasizes the need for deliberate practice and ongoing execution of the newly acquired goals. He acknowledges that the efforts will be anything but easy ("they make you hurt, but they do work"). Others have succeeded (which is some consolation), as frequent quotes from giants such as Goethe ("A really great talent finds its happiness in execution") and Einstein ("I have no special talent, I am only passionately curious") remind the reader. Eventually a highly motivated individual is ready to self-analyze: Have I done the necessary homework and set a substantial groundwork for a successful career? Is my CV updated and easy to read within a minute or two? Am I honestly aware of my current weaknesses, and am I truly striving to improve them? This and the following chapter give the interested individual important information to take control of the momentum in the form of general advice and worksheets, including a novel spider-web diagram that encapsulates self-assessments. A series of appealing examples of real-life situations show how you can get your act together and tackle many of the anticipated hurdles.

There are few and only minor issues to criticize in this text. For one, the reader is not alone if the book's title gave the erroneous expectation that it was a teaching guide for science education or a discussion on the origin of talent and aptitude. In fact, the latter topic has been interestingly addressed elsewhere (see, for example, Talent is Overrated by Geoff Colvin) but is not touched on in Jansen's book. In addition, there is some level of repetition in the book, as evident from frequent cross-referencing. Also, several tables and figures appear to be slightly out of context and would benefit from a more thorough analysis of their content.

Finally, the best way to summarize this superb publication is to use the author's own words: "... this book aims to give you ideas rather than to be comprehensive. It will plant seeds in your mind, although the watering, nourishing, weeding, and final harvesting are up to you."

In my opinion, Developing a Talent for Science is a must-read for any professional in the geophysical community and for those pursuing studies in this field.

ANDREAS RUEGER
Highlands Ranch, USA

Essential Image Processing and GIS for Remote Sensing, by Jian Guo Liu and Philippa J. Mason, ISBN 978-0-47051032-2, Wiley-Blackwell, 460 pages, US $159.95.

As in seismic interpretation, users of remote imagery cannot be fully informed without understanding how their data are modified and affected during collection and processing. An advanced text complete with end-of-chapter questions, Essential Image Processing is heavily mathematical, yet its ideas and concepts are still accessible without revisiting one's college math texts. The entire book deals with digital images, the kind most likely to be encountered by a present-day practitioner. Chapters 19 through 22 should prove especially valuable to anyone using, or interested in, this type of data. In chapter 19 the reader is introduced to a basic approach for data processing and the creation of thematic images. The balance of the chapters, 20 through 22, present well-done and interesting case studies. Even the casual user of thematic imagery would benefit from time spent in these chapters.

The book takes a highly mathematical approach to digital images, treating them as an array of data points. This starts by expressing color at each sample point as a vector quantity in a three-dimensional (red, green, blue) space. Chapter 2 deals with processes applied to a single image, such as contrast enhancement. The next chapter logically extends the discussion to multi-image processes such as adding images and creating indices for iron oxide, vegetation, and clay. Chapter 4's filtering discussion will be familiar to those with knowledge of seismic processing because the math, such as fast Fourier transforms, is the same. Image fusion, component analysis, and image classification are the subjects of the next several chapters, while the chapter on correcting images for geometric factors is especially thorough. Included is an excellent treatment of synthetic aperture radar imagery.

The second section of the book is a thorough treatment of GIS data reduction and correction and introduces fuzzy data sets, fuzzy logic, and fuzzy modeling. For those not familiar with the ideas of fuzziness, chapters 17 and 18 will be a bit of a slog, but well worth the time.

Although written by practitioners of British English, spellings like "colour" pass almost unnoticed. As usual with textbooks, the binding has the pages glued but seems good quality. There is an abundance of color images and clear monochromatic graphs. In general, Essential Image Processing would be an excellent resource for thematic image users. This book will allow interpreters to approach their work with a wider and deeper understanding of what has happened to imagery before it lands on their desk or computer.

ROBERT W. AVAKIAN
Okmulgee, USA


Calendar
2011

31 Jul–5 Aug: Development and Production Forum: Opportunities and Challenges in Unconventional Resources, Changping District, Beijing, China (ccrain@seg.org)
7–12 Aug: 1st International Workshop on Rock Physics, Colorado School of Mines, Golden, Colorado, http://1iwrp.rockphysicists.org/
15 Aug: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Melbourne, Australia, www.seg.org/disc (jabbott@seg.org)
15–18 Aug: 12th International Congress of the Brazilian Geophysical Society, Rio de Janeiro, Brazil, http://sys.sbgf.org.br/congresso/ (eventos@sbgf.org.br)
17–19 Aug: NAPE Summer Expo, Houston, USA, www.napeexpo.com
18 Aug: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Perth, Australia, www.seg.org/disc (jabbott@seg.org)
22 Aug: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Adelaide, Australia, www.seg.org/disc (jabbott@seg.org)
25 Aug: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Bangkok, Thailand, www.seg.org/disc (jabbott@seg.org)
29 Aug–2 Sep: GeoSynthesis 2011, Cape Town, South Africa, http://www.sbs.co.za/geosynthesis2011/introduction.html (george.smith@uct.ac.za)
29 Aug–2 Sep: 14th Latin American Geological Congress, Medellin, Colombia (presidencia@sociedadcolombianadegeologia.org)
1–2 Sep: EAGE/SEG Summer Research Workshop 2011: Toward a Full Integration from Geosciences to Reservoir Simulation, Trieste, Italy, http://www.eage.org/events/index.php?eventid=505&Opendivs=s3 (skk@eage.org)
8–9 Sep: AAPG/SEG Fall Expo, Houston, USA (students@seg.org)
13 Sep: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Denver, USA, www.seg.org/disc (jabbott@seg.org)
16 Sep: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, San Antonio, USA (SEG Annual Meeting), seg.org/disc (jabbott@seg.org)
16–18 Sep: SEG/ExxonMobil Student Education Program, San Antonio, USA, www.seg.org/students/SEPExxon
17–18 Sep: SEG/Chevron Student Leadership Symposium, San Antonio, USA, www.seg.org/students/SLSChevron
17–18 Sep: SEG Continuing Education Courses, San Antonio, USA (SEG Annual Meeting), www.seg.org/upcomingcourses (jabbott@seg.org)
18–23 Sep: SEG International Exposition and 81st Annual Meeting, San Antonio, USA, www.seg.org/am
2–5 Oct: Geopressure 2011: Pressure Regimes and Their Prediction, Galveston Island, USA (huffman@fusiongeo.com)
4–6 Oct: Offshore Technology Conference (OTC) Brazil 2011, Rio de Janeiro, Brazil, http://otcbrasil.org/ (info@OTCBrasil.org)
4–7 Oct: 6th Congress of Balkan Geophysical Society, Budapest, Hungary, www.jbgs.org/ (istvan.kesmarky@ges.hu)
5–6 Oct: 4th International Scientific Conference of Young Scientists and Students, Earth Sciences: New Approaches and Achievements, Baku, Azerbaijan (said.sadykhov@bakerhughes.com)
10–13 Oct: International Workshop on Gravity, Electrical and Magnetic Methods and Their Applications, Beijing, China (xli@fugro.com)
13–15 Oct: AAPG/SEG West Coast Expo, California, USA (students@seg.org)
19 Oct: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Copenhagen, www.seg.org/disc (jabbott@seg.org)
21 Oct: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Stavanger, www.seg.org/disc (jabbott@seg.org)
25 Oct: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Paris, France, www.seg.org/disc (jabbott@seg.org)
30 Oct–2 Nov: SPE Annual Technical Conference and Exhibition, ATCE 2011, Denver, USA, http://www.spe.org/atce/2011/ (meetings@spe.org)
2 Nov: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Rijswijk, www.seg.org/disc (jabbott@seg.org)
7 Nov: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Milan, Italy, www.seg.org/disc (jabbott@seg.org)
7–10 Nov: SEG/SPG Geophysical Techniques in Complex Regions, Land and Marine Workshop, Shenzhen City, China (semery@seg.org)
9 Nov: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Aberdeen, www.seg.org/disc (jabbott@seg.org)
14–15 Nov: SEG Continuing Education Courses, California, USA, location TBD, www.seg.org/upcomingcourses (jabbott@seg.org)
15–17 Nov: 2011 International Petroleum Technology Conference, Bangkok, Thailand, http://iptcnet.org/2011/ (iptc@iptcnet.org)
20–22 Nov: 10th SEGJ International Symposium, Kyoto, Japan (mikada@gakushikai.jp)
25 Nov: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Moscow, www.seg.org/disc (jabbott@seg.org)
29 Nov–1 Dec: 5th International Congress on Geophysics, Phuket, Thailand
11 Dec: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Muscat, www.seg.org/disc (jabbott@seg.org)
13 Dec: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Abu Dhabi, www.seg.org/disc (jabbott@seg.org)
15 Dec: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Mumbai, www.seg.org/disc (jabbott@seg.org)
17 Dec: SEG/EAGE DISC: Seismic Acquisition from Yesterday to Tomorrow, Dhahran, www.seg.org/disc (jabbott@seg.org)

2012

16–18 Feb: 9th Biennial International Conference and Exposition on Petroleum Geophysics, Hyderabad, Andhra Pradesh, India (spgindia@rediffmail.com)
4–7 Mar: 10th Middle East Geosciences Conference and Exhibition, GEO 2012, Bahrain, www.geo2012.com
30 Apr–3 May: OTC 2012, Houston, USA, www.otcnet.org
4–8 Jun: 14th International Conference on Ground Penetrating Radar, Shanghai, China (xiexiongyao@tongji.edu.cn)
12–16 Jun: 5th International Conference on Environmental and Engineering Geophysics, Changsha, Hunan Province, China (ljx6666@126.com)
8–12 Jul: SEG/AGU Workshop on Hydrogeophysics, Boise, Idaho, USA (awatson@seg.org)

GEOPHYSICS Today: A Survey of the Field as the Journal Celebrates its 75th Anniversary
Edited by the Editors of GEOPHYSICS

In 2010, GEOPHYSICS turned 75 years old. In celebration, the GEOPHYSICS team invited a collection of papers written by well-recognized experts in various areas of exploration geophysics. These invited papers not only form part of the present book, but they also appear in the September-October 2010 special section of our journal. GEOPHYSICS Today: A Survey of the Field as the Journal Celebrates its 75th Anniversary complements this special section with an additional group of papers, drawn from GEOPHYSICS during the recent past, that address areas the invited articles did not. The result is a snapshot of the state of the art in our field as GEOPHYSICS passes its three-quarter-century mark. This book is Geophysical References Series No. 16.
ISBN 978-1-56080-226-6 Catalog #176A
Mail order to: SEG Book Order Department P. O. Box 702740 Tulsa, OK 74170-2740 U.S.A. Phone: +1-918-497-5546 Fax: +1-918-497-5565 E-mail: books@seg.org

Published 2010, 530 pages, Hardcover SEG Members $59, List $75
Order publications online at:

www.seg.org/bookmart



Announcements
GEOPHYSICS call for papers: Seismic methods in mineral exploration and mine planning

SEG invites papers on the topic of seismic methods in mineral exploration and mine planning for September-October 2012 publication in a special section or supplement of Geophysics. Interest in the exploration and exploitation of mineral resources at great depth is growing rapidly. Consequently, a large increase in the use of seismic methods for targeting deep-seated mineral deposits and for deep mine planning is occurring. Seismic methods provide high-resolution images of geologic structures hosting mineral deposits and, in a few cases, can be used for direct targeting of mineral deposits at depths greater than 1 km. This is not limited to surface seismic surveys but extends to borehole seismic methods such as VSP and crosshole imaging. To date, tens of 2D and 3D surface seismic surveys have been acquired in Canada, Europe, Australia, and South Africa to help in targeting mineral deposits at depth or in designing deep mines. Based on these activities, it appears that seismic methods are becoming established within the mining sector. This brings new opportunities for geophysicists but also new challenges.

The goal of this special Geophysics issue is to highlight recent advancements in data acquisition, processing, and imaging of mineral deposits and their host rock structures. The recent increase in the use of seismic methods in both industry and academia foreshadows developments and applications in the crystalline environment that are certain to be forthcoming. The organizers of the special issue encourage contributors to bring forward and discuss new advances in data acquisition, processing, imaging, and forward modeling of seismic data applied to mineral exploration and mine planning.
Furthermore, contributions from multicomponent sources and receivers, petrophysical studies, and integration of seismic data with other multidisciplinary geophysical and geologic data are encouraged. Unsuccessful case studies, especially 3D cases, demonstrating what went wrong are also welcome.

Interested authors should submit their manuscripts for review no later than 30 November 2011. In addition, the special section/supplement editors would like to receive a provisional title and list of authors as soon as possible. Authors should submit via the normal online submission system for Geophysics (https://mc.manuscriptcentral.com/geophysics) and indicate that the manuscript is a contribution to this special section or issue. The submitted papers will be subject to the regular peer-review process, and the contributing authors are also expected to participate in the review process as reviewers.

We will work according to the following timeline:
- Submission deadline: 30 November 2011
- Peer review complete: 18 May 2012
- All files submitted for production: 1 June 2012
- Publication of issue: September-October 2012

Given the tight timeline for publication of this issue, Geophysics is going to strictly enforce author submission guidelines, covered in Instructions to Authors published in the January-February 2011 issue and on the SEG Web site. Please note that normal Geophysics page and color charges apply.

For specific questions, please contact the special section/supplement editors:
Alireza Malehmir: alireza.malehmir@geo.uu.se
Milovan Urosevic: M.Urosevic@curtin.edu.au
Gilles Bellefleur: Gilles.Bellefleur@NRCan-RNCan.gc.ca
Christopher Juhlin: Christopher.juhlin@geo.uu.se
Bernd Milkereit: bm@physics.utoronto.ca
Colin Farquharson: cgfarquh@mun.ca

Women's Network to host breakfast workshop

In November 2010, SEG President Klaas Koster formed a task force to provide a recommendation on forming a professional women's network in SEG. The proposal was approved by the SEG Executive Committee in March 2011. The Women's Network will establish the pertinent connections for support, collaboration, and mentoring at professional gatherings at the SEG Annual Meeting and regional meetings, in the online community, and in additional informal activities.

One of the first actions taken by the Women's Network was the creation of an online collaboration community, http://www.seg.org/web/seg-womens-network. All SEG members are encouraged to submit information, ideas, and resources that would interest and benefit the group. The SEG Women's Network also has an online presence on LinkedIn and Facebook. The official way to join the network is via the SEG Network page.

The Network's first major event will be a breakfast workshop at the 2011 Annual Meeting. The breakfast will be held on 21 September. The keynote speaker is Alexandra Herger, director of International Exploration and New Ventures at Marathon Oil. Facilitated roundtable discussions will follow. If interested in serving as a table facilitator, please let us know. Attendance is limited to 100, so sign up early. Men who are interested in women's issues are encouraged to attend. The workshop will be followed by a meeting of the SEG Women's Network Committee. Attendees at the breakfast interested in becoming active in the committee are welcome to stay for the committee meeting.

The members and leadership of the SEG's Women's Network extend a thank-you to all SEG members and decision-makers who recognized the need for such a committee and ensured the Network's establishment with their guidance and assistance.


Personals
Baird awarded Colonel Edwin L. Drake Legendary Oilman Award

Houston-based, internationally recognized professional geophysicist and businessman Ralph Baird recently received the Petroleum History Institute's (PHI) most prestigious award, the 2011 Colonel Edwin L. Drake Legendary Oilman Award, for his lifetime commitment, contributions, and achievements in advancing petroleum geology and its application in industry, government, and academia. The award was presented at the Honors and Awards Banquet held during PHI's 2011 annual Symposium on the History and Heritage of the Global Petroleum Industry and Associated Field Trips, held this year 23–25 June in Marietta, Ohio.

The mission of the Petroleum History Institute is to pursue the history, heritage, and development of the modern oil industry from its 1859 inception in Oil Creek Valley, Pennsylvania, to its early roots in other regions of North America and its subsequent spread throughout the world to its current global status. The Petroleum History Institute is a not-for-profit 501(c)(3) corporation, and all donations are thereby tax deductible.

Ron Weaver retires from FairfieldNodal

Ron Weaver, an SEG member since 1969, announces his retirement in December 2010 from FairfieldNodal after 21 years there and 46 years in the industry. He began his career in 1965, joining GSI after serving as an officer in the U.S. Army. Ron can be contacted by e-mail at ronweaver417@att.net.
Larry Woodfork, left, and Ralph Baird, right. Photo courtesy of Maureen and Dan Leech


Membership
Applications for Active membership have been received from the candidates listed below. This publication does not constitute election but places the names before the membership at large in accordance with SEG's Bylaws, Article III, Section 5. If any member has information bearing on the qualifications of these candidates, it should be sent to the president within 30 days. The list can be viewed online at membership.seg.org/applicants/.

For Active membership
Bone, John (R.P.S., Uttoxeter, Staffordshire, United Kingdom)
Duenas, Jorge (Arequipa, Peru)
Jingholm, Johan (Reservoir Exploration Technology ASA, Stockholm, Sweden)
Lesnikov, Vladislav (Saudi Aramco, Ras Tanura, Saudi Arabia)

For transfer to Active membership
Banerjee, Subrata (BG Group, Reading, United Kingdom)
Edwards, Katherine Michelle (Bradenton, FL, USA)
Srivastav, Ajay Kumar (ONGC, Chennai, Tamil Nadu, India)

Requirements for Membership
Active: Eight years' professional experience, partly involving the exercise of independent judgment. Membership applications and details on other types of membership, including Associate, Student, and Corporate, may be obtained at http://membership.seg.org.

For reinstatement and transfer to Active membership
Doruelo, Julius Sondia (Shell Exploration and Production Co., USA)

A memorial fund has been established in the name of these deceased members in honor of their contributions, dedication to the science of geophysics, and support of SEG. Contributions to specific funds will be acknowledged as tax-deductible donations to the SEG Foundation, and family members will be notified of your gifts.

S. Norman Domenico, died on 27 March 2011. J. P. Lindsey, died on 23 June 2011. Harold Seigel, died on 13 July 2011.

Did you know? TLE's gone digital.


view download share print

Find it at www.seg.org/tledigitaledition


The SEG is seeking volunteers to serve as Tour Guides for the Applied Science Education Program and as Session Monitors for the Technical Program at this year's Annual Meeting in San Antonio, Texas.

Monday, 19 September–Thursday, 22 September


For more information on volunteering, visit www.seg.org/amvolunteer
For further details, contact the Volunteer Coordinator Chairman or the SEG Business Office: Rick Moran, volunteers@seg.org; SEG Business Office, 1-918-497-5500.

14TH LATIN AMERICAN GEOLOGICAL CONGRESS AND 13TH COLOMBIAN GEOLOGICAL CONGRESS

It has been more than twenty years since Colombia hosted the Latin American Geological Congress that was held in Bogotá in October 1986. The Colombian Geological Society, the Colombian Geological Survey (INGEOMINAS), the Colombian Association of Petroleum Geologists and Geophysicists (ACGGP), EAFIT University, and the National University of Colombia are very happy to invite all of the world geological community to attend the 14th Latin American Geological Congress, which will be held in Medellín, Colombia, from late August to early September 2011.

The Latin American Geological Congress returns to Colombia after twenty-five years. The Colombian Geological Congress has been convened every two years, and the 13th Colombian Geological Congress will be held concurrently with the 14th Latin American Geological Congress in Medellín at the Plaza Mayor Conventions Center. The event will provide participants with an arena for presentation and discussion of the latest scientific results regarding geological research within Latin America and Colombia; organized field trips to areas of extraordinary geologic settings near Medellín and its surrounding mountain and valley terrains; and pre-congress short courses in August. Besides this, a commercial exhibition of state-of-the-art technologies available for geoscientific research, data acquisition, and transmission will be held.

The Latin American Geological Congress is one of the most important geological meetings after the International Geological Congress. This year's edition is an official SEG meeting that will be attended by many of the mining and oil companies currently operating in Latin America and Colombia. With the large number of papers submitted to date, a strong attendance of geoscientists is assured. See you at Plaza Mayor in Medellín this August and September.

Visit us at www.clg.com

The Leading Edge

Advertising Index
Company | Page | Phone | Fax | E-mail / Web site | Contact

Aramco Services | 861 | 713-432-4600 | | resumes@aramcoservices.com / www.jobsataramco.com
Arcis | 883 | 403-781-1442 | 403-781-1710 | dhenderson@arcis.com / www.arcis.com | Darla Henderson
BGP, Inc. | 905 | 86-10-81201469 | 86-10-81201392 | marketing@bgp.com.cn / www.bgp.com.cn | Mr. Liu Juxiang
BP | 901
Cambridge University Press | 949
CGGVeritas | Cvr 4, 887 | 832-351-8821 | 832-351-8701 | www.cggveritas.com
Dawson Geophysical | Cvr 3 | 800-D-DAWSON | 432-684-3030 | jumper@dawson3d.com / www.dawson3d.com | Steve Jumper
dGB Earth Sciences | 926 | 31-53-4315155 | 31-53-4315104 | info@dgb-group.com / www.dgb-group.com | Paul de Groot
DownUnder GeoSolutions | 841 | 61 8 9287 4100 | 61 8 6380 2471 | mattl@dugeo.com / www.dugeo.com | Matthew G. Lamont, Ph.D.
FairfieldNodal | 847 | 281-275-7500 | 281-275-7550 | smitchell@fairfieldnodal.com / www.fairfieldnodal.com | Steve Mitchell
ffA | 953 | 44 1 224 825 084 | 44 1 224 825 080 | acampan@ffa.co.uk / www.ffa.co.uk | Agnes Campan, Sales and Marketing Manager
Fugro-Geoteam AS | 909 | 713-369-5858 | 713-369-5893 | bhottman@fugro.com / www.fugro.no | Brian Hottman
GCSSEPM Foundation | 843 | 281-586-0833 | | gcssepm@comcast.net / www.gcssepm.org | Norman C. Rosen, Exec. Dir.
GEDCO | 915 | 403-262-5780 | 403-262-8632 | dina@gedco.com / www.gedco.com | Dina Gozhykova
Geokinetics, Inc. | 827 | 713-850-7600 | 713-850-7730 | louise.cooper@geokinetics.com / www.geokinetics.com | Louise Cooper
Geometrics | 849, 875, 918 | 408-954-0522 | 408-954-0902 | rob@mail.geometrics.com / www.geometrics.com | Rob Huggins
GeoTomo LLC | 904 | 281-597-1429 | 281-597-1201 | tomo@geotomo.com / www.geotomo.com | Jie Zhang, Ph.D.
Greyco Seismic Personnel Services | 853 | 713-728-6264 | 713-728-6269 | | Paul Mitcham
IHS Energy Group | 899 | 918-971-7071 (x200) | 918-971-7074 | bmeyer@geoplus.com / www.geoplus.com | Bob Meyer
Imperial College London | 957
ION | 919 | 281-879-3593 | 281-879-3626 | karen.abercrombie@iongeo.com / www.iongeo.com | Karen Abercrombie
Mitcham Industries, Inc. | 881 | 936-291-2277 | 936-295-1922 | sales@mitchamindustries.com / www.mitchamindustries.com | Bill Mitcham
NAPE (American Assoc. of Professional Landmen) | 897 | 817-847-7700 | 817-847-7704 | cpayne@landman.org / www.napeonline.com | Christy Payne
NEOS GeoSolutions | 913 | 281-892-2651 | 281-892-2092 | marketing@neosgeo.com / www.NEOSgeo.com | Chris Friedemann, CMO
NTNU | 830
Paradigm Geophysical | 829 | 713-393-4800 | 713-393-4801 | info@paradigmgeo.com / www.paradigmgeo.com
Parallel Geoscience Corporation | 922 | 541-421-3127 | 541-421-3128 | dherold@parallelgeo.com / www.parallelgeo.com | Dan Herold
PGS Geophysical | 825, 911 | 44 (0) 1932 266404 | 44 (0) 1932 266512 | John.walsh@pgs.com / www.pgs.com | John Walsh
Polarcus DMCC | 831 | 971 4 43 60 966 | 971 4 43 60 808 | Rebecca.Ericson-Grantham@polarcus.com / www.polarcus.com | Rebecca Ericson-Grantham
Resolve Geosciences, Inc. | 879 | 713-972-6208 | 281-395-6999 | jhudgens@resolvegeo.com / www.resolvegeo.com | Jesse Hudgens
Sander Geophysics | 912 | 613-521-9626 | 613-521-0215 | argyle@sgl.com / www.sgl.com | Malcolm Argyle
Schlumberger Oilfield Services | 826 | 281-285-8500 | 281-285-8970 | www.slb.com
Seismic Micro-Technology | 959 | 713-464-6188 | 713-464-6440 | bstephenson@seismicmicro.com / www.seismicmicro.com | Bill Stephenson
SeisWare International, Inc. | 889 | 713-960-6624 | 713-960-6625 | dpaul@seisware.com / www.seisware.com | Doug Paul
Sercel/Vibtech | 835, 871 | 33 2 40 30 1181 | 33 2 40 30 5894 | sales@sercel.fr / www.sercel.com | Alain Tisserand
SIGMA3 Integrated Reservoir Solutions | 917 | 281-363-4903 | 281-363-4657 | gsparkman@fusiongeo.com / www.fusiongeo.com | Gene Sparkman
Teledyne Geophysical Instruments | 893 | 713-666-2561 | 713-666-6951 | chughes@tledyne.com / www.teledyne-gi.com | Chris Hughes, V.P. and G.M.
TERRASYS Geophysics | 863 | 713-893-3630 | 713-893-3631 | info@terrasysgeo.com / www.terrasysgeo.com | Oliver Geisler
TGS-NOPEC Geophysical Co. | Cvr 2 | 713-860-2100 | 713-334-3308 | bobs@tgsnopec.com / www.tgsnopec.com | Bob Schreiber
Transform Software and Services, Inc. | 867 | 720-283-1929 | 720-274-1196 | murray@transformsw.com / www.transformsw.com | Murray Roth
Weatherford Intelligent Completion | 851 | 281-646-7184 | 281-646-7222 | info@weatherford.com
WesternGeco | 833 | 44 1293 55 6655 | 44 1293 55 6627 | www.westerngeco.com

ADlinc is offered free to display advertisers in the current issue of The Leading Edge. Submission of contact information is the responsibility of the advertiser.

GEOPHYSICS Today: A Survey of the Field as the Journal Celebrates its 75th Anniversary
Edited by the Editors of GEOPHYSICS

In 2010, GEOPHYSICS turned 75 years old. In celebration, the GEOPHYSICS team invited a collection of papers written by well-recognized experts in various areas of exploration geophysics. These invited papers not only form part of the present book, but they also appear in the September-October 2010 special section of our journal. GEOPHYSICS Today: A Survey of the Field as the Journal Celebrates its 75th Anniversary complements this special section with an additional group of papers, drawn from GEOPHYSICS during the recent past, that address areas the invited articles did not. The result is a snapshot of the state of the art in our field as GEOPHYSICS passes its three-quarter-century mark. This book is Geophysical References Series No. 16.
ISBN 978-1-56080-226-6 Catalog #176A
Mail order to: SEG Book Order Department P. O. Box 702740 Tulsa, OK 74170-2740 U.S.A. Phone: +1-918-497-5546 Fax: +1-918-497-5565 E-mail: books@seg.org

Published 2010, 530 pages, Hardcover SEG Members $59, List $75
Order publications online at:

www.seg.org/bookmart


Interpreter Sam
Everyday misadventures of the everyman of interpretation

Sam and two other experienced interpreters spent a week team-teaching a basic seismic interpretation class for the company's freshest crop of new-hire geoscientists. This class is one small part of a much larger multiyear, multidisciplinary training program in which new hires receive both classroom and on-the-job instruction in front-line business settings, giving them a taste of the different career paths available to them and ultimately enabling them to make informed choices (or at least express preferences) for assignment when graduating from the program. The basic seismic interpretation class in which Sam is involved is a core element of the geoscience portion of the program, and owes its success and popularity to the fact that about 75% of class time is spent working with real seismic data, both on paper sections and on workstations, to solve real interpretation problems.

On this occasion Sam's co-instructors were Steve and Debbie, both seasoned geophysicists with global interpretation experience and a genuine interest in teaching (they had not been conscripted to teach the class but had actively volunteered to do so). Debbie in particular has a knack for working with less experienced interpreters in a very nonthreatening way, in contrast to the overbearing "I'm smarter than you are" manner of some so-called instructors. She can quickly identify the crux of a student's misconception or difficulty, often rephrasing questions in simple terms so as to lead logically and directly to reasonable solutions. Watching her work one-on-one with students confirmed to Sam that hands-on demonstrations of approaches to problem solving are much more meaningful and lasting than lectures followed by a battery of questions/wrong answers/right answers.

On the first two days of the class, students work with 2D migrated data on paper sections, and by necessity deal with the timeless and usually nontrivial interpretation problem of mis-ties at line intersections. It never ceases to amaze Sam that so many students, and probably a few more experienced interpreters than he'd be comfortable to admit, do not fully understand seismic migration and the practicalities involved in simulating migration when tying 2D migrated lines, no matter whether the data are time- or depth-migrated. This lack of experience with handling 2D migrated sections has led to some memorable exchanges between instructors and students, as would be the case in this class as well.

Toward the end of the second day, during the afternoon snack break, Debbie came up to Sam with a curious smile on her face, as though she had a secret she was bursting to tell.

"Sam, you're not going to believe what one of the students just asked me."

"Before you begin, let me quickly tell you what I heard earlier this afternoon. I was walking behind a student who was hard at work changing a horizon pick because an intersecting line showed that his original pick was too low. He muttered to himself, 'It seems as though all I do here is fix the mistakes I just made a few minutes ago.' Sounded to me like he had uncovered one of the fundamental truths of seismic interpretation."

"I'll bet he had. But the question I was asked was nothing as mundane as that, and I wasn't prepared for it. She asked me if interpreters were ever paid more for having to work with paper sections like they were doing in this class."

"I can see how you'd be surprised by a question like that. What did you say?"

"I replied that we weren't. And for the life of me I was so nonplussed that I couldn't think of anything else to say." Debbie laughed lightly as she spoke, which Sam interpreted as a subconscious response to her own questions: "How could anyone think such a thing?" and "Did I ever think of that?" and "If I had thought of that, what could I have done about it?" She assuredly was not making fun of the student who had asked the question.

"I don't know how I would have answered her either," said Sam. "But I do know this: if I had been paid proportionally more then, in the good old days before workstations, for the tedium of working with paper sections and 2D migrated data, then I would have made and saved more money and probably would have retired long before now. If that were so, today you wouldn't find me here in this classroom, as much as I do enjoy teaching. Instead I'd be sitting in a lounge chair on a beach somewhere, sipping a mixed drink from a hollowed-out pineapple and feeling a sea breeze on my face, gazing up at a sky so pure and deeply blue that I'd have no words to describe its hue. Thoughts of 2D migration and mis-ties, velocity model building, reserves calculations, well planning meetings, prospect inventory reviews, and any number of other things would drift by like wispy cirrus clouds in my perfect sky, if they occurred to me at all."

"You're dreaming, Sam."

"Indeed."
Corresponding author: dherron7@gmail.com



Get to Know Our SeisAble Benefits


CGGVeritas' advanced geophysical equipment, technology, and services unlock the potential of the subsurface, empowering your oil and gas discovery and reservoir optimization. With our continuous client commitment, the passion of our people, and our dedication to health, safety, and the environment, CGGVeritas delivers safer, better answers and brings SeisAble BenefitsTM globally to all of our stakeholders.

cggveritas.com
