TABLE OF CONTENTS
I. INTRODUCTION .........................................................................................1
II. MANDATORY NOTICES
A. Real Party-in-Interest
B. Related Matters......................................................................................2
III. FEE PAYMENT
IV.
V.
VI.
VII. STATEMENT OF PRECISE RELIEF REQUESTED FOR EACH CLAIM CHALLENGED
VIII. CLAIMS 2–16 OF THE '791 PATENT ARE UNPATENTABLE UNDER 35 U.S.C. § 103(a)
IX.
CONCLUSION ............................................................................................55
IPR2016-01571 Petition
U.S. Pat. No. 5,523,791
I. INTRODUCTION
Petitioner Unified Patents Inc. ("Unified") requests Inter Partes Review (IPR) of claims 2–16 of U.S. Patent No. 5,523,791 ("the '791 patent"), assigned to John L. Berman ("Berman") (EX1001).
The '791 patent, which has a filing date of October 12, 1993, discloses methods and apparatuses for placing images over standard video. EX1001 at 1:6–7. The '791 patent suggests that it enables a standard television viewer to express opinions or make jokes by letting the viewer place static images over video. Id. at 1:34–36, 1:42–48.
But technologies for selecting images from a video source and selectively merging those images onto an external display were well known in the art prior to the effective filing date of the '791 patent. Indeed, the practice of inserting an overlay image onto a background video image on devices such as a television, video and television games, video systems, TV synchronization, and remote control was known. Id. at 1:7–9.
Years before the '791 patent's filing date, a number of prior art patents and printed publications disclosed the claimed combination of elements. See EX1002 at 23–25. And even the '791 patent itself concedes explicitly that most of the limitations in the claims are based on known standards and prior art systems familiar to those of ordinary skill in the art when it was filed. See, e.g., EX1001 at 4:35–36 (phase lock unit); 4:45–46 (video control logic); 5:19 (microprocessor); 5:27 (address change).
As this petition shows, at least the disclosures of Russell (EX1003), Intel User's Manual (EX1004), and Marlton (EX1005), among many other patents and publications (and for the detailed reasons explained below), warrant cancellation of claims 2–16.
II. MANDATORY NOTICES
A. Real Party-in-Interest
discovery. See EX1007 (Petitioner's Voluntary Interrogatory Responses); see also Unified Patents Inc. v. American Vehicular Sciences, LLC, IPR2016-00364, Paper 13, at 5–7 (P.T.A.B. July 27, 2016) (instituting IPR and denying challenge to Unified's real party-in-interest identification).
B. Related Matters
Upon information and belief, the '791 patent was asserted in the following cases: John Berman v. Comcast Corp. et al., 2-16-cv-00412 (E.D. Tex.); John Berman v. DIRECTV, LLC, 3-16-cv-00382 (N.D. Tex.); and John Berman v. AT&T Inc., 3-16-cv-00382 (N.D. Tex.). The patent was also at issue in AT&T Services, Inc. et al. v. John Berman, 3-16-cv-01106 (N.D. Tex.), which was consolidated with the previously listed case.
C.
The signature block of this petition designates lead counsel, backup counsel,
and service information for petitioner. Unified designates P. Andrew Riley (Reg.
No. 66,290) as lead counsel and designates Yoonhee Kim (Reg. No. L1048) as
backup counsel. Both can be reached at Finnegan, Henderson, Farabow, Garrett &
Dunner, LLP, 901 New York Avenue, NW, Washington, DC 20001-4413 (phone:
202.408.4000; fax: 202.408.4400). Unified also designates as backup counsel
Jonathan Stroud (Reg. No. 72,518), who can be reached at 1875 Connecticut Ave.
NW, Floor 10, Washington, D.C., 20009 (phone: 650-999-0455). Petitioner
consents to e-mail service at Berman791-IPR@finnegan.com.
III. FEE PAYMENT
The required fees are submitted under 37 C.F.R. §§ 42.103(a) and 42.15(a). If any additional fees are due during this proceeding, the Office may charge such fees to Deposit Account No. 50-6990.
IV.
Petitioner is not barred or estopped from requesting IPR challenging the '791 patent on the grounds identified. See 37 C.F.R. § 42.104(a). Specifically: (1) Petitioner is not the owner of the '791 patent; (2) Petitioner is not barred or estopped from requesting IPR; and (3) Petitioner has not been served with a complaint alleging infringement of the '791 patent.
V.
The '791 patent has an effective filing date of October 12, 1993. At that time, a person having ordinary skill in the art (hereinafter, "POSA") of video image processing (i.e., in the art for the '791 patent) would have had (i) a B.S. degree in computer engineering, computer science, or equivalent training, and (ii) approximately two years of experience or research related to image processing and application. See EX1002 at 15–16.
VI.
The '791 patent describes a method and apparatus for placing images over standard television transmissions on various devices and for distorting images. Such devices include standard televisions, video and television games, video systems, and remote controls. EX1001 at 1:7–9.
The interacting apparatus of the '791 patent consists of two units. Id. at 2:56–57. The main unit contains the stored overlayed images, the circuitry that synchronizes the overlayed images with the background video, and all of the logic that implements the features of the invention. Id. at 2:56–60. The second unit, the joystick input device, allows the viewer to command the main unit. Id. at 2:60–62. The '791 patent declares that "[i]t will be clear, however, to those skilled in the art that other arrangements, which combine the functions of the main unit with the source of background video, the joystick control, or the television display are within the scope and spirit of the present invention." Id. at 2:67–3:4.
B. Prosecution History
The application that led to the '791 patent was filed on October 12, 1993 and did not claim priority to any earlier application. Thus, the '791 patent has an effective filing date of October 12, 1993.
The Examiner opened prosecution in March 1994 by rejecting original claims 1, 2, and 4 as anticipated by Russell (EX1003). EX1006 at 26–30. In the Office Action, the Examiner specifically cited Russell at columns 1:59–2:22, 13:67–14:11, 3:45–5:23, and 11:66–13:29 to reject original claim 1; cited column 1:64–67 to reject original claim 2; and cited column 1:59–64 to reject original claim 4. Id. at 28; EX1002 at 27.
In response, Applicant added new claims 5–18 and argued that "[t]he system of Russell, while sophisticated, is nevertheless complex and costly" and "[u]nlike Russell, in the present invention, the background image is not digitized prior to processing." EX1006 at 49. After an interview with the Examiner, Applicant added new claims 19–20. Id. at 61. At this time, Applicant retracted its previous argument against Russell, stating that "[i]t is not applicant's intention to limit the scope of the claims to a system where the background is not digitized prior to processing, as applicant's invention may be applied to systems using digitized images (e.g., HDTV or the like)." Id.
The next office action rejected several claims as anticipated by U.S. Patent No. 5,134,484 to Willson. Id. at 64–65. In response, Applicant cancelled claims 1, 2, 4, and 12 and amended claim 5 to further include the limitation of selecting both image and position of the image. Id. at 73–74, 80–81. Ultimately, claims 5, 6, and 11 issued as claims 2, 3, and 9, respectively. Id. at 45 (claim 9), 69–70 (claim 2), 70–71 (claim 3).
Notably, when adding new claims in response to the office action citing Russell, Applicant introduced means-plus-function limitations. Id. at 42–48. None of the newly introduced means-plus-function limitations appear in the specification;1 they appear only in the claims. EX1002 at 33.
VII. STATEMENT OF PRECISE RELIEF REQUESTED FOR EACH CLAIM CHALLENGED
A.
Petitioner requests that the Board hold claims 2–16 unpatentable under
1 As IPR does not permit challenges under 35 U.S.C. § 112, Unified has not
LLC, 792 F.3d 1339, 1349 (Fed. Cir. 2015). Such means-plus-function limitations should be construed to cover the corresponding structure disclosed in the specification and structural equivalents thereof. 35 U.S.C. § 112, ¶ 6; Williamson, 792 F.3d at 1347.
1.
the claims. The term "operator input means" in claim 2 has no antecedent basis.
4.
Function: "receiving said input command and generating overlay image data." Id. at 6:27–29. Corresponding structure: a microprocessor. Id. at 4:52–56, Fig. 2.
5.
Function: "selectively reading the overlay image data from said memory means in synchronization with said synchronization means and merging said overlay image with said background video image." Id. at 6:38–42. Corresponding structure: a phase-lock unit and video serializers. Id. at 4:66–5:9, Fig. 2.
Below is Figure 2 of the '791 patent in which the terms construed above and corresponding structures thereof are indicated with annotations.3
8.
However, a number of the functional blocks shown in Figure 2 of the '791 patent
The term "second memory means" or "second memory" is not defined or even recited in the specification, other than in the claims. EX1002 at 51. For this petition, Unified construes the term "second memory means" to mean any ordinary and customary memory for storing "a plurality of overlay image data representing a plurality of overlay images." Id.
9.
7:44–47. Corresponding structure: a phase-lock unit, such as Motorola MC1378. Id. at 4:16–19, Fig. 2.
13.
VIII. CLAIMS 2–16 OF THE '791 PATENT ARE UNPATENTABLE UNDER 35 U.S.C. § 103(a)
A.
U.S. Patent No. 5,594,467 (EX1005, "Marlton"), filed on May 30, 1991, claims priority to G.B. Patent Application No. 9026667, filed on December 7, 1990. Marlton is prior art under 35 U.S.C. § 102(e).
D.
which is directed to an apparatus for inserting an overlay image onto a background video image, Russell's image processing system is capable of "selectively merging graphics, text, digitized video frames and/or full motion video into a user selectable composite television display." Id. at claim 1. Russell's video controller contains the circuitry for controlling "the mixing of real time moving or full motion video pictures and captured/stored video images." Id. at 11:66–12:4.
1.
Figure 2 of Russell shows that images from a video source (26) are fed into the signal processing circuit:
FIG. 2, EX1003.
In particular, the video input signals (composite video or RGB) are fed into the video controller of Russell's image processing system. Id. at Abstract, 11:66–12:38. Thus, Russell teaches the function and structure corresponding to the claimed "video input means." EX1002 at 71.
2.
The '791 patent specification explicitly allows for other arrangements than the joystick control unit, which is the corresponding structure of this claim limitation. EX1001 at 2:67–3:4; EX1002 at 75. In this regard, Russell relates to an image processing system "capable of selectively merging graphics, text, digitized video frames and/or full motion video from any composite or RGB component video source, such as live video, video camera, video laser disc or video cassette recorder, into a user selectable composite television display in which a number of windows may be overlayed with the windows having variable size and location under user control." EX1003 at 1:59–67 (emphases added). In Russell, the host computer controls the grab control circuit, which enables grabbing of a live video signal as part of the video controller. Id. at 7:7–25, 12:4–8, Fig. 2.
Russell also discloses that users can control both size and location of video frames in an output display, id. at 15:11–13, and that "[t]he frames of video to be grabbed may be grabbed at random and enhanced in size and color and inserted in variable size windows with each window being independent of the others," id. at 14:8–11. A POSA would then identify a "viewer input means" present in Russell's image processing system, for example, as a part of the host computer shown in Figure 2 such as a keyboard or a mouse. EX1002 at 77. In view of the above, Russell teaches the function and structure corresponding to the claimed "viewer input means." Id.
4.
Figure 2 of Russell discloses that the coprocessor (20) coupled to the host computer (28) generates overlay image data for the memory (32). EX1003 at 3:52–65, Fig. 2:
A POSA would understand that overlays in the form of windows superimposed on the background of the display are generated by processing graphic images or captured video frames into window data structures supported by the Intel coprocessor. EX1004 at 3-4 (§ 3.1.3); EX1002 at 80. Thus, Russell alone or in view of Intel User's Manual teaches the function and structure corresponding to the claimed "processor means." EX1002 at 80.
5.
Figure 2 of Russell shows the memory (32) coupled to the coprocessor (20). EX1003 at Fig. 2. Russell discloses that users can capture a frame of the digitized video at random and store it in digitized format as a full size image in the memory (32) of the graphics coprocessor (20). Id. at Abstract (emphases added). Thus, Russell teaches the function and structure corresponding to the claimed "first memory means." EX1002 at 82.
6.
"address generator means, coupled to said memory means, said processor means and said synchronization means for selectively generating memory addresses for said memory means in response to said processor means and in synchronization with said synchronization means"
The '791 patent explicitly states that "[t]he video control logic generates the row address strobe, column address strobe, and read/write signals 42 for the random access memory (DRAM) 26. As is well-known in the art, these signals allow read and write access to the desired memory location in the DRAM." EX1001 at 4:43–48 (emphasis added) (bold in original).
Figure 2 of Russell shows the control signal interceptor (84) coupled to the memory (32), the grab control (54), and the coprocessor (20). EX1003 at Fig. 2; EX1002 at 84.
Specifically, the control signal interceptor (84) in Russell changes the "RD" or read signal from the graphics coprocessor 20a to a "WR" or write signal and sends the write signal to the video memory 32a. EX1003 at 7:18–22.
Further, Figure 5 of Russell shows the grab control (54) coupled to the coprocessor (20) as well as to the control signal interceptor (84), and that the coprocessor generates address A0–A8 for DRAM 32a:
FIG. 5, EX1003.
The control signal interceptor (84) generates signals "WE" (write enable) and "BE" (bus enable) which are used to control the storage and retrieval of data to and from the DRAM 32a such that "Address lines A0–A8 of the DRAM 32a are output by the graphics coprocessor 20a." Id. at 6:66–7:6 (emphases added).
Window information is accessed by the coprocessor synchronized with the background video signal. EX1002 at 84. Intel User's Manual describes that overlay control and other signals are derived from the sync signals extracted from the incoming video signal. EX1004 at 1-4 ("Essentially, the DP [display processor] operates as an address generator that accesses appropriate portions of memory-resident bitmaps."); EX1002 at 84; see also EX1003 at 7:64–8:12, 9:29–31, Fig. 12. Thus, a POSA would understand that an address generator means "selectively generating memory addresses for [a] memory means" is present in Russell as coupled to the memory (32), the processor (20), and the synchronization means that is included in the video controller (200). EX1002 at 85.
Russell also discloses that the coprocessor controls the control signal interceptor in such operations. EX1003 at 6:63–66. In Russell, the grab control circuit, the synchronizing signal generation circuit, and the overlay control circuit are all part of the video controller. Id. at 12:4–8. In particular, the phase detector in the synchronizing signal generation circuit generates the synchronizing signals, id. at 12:39–42, and the overlay control circuit, using the output from the phase detector, generates the signals as required by the coprocessor, id. at 13:13–18. In view of such disclosure, a POSA would recognize that Russell teaches selectively generating memory addresses for [a] memory means in response to [a] processor means and in synchronization with [a] synchronization means. EX1002 at 86.
Thus, Russell, either alone or in view of Intel User's Manual, teaches the function and structure corresponding to the claimed "address generator means." Id.
7.
onto a background video image would have looked to Russell and Intel User's Manual for techniques for such implementation. EX1002 at 89. A POSA, as of October 1993, would have found it obvious and would have been readily able, by combining the means and circuits disclosed in Russell alone or in view of Intel User's Manual, to arrive at the claimed invention. Id.
As further explained in the chart below,4 Russell alone or in view of Intel User's Manual teaches all elements of claim 2 of the '791 patent.
[2.P] An apparatus for inserting an overlay image onto a background video image, said apparatus comprising:
4 All emphasis in the claim charts in this petition is added unless otherwise noted.
display in which a number of windows may be overlayed with the windows having variable size and location under user control." EX1003 at 1:59–67.
Fig. 2 of Russell shows that images from a composite or component RGB video source (26) are fed into the signal processing circuit: "A television image processing system in which images from any composite or component RGB video source (26) may be fed into a signal processing unit (60)[.]" EX1003 at Abstract.
[2.B] synchronization means, coupled to said video input means, for generating synchronization signals from said background video signal;
"The Video Controller 200 is preferably comprised of four subsystems; the Oscillator circuit, the Synchronizing Signal Generation circuit, the Overlay Control circuit, and the Grab Control circuit, all of which are illustrated in FIG. 12." EX1003 at 12:4–8.
"The Video Controller 200 contains the circuitry for synchronizing the operations of the image processing system and controlling the mixing of real time moving or full motion video pictures and captured/stored video images." EX1003 at 11:66–12:4.
"In order to maintain a phase lock between the 12.5 MHz clock signals and the video input signals (Composite Video or RGB), a logic array 206 monitors signal 'MUTE' from the Video Interface/Decoder circuit and signal 'XTL' from the Bus Interface circuit. When either of these two signals becomes active, array 206 outputs signal 'XTL' to Analog Multiplexer 208. Analog Multiplexer 208 then adjusts the control voltage which is being output to Crystal 210, and, upon receiving a phase lock signal from Phase Detector 212, outputs a Phase-Adjust signal to Main Oscillator 202. In this mode, the 12.5 MHz clock signals are phase locked to Crystal 210's output. When the 'MUTE' and 'XTL' signals are not active, the output of Crystal 210 is ignored, and Analog Multiplexer 208 instead phase locks the Main Oscillator 202 to the phase lock signal generated by Phase Detector 214, which receives its composite sync signal from the Video Interface/Decoder circuit." EX1003 at 12:20–38.
"With respect to the synchronizing signal generation circuit portion of the video controller 200, Phase Detector 214 generates the synchronizing signals for the image processing system. It is controlled by the 5 MHz clock signal from array 204. Phase Detector 214 preferably compares the Composite Sync signal from the Video Interface/Decoder circuit with its own internally generated line frequency. A voltage proportional to the phase difference between the Composite Sync signal and the internal line frequency is preferably output by Phase Detector 214 to Analog Multiplexer 208. In addition, Phase Detector 214 generates Composite Sync, Horizontal Sync, Vertical Sync, Field Indent, and Burst Gate signals." EX1003 at 12:39–52.
[2.C] viewer input means, comprising selection means for receiving an input command from a viewer to select an overlay image and position input means for receiving a position input from a viewer and generating a position signal to position an image on a display;
"The present invention relates to an image processing system capable of selectively merging graphics, text, digitized video frames and/or full motion video from any composite or RGB component video source, such as live video, video camera, video laser disc or video cassette recorder, into a user selectable composite television display in which a number of windows may be overlayed with the windows having variable size and location under user control." EX1003 at 1:59–67.
"The frames of video to be grabbed may be grabbed at random and enhanced in size and color and inserted in variable size windows with each window being independent of the others." EX1003 at 14:8–11.
"The Video Controller 200 is preferably comprised of four subsystems; the Oscillator circuit, the Synchronizing Signal Generation circuit, the Overlay Control circuit, and the Grab Control circuit, all of which are illustrated in FIG. 12." EX1003 at 12:4–8.
"The control signal interceptor 84 enables video frame grabbing in the image processing system of the present invention. When the host computer 28 signals the grab control 54 to accept the live video, for example, the grab control enables both a live video buffer 86 (FIG. 2) and the control signal interceptor 84. The live video buffer 86 preferably receives video information the entire time the image processing system is in operation, but does nothing with it until the Grab Control 54 is active. When the Grab Control 54 is active, the live video buffer 86 sends the video data to the memory 32a via the data bus (D0–D15). The control signal interceptor 84, as previously mentioned, changes the 'RD' or read signal from the graphics coprocessor 20a to a 'WR' or write signal and sends the write signal to the video memory 32a. The live video signal is then passed to the video memory 32a through the enabled video buffer 86 and stored as a digitized full size video frame of instantaneously grabbed video." EX1003 at 7:7–25.
"FIFO buffers 90 and 92, when enabled via signal 'OE' from the image processing system's control bus, transfer stored video data via the Color Processor's 60a internal data bus to either the 82786 graphics coprocessor 20a directly or to the 1 Megabyte×8 bit DRAM 32a. This completes the loop of video storage and retrieval within each of the Color Processor circuits 60a, 60b, 60c." EX1003 at 7:54–61.
[2.D] processor means, coupled to said operator input means, for receiving said input command and generating overlay image data;
stored as a full size video image, and manipulated to provide a variable size and location window for each grabbed image in the composite video display 22a in which the grabbed images are merged with text and/or graphics and/or full motion video, such as live television signals or television signals from any composite video source." EX1003 at 3:52–65.
Russell states: "A graphics coprocessor controls the manipulation and retrievable storage of instantaneously grabbed full motion video images which are digitized and stored full size in a memory for enabling selective merger of the images with the graphics, text and full motion video in the composite television display." EX1003 at 1:68–2:6.
Intel User's Manual discloses: "The extensive features of the 82786 accommodate many designs. The list below contains some of the main 82786 features.
...
Hardware support for fast manipulation and display of multiple windows on the screen
. . . ." EX1004 at 1-2.
Intel User's Manual discloses:
"1.1.2 Display Processor/CRT Controller (DP)
The Display Processor (DP) traverses bitmaps generated by the Graphics Processor (GP) or external CPU, organizes the data, and displays the bitmaps in the form of windows on the screen. The DP has a video shift register that can assemble several windows on the screen from different bitmaps in memory and zoom any of the windows in the horizontal and/or vertical directions. When the DP detects a window edge, it automatically switches to the next bitmap to display the subsequent window.
Essentially, the DP operates as an address generator that accesses appropriate portions of memory-resident bitmaps. The data fetched from bitmaps is passed to the DP CRT controller, which displays the bitmap data on the screen. The DP CRT controller generates and synchronizes the Horizontal Synchronization (HSync), Vertical Synchronization (VSync), and Blank signals. The DP performs all these functions independent of the GP. Refer to Chapter 3 for a detailed discussion of the DP and its functions." EX1004 at 1-3 to 1-4 (figure omitted).
[2.E] memory means, coupled to said processor means, for storing said overlay image data;
[2.F] address generator means, coupled to said memory means, said processor means and said synchronization means for selectively generating memory addresses for said memory means in response to said processor means and in synchronization with said synchronization means; and
Russell discloses:
"The control signal interceptor 84, as previously mentioned, changes the 'RD' or read signal from the graphics coprocessor 20a to a 'WR' or write signal and sends the write signal to the video memory 32a." EX1003 at 7:18–22.
"When the host computer 28 signals the grab control 54 to accept the live video, for example, the grab control enables both a live video buffer 86 (FIG. 2) and the control signal interceptor 84." EX1003 at 7:9–12.
Relatedly, Fig. 5 of Russell shows the grab control 54 coupled to the graphics coprocessor 20 as well as to the control signal interceptor 84. Fig. 5 is a more detailed functional block diagram of the embodiment of Fig. 2. EX1003 at 2:42–44.
Explaining Fig. 5, Russell discloses:
"Logic array 84 is the control signal interceptor which, under control of the graphics coprocessor 20a, enables/disables drivers 80 and 82 when appropriate. In addition, the interceptor 84 preferably generates signals 'WE' (write enable) and 'BE' (bus enable) which are used to control the storage and retrieval of data to and from the DRAM 32a, with each color processing circuit 60a, 60b and 60c each preferably containing its own DRAM 32a, 32b, 32c, respectively, such as a 1 Megabyte×8 bit DRAM. Address lines A0–A8 of the DRAM 32a are output by the graphics coprocessor 20a." EX1003 at 6:63–7:6.
"The Video Controller 200 is preferably comprised of four subsystems; the Oscillator circuit, the Synchronizing Signal Generation circuit, the Overlay Control circuit, and the Grab Control circuit, all of which are illustrated in FIG. 12." EX1003 at 12:4–8.
"With respect to the synchronizing signal generation circuit portion of the video controller 200, Phase Detector 214 generates the synchronizing signals for the image processing system." EX1003 at 12:39–42.
"With respect to the overlay control circuit portion of the video controller 200, using the outputs from Phase Detector 214, array 206 generates the horizontal and vertical synchronizing signals required by the 82786 graphics coprocessors of the Color Processor circuit 60a, 60b, 60c." EX1003 at 13:13–18.
means, coupled to said memory means, for selectively reading the overlay image data from said memory means in synchronization with said synchronization means and merging said overlay image with said background video image.
selectively merging graphics, text, digitized video frames and/or full motion video into a user selectable composite television display." EX1003 at claim 1.
"The system of the present invention will be described as one for preferably providing high resolution color images in which full motion color video images, such as live color television signals 26, may be instantly grabbed using the graphics coprocessor 20, which normally merely generates computer images on a television screen, at random by the user, digitized and stored as a full size video image, and manipulated to provide a variable size and location window for each grabbed image in the composite video display 22a in which the grabbed images are merged with text and/or graphics and/or full motion video, such as live television signals or television signals from any composite video source." EX1003 at 3:52–65.
"By utilizing the system of the present invention, a flexible image processing system is provided which enables full color image processing to provide video displays of text and/or graphics and/or still frame video and/or full motion video in a composite overlay display having multiple windows of still video images, graphics or text along with full motion video from any composite video source, such as live TV signals, laser disks, still frame recorders, video cassette recorders and video cameras." EX1003 at 13:67–14:8.
E.
includes elements similar to those recited in claim 2. EX1001 at 6:44–45. Specifically, elements 1 through 7 discussed above in Section VIII.D are recited in claim 3, while the selection means and position input means of element 3 ("viewer input means") are separately recited in claim 3 and in claim 8, dependent from claim 3. All the discussion concerning claims 3 and 8 should apply to claims 10 and 15 because the only relevant difference between the two sets of claims is subject matter: claims 10 and 15 are directed to a method for inserting an overlay image onto a background video image. Id. at 7:60–61.
Below, we address the elements in claims 3–8 and 10–15 that may differ from elements in claim 2, discussed above.
1.
Figure 2 of Russell shows the video buffer (86) coupled to the coprocessor (20) and to the memory (32):
FIG. 2, EX1003.
In this regard, Russell discloses that "[w]hen the host computer 28 signals the grab control 54 to accept the live video, for example, the grab control enables both a live video buffer 86 (FIG. 2) and the control signal interceptor 84. The live video buffer 86 preferably receives video information the entire time the image processing system is in operation[.]" Id. at 7:9–14. When the grab control is active, the live video signal is passed to the video memory 32a through the enabled video buffer 86 and "stored as a digitized full size video frame of instantaneously grabbed video." Id. at 7:22–25.
Russell's image processing system is capable of storing a number of grabbed video frames that can be retrieved. Id. at claim 2. In view of the ability to store multiple video frames usable as overlay windows in memory 32a, a POSA would understand that the user operating the host computer can choose which of these frames to transfer to a window command block that the coprocessor then overlays on the background video. EX1002 at 94. Thus, the second memory means is the portion of memory 32a into which a particular frame is copied to form a window to be displayed by the coprocessor. EX1002 at 93–94. Russell, therefore, teaches the function and structure corresponding to the claimed "second memory means." Id.
2.
(claim 10)
Russell discloses that:
	When the host computer 28 signals the grab control 54 to
	accept the live video, for example, the grab control
	enables both a live video buffer 86 (FIG. 2) and the
	control signal interceptor 84. . . . When the Grab Control
	54 is active, the live video buffer 86 sends the video data
	to the memory 32a via the data bus (D0–D15). The
	control signal interceptor 84, as previously mentioned,
	changes the RD or read signal from the graphics
	coprocessor 20a to a WR or write signal and sends the
	write signal to the video memory 32a. The live video
	signal is then passed to the video memory 32a through
	the enabled video buffer 86 and stored as a digitized full
	size video frame of instantaneously grabbed video.
EX1003 at 7:9–25 (emphases added). Russell further discloses that the memory
means is "capable of retrievably storing a plurality of different user selected
grabbed digitized video frames." Id. at claim 2 (emphasis added). Based on this
capability, it would have been obvious to a POSA that an operator-guided-input
structure would allow selection from among the available images. EX1002 at ¶ 95.
Thus, Russell teaches the recited relationships between the operator input means
(host computer 28), the processor means (coprocessor 20a), the first memory
means (memory 32a), and the second memory (portion of memory 32a in which
grabbed frames are stored), and functions thereof. Id.
3. First switching means
Russell discloses that the memory means is under the control of the host
computer (operator input means) and the coprocessor. EX1003 at Fig. 2. The
memory means is "capable of retrievably storing a plurality of different user
selected grabbed digitized video frames" for a video display. Id. at claim 2. Thus,
Russell discloses the function and structure corresponding to the claimed "first
switching means." EX1002 at ¶ 97.
4. Second switching means
Russell discloses that the memory means is under the control of the host
computer (operator input means) and the coprocessor. EX1003 at Fig. 2. The
memory means is "capable of retrievably storing a plurality of different user
selected grabbed digitized video frames" to be selectively combined with a
background video. Id. at claim 2. Thus, Russell discloses the function and structure
corresponding to the claimed "second switching means." EX1002 at ¶ 99.
5. Start latch
This element is a routine and well-known video start address structure at the
time of the invention, particularly in view of the '791 disclosure; a start address
and a latch for storing it were well-known in the art as a structure for controlling
the position of an image on a video screen. EX1001 at 5:27–34; EX1002 at ¶ 100.
In addition, Russell's image processing system operates on an Intel 82786
Graphics Coprocessor, and Intel User's Manual shows that Russell's coprocessor
has a Descriptor Address Pointer, which points to the first of a chain of memory
locations containing memory data structures, each of which defines the
location of a window on the display and points to the next location in order
to define the location on the screen for a particular graphic object. EX1004 at 3-22
(Fig. 3-12 describing descriptor address pointer); 3-4 to 3-9 (§§ 3.1.3–3.1.3.2);
EX1002 at ¶ 100. A POSA would then understand that Russell, either alone or in
view of Intel User's Manual, teaches the start latch for storing a start address from
the processor. EX1002 at ¶ 100.
6.
This element recites a routine and ordinary effect of changing a start address,
particularly in view of the '791 disclosure that the ability to move a display
element by changing the starting address was already known. EX1001 at 5:27–34;
EX1002 at ¶ 101. It would have been obvious to a POSA for the value placed in the
start latch to be controlled by a user-generated position signal. EX1002 at ¶ 101.
Russell also discloses that users can control both "size and location" of
video frames in an output display by commanding the coprocessor with an operator
input means. EX1003 at 1:59–2:6 (emphasis added). In Russell, the video
controller controls the process of overlaying stored images onto background. Id. at
11:66–12:4. Russell's overlay control circuit generates signals under the control of
the coprocessor. Id. at 13:13–18. As discussed above with respect to claims 6 and
13, from which claims 7 and 14 depend, a start latch is present in Russell for storing
a start address from the processor. Thus, it would have been obvious for the
processor means to perform the recited functions of receiving a position signal
from an operator input means and changing a start address so that the selected
image will overlay a background image in a particular position. EX1002 at ¶ 101.
As further explained in the chart below,5 Russell alone or in view of Intel
User's Manual teaches all elements of claims 3 and 10 of the '791 patent.6

5 All emphasis in the claim charts in this petition is added unless otherwise
noted.
6 The chart does not specifically address claims 4–8 and 11–15. Those claims are

[3.P] An apparatus for inserting an overlay image onto a background video
image, said apparatus comprising:
	See [2.P] above.
[3.A] video input means, for receiving a video signal corresponding to said
background video image;
	See [2.A] above.
[3.B] synchronization means, coupled to said video input means, for
generating synchronization signals from said background video signal;
	See [2.B] above.
[3.C] operator input means, for receiving an input command from an
operator to select an overlay image;
	See [2.C] above.
[3.D] processor means, coupled to said operator input means, for receiving
said input command and generating overlay image data;
	See [2.D] above.
[3.E] first memory means, coupled to said processor means, for storing said
overlay image data;
	See [2.E] above.
[3.F] address generator means, coupled to said memory means, said
processor means and said synchronization means for selectively generating
memory addresses for said memory means in response to said processor
means and in synchronization with said synchronization means; and
	See [2.F] above.
[3.G] video output means, coupled to said memory means, for selectively
reading the overlay image data from said memory means in synchronization
with said synchronization means and merging said overlay image with said
background video image; and
	See [2.G] above.
[3.H] second memory means, coupled to said processor means, for storing a
plurality of overlay image data representing a plurality of overlay images,
	Fig. 2 of Russell shows the video buffer 86 coupled to the graphics
	coprocessor 20 and to the memory 32.
	Russell states: "When the host computer 28 signals the grab control 54
	to accept the live video, for example, the grab control enables both a
	live video buffer 86 (FIG. 2) and the control signal interceptor 84. The
	live video buffer 86 preferably receives video information the entire
	time the image processing system is in operation, but does nothing
	with it until the Grab Control 54 is active. When the Grab Control 54
	is active, the live video buffer 86 sends the video data to the memory
	32a via the data bus (D0–D15). The control signal interceptor 84, as
	previously mentioned, changes the RD or read signal from the graphics
	coprocessor 20a to a WR or write signal and sends the write signal to
	the video memory 32a. The live video signal is then passed to the
	video memory 32a through the enabled video buffer 86 and stored as a
	digitized full size video frame of instantaneously grabbed video."
	EX1003 at 7:9–25.
[10.C] receiving an input command from an operator to select an overlay
image,
	See [2.C], [3.C] above.
[10.D] generating overlay image data in response to said input command
and storing said overlay image data in a first memory,
	See [2.D], [2.E], [3.D] above.
[10.E] selectively generating memory addresses for the first memory in
synchronization with said synchronization signals,
	See [2.F], [3.F] above.
[10.F] selectively reading the overlay image data from the first memory in
synchronization with said synchronization signal and merging said overlay
image with said background video image,
	See [2.G], [3.G] above.
[10.G] storing a plurality of overlay image data in a second memory
representing a plurality of overlay images, and
	See [3.H] above.
[10.H] receiving an input command from an operator to select an overlay
image from said plurality of overlay images and storing the selected overlay
image data in the first memory.
	See [3.I] above.
F.
graphics or full motion video from a video source, in which the user can
manipulate a video image to provide a variable size and location window.
EX1003 at 1:59–67, 3:52–65. Marlton discloses a method of distorting a displayed
image by using different scaling factors in the horizontal and vertical directions.
EX1005 at 3:35–37. Thus, to the extent that the preamble of claim 9 is limiting,
Russell in view of Marlton teaches an apparatus for distorting a video image.
EX1002 at ¶ 108.
1. Synchronization means
A person of ordinary skill in the art would understand that the composite
video input signal contains horizontal and vertical synchronization signals, and that
a typical sync separator creates pulses for use by the rest of the video processing
circuitry corresponding to the timing periods defined by these signals. EX1002 at
¶ 110. Russell discloses that the synchronizing signal generation circuit and the
overlay control circuit are part of the video controller. EX1003 at 12:4–8. The
video input signals in Russell are fed into the video controller. Id. at 11:66–12:38.
In particular, the phase detector in the synchronizing signal generation circuit of
Russell's video controller generates phase lock signals as well as horizontal and
vertical synchronization signals. Id. at 12:20–52; EX1002 at ¶ 110. Thus, Russell
teaches the function and structure corresponding to the claimed "synchronization
means." EX1002 at ¶ 110.
3.
4.
video output means, coupled to said operator input means and
said synchronization means, for selectively applying, in
response to said input command and a predetermined pattern,
said horizontal synchronizing signals and said horizontal
synchronizing pulses to each horizontal line of said video signal
and outputting a distorted video signal for generating a
distorted video image
EX1005 at 7:24–49, by which the user may manipulate an overlaid image, such as
setting the horizontal scale as noted in the previous element, and the information
describing the scaling comprises the required pattern. EX1002 at ¶ 113. It is the
interpolator that copies an image of, for example, a circle, and processes it by
scaling to create a distorted circle or ellipse. EX1002 at ¶ 113; EX1005 at 24:16–29,
Fig. 37b. Also, horizontal and vertical synchronization signals generated from
the synchronizing signal generation circuit of Russell's video controller are applied
throughout the image processing system. EX1003 at 12:39–59. Russell and
Marlton in combination, therefore, teach techniques that allow distortion of video
images using a processor. EX1002 at ¶ 113. Thus, Russell in view of Marlton
discloses the claimed video output means coupled to an operator input means and a
synchronization means. Id.
Marlton discloses "selectively applying, in response to said input command
and a predetermined pattern, said horizontal synchronizing signals and said
horizontal synchronizing pulses to each horizontal line of said video signal and
outputting a distorted video signal for generating a distorted video image." A
POSA would understand that the term "predetermined pattern" means the data
necessary to determine the position of the video on each scan line, and that the
window definition data in Marlton that determines the position and scaling, as
noted above, is such pattern information. EX1002 at ¶ 114.
Marlton, which relates to combining video signals and graphics signals,
additionally discloses a method of generating a distorted video image. EX1005 at
1:10–15, 3:35–37; EX1002 at ¶ 115. Marlton illustrates the effects of horizontal
and vertical scaling of a video image. EX1005 at 24:16–29. In particular, Marlton
discloses that in Figure 37b (reproduced below), the shape of the display window
522 is distorted as the image is scaled by a factor of 50% in the vertical direction,
and by a factor of 180% in the horizontal direction, to produce a flattened image. Id.
at 24:22–27.
Marlton thus teaches a fading/mixing matrix for combining the video and
graphics signals that can distort a displayed image in the horizontal and vertical
directions. Id. at 2:42–52, 3:35–37; EX1002 at ¶ 115. Russell's image processing
system, combined with Marlton's techniques, can generate image distortions to the
display. EX1002 at ¶ 115. Russell and Marlton are in the same field of image
processing systems aimed at applying an overlay image onto a background video
image. Id. at ¶ 107. Thus, a POSA would have had a reasonable expectation of
success in combining the two references, as the combination would yield
analogous and predictable results. Id.
As further explained in the chart below,7 the combination of Russell and
Marlton teaches all elements of claims 9 and 16 of the '791 patent.
[9.P] An apparatus for distorting a video image, said apparatus comprising:
	Marlton discloses: "The displayed image can also be distorted in its
	aspect ratio by using different scaling factors in the horizontal and
	vertical directions." EX1005 at 3:35–37.
[9.A] video input means, for receiving a video signal corresponding to said
video image;
	See [2.A], [3.A] above.

7 All emphasis in the claim charts in this petition is added unless otherwise
noted.
[9.B] synchronization means, coupled to said video input means, for
separating vertical and horizontal synchronization signals from said video
signal and generating horizontal and vertical synchronizing pulses;
	Russell discloses: "The Video Controller 200 is preferably comprised
	of four subsystems; the Oscillator circuit, the Synchronizing Signal
	Generation circuit, the Overlay Control circuit, and the Grab Control
	circuit, all of which are illustrated in FIG. 12." EX1003 at 12:4–8.
	"In order to maintain a phase lock between the 12.5 MHz clock
	signals and the video input signals (Composite Video or RGB), a
	logic array 206 monitors signal MUTE from the Video
	Interface/Decoder circuit and signal XTL from the Bus Interface
	circuit. When either of these two signals becomes active, array 206
	outputs signal XTL to Analog Multiplexer 208. Analog Multiplexer
	208 then adjusts the control voltage which is being output to Crystal
	210, and, upon receiving a phase lock signal from Phase Detector 212,
	outputs a Phase-Adjust signal to Main Oscillator 202. In this mode,
	the 12.5 MHz clock signals are phase locked to Crystal 210's output.
	When the MUTE and XTL signals are not active, the output of Crystal
	210 is ignored, and Analog Multiplexer 208 instead phase locks the
	Main Oscillator 202 to the phase lock signal generated by Phase
	Detector 214, which receives its composite sync signal from the
	Video Interface/Decoder circuit." EX1003 at 12:20–38.
	"With respect to the synchronizing signal generation circuit portion of
	the video controller 200, Phase Detector 214 generates the
	synchronizing signals for the image processing system. It is controlled
	by the 5 MHz clock signal from array 204. Phase Detector 214
	preferably compares the Composite Sync signal from the Video
	Interface/Decoder circuit with its own internally generated line
	frequency. A voltage proportional to the phase difference between the
	Composite Sync signal and the internal line frequency is preferably
	output by Phase Detector 214 to Analog Multiplexer 208. In addition,
	Phase Detector 214 generates Composite Sync, Horizontal Sync,
	Vertical Sync, Field Indent, and Burst Gate signals." EX1003 at
	12:39–52.
See [2.C], [3.C] above.
Russell discloses: "An image processing system in accordance with
claim 2 wherein said graphics coprocessor means is capable of user
manipulation of said plurality of different stored video frames and said
video switch and merging means is capable of selectively merging a user
selected plurality of said plurality of said stored grabbed digitized video
frames with said full motion video, and/or graphics, and/or text in said user
selectable composite television display for providing a user selectable
composite television display comprising a user selectable plurality of
windows of still video images, graphics and/or text and/or full motion
video." EX1003 at 15:20–31.
Marlton discloses: "FIGS. 37a and 37b illustrate the effects of horizontal
and vertical scaling of the video image 510. In FIG. 37a, the input region is
set to full size so that the whole of the input video image 510 is input into
the fieldstores. The size of the display window is set to be 50% of the full
screen size. This causes the video image in the fieldstores to be scaled by a
factor of 50% in both the horizontal and vertical directions, by the scaler
502. In FIG. 37b, the shape of the display window 522 on the screen has
been distorted relative to the aspect ratio of the input video image. The
whole of the video image is still displayed in the display window 522,
however, the image is scaled by a factor of 50% in the vertical direction,
and by a factor 180% in the horizontal direction, to produce a flattened
image." EX1005 at 24:16–29.
compensate for this, Driver 216 receives Phase Detector 214's Sync
outputs and drives them to +5 V DC logic levels. These Sync outputs are
then applied throughout the image processing system's subsystems."
EX1003 at 12:39–59.
Russell relates to an image processing system "capable of selectively
merging graphics, text, digitized video frames and/or full motion video into
a user selectable composite television display." EX1003 at claim 1.
Russell further discloses: "full motion color video images, such as live
color television signals 26, may be instantly grabbed using the graphics
coprocessor 20, which normally merely generates computer images on a
television screen, at random by the user, digitized and stored as a full size
video image, and manipulated to provide a variable size and location
window for each grabbed image in the composite video display 22a in
which the grabbed images are merged with text and/or graphics and/or full
motion video, such as live television signals or television signals from any
composite video source." EX1003 at 3:52–65.
Marlton discloses: "Another preferred feature of the invention is a
fading/mixing matrix for combining the video and graphics signals. The
matrix is controlled on a pixel by pixel basis, enabling full video, full
graphics, or a mixture of video and graphics, to be displayed at each
individual pixel position in the combined display. The matrix allows
windowing and overlaying of the video and graphics images in the
combined display. The matrix can be controlled by the logical colors used
in the graphics display. Each logical color is assigned keying/fading
attributes, as well as a physical color in the normal manner." EX1005 at
2:42–52.
"The displayed image can also be distorted in its aspect ratio by using
different scaling factors in the horizontal and vertical directions." EX1005
at 3:35–37.
"The luma signal is fed through a sync extractor 84 that removes the sync
signal from the luma." EX1005 at 6:56–57.
bandwidth of the input luma data stream from the 13.5 MHz input rate, so
that aliasing caused by the sub-sampling is reduced. Referring to FIGS. 8
and 9, the particular configuration of the filter 100 is dependent on the
degree of horizontal scaling, the narrowest band pass filter combination E
being selected for the smallest video picture size." EX1005 at 7:24–49.
"FIGS. 37a and 37b illustrate the effects of horizontal and vertical scaling
of the video image 510. In FIG. 37a, the input region is set to full size so
that the whole of the input video image 510 is input into the fieldstores.
The size of the display window is set to be 50% of the full screen size. This
causes the video image in the fieldstores to be scaled by a factor of 50% in
both the horizontal and vertical directions, by the scaler 502. In FIG. 37b,
the shape of the display window 522 on the screen has been distorted
relative to the aspect ratio of the input video image. The whole of the video
image is still displayed in the display window 522, however, the image is
scaled by a factor of 50% in the vertical direction, and by a factor 180% in
the horizontal direction, to produce a flattened image." EX1005 at
24:16–29.
[16.P] A method of distorting a video image comprising the steps of:
	See [2.P], [3.P], [9.P] above.
[16.A] receiving a video signal corresponding to said video image,
[16.B] separating vertical and horizontal synchronization signals from said
video signal and generating horizontal and vertical synchronizing pulses,
[16.C] receiving an input command from an operator for selecting a normal
or distorted image,
[16.D] selectively applying, in response to said input command and a
predetermined pattern, said horizontal synchronizing signals and said
horizontal synchronizing pulses to each horizontal line of said video signal
and outputting a distorted video signal for generating a distorted video
image.
IX.
CONCLUSION
Challenged claims 2–16 are unpatentable and should be cancelled. Petitioner
respectfully requests that the Board grant this petition for inter partes review and
institute trial. The undersigned attorneys welcome a telephone call should the
Office have any requests or questions. If there are any additional fees due in
connection with the filing of this paper, please charge the required fees to our
Deposit Account No. 50-6990.
Respectfully submitted,
Dated: August 10, 2016
CERTIFICATION UNDER 37 C.F.R. § 42.24(d)
Under the provisions of 37 C.F.R. § 42.24(d), the undersigned hereby
certifies that the word count for the foregoing Petition for Inter Partes Review
totals 12,600, which is less than the 14,000 words allowed under 37 C.F.R.
§ 42.24(a)(1)(i).
CERTIFICATE OF SERVICE
The undersigned certifies that the foregoing Petition for Inter Partes
Review and the associated Exhibits 1001 through 1008 were served on August
10, 2016, by Overnight Express Mail at the following address of record for the
subject patent.
Robert P. Bell
Robert P. Bell & Associates, P.C.
917 Duke Street
Alexandria VA 22314
Telephone: (703) 544-5281
Christopher M. Joe
Eric W. Buether
Mark D. Perantie
Buether Joe & Carpenter, LLC
1700 Pacific Avenue, Suite 4750
Dallas, TX 75201
Telephone: (214) 466-1272
Devon H. Decker
Bibby, McWilliams & Kearney, PLLC
410 Pierce Street, Suite 241
Houston, TX 77002
Telephone: (713) 936-9620