IIUG Forum: IDS Forum

Informix CDC

Setting up Golden Gate to talk to Informix 11.5.FC8. On the GG side, "full row
logging" must be enabled for replication. All tables are fine with the "ADD
TRANDATA" command from ggsci, except one (table_A) below. Has anyone seen this
error? I've seen posts about CLIENT_LOCALE or DB_LOCALE needing to be set, but
that doesn't change anything. I created the syscdcv1 database with it unset
(the default) and with it set, but no difference. Nothing "peculiar" about
table_A that I've found yet, but I'm still looking closer. I haven't oncheck'd
anything yet, but that's next.

06/14/16 13:19:21 FRL: Could not get Dictionary info for <database>:table_A.
ISAM -1213
06/14/16 13:19:21 FRL:Error in setting FULL row logging for table
<database>:table_A
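
For context, the ggsci steps in play here would presumably be something like
the following (database, schema, and credentials are placeholders, not values
from this post):

GGSCI> DBLOGIN SOURCEDB mydb USERID informix, PASSWORD xxxxxxx
GGSCI> ADD TRANDATA myschema.table_A

The oncheck step mentioned above would likely be along the lines of
"oncheck -cDI <database>:table_A" to rule out problems in the table's data and
index pages.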

Thanks in advance!
Mark Scranton
The Mark Scranton Group
mark@markscranton.com




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37265]

*******************************************************************************

Re: RE: test

Let's see if my response issue is fixed also.....




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37266]

*******************************************************************************

srt* files PSORT_DBTEMP DBSPACETEMP strange

Dear All,

I have a production engine (11.7 Enterprise on AIX 7.1) with the following
setup for temp dbspaces:

> onstat -d | grep -i temp
70000102ff5e930 5 0x42001 100 1 4096 N TBA informix tempdbs1
70000102ff5ead8 6 0x42001 101 1 4096 N TBA informix tempdbs2
70000102ff5ec80 7 0x42001 102 1 4096 N TBA informix tempdbs3
70000102ff5ee28 8 0x42001 103 1 4096 N TBA informix tempdbs4
70000102ff72828 100 5 25 2097127 2096524 PO-B-- /usr/informix/dbs/tempdbs1
70000102ff72a28 101 6 25 2097127 2096524 PO-B-- /usr/informix/dbs/tempdbs2
70000102ff72c28 102 7 25 2097127 2096524 PO-B-- /usr/informix/dbs/tempdbs3
70000102ff72e28 103 8 25 2097127 2096524 PO-B-- /usr/informix/dbs/tempdbs4

So basically 4 different chunks 8GB each.

> onstat -c | grep -i temp
....
DBSPACETEMP tempdbs1:tempdbs2:tempdbs3:tempdbs4
....

PSORT_DBTEMP is not set, neither in the onconfig nor as an environment
variable. However, even though the tempdbs spaces are 99% free most of the
time, I get quite a lot of writes in $INFORMIXDIR/tmp, i.e. srtXXX* files.
AFAIK all this sorting should be done within the tempdbs spaces if space is
available. This is killing my performance. If I must, I'll move the sort
directory to a cooked file on faster disks; I'm just wondering whether I'd make
things even worse, i.e. would setting PSORT_DBTEMP cause all the activity that
now goes to the tempdbs spaces (which are raw UNIX files) to be forwarded to a
cooked file system, making the situation even worse?

Thank you,
A
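
For reference, a quick way to double-check what the engine actually has in
effect is a couple of standard onstat calls; the grep patterns below are
illustrative:

> onstat -c | grep DBSPACETEMP                      # value the running engine is using
> onstat -g env | grep -i -E 'DBSPACETEMP|PSORT'    # environment the engine was started with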




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37267]

*******************************************************************************

Re: srt* files PSORT_DBTEMP DBSPACETEMP strange

Hi,

how did you put DBSPACETEMP into the onconfig? Did you just add it with an
editor, or did you use onmode -wf? Just modifying the onconfig is not enough:
unless you bounce the engine, it will not be aware of the tempdbs spaces.

Further checks (sketched below):
Are the tempdbs spaces used at all? Does the free counter in onstat -d change
over time?
Are the devices active according to onstat -g iof?
Are the devices used if you set the DBSPACETEMP environment variable?

Marcus Haarmann
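
A rough sketch of those checks, using the dbspace names from the post
(onmode -wf updates both the onconfig file and the running engine):

onstat -d | grep tempdbs        # watch the free-page counter over time
onstat -g iof                   # per-chunk I/O activity
onmode -wf DBSPACETEMP=tempdbs1:tempdbs2:tempdbs3:tempdbs4    # set it dynamically and persist it
export DBSPACETEMP=tempdbs1:tempdbs2:tempdbs3:tempdbs4        # per-session override, for a quick test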

----- Original Message -----

From: "ALEKSANDAR IVANOVSKI"<aleksandar.ivanovski@gmail.com>
To: ids@iiug.org
Sent: Thursday, 16 June 2016 05:07:52
Subject: srt* files PSORT_DBTEMP DBSPACETEMP strange [37267]


*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37268]

*******************************************************************************

Re: srt* files PSORT_DBTEMP DBSPACETEMP strange

Aleksandar:

Several comments:

First, I can't think of why the engine would be writing sort-work files
to $INFORMIXDIR/tmp if PSORT_DBTEMP is not set.

Second, note that sorting to cooked files is often faster than sorting
to RAW temp dbspaces, because the files' lives are often short enough that
they never actually get written out to disk and they live entirely in the OS's
cache.

Third, you can make that even faster if you point PSORT_DBTEMP to a RAM
disk or in-memory filesystem like tmpfs.
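
A minimal sketch of that idea on Linux, assuming a tmpfs mount is available
(mount point and size are illustrative; on AIX a ramdisk-backed filesystem
would play the same role):

mkdir -p /ifxsort
mount -t tmpfs -o size=4g tmpfs /ifxsort      # run as root
export PSORT_DBTEMP=/ifxsort                  # in the environment of the client or engine doing the sorts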

FWIW.

Art

Art S. Kagel, President and Principal Consultant
ASK Database Management
www.askdbmgt.com

Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions
and do not reflect on the IIUG, nor any other organization with which I am
associated either explicitly, implicitly, or by inference. Neither do
those opinions reflect those of other individuals affiliated with any
entity with which I am affiliated nor those of the entities themselves.

On Wed, Jun 15, 2016 at 11:07 PM, ALEKSANDAR IVANOVSKI <
aleksandar.ivanovski@gmail.com> wrote:





*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37269]

*******************************************************************************

Re: srt* files PSORT_DBTEMP DBSPACETEMP strange

Hi,

DBSPACETEMP is set in the onconfig and the engine has been restarted since.

Yes, the counter changes in onstat -d/-D, but only by quite small amounts (and
I know that the programmers use A LOT of temp tables).

70000102ff72828 100 5 25 20834672 20178684 /usr/informix/dbs/tempdbs1
70000102ff72a28 101 6 25 21273194 20567977 /usr/informix/dbs/tempdbs2
70000102ff72c28 102 7 25 20589829 20261282 /usr/informix/dbs/tempdbs3
70000102ff72e28 103 8 25 21503097 21127646 /usr/informix/dbs/tempdbs4

> onstat -D | grep -i temp
70000102ff72828 100 5 25 20834672 20178684 /usr/informix/dbs/tempdbs1
70000102ff72a28 101 6 25 21273194 20567977 /usr/informix/dbs/tempdbs2
70000102ff72c28 102 7 25 20589829 20261282 /usr/informix/dbs/tempdbs3
70000102ff72e28 103 8 25 21503097 21127646 /usr/informix/dbs/tempdbs4

And yes, -g iof shows activity on the chunks:
105 tempdbs4 88085573632 21505267 86547742720 21129842 2582.6

op type count avg. time

seeks 0 N/A

reads 0 N/A

writes 0 N/A

kaio_reads 8296729 0.0002

kaio_writes 7643479 0.0006

Art, I guess what you are suggesting is that cooked files stay in the file
system cache only, so they never end up being written to disk? If that's so, we
should be OK.

I'll give pointing the files to a RAM filesystem a try.
Thank you

A.




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37270]

*******************************************************************************

RE: srt* files PSORT_DBTEMP DBSPACETEMP strange

PSORT_DBTEMP can be set on the client side, if a small test I did is correct.
So your developers are probably setting PSORT_DBTEMP on their side.

I'm not sure whether you can override this behaviour on the server side.

Luis Filipe Silvestre Marques
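
One way to check that theory is to look at the environment a particular session
sent to the server; a rough sketch, assuming you can spot the session id of the
job writing the srt* files:

onstat -g ses               # find the session id of the offending job
onstat -g env <sid>         # environment that session supplied, including PSORT_DBTEMP if it was set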

-----Original Message-----
From: ids-bounces@iiug.org [mailto:ids-bounces@iiug.org] On Behalf Of
ALEKSANDAR IVANOVSKI
Sent: Thursday, 16 June 2016 14:14
To: ids@iiug.org
Subject: Re: srt* files PSORT_DBTEMP DBSPACETEMP strange [37270]


*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37271]

*******************************************************************************

Problems changing my database to Buffering

Good morning!!

Please, can someone help me?

I did a dbimport of my database. But when I try to change it to Buffering I get
the following error:

prueba@linux-2xb9:~> ontape -s -B prueba
buc_fe.c : Archive API processing failed at line 175 for msgtype

Program over.

Interrupt received ...

In the ONCONFIG file I have TAPEDEV configured as follows:

TAPEDEV /dev/null
TAPEBLK 32
TAPESIZE 240000000
LTAPEDEV /dev/null
LTAPEBLK 32
LTAPESIZE 240000000

Please, I don't know what I can do to change it to Buffering. If someone can
help me. Thank you.

Regards,

David




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37272]

*******************************************************************************

Re: Problems changing my database to Buffering

David,

The command is

ontape -s -L 0 -B prueba

Regards

On 16 June 2016 at 9:26, DAVID VALLEJO <vallejod@hotmail.com> wrote:





*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37273]

*******************************************************************************

Re: Problems changing my database to Buffering

Thanks for replying...!!

I also did what you suggested and I get the following:

informix@linux-2xb9:~/bases/Base2006> ontape -s -L 0 -B prueba
buc_fe.c : Archive API processing failed at line 175 for msgtype

Program over.

Interrupt received ...

I have the following version of Informix installed:
informix@linux-2xb9:~/bases/Base2006> onstat -

IBM Informix Dynamic Server Version 12.10.FC1DE -- On-Line -- Up 00:00:30 --
142516 Kbytes

My Informix is installed in a virtual machine on Linux.

Thanks in advance.

Regards,

David
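
For what it's worth, with TAPEDEV set to /dev/null the archive that ontape
takes is simply discarded. If a real level-0 archive is wanted instead, 12.10
also accepts a directory as the archive device; a rough onconfig sketch,
assuming /backup/informix exists and is writable by user informix:

TAPEDEV /backup/informix
TAPEBLK 32
TAPESIZE 240000000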




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37274]

*******************************************************************************

Re: RE: srt* files PSORT_DBTEMP DBSPACETEMP st....

Thank you for noticing this.

However, since I've "caught" this behavior while running a cron job that does a
dbaccess exec proc XXX, I don't think that is the case: there is no definition
of this kind in the script or anywhere in the environment.

Thank you,
A




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37275]

*******************************************************************************

Odd Bufferpool Behaviour

AIX 6.1
IDS 11.70.FC8X8

I have a new machine with a lot of extra memory. Given that, I added a bunch
of new buffers (4k, 8k, & 16k) and went to town testing away. Long story
short, with all those new buffers I get virtual segments added at startup. In
onstat -g mem, I see some very large pools named orvfl-buff(number). When I
reduce the number of buffers to the original settings, those guys go away.

Has anyone seen this behavior before? Does it negatively impact performance? I
plan to find the sweet spot with the largest number of buffers and no extras
in the virtual portion but cannot do that today.
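
For reference, the kind of onconfig entries involved would look roughly like
this; the buffer counts are illustrative only, not the values from this system:

BUFFERPOOL size=4K,buffers=1000000,lrus=16,lru_min_dirty=50.00,lru_max_dirty=60.00
BUFFERPOOL size=8K,buffers=500000,lrus=16,lru_min_dirty=50.00,lru_max_dirty=60.00
BUFFERPOOL size=16K,buffers=250000,lrus=16,lru_min_dirty=50.00,lru_max_dirty=60.00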

Thanx,
Dan




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37276]

*******************************************************************************

Re: Odd Bufferpool Behaviour


In 11.70, if you have either enough different bufferpools or one specific
buffer pool big enough that you exhaust the maximum size for one segment
(specifically the one resident segment), then the bufferpools spill over into
additionally created virtual segments, and an orvfl_buff pool gets created to
put the buffer pool into that virtual memory (since it is no longer in the
resident segment). So what you are seeing is expected in 11.70. In 12.x the
buffer pool code was changed (in whatever version buffer pools became dynamic)
so that each buffer pool attempts to create its own shared memory segment
(which is now of type "B" rather than a resident or virtual segment). A buffer
pool in virtual segments should not impact performance, assuming you have the
physical memory on the machine to back the amount of
total shared memory being created.
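
A quick way to see the layout being described, as a rough sketch:

onstat -g seg                   # shared memory segments and their class (R = resident, V = virtual, B = 12.x buffer pool)
onstat -g mem | grep orvfl      # the spill-over buffer pool pools mentioned above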

Jacques Renaut
IBM Informix Advanced Support
APD Team




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37277]

*******************************************************************************

Error in HPL while loading data

Hello All,

I am performing HPL from an 11.70 instance to a 12.10 instance on the same
Linux machine.

Instances:
source: ids_test1 (11.70.FC8W1)
target: ids_test2 (12.10.FC6X5)

I am performing HPL using three scripts as per Technote
(http://www-01.ibm.com/support/docview.wss?uid=swg21587169)
step1: Create job
step2: unloading data
step3: loading data

I am able to perform the steps below successfully on the source and target
instances, except for step 3 (loading data) on the target for a few tables:
At source instance:
step1 (create job)
step2 (unloading data)

At target instance:
step 1 (create job)
step 3 (loading data)

For a few tables the data loads correctly, whereas for the others it fails with
the table.log error below:

$more test_table.log
Fri Jun 17 13:38:11 2016

SHMBASE 0x0000004000000000
CLIENTNUM 0x0000000049020000
Session ID 4506

Load Database -> test_db
Load Table -> test_table
Device Array -> test_table
Record Mapping -> test_table
Convert Reject -> /tmp/test_table.rej
Filter Reject -> /tmp/test_table.flt
Set mode of index test_table_idx1 to disabled
Set mode of index test_table_idx2 to disabled
Error occured at HPL failpoint: 10561
Fatal error getting stream buffer from server

and online.log:
08:44:22 Assert Failed: No Exception Handler
08:44:22 IBM Informix Dynamic Server Version 12.10.FC6X5
08:44:22 Who: Session(4597, informix@vbrdbs28.ux.corp.local, 28162,
0x462dfe98)

Thread(5704, stream_2.0, 462b3928, 11)

File: mtex.c Line: 508
08:44:22 Results: Exception Caught. Type: MT_EX_OS, Context: mem
08:44:22 Action: Please notify IBM Informix Techical Support.
08:44:22 See Also: /usr/informix12/tmp/af.1a30f0a5
08:44:24 Thread ID 4506 will now be suspended.

As per the error (HPL failpoint), I ran the script with the environment
variables below, but it did not work for the remaining tables:
export PLOAD_SHMBASE=0x000004000000000
export IFX_XFER_SHMBASE=0x000005000000000

Referred the Technote:
http://www-01.ibm.com/support/docview.wss?uid=swg21683918

As per the Technote, I am using the same values mentioned in it; I'm not sure
whether I need to adjust them based on the SHMBASE and CLIENTNUM values in the
error logs.

Any assistance will be appreciated.
Thank you.

~ Pravin Bankar




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37278]

*******************************************************************************

Re: Error in HPL while loading data

Hi Pravin,

as it looks, some server-side assertions are causing these failures. Would
this reproduce for the same set of tables?

If possible, please open a PMR, including the HPL job definitions, table
schemas and af files (as mentioned in online.log).

As an alternative to HPL you might also want to look into external tables,
which can also be used for super-fast unload and load and can even use
pipes for simultaneous unload-load without the need for any intermediate
file system storage.
http://www.ibm.com/support/knowledgecenter/search/external%20table?scope=SSGU8G

HTH,
Andreas
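
For illustration, a minimal external-table unload/load sketch; the table and
file names are made up for the example, not taken from this thread:

-- on the source instance: unload to a flat file
CREATE EXTERNAL TABLE ext_customer SAMEAS customer
    USING (DATAFILES ("DISK:/data/unload/customer.unl"), FORMAT "DELIMITED");
INSERT INTO ext_customer SELECT * FROM customer;

-- on the target instance: load from the same file
CREATE EXTERNAL TABLE ext_customer SAMEAS customer
    USING (DATAFILES ("DISK:/data/unload/customer.unl"), FORMAT "DELIMITED");
INSERT INTO customer SELECT * FROM ext_customer;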

From: "PRAVIN BANKAR"<pravinebankar@gmail.com>
To: ids@iiug.org
Date: 18.06.2016 11:31
Subject: Error in HPL while loading data [37278]
Sent by: ids-bounces@iiug.org



*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37279]

*******************************************************************************

Re: Error in HPL while loading data

Thank you, Andreas, for the response and the alternative.

Yes, the failure reproduces for the same set of tables.

I will check with my supervisor/manager about opening a PMR and will confirm.

Also, I have not yet tried external tables for unloading and loading the data.
I will test them and let you know.

Meanwhile, when I tried HPL (Export Data & Import Data) using Server Studio for
the tables that failed with the method above, it ran successfully.

The only pain in performing HPL using Server Studio is the need to specify the
file path for each table in both places (Export Data & Import Data). I am not
sure whether there is any option/method to specify the files for 100+ tables in
a database. If you can share details on this, it will be really appreciated.

I will keep you updated on progress.
Thanks again. Have a great time ahead.

~ Pravin Bankar




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37280]

*******************************************************************************

Re: Error in HPL while loading data

Pravin, if you have a need to unload and reload an entire database of
tables using external tables, the easiest way to do that is to get my
dbexport/dbimport replacement utility package myexport from the IIUG
Software Repository and use that. It requires myschema (which I believe you
already have), but otherwise is standalone if you use the myexport -E
option to export the data using external tables. That will create a
directory containing the schema, export files, and a set of scripts to
import the data with external tables using myimport -E.

Art

Art S. Kagel, President and Principal Consultant
ASK Database Management
www.askdbmgt.com

Blog: http://informix-myview.blogspot.com/

Disclaimer: Please keep in mind that my own opinions are my own opinions
and do not reflect on the IIUG, nor any other organization with which I am
associated either explicitly, implicitly, or by inference. Neither do
those opinions reflect those of other individuals affiliated with any
entity with which I am affiliated nor those of the entities themselves.

On Mon, Jun 20, 2016 at 7:18 AM, PRAVIN BANKAR <pravinebankar@gmail.com>
wrote:





*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37281]

*******************************************************************************

Re: Error in HPL while loading data

Thanks a lot, Art... I will surely give that a try and update you accordingly.
Thank you.

~ Pravin Bankar




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37282]

*******************************************************************************

public execute permissions

Informix 11.7FC8W2

Our auditors just ran the NCCSquirel tool on our database.
The tool identified that public can execute a number of functions in the
sysmaster database and suggests that execute permissions on these functions
should be revoked from public.

Is this the correct thing to do? I am very, very hesitant to make any changes
in sysmaster. Any advice would be very much appreciated.
I wonder what IBM's response would be to doing something like this.
Where can I find more info on Informix vulnerabilities ?
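
For anyone who wants to see what is being flagged, a rough sketch of listing
the routines in sysmaster that public can execute, using the standard
sysprocedures/sysprocauth catalogs:

echo "SELECT p.procname, a.grantor
      FROM sysprocedures p, sysprocauth a
      WHERE p.procid = a.procid AND a.grantee = 'public';" | dbaccess sysmaster -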




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37283]

*******************************************************************************

Re: public execute permissions

I'd give that a miss, thanks. But I will defer to Jonathan Leffler, if he has
any thoughts.

> On 20 Jun 2016, at 13:27, FLIP VAN WYNGAARDT <flipv@raf.co.za> wrote:




*******************************************************************************

To post a response via email (IIUG members only):

1. Address it to ids@iiug.org
2. Include the bracketed message number in the subject line: [37284]

*******************************************************************************