
Friday, March 30, 2012

please help me figure out how best to approach my queue problem

Hi,

I am moving my system from a custom-built queue to a Service Broker queue.

I wish to have two queues:

one to hold processing messages
and one to hold emailing messages.

I do not understand why I would have to have an initiator queue and a target queue for each of the above. I have arranged my SQL as follows:

-- processing queue
CREATE MESSAGE TYPE ProcessingMessage VALIDATION = NONE;
CREATE CONTRACT ProcessingContract (ProcessingMessage SENT BY INITIATOR);
CREATE QUEUE ProcessingQueue;
CREATE SERVICE ProcessingService ON QUEUE ProcessingQueue (ProcessingContract);
GO

-- emailing queue
CREATE MESSAGE TYPE EmailingMessage VALIDATION = NONE;
CREATE CONTRACT EmailingContract (EmailingMessage SENT BY INITIATOR);
CREATE QUEUE EmailingQueue;
CREATE SERVICE EmailingService ON QUEUE EmailingQueue (EmailingContract);
GO

So basically EmailingQueue plays the role of both the initiator and the target (it sends messages to itself), and so does the ProcessingQueue.

I enqueue my messages with the following SP:

CREATE PROC [dbo].[queue_enqueue] (
@fromService SYSNAME,
@toService SYSNAME,
@onContract SYSNAME,
@messageType SYSNAME,
@entryId int)
AS
BEGIN
DECLARE @conversationHandle UNIQUEIDENTIFIER
DECLARE @error int

BEGIN DIALOG @conversationHandle
FROM SERVICE @fromService
TO SERVICE @toService
ON CONTRACT @onContract
WITH LIFETIME = 6000, ENCRYPTION = OFF;

SEND ON CONVERSATION @conversationHandle MESSAGE TYPE @messageType (@entryId)

END

I do the enqueueing with
[queue_enqueue] 'ProcessingService', N'ProcessingService', 'ProcessingContract', 'ProcessingMessage', 1

I dequeue my messages with the following SP:
CREATE PROC [dbo].[queue_dequeue]
@queue SYSNAME,
@entryId int OUTPUT
AS
BEGIN
DECLARE @conversationHandle UNIQUEIDENTIFIER;
DECLARE @messageTypeName SYSNAME;
DECLARE @conversationGroupId UNIQUEIDENTIFIER;
GET CONVERSATION GROUP @conversationGroupId FROM ProcessingQueue

IF (@conversationGroupId IS NOT NULL)
BEGIN
IF (@queue = 'ProcessingQueue')
RECEIVE TOP(1) @entryId = CONVERT(INT, [message_body]), @conversationHandle = [conversation_handle], @messageTypeName = [message_type_name] FROM ProcessingQueue WHERE conversation_group_id = @conversationGroupId
ELSE IF (@queue = 'EmailingQueue')
RECEIVE TOP(1) @entryId = CONVERT(INT, [message_body]), @conversationHandle = [conversation_handle], @messageTypeName = [message_type_name] FROM EmailingQueue WHERE conversation_group_id = @conversationGroupId
END
END

I dequeue by calling something like:
declare @entryId int
exec [queue_dequeue] 'ProcessingQueue', @entryId output

The above works; however, I have a few issues that I am having trouble figuring out.

1. I don't want to "fire and forget". I want to close the conversation at the initiator endpoint only when the target has closed it.
2. I want to know how to properly close conversations at the initiator and the target.
3. I am not polling with a WHILE loop or WAITFOR; because of the way my system (higher up) is set up, I simply want to enqueue one message and dequeue one message. If an error occurs in enqueueing or dequeueing I want to raise that error.
4. How do I handle poison messages?
5. Is it better for me to use two queues (initiator and target) for each of my queues above?

Can someone please help me with these issues?

thanx

Are you sure the code is correct? You are retrieving the conversation group from one queue, but potentially receive from another. Besides, GET CONVERSATION GROUP locks the conversation only for the duration of the transaction; if you don't wrap the call to [queue_dequeue] in a transaction, you can have errors if multiple threads are calling this procedure. Also, using GET CONVERSATION GROUP is recommended only for cases when the application needs to look up some application-specific state (correlated with the conversation_group_id) before actually RECEIVEing the messages. Otherwise one can simply go ahead and call the RECEIVE verb directly.
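As a rough illustration of that direct-RECEIVE pattern (this is a sketch reusing the ProcessingQueue name from the post, not code from the thread):

-- Illustrative only: dequeue with a plain RECEIVE inside a transaction.
BEGIN TRANSACTION;

DECLARE @entryId INT;
DECLARE @conversationHandle UNIQUEIDENTIFIER;
DECLARE @messageTypeName SYSNAME;

-- RECEIVE locks the returned message until the transaction ends,
-- so no separate GET CONVERSATION GROUP call is needed.
RECEIVE TOP (1)
    @entryId            = CONVERT(INT, message_body),
    @conversationHandle = conversation_handle,
    @messageTypeName    = message_type_name
FROM ProcessingQueue;

-- ... process @entryId here; a ROLLBACK would put the message back on the queue ...

COMMIT TRANSACTION;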

Service Broker queues are not identical to the computer-science 'queue' data structure (i.e. a FIFO list). They are more like message storage areas for services, and services are endpoints of distributed applications communicating through messages. As such, Service Broker queues probably won't map exactly 1-to-1 onto your previous user-table queue code.

You already noticed that you now have to call END CONVERSATION. I'm not sure how the rest of the system works: are you reusing conversations when enqueueing? Is each enqueue a separate dialog? Probably [queue_dequeue] would have to call END CONVERSATION to end the conversation on which it received the current message. As long as you keep initiator and target on the same queue, this means that the same [queue_dequeue] procedure will have to handle the EndDialog messages (that would be one reason why you should split the queues).
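A minimal sketch of a dequeue that ends its conversations might look roughly like this, assuming the ProcessingQueue and ProcessingMessage names from the post (illustrative, not code from the thread):

-- Illustrative sketch: receive one message, end the conversation, and
-- recognise the system EndDialog/Error messages that will also arrive
-- when initiator and target share a queue.
DECLARE @entryId INT;
DECLARE @conversationHandle UNIQUEIDENTIFIER;
DECLARE @messageTypeName SYSNAME;

BEGIN TRANSACTION;

RECEIVE TOP (1)
    @entryId            = CONVERT(INT, message_body),
    @conversationHandle = conversation_handle,
    @messageTypeName    = message_type_name
FROM ProcessingQueue;

IF @conversationHandle IS NOT NULL
BEGIN
    IF @messageTypeName = N'ProcessingMessage'
    BEGIN
        -- application work for @entryId goes here, then close this end of the dialog
        END CONVERSATION @conversationHandle;
    END
    ELSE IF @messageTypeName IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                                 N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
    BEGIN
        -- the other side has finished (or reported an error): just close our end
        END CONVERSATION @conversationHandle;
    END
END

COMMIT TRANSACTION;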

|||Hi Remus, thanks for your reply!

I admit, I used GET CONVERSATION GROUP when I got desperate... I thought that RECEIVE wasn't locking the queue entry on its own. I'll get rid of it, since we don't really need to reuse conversations; we don't have a message-intensive system, and we are mostly worried about message integrity and reduced processor load compared to our previous system.

So, I gather it is better for me to set up my queues in the conventional Service Broker way: two queues for each physical queue that I have.
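For reference, the split setup being described could look something like the following; the *Initiator* and *Target* names below are hypothetical, and only ProcessingContract comes from the original script:

-- Hypothetical names for the split setup; only ProcessingContract is from the post.
CREATE QUEUE ProcessingInitiatorQueue;
CREATE QUEUE ProcessingTargetQueue;

-- The initiator service has no contract of its own: it only starts dialogs
-- and receives the system EndDialog/Error messages on its queue.
CREATE SERVICE ProcessingInitiatorService ON QUEUE ProcessingInitiatorQueue;

-- The target service accepts ProcessingMessage via the existing contract.
CREATE SERVICE ProcessingTargetService ON QUEUE ProcessingTargetQueue (ProcessingContract);
GO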

I still don't fully understand how the "don't fire and forget" method works:

1. Initiator starts a dialog and sends a message; it does not end the dialog.
2. Target receives the message and ends the dialog, which in turn sends an EndDialog message to the initiator.
3. Initiator activation SP receives the EndDialog message and ends the dialog on the initiator side? (A sketch of what that might look like follows this list.)
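Purely as an illustration of step 3 (hypothetical names; ProcessingInitiatorQueue and the procedure below are not defined anywhere in the thread), the initiator-side cleanup could be an activated procedure along these lines:

-- Hypothetical initiator-side cleanup procedure; ProcessingInitiatorQueue is
-- an assumed name, not an object defined in this thread.
CREATE PROCEDURE dbo.queue_initiator_cleanup
AS
BEGIN
    DECLARE @conversationHandle UNIQUEIDENTIFIER;
    DECLARE @messageTypeName SYSNAME;

    BEGIN TRANSACTION;

    RECEIVE TOP (1)
        @conversationHandle = conversation_handle,
        @messageTypeName    = message_type_name
    FROM ProcessingInitiatorQueue;

    IF @messageTypeName IN (N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog',
                            N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
        END CONVERSATION @conversationHandle;   -- closes the initiator side

    COMMIT TRANSACTION;
END
GO

-- Wired up as an activation procedure so it runs whenever messages arrive:
ALTER QUEUE ProcessingInitiatorQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.queue_initiator_cleanup,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER);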

am I missing something?

thanx once again for your help.

Please help me debug Hex Dumps

Hello gurus,

Our SQL Server is giving us a headache: after a certain period of time, the SQL Server service either shuts down by itself or hangs. I've opened the logs and found hex dumps. Can you help me out with these?

2007-07-08 04:04:35.20 spid53 SqlDumpExceptionHandler: Process 1760 generated fatal exception c0000005 EXCEPTION_ACCESS_VIOLATION. SQL Server is terminating this process.
* ************************************************** *****************************
*
* BEGIN STACK DUMP:
* 07/08/07 04:04:35 spid 53
*
* Exception Address = 0042D46D
* Exception Code = c0000005 EXCEPTION_ACCESS_VIOLATION
* Access Violation occurred reading address 6AF1EDF0
* Input Buffer 4088 bytes -
* USE Document DBCC DBREINDEX (dtproperties) DBCC DBREINDEX (REfID_DocID
* ) DBCC DBREINDEX (RF_UNISYS_ErrorCodes) DBCC DBREINDEX (SYS_Document_F
* lat_Meta_Data) DBCC DBREINDEX (SYS_Document_Meta_Data) DBCC DBREINDEX
* (SYS_Document_Meta_Detail) DBCC DBREINDEX (SYS_Documents) DBCC DBREIND
* EX (SYS_ETFuelType) DBCC DBREINDEX (SYS_ETStatus) DBCC DBREINDEX (SYS_
* HF_Document_Meta_Data) DBCC DBREINDEX (SYS_HF_MI_Emission_Results) DBC
* C DBREINDEX (SYS_ITP_Failed) DBCC DBREINDEX (SYS_MI_Emission_Results)
* DBCC DBREINDEX (SYS_RF_Document_Meta_Data) DBCC DBREINDEX (SYS_RF_Docum
* ent_Status) DBCC DBREINDEX (SYS_TR_Document) DBCC DBREINDEX (SYS_TR_SM
* S_Document) USE Industry DBCC DBREINDEX (dtproperties) DBCC DBREIND
* EX (IND_MF_Industry) DBCC DBREINDEX (RF_ErrorCodes) DBCC DBREINDEX (RF
* _OperationCodes) DBCC DBREINDEX (RF_Unisys_ErrCodes) DBCC DBREINDEX (S
* YS_Admin_Has_Agents) DBCC DBREINDEX (SYS_Admin_Sharing) DBCC DBREINDEX
* (SYS_Companies) DBCC DBREINDEX (SYS_Company_Has_Admins) DBCC DBREINDE
* X (SYS_Company_Meta_Data) DBCC DBREINDEX (SYS_HF_Admin_Has_Agents) DBC
* C DBREINDEX (SYS_HF_Companies) DBCC DBREINDEX (SYS_HF_Company_Has_Admin
* s) DBCC DBREINDEX (SYS_HF_Company_Meta_Data) DBCC DBREINDEX (SYS_HF_Us
* er_Meta_Data) DBCC DBREINDEX (SYS_HF_Users) DBCC DBREINDEX (SYS_MF_App
* Variables) DBCC DBREINDEX (SYS_MI_Token) DBCC DBREINDEX (SYS_Page_Acce
* ss) DBCC DBREINDEX (SYS_Pages) DBCC DBREINDEX (SYS_Password_History)
* DBCC DBREINDEX (SYS_RF_Announcement) DBCC DBREINDEX (SYS_RF_BodyType)
* DBCC DBREINDEX (SYS_RF_Color) DBCC DBREINDEX (SYS_RF_Company_Meta_Data)
* DBCC DBREINDEX (SYS_RF_Company_Status) DBCC DBREINDEX (SYS_RF_DieselT
* ype) DBCC DBREINDEX (SYS_RF_EmissionFees) DBCC DBREINDEX (SYS_RF_Emiss
* ionRules) DBCC DBREINDEX (SYS_RF_Fuel_Type) DBCC DBREINDEX (SYS_RF_Hel
* pDetails) DBCC DBREINDEX (Sys_RF_Make) DBCC DBREINDEX (SYS_RF_Month)
* DBCC DBREINDEX (SYS_RF_MVClassification) DBCC DBREINDEX (SYS_RF_MVType)
* DBCC DBREINDEX (SYS_RF_MVType2) DBCC DBREINDEX (SYS_RF_Page_Groups)
* DBCC DBREINDEX (SYS_RF_Purpo
*
*
* MODULE BASE END SIZE
* sqlservr 00400000 00CBAFFF 008bb000
* Invalid Address 77F80000 77FFBFFF 0007c000
... <snipped>
* xpstar 09240000 09248FFF 00009000
* rsabase 092D0000 092F2FFF 00023000
* dbghelp 0AA80000 0AB7FFFF 00100000
*
* Edi: 0AA53937: 00000000 00000000 00000000 84004A00 98018901 C501B101
* Esi: 6AF1EDF0:
* Eax: 00000878:
* Ebx: FFFFE000:
* Ecx: 3FFFF800:
* Edx: FFFFE000:
* Eip: 0042D46D: CA8BA5F3 F303E183 DC7D8BA4 83D045FF 4D8B2CC7 E9D233E0
* Ebp: 0A1BFCC0: 0A1BFCE4 0042D5CD 71E428CC 71E40570 71E40988 0AA519A0
* SegCs: 0000001B:
* EFlags: 00010206: 00530053 0052004F 003D0053 00000034 0053004F 0057003D
* Esp: 0A1BFC28: 0AA519A0 71E4052C 00000000 00000010 71E408A0 00000010
* SegSs: 00000023:
* ************************************************** *****************************
* ------------------------
* Short Stack Dump
* 0042D46D Module(sqlservr+0002D46D)
* 0042D5CD Module(sqlservr+0002D5CD)
* 0042D6C7 Module(sqlservr+0002D6C7)
* 00508750 Module(sqlservr+00108750)
* 0051EB18 Module(sqlservr+0011EB18)
* 0051E9E4 Module(sqlservr+0011E9E4)
* 0085EACA Module(sqlservr+0045EACA) (GetIMallocForMsxml+0006A08A)
* 004229A7 Module(sqlservr+000229A7)
* 0087B87B Module(sqlservr+0047B87B) (GetIMallocForMsxml+00086E3B)
* 0087E3C9 Module(sqlservr+0047E3C9) (GetIMallocForMsxml+00089989)
* 0059A449 Module(sqlservr+0019A449)
* 41075309 Module(ums+00005309) (ProcessWorkRequests+000002D9 Line 456+00000000)
* 41074978 Module(ums+00004978) (ThreadStartRoutine+00000098 Line 263+00000007)
* 7C34940F Module(MSVCR71+0000940F) (endthread+000000AA)
* 7C57B3BC Module(KERNEL32+0000B3BC) (lstrcmpiW+000000B7)
* ------------------------
*Dump thread - spid = 53, PSS = 0x55a35280, EC = 0x55a355b0
*Stack Dump being sent to C:\Program Files\Microsoft SQL Server\MSSQL\log\SQLDump0020.txt

*** Problems occur intermittently, sometimes after our full backup (which runs at 0000h), sometimes 30 minutes after our transaction log dumps.

Please advise me on my next step. Thanks

|||Probably need the latest service packs:
http://support.microsoft.com/kb/q293292/

|||May also be a plain ol' memory or disk issue. Have you checked the hardware?

|||hi blindmin: yes, we've installed the latest service packs both for our OS (Windows 2000 Server SP4) and RDBMS (SQL Server 2000 SP4).

ReadySetStop: Memory, for one, might be the culprit. Our box is a Dell PowerEdge 1955 (blade server); memory is around 8 GB (6 GB goes to SQL, 2 GB goes to the OS). There was a time when we ported from one blade server to another because of a faulty OS. Before, when SQL encountered errors, the whole thing just froze up. Now SQL is able to shoot out hex dumps. That's why I need to have a basis before pointing directly to a hardware fault.

|||These were not the only hex dumps I've encountered. There were some hex dumps containing variable declarations and value assignments, like declare @asdf datetime etc., with lots of hex equivalents on the side. So it might not only be DBCC stuff. I just don't know where to start.

|||If the errors are not related only to DBCC, then I too would look at bad memory as the next possible culprit.

|||Just an update: we've managed to consult Microsoft about these errors. Initially, they told us there's an issue with having /PAE enabled. I'll keep you guys posted.

|||Just an update guys, got this from MS tech support.

Hi

I noticed the two SQL Servers have AWE enabled, while the platform is Windows 2000 (Build 2195: Service Pack 4). We have a known issue in such an environment, with similar dump call stacks. Please refer to the following articles.

Access violations when you use the /PAE switch in Windows 2000

http://support.microsoft.com/kb/838647

You may notice unpredictable behavior on a multiprocessor computer that is running SQL Server 2000 and has the Physical Addressing Extensions (PAE) specification enabled

http://support.microsoft.com/kb/838765/en-us

To avoid the issue, could you please upgrade your Windows 2000 to Rollup 1 for Microsoft Windows 2000 Service Pack 4? For more information about the Rollup, please refer to:



http://support.microsoft.com/kb/891861/en-us

And it can be downloaded from:

http://www.microsoft.com/downloads/details.aspx?FamilyId=B54730CF-8850-4531-B52B-BF28B324C662&displaylang=en

After applying it, please keep monitoring your SQL Server. If the issue reoccurs, please send me your SQL Server errorlog files along with the newly generated dump file.

Please Help Me !

I am a Visual Basic 6.0/.NET developer.
What is Microsoft SQL Server Reporting Services?
What is Microsoft SQL Server Analysis Services/OLAP?
What is Microsoft English Query?
Thank you!

These are features of Microsoft's SQL Server product. The latest
version is SQL Server 2005.
The documentation for SQL Server 2005 can be downloaded from
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx.
If you have further new user questions you might be best to post them
to the microsoft.public.sqlserver.newusers newsgroup.
I hope that helps.
Andrew Watt [MVP]
On Thu, 20 Apr 2006 00:09:53 +0200, "Meftah Tayeb"
<SQLSRVX86@.hotmail.com> wrote:
>I am as Visual Basic 6.0/.Net Developer
>Wath Is The Microsoft SQL Server Reporting Service ?
>Wath Is The Microsoft SQL Server Analisie Service/Olap?
>Wath Is The Microsoft Engleech Query ?
>Thank You !

please help me

Hi

I connect to a remote Integration Services instance. I configured the server for remote connection (in Component Services and DCOM config, ...), and now I can connect to Integration Services remotely and correctly.

But when I expand Stored Packages and then click to expand MSDB, this error happens:

Login failed for user ... (Microsoft SQL Native Client)

Please help me

Thanks in advance

Can anyone answer me?

I need it soon

Thanks

Please help lost connection urgent!

Hi,
After I installed Service Pack 3 on my local PC, I cannot connect to
the Development Server using Query Analyzer or SQL Profiler, but the
strangest thing is I can connect to that server using Enterprise Manager.
The error message showing on QA is
[Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or
access denied. -- It really doesn't make sense to me that this affects only QA but not EM.
Please help and thanks
Ed
|||That may help:
http://groups.google.de/groups? q=S...ftngxa06&rnum=1
HTH, Jens Smeyer.
"Ed" <Ed@.discussions.microsoft.com> schrieb im Newsbeitrag
news:C80F022D-B11F-432A-9B86-5A9F86232244@.microsoft.com...
> Hi,
> After I install Service Pack3 on my local PC machine, I cannot connect
> to
> the Development Server by using Query Analyzer and SQL Profiler. but the
> strangest thing is i can connect to that server by using Enterprise
> Manager.
> The error message showing on QA is
> [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or
> access denied. -- it really doesn't make sense to me to only QA but not
> EM.
> Please help and thanks
> Ed
|||I'll quote you some possible solutions from the sswug.org mailing list:
I recall that changing the port at which a named instance exists requires
restart of the default instance (or perhaps just SQL Agent?). If you can
easily restart the default instance and agent on the DEV box, I would do
so.
--
Fixed the problem on the DEV server. When the instance came back up, it
was for some reason, listening on 1433.... same as the default instance.
Changed the port number in SNU, restarted the services, and can connect
fine now.
--
Did you check the local Windows firewall settings on the MSDE machine?
You have to explicitly enable TCP and/or named pipes (globally or for a
single instance of SQL).
--
Is this a clustered instance? If so, disabling named pipes messes up the
named pipes reg key and this messes up both TCP and named pipes
connectivity.
Can you connect locally or does that fail as well? What about if you
force a specific protocol or port?
Np:Server\Instance
TCP:server\instance
LPC:server\instance
TCP:server\instance,port
Maybe some of these q&a's may give an idea.
Regards,
Marko Simic
"Ed" wrote:

> Hi,
> After I install Service Pack3 on my local PC machine, I cannot connect to
> the Development Server by using Query Analyzer and SQL Profiler. but the
> strangest thing is i can connect to that server by using Enterprise Manager.
> The error message showing on QA is
> [Microsoft][ODBC SQL Server Driver][DBNETLIB]SQL Server does not exist or
> access denied. -- it really doesn't make sense to me to only QA but not EM.
> Please help and thanks
> Ed
|||Thanks for the answer from both.
I spent so much time looking at the MS website to find the solution.
Finally, after I downloaded the MDAC 2.8 (newest version), everything is
back to normal now.
Ed
"Simic Marko" wrote:
> I'll quote you some possible solutions from sswug.org mailing list:
> --
> I recall that changing the port at which a named instance exists requires
> restart of the default instance (or perhaps just SQL Agent?). If you can
> easily restart the default instance and agent on the DEV box, I would do
> so.
> --
> Fixed the problem on the DEV server. When the instance came back up, it
> was for some reason, listening on 1433.... same as the default instance.
> Changed the port number in SNU, restarted the services, and can connect
> fine now.
> --
> Did you check the local Windows firewall settings on the MSDE machine?
> You have to explicitly enable TCP and/or named pipes (globally or for a
> single instance of SQL).
> --
> Is this a clustered instance? If so, disabling named pipes messes up the
> named pipes reg key and this messes up both TCP and named pipes
> connectivity.
> Can you connect locally or does that fail as well? What about if you
> force a specific protocol or port?
> Np:Server\Instance
> TCP:server\instance
> LPC:server\instance
> TCP:server\instance,port
> Maybe some of these q&a's may give an idea.
> Regards,
> Marko Simic
> "Ed" wrote:
>

Monday, March 26, 2012

PLEASE HELP - Accessing Web Services through Windows App

I am running a vb.net application that is using the Web Service to
issue Update Snapshot. There is a Windows account we are using as a
Service account that has the proper permissions to Generate Events.
The application is running on AutoSys on a 2003 server. I have
confirmed that it will run under this user's credentials manually by
logging into the server as the user and firing the app. It works
without a problem. This was using Default Credentials.

The AutoSys job is configured to run under this user's credentials.
There are three ways I have executed this application:

>>Through AutoSys running as the user with the proper permissions (this is in place of a Local Service account which is default for AutoSys. This windows user has local admin permissions on the AutoSys server as well as Generate Event permission as confirmed before)
>>By logging into the server as Administrator and executing the app with "Run As..." set to the Windows ID
>>By switching to new.NetworkCredentials and passing the hardcoded username/pass/domain

For all of the above scenarios - it fails

The ONLY time it successfully issues UpdateSnapshot is when I am
logged in to the server as the user in question. In ALL other
instances detailed above it fails and the following comes out of the
event log

"Line 1: Incorrect syntax near 'MyDomainName'.

I have no idea why it is choking on the Domain Name in all three of
those scenarios.

Do I need to impersonate that user through code when passing the Web
Service credentials?

PLEASE PLEASE PLEASE help - I need to wrap this up. ANY HELP WOULD BE
GREATLY APPRECIATED

Could you show a sample of your code? A syntax error indicates a compilation/parsing error. AutoSys methods #2 or #3 sound like they should be workable.

Thank you.

|||

I was wrong (partially) - the error I am referring to in the original post is being caused by my attempt to trap the error and write it to a log table in SQL Server. The error is actually as follows:

The permissions granted to user 'mydomain\myuser' are insufficient for performing this operation

The fact that there are single quotes in the error is causing a syntax error in my SQL statement.

So - the actual user still does not have the permissions needed to issue UpdateSnapshot. I have confirmed with the Administrators that the user is in a role on the server that contains Generate Events and is also a Browser on all reports on the server.

I did this before in a previous environment and it worked perfectly. Can anyone definitively tell me what permissions are required in order to do things like UpdateSnapshot, FireEvent, CreateSubscription? I thought Generate Events was all that was needed.

During testing I was using my ID to do it and it worked. So I thought I could just create a dummy account and give it the permissions. I am stumped

|||

For updating a snapshot, you need the Execute permission on the report you operate on. You can get it from the Manage Report or Manage History task.

You are right: for FireEvent you need the Generate Events task (which contains the Generate Events permission).

For CreateSubscription, you need either CreateSubscription or CreateAnySubscription permission. It is in "Manage Individual Subscriptions" or "Manage Any subscriptions" task.

|||

Thank you - I did discover that I was looking at the wrong permissions for updating a snapshot. The problem I originally had raises an interesting question: how do I handle the SOAP exception with the single quotes in it if I want to write that to a SQL table? Seems odd that an error would contain single quotes!

|||This is the way we chose to mark the name (RS item name, username, etc.). I guess you can do a string replacement before you insert into your table.
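As a sketch of that suggestion (the dbo.ErrorLog table and the @err text below are made up purely for illustration), you can either double up the single quotes before building the literal, or sidestep the issue entirely with a parameterised insert:

-- Hypothetical error text and logging table, only to illustrate the technique.
DECLARE @err NVARCHAR(4000);
SET @err = N'The permissions granted to user ''mydomain\myuser'' are insufficient for performing this operation';

-- Option 1: double every single quote before embedding the text in a literal.
DECLARE @sql NVARCHAR(MAX);
SET @sql = N'INSERT INTO dbo.ErrorLog (Message) VALUES (N''' + REPLACE(@err, N'''', N'''''') + N''')';
EXEC (@sql);

-- Option 2: pass the text as a parameter, so embedded quotes never matter.
EXEC sp_executesql
    N'INSERT INTO dbo.ErrorLog (Message) VALUES (@msg)',
    N'@msg NVARCHAR(4000)',
    @msg = @err;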

Monday, March 12, 2012

placement of Analysis service? on cluster or lone server?

Hi, we have an active/passive cluster which hosts our production database.
Transactional replication runs from the production database to a 'reporting'
database residing on a lone server. We will need to install Analysis Services
for the construction of cubes.
I was wondering where Analysis Services should be installed?
Is it ok to install it on the lone server and have it use the data from the
reporting database (which is filled via transactional replication)? Will
Analysis Services have a problem with this based on the fact that it is
reading a read-only database which doesn't have primary keys in the tables?
OR, should Analysis Services be installed on the cluster? Is it cluster
aware?
At this stage we are thinking of only running the cubes at night to generate
the data for export to reporting service.
Would it then be a good idea to install reporting service on this same lone
server?
Any insight would be most appreciated!
Cheers, john
If that is the case, you can't possibly be running transactional
replication.
The "reporting" database is LOGICALLY read-only, not physically read-only.
This database also MUST have all of the primary keys in place on the tables,
since it is impossible to replicate a table with transactional replication
if the table does not have a primary key. This primary key exists at both
the publisher and the subscriber.
Analysis Services is not going to write to your tables when building cubes,
so it does not care if the tables are physically read-only, logically
read-only, or read-write.
Where you place AS is entirely up to you. You can install it on the
stand-alone server and you can install it in the cluster. The question you
need to ask is what level of availability do you need for AS. If it needs
to be available as much as possible, even through a hardware failure, then
it should be installed in the cluster. If not, it should be installed on
the stand-alone machine.
Mike
Mentor
Solid Quality Learning
http://www.solidqualitylearning.com
"john clarke" <jclarke@.nospam.com> wrote in message
news:%23QyI71JDGHA.688@.TK2MSFTNGP11.phx.gbl...
> Hi, we have an active/passive cluster which hosts our production database.
> Transactional replication is being made from the production to a
> 'reporting'
> database residing on a lone server. We will need to install Analysis
> service
> for the construction of cubes.
> I was wondering where Analysis service should be installed?
> Is is ok to install it on the lone server and have it use the data from
> the
> reporting database (which is filled via transactional replication)? Will
> Analysis service have a problem with this based on the fact that it is
> reading a read only database which doesnt have primary keys in the tables?
> OR, should analysis service be installed on the cluster? Is it cluster
> aware?
> At this stage we are thinking of only running the cubes at night to
> generate
> the data for export to reporting service.
> Would it then be a good idea to install reporting service on this same
> lone
> server?
> Any insight would be most appreciated!
> Cheers, john
>
>